CN115359401A - Method and device for identifying unsealing of article, computer storage medium and electronic equipment

Info

Publication number: CN115359401A
Authority: CN (China)
Prior art keywords: article, intact, video, unsealed, outer package
Legal status: Pending
Application number: CN202211027212.XA
Other languages: Chinese (zh)
Inventor
贺冠楠
于伟
梅涛
潘滢炜
郑少杰
张熠恒
陈越
左佳伟
王林芳
Current Assignee: Jingdong Technology Holding Co Ltd
Original Assignee: Jingdong Technology Holding Co Ltd
Application filed by Jingdong Technology Holding Co Ltd
Priority to CN202211027212.XA
Publication of CN115359401A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/44 Event detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0207 Discounts or incentives, e.g. coupons or rebates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure relates to the technical field of artificial intelligence, and provides an article unsealing identification method, an article unsealing identification device, a computer storage medium and electronic equipment. The article unsealing identification method comprises the following steps: acquiring an article unsealing video uploaded by a target user; identifying whether an outer package of the article is intact through a first video segment, and, after identifying that the outer package is intact, identifying whether the outer package is unsealed through a second video segment; after recognizing that the outer package is unsealed and detecting a preset action, identifying whether a sealing ring of the article is intact through a third video segment; after identifying that the sealing ring is intact, identifying whether the sealing ring and the sealing cover of the article are unsealed through a fourth video segment; and issuing a reward to the target user after recognizing that the sealing ring and the sealing cover are unsealed. The disclosure can automatically identify whether an article has been unsealed, improving identification efficiency.

Description

Method and device for identifying unsealing of article, computer storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a method for identifying the unsealing of an article, an apparatus for identifying the unsealing of an article, a computer storage medium, and an electronic device.
Background
With the rapid development of internet and computer technologies, the functions of article display platforms are increasingly diversified. To discourage users from purchasing valuable articles merely to hoard them or speculate on them, a functional need arises on the article display platform to identify online whether an article has been unsealed.
At present, whether an article has been unsealed is generally determined through manual identification. However, manual identification cannot guarantee accuracy, and its efficiency is low when the volume of identification requests is large.
In view of the above, there is a need in the art to develop a new method and apparatus for identifying the unsealing of an article.
It is to be noted that the information disclosed in the background section above is only used to enhance understanding of the background of the present disclosure.
Disclosure of Invention
The present disclosure is directed to a method for identifying the unsealing of an article, an apparatus for identifying the unsealing of an article, a computer storage medium, and an electronic device, thereby overcoming, at least to some extent, the problem of low identification efficiency in the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the present disclosure, there is provided a method for identifying the unsealing of an article, comprising: acquiring an article unsealing video uploaded by a target user, wherein the article unsealing video is used for reflecting the unsealing process of an article, and the article unsealing video at least comprises a first video segment, a second video segment, a third video segment and a fourth video segment; identifying whether the outer package of the article is intact through the first video segment, and identifying whether the outer package is unsealed through the second video segment after identifying that the outer package is intact; after recognizing that the outer package is unsealed and detecting a preset action, identifying whether a sealing ring of the article is intact through the third video segment, wherein the preset action is an action of taking the article out of the outer package; after identifying that the sealing ring is intact, identifying whether the sealing ring and the sealing cover of the article are unsealed through the fourth video segment; and issuing a reward to the target user after recognizing that the sealing ring and the sealing cover are unsealed.
In an exemplary embodiment of the present disclosure, the identifying whether the outer package of the article is intact through the first video clip includes: performing frame extraction processing on the first video clip to obtain at least two frames of images; and identifying whether the outer package of the article is intact or not through the at least two frames of images.
In an exemplary embodiment of the present disclosure, the identifying whether the outer package of the article is intact comprises: identifying whether a top label of the article is intact; and identifying whether a bottom label of the article is intact.
In an exemplary embodiment of the present disclosure, the identifying whether the top label of the item is intact comprises: performing integrity detection on the top labels in the at least two frames of images by using a pre-trained integrity detection model; the integrity detection model is a classification model; determining that the top label is intact when the integrity detection model outputs a first result; the first result is used to characterize the top label as intact; when the integrity detection model outputs a second result, determining whether the top label is intact or not by using a pre-trained shooting part detection model; the second result is used to characterize the top label as damaged or not detected.
In an exemplary embodiment of the present disclosure, the determining whether the top label is intact using a pre-trained photographed portion detection model includes: recognizing the shooting part of the outer package in the at least two frames of images by using the shooting part recognition model; determining that the top label is damaged in response to the photographing part being consistent with a preset position where the top label is located; responding to the inconsistency between the shooting part and the preset position of the top label, sending retransmission prompt information to the target user, and identifying whether the top label is intact or not according to the first video clip uploaded by the target user again; and the retransmission prompt message is used for prompting the target user to upload the first video clip again.
In an exemplary embodiment of the disclosure, identifying whether the outer package is unsealed through the second video clip after identifying that the outer package is intact comprises: acquiring a purchaser of the article through the tracing code on the outer package; and if the purchaser is consistent with the target user, identifying whether the outer package is unsealed through the second video clip.
In an exemplary embodiment of the present disclosure, the identifying whether the outer package in the second video segment is unsealed includes: identifying whether a top label and/or a bottom label on the outer package is unsealed.
In an exemplary embodiment of the present disclosure, the identifying whether a seal ring of the article is intact through a third video clip after identifying that the outer package is unsealed and detecting a preset action includes: acquiring a logistics code from the article after recognizing that the outer package is unsealed and detecting a preset action; determining a target tracing code corresponding to the logistics code based on a corresponding relation between the pre-stored logistics code and the tracing code; and if the target traceability code is consistent with the traceability code on the outer package, identifying whether the sealing ring of the article is intact or not through the third video clip.
In an exemplary embodiment of the present disclosure, after obtaining the article unsealing video uploaded by the target user, the method further comprises: extracting features from the plurality of frames of images contained in each video clip to obtain feature information of each frame of image; calculating a degree of deviation between two adjacent frames of images according to the feature information of the two adjacent frames of images; and in response to the degree of deviation being greater than a deviation degree threshold, determining that the target user is not eligible for reward claim.
In an exemplary embodiment of the present disclosure, the method further comprises: obtaining the number of frames not containing the article from each video clip; in response to the number of frames being greater than a preset threshold, determining that the target user is not eligible for reward claim.
According to a second aspect of the present disclosure, there is provided a device for identifying the unsealing of an article, comprising: a video acquisition module, configured to acquire an article unsealing video uploaded by a target user, the article unsealing video being used for reflecting the unsealing process of an article and comprising at least a first video clip, a second video clip, a third video clip and a fourth video clip; a first identification module, configured to identify whether the outer package of the article is intact through the first video clip and, after identifying that the outer package is intact, identify whether the outer package is unsealed through the second video clip; a second identification module, configured to identify whether a sealing ring of the article is intact through the third video clip after recognizing that the outer package is unsealed and detecting a preset action, the preset action being an action of taking the article out of the outer package; a third identification module, configured to identify whether the sealing ring and the sealing cover of the article are unsealed through the fourth video clip after identifying that the sealing ring is intact; and a reward dispensing module, configured to dispense a reward to the target user after recognizing that the sealing ring and the sealing cover are unsealed.
According to a third aspect of the present disclosure, there is provided a computer storage medium having stored thereon a computer program which, when executed by a processor, implements the method for identifying the unsealing of an article of the first aspect described above.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method for identifying the unsealing of an article of the first aspect described above via execution of the executable instructions.
As can be seen from the foregoing technical solutions, the method for identifying the unsealing of an article, the device for identifying the unsealing of an article, the computer storage medium and the electronic device in the exemplary embodiment of the present disclosure have at least the following advantages and positive effects:
in the technical solutions provided by some embodiments of the present disclosure, whether the outer package of the article is intact is identified through a first video clip; after the outer package is identified as intact, whether the outer package is unsealed is identified through a second video clip; after the outer package is identified as unsealed and a preset action is detected, whether the sealing ring of the article is intact is identified through a third video clip; after the sealing ring is identified as intact, whether the sealing ring and the sealing cover of the article are unsealed is identified through a fourth video clip; and after the sealing ring and the sealing cover are identified as unsealed, a reward is issued to the target user. On one hand, whether the article has been unsealed can be identified automatically, which solves the problem of low efficiency caused by manually identifying the article unsealing process in the related art and improves identification efficiency; on the other hand, the problem that manual identification is easily affected by factors such as personnel fatigue and emotion, so that the identification effect is not stable enough, can be avoided, and the stability of the identification effect is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
FIG. 1 illustrates a flow diagram of a method for identifying the unsealing of an article in an embodiment of the disclosure;
FIG. 2 illustrates a schematic flow chart of identifying whether a top label of an item is intact in an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart illustrating how to identify whether a top tag is intact using a shot portion detection model in an embodiment of the present disclosure;
FIG. 4 shows a schematic flow chart of how to identify whether the over-wrap is unsealed by the second video clip after identifying that the over-wrap is intact in the disclosed embodiment;
fig. 5 is a schematic flow chart illustrating how to identify whether a seal ring of an article is intact through a third video segment after recognizing that the outer package is unsealed and detecting a preset action in the embodiment of the present disclosure;
FIG. 6 is a flow chart illustrating how to determine whether to invalidate the reward pickup qualification of a target user according to the continuity of the article unsealing video in an embodiment of the present disclosure;
FIG. 7 is a schematic overall flow chart illustrating a method for identifying the unsealing of an article according to an embodiment of the present disclosure;
FIG. 8 is a schematic overall flow chart of another method of identifying the unsealing of an article according to the embodiment of the present disclosure;
FIG. 9 shows a schematic view of the structure of a device for identifying the unsealing of an article in an exemplary embodiment of the disclosure;
fig. 10 shows a schematic structural diagram of an electronic device in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the embodiments of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/parts/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first" and "second", etc. are used merely as labels, and are not limiting on the number of their objects.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
Rare wine is prized for tasting, collecting and presenting to relatives and friends. In order to encourage users to purchase rare wine for drinking, the article display platform may perform bottle-opening recognition on the rare wine purchased by a user. If the wine bottle is identified as opened, it can be confirmed that the user bought the wine to drink rather than to hoard or resell it, and a reward is then issued to the user. Since the concept of bottle-opening identification is relatively new, the prior art can only require the user to take a video of the bottle-opening process, upload the video to the article display platform, and have the video identified manually.
However, the manual identification method suffers from problems such as low efficiency, low accuracy and the inability to provide real-time feedback. In addition, manual identification places high requirements on personnel, and the identification effect is not stable enough because it is affected by factors such as the fatigue and emotion of the identification personnel. When the volume of identification requests is large, this approach greatly limits identification efficiency and increases the labor cost of the article display platform.
In the embodiment of the disclosure, firstly, a method for identifying the unsealing of an article is provided, which overcomes the defect of low identification efficiency in the prior art at least to a certain extent.
Fig. 1 is a schematic flow chart illustrating a method for identifying the unsealing of an article according to an embodiment of the present disclosure, where an execution subject of the method for identifying the unsealing of the article may be a server for identifying the unsealing of the article.
Referring to fig. 1, a method for identifying the unsealing of an article according to one embodiment of the present disclosure includes the following steps:
step S110, acquiring an article unsealing video uploaded by a target user, wherein the article unsealing video is used for reflecting the unsealing process of an article; the article unsealing video at least comprises a first video segment, a second video segment, a third video segment and a fourth video segment;
step S120, identifying whether the outer package of the article is intact through a first video clip, and identifying whether the outer package is unsealed through a second video clip after identifying that the outer package is intact;
step S130, after recognizing that the outer package is unsealed and detecting a preset action, identifying whether a sealing ring of the article is intact through a third video clip; the preset action is an action of taking the article out of the outer package;
step S140, after the sealing ring is recognized to be intact, whether the sealing ring and the sealing cover of the article are unsealed or not is recognized through a fourth video clip;
and step S150, after recognizing that the sealing ring and the sealing cover are unsealed, issuing a reward to the target user.
In the technical scheme provided by the embodiment shown in fig. 1, whether the outer package of the article is intact is identified through a first video segment; after the outer package is identified as intact, whether the outer package is unsealed is identified through a second video segment; after the outer package is identified as unsealed and a preset action is detected, whether the sealing ring of the article is intact is identified through a third video segment; after the sealing ring is identified as intact, whether the sealing ring and the sealing cover of the article are unsealed is identified through a fourth video segment; and after the sealing ring and the sealing cover are identified as unsealed, a reward is issued to the target user. On one hand, whether the article has been unsealed can be identified automatically, which solves the problem of low efficiency caused by manually identifying the article unsealing process in the related art and improves identification efficiency; on the other hand, the problem that manual identification is easily affected by factors such as personnel fatigue and emotion, so that the identification effect is not stable enough, can be avoided, and the stability of the identification effect is improved.
The following describes the specific implementation of each step in fig. 1 in detail:
in step S110, an article unsealing video uploaded by the target user is acquired.
In this step, an article unsealing video uploaded by a target user may be obtained, and the target user may be a user who uploads the article unsealing video.
The above-mentioned article may be a valuable article such as rare wine, for example, a certain brand of baijiu (Chinese liquor); it may be set according to the actual situation, and the present disclosure does not specially limit it.
The article unsealing video is used to reflect the unsealing process of the article. Illustratively, the process of unsealing the article may include at least the following four stages: displaying the intact, un-unsealed outer package; unsealing the outer package and taking the article out of the outer package; displaying the article with its sealing ring not yet unsealed; and unsealing the sealing ring and the sealing cover of the article. Accordingly, the article unsealing video at least comprises four video clips corresponding to these four stages, namely a first video clip, a second video clip, a third video clip and a fourth video clip.
It should be noted that the process of unsealing the article can be split into different numbers of stages according to actual situations, for example: the article unsealing video can be divided into 10 stages in a finer granularity, so that the article unsealing video can include 10 video segments corresponding to the 10 stages, and the article unsealing video can be set according to actual conditions, which is not limited by the disclosure.
In the following embodiments, the article unsealing video includes a first video clip, a second video clip, a third video clip and a fourth video clip.
After the article unsealing video is acquired, step S120 may be executed.
In step S120, it is identified whether the outer package of the article is intact by the first video segment, and after identifying that the outer package is intact, it is identified whether the outer package is unsealed by the second video segment.
In this step, it can be identified whether the outer package of the article is intact or not through the first video clip, where the first video clip is used to reflect "the complete outer package display without unsealing", and the second video clip is used to reflect "the outer package unsealing, and the article is taken out from the outer package".
Specifically, in order to reduce the amount of operation in the recognition process without affecting the recognition, frame extraction processing may be performed on the first video segment (for example, one frame is extracted at an interval of 10 frames, which may be set according to actual conditions, and this is not specially limited by the present disclosure) to obtain at least two frames of images, and then, through the at least two frames of images, whether the outer package of the article is intact or not is recognized. By frame extraction processing, all frames of the video clip are not required to be calculated in the identification process, and the purposes of saving calculation power and improving speed are achieved.
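As an illustrative, non-limiting sketch, the frame extraction described above could be implemented in Python with OpenCV as follows; the 10-frame interval and the file name are only examples and are not prescribed by the present disclosure:

```python
import cv2

def extract_frames(video_path: str, interval: int = 10):
    """Sample one frame every `interval` frames from a video clip."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of the clip
            break
        if index % interval == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames

# e.g. frames = extract_frames("first_video_clip.mp4", interval=10)
```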
Specifically, whether the top label of the article is intact or not and whether the bottom label of the article is intact or not can be identified through the at least two frames of images, and under the condition that both the top label and the bottom label are intact, the outer package of the article can be determined to be intact.
For example, whether the top label of the article is intact may be identified first, and whether the bottom label is intact may be identified after the top label is determined to be intact; alternatively, whether the bottom label is intact may be identified first, and whether the top label is intact may be identified after the bottom label is determined to be intact. The specific order may be set according to the actual situation, and the present disclosure does not specially limit it.
In the following embodiments, the sequence of identifying whether the top label of the article is intact and then identifying whether the bottom label of the article is intact is described as an example.
Referring to fig. 2, fig. 2 shows a schematic flow chart of identifying whether the top label of the article is intact in the embodiment of the present disclosure, which includes steps S201 to S203:
in step S201, integrity detection is performed on top labels in at least two frames of images by using a pre-trained integrity detection model; the integrity detection model is a classification model.
In this step, integrity detection may be performed on the top label in at least two frames of images by using a pre-trained integrity detection model. The integrity detection model has the following functions: and detecting the position frame of the top label from the at least two frames of images, and detecting whether the top label in the position frame is intact or not after the position frame of the top label is detected.
Since the top label occupies only a small area of the outer package, in order to suppress interference from background information on the outer package, the integrity detection model may be implemented as the deep-learning object detection model YOLOv5m.
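Purely for illustration, and assuming the publicly available ultralytics/yolov5 hub interface (the disclosure itself does not specify an API), a YOLOv5m-based label detector might be invoked along the following lines; the confidence threshold is an assumed value:

```python
import torch

# Load a YOLOv5m model from the public hub; in practice the model would be
# fine-tuned on images annotated with top-label and bottom-label boxes.
model = torch.hub.load("ultralytics/yolov5", "yolov5m", pretrained=True)
model.conf = 0.5  # assumed confidence threshold

def detect_label_boxes(frame):
    """Run detection on one frame and return boxes as (x1, y1, x2, y2, conf, cls)."""
    results = model(frame)           # frame: a numpy image array
    return results.xyxy[0].tolist()  # detections for this single image
```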
The integrity detection model may output its result in the form of probabilities. For example, the output result may be: 0 (90%), 1 (10%), where 0 represents that the top label is intact and 90% is the probability corresponding to that output, and 1 represents other cases (such as the label being lost, blurred, or not recognized) and 10% is the probability corresponding to that output.
In the training of the integrity detection model, if the number of training sample images is small, the training sample images may be artificially generated, and the training sample images may be subjected to data enhancement, so as to increase the data amount of the training sample images. Meanwhile, in order to enable the integrity detection model to detect the label with a small target, the training sample image may be scaled, for example: each side of the training sample image is scaled to 512 pixels, and sides less than 512 pixels are padded with (0,0,0) pixel values.
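One possible reading of the scaling-and-padding step, given only as a non-limiting sketch (the letterboxing strategy is assumed), is the following:

```python
import cv2
import numpy as np

def resize_and_pad(image: np.ndarray, target: int = 512) -> np.ndarray:
    """Scale the longest side to `target` pixels and pad the rest with (0, 0, 0)."""
    h, w = image.shape[:2]
    scale = target / max(h, w)
    resized = cv2.resize(image, (int(round(w * scale)), int(round(h * scale))))
    pad_h, pad_w = target - resized.shape[0], target - resized.shape[1]
    return cv2.copyMakeBorder(resized, 0, pad_h, 0, pad_w,
                              cv2.BORDER_CONSTANT, value=(0, 0, 0))
```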
In step S202, when the integrity detection model outputs the first result, it is determined that the top label is intact.
In this step, when the integrity detection model detects the position frame of the top label from the at least two frames of images and determines that the top label is intact, a first result (e.g., 0 (90%), 1 (10%)) may be output; the first result is used to indicate that the top label is intact, so that it may be determined that the top label is intact.
In step S203, when the integrity detection model outputs the second result, whether the top label is intact is determined by using a pre-trained photographed part detection model.
In this step, if the integrity detection model does not detect the position of the top label from the at least two images, or determines that the top label is damaged, a second result (e.g., 0 (10%), 1 (90%)) may be output, where the second result is used to represent that the top label is damaged or not detected, and at this time, it cannot be determined whether the top label is damaged or the top label is not captured in the at least two images, so that the at least two images may be detected by using a pre-trained captured region detection model to determine whether the top label is intact.
Specifically, referring to fig. 3, fig. 3 is a schematic flow chart illustrating how to identify whether the top label is intact or not by using the photographed part detection model in the embodiment of the present disclosure, including steps S301 to S303:
in step S301, a photographed portion of the outer wrapper in at least two frames of images is recognized using the photographed portion recognition model.
In this step, the photographed part of the outer package in the at least two frames of images may be recognized using a previously trained photographed part recognition model. The photographed part recognition model may also output its result in a probabilistic form, for example: 0 (96%), 1 (1%), 2 (1%), 3 (1%), and 4 (1%), where 0 represents the top of the outer package and 96% is the probability corresponding to that output; 1 represents the bottom of the outer package and 1% is the corresponding probability; 2 represents the mouth of the wine bottle and 1% is the corresponding probability; 3 represents the whole wine bottle and 1% is the corresponding probability; and 4 represents other cases (for example, the photographed part cannot be identified) and 1% is the corresponding probability.
In step S302, it is determined that the top label is damaged in response to the photographing part being in accordance with the preset position where the top label is located.
In this step, when the photographed portion is identified as the top of the outer package (that is, the probability corresponding to the top of the outer package in the output result of the photographed portion identification model is the maximum), it may be determined that the photographed portion is consistent with the preset position of the top label, and it may be determined that the preset position of the top label is photographed by at least two frames of images, and the top label is not identified at the preset position, so that it may be determined that the top label on the outer package has been damaged. At this time, the relevant identification process may be terminated directly to determine that the target user does not qualify for reward claim.
In step S303, in response to the fact that the shooting location is inconsistent with the preset location where the top tag is located, a retransmission prompt message is sent to the target user, and whether the top tag is intact or not is identified according to the first video clip re-uploaded by the target user.
In this step, when the shooting part is not consistent with the preset position of the top tag (that is, the probability of other parts is the highest in the output result of the shooting part recognition model), it may be determined that the top of the outer package is not shot in the at least two frames of images, and further, it may be determined that the shooting content of the first video clip of the user is incorrect (the position of the top tag is not shot in the video clip), and thus, a retransmission prompt message may be sent to the target user to prompt the target user to re-upload the first video clip including the top tag.
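The two-stage decision of steps S201 to S303 can be summarized, as an illustrative sketch only, with hypothetical wrappers integrity_model and shot_part_model (their names and return formats are assumptions, not part of the disclosure):

```python
TOP = 0  # class index assumed for "top of the outer package"

def check_top_label(frames, integrity_model, shot_part_model):
    """Return 'intact', 'damaged', or 'reupload' for the top label."""
    intact_prob, other_prob = integrity_model(frames)   # e.g. (0.9, 0.1)
    if intact_prob > other_prob:                         # first result
        return "intact"
    part_probs = shot_part_model(frames)                 # e.g. [0.96, 0.01, ...]
    if part_probs.index(max(part_probs)) == TOP:
        # The top was filmed but no intact label was found -> label damaged.
        return "damaged"
    # The top was never filmed -> prompt the user to re-upload the first clip.
    return "reupload"
```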
After the top label is identified as intact, whether the bottom label is intact or not may be identified based on the above-mentioned processes in fig. 2 and 3, and if the bottom label is identified as damaged, the related identification process may be terminated directly to determine that the target user does not qualify for reward picking.
If both the top label and the bottom label are identified as intact, then it may be identified whether the overwrap is unsealed based on the second video clip.
Specifically, referring to fig. 4, fig. 4 shows a schematic flow chart of how to identify whether the outer package is unsealed through the second video clip after identifying that the outer package is intact in the embodiment of the present disclosure, which includes steps S401 to S402:
in step S401, the purchaser of the article is obtained through the traceability code on the outer package.
In this step, the purchaser of the article may be obtained through the tracing code on the outer package (in the first video clip). The tracing code may be arranged on the top label, and detailed information such as the purchaser, production batch and production date of the article can be traced through the tracing code.
In step S402, if the purchaser is consistent with the target user, it is identified whether the outer package is unsealed through the second video clip.
In this step, after the purchaser of the article is obtained through the tracing code, whether the purchaser is consistent with the target user may be compared. For example, if the ID (Identification), address, telephone, etc. of the purchaser are consistent with those of the target user, it can be determined that the article was actually purchased by the target user and that the target user has not uploaded a counterfeit article unsealing video.
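A minimal sketch of this consistency check, assuming a hypothetical dictionary layout for the purchaser and user records (the compared fields follow the example above):

```python
def purchaser_matches_user(purchaser: dict, target_user: dict) -> bool:
    """Compare purchaser details traced from the code with the uploading user.

    The compared fields (id, address, telephone) follow the example in the
    text; the dictionary layout is an assumption for illustration only.
    """
    return all(purchaser.get(key) == target_user.get(key)
               for key in ("id", "address", "telephone"))
```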
Before the second video clip identifies whether the outer package is unsealed, in order to reduce the amount of computation in the identification process, the second video clip may be first subjected to frame extraction to obtain at least two frames of images (hereinafter referred to as a second image set), and then, whether the outer package is unsealed is identified based on the at least two frames of images.
Specifically, whether the top label and/or the bottom label of the outer package are unsealed or not can be identified, namely when the top label is unsealed, the outer package can be judged to be unsealed; when the bottom label is unsealed, the outer package can be considered to be unsealed; when both the top label and the bottom label are unsealed, it can also be assumed that the outer package has been unsealed.
The following examples illustrate the process of identifying whether the top label is unsealed:
specifically, the integrity detection model may be used to perform integrity detection on the top label in the second image set, and when the integrity detection model outputs a first result (e.g., 0 (90%), 1 (10%)), it may be determined that the top label is intact, and then the identification process is terminated, and it is determined that the target user does not qualify for reward picking. When the integrity check model outputs a second result (e.g., 0 (10%), 1 (90%)), a pre-trained shot site check model may be used to determine whether the top label is intact.
Specifically, the photographed portion of the outer package in the second image set may be recognized by using the photographed portion recognition model, and if the probability that 0 (0 corresponds to the top of the outer package) is the highest in the output result of the photographed portion detection model, it may be determined that the positions of the photographed portion and the top label are the same, and if the probability that 0 is not the highest in the output result of the photographed portion detection model, it may be determined that the positions of the photographed portion and the top label are not the same.
In response to the photographed part coinciding with the preset position of the top label, it may be determined that the top label has been damaged (i.e., unsealed). In response to the photographed part being inconsistent with the preset position of the top label, retransmission prompt information is sent to the target user, and whether the top label is unsealed is identified according to the second video clip re-uploaded by the target user.
If it is recognized that the top label has been unsealed, the process may proceed to step S130.
In step S130, after recognizing that the outer package is unsealed and detecting a preset action, whether a sealing ring of the article is intact is recognized through a third video clip; the preset action is an action of taking the article out of the outer package.
In this step, after recognizing that the outer package has been unsealed, a pre-trained single-frame binary classifier may be used to detect whether the second video segment contains an action of the user taking the article out of the outer package. The classifier may be constructed using the classification model ResNet50 from the family of deep learning neural network models.
It should be noted that, in training the above classifier, data enhancement may be performed on the training sample images, for example: random rotation, random flipping, Cutout (randomly masking a rectangular region of the sample with 0 pixel values while keeping the classification label unchanged), affine transformation (a linear transformation of the vector space followed by a translation into another vector space), and so on, in order to generate more training sample images, improve the detection accuracy of the model, enable the model to learn more robust features, and improve its generalization ability. In addition, since this step only needs to detect motion from the image and does not need to attend to image details, the image may be scaled to 224 × 224 pixels to reduce the amount of computation; the specific image size may be set according to the actual situation, and the present disclosure does not specially limit it.
The classifier can output its result in the form of probabilities. The detection result comprises two classes: 0 represents the action of taking out the wine bottle, and 1 represents no action of taking out the wine bottle. Therefore, when the probability of class 0 is the largest, it can be determined that the preset action of taking the article out of the outer package has been detected.
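As a non-limiting sketch, such a single-frame binary classifier and the described augmentations might be set up in PyTorch as follows; RandomErasing stands in for Cutout, and all hyper-parameter values are assumptions rather than values specified by the disclosure:

```python
import torch.nn as nn
from torchvision import models, transforms

# Data augmentation roughly matching the description: random rotation,
# random flipping, a Cutout-style random erasing, and an affine transformation.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(15),
    transforms.RandomHorizontalFlip(),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5, value=0),  # Cutout-like: fill with 0
])

# ResNet50 backbone with a two-class head: 0 = article taken out, 1 = not taken out.
classifier = models.resnet50(pretrained=True)
classifier.fc = nn.Linear(classifier.fc.in_features, 2)
```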
After recognizing that the outer package is unsealed and detecting the preset action, whether the sealing ring of the article is intact can be recognized through the third video clip. The third video clip is used to reflect the process of displaying the article with its sealing ring not yet unsealed. Illustratively, taking rare wine as the article, the sealing ring may be a rubber cap covering the cap of the wine bottle.
Exemplarily, referring to fig. 5, fig. 5 shows a flow chart of how to identify whether a sealing ring of an article is intact through a third video segment after recognizing that an outer package is unsealed and detecting a preset action in the embodiment of the present disclosure, including steps S501-S503:
in step S501, a logistics code is acquired from an article.
In this step, the logistics code may be obtained from the article, and the logistics code may include the production information of the article and the dealer information.
In step S502, a target tracing code corresponding to the logistics code is determined based on a correspondence between the pre-stored logistics code and the tracing code.
In this step, the correspondence between the pre-stored logistics codes and the tracing codes can be obtained, and the target tracing codes corresponding to the logistics codes can be determined.
In step S503, if the target traceability code is consistent with the traceability code on the outer package, it is identified whether the seal ring of the article is intact or not through the third video clip.
In this step, the target tracing code and the tracing code on the outer package (in the first video segment) may be compared for consistency to determine whether the two are consistent. If the two are consistent, the target user can be determined not to replace the object, and no fake behavior exists. Furthermore, it can be recognized by the third video segment whether the seal ring of the article is intact or not.
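A minimal sketch of this logistics-code check, with a hypothetical pre-stored mapping (the code values are placeholders):

```python
# Hypothetical pre-stored correspondence between logistics codes and tracing codes.
LOGISTICS_TO_TRACING = {
    "LOG-0001": "TRACE-A1B2C3",
}

def logistics_code_matches(logistics_code: str, tracing_code_on_package: str) -> bool:
    """Check the binding between the logistics code and the tracing code on the package."""
    target_tracing_code = LOGISTICS_TO_TRACING.get(logistics_code)
    return target_tracing_code is not None and target_tracing_code == tracing_code_on_package
```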
Specifically, the frame extraction processing may be performed on the third video segment to obtain at least two frames of images (hereinafter referred to as a third image set), and then, based on the at least two frames of images, it may be identified whether the sealing ring of the article is intact or not.
When identifying whether the sealing ring of the article is intact, integrity detection may be performed on the sealing ring in the third image set using the integrity detection model, and when the integrity detection model outputs the first result (e.g., 0 (90%), 1 (10%)), it may be determined that the sealing ring is intact. When the integrity detection model outputs the second result (e.g., 0 (10%), 1 (90%)), whether the sealing ring is intact can be determined using the pre-trained photographed part detection model.
Specifically, the photographed part of the article in the third image set may be recognized using the photographed part recognition model. If class 2 (corresponding to the mouth of the bottle) has the largest probability in the output result of the photographed part detection model, it may be determined that the photographed part coincides with the position of the sealing ring; if class 2 does not have the largest probability, it may be determined that the photographed part does not coincide with the position of the sealing ring.
And in response to the fact that the shooting position is consistent with the preset position of the sealing ring, the sealing ring can be determined to be damaged, and further, the identification process can be terminated, and the target user is determined not to have reward picking qualification. And responding to the inconsistency between the shooting part and the preset position of the sealing ring, sending retransmission prompt information to the target user, and identifying whether the sealing ring is intact according to a third video clip uploaded by the target user again.
In an alternative embodiment, the production information of the article can be read from the sealing ring; the production information may include the production date, the production lot and the like, and whether the sealing ring of the article is intact can then be determined according to the integrity of the production information. For example, when the production information is complete (for example, the complete production information contains 10 characters and 10 characters can be read from the sealing ring), the sealing ring of the article can be determined to be intact; when the production information is incomplete (for example, the complete production information contains 10 characters but only 8 characters can be read from the sealing ring), the sealing ring of the article can be determined to be damaged.
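This auxiliary check can be sketched, assuming the 10-character example above, as follows:

```python
EXPECTED_PRODUCTION_CHARS = 10  # assumed length of the complete production information

def seal_ring_intact_by_production_info(read_characters: str) -> bool:
    """Treat the sealing ring as intact only if all production characters can be read."""
    return len(read_characters) >= EXPECTED_PRODUCTION_CHARS
```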
If the seal ring is identified as intact, the process may proceed to step S140.
In step S140, after the seal ring is identified as intact, whether the seal ring and the sealing cover of the article are unsealed is identified through a fourth video clip.
In this step, after it is recognized that the sealing ring is intact, whether the sealing ring of the article is unsealed can be recognized through the fourth video clip, after it is recognized that the sealing ring of the article is unsealed, whether the sealing cover of the article is unsealed can be recognized, the fourth video clip is used for reflecting a process of unsealing the sealing ring and the sealing cover of the article, for example, the article is used as rare wine for explanation, and the sealing cover can be a bottle cover of a wine bottle.
Specifically, the fourth video segment may be first subjected to frame extraction processing to obtain at least two frames of images (hereinafter referred to as a fourth image set), and then, based on the at least two frames of images, whether the seal ring and the seal cover of the article are unsealed is identified.
When identifying whether the seal ring of the article is unsealed, integrity detection may be performed on the seal ring in the fourth image set by using the integrity detection model, and when the integrity detection model outputs a first result (e.g., 0 (90%), 1 (10%)), it may be determined that the seal ring is intact, at this time, the above identification process may be terminated, and it is determined that the target user does not have the reward picking qualification. When the integrity check model outputs a second result (e.g., 0 (10%), 1 (90%)), it may be determined whether the seal ring is intact using a pre-trained photographed portion check model.
Specifically, the photographed part of the article in the fourth image set may be recognized using the photographed part recognition model. If class 2 (corresponding to the position of the bottle mouth) has the largest probability in the output result of the photographed part detection model, it may be determined that the photographed part coincides with the position of the sealing ring; if class 2 does not have the largest probability, it may be determined that the photographed part does not coincide with the position of the sealing ring.
In response to the shot location coinciding with the predetermined location at which the seal ring is located, it may be determined that the seal ring is damaged (i.e., unsealed). And responding to the inconsistency between the shooting part and the preset position of the sealing ring, sending retransmission prompt information to the target user, and identifying whether the sealing ring is unsealed according to a fourth video clip uploaded by the target user again.
After recognizing that the sealing ring has been unsealed, whether the sealing cover of the article is unsealed can be recognized.
When the integrity detection model is used for identifying whether the sealing cover of the article is unsealed, the integrity detection model can be used for detecting the integrity of the sealing cover in the fourth image set, when the integrity detection model outputs a first result (for example: 0 (90%), 1 (10%)), the sealing cover can be determined to be intact, and at the moment, the identification process can be terminated, and the target user is determined not to have the reward picking qualification. When the integrity check model outputs a second result (e.g., 0 (10%), 1 (90%)), it may be determined whether the sealing cap is intact using a pre-trained photographing part check model.
Specifically, the photographed part of the article in the fourth image set may be recognized using the photographed part recognition model. If class 2 (corresponding to the position of the bottle mouth) has the largest probability in the output result of the photographed part detection model, it may be determined that the photographed part coincides with the position of the sealing cover; if class 2 does not have the largest probability, it may be determined that the photographed part does not coincide with the position of the sealing cover.
In response to the photographing part being in accordance with the preset position of the sealing cover, it may be determined that the sealing cover is damaged (i.e., unsealed). And responding to the inconsistency between the shooting part and the preset position of the sealing cover, sending retransmission prompt information to the target user, and identifying whether the sealing cover is unsealed according to a fourth video clip uploaded by the target user again.
In step S150, after recognizing that the seal ring and the seal cover are unsealed, a reward is issued to the target user.
In this step, after recognizing that both the sealing ring and the sealing cover are unsealed, it can be confirmed that the article has been unsealed and that the target user purchased the article for use rather than for hoarding or speculation, so a reward may be issued to the target user. For example, the reward may be issued by returning points, a purchase coupon, a cash reward, etc. to the target user; this may be set according to the actual situation, and the present disclosure does not limit it.
In an optional implementation manner, after the article unsealing video uploaded by the target user is acquired, the present disclosure may further detect whether the article unsealing video is continuous. If the article unsealing video is continuous, whether the user has the reward pickup qualification may be determined through the identification process of the above steps S110 to S150; if the article unsealing video is discontinuous, it is directly determined that the target user does not have the reward pickup qualification, without performing the identification process of the above steps S110 to S150.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating how to determine whether to invalidate the reward pickup qualification of the target user according to the continuity of the article unsealing video in the embodiment of the present disclosure, including steps S601-S603:
in step S601, feature extraction is performed on multiple frames of images included in each video clip, so as to obtain feature information of each frame of image.
In this step, feature information corresponding to the multiple frames of images contained in each video segment may be extracted, where the feature information may be a feature vector. Illustratively, a neural network model pre-trained on ImageNet can be adopted, with the last fully-connected layer removed and only the output logit of the second-to-last layer taken as the feature vector; the vector dimension is 2048. For example, the feature vector corresponding to a certain frame image can be represented as a = (x_11, x_12, x_13, ..., x_1n), n = 2048.
In order to reduce the amount of computation and speed up the computation process, each frame of image can be scaled to 448 × 448 pixel size, then the preprocessed image is input into the neural network model for feature extraction to obtain a feature vector corresponding to each frame of image, and normalization operation can be performed on each feature vector.
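A sketch of the feature extraction described above, assuming a ResNet50 backbone and L2 normalization (the disclosure only requires an ImageNet-pretrained network with its last fully-connected layer removed):

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

backbone = models.resnet50(pretrained=True)  # ImageNet-pretrained network
backbone.fc = nn.Identity()                  # drop the final fully-connected layer
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((448, 448)),
    transforms.ToTensor(),
])

@torch.no_grad()
def frame_feature(pil_image) -> torch.Tensor:
    """Return an L2-normalized 2048-dimensional feature vector for one frame."""
    x = preprocess(pil_image).unsqueeze(0)   # shape (1, 3, 448, 448)
    feat = backbone(x).squeeze(0)            # shape (2048,)
    return feat / feat.norm()
```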
In step S602, a degree of deviation between two adjacent frames of images is calculated according to the feature information of the two adjacent frames of images.
In this step, the degree of deviation between two adjacent frames of images can be calculated according to the feature information of the two adjacent frames of images. Illustratively, the degree of deviation may be calculated as the Euclidean distance between the two feature vectors, referring to the following Equation 1:

d(a, b) = \sqrt{\sum_{j=1}^{n} (a_j - b_j)^2}    (Equation 1)

where d represents the above-mentioned degree of deviation. Illustratively, taking the feature vector of the i-th frame image as a = (x_11, x_12, x_13, ..., x_1n) and the feature vector of the (i+1)-th frame image as b = (x_21, x_22, x_23, ..., x_2n), the degree of deviation between the i-th frame image and the (i+1)-th frame image can be calculated according to the following Equation 2:

d = \sqrt{(x_11 - x_21)^2 + (x_12 - x_22)^2 + \cdots + (x_1n - x_2n)^2}    (Equation 2)
after the deviation degree is calculated, when the deviation degree of any two adjacent frames is smaller than the deviation degree threshold value, it may be determined that the video clip uploaded by the target user does not have the suspicion of splicing a counterfeit video, and thus, the above steps S110 to S150 may be performed to identify whether the article is unsealed. When the deviation degree of any two frame images is greater than the deviation degree threshold, step S603 may be performed:
in step S603, in response to the degree of deviation being greater than the degree of deviation threshold, it is determined that the target user is not eligible for reward claim.
In this step, when the degree of deviation of any two adjacent frames of images is greater than the deviation degree threshold, it can be determined that the video segment is discontinuous and may be a forged unsealing video, and thus it can be directly determined that the target user does not have the reward pickup qualification.
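A minimal sketch of the continuity check of steps S601 to S603, with an assumed deviation threshold:

```python
import numpy as np

def is_video_continuous(features, threshold=0.5):
    """Return False if any adjacent pair of frame features deviates beyond the threshold.

    `features` holds the (normalized) feature vectors of consecutive frames;
    the threshold value is an assumption for illustration.
    """
    for prev, curr in zip(features, features[1:]):
        deviation = float(np.linalg.norm(prev - curr))  # Euclidean distance (Equation 1)
        if deviation > threshold:
            return False  # suspected splice -> no reward qualification
    return True
```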
In an optional implementation manner, after the article unsealing video uploaded by the target user is acquired, the present disclosure may further count, in each video segment, the number of frames that do not contain the article (i.e., frames in which the article has been moved out of the shot). When this number is greater than a preset threshold, it may be determined that the target user is suspected of swapping the article midway and is therefore not eligible to claim the reward; otherwise, the above steps S110 to S150 may be performed to identify whether the article is unsealed.
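A sketch of the item-presence check described above, assuming a generic object detector exposing a `detect(frame)` call that returns detections with a `label` attribute (the detector, its API, the label and the threshold are assumptions, not specified by the disclosure).

```python
def count_missing_item_frames(frames, detector, item_label="article"):
    """Count frames in which the article is not detected (moved out of shot)."""
    missing = 0
    for frame in frames:
        detections = detector.detect(frame)            # hypothetical detector API
        if item_label not in {d.label for d in detections}:
            missing += 1
    return missing

def reward_eligible_by_presence(segments, detector, max_missing_frames=10):
    """Disqualify the target user if the article leaves the shot for too many frames."""
    return all(
        count_missing_item_frames(frames, detector) <= max_missing_frames
        for frames in segments
    )
```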
By using artificial intelligence to automatically identify whether an article has been unsealed, the present disclosure, on the one hand, saves manual identification costs and improves identification efficiency, and on the other hand, makes the identification result more stable and reliable, free from interference by factors such as the operator's working state.
Referring to fig. 7, fig. 7 is a schematic overall flow chart illustrating an article unsealing identification method according to an embodiment of the present disclosure, including steps S701 to S711:
before step S701, continuity identification may be performed on each of the acquired video segments, and if any video segment is not continuous, the identification process is terminated; if the plurality of video clips are continuous, step S701 may be executed;
in step S701, frame extraction processing is performed on each video clip to obtain at least two frames of images corresponding to each video clip;
in step S702, identifying whether the top label is intact or not according to at least two frames of images corresponding to the first video segment;
after the top label is confirmed to be intact, the method enters step S703 for tracing code check (obtaining the article purchaser through the tracing code on the top label, and confirming whether the video is counterfeit according to whether the article purchaser is consistent with the target user);
after the video is determined to be not fake, step S704 is performed to identify whether the bottom label is intact according to the at least two frames of images corresponding to the second video segment;
after the bottom label is determined to be intact, step S705 is performed to identify whether the top label is unsealed according to at least two frames of images corresponding to the third video segment;
after the top label is confirmed to be unsealed, the step S706 is carried out, and whether the article is taken out of the outer package is identified according to at least two frames of images corresponding to the fourth video clip;
after the article is taken out from the outer package, the step S707 is entered, and the logistics code is checked (the logistics code is read from the article, and whether a binding relationship exists between the logistics code and the tracing code is confirmed);
if the binding relationship exists, the process goes to step S708, where whether the rubber cap on the bottle cap is intact is identified according to the fifth video segment; in this step, information such as the production date and batch number on the sealing cover can also be read, and whether the rubber cap is intact can be verified in an auxiliary manner according to whether this information is complete;
in step S709, whether the rubber cap on the bottle cap is unsealed is identified according to the sixth video segment;
after the rubber cap is confirmed to be unsealed, the step S710 is carried out, and whether the bottle cap is unsealed or not is identified according to the seventh video segment;
after confirming that the seal cap is unsealed, the flow proceeds to step S711, where the unsealing is confirmed and the unsealing recognition result is output.
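The flow of steps S701 to S711 is essentially a chain of checks with early exit. The sketch below illustrates this structure, assuming each check is supplied as a callable returning True or False; the step names are illustrative labels, not APIs defined by the disclosure.

```python
def identify_unsealing(segments, checks):
    """checks: ordered list of (step_name, callable) pairs mirroring Fig. 7, e.g.
    S702 top label intact, S703 trace-code check, S704 bottom label intact,
    S705 top label unsealed, S706 article taken out, S707 logistics-code check,
    S708 rubber cap intact, S709 rubber cap unsealed, S710 bottle cap unsealed."""
    for step_name, check in checks:
        if not check(segments):
            # the chain stops at the first failed check; no unsealing is confirmed
            return {"unsealed": False, "failed_step": step_name}
    # S711: all checks passed, the unsealing is confirmed
    return {"unsealed": True, "failed_step": None}
```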
Referring to fig. 8, fig. 8 is a schematic overall flow chart illustrating another method for identifying the article unsealing in the embodiment of the present disclosure, including steps S801 to S813:
after the article unsealing video is obtained, continuity identification is first performed on each video clip through a GPU resource pool; if the continuity identification passes, the process enters step S801, in which frame extraction is performed on each video clip to obtain at least two frames of images corresponding to each video clip;
in step S802, at least two frames of images corresponding to each video clip are preprocessed;
in step S803, it is identified whether the top label is intact;
after the top label is confirmed to be intact, the step S804 is entered for tracing code check (whether the purchaser is consistent with the target user is checked according to the tracing code on the top label);
if yes, entering step S805 to identify whether the bottom label is intact;
if the bottom label is intact, step S806 is entered to identify whether the top label is unsealed;
after confirming that the top label has been unsealed, the flow proceeds to step S807, and it is identified whether or not there is an action of taking out the article from the exterior package;
if yes, entering step S808, checking the logistics codes (checking whether the logistics codes on the article and the traceability codes on the outer package have an association relation or not);
if the correlation exists, the process goes to step S809 to identify whether the seal ring is intact;
after the seal ring is confirmed to be intact, the process proceeds to step S810, where the production date and the batch number are checked (whether the production date and the batch number information on the seal cap are intact is checked);
if the information is complete, the process proceeds to step S811 to identify whether the sealing ring on the article is unsealed;
after confirming that the seal ring is unsealed, the process proceeds to step S812, where whether the seal cap on the article is unsealed is identified;
after confirming that the seal cap is unsealed, the process proceeds to step S813, where a logical judgment is performed based on the above recognition results, and an unsealing recognition result is output.
According to the method, the modules of the above steps are split according to their different requirements on the central processing unit (CPU) and the graphics processing unit (GPU), and are deployed in a micro-service, distributed manner. The CPU-intensive steps S801, S802, S804, S808, S810 and S813 may be deployed to a CPU resource pool, and the GPU-computation-intensive steps S803, S805, S806, S807, S809, S811 and S812 may be deployed to a GPU resource pool server. In this way, the user can dynamically adjust the CPU and GPU resources used by the service according to the actual computing performance requirements, so that CPU and GPU resources are utilized to the maximum extent while the required service throughput is met. Through this micro-service architecture, the steps form a pipeline and the computations of the modules are parallelized as much as possible, which greatly improves the overall service speed and throughput, allows more computation requests to be handled with limited computing resources, and enables more video streams to be identified with lower hardware requirements.
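A sketch of the module split described above, assuming a simple declarative mapping from pipeline steps to resource pools; in an actual deployment each step would be registered as an independently scalable micro-service, and the pool names and step identifiers used here are illustrative.

```python
# Illustrative routing of Fig. 8 steps to resource pools (names are assumptions).
CPU_POOL_STEPS = {
    "S801_frame_extraction", "S802_preprocessing", "S804_trace_code_check",
    "S808_logistics_code_check", "S810_date_batch_check", "S813_logic_judgement",
}
GPU_POOL_STEPS = {
    "S803_top_label_intact", "S805_bottom_label_intact", "S806_top_label_unsealed",
    "S807_item_taken_out", "S809_seal_ring_intact", "S811_seal_ring_unsealed",
    "S812_seal_cap_unsealed",
}

def resource_pool(step_name: str) -> str:
    """Route a pipeline step to the CPU or GPU resource pool."""
    return "gpu_pool" if step_name in GPU_POOL_STEPS else "cpu_pool"
```

Because each step runs as its own service, CPU-bound and GPU-bound stages can be scaled independently and chained into a pipeline, which is what allows the computations of different modules to proceed in parallel.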
The present disclosure also provides a tamper recognition device for an article, and fig. 9 shows a schematic structural view of the tamper recognition device for an article in an exemplary embodiment of the present disclosure; as shown in fig. 9, the tamper identification device 900 for an item may include a video capture module 910, a first identification module 920, a second identification module 930, a third identification module 940, and a reward issuance module 950. Wherein:
a video obtaining module 910, configured to obtain an article unsealing video uploaded by a target user, where the article unsealing video is used to reflect an article unsealing process; the article unsealing video at least comprises a first video segment, a second video segment, a third video segment and a fourth video segment;
a first identification module 920, configured to identify whether the outer package of the article is intact through a first video segment, and after identifying that the outer package is intact, identify whether the outer package is unsealed through a second video segment;
a second identification module 930, configured to identify whether a sealing ring of the article is intact through a third video segment after identifying that the outer package is unsealed and detecting a preset action; the preset action is the action of taking the article out of the outer package;
a third identification module 940, configured to identify whether the seal ring and the seal cover of the article are unsealed through a fourth video segment after identifying that the seal ring is intact;
a reward dispensing module 950 for dispensing a reward to the target user after recognizing that the seal ring and the seal cover are unsealed.
In an exemplary embodiment of the present disclosure, the first identifying module 920 is configured to:
performing frame extraction processing on the first video clip to obtain at least two frames of images; and identifying whether the outer package of the article is intact or not through the at least two frames of images.
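A minimal sketch of this frame-extraction step, assuming OpenCV; a fixed number of frames is sampled evenly from the clip (the disclosure only requires at least two frames, so the sampling count and strategy are assumptions).

```python
import cv2

def extract_frames(video_path, num_frames=8):
    """Evenly sample `num_frames` frames (at least two) from a video clip."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    wanted = {int(i * (total - 1) / (num_frames - 1)) for i in range(num_frames)}
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index in wanted:
            frames.append(frame)       # BGR numpy array for the current frame
        index += 1
    cap.release()
    return frames
```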
In an exemplary embodiment of the present disclosure, the first identifying module 920 is configured to:
identifying whether a top label of the item is intact; and identifying whether the bottom label of the article is intact.
In an exemplary embodiment of the present disclosure, the first identifying module 920 is configured to:
carrying out integrity detection on the top labels in the at least two frames of images by using a pre-trained integrity detection model; the integrity detection model is a classification model; determining that the top label is intact when the integrity detection model outputs a first result; the first result is used to characterize the top label as intact; when the integrity detection model outputs a second result, determining whether the top label is intact or not by using a pre-trained shooting part detection model; the second result is used to characterize the top label as damaged or not detected.
In an exemplary embodiment of the present disclosure, the first identifying module 920 is configured to:
recognizing the shooting part of the outer package in the at least two frames of images by using the shooting part recognition model; determining that the top label is damaged in response to the photographing part being consistent with a preset position where the top label is located; responding to the fact that the shooting part is inconsistent with the preset position of the top label, sending retransmission prompt information to the target user, and identifying whether the top label is intact or not according to the first video clip uploaded by the target user again; and the retransmission prompt information is used for prompting the target user to upload the first video segment again.
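A sketch of the top-label check with its shooting-part fallback, assuming two pre-trained models exposed through simple `predict(image)` calls (an integrity classifier returning "intact" or a second result, and a shooting-part recognizer returning the filmed part of the outer package); the model interfaces, labels and the retransmission-prompt callback are assumptions.

```python
def check_top_label(frames, integrity_model, shooting_part_model,
                    top_label_position="top", prompt_reupload=None):
    """Returns True if the top label is intact, False if damaged,
    or None if the target user must re-upload the first video segment."""
    for frame in frames:
        result = integrity_model.predict(frame)          # hypothetical classifier API
        if result == "intact":                           # first result: label intact
            return True
        # second result (damaged or not detected): check which part was filmed
        filmed_part = shooting_part_model.predict(frame) # hypothetical recognizer API
        if filmed_part == top_label_position:
            return False                                 # top label filmed but damaged
    # the top-label area was never actually filmed: request a re-upload
    if prompt_reupload is not None:
        prompt_reupload("Please re-upload the first video segment.")
    return None
```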
In an exemplary embodiment of the present disclosure, the second identifying module 930 is configured to:
acquiring a purchaser of the article through the source tracing code on the outer package; and if the buyer is consistent with the target user, identifying whether the outer package is unsealed through a second video clip.
In an exemplary embodiment of the present disclosure, the second identifying module 930 is configured to:
identifying whether a top label and/or a bottom label on the outer package is unsealed.
In an exemplary embodiment of the present disclosure, the third identifying module 940 is configured to:
acquiring a logistics code from the article after recognizing that the outer package is unsealed and detecting a preset action; determining a target traceability code corresponding to the logistics code based on a corresponding relation between a pre-stored logistics code and the traceability code; and if the target traceability code is consistent with the traceability code on the outer package, identifying whether the sealing ring of the article is intact or not through the third video clip.
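A sketch of the logistics-code binding check, assuming the pre-stored correspondence between logistics codes and traceability codes is available as a simple mapping (the actual storage backend is not specified by the disclosure).

```python
def trace_codes_match(logistics_code, trace_code_on_package, code_binding):
    """code_binding: pre-stored mapping from logistics code to traceability code.
    Only when the bound traceability code matches the code on the outer package
    does the flow continue to the seal-ring check on the third video segment."""
    target_trace_code = code_binding.get(logistics_code)
    return target_trace_code is not None and target_trace_code == trace_code_on_package
```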
In an exemplary embodiment of the present disclosure, after obtaining the item unsealing video uploaded by the target user, the reward issuance module 950 is configured to:
extracting the characteristics of a plurality of frames of images contained in each video clip to obtain the characteristic information of each frame of image; calculating the deviation degree between two adjacent frames of images according to the characteristic information of the two adjacent frames of images; responsive to the degree of deviation being greater than a degree of deviation threshold, determining that the target user is not eligible for reward claim.
In an exemplary embodiment of the present disclosure, the reward issuance module 950 is configured to:
obtaining the number of frames not containing the article from each video clip; in response to the number of frames being greater than a preset threshold, determining that the target user is not eligible for reward claim.
The details of each module in the above article unsealing identification device have already been described in detail in the corresponding article unsealing identification method, and are therefore not repeated here.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, and may also be implemented by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
The present application also provides a computer-readable storage medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device.
A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable storage medium may transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The computer readable storage medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method as described in the above embodiments.
In addition, the embodiment of the disclosure also provides an electronic device capable of implementing the method.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 1000 according to this embodiment of the disclosure is described below with reference to fig. 10. The electronic device 1000 shown in fig. 10 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the electronic device 1000 is embodied in the form of a general purpose computing device. The components of the electronic device 1000 may include, but are not limited to: the at least one processing unit 1010, the at least one memory unit 1020, a bus 1030 connecting different system components (including the memory unit 1020 and the processing unit 1010), and a display unit 1040.
Wherein the storage unit stores program code that is executable by the processing unit 1010 to cause the processing unit 1010 to perform steps according to various exemplary embodiments of the present disclosure described in the above "exemplary methods" section of this specification. For example, the processing unit 1010 may perform the following steps as shown in fig. 1: step S110, acquiring an article unsealing video uploaded by a target user, the article unsealing video being used to reflect the unsealing process of an article, wherein the article unsealing video at least comprises a first video segment, a second video segment, a third video segment and a fourth video segment; step S120, identifying whether the outer package of the article is intact through the first video clip, and identifying whether the outer package is unsealed through the second video clip after identifying that the outer package is intact; step S130, after identifying that the outer package is unsealed and detecting a preset action, identifying whether a sealing ring of the article is intact through the third video clip, the preset action being the action of taking the article out of the outer package; step S140, after identifying that the sealing ring is intact, identifying whether the sealing ring and the sealing cover of the article are unsealed through the fourth video clip; and step S150, after identifying that the sealing ring and the sealing cover are unsealed, issuing a reward to the target user.
The storage unit 1020 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 10201 and/or a cache memory unit 10202, and may further include a read-only memory unit (ROM) 10203.
The memory unit 1020 may also include a program/utility 10204 having a set (at least one) of program modules 10205, such program modules 10205 including but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1030 may be any one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, and a local bus using any of a variety of bus architectures.
The electronic device 1000 may also communicate with one or more external devices 1100 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1000, and/or with any device (e.g., router, modem, etc.) that enables the electronic device 1000 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interfaces 1050. Also, the electronic device 1000 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 1060. As shown, the network adapter 1060 communicates with the other modules of the electronic device 1000 over the bus 1030. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1000, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (13)

1. A method of tamper recognition of an article, comprising:
acquiring an article unsealing video uploaded by a target user, wherein the article unsealing video is used for reflecting the unsealing process of an article; the article unsealing video at least comprises a first video segment, a second video segment, a third video segment and a fourth video segment;
identifying whether the outer package of the article is intact through a first video clip, and identifying whether the outer package is unsealed through a second video clip after identifying that the outer package is intact;
after the outer package is recognized to be unsealed and a preset action is detected, whether a sealing ring of the article is intact or not is recognized through a third video clip; the preset action is the action of taking the article out of the outer package;
after the sealing ring is identified to be intact, identifying whether the sealing ring and the sealing cover of the article are unsealed through a fourth video clip;
and issuing a reward to the target user after recognizing that the seal ring and the seal cover are unsealed.
2. The method of claim 1, wherein the identifying whether the outer package of the article is intact through the first video clip comprises:
performing frame extraction processing on the first video clip to obtain at least two frames of images;
and identifying whether the outer package of the article is intact or not through the at least two frames of images.
3. The method of claim 2, wherein the identifying whether the outer package of the article is intact comprises:
identifying whether a top label of the item is intact; and
identifying whether a bottom label of the item is intact.
4. The method of claim 3, wherein said identifying whether the top label of the item is intact comprises:
carrying out integrity detection on the top labels in the at least two frames of images by using a pre-trained integrity detection model; the integrity detection model is a classification model;
determining that the top label is intact when the integrity detection model outputs a first result; the first result is used to characterize the top label as intact;
when the integrity detection model outputs a second result, determining whether the top label is intact or not by using a pre-trained shooting part detection model; the second result is used to characterize the top label as damaged or not detected.
5. The method of claim 4, wherein determining whether the top label is intact using a pre-trained camera site detection model comprises:
recognizing the shooting part of the outer package in the at least two frames of images by using the shooting part recognition model;
determining that the top label is damaged in response to the photographing part being consistent with a preset position where the top label is located;
responding to the inconsistency between the shooting part and the preset position of the top label, sending retransmission prompt information to the target user, and identifying whether the top label is intact or not according to the first video clip uploaded by the target user again;
and the retransmission prompt information is used for prompting the target user to upload the first video segment again.
6. The method of claim 1, wherein the identifying whether the outer package is unsealed through the second video clip after identifying that the outer package is intact comprises:
acquiring a purchaser of the article through the source tracing code on the outer package;
and if the buyer is consistent with the target user, identifying whether the outer package is unsealed through a second video clip.
7. The method of claim 1, wherein the identifying whether the outer package is unsealed through the second video clip comprises:
identifying whether a top label and/or a bottom label on the outer package is unsealed.
8. The method of claim 1, wherein the identifying whether the sealing ring of the article is intact through the third video clip after identifying that the outer package is unsealed and detecting the preset action comprises:
acquiring a logistics code from the article after recognizing that the outer package is unsealed and detecting a preset action;
determining a target tracing code corresponding to the logistics code based on a corresponding relation between the pre-stored logistics code and the tracing code;
and if the target traceability code is consistent with the traceability code on the outer package, identifying whether the sealing ring of the article is intact or not through the third video clip.
9. The method according to any one of claims 1 to 8, wherein after acquiring the item unsealing video uploaded by the target user, the method further comprises:
extracting the characteristics of a plurality of frames of images contained in each video clip to obtain the characteristic information of each frame of image;
calculating the deviation degree between two adjacent frames of images according to the characteristic information of the two adjacent frames of images;
responsive to the degree of deviation being greater than a degree of deviation threshold, determining that the target user is not eligible for reward claim.
10. The method according to any one of claims 1 to 8, further comprising:
obtaining the number of frames not containing the article from each video clip;
in response to the number of frames being greater than a preset threshold, determining that the target user is not eligible for reward claim.
11. An apparatus for tamper recognition of an article, comprising:
the video acquisition module is used for acquiring an article unsealing video uploaded by a target user, wherein the article unsealing video is used for reflecting the unsealing process of an article; the article unsealing video at least comprises a first video segment, a second video segment, a third video segment and a fourth video segment;
the first identification module is used for identifying whether the outer package of the article is intact or not through a first video clip and identifying whether the outer package is unsealed or not through a second video clip after the outer package is identified to be intact or not;
the second identification module is used for identifying whether a sealing ring of the article is intact through a third video segment after the outer package is identified to be unsealed and a preset action is detected; the preset action is the action of taking the article out of the outer package;
the third identification module is used for identifying whether the sealing ring and the sealing cover of the article are unsealed or not through a fourth video clip after the sealing ring is identified to be intact;
and the reward issuing module is used for issuing rewards to the target users after recognizing that the sealing ring and the sealing cover are unsealed.
12. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a method of tamper recognition of an article according to any one of claims 1 to 10.
13. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of tamper identification of an item of any one of claims 1 to 10 via execution of the executable instructions.
CN202211027212.XA 2022-08-25 2022-08-25 Method and device for identifying unsealing of article, computer storage medium and electronic equipment Pending CN115359401A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211027212.XA CN115359401A (en) 2022-08-25 2022-08-25 Method and device for identifying unsealing of article, computer storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211027212.XA CN115359401A (en) 2022-08-25 2022-08-25 Method and device for identifying unsealing of article, computer storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115359401A true CN115359401A (en) 2022-11-18

Family

ID=84004620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211027212.XA Pending CN115359401A (en) 2022-08-25 2022-08-25 Method and device for identifying unsealing of article, computer storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115359401A (en)

Similar Documents

Publication Publication Date Title
CN108229478B (en) Image semantic segmentation and training method and device, electronic device, storage medium, and program
WO2018121737A1 (en) Keypoint prediction, network training, and image processing methods, device, and electronic device
EP4099217A1 (en) Image processing model training method and apparatus, device, and storage medium
US20200394414A1 (en) Keyframe scheduling method and apparatus, electronic device, program and medium
CN111415336B (en) Image tampering identification method, device, server and storage medium
JP2020013553A (en) Information generating method and apparatus applicable to terminal device
CN114663871A (en) Image recognition method, training method, device, system and storage medium
CN111680546A (en) Attention detection method, attention detection device, electronic equipment and storage medium
US11967125B2 (en) Image processing method and system
CN114550051A (en) Vehicle loss detection method and device, computer equipment and storage medium
CN113255516A (en) Living body detection method and device and electronic equipment
CN114282258A (en) Screen capture data desensitization method and device, computer equipment and storage medium
US20210166028A1 (en) Automated product recognition, analysis and management
KR102170416B1 (en) Video labelling method by using computer and crowd-sourcing
CN115359401A (en) Method and device for identifying unsealing of article, computer storage medium and electronic equipment
CN116010707A (en) Commodity price anomaly identification method, device, equipment and storage medium
CN113688650B (en) Method and device for identifying picture
CN109087439A (en) Bill method of calibration, terminal device, storage medium and electronic equipment
CN114972500A (en) Checking method, marking method, system, device, terminal, equipment and medium
US11763595B2 (en) Method and system for identifying, tracking, and collecting data on a person of interest
CN114238968A (en) Application program detection method and device, storage medium and electronic equipment
CN112559340A (en) Picture testing method, device, equipment and storage medium
CN112183347A (en) Depth space gradient-based in-vivo detection method, device, equipment and medium
CN111027371A (en) Intelligent vehicle checking method and system, computer equipment and storage medium
CN111860070A (en) Method and device for identifying changed object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination