CN118038378B - Article identification data acquisition method, computer storage medium and electronic equipment - Google Patents

Article identification data acquisition method, computer storage medium and electronic equipment

Info

Publication number
CN118038378B
CN118038378B CN202410424964.2A
Authority
CN
China
Prior art keywords
image
identified
identification
similarity
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410424964.2A
Other languages
Chinese (zh)
Other versions
CN118038378A (en)
Inventor
刘西洋
李鹏
王炎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shenxiang Intelligent Technology Co ltd
Original Assignee
Zhejiang Shenxiang Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shenxiang Intelligent Technology Co ltd
Priority to CN202410424964.2A
Publication of CN118038378A
Application granted
Publication of CN118038378B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0985Hyperparameter optimisation; Meta-learning; Learning-to-learn
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/52Scale-space analysis, e.g. wavelet analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/62Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an article identification data acquisition method, a computer storage medium and electronic equipment, wherein the method comprises the following steps: determining a target object to be identified whose identification attribute is in an identification state according to the acquired image of the object to be identified within the identification area range; acquiring an image set of the target object to be identified whose identification attribute is in the identification state according to tracking of the motion trail of the target object to be identified; when the identification information of the target object to be identified is identified, acquiring target data corresponding to the identification information; and correspondingly storing the target data and the image set as data record information of the target object to be identified. Thus, abundant data is provided for identification processing, the acquisition cost of article identification data can be reduced, and the acquisition period can be shortened.

Description

Article identification data acquisition method, computer storage medium and electronic equipment
Technical Field
The application relates to the technical field of computer applications, and in particular to a method and a device for acquiring article identification data. The application also relates to a commodity database construction method and device based on a cashier device, a computer storage medium and electronic equipment.
Background
Databases have a wide range of applications and roles in commercial and technical fields, including but not limited to the following:
Data storage: databases are used to store large amounts of structured or unstructured data, such as customer information, product information, transaction records, and the like.
Data management: a database helps organize, manage, and maintain data, ensuring the consistency, integrity, and security of the data.
Data query and analysis: a database provides efficient data query and analysis functions to help users obtain useful information from large amounts of data according to specific needs.
Support for business applications: databases are the basis for many business applications, such as Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), and management of product units (SKUs, SPUs), providing data storage and operational support for these applications.
Data security and backup: a database can ensure the security and reliability of data through permission management and backup mechanisms, preventing data loss or unauthorized access.
In general, databases play an important role in storing and managing data, supporting business operations, providing data analysis, and ensuring data security, and are one of the indispensable infrastructures of modern application systems.
Therefore, a database stores different data depending on the application service scenario, but regardless of the scenario, the database needs to be constructed, updated, and otherwise operated and maintained.
Disclosure of Invention
The application provides a method for acquiring article identification data, which aims to solve a series of problems of low article identification efficiency, high identification cost, high identification error rate and the like in the prior art.
The application provides a method for acquiring article identification data, which comprises the following steps:
determining a target object to be identified with identification attribute in an identification state according to the acquired image of the object to be identified in the identification area range;
acquiring an image set of the target object to be identified, the identification attribute of which is in an identification state, according to tracking of the motion trail of the target object to be identified; when the identification information of the target object to be identified is identified, acquiring target data corresponding to the identification information;
and correspondingly storing the target data and the image set as data record information of the target object to be identified.
In some embodiments, the identification attribute is determined according to the following:
Judging whether the object to be identified is clamped or held according to the image of the object to be identified in the identification area range, and if so, judging that the identification attribute is in an identification state.
In some embodiments, the article to be identified is an article to be sold, the identification area range is an article checkout area, and the identification information is a bar code or a two-dimensional code corresponding to the article to be sold; identifying the identification information of the target object to be identified includes: scanning the bar code or the two-dimensional code of the commodity to be sold; the target data includes at least SKU information of the commodity to be sold.
In some embodiments, the determining whether the object to be identified is clamped or held according to the image of the object to be identified in the range of the identification area includes:
Carrying out probability detection of the holding or clamping state on the image characteristics of the object image through an identification attribute detection branch in a detection model;
And determining the identification attribute of the object to be identified according to the probability value of the object image acquired by detection in the holding or clamping mode.
In some embodiments, the acquiring the image set of the target object to be identified with the identification attribute in the identification state according to the tracking of the motion trail of the target object to be identified includes:
acquiring a first image of the target object to be identified before identification information of the target object to be identified is identified according to tracking of the motion trail of the target object to be identified;
Acquiring a second image of the target object to be identified when the identification information of the target object to be identified is identified;
and/or;
After the identification information of the target object to be identified is identified, acquiring a third image of the target object to be identified;
Determining the first image and the second image as the image set, and/or determining the first image, the second image and the third image as the image set.
In some embodiments, the storing the target data and the image set correspondingly as the data record information of the target object to be identified includes:
Taking the second image in the image set as a reference image, screening the first image and/or the third image in the image set, and determining a target image set;
And correspondingly storing the target data and the target image set as data record information of the target object to be identified.
In some embodiments, the screening the first image and/or the third image in the image set with the second image as a reference image, and determining a target image set, includes:
performing similarity calculation on the reference image and a first image in the image set, and determining a similarity value of the reference image and the first image;
and/or,
performing similarity calculation on the reference image and a third image in the image set, and determining a similarity value of the reference image and the third image;
and determining the image with the similarity value within the similarity threshold interval as the image in the target image set.
In some embodiments, further comprising:
when the similarity value is smaller than the lower limit value of the similarity threshold interval, determining the image whose similarity value is smaller than the lower limit value as a comparison image;
calculating the average of the similarity between the reference image and the remaining images in the image set other than the comparison image, and determining a first similarity average value;
calculating the average of the similarity between the comparison image and the remaining images in the image set other than the reference image, and determining a second similarity average value;
and comparing the first similarity average value with the second similarity average value, and deleting the comparison image if the first similarity average value is larger than the second similarity average value.
In some embodiments, further comprising:
and if the first similarity average value is smaller than the second similarity average value, deleting the image set.
The application also provides an article identification data acquisition device, which comprises:
The determining unit is used for determining a target object to be identified, the identification attribute of which is in an identification state, according to the acquired image of the object to be identified in the identification area range;
The acquisition unit is used for acquiring an image set of the target object to be identified, the identification attribute of which is in an identification state, according to tracking of the motion trail of the target object to be identified; and for acquiring target data corresponding to the identification information when the identification information of the target object to be identified is identified;
and the storage unit is used for correspondingly storing the target data and the image set as data record information of the target object to be identified.
The application also provides a commodity database construction method based on the cashier device, which comprises the following steps:
determining a target commodity to be identified with identification attribute in an identification state according to the commodity image to be identified acquired in the commodity settlement identification area of the cashier device;
acquiring an image set of the target commodity to be identified, the identification attribute of which is in an identification state, according to tracking of the motion trail of the target commodity to be identified; acquiring commodity target data corresponding to the identification information when the identification information of the target commodity to be identified is identified;
And correspondingly storing the commodity target data and the image set as data record information for constructing a commodity database.
The application also provides a computer storage medium for storing the network platform generated data and a program for processing the network platform generated data;
The program, when read and executed by the processor, performs the article identification data acquisition method as described above, or performs the merchandise database construction method based on the cashier device as described above.
The present application also provides an electronic device including:
A processor;
And the memory is used for storing a program for processing the network platform generated data, and the program, when being read and executed by the processor, executes the article identification data acquisition method or the commodity database construction method based on the cashier device.
Compared with the prior art, the application has the following advantages:
According to the article identification data acquisition method provided by the application, no dedicated scenario or equipment for article identification needs to be developed separately during the acquisition process; corresponding data acquisition can be performed with the existing identification equipment, rich data is provided for identification processing, the acquisition cost of article identification data can be reduced, and the acquisition period is shortened. In addition, the method can directly acquire the corresponding target data through the identification information of the target object to be identified without additionally collecting article features; it can screen and de-duplicate the images in the image set of the target object to be identified to ensure the quality of the article identification data, and it can also acquire multi-angle images of a single article, making the data record information of the identification images richer and providing a more accurate, reliable, and abundant data basis for managing articles, constructing article databases, and the like through the data record information of the identified articles.
Drawings
Fig. 1 is a flowchart of a method for acquiring article identification data according to the present application.
Fig. 2 is a schematic structural diagram of an article identification data acquisition device provided by the application.
Fig. 3 is a flowchart of a commodity database construction method based on a cashier device provided by the application.
Fig. 4 is a schematic view of a scenario in which a target commodity to be identified is identified with respect to a commodity settlement identification area in a commodity database construction method based on a cashing device according to the present application.
Fig. 5 is a schematic view of a scenario regarding an embodiment of commodity database construction in the commodity database construction method based on a cashing device provided by the present application.
Fig. 6 is a schematic structural diagram of a commodity database construction device based on a cashier device.
Fig. 7 is a schematic structural diagram of an electronic device provided by the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application may be embodied in many other forms than those herein described, and those skilled in the art will readily appreciate that the present application may be similarly embodied without departing from the spirit or essential characteristics thereof, and therefore the present application is not limited to the specific embodiments disclosed below.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. Expressions of the kind used in the present application and in the appended claims do not limit quantity or order, but are used to distinguish information of the same type from one another.
Based on the above background, the inventive concept originated from problems encountered in the commodity database construction process. It will be appreciated that the inventive concept may be extended to the construction of commodity databases or databases for other application service scenarios in which the same technical problems arise. For example, just as a database may exist in a purchase-and-sale application service scenario, a corresponding database may also exist in a production application service scenario; in short, wherever data changes related to production, purchasing, or output need to be handled, management and maintenance of the data can be realized by constructing a database. Accordingly, the description of the prior art in this embodiment takes commodity data as an example and not as a limitation. Of course, in this embodiment the commodity database is not limited to the online case in an e-commerce scenario, and may further include the commodity database of an offline entity and a combined online-offline commodity database, that is, it covers the various scenarios in which management and maintenance of commodity data are required.
The prior art is described below with reference to commodity data examples, and the construction of the commodity database according to the prior art may schematically include the following:
1. Commodity database construction based on the shelf scenario. This method mainly collects commodities on physical shelves to construct a commodity database. Its drawbacks are: because goods on shelves are placed in a regular manner, only the front view of the goods can be captured, while other views such as the sides, back, and bottom cannot, so the acquired data is not rich enough. In addition, commodities on supermarket shelves are generally placed and replaced on a daily, weekly, or monthly cycle, so the data acquisition cycle is long and the efficiency is low.
2. Commodity database construction based on commodity SKU (Stock Keeping Unit: minimum stock unit) identification by depth feature comparison. This method identifies the commodity SKU by extracting depth features of the commodity image and comparing them with the features of a base library. Its drawbacks are: the features of the base-library commodities need to be preprocessed to construct the corresponding base-library feature data, and a newly added type of commodity cannot be directly identified, so the construction is complex and the efficiency is low. In addition, when the difference between similar commodities is too large or the similarity between different commodities is too high, the accuracy of feature comparison and identification is low, easily causing misidentification of the commodity SKU.
Therefore, in the prior art, the acquisition efficiency of article identification data is low, and the acquisition cost and the acquisition error rate are high, due to limitations in how the data is acquired and in the identification range.
Based on this, the present application provides a method for acquiring article identification data, as shown in fig. 1, fig. 1 is a flowchart of a method for acquiring article identification data, where an embodiment of the method may include:
Step S101: determining a target object to be identified with identification attribute in an identification state according to the acquired image of the object to be identified in the identification area range;
Step S102: acquiring an image set of the target object to be identified, the identification attribute of which is in an identification state, according to tracking of the motion trail of the target object to be identified; when the identification information of the target object to be identified is identified, acquiring target data corresponding to the identification information;
Step S103: and correspondingly storing the target data and the image set as data record information of the target object to be identified.
The above steps S101 to S103 are described in detail below.
Regarding step S101: and determining the target object to be identified with the identification attribute in the identification state according to the acquired image of the object to be identified in the identification area range.
In this embodiment, the article to be identified is an article to be sold, the identification area range is an article checkout area, and the identification information is a bar code or a two-dimensional code corresponding to the article to be sold; identifying the identification information of the target object to be identified includes scanning the bar code or the two-dimensional code of the commodity to be sold; the target data includes at least SKU information of the commodity to be sold.
Of course, the identification area range may also be understood as any area used for identifying the item to be identified, for example, a code-scanning identification area. The items to be identified may include one or more items, and the identification attributes may include: entering the identification state, waiting for identification, completed identification, and the like, specifically according to whether the commodity to be sold is clamped or held.
When the identification area range includes a plurality of objects to be identified, for example, when object A to be identified is ready to start moving toward the identification equipment, object A enters the identification state; object B to be identified has to wait for object A to finish identification and can only enter the identification state when it, in turn, is ready to start moving toward the equipment, so object B is in the waiting-for-identification state at that moment. When object A completes the identification operation and moves away from the identification equipment, it is in the completed-identification state. By way of example, when the item to be identified is held within the identification area, this can be understood as entering the identification state, and the item changing from held to released can be understood as the identification state being completed. The process of placing the item at the identification position within the identification area can also be understood as entering the identification state, for example: placing the object to be identified on a conveying device and performing identification-information recognition through the conveyance of that device. Therefore, entering the identification state in this embodiment may occur in various manners such as holding or placing; that is, the process of moving the object to be identified from the waiting state to the identification position can be understood as entering the identification state. In other words, the entering-identification state can be understood as the state in which the article to be sold is clamped or held, is ready for scanning identification, and undergoes scanning identification, that is, a progressive state from being ready for identification to being identified; the waiting-for-identification state can be understood as the state in which a commodity in the commodity checkout area is not yet clamped or held, that is, it is waiting to be clamped or held; and the completed-identification state can be understood as having undergone scanning identification while possibly still being held or clamped.
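For illustration only, the state progression described above might be modeled roughly as follows; the state names and the transition rule are assumptions made for this sketch and are not part of the claimed method:

```python
from enum import Enum

class IdentificationState(Enum):
    WAITING = "waiting_identification"      # in the checkout area, not yet held or clamped
    ENTERED = "entered_identification"      # held or clamped, ready for / undergoing code scanning
    COMPLETED = "completed_identification"  # scanning identification has been performed

def update_state(state: IdentificationState, is_held: bool, scanned: bool) -> IdentificationState:
    """Advance the state of one item to be identified based on per-frame observations."""
    if state is IdentificationState.WAITING and is_held:
        return IdentificationState.ENTERED
    if state is IdentificationState.ENTERED and scanned:
        return IdentificationState.COMPLETED
    return state
```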
The purpose of step S101 is to detect all the objects to be identified within the identification area range, determine the object to be identified whose identification attribute is in the identification state as the target object to be identified, and track its motion trail. The identification attribute may be determined by judging, according to the image of the object to be identified within the identification area range, whether the object to be identified is clamped or held; if so, the identification attribute is in the identification state.
The specific implementation process of step S101 may include:
Step S101-1: inputting the image of the object to be identified, which is acquired in the identification area range, into a detection model for detection, and determining an object detection frame of the object to be identified;
step S101-2: carrying out the identification attribute state identification on the image features in the object detection frame area, and determining whether the features entering the identification state exist or not;
step S101-3: if yes, the object to be identified is determined to be the target object to be identified.
In this embodiment, the detection model may be an article detection and attribute identification model based on multi-task learning. The detection model may include three modules: Backbone (backbone network), Neck, and Head. The Backbone is the main component of the model, typically a convolutional neural network (CNN) or a residual neural network (ResNet), and is responsible for extracting features of the input image for subsequent processing and analysis. The Backbone typically includes multiple layers and many parameters and can extract a multi-scale depth feature map of the image. The Neck is located between the Backbone and the Head and fuses depth feature maps of different scales produced by the Backbone; it may employ convolutional layers, pooling layers, fully connected layers, or the like. A fully connected layer (Fully Connected Layer) is typically used in classification tasks to convert feature maps into vector form and implement classification through multiple fully connected layers. The Head is mainly used to predict the category and location (bounding boxes) of targets, making predictions using the previously extracted features. In this embodiment, two branches are used to map the feature map to the center position of the object to be identified and to the boundary of the object to be identified, so as to predict the coordinate position of the detection frame of the object to be identified.
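As a rough, non-authoritative sketch of such a multi-task Head (the channel counts and layer choices below are assumptions; the patent does not fix a concrete implementation), the branches described above could be expressed in PyTorch-style code as:

```python
import torch
import torch.nn as nn

class DetectionHead(nn.Module):
    """Illustrative multi-task head: box center, box size, and identification-attribute branches."""
    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.center_branch = nn.Conv2d(in_channels, 1, kernel_size=1)  # center-point heatmap
        self.size_branch = nn.Conv2d(in_channels, 2, kernel_size=1)    # width / height from center to edge
        self.attr_branch = nn.Conv2d(in_channels, 1, kernel_size=1)    # identification-attribute logit

    def forward(self, fused_feature_map: torch.Tensor):
        center = torch.sigmoid(self.center_branch(fused_feature_map))
        size = self.size_branch(fused_feature_map)
        # Sigmoid normalizes the attribute score to [0, 1]: probability of being held / clamped.
        attr_prob = torch.sigmoid(self.attr_branch(fused_feature_map))
        return center, size, attr_prob
```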
In order to predict the identification attribute state of the object to be identified, an identification attribute branch can be added to the Head. First, the identification attribute is labeled for each training sample in the training data set, for example: entering the identification state is labeled 1 and not entering the identification state is labeled 0; in combination with the above example, the held state is 1 and the not-held state is 0. The branch output is normalized to the [0, 1] interval through a Sigmoid activation function and is used to return the probability score that the object to be identified has entered the identification state. The original binary BCE Loss formula is L_BCE = -[y·log(ŷ) + (1 - y)·log(1 - ŷ)]. In this scheme, the loss function can adopt an improved BCE Loss; compared with the original BCE Loss, the hyperparameters beta and gamma are added, which can adjust the loss-function curve and increase the gradient corresponding to difficult samples, thereby accelerating the convergence of model training. In this embodiment, the identification-attribute loss may be computed only on positive samples, that is, only on the detection frames matched to an article (positive samples); detection frames matched to the background (negative samples) do not contribute to the loss, i.e. the loss is 0 in the formula. The modified BCE Loss formula in this embodiment is as follows:
where ŷ is the prediction score output by the model, in the interval 0 to 1; y is the label of the training sample, which is 1 or 0; the Greek letters beta and gamma are two hyperparameters, which in this embodiment may be set to 0.5 and 1.5, respectively. The judgment condition of each row of the formula is: obj belonging to pos means that the predicted detection frame is assigned to a commodity, and the loss corresponding to the first row is calculated; obj belonging to neg means that the predicted detection frame is assigned to the background, and the loss is 0.
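Since the exact modified formula is not reproduced in the text above, the following is only a hedged sketch consistent with the description (positive samples weighted by the hyperparameters beta and gamma in a focal-style manner, negative samples contributing zero loss); the function name and the precise weighting are assumptions, not the patented formula:

```python
import torch

def modified_bce_loss(y_pred: torch.Tensor, y_true: torch.Tensor,
                      is_positive: torch.Tensor,
                      beta: float = 0.5, gamma: float = 1.5) -> torch.Tensor:
    """Hedged sketch of a BCE variant with extra hyperparameters beta and gamma.

    y_pred: predicted scores in (0, 1); y_true: labels in {0, 1};
    is_positive: mask of detection frames matched to an article (positives);
    frames matched to the background (negatives) contribute zero loss, as in the description.
    """
    eps = 1e-7
    y_pred = y_pred.clamp(eps, 1 - eps)
    # Focal-style modulation is one way to "increase the gradient of difficult samples";
    # the actual patented formula may differ.
    pos_term = beta * (1 - y_pred) ** gamma * y_true * torch.log(y_pred)
    neg_term = (1 - beta) * y_pred ** gamma * (1 - y_true) * torch.log(1 - y_pred)
    loss = -(pos_term + neg_term)
    return (loss * is_positive.float()).sum() / is_positive.float().sum().clamp(min=1.0)
```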
Based on this, the specific implementation procedure of the step S101-1 may include:
Step S101-11: extracting a depth feature map of the object image to be identified;
Step S101-12: fusing the depth feature images to obtain a fused feature image;
Step S101-13: and mapping the fusion feature map to the center position and the edge of the object to be identified, and determining the object detection frame. I.e. the width and height of the item to be identified can be predicted from the centre position to the edge, from which width and height the item detection frame can be determined.
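Purely as an illustration of step S101-13 (the corner-coordinate convention is an assumption), the article detection frame can be recovered from the predicted center position and the predicted width and height as follows:

```python
def decode_box(cx: float, cy: float, width: float, height: float):
    """Convert a predicted center point and size into (x1, y1, x2, y2) corner coordinates."""
    return (cx - width / 2.0, cy - height / 2.0,
            cx + width / 2.0, cy + height / 2.0)
```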
The specific implementation process of the step S101-2 may include:
Step S101-21: carrying out probability detection of a handheld or clamping state on the image features of the object image through an identification attribute detection branch in the detection model;
Step S101-22: and determining the identification attribute of the object to be identified according to the probability value of the object image acquired by detection in the holding or clamping mode.
It may be appreciated that, for identifying the identification attribute state, an intersection-over-union (IoU) computation may also be performed between the article detection frame of the article to be identified and the prediction boxes that overlap it, and whether the article to be identified has entered the identification state may be determined according to the IoU between the boxes, for example: a threshold for entering the identification state is set, and when the IoU meets the threshold-range requirement, the identification attribute state is determined as entering the identification state, where the threshold may be determined according to an empirical value, pixel values, and the like. Of course, identification of the identification attribute state is not limited to these two approaches and may also be determined by analyzing the depth feature map of the object to be identified. In this embodiment, the description is given only with the detection model described above, and is not intended to limit the manner of determining the identification attribute state.
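A minimal sketch of the intersection-over-union alternative mentioned above, assuming boxes in (x1, y1, x2, y2) format and an illustrative, empirically chosen threshold:

```python
def iou(box_a, box_b) -> float:
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: treat the article as "entering the identification state" when its overlap
# with a prediction box exceeds an empirically chosen threshold (value assumed).
ENTER_THRESHOLD = 0.3
```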
When the identification attribute of the object to be identified is determined to be the clamped or held state, the motion trail of the target object to be identified needs to be tracked so as to acquire the corresponding target data.
Regarding the step S102: acquiring an image set of the target object to be identified, the identification attribute of which is in an identification state, according to tracking of the motion trail of the target object to be identified; and acquiring target data corresponding to the identification information when the identification information of the target object to be identified is identified.
In step S102, an image of the target object to be identified may be captured from the tracking video during tracking of its motion trail, namely: according to the tracking of the motion trail of the target object to be identified, an image set of the target object to be identified is acquired at a set time interval. The image set may include: images of the target object to be identified before identification, at the identification moment, and/or after identification. Therefore, the image set may include a plurality of images captured at different time points, and these images may present the same, different, or partially identical views of the object from multiple viewing angles.
The specific implementation process of step S102 may include:
step S102-1: acquiring a first image of the target object to be identified before identification information of the target object to be identified is identified according to tracking of the motion trail of the target object to be identified;
step S102-2: acquiring a second image of the target object to be identified when the identification information of the target object to be identified is identified;
and/or;
step S102-3: after the identification information of the target object to be identified is identified, acquiring a third image of the target object to be identified;
step S102-4: determining the first image and the second image as the image set, and/or determining the first image, the second image and the third image as the image set.
In this embodiment, the image set may include the first image and the second image, and may further include the first image, the second image, and the third image. The first, second, and third images are captured over different time ranges: the first image is captured at a set time interval within the time range before the identification information of the target object to be identified is identified; the second image is the image captured at the identification moment when the identification information of the target object to be identified is identified; and the third image is captured at a set time interval within the time range after the identification information of the target object to be identified has been identified. The third image can be captured at the same time interval as the first image, or a different interval can be set. Usually, before the identification information of the target object to be identified is identified, its motion trail is rich, whereas after the identification information has been identified the motion trail is simpler; therefore, compared with the third image, the capture interval of the first image can be set shorter. Of course, the interval can also be set according to the rate of change of the motion trail. When the identification device identifies the identification information of the target object to be identified, the corresponding target data is acquired. Namely: when the identification device scans the identification information, an identification signal, such as a code-scanning signal, is generated; at that moment the identification information can be recognized, the corresponding target data can be found through the recognized identification information, and when the identification signal is generated, an image of the target object to be identified at that moment can be captured as the second image. For example: when the commodity is scanned, the commodity bar code number can be recognized, and the corresponding detailed commodity information, such as the commodity name and commodity specification, can then be queried from the commodity management system through that number and associated with the commodity images under the same SKU.
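A hedged sketch of the capture logic in steps S102-1 to S102-4 is given below; the frame-grabbing and scan-signal callbacks, the sampling intervals, and the post-scan frame count are all assumed for illustration and are not specified by the application:

```python
import time

def collect_image_set(grab_frame, scan_signal_seen, lookup_target_data,
                      pre_interval: float = 0.2, post_interval: float = 0.5,
                      post_count: int = 5):
    """Collect first/second/third images around the moment the identification code is scanned.

    grab_frame():           returns the current cropped image of the tracked target item
    scan_signal_seen():     returns the decoded identification info once the code is scanned, else None
    lookup_target_data(id): returns the target data (e.g. SKU record) for that identification info
    """
    first_images = []
    # Before the scan: sample at a comparatively short interval (the trajectory is richer).
    info = scan_signal_seen()
    while info is None:
        first_images.append(grab_frame())
        time.sleep(pre_interval)
        info = scan_signal_seen()
    second_image = grab_frame()              # image at the identification moment
    target_data = lookup_target_data(info)   # e.g. SKU information for the scanned barcode
    # After the scan: sample at a (possibly longer) interval while tracking continues.
    third_images = []
    for _ in range(post_count):
        time.sleep(post_interval)
        third_images.append(grab_frame())
    image_set = first_images + [second_image] + third_images
    return target_data, image_set
```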
It can be understood that after the target object to be identified has been identified, its motion trail moves toward the periphery of the identification area, and its state transitions from entering the identification state to completing the identification state. Therefore, this embodiment tracks the target object to be identified through the progressive state from the moment it is held or clamped until it is released from the hand or the clamping part, at which point tracking stops. In this way, images of the target object to be identified are acquired from multiple angles during tracking, and the description information of the target object to be identified in the database is enriched by combining the images with the target data.
Regarding step S103: and correspondingly storing the target data and the image set as data record information of the target object to be identified.
The image set determined based on the captured first and second images, and/or the image set determined based on the first, second, and third images, is stored together with the target data as the data record information of the target object to be identified.
In order to improve the data quality and avoid generating redundant data, the images in the image set can be screened to remove images that are duplicated or have large repeated areas, and also to remove images that are completely dissimilar, thereby ensuring the image quality.
Thus, step S103 of the present embodiment may include:
step S103-1: taking the second image in the image set as a reference image, screening the first image and/or the third image in the image set, and determining a target image set;
step S103-2: and correspondingly storing the target data and the target image set as data record information of the target object to be identified.
The filtering of the image set in step S103-1 may filter the first image, filter the third image, or filter both the first image and the third image. A particular implementation may include at least three ways.
Mode one:
step S103-111: and respectively carrying out similarity calculation on the reference image and the first image and the third image in the image set, and determining a similarity value of the reference image and the first image and a similarity value of the reference image and the third image.
Step S103-112: determining the images whose similarity values lie within the similarity threshold interval as the images in the target image set. The target image set may be the set of images remaining after the images that do not meet the similarity requirement are removed, or may be formed directly from the images that meet the similarity requirement. An image whose similarity value is larger than the upper limit of the similarity threshold interval has a large overlap with the reference image and can therefore be deleted from the image set. When a similarity value is smaller than the lower limit of the similarity threshold interval, there may be an error in the reference image or in the image compared with it; therefore, the method includes: taking the image whose similarity value is smaller than the lower limit as a comparison image; calculating a first similarity average value of the similarity values between the reference image and the remaining first and third images after the comparison image is removed; calculating a second similarity average value of the similarity values between the comparison image and those remaining images; deleting the comparison image if the first average value is larger than the second average value; and deleting the image set if the first average value is smaller than the second average value, then returning to step S101 for execution or outputting a prompt.
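The following sketch illustrates mode one (modes two and three below follow the same pattern restricted to the first or the third images). The similarity function and the threshold interval are placeholders; whether to return to step S101 or merely output a prompt after deleting the image set is left open, as in the text:

```python
def filter_image_set(reference, candidates, similarity, low: float = 0.3, high: float = 0.9):
    """Keep candidate images whose similarity to the reference lies inside [low, high].

    reference:   the second image (captured at the identification moment)
    candidates:  the first and/or third images
    similarity:  a function(image_a, image_b) -> value in [0, 1]
    Returns the target image set, or None if the whole image set should be discarded.
    """
    kept, suspicious = [], []
    for img in candidates:
        s = similarity(reference, img)
        if s > high:          # too repetitive with the reference: drop
            continue
        if s < low:           # possibly erroneous: mark as comparison image
            suspicious.append(img)
        else:
            kept.append(img)
    for comp in suspicious:
        others = [i for i in candidates if i is not comp]
        first_avg = sum(similarity(reference, i) for i in others) / max(len(others), 1)
        second_avg = sum(similarity(comp, i) for i in others) / max(len(others), 1)
        if first_avg > second_avg:
            continue          # the comparison image is the outlier: delete it
        return None           # the reference itself looks wrong: delete the whole image set
    return [reference] + kept
```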
Mode two:
Step S103-121: performing similarity calculation on the reference image and a first image in the image set, and determining a similarity value of the reference image and the first image;
Step S103-122: determining the first image whose similarity value is within the similarity threshold interval as an image in the target image set.
Further comprises:
when the similarity value is smaller than the lower limit value of the similarity threshold interval, determining the first image as a comparison image;
calculating the average of the similarity between the reference image and the remaining first images other than the comparison image, and determining a first similarity average value;
calculating the average of the similarity between the comparison image and the remaining images in the image set other than the reference image, and determining a second similarity average value; comparing the first similarity average value with the second similarity average value, and deleting the comparison image if the first similarity average value is larger than the second similarity average value; and deleting the image set if the first similarity average value is smaller than the second similarity average value.
Mode three:
Step S103-131: performing similarity calculation on the reference image and a third image in the image set, and determining a similarity value of the reference image and the third image;
Step S103-132: and determining the third image with the similarity value within the similarity threshold interval as an image in the target image set.
Further comprises:
When the similarity value is smaller than the lower limit value of the similarity threshold interval, determining the third image as a comparison image;
calculating the average value of the similarity between the reference image and the rest images except the contrast image in the image set, and determining a first average value of the similarity;
Calculating the average value of the similarity between the contrast image and the rest images except the reference image in the image set, and determining a second average value of the similarity;
comparing the first similarity average value with the second similarity average value, and deleting the comparison graph if the first similarity average value is larger than the second similarity average value; and if the first similarity average value is smaller than the second similarity average value, deleting the image set.
It should be noted that, in this embodiment, the similarity threshold interval may be determined according to an empirical value, or according to the proportion of images to be selected, for example: the selection range can be determined according to the distribution of the similarity values, and, considering the difference between the capture conditions of the first image and the third image, separate selection ranges can be set for the first image and the third image respectively, so that the image selection is more accurate.
In step S103, according to the screening of the image set, the images that meet the screening requirements are retained and the images that do not are eliminated, thereby ensuring the accuracy of the data record built from the first image and the third image. Based on this screening, the target data and the target image set are stored correspondingly as the data record information of the target object to be identified.
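As a loose illustration of the data record produced here (the field names and the in-memory storage are assumptions; a real system would write to the article or commodity database):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ItemDataRecord:
    """One data record: target data (e.g. SKU information) plus the screened target image set."""
    sku_id: str
    target_data: dict                       # e.g. {"name": ..., "specification": ...}
    image_paths: List[str] = field(default_factory=list)

def store_record(database: list, record: ItemDataRecord) -> None:
    """Append the record; a real system would write to the commodity database instead."""
    database.append(record)
```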
The above is a description of an embodiment of the article identification data acquisition method provided by the application. The method can provide rich data for application scenarios and requirements such as database construction; no dedicated scenarios or devices need to be developed separately during the acquisition process, and the corresponding functions can be performed by the existing identification devices. By incorporating the article identification data acquisition method provided by the application into the implementation process, the data acquisition cost is reduced, the data acquisition period is shortened, and the data acquisition efficiency is improved.
The foregoing is a specific description of an embodiment of a method for acquiring article identification data, corresponding to the foregoing embodiment of the method for acquiring article identification data, and the application further discloses an embodiment of an apparatus for acquiring article identification data, referring to fig. 2, and since the apparatus embodiment is substantially similar to the method embodiment, the description is relatively simple, and the relevant points will be referred to the part of the description of the method embodiment. The device embodiments described below are merely illustrative.
As shown in fig. 2, fig. 2 is a schematic structural diagram of an article identification data obtaining apparatus provided by the present application, and an embodiment of the apparatus may include: a determining unit 201, an acquiring unit 202, and a storing unit 203.
The determining unit 201 is configured to determine, according to the acquired image of the object to be identified within the identification area range, a target object to be identified whose identification attribute is in the identification state. The identification attribute can be determined by judging, according to the image of the object to be identified within the identification area range, whether the object to be identified is clamped or held; if so, the identification attribute is in the identification state.
The acquiring unit 202 is configured to acquire an image set of the target object to be identified, where the identification attribute is in an identification state, according to tracking of a motion trail of the target object to be identified; and acquiring target data corresponding to the identification information when the identification information of the target object to be identified is identified.
The storage unit 203 is configured to store the target data and the image set correspondingly as the data record information of the target object to be identified. In some embodiments, the article to be identified is an article to be sold, the identification area range is an article checkout area, and the identification information is a bar code or a two-dimensional code corresponding to the article to be sold; identifying the identification information of the target object to be identified includes scanning the bar code or the two-dimensional code of the commodity to be sold; the target data includes at least SKU information of the commodity to be sold.
The determination unit 201 includes: a first determination subunit, a second determination subunit, and a third determination subunit;
The first determining subunit is used for inputting the image of the object to be identified, which is acquired in the identification area range, into a detection model for detection, and determining an object detection frame of the object to be identified;
The second determining subunit is used for judging whether the object to be identified is clamped or held according to the image of the object to be identified in the identification area range;
and the third determining subunit is configured to determine, when the determination result of the second determining subunit is yes, the object to be identified as the target object to be identified.
The first determination subunit includes: an extraction subunit, an acquisition subunit and a determination subunit;
The extraction subunit is used for extracting the depth feature map of the object image to be identified;
the acquisition subunit is used for fusing the depth feature images to acquire fusion feature images;
and the determining subunit is used for determining the article detection frame according to the mapping of the fusion characteristic diagram to the center position and the edge of the article to be identified.
The second determination subunit includes: a probability detection subunit and an attribute determination subunit;
The probability detection subunit is used for carrying out probability detection of the held or clamped state on the image features of the article image through the identification attribute detection branch in the detection model;
and the attribute determination subunit is used for determining the identification attribute of the object to be identified according to the probability value that the object image acquired by detection is in the holding or clamping state.
The acquisition unit 202 includes:
The first acquisition subunit is used for acquiring a first image of the target object to be identified before the identification information of the target object to be identified is identified according to the tracking of the motion trail of the target object to be identified;
a second obtaining subunit, configured to obtain a second image of the target object to be identified when the identification information of the target object to be identified is identified;
and/or,
A third obtaining subunit, configured to obtain a third image of the target object to be identified after the identification information of the target object to be identified is identified;
And an image set determination subunit configured to determine the first image and the second image as the image set, and/or determine the first image, the second image, and the third image as the image set.
The storage unit 203 may be specifically configured to store the image set determined from the captured first image and second image, and/or the image set determined from the first image, the second image, and the third image, together with the target data, as the data record information of the target object to be identified.
The storage unit 203 may further include:
a screening subunit, configured to screen the first image and/or the third image in the image set by using the second image in the image set as a reference image, and determine a target image set;
the storage unit is specifically configured to store the target data and the target image set correspondingly, and use the target data and the target image set as data record information of the object to be identified.
In other embodiments, the screening subunit 203 may further include:
A first determining subunit, configured to perform similarity calculation on the reference image and the first image and the third image in the image set, and determine a similarity value of the reference image and the first image, and a similarity value of the reference image and the third image;
And the second determining subunit is used for determining the images with the similarity values within the similarity threshold interval as images in the target image set.
In other embodiments, the screening subunit 203 may further include:
A third determining subunit, configured to perform similarity calculation on the reference image and a first image in the image set, and determine a similarity value of the reference image and the first image;
And a fourth determining subunit, configured to determine, as an image in the target image set, the first image whose similarity value is within a similarity threshold interval.
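A minimal sketch of this screening, assuming cosine similarity over image feature vectors and an illustrative threshold interval of (0.6, 0.95) (the application fixes neither the metric nor the interval): the second image acts as the reference, and only candidates whose similarity to it falls inside the interval enter the target image set.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def screen_by_reference(reference_feat, candidate_feats, interval=(0.6, 0.95)):
    """Keep candidates whose similarity to the reference image lies inside the
    similarity threshold interval: values above the upper limit look like
    near-duplicates, values below the lower limit look like outliers."""
    low, high = interval
    return [
        feat for feat in candidate_feats
        if low <= cosine_similarity(reference_feat, feat) <= high
    ]

# toy example: candidates are noisy copies of the reference feature
rng = np.random.default_rng(0)
reference = rng.normal(size=128)
candidates = [reference + 0.5 * rng.normal(size=128) for _ in range(5)]
print(len(screen_by_reference(reference, candidates)))
```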
To improve data quality and avoid data redundancy, the screening subunit further includes:
a contrast image determining subunit, configured to determine the first image as a contrast image when the similarity value is smaller than the lower limit value of the similarity threshold interval;
a first calculating subunit, configured to calculate the average similarity between the reference image and the remaining images in the image set other than the contrast image, and determine a first similarity average value;
a second calculating subunit, configured to calculate the average similarity between the contrast image and the remaining images in the image set other than the reference image, and determine a second similarity average value;
and a processing subunit, configured to compare the first similarity average value with the second similarity average value, delete the contrast image if the first similarity average value is greater than the second similarity average value, and delete the image set if the first similarity average value is smaller than the second similarity average value.
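A hedged sketch of this outlier-handling rule, again assuming cosine similarity over feature vectors: when an image falls below the lower limit it is treated as the contrast image, and the two similarity averages decide whether that single image or the whole image set is discarded. Helper names are hypothetical, and the tie case (equal averages) is a choice the text leaves open.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mean_similarity(anchor, others):
    return float(np.mean([cosine_similarity(anchor, o) for o in others])) if others else 0.0

def resolve_contrast_image(reference, contrast, image_set_feats):
    """Compare how well the reference and the contrast image each agree with
    the rest of the image set, and decide what to delete."""
    rest_without_contrast = [f for f in image_set_feats if f is not contrast]
    rest_without_reference = [f for f in image_set_feats if f is not reference]
    first_average = mean_similarity(reference, rest_without_contrast)
    second_average = mean_similarity(contrast, rest_without_reference)
    if first_average > second_average:
        return "delete contrast image"   # the contrast image is the odd one out
    # equal or smaller: the reference itself looks unreliable, drop the whole set
    return "delete image set"
```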
In other embodiments, the screening subunit may further include:
A fifth determining subunit, configured to perform similarity calculation on the reference image and a third image in the image set, and determine a similarity value of the reference image and the third image;
A sixth determining subunit, configured to determine, as an image in the target image set, the third image whose similarity value is within the similarity threshold interval.
It further comprises:
when the similarity value is smaller than the lower limit value of the similarity threshold interval, determining the third image as a contrast image;
calculating the average similarity between the reference image and the remaining images in the image set other than the contrast image, and determining a first similarity average value;
calculating the average similarity between the contrast image and the remaining images in the image set other than the reference image, and determining a second similarity average value;
and comparing the first similarity average value with the second similarity average value, deleting the contrast image if the first similarity average value is greater than the second similarity average value, and deleting the image set if the first similarity average value is smaller than the second similarity average value.
The foregoing is a description of an embodiment of the article identification data acquisition apparatus according to the present application; for the content of the apparatus embodiment, reference may be made to the content of steps S101 to S103 in the foregoing method embodiment, which is not described in detail here.
Based on the above, the present application further provides a method for constructing a commodity database based on a cashier device. As shown in fig. 3, which is a flowchart of the method for constructing a commodity database based on a cashier device, the method includes:
Step S301: Determining a target commodity to be identified whose identification attribute is in an identification state according to the commodity image to be identified acquired in the commodity settlement identification area of the cashier device. In this embodiment, the commodity settlement identification area may be the scanning table area of a manual cash register or a self-service cash register (as shown in fig. 4), and there is generally a position for placing commodities that have not yet been scanned and/or a position for placing commodities that have been scanned. Commodities in the commodity settlement identification area can be detected through the acquisition device arranged in the area, so that the motion trail of a target commodity whose identification attribute has entered the identification state can be tracked, and video data or image data of the trail tracking can be obtained.
Step S302: acquiring the target commodity image set to be identified, the identification attribute of which is in an identification state, according to tracking of the target commodity motion trail to be identified; acquiring commodity target data corresponding to the identification information when the identification information of the target commodity to be identified is identified;
Step S303: Correspondingly storing the commodity target data and the commodity image set as data record information for constructing a commodity database. In this embodiment, the construction of the commodity database may include: storing the commodity target data together with the first commodity image set, the second commodity image set and/or the third commodity image set into the commodity database (as shown in fig. 5). The commodity database can include multi-view appearance images corresponding to the same commodity, and can provide references for training a commodity ReID (commodity re-identification) model, constructing a commodity feature base for retrieval, and the like.
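As a minimal sketch of what one data record in such a commodity database could look like (SQLite and the column layout are assumptions, not part of the application), the commodity target data such as the SKU is stored alongside the multi-view image set collected around the scan event:

```python
import json
import sqlite3

def save_commodity_record(db_path, target_data, image_paths):
    """Store one data record: commodity target data plus its image set."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS commodity_records (sku TEXT, name TEXT, images TEXT)"
    )
    conn.execute(
        "INSERT INTO commodity_records (sku, name, images) VALUES (?, ?, ?)",
        (target_data["sku"], target_data.get("name", ""), json.dumps(image_paths)),
    )
    conn.commit()
    conn.close()

save_commodity_record(
    "commodity.db",
    {"sku": "6901234567892", "name": "example beverage"},  # hypothetical SKU data
    ["imgs/pre_scan.jpg", "imgs/at_scan.jpg", "imgs/post_scan.jpg"],
)
```

Records of this shape can then be exported as multi-view samples for training a commodity ReID model or indexed into a commodity feature base for retrieval, as described above.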
For the contents of steps S301 to S303, reference may be made to the contents of steps S101 to S103; a detailed description is not provided here.
Based on the foregoing, the present application further provides a commodity database construction device based on a cashier device. As shown in fig. 6, which is a schematic structural diagram of the commodity database construction device based on a cashier device, an embodiment of the device may include:
a determining unit 601, configured to determine, according to an image of a commodity to be identified acquired in the commodity settlement identification area of a cashier device, a target commodity to be identified whose identification attribute is in an identification state;
an obtaining unit 602, configured to obtain an image set of the target commodity to be identified whose identification attribute is in the identification state according to tracking of the motion trail of the target commodity to be identified, and to obtain commodity target data corresponding to the identification information when the identification information of the target commodity to be identified is identified;
and a storage unit 603, configured to store the commodity target data and the commodity image set correspondingly, as data record information for constructing a commodity database.
For the content of the above device, reference may also be made to the content of steps S101 to S103 and steps S301 to S303, and the detailed description is not repeated here.
Based on the above, the present application also provides a computer storage medium for storing network platform generated data and a program for processing the network platform generated data;
The program, when read and executed by a processor, performs the contents described above with respect to steps S101 to S103 in the article identification data acquisition method embodiment, or performs the contents described above with respect to steps S301 to S303 in the cashier-device-based commodity database construction method embodiment.
Based on the foregoing, the present application further provides an electronic device, as shown in fig. 7, fig. 7 is a schematic structural diagram of the electronic device provided by the present application, where the electronic device may include:
A processor 701;
a memory 702 for storing a program for processing network platform generated data, which, when read and executed by the processor, performs the contents described above with respect to steps S101 to S103 in the article identification data acquisition method embodiment, or performs the contents described above with respect to steps S301 to S303 in the cashier-device-based commodity database construction method embodiment.
It should be noted that the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) involved in the present application are information and data authorized by the user or fully authorized by all parties. The collection, use and processing of the related data must comply with the relevant laws, regulations and standards of the relevant countries and regions, and corresponding operation entries are provided for the user to choose to authorize or refuse.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory in a computer-readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
While the application has been described in terms of preferred embodiments, it is not intended to be limiting, but rather, it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the spirit and scope of the application as defined by the appended claims.

Claims (9)

1. An article identification data acquisition method, characterized by comprising:
Determining a target object to be identified whose identification attribute is in an identification state according to the acquired image of the object to be identified within the identification area range; wherein the identification attribute is determined by judging, according to the image of the object to be identified within the identification area range, whether the object to be identified is clamped or held; if so, the identification attribute is in the identification state;
acquiring an image set of the target object to be identified, the identification attribute of which is in an identification state, according to tracking of the motion trail of the target object to be identified; when the identification information of the object to be identified is identified, acquiring object data corresponding to the identification information;
Storing the target data and the image set correspondingly as data record information of the target object to be identified; comprising the following steps: taking the second image in the image set as a reference image, screening the first image and/or the third image in the image set, and determining a target image set; storing the target data and the target image set correspondingly as data record information of the target object to be identified; the step of screening the first image and/or the third image in the image set by taking the second image in the image set as a reference image to determine a target image set includes:
determining a similarity value of the reference image and a first image in the image set; and/or performing similarity calculation on the reference image and a third image in the image set, and determining a similarity value of the reference image and the third image; determining the image with the similarity value within a similarity threshold interval as the image in the target image set;
Further comprises:
when the similarity value is smaller than the lower limit value of the similarity threshold interval, determining the image with the similarity value smaller than the lower limit value as a contrast image;
calculating the average similarity between the reference image and the remaining images in the image set other than the contrast image, and determining a first similarity average value;
calculating the average similarity between the contrast image and the remaining images in the image set other than the reference image, and determining a second similarity average value;
and comparing the first similarity average value with the second similarity average value, and deleting the contrast image if the first similarity average value is greater than the second similarity average value.
2. The article identification data acquisition method according to claim 1, wherein the article to be identified is an article to be sold, the identification area range is an article checkout area, and the identification information is a bar code or a two-dimensional code corresponding to the article to be sold; identifying the identification information of the article to be identified comprises: scanning the bar code or the two-dimensional code of the article to be sold; and the target data includes at least SKU information of the article to be sold.
3. The method of claim 1, wherein judging whether the object to be identified is clamped or held based on the image of the object to be identified within the identification area range comprises:
Carrying out probability detection of the holding or clamping state on the image characteristics of the object image through an identification attribute detection branch in a detection model;
And determining the identification attribute of the object to be identified according to the detected probability value that the object image is in the holding or clamping state.
4. The method according to claim 1, wherein the acquiring the image set of the target object to be identified with the identification attribute in the identification state according to the tracking of the motion trail of the target object to be identified includes:
acquiring a first image of the target object to be identified before identification information of the target object to be identified is identified according to tracking of the motion trail of the target object to be identified;
Acquiring a second image of the target object to be identified when the identification information of the target object to be identified is identified;
Determining the first image and the second image as the image set;
or
Acquiring a first image of the target object to be identified before identification information of the target object to be identified is identified according to tracking of the motion trail of the target object to be identified;
Acquiring a second image of the target object to be identified when the identification information of the target object to be identified is identified;
After the identification information of the target object to be identified is identified, acquiring a third image of the target object to be identified;
And determining the first image, the second image and the third image as the image set.
5. The method as recited in claim 1, further comprising:
and if the first similarity average value is smaller than the second similarity average value, deleting the image set.
6. An article identification data acquisition device, comprising:
The determining unit is used for determining a target object to be identified whose identification attribute is in an identification state according to the acquired image of the object to be identified within the identification area range; wherein the identification attribute is determined by judging, according to the image of the object to be identified within the identification area range, whether the object to be identified is clamped or held; if so, the identification attribute is in the identification state;
The acquisition unit is used for acquiring an image set of the target object to be identified, the identification attribute of which is in an identification state, according to tracking of the motion trail of the target object to be identified; when the identification information of the object to be identified is identified, acquiring object data corresponding to the identification information;
the storage unit is used for correspondingly storing the target data and the image set as data record information of the object to be identified; comprising the following steps: taking the second image in the image set as a reference image, screening the first image and/or the third image in the image set, and determining a target image set; storing the target data and the target image set correspondingly as data record information of the target object to be identified; the step of screening the first image and/or the third image in the image set by taking the second image in the image set as a reference image to determine a target image set includes:
determining a similarity value of the reference image and a first image in the image set; and/or performing similarity calculation on the reference image and a third image in the image set, and determining a similarity value of the reference image and the third image; determining the image with the similarity value within a similarity threshold interval as the image in the target image set;
Further comprises:
when the similarity value is smaller than the lower limit value of the similarity threshold interval, determining the image with the similarity value smaller than the lower limit value as a contrast image;
calculating the average similarity between the reference image and the remaining images in the image set other than the contrast image, and determining a first similarity average value;
calculating the average similarity between the contrast image and the remaining images in the image set other than the reference image, and determining a second similarity average value;
and comparing the first similarity average value with the second similarity average value, and deleting the contrast image if the first similarity average value is greater than the second similarity average value.
7. A commodity database construction method based on a cashier device, characterized by comprising:
determining a target commodity to be identified whose identification attribute is in an identification state according to the commodity image to be identified acquired in the commodity settlement identification area of the cashier device; wherein the identification attribute is determined by judging, according to the image of the commodity to be identified within the identification area range, whether the commodity to be identified is clamped or held; if so, the identification attribute is in the identification state;
acquiring an image set of the target commodity to be identified, the identification attribute of which is in an identification state, according to tracking of the motion trail of the target commodity to be identified; acquiring commodity target data corresponding to the identification information when the identification information of the target commodity to be identified is identified;
Correspondingly storing the commodity target data and the image set as data record information for constructing a commodity database; comprising the following steps: taking the second image in the image set as a reference image, screening the first image and/or the third image in the image set, and determining a target image set; storing the target data and the target image set correspondingly as data record information of the target object to be identified; the step of screening the first image and/or the third image in the image set by taking the second image in the image set as a reference image to determine a target image set includes:
determining a similarity value of the reference image and a first image in the image set; and/or performing similarity calculation on the reference image and a third image in the image set, and determining a similarity value of the reference image and the third image; determining the image with the similarity value within a similarity threshold interval as the image in the target image set;
Further comprises:
when the similarity value is smaller than the lower limit value of the similarity threshold interval, determining the image with the similarity value smaller than the lower limit value as a contrast image;
calculating the average similarity between the reference image and the remaining images in the image set other than the contrast image, and determining a first similarity average value;
calculating the average similarity between the contrast image and the remaining images in the image set other than the reference image, and determining a second similarity average value;
and comparing the first similarity average value with the second similarity average value, and deleting the contrast image if the first similarity average value is greater than the second similarity average value.
8. A computer storage medium for storing network platform generated data and a program for processing the network platform generated data;
The program, when read and executed by a processor, performs the method of any one of claims 1 to 5 or performs the method of claim 7.
9. An electronic device, comprising:
A processor;
A memory for storing a program for processing network platform generated data, which program, when read by the processor, performs the method of any one of the preceding claims 1-5 or performs the method of claim 7.
CN202410424964.2A 2024-04-09 2024-04-09 Article identification data acquisition method, computer storage medium and electronic equipment Active CN118038378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410424964.2A CN118038378B (en) 2024-04-09 2024-04-09 Article identification data acquisition method, computer storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN118038378A (en) 2024-05-14
CN118038378B (en) 2024-07-16

Family

ID=90989492

Country Status (1)

Country Link
CN (1) CN118038378B (en)




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant