CN115330381A - Intelligent payment management method and system for goods and method thereof - Google Patents

Intelligent payment management method and system for goods and method thereof

Info

Publication number
CN115330381A
CN115330381A (application CN202211017115.2A)
Authority
CN
China
Prior art keywords
commodity
image
matrix
scanned
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211017115.2A
Other languages
Chinese (zh)
Inventor
张凯元 (Zhang Kaiyuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Junkai Electronic Technology Co ltd
Original Assignee
Shaanxi Junkai Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Junkai Electronic Technology Co ltd filed Critical Shaanxi Junkai Electronic Technology Co ltd
Priority to CN202211017115.2A priority Critical patent/CN115330381A/en
Publication of CN115330381A publication Critical patent/CN115330381A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/30Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q20/32Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
    • G06Q20/327Short range or proximity payments by means of M-devices
    • G06Q20/3276Short range or proximity payments by means of M-devices using a pictured code, e.g. barcode or QR-code, being read by the M-device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Accounting & Taxation (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an intelligent payment management system and method for goods. Commodity information, obtained by using a camera to scan the barcode or two-dimensional code attached to the RFID tag of a scanned commodity, is encoded by a Transformer-based context encoder to obtain a commodity description feature vector; the feature vector is passed through an image generator based on a generative adversarial network to obtain a commodity generation image. The commodity generation image and a camera-captured image of the scanned commodity are then input into a convolutional neural network model serving as a feature extractor to obtain a generated-image feature matrix and a scanned-image feature matrix. Finally, by comparing the difference between the two feature matrices, a classification result is obtained that indicates whether the commodity information read from the barcode or two-dimensional code on the RFID tag matches the scanned commodity. In this way, the commodity information and the RFID tag are verified before payment, reducing the chance of a wrong payment.

Description

Intelligent payment management method and system for goods and method thereof
Technical Field
The present application relates to the field of intelligent payment technologies, and more particularly, to a system and method for managing intelligent payment of goods.
Background
With the development of intelligent payment, scanning a code with a mobile phone or a handheld barcode scanner has become one of the mainstream payment modes in offline stores that mainly sell goods, such as supermarkets of all sizes, convenience stores, shopping malls, clothing stores and bookstores.
To ensure the accuracy and reliability of intelligent payment, identifying whether the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of a scanned commodity actually matches that commodity is an important research subject. In existing payment systems, mismatches between the scanned commodity information and the commodity itself occur frequently, leading to erroneous expense settlement and a poor payment experience.
Therefore, an optimized intelligent payment management system for goods is desired.
Disclosure of Invention
The present application is proposed to solve the above technical problems. Embodiments of the present application provide an intelligent payment management system and method for goods, and an electronic device. Commodity information, obtained by using a camera to scan the barcode or two-dimensional code attached to the RFID tag of a scanned commodity, is encoded by a Transformer-based context encoder to obtain a commodity description feature vector. The commodity description feature vector is passed through an image generator based on a generative adversarial network to obtain a commodity generation image. The commodity generation image and the camera-captured image of the scanned commodity are then each passed through a convolutional neural network model serving as a feature extractor to obtain a generated-image feature matrix and a scanned-image feature matrix. Next, the data matrix of the commodity generation image is fused with the generated-image feature matrix to obtain an optimized generated-image feature matrix, and a differential feature matrix between the optimized generated-image feature matrix and the scanned-image feature matrix is calculated. The differential feature matrix is then passed through a classifier to obtain a classification result indicating whether the commodity information read from the RFID tag matches the scanned commodity, thereby improving classification accuracy.
According to an aspect of the present application, there is provided an intelligent payment management system for goods, including:
a data acquisition module for acquiring commodity information obtained by using a camera to scan the barcode or two-dimensional code attached to the RFID tag of a scanned commodity, and a commodity image of the scanned commodity captured by the camera;
a commodity information description encoding module for performing word segmentation on the commodity information, passing the result through a Transformer-based context encoder to obtain a plurality of semantic feature vectors, and concatenating the semantic feature vectors to obtain a commodity description feature vector;
a text-image understanding module for passing the commodity description feature vector through an image generator based on a generative adversarial network to obtain a commodity generation image;
a convolutional encoding module for passing the commodity generation image and the commodity image of the scanned commodity respectively through a convolutional neural network model serving as a feature extractor to obtain a generated-image feature matrix and a scanned-image feature matrix;
a fusion module for fusing the data matrix of the commodity generation image with the generated-image feature matrix to obtain an optimized generated-image feature matrix;
a difference module for calculating a differential feature matrix between the optimized generated-image feature matrix and the scanned-image feature matrix; and
a management result generation module for passing the differential feature matrix through a classifier to obtain a classification result indicating whether the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of the scanned commodity matches the scanned commodity.
According to another aspect, the present application provides an intelligent payment management method for goods, including:
acquiring commodity information obtained by using a camera to scan the barcode or two-dimensional code attached to the RFID tag of a scanned commodity, and a commodity image of the scanned commodity captured by the camera;
performing word segmentation on the commodity information, passing the result through a Transformer-based context encoder to obtain a plurality of semantic feature vectors, and concatenating the semantic feature vectors to obtain a commodity description feature vector;
passing the commodity description feature vector through an image generator based on a generative adversarial network to obtain a commodity generation image;
passing the commodity generation image and the commodity image of the scanned commodity respectively through a convolutional neural network model serving as a feature extractor to obtain a generated-image feature matrix and a scanned-image feature matrix;
fusing the data matrix of the commodity generation image with the generated-image feature matrix to obtain an optimized generated-image feature matrix;
calculating a differential feature matrix between the optimized generated-image feature matrix and the scanned-image feature matrix; and
passing the differential feature matrix through a classifier to obtain a classification result indicating whether the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of the scanned commodity matches the scanned commodity.
According to still another aspect of the present application, there is provided an electronic device, including: a processor; and a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the intelligent payment management method for goods described above.
According to yet another aspect of the present application, there is provided a computer-readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the intelligent payment management method for goods described above.
Compared with the prior art, the intelligent payment management system and method for goods provided herein encode commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of a scanned commodity through a Transformer-based context encoder to obtain a commodity description feature vector; pass the commodity description feature vector through an image generator based on a generative adversarial network to obtain a commodity generation image; pass the commodity generation image and the commodity image of the scanned commodity respectively through a convolutional neural network model serving as a feature extractor to obtain a generated-image feature matrix and a scanned-image feature matrix; fuse the data matrix of the commodity generation image with the generated-image feature matrix to obtain an optimized generated-image feature matrix; calculate a differential feature matrix between the optimized generated-image feature matrix and the scanned-image feature matrix; and pass the differential feature matrix through a classifier to obtain a classification result indicating whether the commodity information matches the scanned commodity, thereby improving the accuracy of the classification result.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally indicate like parts or steps.
Fig. 1 illustrates an application scenario diagram of an intelligent payment management method for goods according to an embodiment of the present application.
Fig. 2 illustrates a block diagram of an intelligent payment management system for goods according to an embodiment of the present application.
Fig. 3 illustrates an architectural diagram of an intelligent payment management system for goods according to an embodiment of the present application.
Fig. 4 illustrates a flow chart of an intelligent payment management method for goods according to an embodiment of the present application.
Fig. 5 illustrates a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above, identifying whether the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of a scanned commodity matches the scanned commodity is an important research subject for ensuring the security and reliability of intelligent payment.
In recent years, deep learning and neural networks have been widely used in computer vision, natural language processing, text signal processing and other fields, and have reached or even exceeded human-level performance in tasks such as image classification, object detection, semantic segmentation and text translation.
The application of deep learning and neural networks thus offers a new solution for ensuring the accuracy and reliability of intelligent payment.
Accordingly, in the technical solution of the present application, commodity information is first obtained by using a camera to scan the barcode or two-dimensional code attached to the RFID tag of a scanned commodity, and a commodity image of the scanned commodity is captured by the camera. It should be appreciated that when the barcode or two-dimensional code on the RFID tag is scanned, the camera approaches the scanned commodity from far to near; therefore, an image of the scanned commodity can be captured by the camera before the barcode or two-dimensional code is scanned.
Next, the commodity information is word-segmented and passed through a semantic encoder to obtain a plurality of semantic feature vectors, which are concatenated to obtain a commodity description feature vector. That is, the semantic encoder performs semantic understanding of the textual commodity information. In an embodiment of the present application, the context encoder is a Transformer-based BERT model: it performs global, context-aware semantic encoding of the word sequence produced by word segmentation to obtain the semantic feature vectors, which are then concatenated into the commodity description feature vector.
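The segment-encode-concatenate pipeline above can be sketched as follows. This is an illustrative stand-in only: a real implementation would use a trained BERT model for the per-token embeddings, whereas here each token is mapped to a small deterministic vector (via a hash) so the flow is runnable end to end; `EMBED_DIM`, `segment`, `token_vector` and `describe` are all assumed names, not from the patent.

```python
# Toy stand-in for the Transformer-based (BERT) context encoder:
# segment the commodity information, encode each token, concatenate.
import hashlib

EMBED_DIM = 4  # assumed toy dimension, not specified in the patent

def segment(text: str) -> list:
    """Word segmentation (here: whitespace split as a placeholder)."""
    return text.split()

def token_vector(token: str) -> list:
    """Deterministic pseudo-embedding for one token (stands in for BERT)."""
    digest = hashlib.sha256(token.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:EMBED_DIM]]

def describe(commodity_info: str) -> list:
    """Encode each token, then concatenate the per-token semantic
    vectors into a single commodity description feature vector."""
    vectors = [token_vector(tok) for tok in segment(commodity_info)]
    return [x for vec in vectors for x in vec]

feature = describe("cola 500ml bottle")
print(len(feature))  # 3 tokens x 4 dims = 12
```

The concatenation step mirrors the cascading of semantic feature vectors described above: the description vector's length grows with the number of tokens.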
Then, the commodity description feature vector is passed through an image generator based on a generative adversarial network (GAN) to obtain a commodity generation image. The image generator is obtained by training together with a deep convolutional neural network, as follows. First, a scanned commodity image is passed, as a reference image, through the deep convolutional neural network to obtain a reference feature map. Next, the reference feature map and the commodity description feature vector are input into the GAN to obtain a discriminator loss function value. The GAN is then trained with this discriminator loss value, i.e., its parameters are updated based on the discriminator loss; at the same time, the convolutional neural network may be further trained by back-propagating the gradient. This yields a trained GAN-based image generator.
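A hedged sketch of the discriminator loss value mentioned above. The patent does not fix the network architecture or the exact loss, so only standard binary cross-entropy arithmetic is shown: `real_scores` and `fake_scores` stand for the discriminator's outputs on reference images and generated images, and the function name is an assumption.

```python
# Discriminator loss for GAN training (standard BCE formulation):
# push scores on real/reference images toward 1, on generated images toward 0.
import math

def discriminator_loss(real_scores, fake_scores, eps=1e-7):
    """Mean binary cross-entropy over real and fake discriminator scores."""
    loss_real = -sum(math.log(max(s, eps)) for s in real_scores)
    loss_fake = -sum(math.log(max(1.0 - s, eps)) for s in fake_scores)
    return (loss_real + loss_fake) / (len(real_scores) + len(fake_scores))

# A near-perfect discriminator (real -> 1, fake -> 0) has near-zero loss;
# a maximally confused one (both -> 0.5) has loss ln(2) ~ 0.693.
print(discriminator_loss([0.999], [0.001]))
```

In training, this value would drive parameter updates of the GAN (and, via back-propagated gradients, of the convolutional network), as the paragraph above describes.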
Then, the commodity generation image and the commodity image of the scanned commodity are respectively passed through a convolutional neural network model serving as a feature extractor to obtain a generated-image feature matrix and a scanned-image feature matrix. Next, a differential feature matrix between the generated-image feature matrix and the scanned-image feature matrix is calculated and classified by a classifier to obtain a classification result, which indicates whether the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of the scanned commodity matches the scanned commodity.
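The difference-and-classify step can be sketched minimally. The CNN feature extractor is replaced here by pre-computed matrices, and the trained classifier by a simple norm threshold; both are stand-ins for components the patent leaves unspecified, and the threshold value is an arbitrary assumption.

```python
# Sketch of the "differential feature matrix + classifier" step.
import numpy as np

def difference_matrix(gen_feat: np.ndarray, scan_feat: np.ndarray) -> np.ndarray:
    """Element-wise difference between the two feature matrices."""
    return gen_feat - scan_feat

def classify(diff: np.ndarray, threshold: float = 1.0) -> bool:
    """True = commodity info matches the scanned commodity (small difference)."""
    return float(np.linalg.norm(diff)) < threshold

gen = np.array([[0.2, 0.4], [0.1, 0.3]])          # generated-image features
scan_same = np.array([[0.21, 0.39], [0.12, 0.31]])  # matching commodity
scan_other = np.array([[0.9, 0.0], [0.8, 0.9]])     # mismatched commodity
print(classify(difference_matrix(gen, scan_same)))   # small difference -> match
print(classify(difference_matrix(gen, scan_other)))  # large difference -> mismatch
```

A real system would replace the threshold with the trained classifier described above, but the decision structure is the same: the magnitude of the differential feature matrix drives the match/no-match result.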
However, since the commodity generation image is produced by the generator, the generated-image feature matrix obtained through the convolutional neural network serving as the feature extractor has a correspondingly deeper feature distribution. Therefore, to improve the expressive capability of the generated-image feature matrix, the data matrix of the commodity generation image, denoted M1, is preferably fused with the generated-image feature matrix, denoted M2, across their shallow and deep feature distributions, namely:

[fusion formula: present in the original document only as images]

where μ represents the mean value over all positions of the generated-image feature matrix, and N is the scale of the generated-image feature matrix.
In this way, with the deep feature M2 acting as an attention-guiding weight, a consistency attention mechanism over the sub-dimension distributions is applied to the shallow feature M1, performing volume matching between manifolds at different depths. The fused generated-image feature matrix thus remains jointly distributed over each sub-dimension of both the data matrix of the commodity generation image and the pre-fusion generated-image feature matrix, achieving high consistency across the sub-dimensions of the shallow and deep feature distributions and improving the expressive capability of the generated-image feature matrix. The accuracy of the classification result is thereby improved.
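Because the fusion formula survives in the source only as images, the following is a hedged illustration of the described idea rather than the patent's actual formula: deep features M2 act as attention-guiding weights over the shallow data matrix M1, normalized using the mean μ and scale N of M2. It is one plausible reading under stated assumptions.

```python
# Illustrative attention-guided shallow/deep fusion (NOT the patent's formula,
# which is preserved in the source only as images).
import numpy as np

def fuse(m1: np.ndarray, m2: np.ndarray) -> np.ndarray:
    mu = m2.mean()   # mean value over all positions of M2
    n = m2.size      # N: scale (number of positions) of M2
    weights = np.exp((m2 - mu) / n)
    weights /= weights.sum()       # softmax-style consistency weights
    return weights * m1 + m2       # deep features guide the shallow data matrix

m1 = np.ones((2, 2))               # shallow: data matrix of the generated image
m2 = np.array([[0.1, 0.2], [0.3, 0.4]])  # deep: generated-image feature matrix
fused = fuse(m1, m2)
print(fused.shape)  # (2, 2) — same shape as the inputs
```

Whatever the exact formula, the fused matrix keeps the shape of M2 and mixes in information from M1 at each position, which is the stated goal of the consistency attention mechanism.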
Based on the above, the present application provides an intelligent payment management system and method for goods, which encode the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of a scanned commodity through a Transformer-based context encoder to obtain a commodity description feature vector; obtain a commodity generation image through a GAN-based image generator; input the commodity image of the scanned commodity and the commodity generation image into a convolutional neural network model serving as a feature extractor to obtain a generated-image feature matrix and a scanned-image feature matrix; and, by comparing the difference between the two feature matrices, obtain a classification result indicating whether the commodity information matches the scanned commodity. In this way, the commodity information and the RFID tag are verified before payment, reducing the chance of a wrong payment.
Fig. 1 illustrates a scene diagram of an intelligent payment management method for goods according to an embodiment of the present application. As shown in Fig. 1, in an application scenario of the present application, commodity information (e.g., T in Fig. 1) obtained by scanning the barcode or two-dimensional code attached to the RFID tag of a scanned commodity (e.g., G1 in Fig. 1) with a camera (e.g., C1 in Fig. 1), together with a commodity image (e.g., F1 in Fig. 1) of the scanned commodity captured by the same camera, is first obtained. The commodity information and the commodity image are then input to a server (e.g., S in Fig. 1) on which an intelligent commodity payment management algorithm is deployed; the server processes them through the algorithm and outputs a classification result indicating whether the commodity information matches the scanned commodity.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
Fig. 2 illustrates a block diagram of an intelligent payment management system for items according to an embodiment of the application.
As shown in Fig. 2, an intelligent payment management system 100 for goods provided in the embodiment of the present application includes: a data acquisition module 110 for acquiring commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of a scanned commodity with a camera, and a commodity image of the scanned commodity captured by the camera; a commodity information description encoding module 120 for performing word segmentation on the commodity information, obtaining a plurality of semantic feature vectors through a Transformer-based context encoder, and concatenating them into a commodity description feature vector; a text-image understanding module 130 for passing the commodity description feature vector through a GAN-based image generator to obtain a commodity generation image; a convolutional encoding module 140 for passing the commodity generation image and the commodity image of the scanned commodity respectively through a convolutional neural network model serving as a feature extractor to obtain a generated-image feature matrix and a scanned-image feature matrix; a fusion module 150 for fusing the data matrix of the commodity generation image with the generated-image feature matrix to obtain an optimized generated-image feature matrix; a difference module 160 for calculating a differential feature matrix between the optimized generated-image feature matrix and the scanned-image feature matrix; and a management result generation module 170 for passing the differential feature matrix through a classifier to obtain a classification result indicating whether the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of the scanned commodity matches the scanned commodity.
Fig. 3 illustrates an architectural diagram of an intelligent payment management system for goods according to an embodiment of the application. As shown in Fig. 3, in this network architecture, the commodity information is first encoded by the Transformer-based context encoder to obtain the commodity description feature vector, which is then passed through the GAN-based image generator to obtain the commodity generation image. The commodity generation image and the commodity image of the scanned commodity are then passed respectively through a convolutional neural network model serving as a feature extractor to obtain a generated-image feature matrix and a scanned-image feature matrix. Next, the data matrix of the commodity generation image is fused with the generated-image feature matrix to obtain an optimized generated-image feature matrix, and a differential feature matrix between the optimized generated-image feature matrix and the scanned-image feature matrix is calculated. Finally, the differential feature matrix is passed through a classifier to obtain a classification result indicating whether the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of the scanned commodity matches the scanned commodity.
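The module wiring described above can be sketched as an end-to-end skeleton. Every learned component is replaced by a caller-supplied stub so the data flow is runnable; all stage implementations and parameter names below are placeholders, not the patent's models.

```python
# End-to-end skeleton of the pipeline: encode -> generate -> extract ->
# fuse -> difference -> classify. Learned components are injected as stubs.
from typing import Callable

def run_pipeline(
    commodity_info: str,
    scanned_image,
    encode: Callable, generate: Callable, extract: Callable,
    fuse: Callable, classify: Callable,
) -> bool:
    desc_vec = encode(commodity_info)        # context encoder
    gen_image = generate(desc_vec)           # GAN-based image generator
    gen_feat = extract(gen_image)            # CNN feature extractor (generated)
    scan_feat = extract(scanned_image)       # CNN feature extractor (scanned)
    opt_feat = fuse(gen_image, gen_feat)     # optimized generated features
    diff = [g - s for g, s in zip(opt_feat, scan_feat)]  # differential features
    return classify(diff)                    # True = info matches commodity

# Stub stages, purely for demonstrating the wiring:
result = run_pipeline(
    "cola 500ml", [1.0, 2.0, 3.0],
    encode=lambda t: [float(len(t))],
    generate=lambda v: [v[0], v[0] + 1, v[0] + 2],
    extract=lambda img: [x * 0.1 for x in img],
    fuse=lambda img, feat: feat,
    classify=lambda d: max(abs(x) for x in d) < 1.5,
)
print(result)
```

Injecting the stages as callables mirrors the modular structure of system 100: each numbered module (110–170) corresponds to one stage of `run_pipeline`.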
In this embodiment, the intelligent payment management system mainly serves the intelligent goods-management and payment needs of offline stores (supermarkets of all sizes, convenience stores, shopping malls, clothing stores, bookstores and other stores mainly selling goods, hereinafter collectively referred to as "stores"), warehouses and manufacturers. Through the cooperation of software and hardware technologies such as RFID, electric gates and intelligent cameras, it realizes fast, convenient, safe and intelligent management and payment for all kinds of goods, including clothes, foods, daily necessities, vehicles, machinery, electrical appliances, electronic products and furniture.
The following describes an application of the intelligent payment management system 100 for goods by way of example.
Store commodity entry. When a store receives goods, a mobile-phone camera or a barcode scanner scans the commodity's own barcode, or a store-issued barcode, to obtain the commodity information, and the selling price and quantity are filled in (if the store's commodity code already carries a selling price, the system obtains it automatically without manual entry). A mobile-phone NFC reader or an RFID reader-writer then reads and writes an RFID tag sticker to bind it to the commodity; once the sticker is attached to the commodity's surface, entry is complete. (When received goods already carry an RFID tag compatible with the system, the store reads the tag information via NFC or the RFID reader-writer, judges the authenticity of the goods from their circulation information, fills in basic information such as the selling price after verification, writes the information into the commodity's RFID tag, and completes warehousing.) If the store has its own commodity information management or inventory management software, the system can obtain the commodity information directly through an API interface, after which only the binding and attachment of the RFID tag remain.
Commodity purchase. After entering the store and selecting goods, the customer uses the system's App or WeChat official account to scan the barcode or two-dimensional code on the commodity's own label or its RFID tag sticker, adding the commodity to an electronic shopping cart (if the phone supports NFC, holding the commodity's RFID tag sticker near the NFC sensing area also adds it to the cart). After all selected goods have been added to the cart, payment and settlement are completed directly through the system's App or official account, and the customer can take the goods out of the store once payment succeeds.
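The cart flow above can be sketched with a tag-to-commodity lookup: scanning a tag adds its bound commodity to the cart, and settlement totals the prices. The data structures, field names and tag identifiers are illustrative assumptions, not details from the patent.

```python
# Sketch of the electronic shopping cart: scan tag -> add bound commodity
# -> settle the total at payment time.
catalog = {  # RFID tag id -> commodity record bound at entry (illustrative)
    "TAG-001": {"name": "cola 500ml", "price": 3.5},
    "TAG-002": {"name": "notebook", "price": 12.0},
}

def add_to_cart(cart: list, tag_id: str) -> None:
    """Add the commodity bound to a scanned tag to the electronic cart."""
    cart.append(catalog[tag_id])

def settle(cart: list) -> float:
    """Total amount due at payment settlement."""
    return sum(item["price"] for item in cart)

cart = []
add_to_cart(cart, "TAG-001")
add_to_cart(cart, "TAG-002")
print(settle(cart))  # 15.5
```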
(6) Security verification. When a customer leaves the store with commodities and passes the upright RFID tag reader-writer, the reader identifies the RFID tag stickers on the commodities by radio frequency and verifies in the system whether the corresponding commodities have been paid for. If payment is complete, the reader-writer changes the tag content on the commodity to a paid state and opens the gate to let the customer pass; if the system finds an unpaid commodity, the electric gate stays closed, an audible and visual alarm is triggered, and the system pushes early-warning information (by SMS, voice call, or App message) to designated personnel such as security staff and managers.
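The exit-gate decision described above can be sketched as a small function. This is an illustrative stand-in, not the patent's implementation; all names (`verify_exit`, the ledger and alert structures) are hypothetical.

```python
# Sketch of the exit-gate verification flow: a tag read at the gate is
# checked against a payment ledger; paid tags are rewritten to a paid
# state and the gate opens, unpaid tags trigger an alarm and a push
# notification to designated personnel.

def verify_exit(tag_id, payment_ledger, tag_store, alerts):
    """Return True (open gate) if the tag's commodity is paid."""
    if payment_ledger.get(tag_id) == "paid":
        tag_store[tag_id] = "PAID"                 # rewrite tag content to paid state
        return True                                # open the gate
    alerts.append(("alarm", tag_id))               # audible-visual alarm
    alerts.append(("push_sms_voice_app", tag_id))  # notify security/managers
    return False                                   # keep the electric gate closed

ledger = {"tag-001": "paid", "tag-002": "unpaid"}
tags, alerts = {}, []
print(verify_exit("tag-001", ledger, tags, alerts))  # True: paid, gate opens
print(verify_exit("tag-002", ledger, tags, alerts))  # False: alarm raised
```

A real deployment would replace the in-memory ledger with the system server's payment records and drive the physical gate and alarm hardware from the return value.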
Tear-resistant RFID tag sticker. The applicant has redesigned the RFID tag sticker structure so that it provides tear-prevention early warning: the RFID transceiver monitors the sticker's state in real time through a circuit linked to the sticker's modular opening-and-closing structure, and when an anomaly is detected the system issues an audible and visual alarm and pushes early-warning information to designated personnel. When the tag is torn, the sticker module pulls part of the circuit link and the internal conductive wire away from the main circuit, and the RFID chip automatically switches to an abnormal state. When the RFID transceiver module receives this abnormal state, the system judges that the tag has been torn, records the tearing position, issues an audible and visual alarm, pushes the early-warning information and the tearing position to designated personnel, and calls up intelligent monitoring to retrieve a picture of the torn tag's location and a face-recognition portrait of the person involved.
Strap-type tamper-evident RFID electronic tag (including active, passive, and semi-active types). The applicant has redesigned the RFID electronic tag structure into a cable-tie form with a conductive filament built into the tie. When the tie is damaged or broken, the filament breaks with it; the tag's built-in lithium battery is then activated, switches the RFID chip's state, and stores that state in the chip. A network module can also be added to the tag so that, when the tie is damaged or broken, the chip automatically uploads its state and the commodity's position to the system server, and the system reminds designated personnel via App push, SMS, or voice.
(5) Real-time monitoring and management of articles. Several RFID transceiver devices are installed in the store according to its actual area so that the store space has no blind spots. The transceivers continuously send and receive radio-frequency signals to the commodities bearing RFID electronic tag stickers, so that the placement and movement of every commodity bound to a tag can be tracked in real time.
(4) Article anti-theft early warning. After the store area is covered by the RFID transceivers' signal and the store range is defined within it, if the RFID positioning system detects that an unpaid commodity (or one not cleared for removal) has left the set store range, is about to leave it, or carries a tag chip whose content can no longer be read, the system automatically judges that the commodity is lost or about to be lost, issues an audible and visual alarm instruction, and pushes early-warning information to designated personnel (the system can also be configured to dial an alarm call automatically).
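The anti-theft check amounts to a geofence over the positioning system's coordinates. A minimal sketch, assuming a rectangular store range and a per-tag paid flag; the function and field names are illustrative, not from the patent.

```python
# Geofence check for the anti-theft early warning: an unpaid commodity
# outside the set store range, or one whose tag can no longer be read,
# raises a warning; paid commodities pass freely.

def theft_warning(tag, store_rect):
    """tag: dict with 'paid' flag and 'pos' = (x, y), or pos=None if unreadable."""
    x0, y0, x1, y1 = store_rect
    if tag["paid"]:
        return None
    pos = tag["pos"]
    if pos is None:                      # tag chip content cannot be received
        return "lost"
    x, y = pos
    if not (x0 <= x <= x1 and y0 <= y <= y1):
        return "out_of_range"            # beyond the set store range
    return None

store = (0.0, 0.0, 20.0, 15.0)           # store range in positioning coordinates
print(theft_warning({"paid": False, "pos": (25.0, 5.0)}, store))  # out_of_range
print(theft_warning({"paid": False, "pos": None}, store))          # lost
print(theft_warning({"paid": True, "pos": (25.0, 5.0)}, store))    # None
```

The "about to exceed" case in the text could be handled the same way with a slightly larger inner rectangle acting as a warning band.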
(5) Payment recovery for goods. Intelligent face-recognition cameras installed at the entrances and exits of stores and warehouses collect and store face-recognition information for every person entering or leaving. When goods are lost or damaged, a manager can select the forced payment-recovery function in the system; after the corresponding evidence is uploaded and submitted, the system automatically pushes the information to the relevant administrative authority. Once the authority approves, the identity is verified directly against the face-recognition data and the bank, and the payment for the lost or damaged goods is automatically and compulsorily transferred to the store, completing the recovery.
The system supports all stores and all warehouses simultaneously; the implementation and flow for warehouses are the same as for stores, except that payment is replaced by an outbound delivery application. When goods need to leave a warehouse, they are scanned with a mobile phone camera, or their RFID tag stickers are sensed at close range via mobile phone NFC; the system automatically adds them to a delivery list and the delivery application is then submitted. After approval, RFID reading-writing equipment at the warehouse exit identifies the goods and verifies their state in the system: if a commodity is in the approved state, the RFID tag reader changes its tag content to the delivered state and opens the gate for passage; if a commodity has not been approved or no application was made, the electric gate stays closed and the system pushes early-warning information (by SMS, voice, or App message) to designated personnel such as security staff and managers.
Article anti-counterfeiting. When a manufacturer finishes producing an article, an RFID tag sticker can be affixed to it, or a tamper-evident RFID tag hung on it, and commodity information (commodity name, category, production time, manufacturer name, and other basic information) is written into the RFID electronic tag. When a dealer or merchant purchases the goods, the system handles order payment and, once payment succeeds, automatically binds the batch to that dealer or merchant. On receipt, the dealer or merchant uses the system's mobile App with NFC sensing, or RFID reading-writing equipment compatible with the system, to read the commodity information in the commodity's RFID electronic tag, verifies it, writes its own information into the tag chip, and the information is stored automatically. If there are many circulation nodes, several pieces of information can be written in turn at each process node. When the goods finally reach the consumer, the consumer reads the circulation information in the commodity's RFID tag via mobile phone NFC, or queries it by scanning the two-dimensional code or barcode on the RFID electronic tag, and thereby distinguishes genuine goods from counterfeits. Each commodity's RFID tag is unique and cannot be copied or modified; information can only be written, never altered or deleted, and each piece of information is protected together with the RFID tag chip by a dedicated encryption scheme.
The manufacturer can view the circulation status of every commodity at every circulation node, and thereby manage sales channels and commodity security effectively (the commodity's selling price can also be monitored and controlled). To protect trade secrets, the manufacturer can set viewing permissions for each node's information as needed, for example so that consumers cannot see dealer information.
In the embodiment of the application, a key question in intelligent commodity management and payment is how to identify whether the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of a scanned commodity matches the scanned commodity itself. The intelligent item payment management system 100 according to the embodiment of the present application is described below by way of example; it is configured to output a classification result indicating whether the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of the scanned commodity matches the scanned commodity.
Specifically, the data acquisition module 110 is configured to acquire the commodity information obtained by scanning, with a camera, the barcode or two-dimensional code attached to the RFID tag of the scanned commodity, and a commodity image of the scanned commodity acquired by the camera. In the embodiment of the application, cameras with different shooting angles can be arranged in the store so as to scan the barcode or two-dimensional code attached to the RFID tag of the scanned commodity from multiple shooting angles and obtain the commodity information at each angle.
The commodity information description coding module 120 and the text-image understanding module 130 are configured to perform word segmentation processing on the commodity information, obtain a plurality of semantic feature vectors through a converter-based context encoder, and concatenate the semantic feature vectors to obtain a commodity description feature vector; and to pass the commodity description feature vector through an image generator based on a generative adversarial network to obtain a commodity generation image. That is, the commodity text description is semantically understood using a semantic encoder (in the present embodiment, a converter-based context encoder is used as the semantic encoder) to obtain the commodity description feature vector, which is then passed through the image generator based on the generative adversarial network to obtain the commodity generation image. The image generator based on the generative adversarial network is obtained through deep convolutional neural network training.
In an embodiment of the present application, the commodity information description coding module 120 includes:
the embedded vectorization unit is used for mapping the commodity information into an embedded vector after word segmentation processing is carried out on the commodity information by using an embedded layer of the context encoder so as to obtain a sequence of the embedded vector;
a context semantic association encoding unit, configured to perform global context-based semantic encoding on the sequence of embedded vectors using the converter of the context encoder to obtain a plurality of semantic feature vectors.
In an embodiment of the present application, the context encoder is a converter-based Bert model. It should be understood that the converter-based Bert model can perform global context-based semantic encoding on the word-segmented commodity information to obtain the plurality of semantic feature vectors, which are then concatenated to obtain the commodity description feature vector.
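The encoding path can be pictured schematically as follows. This numpy sketch substitutes a random mixing matrix for the converter-based Bert model, so it only shows the data flow (word segmentation, embedding, per-token semantic vectors, concatenation); the vocabulary, dimensions, and all variable names are illustrative assumptions.

```python
import numpy as np

# Schematic of the encoding path: word-segment the commodity information,
# map each token through an embedding layer, run a context encoder
# (here a trivial stand-in for a Transformer/Bert model) to get one
# semantic feature vector per token, then concatenate ("cascade") the
# vectors into a single commodity description feature vector.

rng = np.random.default_rng(0)
vocab = {"mineral": 0, "water": 1, "500ml": 2}
d = 4                                              # embedding dimension
embedding = rng.normal(size=(len(vocab), d))       # embedding layer
mix = rng.normal(size=(d, d))                      # stand-in for the converter

tokens = "mineral water 500ml".split()             # word segmentation
embedded = embedding[[vocab[t] for t in tokens]]   # sequence of embedded vectors, (3, d)
semantic = np.tanh(embedded @ mix)                 # per-token semantic feature vectors
description_vec = semantic.reshape(-1)             # cascade into one vector
print(description_vec.shape)                       # (12,)
```

In a real system each row of `semantic` would come from the Bert model's contextualized outputs, so that each token's vector depends on the whole description rather than on that token alone.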
The specific training process of the image generator based on the generative adversarial network is as follows. First, a scanned commodity image is passed, as a reference image, through the deep convolutional neural network to obtain a reference feature map. Then, the reference feature map and the commodity description feature vector are input into the generative adversarial network to obtain a discriminator loss function value, and the network is trained with that value: the parameters of the generative adversarial network are updated based on the discriminator loss function value, and the convolutional neural network may be further trained by back-propagation of the gradient, yielding the trained image generator based on the generative adversarial network.
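One such training step can be sketched with toy linear stand-ins for the generator and discriminator. This is a minimal illustration of the described loss computation and update under simplifying assumptions (flattened features, linear models, binary cross-entropy), not the patent's deep networks.

```python
import numpy as np

# Toy sketch of one adversarial training step: the generator maps the
# commodity description feature vector to pseudo-image features; a linear
# discriminator scores the reference feature map (real) and the generated
# features (fake); their binary cross-entropy is the discriminator loss
# value used to update the discriminator parameters.

rng = np.random.default_rng(1)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

desc_vec = rng.normal(size=8)            # commodity description feature vector
G = rng.normal(size=(16, 8)) * 0.1       # toy generator weights
D = rng.normal(size=16) * 0.1            # toy discriminator weights
real_feat = rng.normal(size=16)          # reference feature map (flattened)

fake_feat = np.tanh(G @ desc_vec)        # generated pseudo-image features
p_real, p_fake = sigmoid(D @ real_feat), sigmoid(D @ fake_feat)
d_loss = -(np.log(p_real) + np.log(1.0 - p_fake))   # discriminator loss value

# one gradient step on the discriminator (exact BCE derivative w.r.t. D)
grad_D = -(1.0 - p_real) * real_feat + p_fake * fake_feat
D -= 0.1 * grad_D
print(float(d_loss) > 0.0)               # a positive loss to train against
```

The generator update (and the back-propagation into the convolutional feature extractor mentioned in the text) would follow the same pattern with the generator's own loss.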
The convolution coding module 140 is configured to pass the commodity generation image and the commodity image of the scanned commodity respectively through a convolutional neural network model serving as a feature extractor to obtain a generated image feature matrix and a scanned image feature matrix. That is, the convolutional neural network model serving as the feature extractor performs feature extraction on the commodity generation image and on the commodity image of the scanned commodity. A differential feature matrix between the generated image feature matrix and the scanned image feature matrix is then calculated and classified by a classifier to obtain a classification result, which can be used to indicate whether the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of the scanned commodity matches the scanned commodity.
The convolutional coding module 140 is further configured to use the layers of the convolutional neural network to perform, in forward pass, convolution processing based on convolution kernels, pooling processing, and nonlinear activation processing on the input data, so that the last layer of the convolutional neural network outputs the generated image feature matrix or the scanned image feature matrix, the input of the first layer being the commodity generation image or the commodity image of the scanned commodity. In this way, the convolutional neural network extracts the high-dimensional local features of the generation image (the generated image feature matrix) and of the scanned commodity's image (the scanned image feature matrix).
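A single layer of the forward pass just described (convolution, pooling, activation) can be shown concretely. This is a generic numpy sketch with an arbitrary 2x2 kernel and input size; it illustrates the operations, not the patent's actual network architecture.

```python
import numpy as np

# One convolutional layer's forward pass as described in the text:
# convolution with a small kernel, 2x2 max pooling, then ReLU nonlinear
# activation, producing a feature matrix from an input image.

def conv2d(img, kernel):
    """Valid (no-padding) 2D cross-correlation."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2(x):
    """2x2 max pooling (truncating odd edges)."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)    # stand-in commodity image
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])      # toy convolution kernel
feat = np.maximum(maxpool2(conv2d(img, kernel)), 0.0)  # conv -> pool -> ReLU
print(feat.shape)                                  # (2, 2)
```

Stacking several such layers, with learned kernels, yields the deep feature matrices the module outputs from its last layer.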
However, the commodity generation image is a pseudo-image produced by the convolutional neural network acting as generator, and the generated image feature matrix obtained through the convolutional neural network model serving as the feature extractor accordingly carries a deeper feature distribution. Therefore, to improve the expressive capacity of the generated image feature matrix, the data matrix of the commodity generation image, denoted for example M1, is preferably fused with the generated image feature matrix, denoted for example M2, to obtain an optimized generated image feature matrix. A differential feature matrix between the optimized generated image feature matrix and the scanned image feature matrix is then calculated, and finally the differential feature matrix is passed through a classifier to obtain the classification result indicating whether the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of the scanned commodity matches the scanned commodity, which is favorable for improving the accuracy of the classification result.
Specifically, the fusion module 150 and the difference module 160 are configured to fuse the data matrix of the commodity generated image and the generated image feature matrix to obtain an optimized generated image feature matrix; and calculating a difference characteristic matrix between the optimized generation image characteristic matrix and the scanned image characteristic matrix.
In this embodiment, the fusion module 150 is further configured to: perform shallow-deep feature distribution fusion on the data matrix of the commodity generation image and the generated image feature matrix to obtain the optimized generated image feature matrix according to the following formula:

M2′ = M2 ⊕ M1 ⊗ exp((M2 − μ) / N)

wherein M2′ is the optimized generated image feature matrix, M1 is the data matrix of the commodity generation image, M2 is the generated image feature matrix, μ represents the mean of the values at the respective positions of the generated image feature matrix, N is the scale (number of positions) of the generated image feature matrix, ⊗ indicates position-wise multiplication, ⊕ indicates position-wise addition, and exp(·) indicates the exponential operation on a feature matrix, which means calculating the natural exponential function value raised to the power of the feature value at each position in the matrix.
Thus, with the deep-level feature M2 as the attention-guiding weight, a consistency attention mechanism over the sub-dimension distributions is applied to the shallow feature M1 to perform manifold matching across the depth difference, so that the fused generated image feature matrix remains jointly distributed over each sub-dimension of the pre-fusion data matrix of the commodity generation image and of the generated image feature matrix. High consistency is thereby achieved across the sub-dimensions of the shallow-deep feature distribution, improving the expressive capacity of the generated image feature matrix and, in turn, the accuracy of the classification result.
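The fusion step can be computed in a few lines, under the assumption (the formula itself appears only as an image in the source) that it takes the form M2′ = M2 + M1 · exp((M2 − μ) / N) with μ the position mean of M2, N its scale, and multiplication and addition acting position-wise. The matrices below are illustrative toy values.

```python
import numpy as np

# Sketch of the shallow-deep feature fusion, assuming the form
# M2' = M2 + M1 * exp((M2 - mu) / N); the deep feature M2 supplies the
# attention-like weight exp((M2 - mu) / N) applied to the shallow M1.

def fuse(m1, m2):
    mu = m2.mean()    # mean of the values at all positions of M2
    n = m2.size       # scale of the generated image feature matrix
    return m2 + m1 * np.exp((m2 - mu) / n)

m1 = np.array([[0.2, 0.4], [0.6, 0.8]])   # data matrix of generation image
m2 = np.array([[1.0, 2.0], [3.0, 4.0]])   # generated image feature matrix
m2_opt = fuse(m1, m2)                     # optimized generated feature matrix

scan_feat = np.ones((2, 2))               # stand-in scanned image feature matrix
fd = m2_opt - scan_feat                   # difference module: Fd = F1 - F2, position-wise
print(m2_opt.shape)                       # (2, 2)
```

The last line also shows the subsequent difference-module step, which is a plain position-wise subtraction between the optimized generated image feature matrix and the scanned image feature matrix.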
The difference module 160 is configured to calculate the difference feature matrix between the optimized generated image feature matrix and the scanned image feature matrix according to the following formula:

Fd = F1 ⊖ F2

wherein Fd is the difference feature matrix, F1 is the optimized generated image feature matrix, F2 is the scanned image feature matrix, and ⊖ indicates position-wise subtraction.
The management result generating module 170 is configured to pass the differential feature matrix through a classifier to obtain a classification result, where the classification result is used to indicate whether the commodity information obtained by scanning the barcode or the two-dimensional code attached to the RFID tag of the scanned commodity matches the scanned commodity.
In some embodiments of the present application, the classification process of the management result generating module 170 includes: fully connecting the differential feature matrix using a plurality of fully connected layers of the classifier to convert the differential feature matrix into a classification feature vector; inputting the classification feature vector into a Softmax classification function to obtain the probability values that the scanned commodity information does and does not match the scanned commodity; and outputting the classification result that the scanned commodity information matches the scanned commodity if the former probability value is greater than or equal to the latter, and the classification result that it does not match otherwise.
The management result generating module 170 is further configured to:
processing the differential feature matrix using the classifier to generate the classification result according to the following formula: softmax{(Wn, Bn) : ⋯ : (W1, B1) | Project(F)}, where Project(F) represents projecting the difference feature matrix into a vector, W1 to Wn are the weight matrices of the respective fully connected layers, and B1 to Bn are the bias matrices of the respective fully connected layers.
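The classifier head described by this formula can be sketched directly: project the matrix to a vector, apply stacked fully connected layers, and take the Softmax. Layer sizes, activations, and names below are illustrative assumptions.

```python
import numpy as np

# Sketch of the classifier: Project(F) flattens the difference feature
# matrix to a vector, which passes through fully connected layers
# (W_i, B_i) and then Softmax, giving match / no-match probabilities.

def softmax(z):
    e = np.exp(z - z.max())        # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(2)
F = rng.normal(size=(4, 4))        # difference feature matrix
v = F.reshape(-1)                  # Project(F): matrix -> vector

layers = [(rng.normal(size=(8, 16)) * 0.1, np.zeros(8)),   # (W1, B1)
          (rng.normal(size=(2, 8)) * 0.1, np.zeros(2))]    # (W2, B2)
for W, B in layers:
    v = np.tanh(W @ v + B)         # one fully connected layer

probs = softmax(v)                 # [P(match), P(no match)]
label = "match" if probs[0] >= probs[1] else "no match"
print(round(float(probs.sum()), 6))
```

The decision rule in the preceding paragraph corresponds to the final comparison of the two probability values.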
In summary, the intelligent item payment management system of the embodiment of the present application has been illustrated. It encodes the commodity information obtained by scanning the barcode or two-dimensional code attached to the RFID tag of the scanned commodity through a converter-based context encoder to obtain a commodity description feature vector, and passes that vector through an image generator based on a generative adversarial network to obtain a commodity generation image. The commodity generation image and the commodity image of the scanned commodity are then respectively encoded by a convolutional neural network model serving as a feature extractor to obtain a generated image feature matrix and a scanned image feature matrix. The data matrix of the commodity generation image is fused with the generated image feature matrix to obtain an optimized generated image feature matrix, the difference feature matrix between the optimized generated image feature matrix and the scanned image feature matrix is calculated, and the difference feature matrix is passed through a classifier to obtain the classification result indicating whether the commodity information matches the scanned commodity, thereby improving the accuracy of the classification result.
As described above, the intelligent payment management system 100 for goods according to the embodiment of the present application may be implemented in various terminal devices, such as a server for intelligent payment management for goods. In one example, the intelligent payment management system 100 for goods according to the embodiment of the present application may be integrated into a terminal device as one software module and/or hardware module. For example, the intelligent payment management system 100 for goods may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the intelligent payment management system 100 for goods can also be one of many hardware modules of the terminal device.
Alternatively, in another example, the intelligent payment management system 100 for goods and the terminal device may be separate devices, connected through a wired and/or wireless network and transmitting interactive information in an agreed data format.
Exemplary method
Fig. 4 illustrates a flow chart of an item intelligent payment management method according to an embodiment of the application. As shown in fig. 4, the intelligent payment management method for goods according to the embodiment of the application includes:
S101, acquiring commodity information obtained by scanning, with a camera, a barcode or a two-dimensional code attached to an RFID tag of a scanned commodity, and a commodity image of the scanned commodity acquired by the camera;
S102, performing word segmentation processing on the commodity information, obtaining a plurality of semantic feature vectors through a converter-based context encoder, and concatenating the semantic feature vectors to obtain a commodity description feature vector;
S103, passing the commodity description feature vector through an image generator based on a generative adversarial network to obtain a commodity generation image;
S104, passing the commodity generation image and the commodity image of the scanned commodity respectively through a convolutional neural network model serving as a feature extractor to obtain a generated image feature matrix and a scanned image feature matrix;
S105, fusing the data matrix of the commodity generation image and the generated image feature matrix to obtain an optimized generated image feature matrix;
S106, calculating a difference feature matrix between the optimized generated image feature matrix and the scanned image feature matrix; and
S107, passing the difference feature matrix through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the commodity information obtained by scanning the barcode or the two-dimensional code attached to the RFID tag of the scanned commodity matches the scanned commodity.
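The steps above can be orchestrated as a single pipeline function. Every component below is a trivial stub standing in for the corresponding module (encoder, generator, feature extractor, fusion, classifier); all names and the toy computations are illustrative only.

```python
# End-to-end orchestration of steps S101-S107 with stubbed components.

def pipeline(commodity_info, scanned_image,
             encode, generate, extract, fuse, classify):
    desc_vec = encode(commodity_info)            # S102: context encoding
    gen_img = generate(desc_vec)                 # S103: image generation
    gen_feat = extract(gen_img)                  # S104: feature extraction
    scan_feat = extract(scanned_image)
    opt_feat = fuse(gen_img, gen_feat)           # S105: shallow-deep fusion
    diff = [a - b for a, b in zip(opt_feat, scan_feat)]   # S106: difference
    return classify(diff)                        # S107: match / no match

result = pipeline(
    "mineral water 500ml", [0.5, 0.5],
    encode=lambda s: [float(len(s))],
    generate=lambda v: [v[0] / 100.0] * 2,
    extract=lambda img: [x * 2.0 for x in img],
    fuse=lambda img, feat: [i + f for i, f in zip(img, feat)],
    classify=lambda d: "match" if sum(abs(x) for x in d) < 1.0 else "no match",
)
print(result)
```

Swapping the stubs for the real modules (S101's camera acquisition feeding `commodity_info` and `scanned_image`) preserves the same control flow.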
In one possible implementation manner, in the method for intelligent payment management of an article, the obtaining a plurality of semantic feature vectors by a context encoder based on a converter after performing word segmentation processing on the commodity information includes:
using an embedding layer of the context encoder to map the commodity information into an embedding vector after word segmentation processing so as to obtain a sequence of the embedding vector;
performing global context-based semantic encoding on the sequence of embedded vectors using the converter of the context encoder to obtain a plurality of semantic feature vectors.
In one possible implementation manner of the intelligent item payment management method, passing the commodity generation image and the commodity image of the scanned commodity respectively through the convolutional neural network model serving as the feature extractor to obtain the generated image feature matrix and the scanned image feature matrix includes: using the layers of the convolutional neural network to perform, in forward pass, convolution processing based on convolution kernels, pooling processing, and nonlinear activation processing on the input data, so that the last layer of the convolutional neural network outputs the generated image feature matrix or the scanned image feature matrix, the input of the first layer being the commodity generation image or the commodity image of the scanned commodity.
In one possible implementation manner of the intelligent item payment management method, shallow-deep feature distribution fusion is performed on the data matrix of the commodity generation image and the generated image feature matrix to obtain the optimized generated image feature matrix according to the following formula:

M2′ = M2 ⊕ M1 ⊗ exp((M2 − μ) / N)

wherein M2′ is the optimized generated image feature matrix, M1 is the data matrix of the commodity generation image, M2 is the generated image feature matrix, μ represents the mean of the values at the respective positions of the generated image feature matrix, N is the scale of the generated image feature matrix, ⊗ indicates position-wise multiplication, ⊕ indicates position-wise addition, and exp(·) indicates the exponential operation on the feature matrix, which means calculating the natural exponential function value raised to the power of the feature value at each position in the matrix.
In one possible implementation manner of the intelligent item payment management method, the difference feature matrix between the optimized generated image feature matrix and the scanned image feature matrix is calculated according to the following formula:

Fd = F1 ⊖ F2

wherein Fd is the difference feature matrix, F1 is the optimized generated image feature matrix, F2 is the scanned image feature matrix, and ⊖ indicates position-wise subtraction.
In one possible implementation manner, in the method for intelligent payment management of an item, the passing the differential feature matrix through a classifier to obtain a classification result includes:
processing the differential feature matrix using the classifier to generate the classification result according to the following formula: softmax{(Wn, Bn) : ⋯ : (W1, B1) | Project(F)}, where Project(F) represents projecting the difference feature matrix into a vector, W1 to Wn are the weight matrices of the respective fully connected layers, and B1 to Bn are the bias matrices of the respective fully connected layers.
Here, it will be understood by those skilled in the art that the specific functions and steps in the above-described item intelligent payment management method have been described in detail in the above description of the item intelligent payment management system with reference to fig. 2 to 3, and thus, a repetitive description thereof will be omitted.
It is to be understood that some or all of the steps or operations in the above-described embodiments are merely examples, and other operations or variations of the various operations may be performed in embodiments of the present application. Further, the various steps may be performed in a different order from that presented in the above-described embodiments, and it is possible that not all of the operations in the above-described embodiments are performed.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 5.
FIG. 5 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 5, the electronic device 10 includes one or more processors 11 and memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 11 to implement the intelligent item payment management methods of the various embodiments of the present application described above and/or other desired functions. Various contents such as parameters may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 13 may include, for example, a keyboard, a mouse, and the like.
The output device 14 can output various information including classification results or warning prompts to the outside. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 5, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the intelligent payment management method for items according to various embodiments of the present application described in the "exemplary methods" section of this specification, supra.
The computer program product may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages, for carrying out operations according to embodiments of the present application. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the intelligent payment management method for items according to various embodiments of the present application described in the "exemplary methods" section above of this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, "such as but not limited to."
It should also be noted that in the devices, apparatuses, and methods of the present application, each component or step can be decomposed and/or re-combined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. An intelligent payment management system for goods, comprising:
the data acquisition module is used for acquiring commodity information obtained by scanning, with a camera, a bar code or two-dimensional code attached to an RFID tag of a scanned commodity, and a commodity image of the scanned commodity captured by the camera;
the commodity information description coding module is used for obtaining a plurality of semantic feature vectors through a context coder based on a converter after word segmentation processing is carried out on the commodity information, and cascading the semantic feature vectors to obtain a commodity description feature vector;
the text-image understanding module is used for passing the commodity description feature vector through an image generator based on a generative adversarial network to obtain a commodity generation image;
the convolution coding module is used for enabling the commodity generation image and the commodity image of the scanned commodity to pass through a convolution neural network model serving as a feature extractor respectively so as to obtain a generated image feature matrix and a scanned image feature matrix;
the fusion module is used for fusing the data matrix of the commodity generation image and the generated image characteristic matrix to obtain an optimized generated image characteristic matrix;
the difference module is used for calculating a difference characteristic matrix between the optimized generated image characteristic matrix and the scanned image characteristic matrix; and
and the management result generating module is used for passing the differential feature matrix through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the commodity information obtained by scanning the bar code or two-dimensional code attached to the RFID tag of the scanned commodity matches the scanned commodity.
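As an illustrative, non-limiting sketch, the claim-1 pipeline (description encoding → image generation → feature extraction → difference → classification) can be outlined with toy stand-ins. Every function below (`encode_description`, `generate_image`, `extract_features`, `classify`) is a hypothetical placeholder, not the converter-based encoder, the adversarial image generator, or the convolutional network recited in the claims:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_description(text):
    # Toy stand-in for the converter-based context encoder: hash each
    # token into a fixed-size vector and cascade (concatenate) them.
    vecs = [np.full(4, (hash(tok) % 100) / 100.0) for tok in text.split()]
    return np.concatenate(vecs)

def generate_image(desc_vec, size=8):
    # Toy stand-in for the image generator: a deterministic projection
    # of the description vector into an image-shaped array.
    g = np.outer(desc_vec[:size], desc_vec[:size])
    return g / (np.abs(g).max() + 1e-9)

def extract_features(img):
    # Toy stand-in for the CNN feature extractor: 2x2 mean pooling.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def classify(diff):
    # Project the difference matrix to a vector, apply one (random,
    # untrained) dense layer, and softmax into match / mismatch scores.
    v = diff.reshape(-1)
    W = rng.standard_normal((2, v.size))
    logits = W @ v
    p = np.exp(logits - logits.max())
    return p / p.sum()

desc = encode_description("cola 330ml can")
generated = generate_image(desc)
scanned = generated + 0.01 * rng.standard_normal(generated.shape)
diff = extract_features(generated) - extract_features(scanned)
probs = classify(diff)
print(probs.shape, float(probs.sum()))
```

The point of the sketch is the data flow only: text becomes a cascaded description vector, the two images become equal-shaped feature matrices, and their position-wise difference is what the classifier sees.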
2. The intelligent payment management system for goods as claimed in claim 1, wherein the goods information description coding module comprises:
the embedded vectorization unit is used for mapping the commodity information into embedded vectors after word segmentation processing is carried out on the commodity information by using an embedded layer of the context encoder so as to obtain a sequence of the embedded vectors;
a context semantic association encoding unit for performing context-based global semantic encoding on the sequence of embedded vectors using the converter of the context encoder to obtain a plurality of semantic feature vectors.
3. The intelligent payment management system for items of claim 2, wherein the convolutional encoding module is further configured to: use the layers of the convolutional neural network to respectively perform convolution processing based on convolution kernels, pooling processing, and nonlinear activation processing on input data during forward passes through the layers, so as to output the generated image feature matrix or the scanned image feature matrix from the last layer of the convolutional neural network, wherein the input to the first layer of the convolutional neural network is the commodity generation image or the commodity image of the scanned commodity.
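A single convolution → activation → pooling stage of the kind recited in claim 3 can be sketched in NumPy. The kernel, input size, and pooling factor below are illustrative assumptions, not the patented network:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(img, kernel):
    # Valid-mode 2D convolution (really cross-correlation, as in most
    # CNN frameworks) of a single-channel image with one kernel.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # Nonlinear activation processing.
    return np.maximum(x, 0.0)

def max_pool(x, k=2):
    # k x k max pooling (input dims assumed divisible by k).
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

img = rng.standard_normal((9, 9))     # toy single-channel input
kernel = rng.standard_normal((2, 2))  # toy 2x2 convolution kernel
feat = max_pool(relu(conv2d(img, kernel)))
print(feat.shape)
```

Stacking several such stages and taking the last stage's output is what yields the generated-image and scanned-image feature matrices.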
4. The intelligent payment management system for items of claim 3, wherein the fusion module is further configured to: perform fusion of shallow and deep feature distributions on the data matrix of the commodity generation image and the generated image feature matrix according to the following formula to obtain the optimized generated image feature matrix:
[formula, reproduced in the original filing as image FDA0003810319530000021]
wherein M2′ is the optimized generated image feature matrix, M1 is the data matrix of the commodity generation image, M2 is the generated image feature matrix, the mean symbol represents the average of the values at the respective positions of the generated image feature matrix, N is the scale of the generated image feature matrix, ⊙ indicates position-wise dot product, ⊕ indicates addition by position, and exp(·) indicates the exponential operation on the feature matrix, i.e. calculating a natural exponential function value raised to the power of the feature value at each position in the matrix.
5. The intelligent item payment management system of claim 4, wherein the difference module is configured to calculate the difference feature matrix between the optimized generated image feature matrix and the scanned image feature matrix according to the following formula:
Fd = F1 ⊖ F2
wherein Fd is the difference feature matrix, F1 is the optimized generated image feature matrix, F2 is the scanned image feature matrix, and ⊖ indicates difference by position.
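The position-wise difference of claim 5 is a single element-wise operation on two equal-shaped matrices; the 3×3 shapes and values below are purely illustrative:

```python
import numpy as np

# F1: optimized generated-image feature matrix; F2: scanned-image
# feature matrix (illustrative 3x3 values, with F2 = F1 / 2).
F1 = np.array([[1.0, 2.0, 3.0],
               [4.0, 5.0, 6.0],
               [7.0, 8.0, 9.0]])
F2 = F1 / 2.0

# Position-wise difference Fd = F1 ⊖ F2.
Fd = F1 - F2
print(Fd)
```

A near-zero Fd indicates that the generated and scanned feature distributions agree, which is what the downstream classifier evaluates.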
6. The intelligent payment management system for goods as claimed in claim 5, wherein the management result generating module is further configured to:
process the differential feature matrix with the classifier according to the following formula to generate the classification result, wherein the formula is: softmax{(Wn, Bn) : … : (W1, B1) | Project(F)}, where Project(F) represents projecting the differential feature matrix as a vector, W1 to Wn are the weight matrices of the fully connected layers of each layer, and B1 to Bn are the bias matrices of the fully connected layers of each layer.
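The claim-6 expression softmax{(Wn, Bn) : … : (W1, B1) | Project(F)} amounts to flattening the difference matrix and passing it through stacked fully connected layers ending in a softmax. The layer sizes and (untrained) random weights below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax(z):
    # Numerically stable softmax over a logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

# Project(F): flatten the 4x4 difference feature matrix into a vector.
F = rng.standard_normal((4, 4))
v = F.reshape(-1)

# Two illustrative fully connected layers (W_i, B_i).
W1, B1 = rng.standard_normal((8, 16)), rng.standard_normal(8)
W2, B2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

h = np.tanh(W1 @ v + B1)
probs = softmax(W2 @ h + B2)
label = int(np.argmax(probs))  # e.g. 0: match, 1: mismatch (illustrative)
print(probs, label)
```

In a trained system the weights would be learned so that the two output probabilities indicate whether the scanned commodity matches its tag information.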
7. An intelligent payment management method for goods, which is characterized by comprising the following steps:
acquiring commodity information obtained by scanning, with a camera, a bar code or two-dimensional code attached to an RFID tag of a scanned commodity, and a commodity image of the scanned commodity captured by the camera;
after word segmentation processing is carried out on the commodity information, a plurality of semantic feature vectors are obtained through a context encoder based on a converter, and the semantic feature vectors are cascaded to obtain commodity description feature vectors;
passing the commodity description feature vector through an image generator based on a generative adversarial network to obtain a commodity generation image;
respectively enabling the commodity generation image and the commodity image of the scanned commodity to pass through a convolutional neural network model serving as a feature extractor to obtain a generated image feature matrix and a scanned image feature matrix;
fusing the data matrix of the commodity generation image and the generated image characteristic matrix to obtain an optimized generated image characteristic matrix;
calculating a difference characteristic matrix between the optimized generated image characteristic matrix and the scanned image characteristic matrix; and
and passing the differential feature matrix through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the commodity information obtained by scanning the bar code or two-dimensional code attached to the RFID tag of the scanned commodity matches the scanned commodity.
8. The intelligent payment management method for goods as claimed in claim 7, wherein the obtaining of the plurality of semantic feature vectors by the context encoder based on the converter after the word segmentation processing on the goods information comprises:
using an embedding layer of the context encoder to map the commodity information, after word segmentation processing, into embedding vectors so as to obtain a sequence of embedding vectors;
performing context-based global semantic encoding on the sequence of embedding vectors using the converter of the context encoder to obtain a plurality of semantic feature vectors.
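The claim-8 encoding path (word segmentation → embedding-layer lookup → context encoding) can be sketched with a toy vocabulary. The converter is replaced here by a simple mean-mixing step, purely to illustrate that each output vector carries global context; the vocabulary, dimensions, and mixing rule are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy vocabulary and embedding layer (illustrative sizes).
vocab = {"cola": 0, "330ml": 1, "can": 2}
embedding = rng.standard_normal((len(vocab), 4))

def embed(text):
    # Word-segment (here: whitespace split) and look up embeddings.
    return np.stack([embedding[vocab[tok]] for tok in text.split()])

def context_encode(seq):
    # Stand-in for the converter: mix each vector with the sequence
    # mean so every output reflects global context.
    return (seq + seq.mean(axis=0)) / 2.0

seq = embed("cola 330ml can")
semantic = context_encode(seq)         # plurality of semantic vectors
desc_vec = semantic.reshape(-1)        # cascade (concatenate) them
print(seq.shape, desc_vec.shape)
```

The cascaded vector is the commodity description feature vector that feeds the image generator in the subsequent step.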
9. The intelligent payment management method for goods as claimed in claim 8, comprising: performing fusion of shallow and deep feature distributions on the data matrix of the commodity generation image and the generated image feature matrix according to the following formula to obtain the optimized generated image feature matrix:
[formula, reproduced in the original filing as image FDA0003810319530000041]
wherein M2′ is the optimized generated image feature matrix, M1 is the data matrix of the commodity generation image, M2 is the generated image feature matrix, the mean symbol represents the average of the values at the respective positions of the generated image feature matrix, N is the scale of the generated image feature matrix, ⊙ indicates position-wise dot product, ⊕ indicates addition by position, and exp(·) indicates the exponential operation on the feature matrix, i.e. calculating a natural exponential function value raised to the power of the feature value at each position in the matrix.
10. The intelligent payment management method for goods as claimed in claim 9, wherein the difference feature matrix between the optimized generated image feature matrix and the scanned image feature matrix is calculated according to the following formula:
Fd = F1 ⊖ F2
wherein Fd is the difference feature matrix, F1 is the optimized generated image feature matrix, F2 is the scanned image feature matrix, and ⊖ indicates difference by position.
CN202211017115.2A 2022-08-23 2022-08-23 Intelligent payment management method and system for goods and method thereof Pending CN115330381A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211017115.2A CN115330381A (en) 2022-08-23 2022-08-23 Intelligent payment management method and system for goods and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211017115.2A CN115330381A (en) 2022-08-23 2022-08-23 Intelligent payment management method and system for goods and method thereof

Publications (1)

Publication Number Publication Date
CN115330381A true CN115330381A (en) 2022-11-11

Family

ID=83925566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211017115.2A Pending CN115330381A (en) 2022-08-23 2022-08-23 Intelligent payment management method and system for goods and method thereof

Country Status (1)

Country Link
CN (1) CN115330381A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116111906A (en) * 2022-11-17 2023-05-12 浙江精盾科技股份有限公司 Special motor with hydraulic brake for turning and milling and control method thereof
CN116307446A (en) * 2022-12-05 2023-06-23 浙江型相网络科技有限公司 Clothing supply chain management system
CN116307446B (en) * 2022-12-05 2023-10-27 浙江型相网络科技有限公司 Clothing supply chain management system

Similar Documents

Publication Publication Date Title
US11657241B2 (en) Authentication systems and methods
US20200364817A1 (en) Machine type communication system or device for recording supply chain information on a distributed ledger in a peer to peer network
CN115330381A (en) Intelligent payment management method and system for goods and method thereof
Want Near field communication
CN103198337B (en) Coding information reading terminals with article positioning function
CN107016783A (en) Self-service vending method and device
RU2670800C9 (en) Inter-system data interaction platform based on data tags and application method thereof
CN105190663A (en) System for authenticating items
US20180268175A1 (en) Method and arrangement for providing and managing information linked to rfid data storage media in a network
WO2007134378A1 (en) A receipt storage system
CN103136682A (en) System using cover type label to achieve counterfeiting and method
US7649460B2 (en) Clip chip
CA2940398A1 (en) Systems and methods for customer deactivation of security elements
CN106652236A (en) Locker selling method and system
CN110210866A (en) Commodity purchasing method and device based on recognition of face
WO2019096200A1 (en) Electronic label-based self-service vending method and device
CN112543933A (en) Information processing system, information code generation system, information processing method, and information code generation method
Iqbal et al. NFC based inventory control system for secure and efficient communication
Gupte et al. Automated shopping cart using rfid with a collaborative clustering driven recommendation system
CN112288420A (en) Information processing method, device, system and computer readable storage medium
US20140164175A1 (en) Shopping cart list
US11651329B2 (en) Machine readable technologies for the smart shipping of multiple products
US20170186076A1 (en) Product tracking and management using image recognition
KR20210103122A (en) Blockchain based trading system and Method thereof
KR20040052278A (en) System and Method for Confirming Goods by Using Unique Identification Code

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination