CN109508974B - Shopping checkout system and method based on feature fusion - Google Patents

Shopping checkout system and method based on feature fusion

Info

Publication number
CN109508974B
CN109508974B (application CN201811440047.4A)
Authority
CN
China
Prior art keywords
commodity
module
weight distribution
identification
cloud server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811440047.4A
Other languages
Chinese (zh)
Other versions
CN109508974A (en)
Inventor
雷嘉宝
陈泳璇
姚若河
李泽威
林中卡
陈敏
叶长青
余卫宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201811440047.4A priority Critical patent/CN109508974B/en
Publication of CN109508974A publication Critical patent/CN109508974A/en
Application granted granted Critical
Publication of CN109508974B publication Critical patent/CN109508974B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/08Payment architectures
    • G06Q20/20Point-of-sale [POS] network systems
    • G06Q20/208Input by product or record sensing, e.g. weighing or scanner processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/08Payment architectures
    • G06Q20/16Payments settled via telecommunication systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • G06Q20/40145Biometric identity checks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a shopping checkout system based on feature fusion, which comprises a commodity identification checkout terminal and a cloud server. The commodity identification checkout terminal comprises a processor, a commodity camera, a binocular camera, a pressure sensor array module, an object placing table, a communication module, a loudspeaker, a sound acquisition module and a touch display screen; the cloud server comprises a commodity database, a transaction record management module, a checkout module, a commodity identification module, a face recognition module and a voiceprint recognition module. The pressure sensor array module is affixed to the surface of the object placing table. The customer does not need to take bulk commodities to a separate weighing station; the bulk commodities are taken directly to the commodity identification checkout system for checkout, so the settlement of bulk and packaged commodities is unified. The system also identifies all commodities simultaneously: the customer only needs to place all the commodities on the table top at once, without arranging them in sequence, and every commodity, whether packaged or bulk, is identified in a uniform way.

Description

Shopping checkout system and method based on feature fusion
Technical Field
The invention relates to the technical field of shopping checkout, in particular to a shopping checkout system and method based on feature fusion.
Background
In the current commodity settlement mode, commodities are identified by scanning their bar codes one by one. Scanning item by item takes considerable time, and manual scanning requires a large labor expenditure. In addition, self-service code scanning does not support the settlement of bulk goods, so bulk goods depend on a weighing attendant while commodity settlement depends on a cashier. With bulk and packaged goods settled separately, supermarkets spend heavily on manpower and material resources, their economic benefit is low, and customers waste too much time. The recently emerging self-service cash registers and self-service weighing platforms only relieve the pressure on cashiers and weighing staff when customer traffic is heavy; they do not significantly improve the shopping experience. Therefore, the industry urgently needs a system or method that identifies all goods simultaneously and settles bulk and packaged goods in a unified manner, solving the problem of separate settlement modes for bulk goods and packaged goods.
Disclosure of Invention
The invention aims to overcome the defects in the prior art, and provides a shopping checkout system based on feature fusion.
The invention further aims to overcome the defects in the prior art, and provides a shopping checkout method based on feature fusion.
The aim of the invention is achieved by the following technical scheme:
a shopping checkout system based on feature fusion, comprising a commodity identification checkout terminal and a cloud server, wherein the commodity identification checkout terminal comprises a processor, a commodity camera, a binocular camera, a pressure sensor array module, an object placing table, a communication module, a loudspeaker, a sound acquisition module and a touch display screen, and the cloud server comprises a commodity database, a transaction record management module, a checkout module, a commodity identification module, a face recognition module and a voiceprint recognition module; the commodity camera, the binocular camera, the pressure sensor array module, the loudspeaker, the sound acquisition module and the touch display screen are connected to one end of the processor, and the other end of the processor is connected, through the communication module, with the commodity database, the transaction record management module, the checkout module, the commodity identification module, the face recognition module and the voiceprint recognition module; the pressure sensor array module is affixed to the surface of the object placing table.
Preferably, a box body is arranged on the left side of the object placing table, and a processor and a communication module are arranged inside the box body. The commodity camera is arranged on the lower surface of the pointed protrusion of the box body, and the touch display screen is arranged on the inclined front face of the box body. A small round hole is arranged on the inclined face below the touch display screen, with a loudspeaker and a sound acquisition module inside, and the binocular camera is arranged on the inclined face above the touch display screen.
Preferably, the pressure sensor array module includes: a pressure sensor array unit and an analog-to-digital conversion circuit; the pressure sensor array unit is arranged on the surface of the object placing table, the pressure sensor array unit is connected with one end of the analog-to-digital conversion circuit, and the other end of the analog-to-digital conversion circuit is connected with the processor.
Preferably, the communication module is any one of a 4G internet of things communication module, a WIFI communication module and an ethernet communication module, and the processor is any one of a microcomputer, a workstation or an embedded control motherboard.
Preferably, the checkout module comprises a two-dimensional code payment unit and a biological payment unit, the face recognition module comprises a face identity authentication unit and a blink detection unit, the commodity recognition module comprises a commodity positioning unit and a commodity recognition unit, and the voiceprint recognition module comprises a voice preprocessing unit, a voice feature extraction unit and a voice classification unit which are sequentially connected.
Another object of the invention is achieved by the following technical scheme:
a shopping checkout method based on feature fusion, comprising:
S1, the binocular camera shoots a face image of the customer, and the sound acquisition module acquires the payment keyword spoken by the customer; the touch display screen receives the electronic payment account information input by the customer, completing the binding of the customer and the electronic payment account;
S2, the customer places the commodities, including bulk commodities, on the object placing table, and the cloud server identifies the commodities;
S3, when the customer selects biometric payment, the customer faces the binocular camera, blinks for liveness detection, and says 'confirm payment' to confirm the transaction; the bound electronic payment account or virtual payment account is automatically debited;
S4, the customer takes away all the commodities on the object placing table.
Preferably, step S2 includes:
the pressure sensor array module acquires the weight distribution information of the commodities on the object placing table and constructs a weight distribution map;
the commodity camera shoots all the commodities on the object placing table to obtain a commodity picture;
the processor uploads the commodity picture and the weight distribution map to the cloud server;
the commodity identification module of the cloud server locates the commodities on the commodity picture, identifies their types, and acquires the weight w_k of each commodity from the weight distribution map;
after the unit price of commodity a_k is obtained from the commodity database, the price of commodity a_k is calculated by combining the weight w_k;
traversing k over all values from 1 to n identifies all the commodities on the object placing table.
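To make this loop concrete, the following minimal Python sketch assumes the cloud server has already returned, for each detected commodity, its name, its weight w_k taken from the weight distribution map, and a bulk/packaged flag; every function and field name here is hypothetical rather than taken from the patent's implementation.

```python
# Illustrative settlement loop for step S2 (all names are hypothetical).

def settle_items(detections, commodity_db):
    """detections: list of dicts {'name', 'weight_kg', 'is_bulk'} returned by
    the cloud server; commodity_db maps a commodity name to its database record."""
    bill = []
    for item in detections:                       # traverse k = 1 .. n
        record = commodity_db[item['name']]
        if item['is_bulk']:
            # bulk commodity: price = unit price x weight w_k from the weight map
            price = record['unit_price'] * item['weight_kg']
        else:
            # packaged commodity: price is read directly from the database
            price = record['price']
        bill.append((item['name'], item['weight_kg'], price))
    total = sum(entry[2] for entry in bill)
    return bill, total
```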
Preferably, the commodity identification module of the cloud server locating the commodity on the commodity picture and acquiring the weight w_k of the commodity from the weight distribution map comprises: the commodity identification module of the cloud server locates the position of each commodity on the commodity picture and then acquires the sub-region r_k corresponding to commodity a_k in the weight distribution map; the sub-region r_k of the weight distribution map is mapped by affine transformation to the commodity picture to obtain the corresponding picture sub-region s_k; the cloud server integrates the pressure within sub-region r_k of the weight distribution map to obtain the weight w_k of commodity a_k.
The cloud server integrating the pressure within sub-region r_k to obtain the weight w_k of commodity a_k comprises: performing pressure calibration on the pressure sensor array unit in advance to obtain a calibration curve; calibrating the value of any point p_j within sub-region r_k of the weight distribution map with the calibration curve to obtain the real pressure value rw_j, where p_j is any point in region r_k and j = 1 … m; summing rw_j over all j from 1 to m to obtain the weight w_k of sub-region r_k.
The commodity identification module of the cloud server locating the position of the commodity on the commodity picture comprises: preprocessing the weight distribution map to obtain the position of the commodity on the commodity picture; post-processing the commodity position obtained by the preprocessing to obtain a weight distribution map sub-region and a picture sub-region.
The commodity identification module of the cloud server identifying the type of the commodity on the commodity picture comprises: inputting the weight distribution map sub-region into a convolution layer to extract the feature vector of the weight distribution map sub-region; inputting the picture sub-region into a feature extraction layer to extract the feature vector of the picture sub-region; the convolution layer and the feature extraction layer both have a deep convolutional neural network structure.
Preferably, the preprocessing steps are: binarizing the weight distribution map using the background weight as the threshold to obtain a binary map; performing template matching on the binary map to determine the rough position of the commodity; expanding the rough position area to obtain the commodity position, so that the commodity image falls completely inside the located range;
the post-processing comprises the following steps: cutting the weight distribution map according to the commodity position to obtain the cut weight distribution map region; applying the affine transformation between the weight distribution map and the picture to the commodity position to obtain the corresponding position in the commodity picture; cutting the commodity picture according to that corresponding position to obtain the cut picture region; median-filtering the cut picture region; and scaling the cut weight distribution map region and the cut commodity picture region to the input size expected by the convolutional neural network, to obtain the weight distribution map sub-region and the picture sub-region.
Preferably, the face recognition module comprises a face identity authentication unit and a blink detection unit. The face identity authentication unit combines the conventional visible-light three-channel image with a near-infrared single-channel image to form a four-channel image, which is fed as input to a convolutional neural network CNN that recognizes and classifies the face, yielding the identity information of the customer's face. The blink detection unit extracts the feature points of the eyes and sends feature vectors describing these feature points into a machine learning classifier for classification training, obtaining a recognition model for blink detection. The voiceprint recognition module pre-emphasizes, frames and windows the collected voice signal; performs endpoint detection, recognizing the starting time, transition stage, noise section and ending time of the voice signal, where the endpoint detection algorithm is a double-threshold method based on short-time energy and short-time zero-crossing rate; and calculates the Mel-frequency cepstral coefficients and gammatone frequency cepstral coefficients of each frame of the voice signal, which are combined to form a fused speech feature.
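As a small illustration of the four-channel input, the sketch below stacks a registered visible-light frame and a near-infrared frame with OpenCV and NumPy; the function name, resolution and normalisation are assumptions of this sketch, not the patent's implementation.

```python
import cv2
import numpy as np

def make_four_channel_face_input(visible_bgr, nir_gray, size=(224, 224)):
    """Stack a visible 3-channel image and a near-infrared single-channel image
    into the 4-channel array fed to the face-recognition CNN.  Both frames are
    assumed to be already registered to the same viewpoint."""
    visible = cv2.resize(visible_bgr, size)
    nir = cv2.resize(nir_gray, size)
    four_channel = np.dstack([visible, nir])           # H x W x 4
    return four_channel.astype(np.float32) / 255.0     # simple [0, 1] scaling
```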
Compared with the prior art, the invention has the following advantages:
the customer does not need to additionally take the bulk commodity to a weighing position for weighing, but directly takes the bulk commodity to a commodity identification and checkout system for checkout, and the settlement mode of the bulk packaged commodity is unified. And this scheme can discern all commodity simultaneously, need not to place commodity in proper order, and the customer only need put all commodity on the mesa simultaneously, and no matter this commodity is packing commodity or bulk commodity, homoenergetic unification discernment. The characteristic fusion mode of combining the weight distribution diagram and the picture can better position the commodity and extract more complete commodity characteristics, thereby having higher commodity identification accuracy. Replacing traditional bar code identification with visual identification: the process of repeatedly searching the commodity bar code is omitted, and the identification process is quicker. Customer self-service commodity checkout: the employment of cashiers is avoided, and the supermarket operation cost is saved.
Drawings
FIG. 1 is a block diagram of the feature fusion-based shopping checkout system of the present invention.
Fig. 2 is a block diagram of the commodity identification checkout terminal of the present invention.
FIG. 3 is a flow chart of a feature fusion-based shopping checkout method of the present invention.
Fig. 4 is a schematic diagram of a commodity identification module according to the present invention.
Fig. 5 is a schematic diagram of a face recognition module according to the present invention.
Fig. 6 is a schematic diagram of a voiceprint recognition module of the present invention.
Fig. 7 is a residual block diagram of the present invention.
Wherein, 1: a storage table; 2: a pressure sensor array unit; 3: a case; 4: a commodity camera; 5: a small round hole; 6: touching the display screen; 7: binocular camera.
Detailed Description
The invention is further described below with reference to the drawings and examples.
Referring to fig. 1-2, a shopping checkout system based on feature fusion comprises a commodity identification checkout terminal and a cloud server. The commodity identification checkout terminal comprises a processor, a commodity camera 4, a binocular camera 7, a pressure sensor array module, an object placing table 1, a communication module, a loudspeaker, a sound acquisition module and a touch display screen; the cloud server comprises a commodity database, a transaction record management module, a checkout module, a commodity identification module, a face recognition module and a voiceprint recognition module. The commodity camera 4, the binocular camera 7, the pressure sensor array module, the loudspeaker, the sound acquisition module and the touch display screen are connected to one end of the processor, and the other end of the processor is connected, through the communication module, with the commodity database, the transaction record management module, the checkout module, the commodity identification module, the face recognition module and the voiceprint recognition module. The pressure sensor array module is affixed to the surface of the object placing table 1.
In this embodiment, the commodity camera 4 collects image information of the commodities. The binocular camera 7 collects the customer's face image and the payment two-dimensional code image; its two lenses work in different light-wave bands. The loudspeaker plays voice prompts of the commodity identification information and the payment information. The sound acquisition module comprises a microphone and the corresponding driving circuit and collects voiceprint information. The touch display screen is a capacitive or resistive touch color display screen; it displays the list of identified commodities, including commodity name, weight, quantity, unit price and total price, and displays the commodity payment information, including total price, payment two-dimensional code and payment options, realizing human-machine interaction with the customer. The commodity database stores the name, weight, unit price and price of each commodity. The transaction record management module records, views and manages the transaction records of the commodities. The voiceprint recognition module uses the voiceprint information acquired by the sound acquisition module to perform speaker recognition through a voiceprint recognition algorithm. The commodity identification module identifies the commodities from the commodity pictures collected by the commodity camera 4, assisted by the weight distribution information acquired by the pressure sensor array module, using commodity localization and commodity recognition algorithms.
More specifically, the processor is any one of a microcomputer, a workstation or an embedded control motherboard. The processor coordinates the work and operation of the other sub-modules in the commodity identification checkout system and performs preliminary processing of the collected data.
Commodity camera 4: the commodity camera 4 is a color high-definition camera. It collects image information of the commodities, which the processor then transmits through the communication module to the cloud server for identification.
Binocular camera 7: the binocular camera 7 has two lenses working in different light-wave bands, one a visible-light camera and the other a near-infrared camera. It collects face image information of the customer, which the processor transmits through the communication module to the cloud server for recognition, and it also collects the payment two-dimensional code image.
Pressure sensor array module: the pressure sensor array module consists of a pressure sensor array unit and an analog-to-digital conversion circuit. The pressure sensor array unit collects the weight distribution information of the commodities; the processor processes this information into a weight distribution map and uploads the map through the communication module to the cloud server to assist commodity identification.
Object placing table 1: the object placing table 1 is a solid-color flat plate with the pressure sensor array unit affixed to its surface. It holds the commodities to be identified and provides a uniform, evenly colored background for commodity image shooting.
Communication module: the communication module includes a 4G Internet-of-Things communication module, a WIFI communication module and an Ethernet communication module. It transmits images and other data and implements communication with the cloud server.
Loudspeaker: the loudspeaker plays voice prompts of the commodity identification information and the payment information.
Sound acquisition module: the sound acquisition module comprises a microphone and the corresponding driving circuit and collects voiceprint information.
Touch display screen: the touch display screen is a capacitive or resistive touch color display screen. It displays the list of identified commodities, including commodity name, weight, quantity, unit price and total price, and displays the commodity payment information, including total price, payment two-dimensional code and payment options, realizing human-machine interaction with the customer.
Commodity database: the commodity database stores the name, weight, unit price and price of each commodity.
Transaction record management module: the transaction record management module records, views and manages the transaction records of the commodities.
Checkout module: the checkout module comprises two-dimensional code payment, biometric payment and other mainstream payment means. It invokes the corresponding payment interface according to the commodity identification result from the commodity identification module and either the customer identity information recognized by the face recognition module and the voiceprint recognition module or the account information in the payment two-dimensional code, and automatically deducts the payment from the customer's electronic payment account or virtual payment account.
Commodity identification module: the basic flow of the commodity identification module is commodity localization followed by commodity recognition. It locates the commodity positions and identifies the commodity types in the commodity image uploaded to the cloud server.
Face recognition module: the face recognition module consists of a face identity authentication unit and a blink detection unit. The face identity authentication unit performs identity recognition on the face image uploaded to the cloud server, and the blink detection unit performs biological liveness detection on the face image uploaded to the cloud server.
Voiceprint recognition module: the voiceprint recognition module comprises a voice preprocessing unit, a voice feature extraction unit and a voice classification unit connected in sequence. The voice preprocessing unit preprocesses the collected voice signal, the voice feature extraction unit extracts features from the preprocessed voice, and the voice classification unit classifies the extracted voice features.
In this embodiment, referring to fig. 2, the left side of the object placing table 1 is a box 3, a processor and a communication module are disposed inside the box 3, a commodity camera 4 is disposed on the lower surface of the pointed protrusion of the box 3, a touch display screen 6 is disposed on the inclined surface of the front side of the box 3, a small round hole is disposed below the touch display screen 6 and on the inclined surface, a speaker and a sound collecting module are disposed inside the small round hole, and a binocular camera 7 is disposed above the touch display screen 6 and on the inclined surface.
In this embodiment, the pressure sensor array module includes: the pressure sensor array unit 2 and an analog-to-digital conversion circuit; the pressure sensor array unit 2 is arranged on the surface of the object placing table 1, the pressure sensor array unit 2 is connected with one end of the analog-to-digital conversion circuit, and the other end of the analog-to-digital conversion circuit is connected with the processor. The pressure sensor array module is used for collecting weight distribution information of commodities. The object placing table 1 is a solid-color plane plate with the surface being stuck with the pressure sensor array unit 2.
In this embodiment, the communication module is any one of a 4G internet of things communication module, a WIFI communication module, and an ethernet communication module, and the processor is any one of a microcomputer, a workstation, or an embedded control motherboard.
In this embodiment, the checkout module includes a two-dimensional code payment unit and a biological payment unit, the face recognition module includes a face identity authentication unit and a blink detection unit, the commodity recognition module includes a commodity positioning unit and a commodity recognition unit, and the voiceprint recognition module includes a voice preprocessing unit, a voice feature extraction unit and a voice classification unit which are sequentially connected.
Referring to fig. 3, a feature fusion-based shopping checkout method applicable to the feature fusion-based shopping checkout system includes:
S1, the binocular camera 7 shoots a face image of the customer, and the sound acquisition module acquires the payment keyword spoken by the customer; the touch display screen 6 receives the electronic payment account information input by the customer, completing the binding of the customer and the electronic payment account.
S2, the customer places the commodities, including bulk commodities, on the object placing table 1, and the cloud server identifies the commodities;
S3, when the customer selects biometric payment, the customer faces the binocular camera 7, blinks for liveness detection, and says 'confirm payment' to confirm the transaction; the bound electronic payment account or virtual payment account is automatically debited;
S4, the customer takes away all the commodities on the object placing table 1.
From the customer's perspective, step S1 operates as follows: the customer enters the customer registration interface by touching the touch display screen 6 and follows the on-screen prompts. When the face is recorded, the customer stands in front of the binocular camera 7 facing the camera and completes the recording of the face image by turning the head and similar actions. When the voiceprint is collected, the customer repeats the payment keyword (for example 'confirm payment') several times according to the on-screen prompts to complete the entry of the voiceprint information. The commodity identification checkout terminal uploads the face images and voice information to the cloud server and calls the face recognition module and the voiceprint recognition module to complete their training. The customer completes the binding of the face and the electronic payment account by entering personal electronic payment account information on the touch display screen 6, or recharges the virtual payment account using an existing electronic payment account (for example Alipay, WeChat or a bank card).
From the customer's perspective, step S2 operates as follows: the customer carries one or more commodities to the commodity identification checkout system. The commodities include bulk commodities and packaged commodities. The customer places all the commodities on the object placing table 1 at the same time, or places them in turn, taking care not to stack them. The processor obtains the identified commodity information from the cloud server through the communication module and displays it on the touch display screen 6.
From the customer's perspective, step S3 operates as follows: when confirming settlement, the customer selects biometric payment or two-dimensional code payment on the touch display screen 6. Biometric payment requires the face and voiceprint information registered by the customer in advance. When biometric payment is selected, the customer faces the binocular camera 7, blinks for liveness detection, and says 'confirm payment' to confirm the transaction; the bound electronic payment account or virtual payment account is automatically debited. During this process the commodity identification checkout system uploads face video images and voice information to the cloud server in real time, calls the face recognition module for face recognition and blink detection, and calls the voiceprint recognition module for speaker recognition, then automatically debits the electronic payment account or virtual payment account bound by the customer. When two-dimensional code payment is selected, the customer aims the payment two-dimensional code (for example Alipay or WeChat) at the binocular camera 7 and the electronic payment account is automatically debited; in this process the commodity identification checkout system analyzes and recognizes the captured two-dimensional code picture and invokes the official payment interface to debit the electronic payment account automatically. After the deduction succeeds, the transaction record management module of the cloud server records the transaction information of the commodities.
In this embodiment, step S2 includes: the pressure sensor array module acquires the weight distribution information of the commodities on the object placing table 1 and constructs a weight distribution map; the commodity camera 4 shoots all the commodities on the object placing table 1 to obtain a commodity picture; the processor uploads the commodity picture and the weight distribution map to the cloud server; the commodity identification module of the cloud server locates the commodities on the commodity picture, identifies their types, and acquires the weight w_k of each commodity from the weight distribution map; after the unit price of commodity a_k is obtained from the commodity database, the price of commodity a_k is calculated by combining the weight w_k; traversing k over all values from 1 to n identifies all the commodities on the object placing table 1.
Referring to fig. 4, the commodity identification module of the cloud server locating the commodity on the commodity picture and acquiring its weight w_k from the weight distribution map includes the following. The commodity identification module of the cloud server locates the positions of the commodities on the commodity picture and then acquires the sub-region r_k corresponding to commodity a_k in the weight distribution map; when several commodities are on the object placing table 1, their number is recorded as n and the commodities are recorded in sequence as a_i (i = 1 … n). The sub-region r_k of the weight distribution map is mapped by affine transformation to the commodity picture to obtain the corresponding picture sub-region s_k. The cloud server integrates the pressure within sub-region r_k of the weight distribution map to obtain the weight w_k of commodity a_k. If the commodity is a packaged commodity, its price is obtained directly from the commodity database; if it is a bulk commodity, the price of commodity a_k is calculated by combining the weight w_k with the unit price obtained from the commodity database. Traversing k over all values from 1 to n identifies all the commodities on the object placing table 1.
The cloud server integrating the pressure within sub-region r_k according to the weight distribution map to obtain the weight w_k of commodity a_k comprises: performing pressure calibration on the pressure sensor array unit in advance to obtain a calibration curve; calibrating the value of any point p_j within sub-region r_k of the weight distribution map with the calibration curve to obtain the real pressure value rw_j, where p_j is any point in region r_k and j = 1 … m; and summing rw_j over all j from 1 to m to obtain the weight w_k of sub-region r_k.
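A minimal Python sketch of this calibration-and-summation step follows; it assumes the calibration curve is stored as two sampled arrays (raw sensor reading versus true weight) and applies it by linear interpolation, which is only one possible way of using the curve.

```python
import numpy as np

def weight_from_subregion(weight_map, region_mask, raw_points, true_points):
    """Estimate the weight w_k of commodity a_k from its sub-region r_k.

    weight_map  : 2-D array of raw pressure-sensor readings (the weight map).
    region_mask : boolean mask of the same shape selecting sub-region r_k.
    raw_points, true_points : sampled calibration curve (raw reading -> weight).
    """
    raw_values = weight_map[region_mask]                 # p_j, j = 1 .. m
    # map every raw value through the calibration curve -> real value rw_j
    real_values = np.interp(raw_values, raw_points, true_points)
    return float(real_values.sum())                      # w_k = sum over j of rw_j
```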
the commodity identification module of the cloud server locates the position of the commodity on the commodity picture, and the commodity identification module comprises: the commodity identification module of the cloud server preprocesses the weight distribution map to obtain the positions of commodities on the commodity picture; post-processing the commodity position obtained after the pretreatment to obtain a weight distribution map sub-region and a picture sub-region; the pretreatment steps are as follows: binarizing the weight distribution map by taking the background weight as a threshold value to obtain a binarization map; template matching is carried out on the binarization graph, and the rough position of the commodity is determined; expanding the range of the rough position area of the commodity to obtain the commodity position so that the commodity image completely falls into the positioning range; the post-treatment comprises the following steps: cutting the weight distribution diagram according to the commodity position to obtain a weight distribution diagram cut area; carrying out affine transformation on the commodity position according to the affine transformation relation between the weight distribution diagram and the picture to obtain the corresponding position of the commodity position in the weight distribution diagram in the commodity picture; cutting the commodity picture according to the corresponding position in the commodity picture to obtain a picture cut area; median filtering is carried out on the region after the picture is cut; and performing expansion transformation on the region after the weight distribution map is cut and the region after the commodity picture is cut, and transforming the expansion transformation to the input size matched with the convolutional neural network to obtain a weight distribution map subregion and a picture subregion.
The commodity identification module of the cloud server identifying the types of the commodities on the commodity picture comprises: inputting the weight distribution map sub-region into a convolution layer to extract the feature vector of the weight distribution map sub-region; and inputting the picture sub-region into a feature extraction layer to extract the feature vector of the picture sub-region. The convolution layer and the feature extraction layer both have a deep convolutional neural network structure; they usually adopt variants of existing classification deep convolutional neural networks, of which VGG, ResNet, Inception, DenseNet, ZFNet and AlexNet are common examples.
The convolution layer is a variant of the classification deep convolutional neural network based on VGG16. N convolution kernels of size a × a are denoted conv_a_N; for example, 64 convolution kernels of size 3 × 3 are denoted conv_3_64. The single-channel weight distribution map sub-region is copied into three identical channels and fed to the subsequent convolution layers; the convolution layer takes a 224 × 224 three-channel picture as input. With this notation, the structure of the VGG16 variant can be described as: conv_3_64, maxpool, conv_3_128, maxpool, conv_3_256, maxpool, conv_3_512, conv_3_512, maxpool, conv_3_512, maxpool, fc_4096, fc_1000, where maxpool is a 2 × 2 pooling layer with stride 2, fc_4096 denotes a fully connected layer of 4096 neurons, and fc_1000 a fully connected layer of 1000 neurons whose output serves as the output of the convolution layer.
The feature extraction layer is a variant of the classification deep convolutional neural network based on ResNet50. The convolution structure shown in fig. 7 is denoted a residual block. A residual block consists, in sequence, of a 1 × 1 convolution kernels, b 3 × 3 convolution kernels and c 1 × 1 convolution kernels; in the structure shown in fig. 7, the values of a, b and c are 64, 64 and 256 in sequence. The residual block adds its input, taken from before the three layers, directly to their output, and the sum passes through a ReLU activation function to become the output of the residual block. Each convolution uses a ReLU activation function. A residual block with parameters a, b and c is denoted bottleneck_a_b_c; for example, the residual block shown in fig. 7 is denoted bottleneck_64_64_256. One exemplary structure of the feature extraction layer is: conv_7_64, maxpool, 3 × bottleneck_64_64_256, 4 × bottleneck_128_128_512, 6 × bottleneck_256_256_1024, 3 × bottleneck_512_512_2048, avgpool, fc_1000, where conv_7_64 denotes 64 convolution kernels of size 7 × 7, maxpool denotes a 3 × 3 max-pooling layer with stride 2, avgpool denotes an average pooling layer, and fc_1000 is a fully connected layer with 1000 neurons. Finally, the output of the fully connected layer fc_1000, a 1000-dimensional feature vector, is used as the output of the feature extraction layer.
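For illustration only, the two branches can be approximated in PyTorch with the stock torchvision VGG16 and ResNet50; these differ in detail from the variants described above (layer counts, fully connected sizes), so the sketch shows the overall dual-branch idea rather than the exact networks of the patent.

```python
import torch
import torch.nn as nn
from torchvision import models   # torchvision >= 0.13 for the `weights` argument

class DualBranchFeatures(nn.Module):
    """Stand-in for the two extractors: a VGG16-style branch for the weight-map
    sub-region and a ResNet50-style branch for the picture sub-region, each
    producing a 1000-dimensional feature vector."""
    def __init__(self):
        super().__init__()
        self.weight_branch = models.vgg16(weights=None)       # 1000-d output
        self.picture_branch = models.resnet50(weights=None)   # 1000-d output

    def forward(self, weight_sub, picture_sub):
        # weight_sub: single-channel map replicated to 3 channels, (B, 3, 224, 224)
        # picture_sub: picture sub-region, (B, 3, 224, 224)
        w_feat = self.weight_branch(weight_sub)      # weight feature
        p_feat = self.picture_branch(picture_sub)    # picture feature
        return torch.cat([w_feat, p_feat], dim=1)    # 2000-d fused feature
```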
The weight feature and the picture feature are concatenated to obtain a fused feature whose dimension is 2000.
The fused feature is input into a classification deep neural network composed of a fully connected layer and a softmax classification layer; the commodity with the highest probability at the softmax output layer is taken as the commodity identification result.
The classification deep neural network has, in order, the structure fc_2000, softmax, where fc_2000 denotes a fully connected layer with 2000 neurons and softmax is the classification output layer whose number of output nodes equals the number of commodity categories. All activation functions are ReLU.
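A matching sketch of the fc_2000 + softmax classification head, again purely illustrative and written against the 2000-dimensional fused feature of the previous sketch:

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """fc_2000 followed by a softmax output over the commodity categories."""
    def __init__(self, num_classes):
        super().__init__()
        self.fc = nn.Linear(2000, 2000)          # fc_2000
        self.relu = nn.ReLU()
        self.out = nn.Linear(2000, num_classes)  # softmax output layer

    def forward(self, fused_feature):
        logits = self.out(self.relu(self.fc(fused_feature)))
        # the commodity with the highest softmax probability is the result
        return torch.softmax(logits, dim=1)
```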
In this embodiment, referring to fig. 5, the face recognition module comprises a face identity authentication unit and a blink detection unit. The face identity authentication unit adopts a convolutional neural network CNN: the conventional visible-light three-channel image is combined with a near-infrared single-channel image to form a four-channel image, which is fed as input to the CNN for face recognition and classification, yielding the identity information of the customer's face. The blink detection unit extracts the feature points of the eyes and sends feature vectors describing these feature points into a machine learning classifier for classification training, obtaining a recognition model for blink detection. The purpose of the blink detection unit is to defend against non-living subjects; only a subject that blinks can be recognized as live. The blink detection algorithm can be a real-time blink detection algorithm based on eye feature extraction, which offers some robustness against spoofing attacks. The algorithm obtains the blink detection recognition model by extracting the feature points of the eyes and then sending the feature vectors describing those points into a machine learning classifier (such as a support vector machine, SVM) for classification training.
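By way of example, such a blink classifier could be trained as sketched below; the eye-aspect-ratio descriptor is one common choice of eye feature and is an assumption of this sketch, not the descriptor specified by the patent.

```python
import numpy as np
from sklearn.svm import SVC

def eye_descriptor(eye_landmarks):
    """Eye-aspect-ratio of a 6-point eye landmark set p1..p6 (one illustrative
    choice of 'feature points of eyes')."""
    p = np.asarray(eye_landmarks, dtype=float)          # shape (6, 2)
    ear = (np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])) \
          / (2.0 * np.linalg.norm(p[0] - p[3]))
    return np.array([ear])

def train_blink_classifier(feature_windows, blink_labels):
    """feature_windows: matrix of eye descriptors collected over short frame
    windows; blink_labels: 1 if a blink occurred in the window, else 0."""
    clf = SVC(kernel="rbf")                             # the SVM mentioned above
    clf.fit(feature_windows, blink_labels)
    return clf
```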
Referring to fig. 6, the voiceprint recognition module pre-emphasizes, frames and windows the collected voice signal; performs endpoint detection, recognizing the starting time, transition stage, noise section and ending time of the voice signal, where the endpoint detection algorithm is a double-threshold method based on short-time energy and short-time zero-crossing rate; and calculates the Mel-frequency cepstral coefficients and gammatone frequency cepstral coefficients of each frame of the voice signal, which are combined to form a fused speech feature.
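The pre-emphasis, framing, windowing and double-threshold endpoint detection can be sketched in Python as follows; the frame length, hop and thresholds are placeholders, and the single-pass thresholding below is a simplification of the full two-stage double-threshold procedure. MFCCs can be taken per frame from a library such as librosa, while the gammatone frequency cepstral coefficients need a separate gammatone filterbank implementation (not shown); the two are concatenated per frame to form the fused speech feature.

```python
import numpy as np

def preprocess(signal, frame_len=400, hop=160, alpha=0.97):
    """Pre-emphasis, framing and Hamming windowing of the captured speech."""
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return emphasized[idx] * np.hamming(frame_len)

def double_threshold_vad(frames, energy_th, zcr_th):
    """Simplified double-threshold endpoint detection: keep a frame as speech
    when its short-time energy or zero-crossing rate exceeds its threshold."""
    energy = (frames ** 2).sum(axis=1)
    zcr = ((frames[:, 1:] * frames[:, :-1]) < 0).mean(axis=1)
    return (energy > energy_th) | (zcr > zcr_th)

# Per-frame MFCCs (e.g. librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)) and
# gammatone frequency cepstral coefficients are then concatenated frame by
# frame into the fused speech feature used by the classifier.
```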
The fused speech features are used to train a deep neural network whose output layer is a softmax classification layer; the class with the largest softmax output probability is taken as the recognition result.
Compared with the prior art, the scheme has the following advantages:
(1) Simultaneous unified identification: compared with the prior art of scanning commodities one by one, with this system the customer only needs to place all the commodities on the table top at the same time. The system identifies all commodities simultaneously, whether packaged or bulk, without requiring them to be placed in sequence.
(2) Unified settlement of bulk and packaged commodities: with this system, the customer does not need to take bulk commodities to a separate weighing station, but takes them directly to the commodity identification checkout system for checkout.
(3) Replacing traditional bar code identification with visual identification: the process of repeatedly searching the commodity bar code is omitted, and the identification process is quicker.
(4) Customer self-service commodity checkout: the employment of cashiers is avoided, and the supermarket operation cost is saved.
(5) Higher commodity identification accuracy: compared with traditional machine learning algorithms, the feature fusion of the weight distribution map with the picture locates the commodities better and extracts more complete commodity features, so the commodity identification accuracy is higher.
(6) Higher face recognition accuracy: compared with the traditional machine learning algorithm, the face feature can be described more completely by adopting the feature fusion and convolutional neural network mode, so that the face recognition accuracy is higher.
(7) The cloud-server access mode reduces the cost of image identification: the visual identification model is accessed on the cloud server instead of being written into the checkout terminal, which makes the model easy to modify and reduces the hardware cost of the commodity identification checkout system.
(8) The wired-network deployment cost of the equipment is saved: access over 4G Internet of Things allows the equipment to be deployed without a wired network connection, while WIFI and Ethernet communication interfaces are also supported.
(9) The probability of an incorrect checkout is reduced: the triple verification of face recognition, blink detection and voiceprint recognition greatly reduces the possibility of an incorrect checkout caused by customer misoperation, while remaining convenient and easy to operate compared with current checkout modes.
(10) Supermarket labor cost is saved: the supermarket no longer needs to hire weighing attendants and cashiers, saving labor expenditure. If a supermarket originally employs 7 cashiers and 3 weighing attendants, each paid five thousand yuan per month, then after introducing this system the saving amounts to 10 × 5,000 × 12 = 600,000 yuan per year, a considerable economic benefit.
The above embodiments are preferred examples of the present invention, and the present invention is not limited thereto, and any other modifications or equivalent substitutions made without departing from the technical aspects of the present invention are included in the scope of the present invention.

Claims (7)

1. A shopping checkout system based on feature fusion, comprising: the commodity identification checkout terminal comprises a processor, a commodity camera, a binocular camera, a pressure sensor array module, a storage table, a communication module, a loudspeaker, a sound acquisition module and a touch display screen, and the cloud server comprises a commodity database, a transaction record management module, a checkout module, a commodity identification module, a face recognition module and a voiceprint identification module;
the commodity camera, the binocular camera, the pressure sensor array module, the loudspeaker, the sound acquisition module, the touch display screen and one end of the processor are connected, and the other end of the processor is connected with the commodity database, the transaction record management module, the checkout module, the commodity identification module, the face recognition module and the voiceprint identification module through the communication module; the surface of the object placing table is stuck with a pressure sensor array module;
the processor is used for preliminary processing of the acquired data;
the commodity camera is used for collecting image information of commodities;
the binocular cameras are binocular cameras with different light wave bands, one being a visible light camera and the other a near infrared light camera, and are used for collecting face images of customers and payment two-dimensional code images;
the pressure sensor array module comprises a pressure sensor array unit and an analog-to-digital conversion circuit, wherein the pressure sensor array unit is arranged on the surface of the object placing table, one end of the pressure sensor array unit is connected with one end of the analog-to-digital conversion circuit, and the other end of the analog-to-digital conversion circuit is connected with the processor and is used for collecting weight distribution information of commodities and constructing a weight distribution diagram;
the object placing table is a solid color plane plate and is used for placing the identified commodities and providing uniform background with uniform color for commodity image shooting;
the communication module is used for transmitting images and other data information and realizing communication with the cloud server; the loudspeaker is used for playing voice prompts of commodity identification information and payment information;
the sound collection module comprises a microphone and a corresponding driving circuit and is used for collecting voiceprint information;
the touch display screen is a capacitive or resistive touch color display screen and is used for displaying a list of the identified commodities and commodity payment information;
the commodity database is used for storing the name, weight, unit price and price information of each commodity;
the transaction record management module is used for recording, checking and managing the transaction record of the commodity;
the checkout module comprises a two-dimension code payment unit and a biological payment unit, and is used for calling a corresponding payment interface according to the commodity identification result of the commodity identification module, the customer identity information identified by the face identification module and the voiceprint identification module or the account information of the payment two-dimension code, and automatically deducting money from an electronic payment account or a virtual payment account of a customer;
the commodity identification module comprises a commodity positioning unit and a commodity identification unit and is used for positioning commodity positions and identifying commodity types of commodity images uploaded to the cloud server;
the face identification module comprises a face identification unit and a blink detection unit, wherein the face identification module is used for carrying out identification judgment on the face image uploaded to the cloud server, and the blink detection unit is used for carrying out biological living body identification on the face image uploaded to the cloud server;
the voiceprint recognition module comprises a voice preprocessing unit, a voice feature extraction unit and a voice classification unit which are sequentially connected, wherein the voice preprocessing unit is used for preprocessing collected voice signals, the voice feature extraction unit is used for extracting preprocessed voice features, and the voice classification unit is used for classifying the extracted voice features.
2. The shopping checkout system based on feature fusion according to claim 1, wherein the left side of the object placing table is a box body, a processor and a communication module are arranged in the box body, a commodity camera is arranged on the lower side surface of the protrusion of the sharp corner of the box body, a touch display screen is arranged on the inclined surface of the front side of the box body, a small round hole is arranged below the touch display screen and on the inclined surface, a loudspeaker and a sound collecting module are arranged in the small round hole, and a binocular camera is arranged above the touch display screen and on the inclined surface.
3. The shopping checkout system based on feature fusion of claim 1, wherein the communication module is any one of a 4G internet of things communication module, a WIFI communication module and an ethernet communication module, and the processor is any one of a microcomputer, a workstation or an embedded control motherboard.
4. A shopping checkout method based on feature fusion, comprising:
s1, a binocular camera shoots a face image of a customer, and a sound acquisition module acquires payment keywords input by the customer; the touch display screen receives electronic payment account information input by a customer, and binding of the customer and the electronic payment account is completed;
s2, the customer places bulk goods and packaged goods on the object placing table simultaneously or sequentially, and the cloud server identifies the goods and comprises the following steps:
the pressure sensor array module acquires weight distribution information of commodities on the object placing table and constructs a weight distribution diagram;
the commodity camera shoots all commodities on the object placing table to obtain commodity pictures;
the processor uploads the commodity pictures and the weight distribution map to the cloud server;
the commodity identification module of the cloud server is used for positioning and identifying the types of the commodities on the commodity pictures and acquiring the weight w_k of the commodities on the weight distribution diagram according to the weight distribution diagram;
after the unit price of the commodity a_k is obtained from the commodity database, the price of the commodity a_k is calculated by combining the weight w_k;
traversing k to all values from 1 to n, and identifying all commodities on the object placing table;
s3, when the customer selects biological payment, the customer aims at the binocular camera with the self face, blinks to determine living bodies, says 'confirm payment' to confirm transaction, and the bound electronic payment account or virtual payment account automatically deducts money;
in the biological payment process, a commodity identification checkout terminal uploads a face video image and voice information to a cloud server, a face recognition module is called to carry out face recognition and blink detection, and a voiceprint recognition module is called to carry out speaker recognition;
s4, the customer takes all the commodities on the storage table.
5. The shopping checkout method based on feature fusion of claim 4, wherein the step in which the commodity identification module of the cloud server locates the position of the commodity in the commodity picture and acquires the weight w_k of the commodity from the weight distribution map comprises:
the commodity identification module of the cloud server locates the positions of the commodities in the commodity pictures and then acquires the sub-region r_k corresponding to each commodity a_k in the weight distribution map;
the sub-region r_k of the weight distribution map is mapped by affine transformation to the commodity picture, obtaining the corresponding sub-region s_k in the commodity picture;
the cloud server integrates the pressure values within the sub-region r_k of the weight distribution map to obtain the weight w_k of the commodity a_k;
wherein integrating the pressure values within the sub-region r_k of the weight distribution map to obtain the weight w_k of the commodity a_k comprises:
performing pressure calibration on the pressure sensor array unit in advance to obtain a calibration curve;
calibrating the value of each point p_j in the sub-region r_k of the weight distribution map with the calibration curve to obtain a real pressure value rw_j, where p_j is any point in the sub-region r_k and j = 1 ... m;
summing rw_j over all j from 1 to m to obtain the weight w_k of the sub-region r_k;
wherein locating, by the commodity identification module of the cloud server, the position of the commodity in the commodity picture comprises:
the commodity identification module of the cloud server preprocesses the weight distribution map to obtain the positions of commodities on the commodity picture;
post-processing the commodity positions obtained by the preprocessing to obtain a weight distribution map sub-region and a picture sub-region;
wherein identifying, by the commodity identification module of the cloud server, the types of the commodities in the commodity pictures comprises:
inputting the weight distribution map sub-region into a convolution layer, and extracting the feature vector of the weight distribution map sub-region;
inputting the picture sub-region into a feature extraction layer, and extracting the feature vector of the picture sub-region; the convolution layer and the feature extraction layer are both of a deep convolutional neural network structure.
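The weight integration of claim 5 can be sketched as follows: every raw reading p_j inside the sub-region r_k is mapped through the pre-measured calibration curve and the calibrated values rw_j are summed to give w_k. The rectangular sub-region format and the linear-interpolation calibration are assumptions for illustration; the claim only requires that a calibration curve be applied before summation.

```python
import numpy as np

def calibrate(raw: np.ndarray, curve_raw: np.ndarray, curve_true: np.ndarray) -> np.ndarray:
    """Map raw sensor readings to real pressure values via the calibration curve
    (curve_raw must be increasing for np.interp)."""
    return np.interp(raw, curve_raw, curve_true)

def commodity_weight(weight_map: np.ndarray, r_k: tuple, curve_raw, curve_true) -> float:
    """Sum the calibrated values rw_j over all m points p_j of sub-region r_k to get w_k."""
    x, y, w, h = r_k                                   # rectangular sub-region on the weight map
    raw_points = weight_map[y:y + h, x:x + w].ravel()  # p_1 ... p_m
    rw = calibrate(raw_points, np.asarray(curve_raw), np.asarray(curve_true))
    return float(rw.sum())                             # w_k
```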
6. The feature fusion-based shopping checkout method of claim 5, wherein the preprocessing comprises:
binarizing the weight distribution map by taking the background weight as a threshold value to obtain a binarization map;
template matching is carried out on the binarized map to determine the rough position of the commodity;
the rough position region of the commodity is expanded to obtain the commodity position, so that the commodity image falls completely within the located range;
the post-processing comprises the following steps:
cropping the weight distribution map according to the commodity position to obtain a cropped weight distribution map region;
performing affine transformation on the commodity position according to the affine transformation relation between the weight distribution map and the picture, to obtain the position in the commodity picture corresponding to the commodity position in the weight distribution map;
cropping the commodity picture according to the corresponding position in the commodity picture to obtain a cropped picture region;
performing median filtering on the cropped picture region;
and scaling the cropped weight distribution map region and the cropped commodity picture region to the input size required by the convolutional neural network, to obtain the weight distribution map sub-region and the picture sub-region.
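A minimal OpenCV sketch of the preprocessing and post-processing chain of claim 6, assuming a caller-supplied binary template, a 2x3 affine matrix M (weight map to picture) and a CNN input size; these parameters are illustrative, not prescribed by the claim.

```python
import cv2
import numpy as np

def preprocess(weight_map, template, bg_weight, pad=10):
    """Binarize with the background weight as threshold, template-match, expand the region."""
    _, binary = cv2.threshold(weight_map.astype(np.float32), bg_weight, 255, cv2.THRESH_BINARY)
    res = cv2.matchTemplate(binary.astype(np.uint8), template.astype(np.uint8),
                            cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)               # rough commodity position
    h, w = template.shape[:2]
    return max(x - pad, 0), max(y - pad, 0), w + 2 * pad, h + 2 * pad  # expanded position

def postprocess(weight_map, picture, pos, M, cnn_size=(224, 224)):
    """Crop both maps, median-filter the picture crop, scale both to the CNN input size."""
    x, y, w, h = pos
    wm_crop = weight_map[y:y + h, x:x + w]
    corners = cv2.transform(np.float32([[[x, y], [x + w, y + h]]]), M)[0]  # map -> picture
    (px1, py1), (px2, py2) = corners.astype(int)
    pic_crop = cv2.medianBlur(picture[py1:py2, px1:px2], 3)
    return cv2.resize(wm_crop.astype(np.float32), cnn_size), cv2.resize(pic_crop, cnn_size)
```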
7. The shopping checkout method based on feature fusion according to claim 4, wherein the face recognition module comprises a face identity authentication unit and a blink detection unit;
the face identity authentication unit is used for combining a conventional visible-light three-channel image with a near-infrared single-channel image to form a four-channel image, and feeding the four-channel image as input into a convolutional neural network (CNN) for face identification and classification to obtain the identity information of the customer's face;
the blink detection unit is used for extracting feature points of the eyes, and sending feature vectors describing the feature points into a machine learning classifier for classification training to obtain a recognition model for blink detection;
the voiceprint recognition module is used for pre-emphasizing, framing and windowing the collected voice signal; performing endpoint detection to identify the start time, transition stage, noise section and end time of the voice signal, the endpoint detection algorithm being a double-threshold endpoint detection method based on short-time energy and short-time zero-crossing rate; and calculating the Mel-frequency cepstral coefficients and gammatone frequency cepstral coefficients of each frame of the voice signal and combining them to form a fused voice feature.
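The voice front end of claim 7 can be sketched as below: pre-emphasis, framing, Hamming windowing, then double-threshold endpoint detection from short-time energy and short-time zero-crossing rate. The frame sizes and thresholds are illustrative assumptions, and the MFCC/GFCC extraction and fusion are not shown.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160, alpha=0.97):
    """Pre-emphasize, split into frames and apply a Hamming window."""
    x = np.append(x[0], x[1:] - alpha * x[:-1])          # pre-emphasis
    n = 1 + max(0, (len(x) - frame_len) // hop)
    frames = np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])
    return frames * np.hamming(frame_len)                # windowing

def endpoint_detect(frames, e_hi=None, e_lo=None, z_thr=None):
    """Double-threshold endpoint detection: high/low energy thresholds plus ZCR."""
    energy = np.sum(frames ** 2, axis=1)                                  # short-time energy
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)   # zero-crossing rate
    e_hi = e_hi or 0.25 * energy.max()
    e_lo = e_lo or 0.05 * energy.max()
    z_thr = z_thr or 1.5 * zcr.mean()
    voiced = energy > e_hi                                # frames that are certainly speech
    transition = (energy > e_lo) | (zcr > z_thr)          # transition stage / unvoiced tails
    idx = np.where(voiced)[0]
    if idx.size == 0:
        return None
    start, end = idx[0], idx[-1]
    while start > 0 and transition[start - 1]:            # extend into the transition stage
        start -= 1
    while end < len(frames) - 1 and transition[end + 1]:
        end += 1
    return start, end                                     # frame indices of the speech segment
```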
CN201811440047.4A 2018-11-29 2018-11-29 Shopping checkout system and method based on feature fusion Active CN109508974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811440047.4A CN109508974B (en) 2018-11-29 2018-11-29 Shopping checkout system and method based on feature fusion

Publications (2)

Publication Number Publication Date
CN109508974A CN109508974A (en) 2019-03-22
CN109508974B true CN109508974B (en) 2023-08-22

Family

ID=65751132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811440047.4A Active CN109508974B (en) 2018-11-29 2018-11-29 Shopping checkout system and method based on feature fusion

Country Status (1)

Country Link
CN (1) CN109508974B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210893A (en) * 2019-05-09 2019-09-06 秒针信息技术有限公司 Generation method, device, storage medium and the electronic device of report
CN111275907B (en) * 2020-01-20 2022-04-12 江苏恒宝智能系统技术有限公司 Multi-channel identification authentication method and multi-channel identification authentication terminal
CN111353389B (en) * 2020-02-11 2023-08-11 山东唐门信息技术有限公司 Intelligent weighing device based on multi-sensor recognition and analysis method thereof
CN111780844A (en) * 2020-06-22 2020-10-16 明光利拓智能科技有限公司 Weighing and paying device for traceability scale and using method thereof
CN112149577A (en) * 2020-09-24 2020-12-29 信雅达系统工程股份有限公司 Intelligent settlement system based on neural network image recognition
CN113361673B (en) * 2021-01-18 2022-07-15 南昌航空大学 Color two-dimensional code anti-counterfeiting method based on support vector machine
CN113762969B (en) * 2021-04-23 2023-08-08 腾讯科技(深圳)有限公司 Information processing method, apparatus, computer device, and storage medium
CN114267139A (en) * 2021-12-13 2022-04-01 湖南省金河计算机科技有限公司 Intelligent POS machine system based on Internet of things, control method thereof and electronic equipment
CN116503724A (en) * 2022-01-20 2023-07-28 索尼半导体解决方案公司 AI weighing system and method for increasing accuracy of AI model using multiple data sets
CN115601027B (en) * 2022-12-12 2023-04-21 临沂中科英泰智能科技有限责任公司 Self-service retail cashing system and method based on big data

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018147402A (en) * 2017-03-08 2018-09-20 東芝テック株式会社 Checkout system
CN106886889A (en) * 2017-03-31 2017-06-23 四川研宝科技有限公司 A kind of self-help shopping system and method based on commodity weight
CN108269371A (en) * 2017-09-27 2018-07-10 缤果可为(北京)科技有限公司 Commodity automatic settlement method, device, self-service cashier
CN107808469A (en) * 2017-11-02 2018-03-16 华北理工大学 Self-service supermarket system
CN108229946A (en) * 2018-02-08 2018-06-29 中山简单点网络技术有限公司 A kind of method of unmanned marketing balance system and system identification commodity
CN108537994A (en) * 2018-03-12 2018-09-14 深兰科技(上海)有限公司 View-based access control model identifies and the intelligent commodity settlement system and method for weight induction technology

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Analysis and Design of Self-Service Checkout Terminals for Large Supermarkets; Liu Meng et al.; Industrial Design; full text *

Also Published As

Publication number Publication date
CN109508974A (en) 2019-03-22

Similar Documents

Publication Publication Date Title
CN109508974B (en) Shopping checkout system and method based on feature fusion
US11151427B2 (en) Method and apparatus for checkout based on image identification technique of convolutional neural network
US11790433B2 (en) Constructing shopper carts using video surveillance
US11049373B2 (en) Storefront device, storefront management method, and program
WO2019062812A1 (en) Human-computer interaction device for automatic payment and use thereof
CN111222870B (en) Settlement method, device and system
CN108198315A (en) A kind of auth method and authentication means
WO2019127618A1 (en) Settlement method, device and system
WO2021179137A1 (en) Settlement method, apparatus, and system
US11705133B1 (en) Utilizing sensor data for automated user identification
CN108805644A (en) The commercial articles vending method and machine for vending of machine for vending
US20230137409A1 (en) Information processing system, customer identification apparatus, and information processing method
CN113887884A (en) Business-super service system
CN109448278A (en) Self-service shopping and goods picking system for unmanned store
US20220270061A1 (en) System and method for indicating payment method availability on a smart shopping bin
US11727470B2 (en) Optical scanning for weights and measures
CN109697645A (en) Information generating method and device for shelf
CN115294325A (en) Dynamic commodity identification system, method, medium, equipment and terminal for sales counter
CN114078299A (en) Commodity settlement method, commodity settlement device, electronic equipment and medium
CN114358795A (en) Payment method and device based on human face
CN113516469A (en) Unmanned supermarket vending system and vending method based on deep learning and eye tracking
CN112348607A (en) Image-based shopping guide method, terminal, device, medium and system in commercial place
CN112906759A (en) Pure vision-based entrance-guard-free unmanned store checkout method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant