WO2019062812A1 - Human-machine interaction device for automatic settlement and application thereof

用于自动结算的人机交互装置及其应用 (Human-machine interaction device for automatic settlement and application thereof)

Info

Publication number
WO2019062812A1
WO2019062812A1 · PCT/CN2018/107958 · CN2018107958W
Authority
WO
WIPO (PCT)
Prior art keywords
commodity
information
product
image
neural network
Application number
PCT/CN2018/107958
Other languages
English (en)
French (fr)
Inventor
陈子林
王良旗
Original Assignee
缤果可为(北京)科技有限公司
Application filed by 缤果可为(北京)科技有限公司
Publication of WO2019062812A1

Classifications

    • G07G 1/0018 — Cash registers; constructional details, e.g. of drawer, printing means, input means
    • G01G 19/40 — Weighing apparatus or methods adapted for special purposes, with provisions for indicating, recording, or computing price or other quantities dependent on the weight
    • G06F 18/24 — Pattern recognition; classification techniques
    • G06V 10/24 — Image preprocessing; aligning, centring, orientation detection or correction of the image
    • G06V 10/25 — Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 20/20 — Scenes; scene-specific elements in augmented reality scenes
    • G07G 1/00 — Cash registers
    • G07G 1/0036 — Cash registers; checkout procedures
    • G07G 1/12 — Cash registers electronically operated

Definitions

  • the present application relates to a human-machine interaction device for automatic settlement and an application thereof, and belongs to the field of deep learning neural networks and image recognition.
  • the first is based on RFID electronic tags: in a database, each electronic tag with a unique ID is mapped to a product, and the tag is attached to products of that type; a reader first reads the tag's unique ID and looks up the corresponding product information in the database, thereby "identifying" the product.
  • the weak point of this method is the RFID tag itself: tag costs are high, error correction is expensive, and identification is susceptible to electromagnetic interference.
  • the second is self-service barcode scanning, in which the user holds the barcode on the merchandise under a scanner and the machine "identifies" the product by reading the code.
  • this method increases the user's operating burden and is error-prone; for example, scanning the same item several times produces duplicate counts, and some barcodes are hard to scan correctly because the product is deformed.
  • moreover, the scanning process relies entirely on the user's honesty and therefore provides no basis for the subsequent anti-theft steps of self-checkout.
  • the present application provides a human-machine interaction device for automatic settlement, which solves the problem of machine identification of products in self-checkout scenarios: ordinary cameras capture product images, and a deep-learning-based image recognition algorithm identifies the products directly.
  • the method requires no third-party identifier; the user need only place the selected items on the counter for them to be identified.
  • product categories can be displayed as preset product images, improving recognition accuracy during self-checkout in the user's shopping process.
  • a camera unit configured to acquire a first product image of the product in the product placement area
  • An identification unit configured to identify the first product image, and output product information
  • a display unit configured to display the commodity information
  • the image capturing unit is disposed outside the product placement area and electrically connected to the identification unit, and the identification unit is electrically connected to the display unit;
  • the identification unit identifies the item type and the item quantity in the first item image, and outputs the preset image and the item quantity of the item to the display unit.
  • the camera unit includes at least two cameras, which capture the product from different angles and/or different depths of field to obtain the first product image.
  • the camera unit includes a first camera, a second camera, and a third camera;
  • the first camera faces the product from above; the second camera and the third camera are respectively disposed on either side of the product.
  • outputting the preset image of the product and the quantity of the product to the display unit includes generating a first matrix whose rows pair each product with its quantity;
  • where the first product entry is generated from the first product type and the first data is the quantity of that type, and the Nth product entry is generated from the Nth product type and the Mth data is the quantity of the Nth type; each product entry is then replaced by its preset image to obtain a second matrix, which is output to the display unit.
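As a non-normative illustration, the following Python sketch shows one way the first and second matrices could be built from recognition output; the `PRESET_IMAGES` lookup table and the recognition-result format are assumptions, not part of the disclosure.

```python
# Hypothetical sketch: build the display matrices from recognition results.
# PRESET_IMAGES maps a product type to the path of its stored preset image;
# `detections` is an assumed list of recognized product-type labels.
from collections import Counter

PRESET_IMAGES = {
    "cola_330ml": "presets/cola_330ml.png",  # assumed catalog entries
    "chips_50g": "presets/chips_50g.png",
}

def build_display_matrix(detections):
    # First matrix: rows of (product type, quantity).
    counts = Counter(detections)
    first_matrix = [(product, qty) for product, qty in counts.items()]
    # Second matrix: replace each product type with its preset image.
    second_matrix = [(PRESET_IMAGES[p], qty) for p, qty in first_matrix]
    return second_matrix

print(build_display_matrix(["cola_330ml", "cola_330ml", "chips_50g"]))
# [('presets/cola_330ml.png', 2), ('presets/chips_50g.png', 1)]
```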
  • the human-machine interaction device for automatic settlement includes: a settlement module, wherein the settlement module is electrically connected to the identification unit and the display unit, respectively, for generating payment information according to the commodity information,
  • the display unit displays the payment information.
  • the human-machine interaction device for automatic settlement includes: a distance sensing unit disposed outside the product placement area and electrically connected to the display unit, for obtaining the distance between the user's payment gesture and the display unit and adjusting the display area of the payment information on the display unit according to that distance.
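A minimal sketch of this distance-based adjustment, assuming the sensor reports a distance in centimetres and the display is addressed by pixel regions; the breakpoints are illustrative only.

```python
# Hypothetical sketch: choose where (and how large) to draw the payment QR code
# based on the distance between the user's payment gesture and the display.
def payment_display_region(distance_cm, screen_w=1080, screen_h=1920):
    """Return (x, y, size) of the QR display area, centred on screen."""
    if distance_cm < 15:      # hand very close: smaller code is scannable
        size = 240
    elif distance_cm < 40:    # medium distance: enlarge for easier scanning
        size = 360
    else:                     # far away: maximum size
        size = 480
    x = (screen_w - size) // 2
    y = (screen_h - size) // 2
    return x, y, size
```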
  • the human-machine interaction device for automatic settlement includes: a review module electrically connected to the camera unit, the settlement module, and the display unit, respectively, for reading the payment result after payment is completed, acquiring a second product image via the camera unit, identifying the product information in it, comparing that information with the payment result, and displaying the comparison result via the display unit.
  • the human-machine interaction device for automatic settlement includes: an alarm module electrically connected to the review module and disposed near the product placement area, for obtaining the result of comparing the product information of the second product image with the payment result information, and issuing an alarm when the second product image contains unsettled products.
  • the identification unit is configured to detect unrecognizable items in the first product image and, when one is detected, to display a placement-correction prompt via the display unit.
  • the human-machine interaction device for automatic settlement includes: an indication module, the indication module is electrically connected to the identification unit, and when the identification unit outputs the commodity information, the indication module issues an indication signal.
  • the working process of the identifying unit includes the following:
  • the first product image is input to a neural network based recognition system, and the neural network based recognition system outputs the commodity information.
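For illustration only, here is a sketch of what the recognition step could look like using an off-the-shelf region-based detector (torchvision's Faster R-CNN, used as a stand-in for the patent's unspecified network); the class-name table is an assumption.

```python
# Hypothetical inference sketch using a region-based CNN (Faster R-CNN).
import torch
from collections import Counter
from torchvision.models.detection import fasterrcnn_resnet50_fpn

CLASS_NAMES = {1: "cola_330ml", 2: "chips_50g"}  # assumed label map

model = fasterrcnn_resnet50_fpn(num_classes=len(CLASS_NAMES) + 1)  # +1 background
model.eval()

def recognize(image_tensor, score_threshold=0.7):
    """image_tensor: CxHxW float tensor in [0,1]; returns {product: count}."""
    with torch.no_grad():
        output = model([image_tensor])[0]  # dict with boxes, labels, scores
    kept = [CLASS_NAMES[int(l)]
            for l, s in zip(output["labels"], output["scores"])
            if float(s) >= score_threshold and int(l) in CLASS_NAMES]
    return Counter(kept)

# counts = recognize(torch.rand(3, 480, 640))  # e.g. Counter({'cola_330ml': 2})
```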
  • the neural-network-based recognition system includes a first neural network based on a region-based convolutional neural network, and the neural-network-based product identification method includes the following steps:
  • (a1) inputting the first image into the first neural network, which outputs first product information; inputting the Nth image into the first neural network, which outputs Nth product information;
  • (b1) determining whether the Nth product information is contained in the first product information: if yes, the first product information is output as the information of the products to be detected; if no, the subsequent steps are performed;
  • step (a1) further includes weighing the products to be detected to obtain the actually weighed total weight;
  • step (b1) further includes step (b2): calculating the total product weight from the first product information, comparing it with the actually weighed total to obtain difference data, and determining whether the difference data is less than or equal to a preset threshold: if yes, the first product information is output as the information of the products to be detected; if no, a feedback prompt is output.
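A minimal sketch of step (b2), assuming a per-product unit-weight table; the threshold policy follows the options listed later in the text (a fixed value, or a fraction of the lightest item's weight).

```python
# Hypothetical sketch of the weight cross-check in step (b2).
UNIT_WEIGHTS_G = {"cola_330ml": 355.0, "chips_50g": 58.0}  # assumed catalog weights

def weight_check(first_info, scale_total_g, threshold_g=None):
    """first_info: {product: quantity}; scale_total_g: weight-sensor reading."""
    expected_g = sum(UNIT_WEIGHTS_G[p] * q for p, q in first_info.items())
    if threshold_g is None:
        # one option from the disclosure: a fraction of the lightest item's weight
        threshold_g = 0.5 * min(UNIT_WEIGHTS_G[p] for p in first_info)
    diff = abs(expected_g - scale_total_g)
    if diff <= threshold_g:
        return "output_first_info"
    return "feedback_prompt"  # e.g. a stacking prompt or an error report

print(weight_check({"cola_330ml": 2, "chips_50g": 1}, scale_total_g=768.0))
```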
  • the neural-network-based recognition system includes: a first neural network based on a region-based convolutional neural network;
  • the neural-network-based product identification method includes the steps:
  • (a3) inputting the first image into the first neural network, which outputs first product information; inputting the Nth image, which outputs Nth product information;
  • (b3) determining whether the Nth product information is contained in the first product information: if yes, the first product information is output as the information of the products to be detected; if no, the subsequent steps are performed;
  • (c3) calculating the total product weight from the first product information, comparing it with the actually weighed total to obtain difference data, and determining whether the difference data is less than or equal to a preset threshold: if yes, the first product information is output as the information of the products to be detected; if no, a feedback prompt is output.
  • alternatively, step (b1) determines whether the Nth product information is consistent with the first product information; if the determination result is yes, the first product information is output as the information of the products to be detected; if no, the subsequent steps are performed.
  • alternatively, step (b3) determines whether the Nth product information is consistent with the first product information; if the determination result is yes, the first product information is output as the information of the products to be detected; if no, the subsequent steps are performed.
  • the neural-network-based recognition system comprises a second neural network based on a region-based convolutional neural network, and is obtained by a method comprising the steps of: obtaining a first image set containing multi-angle images of each product to be detected;
  • training the second neural network with the first image set to obtain the first neural network.
  • the method for training the second neural network is: using supervised learning, training the second neural network with the first image set to obtain a third neural network; obtaining a second image set of images of the products to be detected; and training the third neural network with the second image set to obtain the first neural network;
  • the second image set includes images of products to be detected whose product information was output by the neural-network-based recognition system;
  • training the second neural network is a supervised learning method;
  • the process of training the third neural network with the second image set is unsupervised learning.
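A compressed sketch of this two-stage training, using torchvision's Faster R-CNN as a stand-in region-based CNN: the supervised stage uses labelled multi-angle images, and the second stage fine-tunes on images gathered from deployed recognitions (here reduced to self-labelled fine-tuning, one common reading of the unsupervised step). The data loaders are assumptions and left commented out.

```python
# Hypothetical two-stage training sketch (supervised pre-training, then
# fine-tuning on images gathered from the deployed recognition system).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

def train_one_epoch(model, loader, optimizer):
    model.train()
    for images, targets in loader:  # targets: [{'boxes': ..., 'labels': ...}, ...]
        losses = model(images, targets)  # dict of detection loss components
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Stage 1: supervised training on the first (multi-angle, labelled) image set
# yields the "third neural network".
model = fasterrcnn_resnet50_fpn(num_classes=101)  # assumed: 100 products + background
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
# train_one_epoch(model, first_image_loader, optimizer)   # loader is an assumption

# Stage 2: fine-tune on the second image set, i.e. images whose product
# information was output by the deployed system (labels come from the
# system's own outputs rather than from human annotation).
# train_one_epoch(model, second_image_loader, optimizer)  # loader is an assumption
```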
  • the neural network-based commodity identification method includes the following steps:
  • step (d3): when the determination result in step (c3) is no, identifying the products that differ between the first product information and the Nth product information;
  • step (e3): acquiring a difference image set of the differing products in step (d3), and using the difference image set for reinforcement training of the first neural network.
  • Another aspect of the present application provides an unmanned merchandise checkout counter, wherein the unmanned merchandise checkout counter adopts at least one of the above-described human-machine interaction devices for automatic settlement;
  • the human-machine interaction device for automatic settlement identifies the types and quantities of the products on the unmanned checkout counter and displays the product images and/or quantities in the form of preset images.
  • Another aspect of the present application provides an automatic store or unattended convenience store that employs at least one of the above human-machine interaction devices for automatic settlement and/or at least one of the above unmanned checkout counters.
  • the human-machine interaction device of the automatic service desk provided by the present application includes:
  • a camera unit for acquiring an image of a product
  • An identification unit configured to process an image of the product acquired by the camera unit, and output commodity information
  • a display unit configured to display commodity information output by the identification unit
  • the identification unit is electrically connected to the imaging unit and the display unit.
  • the camera unit includes at least two cameras that acquire product images of different angles and/or depths of field.
  • the camera unit includes at least 2 to 4 cameras that acquire product images of different angles and/or depths of field.
  • the camera unit includes a first camera, a second camera, and a third camera;
  • the first camera, the second camera, and the third camera respectively acquire product images from different angles.
  • the identification unit identifies the product types and quantities in the product image acquired by the camera unit and outputs them to the display unit as preset images and quantities.
  • identifying a product category in the product image acquired by the camera unit including:
  • the first product to the Nth product are generated from the product types recognized in the image acquired by the camera unit, where N is the number of product types.
  • identifying the product quantities in the product image acquired by the camera unit includes: generating the first data to the Mth data, where the first data is the quantity of the first product and the Mth data is the quantity of the Nth product.
  • outputting the preset images and quantities of the products to the display unit includes the steps of: generating the first matrix (first product, first data; …; Nth product, Mth data), replacing each product with its preset image to obtain the second matrix, and outputting the second matrix to the display unit.
  • the identification unit operates a neural network image recognition system.
  • the working process of the identifying unit includes the following:
  • the image containing the commodity to be inspected is input to a neural network-based recognition system that outputs the commodity information to be detected.
  • the acquired image containing the products to be detected is at least a two-dimensional image.
  • the acquired images containing the products to be detected include at least a first image to an Nth image with different angles and/or depths of field; N ≥ 2.
  • the neural network recognition system comprises a first neural network based on a region-based convolutional neural network, and the product identification method includes steps (a1) and (b1): the first to Nth images are input to the first neural network, which outputs the first to Nth product information; if the Nth product information is contained in the first product information, the first product information is output as the information of the products to be detected, otherwise a feedback result is output.
  • alternatively, the method includes steps (a2) and (b2): the first image is input to the first neural network, which outputs first product information; the total product weight in the first product information is compared with the actually weighed total to obtain difference data, and if the difference data is less than or equal to the preset threshold, the first product information is output as the information of the products to be detected, otherwise a second result is output.
  • the method of determining, in steps (b1) and (b3), whether the Nth product information is contained in the first product information may be: determining whether each product type in the Nth product information is present in the first product information;
  • or: determining whether the total quantity of products in the Nth product information is less than or equal to that in the first product information;
  • or: determining whether the quantity of each product in the Nth product information is less than or equal to the quantity of that product in the first product information.
  • alternatively, steps (b1) and (b3) determine whether the Nth product information is consistent with the first product information;
  • if so, the first product information is output as the information of the products to be detected;
  • consistency of the Nth product information with the first product information in steps (b1) and (b3) means that the product types are identical and the quantity of each product is identical.
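The containment and consistency tests of steps (b1)/(b3) reduce to simple multiset comparisons; a sketch follows, assuming recognition results are given as `{product: quantity}` dicts.

```python
# Hypothetical sketch of the (b1)/(b3) comparisons between the first and
# Nth recognition results, each given as {product_type: quantity}.
def contained_in(nth_info, first_info):
    """Every type in nth_info exists in first_info with quantity <= first's."""
    return all(q <= first_info.get(p, 0) for p, q in nth_info.items())

def consistent(nth_info, first_info):
    """Same product types and the same quantity for each type."""
    return nth_info == first_info

first = {"cola_330ml": 2, "chips_50g": 1}
print(contained_in({"cola_330ml": 2}, first))  # True
print(consistent({"cola_330ml": 2}, first))    # False -> perform subsequent steps
```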
  • the preset threshold in steps (b2) and (c3) is a value in the range 0.1 g to 10 kg;
  • or the weight of the lightest product in the first product information;
  • or 10% to 80% of the weight of the lightest product in the first product information.
  • the feedback result in the step (b2) and the step (c3) includes at least one of a stacking prompt and an error report.
  • the number of products to be detected in the image containing the products to be detected is ≥ 1, for example 1 to 1000;
  • the number of product types to be detected in the image is ≥ 1, for example 1 to 1000.
  • the neural-network-based recognition system comprises a second neural network based on a region-based convolutional neural network, and is obtained by a method comprising: obtaining a first image set containing multi-angle images of each product to be detected;
  • training the second neural network with the first image set to obtain the first neural network.
  • the method of training the second neural network is a supervised learning method.
  • the method for training the second neural network is:
  • the third neural network is trained with the second image set to obtain a first neural network.
  • the second image set includes images of products to be detected whose product information was output by the neural-network-based recognition system;
  • the second neural network identifies products with an accuracy of 80% or more;
  • the process of training the third neural network with the second image set is unsupervised learning.
  • the commodity identification method includes the following steps:
  • step (c1): when the determination result in step (b1) is no, identifying the products that differ between the first product information and the Nth product information;
  • step (d1): acquiring a second image set of the differing products in step (c1), and using the second image set for reinforcement training of the first neural network.
  • the commodity identification method includes the following steps:
  • when the determination result in step (c3) is yes, the first product information is output as the information of the products to be detected;
  • step (d3): when the determination result in step (c3) is no, identifying the products that differ between the first product information and the Nth product information.
  • the commodity identification method includes the following steps:
  • step (d2): acquiring a second image set of the products identified in step (c2), and using the second image set for reinforcement training of the first neural network.
  • the commodity identification method includes the following steps:
  • the first product information is output as the to-be-detected product information
  • a merchandise identification device comprising:
  • a camera unit for acquiring an image of a product
  • a recognition unit configured to run a neural network identification system, and process an image of the product acquired by the camera unit;
  • a display unit configured to display commodity information output by the identification unit
  • the identification unit is electrically connected to the imaging unit and the display unit.
  • the camera unit includes at least two cameras that acquire product images of different angles and/or depths of field.
  • the camera unit includes at least 2 to 4 cameras that acquire product images of different angles and/or depths of field.
  • the camera unit includes a first camera and a second camera
  • the first camera and the second camera respectively acquire product images from different angles.
  • the article identification device comprises a stage, the stage comprising a weight sensor for measuring the total weight of the goods on the stage;
  • the weight sensor is electrically connected to the identification unit, and the total weight of the goods on the stage is input to the identification unit.
  • a serious anomaly arises when an item in a stack, or at an extreme angle, is completely or mostly obscured by another item, leaving too little visible detail to identify it.
  • the present application solves this by incorporating the weight sensor: the total weight implied by the recognition result is compared with the weight actually measured by the sensor in the identification device, and if the two do not match, a stacking conclusion is given.
  • in a low-cost embodiment, the camera unit comprises two ordinary web cameras, two fixed-angle mounts, a computer capable of continuously running an image-upload program, and a high-precision weight sensor.
  • the main workflow is: an image-capture program runs on the computer and uploads the images captured by the two cameras simultaneously to a remote server, which returns the recognition result.
  • the cost of this solution is extremely low, and the computer needs only the most basic configuration.
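A sketch of this capture-and-upload loop, assuming OpenCV cameras, a JSON-over-HTTP recognition endpoint at a hypothetical URL, and Base64-encoded JPEG payloads (matching the Base64 step in the checkout sequence later in the text); the endpoint and response format are assumptions.

```python
# Hypothetical client sketch: capture frames from two webcams and upload
# them to a remote recognition server, which returns the recognition result.
import base64
import cv2        # OpenCV, used for camera capture and JPEG encoding
import requests

RECOGNITION_URL = "https://example.com/api/recognize"  # assumed endpoint

def grab_jpeg_b64(camera_index):
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"camera {camera_index} returned no frame")
    ok, jpeg = cv2.imencode(".jpg", frame)
    return base64.b64encode(jpeg.tobytes()).decode("ascii")

def recognize_once():
    payload = {"images": [grab_jpeg_b64(0), grab_jpeg_b64(1)]}
    resp = requests.post(RECOGNITION_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()  # assumed shape: {"products": ..., "quantities": ...}
```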
  • in another embodiment, the camera unit comprises 2 to 4 fixed-focus high-definition cameras, a corresponding number of adjustable-angle mounts, a high-precision weight sensor, and a computer with at least 2 GB of graphics memory.
  • the main workflow is: an image-capture program runs on the computer and recognizes the images captured by the cameras locally.
  • the item identification device can batch-detect items (the low-cost solution), using multiple ordinary cameras to obtain images of the items to be detected from different angles.
  • multiple cameras at different angles solve the occlusion that placement angle and item height cause in a single 2D image; in general, three cameras capture the information needed for identification without blind angles, and with suitable camera positions two cameras can also achieve good results.
  • an unmanned merchandise cash register is provided, wherein the unmanned merchandise cash register adopts at least one of the above-mentioned human-machine interaction devices;
  • the human-machine interaction device identifies the type and quantity of the merchandise on the unmanned merchandise checkout, and displays the image and quantity of the merchandise in the form of a preset image.
  • an automatic store or unattended convenience store is provided, which employs at least one of the above-described human-machine interaction devices and/or at least one of the above-mentioned unmanned merchandise cash registers.
  • the electrical connection in the present application includes the transmission of data and control commands between the devices, and the transmission modes include wireless transmission, wired transmission or communication transmission.
  • the human-machine interaction device for automatic settlement realizes an intuitive, sensory form of interaction by displaying preset pictures of products, improving the shopping experience in unattended convenience stores.
  • compared with text, graphics are intuitive and can supply useful information that text alone misses; when people read text, most audiences engage the brain only lightly, whereas the information conveyed by graphics is easy to absorb and conveys the system's content well. Presenting large amounts of information on screen as text increases the cognitive burden on the human brain, while combining visual and auditory means improves overall efficiency.
  • the human-machine interaction device for automatic settlement provided by the present application captures product images with ordinary cameras and enables rapid detection of batch goods, greatly reducing the cost of product identification while increasing its speed.
  • the device makes full use of the neural network to identify products and cross-checks the product information obtained from multiple images, avoiding the recognition error rate caused by over-reliance on single-image recognition in the existing image recognition field and improving accuracy. No existing barcodes or RFID tags are needed for identification, reducing the cost of use.
  • the device is based on deep learning and learns continuously; it needs no third-party identifier to identify goods, and the user only needs to place the purchased goods on the counter for recognition. Through continuous deep learning, the recognition accuracy of the method increases with the frequency of use. Neural network recognition and multi-image comparison are used to correct the recognition result, and the image sets obtained from recognition results are used to train the neural network system, continuously improving recognition accuracy.
  • the device provided by the present application can be used in the unmanned checkout counter of an automatic store or an unattended convenience store, completing product identification and settlement at low cost and high efficiency in self-checkout scenarios.
  • FIG. 1 is a schematic structural diagram of a human-machine interaction apparatus for automatic settlement provided by the present application
  • FIG. 3 is a schematic diagram of a state of use of an embodiment of a human-machine interaction apparatus for automatic settlement provided by the present application;
  • FIG. 4 shows the usage flow in an embodiment of the present application, including: a) the recognition result display; b) the payment operation; c) the display unit after the payment information is adjusted; d) the display of the post-payment check result;
  • e) the display unit when a product cannot be recognized; f) the display unit and the client display.
  • FIG. 5 is a schematic block diagram of a flow of a neural network based product identification method in a first preferred embodiment of the present application
  • FIG. 6 is a schematic block diagram showing a flow of a neural network-based commodity identification method in a second preferred embodiment of the present application.
  • FIG. 7 is a schematic block diagram showing a flow of a neural network-based product identification method in a third preferred embodiment of the present application.
  • FIG. 8 is a schematic block diagram showing a flow of a neural network-based product identification method in a fourth preferred embodiment of the present application.
  • FIG. 9 is a schematic block diagram showing a flow of a neural network-based product identification method in a fifth preferred embodiment of the present application.
  • Fig. 11 is a timing chart of an unmanned convenience store to which an unmanned merchandise checkout counter according to an embodiment of the present application is applied.
  • the human-machine interaction apparatus for automatic settlement provided by the present application includes:
  • the camera unit 100 is configured to acquire a first product image of the product in the product placement area 5;
  • the identification unit 200 is configured to identify the first product image and output commodity information
  • a display unit 4 configured to display the commodity information
  • the image capturing unit 100 is disposed outside the product placement area 5 and electrically connected to the identification unit 200, and the identification unit 200 is electrically connected to the display unit 4;
  • the identification unit 200 identifies the item type and the item quantity in the first item image, and outputs the preset image and the item quantity of the item to the display unit 4.
  • the camera unit 100 includes a first camera 3, a second camera 2, and a third camera 1;
  • the first camera 3 faces the product from above; the second camera 2 and the third camera 1 are respectively disposed on either side of the product.
  • the preset image is an image that shows all of a product's characteristic information and is retrieved by the product category name; preset images are stored in one-to-one correspondence with product names and product categories.
  • outputting the preset image of the product and the quantity of the product to the display unit 4 includes generating a first matrix whose rows pair each product with its quantity;
  • where the first product entry is generated from the first product type and the first data is the quantity of that type, and the Nth product entry is generated from the Nth product type and the Mth data is the quantity of the Nth type; each product entry is then replaced by its preset image to obtain a second matrix, which is output to the display unit 4.
  • displaying the product types and preset product images as a full column matrix gives the user an intuitive view of the purchase and improves the user experience.
  • the human-machine interaction device for automatic settlement includes: a settlement module 300 electrically connected to the identification unit 200 and the display unit 4, respectively, for generating payment information from the product information; the payment information is displayed by the display unit 4.
  • the settlement module 300 here may be part of the display unit 4, a separate display device, or another common billing device.
  • the human-machine interaction device for automatic settlement includes: a distance sensing unit 500, disposed outside the product placement area 5 and electrically connected to the display unit 4, for obtaining the distance between the user's payment gesture and the display unit 4 and adjusting the display area of the payment information on the display unit 4 according to that distance.
  • the human-machine interaction device for automatic settlement includes: a review module 700 electrically connected to the camera unit 100, the settlement module 300, and the display unit 4, respectively, for reading the payment result information after payment is completed, acquiring the second product image via the camera unit 100, identifying the product information in the second product image, comparing that information with the payment result information, and displaying the comparison result via the display unit 4.
  • the human-machine interaction device for automatic settlement includes: an alarm module 800 electrically connected to the review module 700 and disposed near the merchandise placement area.
  • the alarm module 800 can be an audible alarm, an eye-catching indicator light, or the like.
  • the alarm module 800 is configured to obtain a result obtained by comparing the product information of the second product image with the payment result information, and issue an alarm prompt when the second product image includes an unsettled commodity.
  • the identification unit 200 is configured to detect unrecognizable items in the first product image and, when one is detected, to display a placement-correction prompt via the display unit 4.
  • the human-machine interaction device for automatic settlement includes: an indication module 6, the indication module 6 is electrically connected to the identification unit 200, and when the identification unit 200 outputs the commodity information, the indication Module 6 issues an indication signal.
  • the indicating module 6 can be a device capable of emitting a stimulation prompt such as an indicator light or an alarm light.
  • FIG. 2 shows a working flowchart of the human-machine interaction device for automatic settlement.
  • Step S100 The camera acquires a first product image of the commodity to be settled in the product placement area 5;
  • Step S200 identifying the product image, and outputting product information and/or payment information
  • Step S300: determining whether payment is completed; if it is, acquiring a second product image of the products to be settled, and determining whether the second product image contains unsettled products: if yes, outputting alarm information; if not, prompting that settlement is complete.
  • the working process of the device is as follows: the user places the products to be settled in the area under the cameras; after the product image is obtained, it is processed to extract the product information it contains, including but not limited to product types, quantities, and product images.
  • the item information is displayed in the display area, and payment information is generated for the user to pay.
  • the camera again acquires the image of the product still in the payment area, obtains the second product image, and recognizes the product contained in the second image again, and checks the payment result to determine whether there is an unpaid product in the second product image. If yes, the alarm information is output, and the unpaid commodity information and its corresponding payment information are displayed.
  • the alarm information here can be, but is not limited to, a screen display, an alarm sound, or a light indication. If not, the settlement is completed.
  • step S300 further includes a human-computer interaction step for scan-code payment: acquiring and analyzing the user's behavior, and adjusting the camera orientation and the display position of the payment information according to that behavior. This improves the user's payment experience, reduces scanning time, and improves settlement efficiency.
  • in the shopping scene, the image recognition and interaction of the human-machine interaction device for automatic settlement mainly cover automatically extracting, describing, tracking, identifying, and analyzing the behavior of moving objects. If the camera is seen as the human eye, the intelligent interactive system or device can be seen as the human brain: it uses the computer's powerful data-processing capability to analyze the massive data in the video at high speed, filter out information the user does not care about, and provide only the key information useful to the shopper.
  • the service counter provided is shown in FIGS. 3 and 5, including: 1, third camera; 2, second camera; 3, first camera; 4, display unit; 5, product placement area; 6, indication module; 7, sound hole; 8, first distance sensor; 9, second distance sensor; 10, third distance sensor.
  • the human-machine interaction device for automatic settlement works as follows:
  • an image capturing unit 100 is disposed in a photographable area of the product placement area 5.
  • the image capturing unit 100 includes a third camera 1, a second camera 2, and a first camera 3, spaced above the product placement area 5 and used to obtain images of the products in the area.
  • the display unit 4 disposed at an angle to the front of the merchandise placement area 5 is for displaying information to the user.
  • the display unit 4 may be a display screen or a plurality of display screens.
  • the product placement area 5 is used for placing the goods to be settled; an indicator light 6 arranged around the product placement area 5 lights according to the different situations that may occur.
  • the sounding hole 7 is disposed at a lower portion of the side of the product placement area 5, and an alarm module 800, which can be a speaker, is disposed at the sounding hole 7, and the user is assisted by the voice to complete the shopping, thereby improving the user's shopping comfort experience.
  • one side of the merchandise placement area 5 is provided with a distance sensor module, and the distance sensor module comprises a first distance sensor 8, a second distance sensor 9 or/and a third distance sensor 10.
  • the first distance sensor 8 and the second distance sensor 9 are respectively disposed on opposite sides of the commodity placement area 5.
  • the third distance sensor 10 is disposed directly in front of the merchandise placement area 5. This setting can ensure accurate capture of the user's movement situation at different angles of the product placement area 5.
  • the interactive interface on the display unit 4 is displayed correspondingly.
  • after the camera acquires the product image and the image is recognized, the result is displayed on the display unit 4.
  • the indication module 6 is green, and a part of the display unit 4 displays payment information, such as a payment two-dimensional code.
  • the QR code to be scanned is placed in an appropriate area of the display unit 4, improving the user's scanning experience, avoiding repeated adjustment of distance and position, shortening scan time, and improving checkout efficiency.
  • the QR code is ultimately presented at a suitable location on the display unit 4, depending on the position of the user's handset.
  • the review module 700 is further included, and the review module 700 is electrically connected to the camera unit 100, the settlement module 300, and the display unit 4, respectively.
  • the review module 700 first reads the payment result; once payment is complete, it issues a control command to the camera unit 100 to acquire the product image of the product placement area 5 again (the second product image), then identifies the product information in the second product image and compares it with the payment result.
  • the information and categories of unsettled items found by image recognition are presented on the display unit 4, together with checkout prompts for the goods already settled.
  • when an unsettled item remains in the product placement area 5, the speaker also sounds to alert the user, avoiding repeated settlement of items.
  • the display unit 4 displays a message such as "Please place the product correctly", and a voice prompt asks the user to place the product correctly for settlement.
  • the display unit 4 and the mobile terminal simultaneously display various related information such as the consumption amount, the member icon, and the member credit, as shown in FIG. 4f).
  • the mobile terminal can also recommend promotional product combinations based on the user's consumption habits, helping shoppers complete their purchases more conveniently and quickly and encouraging repeat consumption.
  • the working process of the identification unit 200 includes the following:
  • an image containing the product to be inspected is at least a two-dimensional image
  • Obtaining an image containing the commodity to be inspected includes at least an angle of an image and/or a depth of field different from the first image to the Nth image; N ⁇ 2;
  • the neural network-based recognition system includes a first neural network based on a regional convolutional neural network; the neural network-based commodity identification method includes the steps of:
  • the first product information is output as the to-be-detected product information; if the determination result is no, the feedback prompt is output.
  • the neural-network-based product identification method provided by the present application is mainly used in unattended environments, where the user obtains settlement information for the products by self-service.
  • the method makes full use of the neural network to identify products and cross-checks the product information obtained from multiple images, avoiding the recognition error rate caused by over-reliance on single-image recognition in the existing image recognition field and improving recognition accuracy.
  • the feedback prompt here includes at least one of a stacking prompt and an error report.
  • the method imposes no particular limit on the types and quantities of goods processed; for example, the number of products to be detected in the image containing the products to be detected is ≥ 1;
  • for example, the number of products to be detected is 1 to 1000;
  • the number of product types to be detected in the image is ≥ 1;
  • for example, the number of product types to be detected is 1 to 1000.
  • the product information compared includes the product types and the quantity of each product; it is determined whether the product types and/or quantities are consistent.
  • the neural-network-based product identification method provided by the present application is used for settlement in unattended environments and needs only ordinary networked cameras to identify goods accurately. No RFID tags are needed at all, reducing cost, and settlement failures caused by misoperation are avoided.
  • the first image is a frontal image of the products to be detected;
  • using the frontal image as the main image for recognition improves recognition accuracy;
  • increasing N improves the recognition accuracy of the neural network and benefits the accuracy of subsequent recognition results.
  • step (a1) further comprises weighing the products to be detected to obtain the actually weighed total weight; step (b1) further includes step (b2): calculating the total product weight from the first product information, comparing it with the actually weighed total to obtain difference data, and determining whether the difference data is less than or equal to a preset threshold: if yes, the first product information is output as the information of the products to be detected; if no, a feedback prompt is output. At the same time, the recognized products can be corrected by analyzing the product weights contained in the product information, improving the accuracy of the image recognition result.
  • the neural network-based identification system includes a first neural network based on a regional convolutional neural network; the neural network-based commodity identification method includes the steps of: (a3) inputting the first image into the first neural network, a neural network outputs first commodity information; the Nth image is input to the first neural network, the first neural network outputs the Nth commodity information; (b3) determines whether the Nth commodity information is included in the first commodity information; If yes, the first product information is output as the product information to be detected; if the determination result is no, the subsequent steps are performed; (c3) calculating the total weight of the goods in the first product information, and comparing with the total weight of the actually weighed goods to obtain a difference The data is used to determine whether the difference data is less than or equal to a preset threshold: if the determination result is yes, the first product information is output as the to-be-detected product information; if the determination result is no, the feedback prompt is output.
  • the preset threshold here may be a value in the range 0.1 g to 10 kg;
  • the preset threshold may also be the weight of the lightest product in the first product information;
  • the preset threshold may also be 10% to 80% of the weight of the lightest product in the first product information.
  • the method of determining, in steps (b1) and (b3), whether the Nth product information is contained in the first product information is to determine whether each product type in the Nth product information is present in the first product information;
  • or to determine whether the quantity of each product in the Nth product information is less than or equal to the quantity of that product in the first product information.
  • steps (b1) and (b3) may instead determine whether the Nth product information is consistent with the first product information; if the determination result is yes, the first product information is output as the information of the products to be detected; if no, the subsequent steps are performed.
  • the Nth item information in the step (b1) and the step (b3) is consistent with the first item information, including the same item type and the quantity of each item are consistent.
  • the neural-network-based recognition system comprises a second neural network based on a region-based convolutional neural network;
  • the neural-network-based recognition system is obtained by a method comprising the steps of: obtaining a first image set containing multi-angle images of each product to be detected, and training the second neural network with the first image set to obtain the first neural network.
  • the recognition results obtained in use can then be used to train the first neural network, realizing automatic error correction through deep learning, so that the recognition accuracy of the recognition system increases automatically as more products are identified. This can be done according to existing methods. Training with multi-angle images of the products to be detected improves the recognition system's ability to cope with occlusion of goods.
  • the method of training the second neural network is a supervised learning method.
  • the method for training the second neural network is: using supervised learning, training the second neural network with the first image set to obtain a third neural network; obtaining a second image set of the product image to be detected; training with the second image set The third neural network obtains the first neural network.
  • the second image set includes images of the products to be detected whose product information was output by the neural-network-based recognition system.
  • the recognition accuracy of the second neural network is 80% or more.
  • the process of training the third neural network with the second image set is unsupervised learning, and can be done according to existing methods.
  • the neural network based commodity identification method comprises the steps of:
  • the error correction capability of the system can be further improved by training the first neural network with the difference image set. At the same time, this operation can also be used in the method shown in FIG.
  • the neural network-based commodity identification method comprises the steps of:
  • This step can also be applied to the method shown in FIG. 7, which is not described here.
  • when the determination result is no, the first product information from repeated unrecognizable cases is collected and used to train the first neural network, improving the first neural network's ability to recognize such cases.
  • the neural network-based product identification method when used, places the product to be detected on the stage, and N cameras surround the product to be detected.
  • the images of the respective angles of the products to be detected are obtained by N cameras, and are respectively recorded as P1, P2, . . . , PN.
  • the camera located directly above the stage is the main camera, and is recorded as the first camera.
  • the image acquired by the camera is the first image P1.
  • P1, P2, …, PN are uploaded to the local or cloud recognition server, each picture is recognized, and the resulting product information is recorded as R1, R2, …, RN respectively; the product information includes the product types and the quantity of each product.
  • R1 is the first product information, R2 the second product information, and so on;
  • when the comparison succeeds, R1 is output as the information of the products to be detected.
  • the total weight of the commodity in R1 is calculated, and the absolute value of the result obtained by subtracting the total weight of the commodity from the actual weighing is used as the difference data to determine whether the difference data is less than or equal to a preset threshold:
  • R1 is output as the product information to be detected, and the product information list including the category, quantity, and price of the product is output;
  • Another aspect of the present application provides an unmanned merchandise checkout counter, wherein the unmanned merchandise checkout counter adopts at least one of the above-described human-machine interaction devices for automatic settlement;
  • the human-machine interaction device for automatic settlement identifies the types and quantities of the products on the unmanned checkout counter and displays the product images and/or quantities in the form of preset images; replacing product-name text with product images improves the shopping experience.
  • Another aspect of the present application provides an automatic store or unattended convenience store that employs at least one of the above human-machine interaction devices for automatic settlement and/or at least one of the above unmanned checkout counters.
  • FIG. 11 is a timing diagram of an embodiment in which the neural-network-based article identification device is used in a self-service checkout counter; it also serves as an implementation example of the self-service checkout counter provided by the present application. As shown in FIG. 11, with a neural-network-based product identification device employing any of the aforementioned identification methods, the customer's shopping steps in the unattended convenience store are as follows:
  • the customer places the selected goods on the stage of the self-service checkout counter, which is also the stage of the neural-network-based product identification device;
  • the stage senses a weight of >0, triggering a neural network-based commodity identification device to initiate a product identification program
  • the cameras photograph the goods on the stage, obtain product pictures, Base64-encode them, and send them to the image recognition server for image recognition (a control-flow sketch follows this sequence);
  • the order processing interface is requested to generate an order
  • otherwise, a stacking prompt is displayed on the operation interface, prompting the customer to move the products so that the cameras can capture goods stacked underneath; the cameras re-photograph the goods on the stage and obtain new product pictures, until the difference data is less than or equal to the preset threshold, after which an order is requested from the order processing interface;
  • the order processing interface receives the generated order request, issues a payment QR code string, and generates a payment two-dimensional code on the operation interface;
  • the customer scans the payment QR code
  • the message SOCKET sends a payment-success message, and the goods on the stage are demagnetized;
  • the message SOCKET sends a face recognition message to the secure channel
  • the customer carries the goods through a security channel that includes a detection device: if no non-demagnetized tag is detected, the door opens and the customer walks out of the unattended convenience store; if a non-demagnetized tag is detected, an unpaid-goods warning is issued and the door does not open.
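Pulling the sequence above together, here is a control-flow sketch of the checkout counter; every component interface (scale, recognizer, order service, UI, demagnetizer) is an assumed stand-in, not an API from the disclosure.

```python
# Hypothetical end-to-end checkout loop for the unattended counter; all
# component objects are assumed stand-ins with the methods shown.
import time

def checkout_loop(scale, recognizer, orders, ui, demagnetizer):
    while True:
        if scale.read_grams() <= 0:       # wait until goods are placed
            time.sleep(0.1)
            continue
        while True:
            info = recognizer.recognize()  # {product: quantity} from the cameras
            diff = abs(recognizer.expected_weight(info) - scale.read_grams())
            if diff <= recognizer.threshold(info):
                break                      # weights agree: no stacking suspected
            ui.show_stacking_prompt()      # ask the customer to move items
        order = orders.create(info)        # order service returns a QR string
        ui.show_qr(order.qr_string)
        order.wait_for_payment()           # blocks until payment succeeds
        demagnetizer.demagnetize()         # release the goods for the exit gate
        ui.show_receipt(order)
```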

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Cash Registers Or Receiving Machines (AREA)

Abstract

The present application discloses a human-machine interaction device for automatic settlement and applications thereof. The device includes: a camera unit for acquiring a first product image of the products in a product placement area; an identification unit that identifies the first product image and outputs product information; and a display unit for displaying the product information. The camera unit is arranged outside the product placement area and electrically connected to the identification unit, and the identification unit is electrically connected to the display unit. The identification unit identifies the product types and quantities in the first product image and outputs the preset images and quantities of the products to the display unit. The device accurately identifies product features, extracts the key information from product images by analysis, reorganizes it, and feeds it back to the user, improving the shopping experience in unattended convenience stores. The application also discloses an unattended service counter, automatic store, or unattended convenience store employing the human-machine interaction device for automatic settlement.

Description

Human-machine interaction device for automatic settlement and application thereof

TECHNICAL FIELD

The present application relates to a human-machine interaction device for automatic settlement and applications thereof, and belongs to the field of deep-learning neural networks and image recognition.

BACKGROUND

In existing offline self-checkout scenarios, machine identification of products falls into two main categories:

The first is based on RFID electronic tags: in a database, each electronic tag with a unique ID is mapped to a product, and the tag is then attached to products of that type. At identification time, a reader first reads the tag's unique ID and looks up the corresponding product information in the database, thereby "identifying" the product. The weak point of this method is the RFID tag itself: tag costs are high, error correction is expensive, and identification is easily disturbed by electromagnetic interference.

The second is self-service barcode scanning, in which the user holds the product's barcode under a scanner and the machine "identifies" the product by reading the code. This method increases the user's operating burden and is error-prone; for example, scanning the same item several times produces duplicate counts, and some barcodes cannot be scanned correctly because the product is deformed. Moreover, the scanning process relies entirely on the user's honesty and therefore provides no basis for the subsequent anti-theft steps of self-checkout.

Shopping in existing unattended environments suffers from having no staff to ask, which degrades the shopping experience. Traditional image recognition is closely tied to back-end big data, but the recognition process focuses on the products contained in the image and collects little interaction data related to the user. Moreover, existing systems do not display product images back to the user after recognition; in an unattended environment the user has no one to ask and, when in doubt about the billing information, can only abandon the product, which degrades the shopping experience. In offline scenarios, existing computer-based product identification methods give little consideration to the user's sensory experience; information fed back by users is often ignored, so the comfort of self-service shopping cannot be improved, reducing users' acceptance of self-checkout.
SUMMARY

To solve the above technical problems, the present application provides a human-machine interaction device for automatic settlement. The device solves the problem of machine identification of products in self-checkout scenarios: ordinary cameras capture images of the products, and a deep-learning-based image recognition algorithm identifies the products directly. The method requires no third-party identifier; the user need only place the selected products on the counter for them to be identified. Product categories can be displayed as preset product images, improving recognition accuracy during self-checkout in the user's shopping process. By accurately identifying product features, extracting the key information from product images by analysis, reorganizing it, and finally feeding it back to the user, the device improves the shopping experience in unattended convenience stores.

The device includes:

a camera unit for acquiring a first product image of the products in a product placement area;

an identification unit for identifying the first product image and outputting product information;

a display unit for displaying the product information;

the camera unit is arranged outside the product placement area and electrically connected to the identification unit, and the identification unit is electrically connected to the display unit;

the identification unit identifies the product types and quantities in the first product image and outputs the preset images and quantities of the products to the display unit.

Optionally, the camera unit includes at least two cameras, which capture the products from different angles and/or different depths of field to obtain the first product image.

Optionally, the camera unit includes a first camera, a second camera, and a third camera;

the first camera faces the products from above; the second camera and the third camera are arranged on either side of the products.
Optionally, outputting the preset images and quantities of the products to the display unit includes the steps:

a) generating the following first matrix:

   first product    first data
   …
   Nth product      Mth data

where the first product entry is generated from the first product type and the first data is the quantity of that type; the Nth product entry is generated from the Nth product type and the Mth data is the quantity of the Nth type;

b) replacing the first product with the first preset image of the first product type, and repeating this operation until the Nth product is replaced with the Nth preset image of the Nth product type, yielding the following second matrix:

   first preset image    first data
   …
   Nth preset image      Mth data

c) outputting the second matrix to the display unit.
Optionally, the human-machine interaction device for automatic settlement comprises: a settlement module electrically connected to the recognition unit and the display unit respectively, for generating payment information according to the commodity information and displaying the payment information via the display unit.
Optionally, the human-machine interaction device for automatic settlement comprises: a distance sensing unit arranged on the outer side of the commodity placement area and electrically connected to the display unit, for obtaining the distance between the user's payment gesture and the display unit and adjusting the display region of the payment information on the display unit according to that distance.
Optionally, the human-machine interaction device for automatic settlement comprises: a review module electrically connected to the camera unit, the settlement module, and the display unit respectively,
for reading the payment result information after payment is completed, acquiring via the camera unit a second commodity image of the settled goods, recognizing the commodity information in the second commodity image, comparing the commodity information of the second commodity image with the payment result information, and displaying the comparison result via the display unit.
Optionally, the human-machine interaction device for automatic settlement comprises: an alarm module electrically connected to the review module and arranged near the commodity placement area, for obtaining the result of comparing the commodity information of the second commodity image with the payment result information and issuing an alarm prompt when the second commodity image contains unsettled commodities.
Optionally, the recognition unit detects unrecognizable items in the first commodity image and, when an unrecognizable item is detected, displays via the display unit a prompt asking for correct placement.
Optionally, the human-machine interaction device for automatic settlement comprises: an indication module electrically connected to the recognition unit; after the recognition unit outputs the commodity information, the indication module emits an indication signal.
Optionally, the working process of the recognition unit is as follows:
obtaining the first commodity image containing the commodities to be detected;
inputting the first commodity image into a neural-network-based recognition system, which outputs the commodity information.
Optionally, the neural-network-based recognition system comprises a first neural network based on a region convolutional neural network; the neural-network-based commodity recognition method comprises the following steps (a sketch of the containment check follows):
(a1) inputting the first image into the first neural network, which outputs first commodity information; inputting the Nth image into the first neural network, which outputs Nth commodity information;
(b1) judging whether the Nth commodity information is contained in the first commodity information;
if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
if the judgment is no, outputting a feedback prompt.
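A minimal Python sketch of the containment test in step (b1); the data shapes are assumptions, and of the several readings of "contained in" offered later in the text, the per-category count comparison is the one sketched here:

    # Is the Nth view's result "contained in" the first (main) view's result?
    # Every category seen in view N must appear in view 1 with at least that count.
    from collections import Counter

    def contained_in(info_n: Counter, info_1: Counter) -> bool:
        return all(info_1[cat] >= n for cat, n in info_n.items())

    view_1 = Counter({"cola_330ml": 2, "chips_70g": 1})   # main (top) camera
    view_n = Counter({"cola_330ml": 2})                   # side camera sees fewer items
    print(contained_in(view_n, view_1))   # True -> output view_1 as the result

One design note: containment rather than strict equality tolerates side views that see fewer items because of occlusion, while still catching views that contradict the main camera.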
Optionally, step (a1) further comprises a step of weighing the commodities to be detected to obtain the actually weighed total commodity weight;
step (b1) further comprises step (b2): computing the total commodity weight from the first commodity information, comparing it with the actually weighed total weight to obtain differential data, and judging whether the differential data is less than or equal to a preset threshold (a sketch of this weight check follows):
if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
if the judgment is no, outputting a feedback prompt.
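A sketch of the weight check in step (b2), assuming a catalog of per-unit weights; the 50%-of-lightest-item threshold used here is one of the options the text allows (a fixed value, the weight of the lightest recognized commodity, or 10% to 80% of it):

    # Compare the catalog weight implied by the recognition result with the
    # scale reading; a large gap suggests stacked or occluded items.
    unit_weights_g = {"cola_330ml": 355.0, "chips_70g": 78.0}

    def weight_check(info, scale_total_g):
        expected = sum(unit_weights_g[cat] * n for cat, n in info.items())
        threshold = 0.5 * min(unit_weights_g[cat] for cat in info)
        return abs(expected - scale_total_g) <= threshold

    print(weight_check({"cola_330ml": 2, "chips_70g": 1}, 788.0))  # True -> settle
    print(weight_check({"cola_330ml": 2, "chips_70g": 1}, 866.0))  # False -> stacking prompt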
Optionally, the neural-network-based recognition system comprises: a first neural network based on a region convolutional neural network;
the neural-network-based commodity recognition method comprises the steps of:
(a3) inputting the first image into the first neural network, which outputs first commodity information; inputting the Nth image into the first neural network, which outputs Nth commodity information;
(b3) judging whether the Nth commodity information is contained in the first commodity information;
if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
if the judgment is no, executing the subsequent steps;
(c3) computing the total commodity weight from the first commodity information, comparing it with the actually weighed total weight to obtain differential data, and judging whether the differential data is less than or equal to a preset threshold:
if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
if the judgment is no, outputting a feedback prompt.
Optionally, step (b1) is judging whether the Nth commodity information is consistent with the first commodity information; if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected; if the judgment is no, executing the subsequent steps.
Optionally, step (b3) is judging whether the Nth commodity information is consistent with the first commodity information; if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected; if the judgment is no, executing the subsequent steps.
Optionally, the neural-network-based recognition system comprises a second neural network based on a region convolutional neural network, and the neural-network-based recognition system is obtained by a method comprising the steps of:
obtaining a first image set containing multi-angle images of each commodity to be detected;
training the second neural network with the first image set to obtain the first neural network.
Optionally, the method of training the second neural network is:
using supervised learning, training the second neural network with the first image set to obtain a third neural network;
obtaining a second image set of images of the commodities to be detected;
training the third neural network with the second image set to obtain the first neural network;
the second image set comprises images of commodities to be detected whose commodity information has been output by the neural-network-based recognition system;
the method of training the second neural network is a supervised learning method;
the process of training the third neural network with the second image set is unsupervised learning (a sketch of this two-stage training follows).
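A hedged sketch of the two-stage pipeline, using torchvision's Faster R-CNN as a stand-in for the region convolutional neural network; the pseudo-labelling shown for the second stage is one plausible reading of the "unsupervised" fine-tuning, which the text does not pin down:

    import torch
    import torchvision

    # Stage 1 (supervised): second network -> third network, trained on the
    # labelled multi-angle images (first image set) in the usual detection way.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    # ... replace the box predictor for the store's own categories and train on
    # (image, {boxes, labels}) pairs from the first image set.

    # Stage 2 ("unsupervised"): third network -> first network, fine-tuned on
    # checkout images whose recognition results the system itself emitted.
    model.eval()
    with torch.no_grad():
        field_image = torch.rand(3, 480, 640)        # placeholder checkout frame
        pred = model([field_image])[0]               # dict of boxes, labels, scores
    keep = pred["scores"] > 0.9                      # keep only confident detections
    pseudo_target = {"boxes": pred["boxes"][keep], "labels": pred["labels"][keep]}
    # pseudo_target then serves as a training label for a further pass over the model.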
Optionally, the neural-network-based commodity recognition method comprises the steps of:
(d3): when the judgment in step (c3) is no, identifying the differing commodities between the first commodity information and the Nth commodity information;
(e3): obtaining a differential image set of the differing commodities in step (d3), and reinforcing the training of the first neural network with the differential image set.
Another aspect of the present application provides an unmanned checkout counter, which uses at least one of the above human-machine interaction devices for automatic settlement;
the human-machine interaction device for automatic settlement recognizes the categories and quantities of the commodities on the unmanned checkout counter and displays the commodity images and/or the commodity quantities in the form of preset images.
Another aspect of the present application provides an automatic store or unmanned convenience store, which uses at least one of the above human-machine interaction devices for automatic settlement and/or at least one of the above unmanned checkout counters.
Specifically, referring to Figures 1-2, the human-machine interaction device of the automatic service counter provided by the present application comprises:
a camera unit for acquiring commodity images;
a recognition unit for processing the commodity images acquired by the camera unit and outputting commodity information;
a display unit for displaying the commodity information output by the recognition unit;
the recognition unit is electrically connected to the camera unit and the display unit.
Optionally, the camera unit comprises at least two cameras that acquire commodity images from different angles and/or depths of field.
Optionally, the camera unit comprises 2 to 4 cameras that acquire commodity images from different angles and/or depths of field.
Optionally, the camera unit comprises a first camera, a second camera, and a third camera;
the first, second, and third cameras acquire commodity images from different angles.
Optionally, the recognition unit recognizes the commodity categories and commodity quantities in the images acquired by the camera unit and outputs them to the display unit as preset images and quantities.
Optionally, recognizing the commodity categories in the images acquired by the camera unit comprises:
generating a first commodity through an Nth commodity from the recognized commodity categories, N being the number of commodity categories.
Optionally, recognizing the commodity quantities in the images acquired by the camera unit comprises:
generating a first datum through an Mth datum from the recognized commodity quantities, the first datum being the quantity of the first commodity and the Mth datum being the quantity of the Nth commodity.
Optionally, outputting the preset commodity images and quantities to the display unit comprises the steps of:
a) generating the following first matrix:
    first commodity    first datum
          …
    Nth commodity      Mth datum
b) replacing the first commodity in the first matrix with the first preset image, and so on until the Nth commodity is replaced with the Nth preset image, yielding the following second matrix:
    first preset image   first datum
          …
    Nth preset image     Mth datum
c) outputting the second matrix to the display unit.
Optionally, the recognition unit runs a neural network image recognition system.
Optionally, the working process of the recognition unit is as follows:
obtaining an image containing the commodities to be detected;
inputting the image containing the commodities to be detected into a neural-network-based recognition system, which outputs the information of the commodities to be detected.
Optionally, the obtained image containing the commodities to be detected is at least a two-dimensional image.
Optionally, the obtained images containing the commodities to be detected comprise at least a first image through an Nth image differing in angle and/or depth of field; N ≥ 2.
Optionally, the obtained images containing the commodities to be detected comprise at least a first image through an Nth image at different angles and/or different depths of field; N = 2 to 4.
Optionally, the neural network recognition system comprises a first neural network based on a region convolutional neural network; the commodity recognition method comprises the steps of:
(a1) inputting the first image into the first neural network, which outputs first commodity information; inputting the Nth image into the first neural network, which outputs Nth commodity information;
(b1) judging whether the Nth commodity information is contained in the first commodity information:
if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
if the judgment is no, outputting a feedback result.
Optionally, the neural network recognition system comprises a first neural network based on a region convolutional neural network; the commodity recognition method comprises the steps of:
(a2) inputting the first image into the first neural network, which outputs first commodity information;
(b2) computing the total commodity weight from the first commodity information, comparing it with the actually weighed total weight to obtain differential data, and judging whether the differential data is less than or equal to a preset threshold:
if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
if the judgment is no, outputting a second result.
Optionally, the neural network recognition system comprises a first neural network based on a region convolutional neural network; the commodity recognition method comprises the steps of:
(a3) inputting the first image into the first neural network, which outputs first commodity information; inputting the Nth image into the first neural network, which outputs Nth commodity information;
(b3) judging whether the Nth commodity information is contained in the first commodity information;
if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
if the judgment is no, executing the subsequent steps;
(c3) computing the total commodity weight from the first commodity information, comparing it with the actually weighed total weight to obtain differential data, and judging whether the differential data is less than or equal to a preset threshold:
if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
if the judgment is no, outputting a feedback result.
Optionally, in steps (b1) and (b3), the method of judging whether the Nth commodity information is contained in the first commodity information is to judge whether all commodity categories in the Nth commodity information also exist in the first commodity information.
Optionally, in steps (b1) and (b3), the method of judging whether the Nth commodity information is contained in the first commodity information is to judge whether the commodity quantity in the Nth commodity information is less than or equal to the commodity quantity in the first commodity information.
Optionally, in steps (b1) and (b3), the method of judging whether the Nth commodity information is contained in the first commodity information is to judge whether the quantity of each commodity in the Nth commodity information is less than or equal to the corresponding quantity in the first commodity information.
Optionally, steps (b1) and (b3) are judging whether the Nth commodity information is consistent with the first commodity information;
if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
if the judgment is no, executing the subsequent steps.
Optionally, in steps (b1) and (b3), the Nth commodity information being consistent with the first commodity information includes the commodity categories being identical and the quantity of each commodity being identical.
Optionally, the preset threshold in steps (b2) and (c3) is at least one value between 0.1 g and 10 kg.
Optionally, the preset threshold in steps (b2) and (c3) is the weight of the lightest commodity in the first commodity information.
Optionally, the preset threshold in steps (b2) and (c3) is at least one value between 10% and 80% of the weight of the lightest commodity in the first commodity information.
Optionally, the feedback result in steps (b2) and (c3) comprises at least one of a stacking prompt and an error report.
Optionally, the number of commodities to be detected in the image containing them is ≥ 1.
Optionally, the number of commodities to be detected in the image containing them is 1 to 1000.
Optionally, the number of categories of commodities to be detected in the image containing them is ≥ 1.
Optionally, the number of categories of commodities to be detected is 1 to 1000.
Optionally, the neural-network-based recognition system comprises a second neural network based on a region convolutional neural network, and the neural-network-based recognition system is obtained by a method comprising the steps of:
obtaining a first image set of multi-angle images of each commodity on sale;
training the second neural network with the first image set to obtain the first neural network.
Optionally, the method of training the second neural network is a supervised learning method.
Optionally, the method of training the second neural network is:
using supervised learning, training the second neural network with the first image set to obtain a third neural network;
obtaining a second image set of images of commodities on sale;
training the third neural network with the second image set to obtain the first neural network.
Optionally, the second image set comprises images of commodities to be detected whose information has been output by the neural-network-based recognition system.
Optionally, the recognition accuracy of the second neural network for commodities is above 80%.
Optionally, the process of training the third neural network with the second image set is unsupervised learning.
Optionally, the commodity recognition method comprises the steps of:
(a1) inputting the first image into the first neural network, which outputs first commodity information; inputting the Nth image into the first neural network, which outputs Nth commodity information;
(b1) judging whether the Nth commodity information is contained in the first commodity information;
(c1) when the judgment in step (b1) is no, identifying the differing commodities between the first commodity information and the Nth commodity information;
(d1) obtaining a second image set of the differing commodities in step (c1), and reinforcing the training of the first neural network with said second image set.
Optionally, the commodity recognition method comprises the steps of:
(a3) inputting the first image into the first neural network, which outputs first commodity information; inputting the Nth image into the first neural network, which outputs Nth commodity information;
(b3) judging whether the Nth commodity information is contained in the first commodity information;
if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
if the judgment is no, executing the subsequent steps;
(c3) computing the total commodity weight from the first commodity information, comparing it with the actually weighed total weight to obtain differential data, and judging whether the differential data is less than or equal to a preset threshold;
(d3) when the judgment in step (c3) is no, identifying the differing commodities between the first commodity information and the Nth commodity information;
(e3) obtaining a second image set of the differing commodities in step (d3), and reinforcing the training of the first neural network with said second image set.
Optionally, the commodity recognition method comprises the steps of:
(a2) inputting the first image into the first neural network, which outputs first commodity information;
(b2) computing the total commodity weight from the first commodity information, comparing it with the actually weighed total weight to obtain differential data, and judging whether the differential data is less than or equal to a preset threshold;
(c2) when the judgment in step (b2) is no, collecting the recognized commodities in the first commodity information;
(d2) obtaining a second image set of the recognized commodities in step (c2), and reinforcing the training of the first neural network with said second image set.
Optionally, the commodity recognition method comprises the steps of:
(a3) inputting the first image into the first neural network, which outputs first commodity information; inputting the Nth image into the first neural network, which outputs Nth commodity information;
(b3) judging whether the Nth commodity information is contained in the first commodity information;
if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
if the judgment is no, executing the subsequent steps;
(c3) computing the total commodity weight from the first commodity information, comparing it with the actually weighed total weight to obtain differential data, and judging whether the differential data is less than or equal to a preset threshold;
(d3) when the judgment in step (c3) is no, collecting the recognized commodities in the first commodity information;
(e3) obtaining a second image set of the recognized commodities in step (d3), and reinforcing the training of the first neural network with said second image set.
According to yet another aspect of the present application, a commodity recognition device is provided, characterized in that the commodity recognition device comprises:
a camera unit for acquiring commodity images;
a recognition unit for running a neural network recognition system and processing the commodity images acquired by the camera unit;
a display unit for displaying the commodity information output by the recognition unit;
the recognition unit is electrically connected to the camera unit and the display unit.
Optionally, the camera unit comprises at least two cameras that acquire commodity images from different angles and/or depths of field.
Optionally, the camera unit comprises 2 to 4 cameras that acquire commodity images from different angles and/or depths of field.
Optionally, the camera unit comprises a first camera and a second camera;
the first and second cameras acquire commodity images from different angles.
Optionally, the commodity recognition device comprises a stage containing a weight sensor for measuring the total weight of the commodities on the stage;
the weight sensor is electrically connected to the recognition unit and inputs the total weight of the commodities on the stage to the recognition unit. In 2D image recognition, a serious anomaly occurs when, because of stacking or an extreme angle, one item is entirely or mostly occluded by another and there is not enough detail for recognition. To determine accurately whether commodities are stacked, the present application incorporates the weight sensor to solve this problem: the total weight of the items in the recognition result is compared with the actual weight measured by the weight sensor inside the device, and if they disagree, a stacking conclusion is given.
Optionally, the camera unit comprises two ordinary network cameras, two mounts adjustable to any angle, a computer capable of continuous image uploading, and a high-precision weight sensor. The main workflow is: an image-grabbing program runs on the computer; it uploads the frames captured by the two cameras at the same moment to a remote server, and the remote server returns the recognition result. The cost of this scheme is extremely low, and the working computer needs only the most basic configuration.
Optionally, the camera unit comprises 2-4 fixed-lens high-definition cameras, a corresponding number of angle-adjustable mounts, a high-precision weight sensor, and a computer with a graphics card having more than 2 GB of video memory. The main workflow is: an image-grabbing program runs on the computer and recognizes locally the frames captured by the cameras at the same moment. (A sketch of the image-grabbing and upload workflow follows.)
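A minimal sketch of such an image-grabbing program for the remote-recognition variant; the server URL and JSON contract are hypothetical, since the text only requires uploading synchronized frames and receiving the recognition result:

    import base64
    import cv2
    import requests

    def grab_and_recognize(camera_ids=(0, 1), url="http://recognizer.local/api/recognize"):
        frames = []
        for cam_id in camera_ids:              # grab both frames at (nearly) the same moment
            cap = cv2.VideoCapture(cam_id)
            ok, frame = cap.read()
            cap.release()
            if not ok:
                raise RuntimeError(f"camera {cam_id} read failed")
            ok, jpg = cv2.imencode(".jpg", frame)
            frames.append(base64.b64encode(jpg.tobytes()).decode("ascii"))
        resp = requests.post(url, json={"images": frames}, timeout=10)
        return resp.json()                     # e.g. [{"name": ..., "count": ..., "price": ...}]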
Optionally, the commodity recognition device can detect in batches (low-cost scheme), using several ordinary cameras to acquire images of the commodities to be detected from different angles.
Multiple cameras at different angles resolve the occlusion that arises in a single 2D image from placement angles and differences in item height. In general, three cameras can capture the information needed for recognition without blind spots; with suitable camera positions, two cameras can also achieve fairly good results.
According to yet another aspect of the present application, an unmanned checkout counter is provided, characterized in that the unmanned checkout counter uses at least one of the above human-machine interaction devices;
the human-machine interaction device recognizes the categories and quantities of the commodities on the unmanned checkout counter and displays the commodity images and quantities in the form of preset images.
According to yet another aspect of the present application, an automatic store or unmanned convenience store is provided, characterized in that the automatic store or unmanned convenience store uses at least one of the above human-machine interaction devices and/or at least one of the above unmanned checkout counters.
The human-machine interaction device recognizes the categories and quantities of the commodities on the unmanned checkout counter and displays the commodity images and quantities in the form of preset images.
In the present application, electrical connection includes the transmission of data and control instructions between the devices; transmission modes include wireless transmission, wired transmission, or communication transmission.
The beneficial effects of the present application include, but are not limited to, the following:
(1) The human-machine interaction device for automatic settlement provided by the present application displays preset commodity pictures, achieving an intuitive, sensory form of interaction and improving the shopping experience in unmanned convenience stores. Rendering text graphically is vivid and intuitive, and supplements useful information that textual presentation lacks. For most audiences the brain engages less with plain text than with graphics; graphical information is easy to absorb and conveys well what the system needs to communicate, whereas a large amount of on-screen text burdens the reader's cognition. Visual and auditory means raise overall efficiency.
(2) The human-machine interaction device for automatic settlement provided by the present application captures commodity scenes with ordinary cameras, enables fast batch detection of commodities, greatly lowers the cost of commodity recognition, and raises its speed.
(3) The human-machine interaction device for automatic settlement provided by the present application makes full use of neural networks to recognize commodities and cross-checks the commodity information obtained from several images, avoiding the error rate caused by over-reliance on single-image recognition and improving accuracy. No barcodes or RFID electronic tags are needed for recognition, lowering the cost of use.
(4) The human-machine interaction device for automatic settlement provided by the present application is based on a sustainable deep-learning approach that requires no third-party identifiers; users only need to place the selected commodities on the table for them to be recognized. Through continual deep learning, the recognition accuracy of the method keeps improving as usage increases. Neural-network recognition combined with multi-image comparison corrects the recognition results, and the image sets produced by recognition are used to train the neural network system, continually improving its accuracy.
(5) The application provided by the present application can be used in unmanned checkout counters of automatic stores or unmanned convenience stores, achieving low-cost, high-efficiency commodity recognition and settlement in self-service settlement scenarios.
Brief Description of the Drawings
Figure 1 is a schematic structural diagram of the human-machine interaction device for automatic settlement provided by the present application;
Figure 2 is a workflow chart of one embodiment of the present application;
Figure 3 is a schematic diagram of the use state of one embodiment of the human-machine interaction device for automatic settlement provided by the present application;
Figure 4 is a schematic diagram of the usage flow in one embodiment of the present application, including: a) display of recognition results; b) payment operation; c) display after the payment information is repositioned; d) display of post-payment verification results; e) display when items cannot be recognized; f) display unit and client-side display;
Figure 5 is a schematic flow block diagram of the neural-network-based commodity recognition method in the first preferred embodiment of the present application;
Figure 6 is a schematic flow block diagram of the neural-network-based commodity recognition method in the second preferred embodiment of the present application;
Figure 7 is a schematic flow block diagram of the neural-network-based commodity recognition method in the third preferred embodiment of the present application;
Figure 8 is a schematic flow block diagram of the neural-network-based commodity recognition method in the fourth preferred embodiment of the present application;
Figure 9 is a schematic flow block diagram of the neural-network-based commodity recognition method in the fifth preferred embodiment of the present application;
Figure 10 is a workflow chart of one embodiment of the present application;
Figure 11 is a sequence diagram of an unmanned convenience store applying the unmanned checkout counter of one embodiment of the present application.
Components and reference numerals:
Reference numeral  Component name
100  camera unit
1  third camera
2  second camera
3  first camera
4  display unit
5  commodity placement area
6  indication module
7  sound hole
500  distance sensing unit
8  first distance sensor
9  second distance sensor
10  third distance sensor
200  recognition unit
300  settlement module
700  review module
800  alarm module
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
Referring to Figures 1-2, the human-machine interaction device for automatic settlement provided by the present application comprises:
a camera unit 100 for acquiring a first commodity image of the commodities in a commodity placement area 5;
a recognition unit 200 for recognizing the first commodity image and outputting commodity information;
a display unit 4 for displaying the commodity information;
the camera unit 100 is arranged outside the commodity placement area 5 and is electrically connected to the recognition unit 200, and the recognition unit 200 is electrically connected to the display unit 4;
the recognition unit 200 recognizes the commodity categories and quantities in the first commodity image and outputs them to the display unit 4 as preset images of the commodities together with the quantities.
Directly displaying the preset image corresponding to each recognized commodity category improves the user's intuitive perception while shopping and avoids the situation where commodity names are too long or too similar for the user to identify the desired item accurately. It also makes it convenient for users to verify their purchase information in an unattended setting and avoid buying the wrong goods.
Preferably, the camera unit 100 comprises a first camera 3, a second camera 2, and a third camera 1;
the first camera 3 faces the commodities and is arranged above them; the second camera 2 and the third camera 1 are arranged on the two sides of the commodities. In this document, a preset image is the image that best shows all the characteristic information of a commodity, recorded when the commodity category name is entered; preset images are stored in one-to-one correspondence with commodity names and categories.
Preferably, outputting the preset images of the commodities and the quantities of the commodities to the display unit 4 comprises the steps of:
a) generating the following first matrix:
    first commodity    first datum
          …
    Nth commodity      Mth datum
wherein the first commodity is generated from the first commodity category and the first datum is the quantity of that category; the Nth commodity is generated from the commodity category of the Nth commodity and the Mth datum is the quantity of the Nth commodity;
b) replacing the first commodity with the first preset image of the first commodity category, and repeating this operation until the Nth commodity is replaced with the Nth preset image of the Nth category, yielding the following second matrix:
    first preset image   first datum
          …
    Nth preset image     Mth datum
c) outputting the second matrix to the display unit 4.
Displaying the commodity categories and preset commodity images as an aligned matrix improves the user's intuitive shopping perception and enhances the user experience.
Preferably, the human-machine interaction device for automatic settlement comprises: a settlement module 300 electrically connected to the recognition unit 200 and the display unit 4 respectively, for generating payment information according to the commodity information and displaying the payment information via the display unit 4. The settlement module 300 here may be part of the display unit 4, a separate display device, or another common settlement device with settlement functionality.
Preferably, the human-machine interaction device for automatic settlement comprises: a distance sensing unit 500 arranged on the outer side of the commodity placement area 5 and electrically connected to the display unit 4, for obtaining the distance between the user's payment gesture and the display unit 4 and adjusting the display region of the payment information on the display unit 4 according to that distance.
Preferably, the human-machine interaction device for automatic settlement comprises: a review module 700 electrically connected to the camera unit 100, the settlement module 300, and the display unit 4 respectively, for reading the payment result information after payment is completed, acquiring a second commodity image via the camera unit 100, recognizing the commodity information in the second commodity image, comparing the commodity information of the second commodity image with the payment result information, and displaying the comparison result via the display unit 4.
Preferably, the human-machine interaction device for automatic settlement comprises: an alarm module 800 electrically connected to the review module 700 and arranged near the commodity placement area. The alarm module 800 can be an alarm, a conspicuous indicator light, or a similar stimulus. The alarm module 800 obtains the result of comparing the commodity information of the second commodity image with the payment result information and issues an alarm prompt when the second commodity image contains unsettled commodities.
Preferably, the recognition unit 200 detects unrecognizable items in the first commodity image and, when an unrecognizable item is detected, displays via the display unit 4 a prompt asking for correct placement.
Preferably, the human-machine interaction device for automatic settlement comprises: an indication module 6 electrically connected to the recognition unit 200; after the recognition unit 200 outputs the commodity information, the indication module 6 emits an indication signal. The indication module 6 can be an indicator light, a warning lamp, or another device that emits a stimulus prompt.
In one specific embodiment of the present application, the workflow of the human-machine interaction device for automatic settlement is shown in Figure 2.
Step S100: the cameras acquire a first commodity image of the commodities to be settled in the commodity placement area 5;
Step S200: the commodity image is recognized, and commodity information and/or payment information is output;
Step S300: whether payment is completed is judged; if payment is completed, a second commodity image of the commodities to be settled is acquired again, and whether unsettled commodities exist in the second commodity image is judged; if so, alarm information is output; if not, a settlement-complete message is displayed.
The device works as follows: the user places the commodities to be settled within the image-capture region below the cameras; after the commodity image is acquired, it is processed to recognize the commodity information it contains, including but not limited to commodity categories, quantities, and commodity images. The commodity information is shown in the display region, and payment information is generated for the user to pay. After the user pays, the cameras capture another image of the commodities still in the payment region to obtain the second commodity image; the commodities in the second image are recognized again and checked against the payment result, judging whether unpaid commodities exist in the second commodity image. If so, alarm information is output and the unpaid commodity information with its corresponding payment information is displayed; the alarm here may be, but is not limited to, an on-screen display, an alarm sound, or a light indication. If not, settlement completion is prompted. (A sketch of this post-payment audit follows.)
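A minimal sketch of the post-payment audit in step S300; recognize() stands in for the neural-network recognition call, and the data shapes are assumptions:

    from collections import Counter

    def audit_after_payment(second_image, paid_order: Counter, recognize):
        seen = Counter(recognize(second_image))    # commodities in the second image
        unpaid = seen - paid_order                 # present on the stage but not paid for
        if unpaid:
            return ("alarm", dict(unpaid))         # display/announce the unpaid items
        return ("settlement complete", {})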
At the same time, to improve the shopping experience, step S300 further comprises a human-machine interaction step for scan-to-pay: the user's behavior is captured and analyzed, and the orientation of the cameras and the display position of the payment information are adjusted according to the user's behavior. This improves the user's payment experience, shortens scanning time, and raises settlement efficiency.
As a specific embodiment of the present application, image recognition and interaction of the human-machine interaction device for automatic settlement in a shopping scenario mainly covers automatic extraction, description, tracking, recognition, and behavior analysis of moving objects in image sequences. If the camera is regarded as the human eye, the intelligent interaction system or device can be regarded as the human brain. Intelligent interaction technology uses the computer's powerful data processing capability to analyze the massive data in video frames at high speed, filter out the information the user does not care about, and provide the shopper with only the useful key information.
The service counter provided is shown in Figures 3 and 5, comprising: 1. third camera; 2. second camera; 3. first camera; 4. display unit; 5. commodity placement area; 6. indication module; 7. sound hole; 8. first distance sensor; 9. second distance sensor; 10. third distance sensor.
In one specific embodiment of the present application, the human-machine interaction device for automatic settlement works as follows:
Referring to Figure 3, a camera unit 100 is arranged within the photographable region of the commodity placement area 5. In this embodiment the camera unit 100 comprises a third camera 1, a second camera 2, and a first camera 3, spaced above the commodity placement area 5 and used to acquire images of the commodities on it. A display unit 4 set at an angle in front of the commodity placement area 5 shows information to the user; the display unit 4 may be a single screen or several screens. The commodity placement area 5 holds the commodities to be checked out; depending on the situation, the indicator light 6 around the commodity placement area 5 emits prompt signals. A sound hole 7 is arranged at the lower part of the side of the commodity placement area 5; behind it sits an alarm module 800, which may be a loudspeaker, assisting the user by voice and improving shopping comfort. A distance sensor module is also arranged at the commodity placement area 5, comprising a first distance sensor 8, a second distance sensor 9, and/or a third distance sensor 10. The first distance sensor 8 and the second distance sensor 9 are arranged on two opposite sides of the commodity placement area 5, and the third distance sensor 10 directly in front of it. This arrangement ensures that user movement around the commodity placement area 5 is captured accurately from the different angles.
After the commodities are placed in the commodity placement area 5, the interactive interface on the display unit 4 responds accordingly. As shown in Figure 4a), after the camera 2 acquires the commodity image and the image is recognized, the display unit 4 shows the specific name, corresponding image, and quantity of each commodity. When recognition is complete, the indication module 6 turns green, and part of the display unit 4 shows payment information, for example a payment QR code. Displaying the commodity image alongside the commodity name helps the user check the purchased goods in an unmanned environment and avoids problems such as being unable to match commodity names to the actual items.
As shown in Figure 4b), when the user makes a scan-to-pay gesture, the distance of the phone from each distance sensor is obtained first; the likely range of the user's scanning motion is then judged against preset thresholds, and the QR code to be scanned is placed in an appropriate region of the display unit 4 accordingly. This improves the scanning experience, avoids repeated adjustment of distance and position, shortens scanning time, and raises checkout efficiency. As shown in Figure 4c), based on the phone's position, the QR code is finally presented at a suitable position on the display unit 4. (A sketch of this placement logic follows.)
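A small sketch of the placement logic, under the assumption that the display region nearest the customer's phone is chosen from the three distance-sensor readings; the sensor-to-region mapping is illustrative:

    def choose_qr_region(d_left: float, d_right: float, d_front: float) -> str:
        readings = {"left": d_left, "right": d_right, "center": d_front}
        return min(readings, key=readings.get)     # region of the nearest sensor

    print(choose_qr_region(32.0, 80.0, 55.0))      # -> "left": draw the QR code on the left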
In yet another embodiment of the present application, as shown in Figure 4d): all kinds of situations can arise when shopping in an unattended environment. To guarantee the accuracy of the settlement result, a review module 700 is also included, electrically connected to the camera unit 100, the settlement module 300, and the display unit 4 respectively. The review module 700 first reads the payment result; when the result is payment-complete, it issues a control instruction to the camera unit 100 to acquire another image of the goods in the commodity placement area 5, called the second commodity image. The review module 700 then recognizes the commodity information in the second commodity image and compares it with the payment result. If there are unsettled commodities, the information and categories of the unsettled commodities obtained from image recognition are prompted on the display unit 4, while the other, settled commodities are marked as settled. When unsettled commodities exist in the commodity placement area 5, the loudspeaker also sounds a prompt to the user, preventing duplicate settlement of commodities.
In yet another embodiment of the present application, as shown in Figure 4e), when the commodities in the commodity placement area 5 are not placed in a recognizable manner, the display unit 4 shows a prompt such as "Please place the commodities correctly", and a voice prompt informs the user that the commodities must be placed correctly before settlement can proceed.
After recognition and payment are completed as required, the display unit 4 and the user's phone synchronously display the amount spent, membership icon, membership points, and other related information, as shown in Figure 4f). In subsequent use, the phone can also recommend discounted commodity combinations related to the user's consumption habits, helping shoppers complete their purchase more conveniently and quickly while also encouraging repeat consumption.
Referring to Figure 5, the working process of the recognition unit 200 is as follows:
obtaining images containing the commodities to be detected;
inputting the images containing the commodities to be detected into a neural-network-based recognition system, which outputs the information of the commodities to be detected;
the obtained images containing the commodities to be detected are at least two-dimensional images;
the obtained images comprise at least a first image through an Nth image differing in angle and/or depth of field, N ≥ 2;
the neural-network-based recognition system comprises a first neural network based on a region convolutional neural network; the neural-network-based commodity recognition method comprises the steps of:
(a1) inputting the first image into the first neural network, which outputs first commodity information; inputting the Nth image into the first neural network, which outputs Nth commodity information;
(b1) judging whether the Nth commodity information is contained in the first commodity information;
if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected; if the judgment is no, outputting a feedback prompt.
The neural-network-based commodity recognition method provided by the present application is mainly used for self-service shopping in unattended environments, where the settlement commodity information is acquired automatically. The method makes full use of neural networks to recognize commodities and cross-checks the commodity information obtained from several images, avoiding the error rate caused by over-reliance on single-image recognition and improving accuracy. When accurate commodity information cannot be obtained, a feedback prompt can inform the user that the commodities to be settled cannot be recognized accurately, so that recognition errors can be corrected simply by rearranging the commodities, without repeated scanning or retries. The feedback prompt here comprises at least one of a stacking prompt and an error report. The method places no limit on the categories and quantities of commodities it can handle: for example, the number of commodities to be detected in the image can be ≥ 1, or 1 to 1000; the number of categories can be ≥ 1, or 1 to 1000. The judged commodity information includes commodity categories or the quantity of each commodity; consistency of categories and/or quantities is judged. When the neural-network-based commodity recognition method provided by the present application is used for settlement in unattended environments, accurate commodity recognition is achieved with nothing more than an ordinary network-connected camera. RFID tags are not needed at all, lowering cost, and problems such as failed settlement caused by mis-operation are also avoided.
Preferably, the first image is a frontal image of the commodities to be detected; using it as the main image for recognition improves recognition accuracy.
Preferably, the obtained images containing the commodities to be detected comprise at least a first image through an Nth image at different angles and/or different depths of field, N = 2 to 4. Acquiring multi-angle images improves the recognition accuracy of the neural network and benefits the accuracy of the subsequent results.
Referring to Figure 6, preferably, step (a1) further comprises a step of weighing the commodities to be detected to obtain the actually weighed total commodity weight; step (b1) becomes (b2): computing the total commodity weight from the first commodity information, comparing it with the actually weighed total weight to obtain differential data, and judging whether the differential data is less than or equal to a preset threshold: if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected; if the judgment is no, outputting a feedback prompt. At the same time, the acquired commodity information can be corrected by analyzing the commodity weights it contains, improving the accuracy of the image recognition result.
Referring to Figure 7, preferably, the neural-network-based recognition system comprises a first neural network based on a region convolutional neural network; the neural-network-based commodity recognition method comprises the steps of: (a3) inputting the first image into the first neural network, which outputs first commodity information; inputting the Nth image into the first neural network, which outputs Nth commodity information; (b3) judging whether the Nth commodity information is contained in the first commodity information; if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected; if the judgment is no, executing the subsequent steps; (c3) computing the total commodity weight from the first commodity information, comparing it with the actually weighed total weight to obtain differential data, and judging whether the differential data is less than or equal to a preset threshold: if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected; if the judgment is no, outputting a feedback prompt.
Using category and weight information jointly as correction parameters corrects the results better and improves commodity recognition accuracy. The preset threshold here can be at least one value between 0.1 g and 10 kg, the weight of the lightest commodity in the first commodity information, or at least one value between 10% and 80% of the weight of the lightest commodity in the first commodity information.
Preferably, in steps (b1) and (b3), the method of judging whether the Nth commodity information is contained in the first commodity information is to judge whether all commodity categories in the Nth commodity information also exist in the first commodity information.
Preferably, in steps (b1) and (b3), the method of judging whether the Nth commodity information is contained in the first commodity information is to judge whether the quantity of each commodity in the Nth commodity information is less than or equal to the corresponding quantity in the first commodity information.
Preferably, steps (b1) and (b3) are judging whether the Nth commodity information is consistent with the first commodity information; if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected; if the judgment is no, executing the subsequent steps.
Preferably, in steps (b1) and (b3), the Nth commodity information being consistent with the first commodity information includes the commodity categories being identical and the quantity of each commodity being identical.
Preferably, the neural-network-based recognition system comprises a second neural network based on a region convolutional neural network, and is obtained by a method comprising the steps of: obtaining a first image set of multi-angle images of each commodity to be detected; training the second neural network with the first image set to obtain the first neural network. By using the second neural network, the obtained results can be used to train the first neural network, achieving deep-learning automatic error correction: as the number of recognized commodities grows, the recognition accuracy of the neural recognition system rises automatically. Existing methods can be followed. Training with multi-angle images of the commodities to be detected improves the recognition accuracy of the neural-network-based recognition system when commodities are occluded.
Preferably, the method of training the second neural network is a supervised learning method.
Preferably, the method of training the second neural network is: using supervised learning, training the second neural network with the first image set to obtain a third neural network; obtaining a second image set of images of the commodities to be detected; training the third neural network with the second image set to obtain the first neural network.
Preferably, the second image set comprises images of commodities to be detected whose information has been output by the neural-network-based recognition system.
Preferably, the recognition accuracy of the second neural network for the commodities to be detected is above 80%. Preferably, the process of training the third neural network with the second image set is unsupervised learning. Existing methods can be followed.
Referring to Figure 8, preferably, the neural-network-based commodity recognition method comprises the steps of:
(c1) or (d3): when the judgment in step (b1) or (c3) is no, identifying the differing commodities between the first commodity information and the Nth commodity information;
(d1) or (e3): obtaining a differential image set of the differing commodities in step (c1) or (d3), and reinforcing the training of the first neural network with the differential image set.
By collecting, when the judgment is no, the differing commodities present in the Nth commodity information and obtaining their image set, and then training the first neural network with that differential image set, the error-correction capability of the system is further improved. This operation can also be used in the method shown in Figure 7.
Referring to Figure 9, preferably, the neural-network-based commodity recognition method comprises the steps of:
(c2) or (d3): when the judgment in step (b2) or (c3) is no, collecting the recognized commodities in the first commodity information;
(d2) or (e3): obtaining a collected image set of the recognized commodities in step (c2) or (d3), and reinforcing the training of the first neural network with the collected image set.
This step can also be used in the method shown in Figure 7 and is not repeated here. When the detection result is no, the first commodity information from repeated recognition failures is collected and used to train the first neural network, improving the first neural network's ability to recognize previously unrecognizable cases. (A sketch of this collection loop follows.)
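A sketch of the hard-example collection loop; the buffer, the batch threshold, and the fine_tune() hook are illustrative, since the text leaves the retraining procedure open:

    hard_examples = []   # (image, recognized_info) pairs from failed checks

    def on_check_failed(images, info):
        hard_examples.extend((img, info) for img in images)

    def periodic_retrain(model, fine_tune):
        if len(hard_examples) >= 64:            # retrain in batches; the threshold is arbitrary
            fine_tune(model, hard_examples)     # reinforce training on the collected set
            hard_examples.clear()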
Referring to Figure 10, when the neural-network-based commodity recognition method provided by the present application is used, the commodities to be detected are placed on a stage, and N cameras are arranged around them. Images of the commodities are acquired from each angle by the N cameras and denoted P1, P2, ..., PN. Among the N cameras, the one directly above the stage is the main camera, denoted the first camera; the image it acquires is the first image P1.
P1, P2, ..., PN are uploaded to a local or cloud recognition server, and each picture is recognized. The recognized commodity information is denoted R1, R2, ..., RN and includes category and quantity information; the recognition result R1 of the main camera is the first commodity information, and the results R2, ..., RN of the other cameras are the second through Nth commodity information.
Taking two cameras as an example, it is judged whether R2 (the second commodity information) is contained in R1 (the first commodity information);
if the judgment is yes, R1 is output as the information of the commodities to be detected;
if the judgment is no, the total weight of the commodities in R1 is computed, the absolute value of the difference between it and the actually weighed total weight is taken as the differential data, and it is judged whether the differential data is less than or equal to the preset threshold:
if the judgment is yes, R1 is output as the information of the commodities to be detected, the output being a commodity information list containing the categories, quantities, and prices of the commodities;
if the judgment is no, a stacking prompt or error report is displayed. (A combined sketch of this decision flow follows.)
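Stitching the pieces together, a sketch of the Figure 10 decision flow that reuses the contained_in and weight_check helpers sketched earlier; recognize() again stands in for the recognition-server call:

    from collections import Counter

    def settle(views, scale_total_g, recognize):
        results = [Counter(recognize(p)) for p in views]   # R1 ... RN
        r1, rest = results[0], results[1:]
        if all(contained_in(rn, r1) for rn in rest):
            return ("ok", dict(r1))                        # output R1 as the commodity list
        if weight_check(r1, scale_total_g):                # weight agrees despite the views
            return ("ok", dict(r1))
        return ("stacking prompt", {})                     # ask the customer to re-place goods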
Another aspect of the present application provides an unmanned checkout counter, which uses at least one of the above human-machine interaction devices for automatic settlement;
the human-machine interaction device for automatic settlement recognizes the categories and quantities of the commodities on the unmanned checkout counter and displays the commodity images and/or the commodity quantities in the form of preset images. Replacing the textual commodity names with commodity images makes shopping more intuitive and improves the user's shopping experience.
Another aspect of the present application provides an automatic store or unmanned convenience store, which uses at least one of the above human-machine interaction devices for automatic settlement and/or at least one of the above unmanned checkout counters.
Figure 11 shows a sequence diagram of one embodiment in which the neural-network-based commodity recognition device of the present application is used in an unmanned convenience store with self-checkout counters; it can also serve as an implementation example of the self-checkout counter provided by the present application. As shown in Figure 11, with a neural-network-based commodity recognition device incorporating any of the foregoing neural-network-based commodity recognition methods, a customer's shopping steps in the unmanned convenience store are as follows:
after selecting commodities, the customer places all of them on the self-checkout counter (which is also the stage of the neural-network-based commodity recognition device);
the stage senses a weight > 0 and triggers the neural-network-based commodity recognition device to start the commodity recognition program;
the cameras photograph the commodities on the stage to obtain commodity pictures, which are Base64-encoded and POSTed to the image recognition server for image recognition;
the image recognition result (including all commodity names, prices, and the total weight) is compared with the total weight actually measured by the stage to obtain differential data;
when the differential data is less than or equal to the preset threshold, it is judged that [the actual weight is consistent with the expected weight], and an order-generation request is sent to the order processing interface;
when the differential data is greater than the preset threshold, it is judged that [the actual weight is inconsistent with the expected weight]; a stacking prompt is shown on the operation interface asking the customer to move the commodities so that the cameras can capture the occluded goods stacked underneath; the cameras re-photograph the commodities on the stage to obtain new pictures, until the differential data is less than or equal to the preset threshold, whereupon an order-generation request is sent to the order processing interface;
the order processing interface receives the order-generation request, issues a payment QR code string, and generates the payment QR code on the operation interface;
the customer scans the payment QR code;
after payment succeeds, the message SOCKET sends a payment-success message, and the commodities on the stage are degaussed;
the message SOCKET sends a face recognition message to the security channel;
the customer carries the commodities through the security channel equipped with a detecting device; if no non-degaussed tag is detected, the door opens and the customer walks out of the unmanned convenience store; if a non-degaussed tag is detected, an unpaid warning is issued and the door does not open.
The above are only several embodiments of the present application and do not limit the present application in any form. Although the present application is disclosed above with preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solution of the present application, use the technical content disclosed above to make slight changes or modifications; these are equivalent to equivalent implementation cases and fall within the scope of the technical solution.

Claims (21)

  1. A human-machine interaction device for automatic settlement, characterized in that the human-machine interaction device for automatic settlement comprises:
    a camera unit for acquiring a first commodity image of the commodities in a commodity placement area;
    a recognition unit for recognizing the first commodity image and outputting commodity information;
    a display unit for displaying the commodity information;
    the camera unit being arranged outside the commodity placement area and electrically connected to the recognition unit, the recognition unit being electrically connected to the display unit;
    the recognition unit recognizing the commodity categories and quantities in the first commodity image and outputting them to the display unit as preset images of the commodities together with the quantities.
  2. The human-machine interaction device for automatic settlement according to claim 1, characterized in that the camera unit comprises at least two cameras, which capture the commodities from different angles and/or at different depths of field to obtain the first commodity image.
  3. The human-machine interaction device for automatic settlement according to claim 1, characterized in that the camera unit comprises a first camera, a second camera, and a third camera;
    the first camera faces the commodities and is arranged above them;
    the second camera and the third camera are arranged on the two sides of the commodities.
  4. The human-machine interaction device for automatic settlement according to claim 1, characterized in that outputting the preset images of the commodities and the quantities of the commodities to the display unit comprises the steps of:
    a) generating the following first matrix:
        first commodity    first datum
              …
        Nth commodity      Mth datum
    wherein the first commodity is generated from the first commodity category and the first datum is the quantity of that category; the Nth commodity is generated from the commodity category of the Nth commodity and the Mth datum is the quantity of the Nth commodity;
    b) replacing the first commodity with the first preset image of the first commodity category, and repeating this operation until the Nth commodity is replaced with the Nth preset image of the Nth category, yielding the following second matrix:
        first preset image   first datum
              …
        Nth preset image     Mth datum
    c) outputting the second matrix to the display unit.
  5. The human-machine interaction device for automatic settlement according to claim 1, characterized in that the human-machine interaction device for automatic settlement comprises: a settlement module electrically connected to the recognition unit and the display unit respectively, for generating payment information according to the commodity information and displaying the payment information via the display unit.
  6. The human-machine interaction device for automatic settlement according to claim 5, characterized in that the human-machine interaction device for automatic settlement comprises: a distance sensing unit arranged on the outer side of the commodity placement area and electrically connected to the display unit, for obtaining the distance between the user's payment gesture and the display unit and adjusting the display region of the payment information on the display unit according to that distance.
  7. The human-machine interaction device for automatic settlement according to claim 5, characterized in that the human-machine interaction device for automatic settlement comprises: a review module electrically connected to the camera unit, the settlement module, and the display unit respectively,
    for reading the payment result information after payment is completed, acquiring via the camera unit a second commodity image of the settled goods, recognizing the commodity information in the second commodity image, comparing the commodity information of the second commodity image with the payment result information, and displaying the comparison result via the display unit.
  8. The human-machine interaction device for automatic settlement according to claim 7, characterized in that the human-machine interaction device for automatic settlement comprises: an alarm module electrically connected to the review module and arranged near the commodity placement area, for obtaining the result of comparing the commodity information of the second commodity image with the payment result information and issuing an alarm prompt when the second commodity image contains unsettled commodities.
  9. The human-machine interaction device for automatic settlement according to claim 1, characterized in that the recognition unit detects unrecognizable items in the first commodity image and, when an unrecognizable item is detected, displays via the display unit a prompt asking for correct placement.
  10. The human-machine interaction device for automatic settlement according to claim 1, characterized in that the human-machine interaction device for automatic settlement comprises: an indication module electrically connected to the recognition unit; after the recognition unit outputs the commodity information, the indication module emits an indication signal.
  11. The human-machine interaction device for automatic settlement according to claim 1, characterized in that the working process of the recognition unit is as follows:
    obtaining the first commodity image containing the commodities to be detected;
    inputting the first commodity image into a neural-network-based recognition system, which outputs the commodity information.
  12. The human-machine interaction device for automatic settlement according to claim 11, characterized in that the neural-network-based recognition system comprises a first neural network based on a region convolutional neural network; the neural-network-based commodity recognition method comprises the steps of:
    (a1) inputting the first image into the first neural network, which outputs first commodity information; inputting the Nth image into the first neural network, which outputs Nth commodity information;
    (b1) judging whether the Nth commodity information is contained in the first commodity information;
    if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
    if the judgment is no, outputting a feedback prompt.
  13. The human-machine interaction device for automatic settlement according to claim 11, characterized in that step (a1) further comprises a step of weighing the commodities to be detected to obtain the actually weighed total commodity weight;
    step (b1) further comprises step (b2): computing the total commodity weight from the first commodity information, comparing it with the actually weighed total weight to obtain differential data, and judging whether the differential data is less than or equal to a preset threshold:
    if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
    if the judgment is no, outputting a feedback prompt.
  14. The human-machine interaction device for automatic settlement according to claim 12, characterized in that step (b1) is judging whether the Nth commodity information is consistent with the first commodity information; if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected; if the judgment is no, executing the subsequent steps.
  15. The human-machine interaction device for automatic settlement according to claim 11, characterized in that the neural-network-based recognition system comprises: a first neural network based on a region convolutional neural network;
    the neural-network-based commodity recognition method comprises the steps of:
    (a3) inputting the first image into the first neural network, which outputs first commodity information; inputting the Nth image into the first neural network, which outputs Nth commodity information;
    (b3) judging whether the Nth commodity information is contained in the first commodity information;
    if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
    if the judgment is no, executing the subsequent steps;
    (c3) computing the total commodity weight from the first commodity information, comparing it with the actually weighed total weight to obtain differential data, and judging whether the differential data is less than or equal to a preset threshold:
    if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected;
    if the judgment is no, outputting a feedback prompt.
  16. The human-machine interaction device for automatic settlement according to claim 15, characterized in that step (b3) is judging whether the Nth commodity information is consistent with the first commodity information; if the judgment is yes, outputting the first commodity information as the information of the commodities to be detected; if the judgment is no, executing the subsequent steps.
  17. The human-machine interaction device for automatic settlement according to claim 11, characterized in that the neural-network-based recognition system comprises a second neural network based on a region convolutional neural network, and the neural-network-based recognition system is obtained by a method comprising the steps of:
    obtaining a first image set containing multi-angle images of each commodity to be detected;
    training the second neural network with the first image set to obtain the first neural network.
  18. The human-machine interaction device for automatic settlement according to claim 17, characterized in that the method of training the second neural network is:
    using supervised learning, training the second neural network with the first image set to obtain a third neural network;
    obtaining a second image set of images of the commodities to be detected;
    training the third neural network with the second image set to obtain the first neural network;
    the second image set comprises images of commodities to be detected whose commodity information has been output by the neural-network-based recognition system;
    the method of training the second neural network is a supervised learning method;
    the process of training the third neural network with the second image set is unsupervised learning.
  19. The human-machine interaction device for automatic settlement according to claim 15, characterized in that the neural-network-based commodity recognition method comprises the steps of:
    (d3): when the judgment in step (c3) is no, identifying the differing commodities between the first commodity information and the Nth commodity information;
    (e3): obtaining a differential image set of the differing commodities in step (d3), and reinforcing the training of the first neural network with the differential image set.
  20. An unmanned checkout counter, characterized in that the unmanned checkout counter uses at least one of the human-machine interaction devices for automatic settlement according to any one of claims 1 to 19;
    the human-machine interaction device for automatic settlement recognizes the categories and quantities of the commodities on the unmanned checkout counter and displays the commodity images and/or the commodity quantities in the form of preset images.
  21. An automatic store or unmanned convenience store, characterized in that the automatic store or unmanned convenience store uses at least one of the human-machine interaction devices for automatic settlement according to any one of claims 1 to 19 and/or at least one of the unmanned checkout counters according to claim 20.
PCT/CN2018/107958 2017-09-27 2018-09-27 Human-machine interaction device for automatic settlement and application thereof WO2019062812A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201710890989 2017-09-27
CN201710890989.1 2017-09-27
CN201811125176.4 2018-09-26
CN201811125176.4A CN109559453A (zh) 2017-09-27 2018-09-26 Human-machine interaction device for automatic settlement and application thereof

Publications (1)

Publication Number Publication Date
WO2019062812A1 true WO2019062812A1 (zh) 2019-04-04

Family

ID=65864754

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/107958 WO2019062812A1 (zh) 2017-09-27 2018-09-27 用于自动结算的人机交互装置及其应用

Country Status (2)

Country Link
CN (1) CN109559453A (zh)
WO (1) WO2019062812A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112466035B (zh) * 2019-09-06 2022-08-12 图灵通诺(北京)科技有限公司 Commodity identification method, device and system based on vision and gravity sensing
CN110992140A (zh) * 2019-11-28 2020-04-10 浙江由由科技有限公司 Matching method and system for recognition models
CN111310618A (zh) * 2020-02-03 2020-06-19 北京百度网讯科技有限公司 Article identification method and apparatus, electronic device, and readable storage medium
WO2021232333A1 (en) * 2020-05-21 2021-11-25 Shenzhen Malong Technologies Co., Ltd. System and methods for express checkout
CN112270389B (zh) * 2020-10-22 2022-08-30 贵州省生物技术研究所(贵州省生物技术重点实验室、贵州省马铃薯研究所、贵州省食品加工研究所) Food safety detection device based on big data processing and method of use thereof
CN113326894A (zh) * 2021-06-23 2021-08-31 中国农业银行股份有限公司 Imaging instrument
CN116090928A (zh) * 2023-01-14 2023-05-09 大参林医药集团股份有限公司 Logistics sorting system and control method, control device, and storage medium therefor
CN116029630A (zh) * 2023-01-14 2023-04-28 大参林医药集团股份有限公司 Commodity sorting and review system
CN117789380B (zh) * 2023-12-26 2024-07-30 深圳市君时达科技有限公司 Self-service settlement method and system for shopping checkout, electronic device, and medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101894538B (zh) * 2010-07-15 2012-09-05 优视科技有限公司 Screen display picture control method and device
JP5132733B2 (ja) * 2010-08-23 2013-01-30 東芝テック株式会社 Store system and program
DE202013001532U1 (de) * 2013-02-16 2013-03-15 Certus Warensicherungs-Systeme Gmbh Device for visually monitoring passage at a commercially available checkout PC in a supermarket
WO2015163460A1 (ja) * 2014-04-25 2015-10-29 佳弘 東 Payment assistance device, payment assistance method, and program
CN106815237B (zh) * 2015-11-30 2020-08-21 北京睿创投资管理中心(有限合伙) Search method, search device, user terminal, and search server
CN106875203A (zh) * 2015-12-14 2017-06-20 阿里巴巴集团控股有限公司 Method and device for determining style information of a commodity picture
CN105787490A (zh) * 2016-03-24 2016-07-20 南京新与力文化传播有限公司 Commodity trend identification method and device based on deep learning
CN107093269A (zh) * 2017-06-08 2017-08-25 上海励识电子科技有限公司 Intelligent unmanned vending system and method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101470929A (zh) * 2007-12-28 2009-07-01 东芝泰格有限公司 Commodity sales data processing apparatus and commodity sales data processing method
JP2014235530A (ja) * 2013-05-31 2014-12-15 Kddi株式会社 Self-shopping system, portable terminal, computer program, and self-shopping method
CN204883934U (zh) * 2015-07-27 2015-12-16 陈若春 Canteen electronic checkout system
CN106326852A (zh) * 2016-08-18 2017-01-11 无锡天脉聚源传媒科技有限公司 Commodity identification method and device based on deep learning
CN106504074A (zh) * 2016-11-14 2017-03-15 上海卓易科技股份有限公司 Intelligent shopping method and system
CN106781014A (zh) * 2017-01-24 2017-05-31 广州市蚁道互联网有限公司 Vending machine and operation method thereof

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111988579A (zh) * 2020-08-31 2020-11-24 杭州海康威视系统技术有限公司 Data auditing method and system, and electronic device
CN111988579B (zh) * 2020-08-31 2022-05-31 杭州海康威视系统技术有限公司 Data auditing method and system, and electronic device
CN113509265A (zh) * 2021-04-01 2021-10-19 上海复拓知达医疗科技有限公司 Dynamic position identification and prompting system and method
WO2023002510A1 (en) * 2021-07-23 2023-01-26 Medaara Health Care Technologies Private Limited An unmanned apparatus for measuring the wellness parameters of a person
CN114049107A (zh) * 2021-11-22 2022-02-15 深圳市智百威科技发展有限公司 Efficient payment and collection system and method
CN114049107B (zh) * 2021-11-22 2022-08-05 深圳市智百威科技发展有限公司 Payment and collection system and method
CN114613024A (zh) * 2022-03-23 2022-06-10 成都智元汇信息技术股份有限公司 Method, device and system for pushing face recognition pictures based on motion data

Also Published As

Publication number Publication date
CN109559453A (zh) 2019-04-02

Similar Documents

Publication Publication Date Title
WO2019062812A1 (zh) Human-machine interaction device for automatic settlement and application thereof
CN109559457B Checkout method based on neural-network commodity recognition, and self-service checkout counter
JP6709320B6 Accounting method and accounting equipment using convolutional neural network image recognition technology
WO2019062018A1 Automatic commodity settlement method and apparatus, and self-service checkout counter
CN107464116B Order settlement method and system
CN110866429B Missed-scan identification method and apparatus, self-service checkout terminal, and system
CN109508974B Shopping checkout system and method based on feature fusion
KR102358607B1 Artificial intelligence appraisal system, artificial intelligence appraisal method, and recording medium
US10510218B2 Information processing apparatus, information processing method, and non-transitory storage medium
JP6549558B2 Sales registration device, program, and sales registration method
US10372998B2 Object recognition for bottom of basket detection
CN111222870B Settlement method, apparatus, and system
JP7264401B2 Accounting method, device, and system
WO2019165895A1 Automatic vending method and system, and automatic vending apparatus and vending machine
CN110689389A Computer-vision-based automatic shopping list maintenance method and apparatus, storage medium, and terminal
CN111199410A Commodity management method and apparatus, and intelligent shelf
CN211262448U Intelligent scale
CN111311225A Under-screen payment method and apparatus based on optical module encryption
US20240005750A1 Event-triggered capture of item image data and generation and storage of enhanced item identification data
CN118736724A Online status monitoring and management system for intelligent vending machines
CN111310493A Method and apparatus for reading QR codes under a screen based on multiple sensors
CN111311240A Method and apparatus for multi-user accessible QR code payment on iOS
CN111311242A Method and apparatus for fast under-screen QR code reading
CN111310499A Method and apparatus for under-screen QR code reading based on photoelectric sensors
CN111310494A Method and apparatus for under-screen QR code reading based on dual-screen display

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18862694

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 19/08/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18862694

Country of ref document: EP

Kind code of ref document: A1