CN109003390B - Commodity identification method, unmanned vending machine and computer-readable storage medium - Google Patents


Info

Publication number
CN109003390B
CN109003390B (application CN201810696427.8A)
Authority
CN
China
Prior art keywords
image
commodity
video data
commodities
vending machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810696427.8A
Other languages
Chinese (zh)
Other versions
CN109003390A (en
Inventor
林丽梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hetai Intelligent Home Appliance Controller Co ltd
Original Assignee
Shenzhen Het Data Resources and Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Het Data Resources and Cloud Technology Co Ltd filed Critical Shenzhen Het Data Resources and Cloud Technology Co Ltd
Priority to CN201810696427.8A priority Critical patent/CN109003390B/en
Publication of CN109003390A publication Critical patent/CN109003390A/en
Application granted granted Critical
Publication of CN109003390B publication Critical patent/CN109003390B/en

Classifications

    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F9/00Details other than those peculiar to special kinds or types of apparatus
    • G07F9/002Vending machines being part of a centrally controlled network of vending machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Control Of Vending Devices And Auxiliary Devices For Vending Devices (AREA)

Abstract

The embodiments of the invention provide a commodity identification method, an unmanned vending machine, and a computer-readable storage medium. The method is applied to the unmanned vending machine and includes the following steps: when the door of the unmanned vending machine is detected to be opened, collecting video data inside the unmanned vending machine through a camera; identifying the commodities included in the video data through a trained target object detection model to obtain the type, number, and position of the commodities in the images; and determining the type and number of the commodities taken away by a user according to the type, number, and position of the commodities in the images and the highest position, where the highest position is the highest among the positions of the commodities in the background image identified by the trained target object detection model, and the background image is an image inside the unmanned vending machine collected by the camera before the door of the unmanned vending machine was opened. Implementing the embodiments of the invention can improve commodity identification accuracy.

Description

Commodity identification method, unmanned vending machine and computer-readable storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a commodity identification method, an unmanned vending machine and a computer readable storage medium.
Background
With the continuous development of artificial intelligence technology, unmanned vending machines have gradually appeared in people's lives, so how to identify the commodities taken away by a user from an unmanned vending machine has become a technical problem to be solved urgently. At present, a common commodity identification method works as follows: when the door of the unmanned vending machine is detected to be closed, the Radio Frequency Identification (RFID) tag on each commodity in the unmanned vending machine is scanned, the scanned RFID tags are compared with the stored RFID tags, and the commodities corresponding to those stored tags that are absent from the scanned tags are determined to be the commodities taken away by the user. However, because RFID tags are easily damaged, the commodity identification accuracy of this method is low.
Disclosure of Invention
The embodiment of the invention provides a commodity identification method, an unmanned vending machine and a computer readable storage medium, which are used for improving the commodity identification accuracy.
A first aspect provides a commodity identification method, which is applied to an unmanned vending machine and includes:
when the door of the unmanned vending machine is detected to be opened, collecting video data inside the unmanned vending machine through a camera;
identifying the commodities included in the video data through a trained target object detection model to obtain the type, number, and position of the commodities in the images;
determining the type and number of the commodities taken away by a user according to the type, number, and position of the commodities in the images and the highest position, where the highest position is the highest among the positions of the commodities in the background image identified by the trained target object detection model, and the background image is the image inside the unmanned vending machine collected by the camera before the door of the unmanned vending machine was opened.
A second aspect provides an unmanned vending machine comprising means for performing the commodity identification method provided by the first aspect.
A third aspect provides an unmanned vending machine, including a processor, a memory, a camera, and a transceiver that are connected to each other, where the camera is configured to collect video data, the transceiver is configured to communicate with an electronic device, the memory is configured to store a computer program including program instructions, and the processor is configured to call the program instructions to execute the commodity identification method provided in the first aspect.
A fourth aspect provides a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the commodity identification method provided by the first aspect.
A fifth aspect provides an application program configured to execute the commodity identification method provided by the first aspect when running.
In the embodiments of the invention, when the door of the unmanned vending machine is detected to be opened, video data inside the unmanned vending machine is collected through the camera, the commodities included in the video data are identified through the trained target object detection model to obtain the type, number, and position of the commodities in the images, and the type and number of the commodities taken away by the user are determined according to the type, number, and position of the commodities in the images and the highest position. Because the type, number, and position of the commodities in the images of the video data are identified first, and the type and number of the commodities taken away by the user are then determined from that information together with the highest position, the commodities taken away by the user can be accurately determined, and the commodity identification accuracy can therefore be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a commodity identification method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another commodity identification method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an unmanned vending machine according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another unmanned vending machine according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of another unmanned vending machine according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an image after a difference image is subjected to binarization processing according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a background image according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a commodity being taken away by a user according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a commodity identification method, an unmanned vending machine and a computer readable storage medium, which are used for improving the commodity identification accuracy. The following are detailed below.
Referring to fig. 1, fig. 1 is a schematic flow chart of a commodity identification method according to an embodiment of the present invention. The commodity identification method is applied to the unmanned vending machine. As shown in fig. 1, the commodity identification method may include the following steps.
101. When it is detected that the door of the unmanned vending machine is opened, collect video data inside the unmanned vending machine through the camera.
In this embodiment, when the door of the unmanned vending machine is opened by a user, it indicates that the user needs to purchase commodities from the unmanned vending machine. Therefore, when it is detected that the door of the unmanned vending machine is opened, video data inside the unmanned vending machine is collected through the camera for determining the commodities taken away by the user. There may be a single camera, in which case it must be installed where it can capture all areas inside the unmanned vending machine; alternatively, the shelves may be arranged in multiple layers with one or more cameras on each shelf layer, or one camera for every two shelf layers, or any other installation manner, as long as the collection areas of all the cameras together cover the entire interior of the unmanned vending machine.
102. Identify the commodities included in the video data through the trained target object detection model to obtain the type, number, and position of the commodities in the images.
In this embodiment, after the video data inside the unmanned vending machine is collected by the camera, the commodities included in the video data (or in each frame of the video data) are identified through the trained target object detection model to obtain the type, number, and position of the commodities in the images. The trained target object detection model may be an SSD network model, a YOLO network model, a Faster R-CNN network model, or another model capable of identifying the target object.
In this embodiment, a difference image may be obtained by performing a difference operation between each frame of the video data and the background image, a binary difference image may be obtained by binarizing the difference image, contour areas may be obtained by performing edge detection on the binary difference image, the images whose contour areas are larger than a threshold may be selected from the video data as change frame images, and the type, number, and position of the commodities in the images may be obtained by identifying the commodities included in the change frame images through the trained target object detection model. Performing the difference operation between each frame and the background image means taking the absolute value of the pixel difference at each corresponding pixel position of the frame and the background image. Binarizing the difference image allows small changes between a frame and the background image to be ignored while large changes are highlighted. Referring to fig. 6, fig. 6 is a schematic diagram of an image obtained after binarizing a difference image according to an embodiment of the present invention. The image shown in fig. 6 was obtained by setting pixels whose absolute pixel difference is greater than or equal to 25 to 255 and pixels whose absolute pixel difference is less than 25 to 0. As shown in fig. 6, the binarized image contains only the two colors black and white. The background image is an image inside the unmanned vending machine collected through the camera before the door of the unmanned vending machine was opened.
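The frame-differencing and binarization steps above can be sketched as follows. This is a toy, pure-Python illustration: the threshold of 25 and the 255/0 output values follow the patent's fig. 6 example, but the tiny 2×3 "images" are invented, and a real implementation would apply a library such as OpenCV to full camera frames.

```python
def difference_image(frame, background):
    """Absolute value of the pixel difference at each corresponding position."""
    return [[abs(f - b) for f, b in zip(f_row, b_row)]
            for f_row, b_row in zip(frame, background)]

def binarize(diff, threshold=25):
    """Set pixels with |difference| >= threshold to 255, the rest to 0."""
    return [[255 if p >= threshold else 0 for p in row] for row in diff]

background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 80, 10],   # a large change where a commodity moved
              [12, 10, 90]]   # small sensor noise (12) plus one real change

binary = binarize(difference_image(frame, background))
# Small changes (|12 - 10| = 2 < 25) are ignored; large ones are highlighted.
changed_pixels = sum(p == 255 for row in binary for p in row)
```

Selecting as change frames only the frames whose resulting contour areas exceed a threshold then reduces the number of frames the detection model must process.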
In this embodiment, the background image may be converted into a grayscale image to obtain a grayscale background image, and the grayscale background image may then be Gaussian-smoothed to obtain a smooth background image, thereby blurring the background image. Because Gaussian smoothing operates on grayscale images, an image needs to be converted to grayscale before it is smoothed. Likewise, before the commodities included in the video data are identified through the trained target object detection model, each frame of the video data may be converted into a grayscale image to obtain grayscale video data, and the grayscale video data may be Gaussian-smoothed to obtain smooth video data. Then, when the difference operation is performed to obtain the difference images, it can be performed between each frame of the smooth video data and the smooth background image.
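A minimal sketch of the grayscale conversion and Gaussian smoothing described above. The ITU-R BT.601 luminance weights and the 3×3 Gaussian kernel are standard choices, not taken from the patent; a real pipeline would call library routines such as OpenCV's cv2.cvtColor and cv2.GaussianBlur instead.

```python
def to_gray(rgb):
    """Weighted RGB-to-grayscale conversion (BT.601 luminance weights)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

GAUSS_3X3 = [[1, 2, 1],
             [2, 4, 2],
             [1, 2, 1]]  # integer kernel approximation; weights sum to 16

def gaussian_smooth(img):
    """Convolve interior pixels with the 3x3 kernel; borders are kept as-is."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = sum(GAUSS_3X3[dy][dx] * img[y - 1 + dy][x - 1 + dx]
                      for dy in range(3) for dx in range(3))
            out[y][x] = acc / 16
    return out

# A uniform grayscale patch is unchanged by smoothing (a sanity check).
gray = [[to_gray((100, 150, 200))] * 3 for _ in range(3)]
smooth = gaussian_smooth(gray)
```

Smoothing both the frames and the background image before differencing suppresses pixel-level noise that would otherwise show up as spurious differences.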
In this embodiment, in order to eliminate fine noise points in the binary difference image, before the commodities included in the video data are identified through the trained target object detection model, a processed difference image may be obtained by performing dilation and erosion (morphological) processing on the binary difference image. In that case, the contour areas are obtained by performing edge detection on the processed difference image rather than on the binary difference image.
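The dilation-and-erosion noise removal above can be sketched as a morphological "opening" (erode, then dilate) on a toy binary image. This is a hedged illustration, not the patent's implementation: a real system would use library routines such as cv2.erode / cv2.dilate, and out-of-bounds neighbours are treated as 0 here for simplicity.

```python
def _neighbourhood(img, y, x):
    """The 3x3 neighbourhood of (y, x); out-of-bounds pixels count as 0."""
    h, w = len(img), len(img[0])
    return [img[y + dy][x + dx] if 0 <= y + dy < h and 0 <= x + dx < w else 0
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def erode(img):
    """A pixel stays 255 only if its whole 3x3 neighbourhood is 255."""
    return [[255 if min(_neighbourhood(img, y, x)) == 255 else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def dilate(img):
    """A pixel becomes 255 if any pixel in its 3x3 neighbourhood is 255."""
    return [[255 if max(_neighbourhood(img, y, x)) == 255 else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def remove_noise(img):
    """Opening: erosion deletes isolated specks, dilation restores real shapes."""
    return dilate(erode(img))

# An isolated noise pixel disappears after opening...
noise = [[0] * 5 for _ in range(5)]
noise[2][2] = 255
cleaned = remove_noise(noise)

# ...while a solid 3x3 block (a real changed region) survives.
block = [[0] * 5 for _ in range(5)]
for y in range(1, 4):
    for x in range(1, 4):
        block[y][x] = 255
kept = remove_noise(block)
```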
103. Determine the type and number of the commodities taken away by the user according to the type, number, and position of the commodities in the images and the highest position.
In this embodiment, after the commodities included in the video data are identified through the trained target object detection model to obtain the type, number, and position of the commodities in the images, the type and number of the commodities taken away by the user are determined according to the type, number, and position of the commodities in the images and the highest position: the commodities whose position is higher than the highest position may be selected from the commodities in the images as the commodities taken away by the user, and their type and number then determined. The highest position is the highest among the positions of the commodities in the background image recognized by the trained target object detection model. Referring to fig. 7, fig. 7 is a schematic diagram of a background image according to an embodiment of the present invention; the dotted line in fig. 7 marks the highest position. Referring to fig. 8, fig. 8 is a schematic diagram of a commodity being taken away by a user according to an embodiment of the present invention. As shown in fig. 8, when a user takes a commodity away, the commodity is picked up, and during this process its position rises above the highest position in the background image, so the commodity taken away by the user can be determined through the highest position. Different background images may have different highest positions, so when there are multiple background images there may also be multiple corresponding highest positions.
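The highest-position rule above can be illustrated with a short sketch. The detection tuples, names, and coordinate convention are assumptions for illustration: image coordinates are taken to grow downward, so a smaller y value means a higher position in the frame.

```python
def commodities_taken(detections, highest_y):
    """detections: list of (kind, top_y) pairs from the detection model.
    Returns {kind: count} for commodities whose top edge rises above
    (i.e. has a smaller y than) the background image's highest position."""
    taken = {}
    for kind, top_y in detections:
        if top_y < highest_y:
            taken[kind] = taken.get(kind, 0) + 1
    return taken

# Two colas detected: one still on the shelf (y=120), one lifted above the
# highest position (y=40); one bag of chips is also lifted (y=35).
detections = [("cola", 40), ("cola", 120), ("chips", 35)]
taken = commodities_taken(detections, highest_y=50)
```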
In the commodity identification method described in fig. 1, when it is detected that the door of the unmanned vending machine is opened, video data inside the unmanned vending machine is collected through the camera, the commodities included in the video data are identified through the trained target object detection model to obtain the type, number, and position of the commodities in the images, and the type and number of the commodities taken away by the user are determined according to the type, number, and position of the commodities in the images and the highest position. Because the commodities in the images are identified first and the commodities taken away by the user are then determined from that information together with the highest position, the commodities taken away by the user can be accurately determined, and the commodity identification accuracy can therefore be improved.
Referring to fig. 2, fig. 2 is a schematic flow chart of another commodity identification method according to an embodiment of the present invention. The commodity identification method is applied to the unmanned vending machine. As shown in fig. 2, the commodity identification method may include the following steps.
201. Collect commodity images of each commodity among all the commodities at different angles and different distances.
In this embodiment, commodity images of all the commodities that the unmanned vending machine needs to sell, that is, images of each commodity at different angles and different distances, may be collected in advance.
202. Label the position and type of the commodity in each of the commodity images to obtain labeling information.
In this embodiment, after the commodity images of each commodity at different angles and different distances are collected, the position and type of the commodity in each of the commodity images are labeled to obtain labeling information. Labeling the position of the commodity in an image may mean labeling the top-left and bottom-right corner coordinates of the commodity, or labeling its top-right and bottom-left corner coordinates.
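One possible annotation record for step 202, assuming the top-left / bottom-right corner convention mentioned above. The field names and the file name are illustrative, not taken from the patent.

```python
annotation = {
    "image": "cola_angle1_dist1.jpg",  # hypothetical file name
    "label": "cola",
    "box": {"x1": 12, "y1": 30,        # top-left corner
            "x2": 88, "y2": 170},      # bottom-right corner
}

def box_size(box):
    """Width and height recovered from the two labeled corners."""
    return box["x2"] - box["x1"], box["y2"] - box["y1"]

width, height = box_size(annotation["box"])
```

Two opposite corners are sufficient because the width and height of the box, and hence the full rectangle, can be recovered from them.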
203. Convert the commodity images into images with a set pixel size to obtain converted images.
In this embodiment, after the commodity images of each commodity at different angles and different distances are collected, the commodity images are converted into images with a set pixel size to obtain converted images, that is, the commodity images are converted into images of the same height and width (all m × n pixels), so that the training rate can be increased when the target object detection model is trained in step 204. Step 202 and step 203 may be executed in parallel or in series.
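A nearest-neighbour sketch of the "convert to m × n pixels" step above; real code would use a library resize routine (e.g. cv2.resize) with a better interpolation method. The values of m, n and the toy image are illustrative.

```python
def resize_nearest(img, m, n):
    """Resize a 2D image to m rows by n columns by nearest-neighbour sampling."""
    h, w = len(img), len(img[0])
    return [[img[y * h // m][x * w // n] for x in range(n)] for y in range(m)]

small = [[1, 2],
         [3, 4]]
converted = resize_nearest(small, 4, 4)  # each source pixel becomes a 2x2 block
```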
204. Train the target object detection model with the converted images and the labeling information to obtain the trained target object detection model.
In this embodiment, after the position and type of the commodity in each of the commodity images are labeled to obtain the labeling information and the commodity images are converted into images with the set pixel size to obtain the converted images, the converted images and the labeling information are used to train the target object detection model to obtain the trained target object detection model. The trained target object detection model has the same model architecture as the original target object detection model but different parameters. Both may be an SSD network model, a YOLO network model, a Faster R-CNN network model, or another model capable of identifying the target object.
In one embodiment, after the position and type of the commodity in each of the commodity images are labeled to obtain the labeling information, the commodity images and the labeling information may be used directly to train the target object detection model to obtain the trained target object detection model, i.e., step 203 is not performed.
205. When it is detected that a connection with an electronic device has been established through the two-dimensional code including the identifier of the unmanned vending machine, open the door lock of the unmanned vending machine.
In this embodiment, the door lock of the unmanned vending machine is normally in the closed state. When a user scans the two-dimensional code including the identifier of the unmanned vending machine with an electronic device such as a mobile phone, the electronic device and the unmanned vending machine establish a connection. Therefore, when it is detected that such a connection has been established, it indicates that someone needs to purchase commodities from the unmanned vending machine, and the door lock of the unmanned vending machine is opened so that the user can pull open the door and take commodities.
206. When it is detected that the door of the unmanned vending machine is opened, collect video data inside the unmanned vending machine through the camera.
In this embodiment, when the door of the unmanned vending machine is opened by a user, it indicates that the user needs to purchase commodities from the unmanned vending machine. Therefore, when it is detected that the door of the unmanned vending machine is opened, video data inside the unmanned vending machine is collected through the camera for determining the commodities taken away by the user. There may be a single camera, in which case it must be installed where it can capture all areas inside the unmanned vending machine; alternatively, the shelves may be arranged in multiple layers with one or more cameras on each shelf layer, or one camera for every two shelf layers, or any other installation manner, as long as the collection areas of all the cameras together cover the entire interior of the unmanned vending machine.
207. Identify the commodities included in the video data through the trained target object detection model to obtain the type, number, and position of the commodities in the images.
In this embodiment, after the video data inside the unmanned vending machine is collected by the camera, the commodities included in the video data (or in each frame of the video data) are identified through the trained target object detection model to obtain the type, number, and position of the commodities in the images. When step 203 is executed, each frame of the video data needs to be converted into an image with the set pixel size to obtain set video data, and the commodities included in the set video data can then be identified through the trained target object detection model to obtain the type, number, and position of the commodities in the images.
In this embodiment, a difference image may be obtained by performing a difference operation between each frame of the video data and the background image, a binary difference image may be obtained by binarizing the difference image, contour areas may be obtained by performing edge detection on the binary difference image, the images whose contour areas are larger than a threshold may be selected from the video data as change frame images, and the type, number, and position of the commodities in the images may be obtained by identifying the commodities included in the change frame images through the trained target object detection model. Performing the difference operation between each frame and the background image means taking the absolute value of the pixel difference at each corresponding pixel position of the frame and the background image. Binarizing the difference image allows small changes between a frame and the background image to be ignored while large changes are highlighted. Referring to fig. 6, fig. 6 is a schematic diagram of an image obtained after binarizing a difference image according to an embodiment of the present invention. The image shown in fig. 6 was obtained by setting pixels whose absolute pixel difference is greater than or equal to 25 to 255 and pixels whose absolute pixel difference is less than 25 to 0. As shown in fig. 6, the binarized image contains only the two colors black and white. The background image is an image inside the unmanned vending machine collected through the camera before the door of the unmanned vending machine was opened.
In the above manner, when step 203 is executed, each of the change frame images needs to be converted into an image with the set pixel size to obtain set frame images, and the commodities included in the set frame images can then be identified through the trained target object detection model to obtain the type, number, and position of the commodities in the images. Whether the frames of the video data or the change frame images need to be converted to the set pixel size depends on the images used to train the target object detection model: when the training images were not unified to the set pixel size, the frames of the video data and the change frame images do not need to be converted.
In this embodiment, the background image may be converted into a grayscale image to obtain a grayscale background image, and the grayscale background image may then be Gaussian-smoothed to obtain a smooth background image, thereby blurring the background image. Because Gaussian smoothing operates on grayscale images, an image needs to be converted to grayscale before it is smoothed. Likewise, before the commodities included in the video data are identified through the trained target object detection model, each frame of the video data may be converted into a grayscale image to obtain grayscale video data, and the grayscale video data may be Gaussian-smoothed to obtain smooth video data. Then, when the difference operation is performed to obtain the difference images, it can be performed between each frame of the smooth video data and the smooth background image.
In this embodiment, in order to eliminate fine noise points in the binary difference image, before the commodities included in the video data are identified through the trained target object detection model, a processed difference image may be obtained by performing dilation and erosion (morphological) processing on the binary difference image. In that case, the contour areas are obtained by performing edge detection on the processed difference image rather than on the binary difference image.
208. Determine the type and number of the commodities taken away by the user according to the type, number, and position of the commodities in the images and the highest position.
In this embodiment, after the commodities included in the video data are identified through the trained target object detection model to obtain the type, number, and position of the commodities in the images, the type and number of the commodities taken away by the user are determined according to the type, number, and position of the commodities in the images and the highest position: the commodities whose position is higher than the highest position may be selected from the commodities in the images as the commodities taken away by the user, and their type and number then determined. The highest position is the highest among the positions of the commodities in the background image recognized by the trained target object detection model. Referring to fig. 7, fig. 7 is a schematic diagram of a background image according to an embodiment of the present invention; the dotted line in fig. 7 marks the highest position. Referring to fig. 8, fig. 8 is a schematic diagram of a commodity being taken away by a user according to an embodiment of the present invention. As shown in fig. 8, when a user takes a commodity away, the commodity is picked up, and during this process its position rises above the highest position in the background image, so the commodity taken away by the user can be determined through the highest position. Different background images may have different highest positions, so when there are multiple background images there may also be multiple corresponding highest positions.
209. And calculating the commodity price of the commodity taken away by the user according to the type and the quantity of the commodity taken away by the user, and deducting the commodity price amount through the electronic equipment.
In this embodiment, after the type and number of the commodities taken away by the user are determined according to the type, number, and position of the image commodities and the highest position, a number equal to 0 indicates that the user did not take any commodity from the unmanned vending machine and the transaction ends. When the number of the commodities taken away by the user is greater than 0, the user has taken commodities from the unmanned vending machine; the commodity price of the commodities taken away is calculated according to their type and number, and the commodity price amount is deducted through the electronic device. In this way, automatic settlement is achieved and the user is spared a manual checkout step, which improves the user experience.
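A minimal settlement sketch, assuming a hypothetical price table keyed by commodity type (the names and prices are invented for illustration):

```python
# Hypothetical price table for the commodities stocked in the machine.
PRICE_LIST = {"cola": 3.0, "water": 2.0}

def settle(taken_counts):
    """Return the total commodity price amount to deduct; 0 means the user
    took nothing and no deduction is made."""
    return sum(PRICE_LIST[kind] * n for kind, n in taken_counts.items())

amount = settle({"cola": 2, "water": 1})
# amount is the commodity price amount that would be deducted
# through the electronic device
```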
210. And sending payment information including the commodity price amount to the electronic equipment.
In this embodiment, after the commodity price amount is deducted by the electronic device, the payment information including the commodity price amount is sent to the electronic device.
In the commodity identification method described in fig. 2, the type, number, and position of the image commodity in the video data are identified, and then the type and number of the commodity taken away by the user are determined according to the type, number, and position of the image commodity and the highest position, so that the commodity taken away by the user can be accurately determined, and therefore, the commodity identification accuracy can be improved.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an unmanned vending machine according to an embodiment of the present invention. As shown in fig. 3, the vending machine may include:
the acquisition unit 301 is used for acquiring video data in the unmanned vending machine through the camera when the door of the unmanned vending machine is detected to be opened;
the identification unit 302 is used for identifying commodities included in the video data acquired by the acquisition unit 301 through a trained target object detection model, and obtaining the type, the number and the position of the image commodities;
a determining unit 303, configured to determine the type and number of the commodities taken away by the user according to the type, number, and position of the image commodities identified by the identifying unit 302 and a highest position, where the highest position is a highest position in positions of commodities in a background image identified by the trained target object detection model, and the background image is an image in the vending machine collected by a camera before a door of the vending machine is opened.
As a possible implementation, the identifying unit 302 may include:
a difference subunit 3021, configured to perform difference operation on each frame of image in the video data acquired by the acquisition unit 301 and a background image to obtain a difference image;
a binary subunit 3022, configured to perform binarization processing on the difference image obtained by the difference subunit 3021 to obtain a binary difference image;
a detection subunit 3023, configured to perform edge detection on the binary difference image obtained by the binary subunit 3022, to obtain an outline area;
an obtaining subunit 3024, configured to select, from the video data acquired by the acquisition unit 301, an image corresponding to the contour area obtained by the detection subunit 3023 being larger than a threshold as a change frame image;
a recognition subunit 3025, configured to recognize, through the trained target object detection model, the commodities included in the change frame image obtained by the obtaining subunit 3024, and obtain the type, number, and position of the commodity in the image.
As a possible implementation, the vending machine further comprises:
a first conversion unit 304, configured to convert the background image into a grayscale image, and obtain a grayscale background image;
a smoothing unit 305 configured to perform gaussian smoothing on the grayscale background image obtained by the first conversion unit 304 to obtain a smoothed background image;
the recognition unit 302 may further include:
the first conversion subunit 3026 is configured to convert each frame of image in the video data acquired by the acquisition unit 301 into a grayscale image, so as to obtain grayscale video data;
a smoothing subunit 3027, configured to perform gaussian smoothing processing on the grayscale video data obtained by the first conversion subunit 3026 to obtain smoothed video data;
the difference sub-unit 3021 is specifically configured to perform a difference operation on each frame of image in the smoothed video data obtained by the smoothing sub-unit 3027 and the smoothed background image obtained by the smoothing unit 305 to obtain a difference image.
As a possible implementation, the identifying unit 302 may further include:
a processing subunit 3028, configured to perform dilation-erosion processing on the binary difference image obtained by the binary subunit 3022 to obtain a processed difference image;
the detection subunit 3023 is specifically configured to perform edge detection on the processed difference image obtained by the processing subunit 3028 to obtain a contour area.
As a possible implementation, the determining unit 303 may include:
a selecting sub-unit 3031 for selecting, from the image commodities identified by the identifying sub-unit 3025, a commodity whose position is higher than the highest position as a commodity taken away by the user;
the determining subunit 3032 is used for determining the type and the number of the commodities taken away by the user, which are selected by the selecting subunit 3031, according to the type and the number of the image commodities identified by the identifying subunit 3025.
As a possible implementation manner, the collecting unit 301 is further configured to collect commodity images of different angles and different distances of each commodity in all commodities;
the vending machine may further include:
the labeling unit 306 is used for labeling the position and the type of the commodity in each image in the commodity image collected by the collecting unit 301 to obtain labeling information;
the training unit 307 is configured to train the target object detection model using the commodity image acquired by the acquisition unit 301 and the labeling information labeled by the labeling unit 306, and obtain the trained target object detection model.
As a possible implementation, the unmanned vending machine may further include:
a second conversion unit 308, configured to convert the commodity image acquired by the acquisition unit 301 into an image with set pixels, so as to obtain a converted image;
the training unit 307 is specifically configured to train the target object detection model by using the converted image obtained by the conversion performed by the second conversion unit 308 and the labeling information labeled by the labeling unit 306, so as to obtain a trained target object detection model.
Specifically, the identifying subunit 3025 is configured to identify, through the trained target object detection model obtained by the training unit 307, the commodities included in the change frame image selected by the obtaining subunit 3024, and obtain the type, number, and position of the image commodities.
As a possible implementation, the identifying unit 302 may further include: a second conversion subunit 3029, configured to convert the change frame image obtained by the obtaining subunit 3024 into an image of set pixels to obtain a set frame image;
a recognition subunit 3025, configured to recognize, through a trained target object detection model, a commodity included in the setting frame image obtained by the second conversion subunit 3029, and obtain the type, number, and position of the image commodity.
As a possible implementation, the unmanned vending machine may further include:
a detecting unit 309, configured to detect whether a connection is established with the electronic device through a two-dimensional code including the identification of the unmanned vending machine;
An opening unit 310 for opening a door lock of the vending machine when the detection unit 309 detects that a connection is established with the electronic device through the two-dimensional code including the identification of the vending machine.
As a possible implementation, when the number of the commodities taken away by the user is greater than 0, the unmanned vending machine may further include:
a calculation unit 311 for calculating the commodity price of the commodity taken away by the user based on the kind and the number of the commodity taken away by the user determined by the determination subunit 3032;
and a deduction unit 312, configured to deduct, through the electronic device, the price amount of the commodity calculated by the calculation unit 311.
As a possible implementation, the unmanned vending machine may further include:
a sending unit 313, configured to send payment information including the commodity price amount deducted by the deducting unit 312 to the electronic device detected by the detecting unit 309.
In the unmanned vending machine depicted in fig. 3, when it is detected that the door of the unmanned vending machine is opened, video data in the unmanned vending machine is collected through the camera, the commodities included in the video data are identified through the trained target object detection model to obtain the type, number, and position of the image commodities, and the type and number of the commodities taken away by the user are determined according to the type, number, and position of the image commodities and the highest position. Because the type, number, and position of the image commodities in the video data are identified first, and the commodities taken away by the user are then determined from those results together with the highest position, the commodities taken away by the user can be accurately determined, so the commodity identification accuracy can be improved.
It can be understood that the functions of the units of the unmanned vending machine according to this embodiment may be implemented according to the method in the foregoing embodiments of the commodity identification method; for the specific implementation process, reference may be made to the related description of those embodiments, which is not repeated here.
Referring to fig. 4, fig. 4 is a schematic structural diagram of another vending machine according to an embodiment of the present invention. As shown in fig. 4, the vending machine may include:
the acquisition unit 401 is used for acquiring video data in the unmanned vending machine through a camera when the door of the unmanned vending machine is detected to be opened;
the identification unit 402 is configured to identify, through a trained target object detection model, commodities included in the video data acquired by the acquisition unit 401, and obtain the type, number, and position of the image commodities;
a determining unit 403, configured to determine the type and number of the commodities taken away by the user according to the type, number and position of the image commodities identified by the identifying unit 402 and the highest position, where the highest position is the highest position in the positions of the commodities in the background image identified by the trained target object detection model, and the background image is an image in the vending machine collected by the camera before the door of the vending machine is opened.
As a possible implementation, the determining unit 403 may include:
a selecting sub-unit 4031 configured to select, from the image commodities identified by the identifying unit 402, a commodity whose position is higher than the highest position as a commodity taken away by the user;
a determining subunit 4032, configured to determine, according to the type and the number of the image commodities identified by the identifying unit 402, the type and the number of the commodities taken away by the user and selected by the selecting subunit 4031.
As a possible implementation, the collecting unit 401 is further configured to collect commodity images of different angles and different distances of each commodity in all commodities;
the vending machine may further include:
a labeling unit 404, configured to label a position and a type of a commodity in each image of the commodity images acquired by the acquisition unit 401, to obtain labeling information;
a training unit 405, configured to train the target object detection model using the commodity image acquired by the acquisition unit 401 and the labeling information labeled by the labeling unit 404, so as to obtain a trained target object detection model.
As a possible implementation, the unmanned vending machine may further include:
a conversion unit 406, configured to convert the commodity image acquired by the acquisition unit 401 into an image with set pixels, so as to obtain a converted image;
the training unit 405 is specifically configured to train the target object detection model by using the converted image obtained by conversion by the conversion unit 406 and the labeling information labeled by the labeling unit 404, so as to obtain a trained target object detection model.
As a possible implementation, the identifying unit 402 may include:
the conversion subunit 4021 is configured to convert each frame of image in the video data acquired by the acquisition unit 401 into an image of a set pixel, so as to obtain set video data;
the identifying subunit 4022 is configured to identify, through the trained target object detection model, the commodities included in the set video data obtained by the converting subunit 4021, and obtain the type, number, and position of the image commodities.
Specifically, the identifying subunit 4022 is configured to identify the commodities included in the set video data through the trained target object detection model obtained by the training unit 405, and obtain the type, number, and position of the image commodities.
As a possible implementation, the unmanned vending machine may further include:
a detecting unit 407, configured to detect whether a connection is established with the electronic device through a two-dimensional code including the identification of the unmanned vending machine;
An opening unit 408 for opening a door lock of the vending machine when the detection unit 407 detects that a connection is established with the electronic device through the two-dimensional code including the identification of the vending machine.
As a possible implementation, when the number of the commodities taken away by the user is greater than 0, the unmanned vending machine may further include:
a calculating unit 409 for calculating the commodity price of the commodity taken away by the user according to the type and the quantity of the commodity taken away by the user determined by the determining subunit 4032;
and a deduction unit 410, configured to deduct, by an electronic device, the price amount of the commodity calculated by the calculation unit 409.
As a possible implementation, the unmanned vending machine may further include:
the sending unit 411 is configured to send payment information including the price amount of the commodity deducted by the deducting unit 410 to the electronic device detected by the detecting unit 407.
In the unmanned vending machine depicted in fig. 4, when it is detected that the door of the unmanned vending machine is opened, video data in the unmanned vending machine is collected through the camera, the commodities included in the video data are identified through the trained target object detection model to obtain the type, number, and position of the image commodities, and the type and number of the commodities taken away by the user are determined according to the type, number, and position of the image commodities and the highest position. Because the type, number, and position of the image commodities in the video data are identified first, and the commodities taken away by the user are then determined from those results together with the highest position, the commodities taken away by the user can be accurately determined, so the commodity identification accuracy can be improved.
It can be understood that the functions of the units of the unmanned vending machine according to this embodiment may be implemented according to the method in the foregoing embodiments of the commodity identification method; for the specific implementation process, reference may be made to the related description of those embodiments, which is not repeated here.
Referring to fig. 5, fig. 5 is a schematic structural diagram of another unmanned vending machine according to an embodiment of the present invention. As shown in fig. 5, the vending machine may include at least one processor 501, a memory 502, at least one camera 503, a transceiver 504, and a bus 505, the processor 501, the memory 502, the camera 503, and the transceiver 504 being connected by the bus 505, wherein:
the camera 503 is used for collecting video data in the unmanned vending machine through the camera when the door of the unmanned vending machine is detected to be opened;
the memory 502 is used for storing a computer program comprising program instructions, and the processor 501 is used for calling the program instructions stored in the memory 502 to execute the following steps:
identifying commodities included in the video data through a trained target object detection model, and obtaining the type, the number and the position of the image commodities;
determining the type and the number of commodities taken away by a user according to the type, the number and the position of the commodities in the images and the highest position, wherein the highest position is the highest position in the positions of the commodities in the background image identified by the trained target object detection model, and the background image is an image in the unmanned vending machine collected by a camera before the door of the unmanned vending machine is opened.
As a possible implementation, the processor 501 identifies the commodity included in the video data through the trained target object detection model, and obtaining the type, number, and position of the image commodity includes:
carrying out a difference operation on each frame of image in the video data and a background image to obtain a difference image;
carrying out binarization processing on the difference image to obtain a binary difference image;
performing edge detection on the binary difference image to obtain a contour area;
selecting an image corresponding to the contour area larger than the threshold value from the video data as a change frame image;
and identifying the commodities included in the change frame image through the trained target object detection model, and obtaining the type, the quantity and the position of the image commodities.
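The four steps above (difference, binarization, area measurement, change-frame selection) can be sketched in plain Python; here the number of changed pixels stands in for the contour area, and the threshold values and sample grids are illustrative assumptions:

```python
def diff_image(frame, background):
    """Absolute difference between a frame and the background (grayscale grids)."""
    return [[abs(f - b) for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def binarize(img, thresh=25):
    """Binary difference image: 1 where the change exceeds the threshold."""
    return [[1 if v > thresh else 0 for v in row] for row in img]

def changed_area(binary):
    """Stand-in for the contour area: the count of changed pixels."""
    return sum(sum(row) for row in binary)

def change_frames(frames, background, area_thresh=2):
    """Keep only the frames whose changed region is larger than the threshold."""
    return [f for f in frames
            if changed_area(binarize(diff_image(f, background))) > area_thresh]

background = [[10, 10, 10], [10, 10, 10]]
still = [[12, 9, 10], [10, 11, 10]]       # sensor noise only, discarded
moved = [[200, 200, 10], [200, 200, 10]]  # a commodity moved into view, kept
kept = change_frames([still, moved], background)
# only the frame with a large changed region survives as a change frame
```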
As a possible implementation, the processor 501 is further configured to call the program instructions stored in the memory 502 to perform the following steps:
converting the background image into a gray level image to obtain a gray level background image;
performing Gaussian smoothing processing on the gray background image to obtain a smooth background image;
the processor 501 identifies the commodities included in the video data through the trained target object detection model, and obtaining the type, number and position of the image commodities further includes:
converting each frame of image in the video data into a gray level image to obtain gray level video data;
performing Gaussian smoothing processing on the gray level video data to obtain smooth video data;
the processor 501 performs a difference operation on each frame of image in the video data and the background image, and obtaining a difference image includes:
and carrying out difference operation on each frame of image in the smooth video data and the smooth background image to obtain a difference image.
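The grayscale conversion and Gaussian smoothing steps can be sketched as follows, assuming the common luma weights and a 3×3 Gaussian kernel; both choices are illustrative, since the patent does not fix them:

```python
def to_gray(rgb_img):
    """Luma-weighted grayscale conversion of an RGB pixel grid."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in rgb_img]

def gaussian_smooth(gray):
    """3x3 Gaussian blur with the standard 1-2-1 kernel (weights sum to 16);
    border pixels are left unfiltered for brevity."""
    kernel = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
    h, w = len(gray), len(gray[0])
    out = [row[:] for row in gray]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = sum(kernel[dy + 1][dx + 1] * gray[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = acc // 16
    return out

rgb = [[(255, 0, 0)] * 3] * 3   # a flat red patch
gray = to_gray(rgb)             # every pixel has the same luma value
smooth = gaussian_smooth(gray)  # a flat region is unchanged by smoothing
```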
As a possible implementation, the processor 501 identifies the commodity included in the video data through the trained target object detection model, and obtaining the type, number, and position of the image commodity further includes:
performing dilation-erosion processing on the binary difference image to obtain a processed difference image;
the processor 501 performs edge detection on the binary difference image, and obtaining the contour area includes:
and carrying out edge detection on the processed difference image to obtain the contour area.
As one possible implementation, the processor 501 determining the type and number of the goods taken by the user according to the type, number and location of the image goods and the highest location comprises:
selecting the commodity with the position higher than the highest position from the image commodities as the commodity taken away by the user;
and determining the type and the number of the commodities taken away by the user according to the type and the number of the image commodities.
As a possible implementation, the camera 503 is further configured to collect commodity images of different angles and different distances of each commodity in all commodities;
the processor 501 is also configured to call the program instructions stored in the memory 502 to perform the following steps:
marking the position and the type of the commodity in each image in the commodity image to obtain marking information;
and training the target object detection model by using the commodity image and the labeling information to obtain the trained target object detection model.
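For illustration only, the labeling information produced by this step might resemble the record below; the field names and bounding-box convention are invented, not the patent's format:

```python
# Hypothetical annotation record for one training image: the position is a
# (x, y, w, h) bounding box and the type is the commodity kind.
annotation = {
    "image": "cola_front_30cm.jpg",  # invented filename: angle and distance vary
    "objects": [
        {"kind": "cola", "bbox": (12, 40, 60, 120)},
        {"kind": "water", "bbox": (80, 35, 55, 118)},
    ],
}
# one such record per collected commodity image forms the training set
```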
As a possible implementation, the processor 501 is further configured to call the program instructions stored in the memory 502 to perform the following steps:
converting the commodity image into an image with set pixels to obtain a converted image;
the processor 501 trains the target object detection model using the commodity image and the annotation information, and obtaining the trained target object detection model includes:
and training the target object detection model by using the converted image and the labeling information to obtain the trained target object detection model.
As a possible implementation, the processor 501 identifies the commodity included in the video data through the trained target object detection model, and obtaining the type, number, and position of the image commodity includes:
converting each frame of image in the video data into an image with set pixels to obtain set video data;
and identifying the commodities included in the set video data through the trained target object detection model, and obtaining the type, the number and the position of the image commodities.
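The conversion of each frame to an image of set pixels can be sketched with a nearest-neighbor resize; the resizing method is an assumption, since the patent only specifies the target pixel size:

```python
def resize_nearest(img, out_w, out_h):
    """Nearest-neighbor resize of a pixel grid to the set (out_w, out_h) size."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
            for y in range(out_h)]

frame = [[1, 2], [3, 4]]
resized = resize_nearest(frame, 4, 4)
# upscaling a 2x2 frame to 4x4 repeats each source pixel into a 2x2 block
```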
As a possible implementation, the processor 501 is further configured to call the program instructions stored in the memory 502 to perform the following steps:
and when the connection with the electronic equipment through the two-dimensional code comprising the identification of the unmanned vending machine is detected, opening the door lock of the unmanned vending machine.
As a possible implementation, when the number of the goods taken by the user is greater than 0, the processor 501 is further configured to call the program instructions stored in the memory 502 to perform the following steps:
calculating the commodity price of the commodity taken away by the user according to the type and the quantity of the commodity taken away by the user;
and deducting the price amount of the commodity through the electronic equipment.
As a possible implementation, the transceiver 504 is configured to send payment information including a price amount of the article to the electronic device.
In the unmanned vending machine depicted in fig. 5, when it is detected that the door of the unmanned vending machine is opened, video data in the unmanned vending machine is collected through the camera, the commodities included in the video data are identified through the trained target object detection model to obtain the type, number, and position of the image commodities, and the type and number of the commodities taken away by the user are determined according to the type, number, and position of the image commodities and the highest position. Because the type, number, and position of the image commodities in the video data are identified first, and the commodities taken away by the user are then determined from those results together with the highest position, the commodities taken away by the user can be accurately determined, so the commodity identification accuracy can be improved.
In one embodiment, a computer-readable storage medium is provided, which stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the commodity identification method of fig. 1 or fig. 2.
In one embodiment, an application program is provided, which, when running, performs the commodity identification method of fig. 1 or fig. 2.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, and the program may be stored in a computer-readable storage medium. The storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
The commodity identification method, the unmanned vending machine and the computer-readable storage medium provided by the embodiment of the invention are described in detail, a specific example is applied in the description to explain the principle and the implementation of the invention, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (12)

1. A commodity identification method, applied to an unmanned vending machine, comprising:
when the door of the unmanned vending machine is detected to be opened, video data in the unmanned vending machine are collected through a camera;
carrying out a difference operation on each frame of image in the video data and a background image to obtain a difference image;
carrying out binarization processing on the difference image to obtain a binary difference image;
performing edge detection on the binary difference image to obtain a contour area;
selecting an image corresponding to the contour area larger than a threshold value from the video data as a change frame image;
identifying commodities included in the change frame image through a trained target object detection model, and obtaining the type, the number and the position of the image commodities;
selecting commodities with positions higher than the highest position from the image commodities as commodities taken away by a user, wherein the highest position is the highest position in the positions of the commodities in the background image identified by the trained target object detection model, and the background image is an image in the unmanned vending machine collected by the camera before the door of the unmanned vending machine is opened;
and determining the type and the number of the commodities taken away by the user according to the type and the number of the image commodities.
2. The method of claim 1, further comprising:
converting the background image into a gray level image to obtain a gray level background image;
performing Gaussian smoothing processing on the gray background image to obtain a smooth background image;
the identifying the commodities included in the video data through the trained target object detection model, and the obtaining of the types, the quantities and the positions of the image commodities further comprises:
converting each frame of image in the video data into a gray level image to obtain gray level video data;
performing Gaussian smoothing processing on the gray level video data to obtain smooth video data;
the carrying out a difference operation on each frame of image in the video data and a background image to obtain a difference image comprises:
and carrying out difference operation on each frame of image in the smooth video data and the smooth background image to obtain a difference image.
3. The method of claim 2, wherein the identifying the commodity included in the video data by the trained target object detection model, and obtaining the type, number and position of the image commodity further comprises:
performing dilation-erosion processing on the binary difference image to obtain a processed difference image;
the edge detection of the binary difference image to obtain the contour area comprises:
and carrying out edge detection on the processed difference image to obtain the contour area.
4. The method of claim 1, further comprising:
acquiring commodity images of each commodity in all commodities at different angles and different distances;
marking the position and the type of the commodity in each image in the commodity image to obtain marking information;
and training a target object detection model by using the commodity image and the labeling information to obtain the trained target object detection model.
5. The method of claim 4, further comprising:
converting the commodity image into an image with set pixels to obtain a converted image;
the training of the target object detection model using the commodity image and the labeling information to obtain the trained target object detection model comprises:
and training a target object detection model by using the conversion image and the labeling information to obtain a trained target object detection model.
6. The method of claim 5, wherein the identifying the commodity included in the video data by the trained target object detection model, and obtaining the type, number and position of the image commodity comprises:
converting each frame of image in the video data into the image of the set pixel to obtain set video data;
and identifying the commodities included in the set video data through the trained target object detection model, and obtaining the type, the number and the position of the image commodities.
7. The method according to any one of claims 1-6, further comprising:
and when the connection with the electronic equipment through the two-dimensional code comprising the identification of the unmanned vending machine is detected, opening a door lock of the unmanned vending machine.
8. The method of claim 7, wherein when the number of items removed by the user is greater than 0, the method further comprises:
calculating the commodity price amount of the commodities taken by the user according to the type and the quantity of the commodities taken by the user; and
deducting the commodity price amount via the electronic device.
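The price calculation in claim 8 is a sum over the removed items of unit price times quantity. A minimal sketch (dictionary keys and the `price_table` lookup are illustrative, not from the patent):

```python
def commodity_price_amount(taken, price_table):
    """Total amount = sum over taken types of (unit price x quantity).

    `taken` maps commodity type -> number removed by the user;
    `price_table` maps commodity type -> unit price.
    """
    return sum(price_table[ctype] * qty for ctype, qty in taken.items() if qty > 0)
```

The resulting amount is what the claim then deducts via the connected electronic device.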
9. The method of claim 8, further comprising:
sending payment information including the commodity price amount to the electronic device.
10. An unmanned vending machine comprising units configured to perform the commodity identification method of any one of claims 1-9.
11. An unmanned vending machine comprising a processor, a memory, a camera, and a transceiver that are interconnected, wherein the camera is configured to capture video data, the transceiver is configured to communicate with an electronic device, the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the commodity identification method of any one of claims 1-9.
12. A computer-readable storage medium storing a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the commodity identification method of any one of claims 1-9.
CN201810696427.8A 2018-06-29 2018-06-29 Commodity identification method, unmanned vending machine and computer-readable storage medium Active CN109003390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810696427.8A CN109003390B (en) 2018-06-29 2018-06-29 Commodity identification method, unmanned vending machine and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810696427.8A CN109003390B (en) 2018-06-29 2018-06-29 Commodity identification method, unmanned vending machine and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN109003390A CN109003390A (en) 2018-12-14
CN109003390B true CN109003390B (en) 2021-08-10

Family

ID=64602109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810696427.8A Active CN109003390B (en) 2018-06-29 2018-06-29 Commodity identification method, unmanned vending machine and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN109003390B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829359A (en) * 2018-12-15 2019-05-31 深圳壹账通智能科技有限公司 Monitoring method, device, computer equipment and the storage medium in unmanned shop
CN109712315B (en) * 2018-12-27 2021-04-20 浪潮金融信息技术有限公司 Automatic vending machine cargo falling detection method based on double cameras
CN109712324B (en) * 2018-12-28 2020-12-18 广东便捷神科技股份有限公司 Vending machine image identification method, vending method and vending equipment
CN111415461B (en) 2019-01-08 2021-09-28 虹软科技股份有限公司 Article identification method and system and electronic equipment
CN109840503B (en) * 2019-01-31 2021-02-26 深兰科技(上海)有限公司 Method and device for determining category information
CN109948515B (en) * 2019-03-15 2022-04-15 百度在线网络技术(北京)有限公司 Object class identification method and device
CN110287888A (en) * 2019-06-26 2019-09-27 中科软科技股份有限公司 A kind of TV station symbol recognition method and system
CN110503037A (en) * 2019-08-22 2019-11-26 三星电子(中国)研发中心 A kind of method and system of the positioning object in region
CN112541940B (en) * 2019-09-20 2023-09-05 杭州海康威视数字技术股份有限公司 Article detection method and system
CN111209911A (en) * 2020-01-07 2020-05-29 创新奇智(合肥)科技有限公司 Custom tag identification system and identification method based on semantic segmentation network
CN111666927A (en) * 2020-07-08 2020-09-15 广州织点智能科技有限公司 Commodity identification method and device, intelligent container and readable storage medium
CN111860371A (en) * 2020-07-24 2020-10-30 浙江星星冷链集成股份有限公司 Method for detecting commodity type, quantity and purity and freezer thereof
CN112215168A (en) * 2020-10-14 2021-01-12 上海爱购智能科技有限公司 Image editing method for commodity identification training
CN112529851B (en) * 2020-11-27 2023-07-18 中冶赛迪信息技术(重庆)有限公司 Hydraulic pipe state determining method, system, terminal and medium
CN112802049B (en) * 2021-03-04 2022-10-11 山东大学 Method and system for constructing household article detection data set
CN114640797B (en) * 2021-11-03 2024-06-21 深圳友朋智能商业科技有限公司 Order generation method and device for synchronously optimizing commodity track and intelligent vending machine
CN113723383B (en) * 2021-11-03 2022-06-28 武汉星巡智能科技有限公司 Order generation method for synchronously identifying commodities in same area at different visual angles and intelligent vending machine
CN114202700B (en) * 2021-12-16 2022-07-22 东莞先知大数据有限公司 Cargo volume anomaly detection method and device and storage medium
CN116185541B (en) * 2023-01-06 2024-02-20 广州市玄武无线科技股份有限公司 Business execution system, method, terminal equipment and medium of business super intelligent equipment
CN118365672A (en) * 2024-03-28 2024-07-19 上海商汤信息科技有限公司 Target statistics method, device, electronic equipment and computer readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101261721A (en) * 2007-03-09 2008-09-10 海尔集团公司 Statistical and management method and its device for article storage and taking
CN106339917A (en) * 2016-08-18 2017-01-18 无锡天脉聚源传媒科技有限公司 Commodity model training method and device
CN107833361A (en) * 2017-09-28 2018-03-23 中南大学 A kind of method that automatic vending machine based on image recognition falls Cargo Inspection survey
EP3319027A1 (en) * 2016-11-02 2018-05-09 Vocollect, Inc. Planogram compliance
CN108052949A (en) * 2017-12-08 2018-05-18 广东美的智能机器人有限公司 Goods categories statistical method, system, computer equipment and readable storage medium storing program for executing
CN108182757A (en) * 2018-01-22 2018-06-19 合肥美的智能科技有限公司 Self-service machine and its control method
CN108198052A (en) * 2018-03-02 2018-06-22 北京京东尚科信息技术有限公司 User's free choice of goods recognition methods, device and intelligent commodity shelf system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002133242A (en) * 2000-10-20 2002-05-10 Nippon Conlux Co Ltd Promotion method and system
CN106204573A (en) * 2016-07-07 2016-12-07 Tcl集团股份有限公司 A kind of food control method and system of intelligent refrigerator
CN106971457A (en) * 2017-03-07 2017-07-21 深圳市楼通宝实业有限公司 Self-service vending method and system
CN108171172A (en) * 2017-12-27 2018-06-15 惠州Tcl家电集团有限公司 Self-help shopping method, self-service sale device and computer readable storage medium
CN108182417B (en) * 2017-12-29 2020-07-10 广东安居宝数码科技股份有限公司 Shipment detection method and device, computer equipment and vending machine


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Shelf Commodity Detection Technology Based on Deep Learning; Liu Yonghao; China Master's Theses Full-text Database, Information Science and Technology; 2018-01-15 (No. 01); pp. I138-1173 *

Also Published As

Publication number Publication date
CN109003390A (en) 2018-12-14

Similar Documents

Publication Publication Date Title
CN109003390B (en) Commodity identification method, unmanned vending machine and computer-readable storage medium
CN108985359B (en) Commodity identification method, unmanned vending machine and computer-readable storage medium
CN109117848B (en) Text line character recognition method, device, medium and electronic equipment
CN111415461B (en) Article identification method and system and electronic equipment
CN108335408B (en) Article identification method, device and system for vending machine and storage medium
CN111626201B (en) Commodity detection method, commodity detection device and readable storage medium
CN111259889A (en) Image text recognition method and device, computer equipment and computer storage medium
US20140169639A1 (en) Image Detection Method and Device
GB2565775A (en) A Method, an apparatus and a computer program product for object detection
CN109784385A (en) A kind of commodity automatic identifying method, system, device and storage medium
Xiang et al. Moving object detection and shadow removing under changing illumination condition
CN108961547A (en) A kind of commodity recognition method, self-service machine and computer readable storage medium
CN112541372B (en) Difficult sample screening method and device
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN111723777A (en) Method and device for judging commodity taking and placing process, intelligent container and readable storage medium
CN112001200A (en) Identification code identification method, device, equipment, storage medium and system
CN114255377A (en) Differential commodity detection and classification method for intelligent container
US20180268247A1 (en) System and method for detecting change using ontology based saliency
CN116452636A (en) Target tracking-based dynamic commodity identification method and related device for unmanned sales counter
CN115953744A (en) Vehicle identification tracking method based on deep learning
CN111680680A (en) Object code positioning method and device, electronic equipment and storage medium
CN111402185A (en) Image detection method and device
CN109523573A (en) The tracking and device of target object
Merrad et al. A Real-time Mobile Notification System for Inventory Stock out Detection using SIFT and RANSAC.
CN117197653A (en) Landslide hazard identification method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 518000 Guangdong science and technology innovation and Research Institute, Shenzhen, Shenzhen, Nanshan District No. 6, science and technology innovation and Research Institute, Shenzhen, D 10, 1004, 10

Patentee after: Shenzhen Hetai intelligent home appliance controller Co.,Ltd.

Address before: 518000 Guangdong science and technology innovation and Research Institute, Shenzhen, Shenzhen, Nanshan District No. 6, science and technology innovation and Research Institute, Shenzhen, D 10, 1004, 10

Patentee before: SHENZHEN H&T DATA RESOURCES AND CLOUD TECHNOLOGY Ltd.
