CN112052708A - Article detection method, device and system

Article detection method, device and system

Info

Publication number
CN112052708A
CN112052708A
Authority
CN
China
Prior art keywords
article
placing
taking
image data
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910493006.XA
Other languages
Chinese (zh)
Inventor
朱镇峰
马强
解松霖
王靖雄
毛慧
浦世亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910493006.XA
Publication of CN112052708A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01G - WEIGHING
    • G01G 19/00 - Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups
    • G01G 19/40 - Weighing apparatus or methods with provisions for indicating, recording, or computing price or other quantities dependent on the weight
    • G01G 19/413 - Weighing apparatus or methods with provisions for indicating, recording, or computing price or other quantities dependent on the weight using electromechanical or electronic computing means
    • G01G 19/414 - Weighing apparatus or methods with provisions for indicating, recording, or computing price or other quantities dependent on the weight using electronic computing means only
    • G01G 19/4144 - Weighing apparatus or methods using electronic computing means only, for controlling weight of goods in commercial establishments, e.g. supermarket, P.O.S. systems
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07F - COIN-FREED OR LIKE APPARATUS
    • G07F 9/00 - Details other than those peculiar to special kinds or types of apparatus
    • G07F 9/02 - Devices for alarm or indication, e.g. when empty; Advertising arrangements in coin-freed apparatus

Abstract

The application discloses an article detection method, device and system. The method includes the following steps: acquiring article pick-and-place trigger data for an article pick-and-place cabinet; when the cabinet is determined, based on the trigger data, to be in an article pick-and-place trigger state, determining a pick-and-place area based on the trigger data; acquiring an image and weight data related to the article pick-and-place moment, and obtaining target image data based on that image; and determining article pick-and-place information based on the information of the pick-and-place area, the weight data and the target image data. Because the image related to the pick-and-place moment, and the target image data derived from it, are acquired only when the cabinet is determined to be in an article pick-and-place trigger state, the article pick-and-place information can be determined from the information of the pick-and-place area, the target image data and the weight data without analyzing a continuous video stream.

Description

Article detection method, device and system
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to an article detection method, device and system.
Background
With the development of artificial intelligence, unmanned article pick-and-place cabinets have gradually come into wide use. For example, an unmanned pick-and-place cabinet can store articles for sale: when a purchase demand exists, a user takes articles out of the cabinet, and management personnel later replenish the cabinet. How to detect which articles are taken and placed is therefore a key problem.
The related art provides an unmanned container system that identifies a user's pick-and-place actions, and the types and quantities of the articles taken or returned, through video analysis, thereby achieving article detection.
However, this approach must process entire video streams, which places high demands on the computing power of the system; the amount of computation is large, so the detection efficiency is low.
Disclosure of Invention
The embodiment of the application provides an article detection method, device and system, which can be used for solving the problems in the related art. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides an article detection method, the method including:
acquiring article pick-and-place trigger data for an article pick-and-place cabinet;
when the cabinet is determined, based on the trigger data, to be in an article pick-and-place trigger state, determining a pick-and-place area based on the trigger data;
acquiring an image and weight data related to the article pick-and-place moment, and obtaining target image data based on that image;
and determining article pick-and-place information based on the information of the pick-and-place area, the weight data and the target image data.
Optionally, infrared correlation units are arranged on two sides of the entrance of the article pick-and-place cabinet;
the acquiring of article pick-and-place trigger data for the cabinet includes:
acquiring the infrared signals emitted by the infrared correlation units;
after the trigger data is acquired, the method further includes:
detecting infrared cut-off signals based on the infrared signals emitted by the infrared correlation units;
and when a change in the number of infrared cut-off signals is detected, determining that the cabinet is in an article pick-and-place trigger state.
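As an illustration of this trigger logic, the following is a minimal Python sketch that models the light curtain as a list of receiver states; all names (count_cut_off, InfraredTrigger, update) are assumptions for illustration, not from the application.

```python
# Minimal sketch: treat the doorway light curtain as a list of beam states and
# report a pick-and-place trigger when the number of blocked ("cut-off") beams
# changes between successive scans.
from typing import List

def count_cut_off(beam_received: List[bool]) -> int:
    """A beam is cut off when its receiver no longer sees the emitter."""
    return sum(1 for received in beam_received if not received)

class InfraredTrigger:
    def __init__(self) -> None:
        self.prev_count = 0

    def update(self, beam_received: List[bool]) -> bool:
        """Return True when the cabinet enters a pick-and-place trigger state."""
        count = count_cut_off(beam_received)
        triggered = count != self.prev_count
        self.prev_count = count
        return triggered
```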
Optionally, each infrared correlation unit includes an infrared transmitting end and an infrared receiving end; the infrared transmitting end is arranged at the lower side of the entrance of the cabinet, and the infrared receiving end is arranged at the upper side of the entrance.
Optionally, a camera is arranged at the entrance of the article pick-and-place cabinet; the field of view of the camera covers the entrance, and the optical axis of the camera is parallel to the entrance plane;
the acquiring of article pick-and-place trigger data for the cabinet includes:
acquiring a current image of the entrance captured by the camera;
after the trigger data is acquired, the method further includes:
computing optical flow vectors based on the current image of the entrance;
and when optical flow vectors along the pick-and-place direction appear in the entrance area, determining that the cabinet is in an article pick-and-place trigger state.
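The application does not specify a particular optical flow algorithm; the following sketch uses OpenCV's Farneback dense flow under the assumption that the pick-and-place direction maps to the image's vertical axis. The thresholds and function name are illustrative assumptions.

```python
# Sketch of the optical-flow trigger: flag a trigger when a noticeable fraction
# of pixels moves predominantly along the assumed pick-and-place axis.
import cv2
import numpy as np

def is_pick_place_flow(prev_gray: np.ndarray, gray: np.ndarray,
                       mag_thresh: float = 2.0, pixel_frac: float = 0.01) -> bool:
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Keep vectors whose dominant component points along the pick/place axis (y).
    along_axis = np.abs(flow[..., 1]) > np.abs(flow[..., 0])
    strong = np.linalg.norm(flow, axis=2) > mag_thresh
    moving = np.logical_and(along_axis, strong)
    return moving.mean() > pixel_frac
```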
Optionally, a camera is arranged at the entrance of the article pick-and-place cabinet, the field of view of the camera covers the entrance, and markers are arranged on the edge of the entrance;
the acquiring of article pick-and-place trigger data for the cabinet includes:
acquiring a current image of the entrance captured by the camera;
after the trigger data is acquired, the method further includes:
detecting marker information in the current image;
and determining, based on the detection result, whether the cabinet is in an article pick-and-place trigger state.
Optionally, the acquiring of an image related to the article pick-and-place moment includes:
acquiring the image at the article pick-and-place moment, or acquiring a reference number of images before and after the article pick-and-place moment.
Optionally, the determining of article pick-and-place information based on the information of the pick-and-place area, the weight data and the target image data includes:
sending the information of the pick-and-place area, the weight data and the target image data to a cloud, where the article pick-and-place information is determined based on them.
Optionally, the determining of article pick-and-place information based on the information of the pick-and-place area, the weight data and the target image data includes:
filtering the target image data based on the information of the pick-and-place area to obtain filtered image data;
and identifying article pick-and-place information in the filtered image data based on the weight data, where the article pick-and-place information includes type and quantity.
Optionally, the obtaining of target image data based on the image related to the article pick-and-place moment includes:
taking all of the image data in the image related to the article pick-and-place moment as the target image data.
Optionally, the obtaining of target image data based on the image related to the article pick-and-place moment includes:
performing article detection on the image related to the article pick-and-place moment to obtain local image data of the area where each article is located and the coordinates of that local image data, and taking the local image data and its coordinates as the target image data.
Optionally, the determining of article pick-and-place information based on the information of the pick-and-place area, the weight data and the target image data includes:
identifying article information in the target image data, the article information including position, type and quantity;
determining the type and quantity of the articles located in the pick-and-place area based on the positions in the article information and the information of the pick-and-place area;
and correcting the quantity of the articles in the pick-and-place area based on the weight data, and obtaining the article pick-and-place information from the corrected quantity and the types of the articles in the pick-and-place area.
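As a minimal sketch of this correction step (the function name, sign convention, tolerance and per-type unit weight are assumptions, not from the application), the visually counted quantity can be checked against the measured weight change divided by the article's unit weight:

```python
# Weight-based correction: trust the scale when the visual count and the
# measured weight delta disagree beyond a tolerance.
def correct_quantity(visual_count: int, weight_delta_g: float,
                     unit_weight_g: float, tolerance_g: float = 5.0) -> int:
    """weight_delta_g > 0 for items put back, < 0 for items taken out."""
    if unit_weight_g <= 0:
        return visual_count
    weight_count = round(abs(weight_delta_g) / unit_weight_g)
    if abs(abs(weight_delta_g) - visual_count * unit_weight_g) > tolerance_g:
        return weight_count
    return visual_count
```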
Optionally, after determining the type and quantity of the articles located in the pick-and-place area based on the positions in the article information and the information of the pick-and-place area, the method further includes:
rechecking the quantity of the articles in the pick-and-place area based on the weight data;
and when the recheck passes, taking the types and quantities of the articles in the pick-and-place area as the article pick-and-place information.
Optionally, the determining of article pick-and-place information based on the information of the pick-and-place area, the weight data and the target image data includes:
filtering the local image data based on the information of the pick-and-place area and the coordinates of the local image data, and comparing the filtered local image data with an article sample library;
when the comparison result indicates that the filtered local image data contains articles, determining the types and quantities of the articles contained in it;
and correcting the quantities based on the weight data, and determining the article pick-and-place information from the correction result.
There is also provided an article detection apparatus, the apparatus comprising:
the trigger assembly is used for acquiring article pick-and-place trigger data for an article pick-and-place cabinet and sending the trigger data to the first processor;
the first processor is used for determining a pick-and-place area based on the trigger data when the cabinet is determined, based on the trigger data, to be in an article pick-and-place trigger state; acquiring an image and weight data related to the article pick-and-place moment, and obtaining target image data based on that image; and determining article pick-and-place information based on the information of the pick-and-place area, the weight data and the target image data.
Optionally, the trigger assembly includes infrared correlation units arranged on two sides of the entrance of the article pick-and-place cabinet;
the first processor is used for detecting infrared cut-off signals based on the infrared signals emitted by the infrared correlation units, and determining that the cabinet is in an article pick-and-place trigger state when a change in the number of infrared cut-off signals is detected.
Optionally, each infrared correlation unit includes an infrared transmitting end and an infrared receiving end; the infrared transmitting end is arranged at the lower side of the entrance of the cabinet, and the infrared receiving end is arranged at the upper side of the entrance.
Optionally, the trigger assembly includes a camera arranged at the entrance of the article pick-and-place cabinet; the field of view of the camera covers the entrance, and the optical axis of the camera is parallel to the entrance plane;
the first processor is used for computing optical flow vectors based on the current image of the entrance captured by the camera, and determining that the cabinet is in an article pick-and-place trigger state when optical flow vectors along the pick-and-place direction appear in the entrance area.
Optionally, the trigger assembly includes a camera arranged at the entrance of the article pick-and-place cabinet; the field of view of the camera covers the entrance, and markers are arranged on the edge of the entrance;
the first processor is used for detecting marker information in the current image of the entrance captured by the camera, and determining, based on the detection result, whether the cabinet is in an article pick-and-place trigger state.
Optionally, the first processor is configured to acquire an image of the article pick-and-place time, or images of reference numbers before and after the article pick-and-place time.
Optionally, the first processor is configured to send the information of the pick-and-place area, the weight data, and the target image data to a cloud, and determine, at the cloud, article pick-and-place information based on the information of the pick-and-place area, the weight data, and the target image data.
Optionally, the first processor is configured to filter the target image data based on the information of the pick-and-place area to obtain filtered image data;
identifying item information in the filtered image data based on the weight data and the target image data, the item information including location, type, and quantity.
Optionally, the first processor is configured to take all of the image data in the image related to the article pick-and-place moment as the target image data.
Optionally, the first processor is configured to perform article detection on the image related to the article pick-and-place moment to obtain local image data of the area where each article is located and the coordinates of that local image data, and to take the local image data and its coordinates as the target image data.
Optionally, the first processor is configured to determine the type and number of the articles located in the pick-and-place area based on the position in the article information and the information of the pick-and-place area; and correcting the quantity of the articles in the pick-and-place area based on the weight data, and obtaining article pick-and-place information according to the corrected quantity of the articles and the types of the articles in the pick-and-place area.
Optionally, the first processor is configured to filter the local image data based on the information of the pick-and-place area and the coordinates of the local image data, and compare the filtered local image data with an article sample library; when the filtered local area image data comprise articles according to the comparison result, determining the types and the quantity of the articles contained in the filtered local area image data; and correcting the quantity based on the weight data, and determining article taking and placing information according to a correction result.
Optionally, the first processor is further configured to review the number of articles located in the pick-and-place area based on the weight data; and when the rechecking is passed, taking the types and the quantity of the articles in the pick-and-place area as article pick-and-place information.
There is also provided an item detection system, the system comprising: the system comprises a trigger unit, an image acquisition unit, a weight acquisition unit and an article detection unit; the triggering unit, the weight acquisition unit and the image acquisition unit are all connected with the article detection unit, and the image acquisition unit and the weight acquisition unit are also all connected with the triggering unit;
the trigger unit is used for acquiring article pick-and-place trigger data for the article pick-and-place cabinet; when the cabinet is determined, based on the trigger data, to be in an article pick-and-place trigger state, determining a pick-and-place area based on the trigger data; sending the information of the pick-and-place area to the article detection unit; and sending trigger signals to the image acquisition unit and the weight acquisition unit, triggering the image acquisition unit to collect image data and the weight acquisition unit to collect weight data;
the image acquisition unit is used for acquiring images related to the article taking and placing time based on the trigger signal; transmitting target image data obtained based on the image to the article detection unit;
the weight acquisition unit is used for acquiring weight data related to the article taking and placing time based on the trigger signal; sending the weight data to the item detection unit;
the article detection unit is used for determining article taking and placing information based on the information of the taking and placing area, the weight data and the target image data.
Optionally, the trigger unit includes a processor and infrared correlation units arranged at the entrance of the article pick-and-place cabinet;
the processor is used for detecting infrared cut-off signals based on the infrared signals emitted by the infrared correlation units, and determining that the cabinet is in an article pick-and-place trigger state when a change in the number of infrared cut-off signals is detected.
Optionally, each infrared correlation unit includes an infrared transmitting end and an infrared receiving end; the infrared transmitting end is arranged at the lower side of the entrance of the cabinet, and the infrared receiving end is arranged at the upper side of the entrance.
Optionally, the image acquisition unit is further configured to capture a current image of the entrance of the article pick-and-place cabinet and send it to the trigger unit as trigger data;
the trigger unit is used for computing optical flow vectors based on the current image, and determining that the cabinet is in an article pick-and-place trigger state when optical flow vectors along the pick-and-place direction appear in the entrance area.
Optionally, markers are arranged on the edge of the entrance of the article pick-and-place cabinet;
the image acquisition unit is further configured to capture a current image of the entrance and send it to the trigger unit as trigger data;
the trigger unit is used for detecting marker information in the current image, and determining, based on the detection result, whether the cabinet is in an article pick-and-place trigger state.
Optionally, the image acquisition unit includes one camera arranged at the entrance of the article pick-and-place cabinet; the monitoring area of the camera covers the entire entrance, and the optical axis of the camera is parallel to the entrance plane;
or, the image acquisition unit includes a plurality of cameras, the monitoring area of each camera covers part of the entrance, the monitoring areas of the cameras together cover the entire entrance, and the optical axis of each camera is parallel to the entrance plane.
Optionally, the system further includes a target detection unit connected to the image acquisition unit;
the image acquisition unit is used for sending the captured image related to the article pick-and-place moment to the target detection unit;
the target detection unit is used for performing article detection on that image to obtain local image data of the area where each article is located and the coordinates of that local image data, and for sending the local image data and its coordinates to the article detection unit as the target image data;
the article detection unit is used for determining article pick-and-place information based on the information of the pick-and-place area, the weight data, the local image data and the coordinates of the local image data.
Optionally, the system further comprises: a communication unit;
the triggering unit, the image acquisition unit, the weight acquisition unit and the communication unit are arranged in an article taking and placing cabinet, and the article detection unit is arranged in a cloud.
There is also provided a computer device comprising a processor and a memory, the memory having stored therein at least one instruction which, when executed by the processor, implements an item detection method as described in any one of the above.
There is also provided a computer readable storage medium having stored therein at least one instruction which, when executed, implements an item detection method as recited in any of the above.
The technical solutions provided by the embodiments of the present application have at least the following beneficial effects:
the image and weight data related to the article pick-and-place moment are acquired, and the target image data is obtained from that image, only when the trigger data indicates that the article pick-and-place cabinet is in an article pick-and-place trigger state; the article pick-and-place information is then determined based on the information of the pick-and-place area, the target image data and the weight data. Compared with continuous video analysis, this reduces the amount of computation and improves detection efficiency.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings based on them without creative effort.
FIG. 1 is a schematic diagram of an article detection system according to an embodiment of the present application;
FIG. 2 is a schematic representation of markers provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a hardware configuration of part of an article detection system according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a hardware configuration of part of an article detection system according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an article detection process provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of an article detection process provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an article detection process provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a hardware configuration of part of an article detection system according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an article detection system provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of an article detection system according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of an article pick-and-place cabinet system according to an embodiment of the present application;
FIG. 12 is a flow chart of an article detection method provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of an article detection process provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of an article detection process provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of an article detection process provided by an embodiment of the present application;
FIG. 16 is a schematic structural diagram of an article detection device according to an embodiment of the present application;
FIG. 17 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
An embodiment of the present application provides an article detection system, as shown in fig. 1, the system includes: a trigger unit 11, an image acquisition unit 12, a weight acquisition unit 13, and an article detection unit 14;
the triggering unit 11, the weight acquisition unit 13 and the image acquisition unit 12 are all connected with the article detection unit 14, and the image acquisition unit 12 and the weight acquisition unit 13 are also all connected with the triggering unit 11;
the triggering unit 11 is configured to acquire article taking and placing triggering data for the article taking and placing cabinet, determine a taking and placing area based on the article taking and placing triggering data when it is determined that the article taking and placing cabinet is in an article taking and placing triggering state based on the triggering data, send information of the taking and placing area to the article detecting unit 14, send a triggering signal to the image collecting unit 12 and the weight collecting unit 13, trigger the image collecting unit 12 to collect image data, and trigger the weight collecting unit 13 to collect weight data. The image acquisition unit 12 is configured to acquire an image related to the article pick-and-place time based on the trigger signal, and send target image data obtained based on the image to the article detection unit 14. And the weight acquisition unit 13 is used for acquiring weight data related to the moment when the article is taken and placed based on the trigger signal and sending the weight data to the article detection unit 14. And the article detection unit 14 is used for determining article taking and placing information based on the information of the taking and placing area, the weight data and the target image data.
Optionally, the image acquisition unit 12 captures current images of the entrance of the article pick-and-place cabinet for trigger analysis and article identification. The trigger unit 11 performs trigger analysis on the data provided by the image acquisition unit 12, that is, it determines whether the cabinet is in an article pick-and-place trigger state based on the trigger data. For example, when an object (such as a hand or an article) enters or leaves the cabinet, the cabinet is determined to be in an article pick-and-place trigger state, an article pick-and-place trigger signal is generated, and the corresponding pick-and-place area is derived from the trigger data. The image acquisition unit 12 can also capture an image related to the article pick-and-place moment when triggered by the trigger unit 11, and target image data derived from that image may be sent to the article detection unit 14.
Alternatively, the target image data may be all of the image data in the image related to the article pick-and-place moment captured by the image acquisition unit 12; in that case the image acquisition unit 12 may directly send all of that image data to the article detection unit 14 as the target image data.
It should be noted that if the article pick-and-place cabinet has multiple shelves, a weight acquisition unit 13 may be arranged on each shelf. When the cabinet is determined, based on the trigger data, to be in an article pick-and-place trigger state, the trigger unit 11 sends a trigger signal to the weight acquisition units 13, and the weight data collected on each shelf is obtained for subsequent rechecking and quantity correction. Alternatively, after the pick-and-place area has been determined, the shelf on which the pick-and-place operation takes place can be identified, so that only the weight data collected by that shelf's unit is obtained; the present application does not limit which mode is selected. Of course, a single weight acquisition unit 13 may also serve the whole cabinet; the embodiment of the present application does not limit the number of weight acquisition units 13. Each weight acquisition unit 13 collects weight data related to the article pick-and-place moment when triggered by the trigger unit 11, and sends the weight data to the article detection unit 14.
The article detection unit 14 may include a classifier trained by deep learning, such as a detection and recognition network like Fast R-CNN or YOLO. The article detection unit 14 detects and recognizes the article information in the target image data, obtaining the position, type and quantity of the articles contained in it.
In addition, the article detection unit 14 may further include a unit implementing an image-area overlap judgment algorithm, which judges whether the pick-and-place area determined by the trigger unit 11 overlaps the positions in the article information detected from the target image data. This filters out articles that do not participate in the current pick-and-place trigger event (for example, hand-held products, background articles and other articles not being picked or placed) and gives the type and quantity of the articles actually taken out or put back in the current event, that is, the type and quantity of the articles located in the pick-and-place area. The article detection unit 14 then corrects the quantity of articles in the pick-and-place area using the weight data, and obtains the article pick-and-place information from the corrected quantity and the types of the articles in the pick-and-place area.
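A minimal sketch of such an overlap filter, assuming axis-aligned boxes in (x1, y1, x2, y2) form, detections represented as dicts, and an illustrative IoU threshold; none of these names come from the application.

```python
# Overlap filter: discard detections whose boxes do not overlap the
# pick-and-place area (e.g. background items not involved in this event).
from typing import List, Tuple

Box = Tuple[float, float, float, float]

def iou(a: Box, b: Box) -> float:
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def filter_by_region(detections: List[dict], region: Box,
                     min_iou: float = 0.1) -> List[dict]:
    """Each detection is assumed to carry 'box', 'type' and 'count' keys."""
    return [d for d in detections if iou(d["box"], region) > min_iou]
```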
Optionally, the article detection unit 14 may be deployed in the cloud. Because of the cloud's strong computing power, the target image data, the weight data and the information of the pick-and-place area can be obtained locally and then sent to the cloud to determine the article pick-and-place information; combining local processing with the cloud further reduces the local amount of computation and improves detection efficiency.
Of course, the method provided by the embodiment of the present application also supports a fully local implementation. Even then, because the image related to the article pick-and-place moment is acquired only when the cabinet is determined to be in an article pick-and-place trigger state, and the pick-and-place information is determined from the target image data, the weight data and the pick-and-place area, the amount of computation is still reduced and the detection efficiency improved compared with performing action, posture and article recognition on video images.
Optionally, markers are arranged on the edge of the entrance of the article pick-and-place cabinet, and the image acquisition unit 12 is used for capturing images of the entrance; the trigger unit 11 is connected to the image acquisition unit 12 and performs article pick-and-place trigger state detection based on the marker information in the images captured by the image acquisition unit 12.
The article pick-and-place cabinet is used for storing articles; the embodiment of the present application limits neither the product form of the cabinet nor the types, sizes and quantities of the articles stored in it. Because part of the entrance of the cabinet is occluded when articles are taken from or placed into it, arranging markers on the edge of the entrance allows the presence of a pick-and-place operation to be detected from the occlusion of the markers, that is, allows it to be determined whether the cabinet is in an article pick-and-place trigger state.
Optionally, the markers include, but are not limited to, one or more of line-feature coded markers, barcode coded markers and checkerboard coded markers.
The line-feature coded marker is of a vertical-gradient coding type: it is gradient-coded in the direction perpendicular to the pick-and-place boundary (that is, the entrance edge). In the line-feature coded marker shown in fig. 2(a), the gradient exists in the direction perpendicular to the boundary, and the marker interval in this coding scheme is infinitesimal.
The barcode and checkerboard codes can be of a two-dimensional-code type, coded both perpendicular and parallel to the pick-and-place boundary. Common forms include the barcode-style two-dimensional code shown in fig. 2(b) and the checkerboard code shown in fig. 2(c).
Regardless of the coding type, the system provided by the embodiment of the present application includes a plurality of markers forming a feature array. The interval between adjacent markers is smaller than the width of the smallest article taken from or placed into the cabinet. For example, the markers can be arranged continuously in a ring around the edge of the entrance, with every interval smaller than the width of the smallest article, so that no pick-and-place operation is missed and the accuracy of pick-and-place detection is improved.
When the markers are arranged, the gradient at the marker edges should exceed 10, that is, the difference between the pixel values of the areas on the two sides of an edge should be greater than 10, to guarantee accurate marker feature extraction. To ensure a significant edge gradient, one side of each marker edge may optionally be made of a light-absorbing material and the other side of a diffusely reflecting material. That is, the material on one side of the edge is usually light-absorbing photographic cloth, printing ink, rubber or the like, while the other side uses a material with strong diffuse reflection, such as printing paper or a PET (polyethylene terephthalate) diffuse-reflection material. The embodiment of the present application does not limit the marker material, provided the features can be extracted.
For example, for a black-and-white marker, a paper marker printed in black and white can be pasted on the edge of the entrance of the cabinet, for example on a ring-shaped area around the inner cavity of the cabinet reserved for the markers. The graphite of the black part absorbs light well and the printing paper of the white part diffuses light well, which ensures that the gray difference between the black and white parts of the marker in a grayscale image is greater than 100.
Optionally, the image acquisition unit 12 is used for capturing images of the entrance of the article pick-and-place cabinet, and may include a single camera whose monitoring area covers the entire entrance. Shooting the whole entrance with one camera avoids inaccurate pick-and-place detection caused by missing a marker. For example, with a ring of markers arranged continuously on the inner cavity at the edge of the entrance, the camera can monitor the entrance and capture the marker features. Because the camera's field of view covers the whole entrance, a pick-and-place operation at any position appears in the captured images, and missed detections are avoided.
Alternatively, instead of one camera, the image acquisition unit 12 may include a plurality of cameras, the monitoring area of each covering part of the entrance and the monitoring areas together covering the entire entrance. For example, the number of cameras is determined by the size of the entrance and the field-of-view range of the cameras, so that the union of the monitoring areas of the cameras used for detection covers the whole entrance.
It should be noted that if the image acquisition unit 12 includes a plurality of cameras, each camera sends its current image to the trigger unit 11. The images captured by the cameras must be kept synchronized, so that the current images obtained by the trigger unit 11 are images of the same moment and together reflect the state of the entrance at that moment, which improves the accuracy of the detection result.
In addition, the embodiment of the present application is described only by taking as an example an image acquisition unit 12 attached to the cabinet; the image acquisition unit 12 may be arranged anywhere within a certain range of the entrance, as long as it can capture images of the entrance. Alternatively, the image acquisition unit 12 may be separate from the cabinet, for example arranged opposite the cabinet and facing the entrance. The embodiment of the present application does not limit the specific number and positions of the image acquisition units 12.
For the convenience of understanding, the schematic diagram shown in fig. 3 is taken as an example in the present application. As shown in fig. 3(a), the image capturing unit 12 includes a camera, for example, and a marker may be disposed at the edge of the entrance of the article storage cabinet. A camera can be arranged at the upper right corner of the passageway, and the monitoring area of the camera covers the whole passageway so as to detect the passageway and acquire images of the whole passageway. As shown in fig. 3(b), a camera may be disposed at each of the upper right corner and the upper left corner of the doorway, the monitoring area of each camera covers a part of the doorway of the article picking and placing cabinet, and the monitoring areas of all the cameras cover the entire doorway, so as to detect the entire doorway and acquire images of the entire doorway.
Optionally, the light variation of the environment where the article taking and placing cabinet is located may affect the definition of the image collected by the image collecting unit 12, and affect the marker identification. To this end, the system further comprises a light source for supplementary lighting of the markers, as shown in fig. 4. The marker is subjected to light supplement through the light source, so that the gray level of the characteristic image of the marker is not changed along with the change of the illumination condition of the external environment, and the accuracy of picking and placing detection is further ensured.
The specific position of the light source is not limited in the embodiments of the present application, and the light can be supplemented to the marker. For example, the light source may be disposed directly opposite the article picking and placing cabinet to face an entrance edge of the article picking and placing cabinet. In addition, the number of the light sources may be one or more, and the number of the light sources is not limited in the embodiments of the present application, and the kind of the light sources is not limited. Optionally, the system may further comprise control means for controlling the light sources to be switched on and off. For example, the light source is controlled to be turned on and off based on the brightness of the environment in which the article storage and retrieval cabinet is located.
Based on the article detection system, when the article is taken and placed, the object entering the cabinet to perform the taking and placing operation can shield the marker, the taking and placing operation can be accurately detected by detecting the shielding condition of the marker, and therefore the article taking and placing triggering state is obtained. Further, the pick-and-place area can be determined based on the occlusion area.
The marker coded as a two-dimensional code shown in fig. 5(a) is taken as an example. The two-dimensional-code marker is black and white, and a paper code printed in black and white is pasted on the edge of the entrance of the cabinet. A light source can supplement the lighting of the marker to reduce illumination changes and their influence on two-dimensional-code feature extraction. The graphite of the black part of the code absorbs light well and the printing paper of the white part diffuses light well, ensuring that the gray difference between the black and white parts in a grayscale image is greater than 100.
Before pick-and-place detection is performed, the method provided by the embodiment of the present application first uses the image acquisition unit 12 to capture a reference image at a moment when no pick-and-place operation is taking place, and then identifies all two-dimensional codes in that image. As shown in fig. 5(a), from the continuous two-dimensional-code sequence at the edge of the entrance, the positions and internal coding vectors of all codes are obtained as the marker features for pick-and-place detection, giving the reference marker information for subsequent detection.
Then, the image acquisition unit 12 captures the current image of the entrance in real time. When a pick-and-place operation occurs, the two-dimensional codes at the edge of the entrance are occluded by it, as shown by the shaded area in fig. 5(b). The codes in the current image are detected at the positions given by the reference marker information and their internal coding vectors are extracted. If a code is not detected in the current image, or its internal coding vector does not match the reference coding vector at that position, the code at that position is occluded, and a pick-and-place operation is determined to exist, that is, the cabinet is in an article pick-and-place trigger state.
Further, after each two-dimensional code has been checked in this way, the positions and number of the occluded areas are obtained. As shown in fig. 5(c), the dotted parts are the occluded areas where the pick-and-place operation exists; there are two occluded areas in the figure. Using this occlusion information, whether the cabinet is in an article pick-and-place trigger state is determined by comparing, in the time domain, the change in the number of occluded areas between the previous and current frames, and a trigger signal is output when the cabinet is in that state.
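A minimal sketch of this check; decode_at is an assumed helper that tries to read the code at a known position and returns its coding vector, or None if unreadable, and all other names are likewise illustrative.

```python
# Two-dimensional-code occlusion check: a missing or mismatching code at a
# reference position means the marker there is blocked; the trigger is derived
# from the change in occluded-region count between frames.
from typing import Dict, List, Optional, Tuple

def occluded_positions(current_image,
                       reference: Dict[Tuple[int, int], bytes],
                       decode_at) -> List[Tuple[int, int]]:
    occluded = []
    for position, ref_vector in reference.items():
        vector: Optional[bytes] = decode_at(current_image, position)
        if vector is None or vector != ref_vector:
            occluded.append(position)
    return occluded

def trigger_from_counts(prev_region_count: int, curr_region_count: int) -> bool:
    return curr_region_count != prev_region_count
```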
Taking as an example the line-feature coded marker shown in fig. 6(a), the marker-based pick-and-place detection differs from the process of fig. 5 only in the coding type of the marker. The marker in fig. 6(a) is a continuous strip printed with horizontal black and white stripes (that is, vertical gradients). When deployed, the printed strip is pasted on the edge of the entrance of the cabinet, and the camera angle is then adjusted so that the strips in the captured image of the entrance are as parallel as possible to the horizontal axis of the image. Because the markers are continuous, each column of markers in the camera image is treated as a feature description unit. For example, the line-coded strips in fig. 6(a) have two vertical gradients in each column: one downward, where the gray value increases from top to bottom, and one upward, where the gray value decreases from top to bottom.
Before pick-and-place detection, that is, before the article pick-and-place trigger state is determined, the approximate position of each gradient edge can be given manually by drawing lines on a reference image captured when no pick-and-place operation is taking place. The method provided by the embodiment of the present application uses the image acquisition unit to capture that reference image and searches a neighborhood in the vertical direction around each estimated position. The pixel position with the maximum gradient in the neighborhood is taken as the accurate gradient position, and all gradient positions and their gradient directions in each marker column of the reference image are obtained as the reference marker information.
Then, the image acquisition unit 12 captures the current image of the entrance in real time. When a pick-and-place operation occurs, gradients are extracted from the current image at the gradient positions in the reference marker information. If no gradient can be extracted somewhere, or the extracted gradient direction does not match the reference marker information, a pick-and-place operation exists in that region, that is, the cabinet is in an article pick-and-place trigger state and the markers there are occluded, as shown by the shaded area in fig. 6(b).
After each marker has been checked in this way, the positions and number of the occluded areas are obtained. As shown in fig. 6(c), the dotted parts are the occluded areas where the pick-and-place operation exists; there are two occluded areas in the figure. Using this occlusion information, whether the cabinet is in an article pick-and-place trigger state is determined by comparing, in the time domain, the change in the number of occluded areas between the previous and current frames, and a trigger signal is output when the cabinet is in that state.
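A minimal sketch of the per-column gradient check, assuming a grayscale image and per-column reference entries of (row, direction), where direction is +1 for gray increasing downward and -1 for decreasing; the minimum gradient of 10 follows the edge-gradient requirement described above, and the remaining names and parameters are illustrative assumptions.

```python
# Line-feature check: a column is occluded if the expected gradient edge near
# its reference row vanished or flipped direction in the current frame.
import numpy as np

def column_occluded(gray: np.ndarray, col: int, ref_row: int,
                    ref_dir: int, radius: int = 3,
                    min_grad: float = 10.0) -> bool:
    top = max(ref_row - radius, 0)
    bot = min(ref_row + radius, gray.shape[0] - 2)
    rows = np.arange(top, bot + 1)
    grads = gray[rows + 1, col].astype(float) - gray[rows, col].astype(float)
    best = grads[np.abs(grads).argmax()]
    return abs(best) < min_grad or np.sign(best) != ref_dir
```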
The marker coded as a checkerboard shown in fig. 7(a) is taken as an example. Before pick-and-place detection, that is, before the article pick-and-place trigger state is determined, the method provided by the embodiment of the present application first uses the image acquisition unit 12 to capture a reference image at a moment when no pick-and-place operation is taking place, and then identifies all checkerboard corner points in that image. As shown in fig. 7(a), from the continuous checkerboard sequence at the edge of the entrance, the positions of all corner points are obtained as the marker features for pick-and-place detection, giving the reference marker information for subsequent detection.
Then, the image acquisition unit 12 captures the current image of the entrance in real time. When a pick-and-place operation occurs, the checkerboard corner points at the edge of the entrance are occluded by it, as shown by the shaded area in fig. 7(b). The corner points in the current image are extracted at the positions given by the reference marker information. If a corner point is not detected in the current image, the corner point at that position is occluded, and a pick-and-place operation is determined to exist.
After each checkerboard has been checked in this way, the positions and number of the occluded areas are obtained. As shown in fig. 7(c), the dotted parts are the occluded areas where the pick-and-place operation exists; there are two occluded areas in the figure. Using this occlusion information, whether the cabinet is in an article pick-and-place trigger state is determined by comparing, in the time domain, the change in the number of occluded areas between the previous and current frames, and a trigger signal is output when the cabinet is in that state.
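The application does not fix a corner detector; the sketch below uses a Harris corner response as one possible test for whether a corner is still visible at each reference position. The response threshold and function names are illustrative assumptions.

```python
# Checkerboard check: a reference corner whose Harris response collapses in
# the current frame is treated as occluded.
import cv2
import numpy as np

def occluded_corners(gray: np.ndarray, ref_corners,
                     response_thresh: float = 1e-4):
    harris = cv2.cornerHarris(np.float32(gray) / 255.0, blockSize=3,
                              ksize=3, k=0.04)
    blocked = []
    for (x, y) in ref_corners:
        if harris[int(y), int(x)] < response_thresh:
            blocked.append((x, y))
    return blocked
```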
Optionally, whichever marker type is used, whether the article pick-and-place cabinet is in an article pick-and-place trigger state is determined from the occluded-area information by comparing, in the time domain, the change in the number of occluded areas between the previous and next frame, and a pick-and-place trigger signal is output when the cabinet is in that state. The trigger signal indicates the trigger state of the pick-and-place operation; each occluded area can be treated as an operation point, and different trigger states are determined from the number of operation points. For example, the trigger states may be defined as: 0, entering (the number of operation points in the trigger plane changes from 0 to non-zero); 1, increasing (the number of operation points increases and was non-zero before the increase); 2, decreasing (the number of operation points decreases and is non-zero after the decrease); 3, leaving (the number of operation points changes from non-zero to 0); and 4, simultaneous enter and leave (one operation point enters the trigger plane while another leaves). When no object enters or leaves the cabinet, that is, the number of operation points does not change, the state is considered an invalid operation state: the cabinet is not in an article pick-and-place trigger state.
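A direct encoding of these trigger states, keyed on the change in operation-point count between frames; state 4 needs point identity rather than counts alone, which this count-only sketch approximates with an explicit flag, and the function name is an assumption.

```python
# Map the change in operation-point (occluded-region) count to a trigger state.
def classify_trigger(prev: int, curr: int, swap_detected: bool = False) -> int:
    if swap_detected:                 # one point entered while another left
        return 4
    if prev == 0 and curr > 0:
        return 0                      # entering operation
    if prev > 0 and curr > prev:
        return 1                      # increasing operation
    if 0 < curr < prev:
        return 2                      # decreasing operation
    if prev > 0 and curr == 0:
        return 3                      # leaving operation
    return -1                         # unchanged count: invalid operation state
```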
The above description takes the case where the image acquisition unit 12 includes only an ordinary camera as an example; the image acquisition unit 12 may equally be a depth camera or another type of camera, and the product form of the image acquisition unit 12 is not limited in the embodiments of the present application. If a depth camera is used as the image acquisition unit 12, the depth image it produces is used to determine the change in the number of depth-value connected regions in the trigger plane, which yields the number of operation points and how it changes; the various trigger states follow, a trigger signal is provided to trigger the image acquisition unit 12 to acquire images, and the weight acquisition unit 13 is triggered to acquire weight data. In addition, based on the depth information, the pick-and-place area can be mapped to the corresponding area in each image, thereby obtaining the information of the pick-and-place area.
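A sketch of counting operation points from such a depth image, assuming objects crossing the trigger plane fall inside a known depth band (the depth band and minimum area are illustrative assumptions):

```python
import cv2
import numpy as np

def count_operation_points(depth_mm, near=300, far=600, min_area=200):
    """Count connected regions in the trigger plane whose depth lies within
    (near, far) millimeters; each sufficiently large region is one operation
    point (e.g., a hand or arm crossing the plane)."""
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8)
    num, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # Label 0 is the background; keep only components large enough to matter.
    return sum(1 for i in range(1, num) if stats[i, cv2.CC_STAT_AREA] >= min_area)
```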
Optionally, in addition to detecting the article pick-and-place trigger state with a marker, the method provided in the embodiment of the present application also supports an infrared correlation light curtain detection manner and an optical flow detection manner. Taking the infrared correlation light curtain as an example, the triggering unit 11 includes an infrared correlation unit arranged at the entrance/exit of the article taking and placing cabinet, and a processor configured to detect infrared cut-off signals based on the infrared signals emitted by the infrared correlation unit; when a change in the number of infrared cut-off signals is detected, the cabinet is determined to be in the article pick-and-place triggering state. Optionally, the infrared correlation unit includes an infrared emitting end and an infrared receiving end. Optionally, the infrared emitting end and the infrared receiving end can be located on the upper and lower sides of the entrance/exit respectively: the emitting end emits one infrared beam at fixed intervals, and the receiving end receives the infrared signals at the same intervals, forming an infrared correlation light curtain covering the entrance/exit. Considering that the cabinet may be installed outdoors, where the infrared component of sunlight could interfere with the signal at the receiving end, the infrared emitting end may, as shown in FIG. 5(a), be installed on the lower side (lower edge) of the entrance/exit and the infrared receiving end on the upper side (upper edge), as an alternative. In addition, for the same coverage, a light curtain whose correlation units are mounted on the upper and lower sides is shorter, and therefore cheaper, than one whose units are mounted on the left and right sides. Moreover, when a user takes an article with both hands, correlation units mounted on the left and right sides detect only one hand, so the detection accuracy of units mounted on the upper and lower sides is higher.
When an object enters the cabinet to take or place an article, the infrared beams at the entry position are blocked and cannot be received at the corresponding positions of the receiving end, generating infrared cut-off signals; when a change in the number of infrared cut-off signals is detected, the cabinet can be determined to be in the article pick-and-place triggering state. For example, when one hand enters the cabinet, infrared cut-off signals are generated and their number changes from 0 to 1. When a second hand enters and cut-off signals are generated at another position, the number changes from 1 to 2; the number of cut-off signals changes in this case as well. Therefore, whether the cabinet is in the article pick-and-place triggering state can be determined from the change in the number of infrared cut-off signals. Based on the cut-off time and the cut-off area of the infrared beams, the triggering unit 11 can obtain the time and position at which an object enters the cabinet, thereby obtaining the article pick-and-place trigger data.
For example, when the first hand starts to enter the cabinet for a pick-and-place operation, a continuous cut-off region appears on the infrared light curtain at the entry position: the number of cut-off regions changes from 0 to 1, and an entry-start signal can be acquired. Then, whenever another hand enters the cabinet, the light curtain produces a continuous cut-off region at the new entry position, the number of cut-off regions changing from n to n+1 (n = 1, 2, 3, …), and an entry signal can be acquired. When a hand leaves the cabinet, one continuous cut-off region of the light curtain disappears at the departure position, the number of cut-off regions changing from n+1 to n (n = 1, 2, 3, …), and a departure signal can be acquired. When the last hand leaves the cabinet and the light curtain returns to being fully connected, that is, the number of cut-off regions changes from 1 to 0, a departure-end signal can be acquired.
At the same time, the cut-off region that appears or disappears is known for each trigger. For example, when an object enters the cabinet, the number of illuminated points at the receiving end changes. Suppose the light curtain consists of 24 pairs of light points and the receiving end reads the data 000000000000011110000000 at a certain moment (where 0 means the point is unblocked and 1 means it is blocked). This indicates that points 14 to 17 are blocked and mutually connected, meaning one operation point is inside the cabinet at that moment, with position coordinates 14-17; this span is called the pick-and-place area. Moreover, the article being picked or placed in this trigger is necessarily located in the pick-and-place area of this operation, so for each trigger of a pick-and-place operation, articles that are not in the pick-and-place area and do not participate in the operation can be filtered out later.
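A minimal sketch of parsing one light-curtain reading into pick-and-place areas, using the 24-point example above (the function name is illustrative):

```python
def blocked_regions(reading: str):
    """Return 1-based (start, end) spans of consecutive blocked light points."""
    regions, start = [], None
    for i, bit in enumerate(reading + '0'):  # sentinel closes a trailing run
        if bit == '1' and start is None:
            start = i
        elif bit != '1' and start is not None:
            regions.append((start + 1, i))   # convert to 1-based, inclusive
            start = None
    return regions

print(blocked_regions('000000000000011110000000'))  # -> [(14, 17)]
```

Tracking how the number of these spans changes between consecutive readings yields the entry-start, entry, departure, and departure-end signals described above.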
Taking the optical flow detection manner shown in FIG. 5(c) as an example, the trigger mechanism based on image optical flow is a purely algorithmic trigger. To ensure the robustness of the optical flow computation, the optical axis of the camera is parallel to the entrance/exit plane (i.e., perpendicular to the motion direction of the pick-and-place action), while the camera's field of view covers the entire entrance of the cabinet. The triggering unit 11 analyzes the camera's video of the entrance/exit, which contains the current image of the cabinet opening. The triggering unit 11 computes optical flow in real time using a method such as LK (Lucas-Kanade) optical flow, and projects the resulting flow vectors onto the vertical direction of the image (since the camera's optical axis is perpendicular to the pick-and-place motion direction, the vertical direction of the camera image is the motion direction), obtaining flow vectors that represent entry into and exit from the cabinet. When an object enters or leaves the cabinet, its motion produces a region of flow vectors pointing in the entering or leaving direction; when such a region of flow vectors in the pick-and-place direction appears in the entrance/exit area, an object is entering or leaving the cabinet, and the cabinet is determined to be in the article pick-and-place triggering state. Whether the event is an entry signal or an exit signal can be determined from the direction of the flow vectors. Meanwhile, the optical flow vector region corresponds to the pick-and-place area of this trigger.
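For illustration, a minimal sketch of this trigger using OpenCV's dense Farneback flow as a stand-in for the LK flow mentioned above (the thresholds, and the sign convention tying positive vertical flow to entering, are illustrative assumptions that depend on how the camera is mounted):

```python
import cv2
import numpy as np

def vertical_flow_trigger(prev_gray, cur_gray, mag_thresh=2.0, area_thresh=500):
    """Detect an entering/leaving motion from the vertical flow component."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    vy = flow[..., 1]                 # vertical component = pick-and-place axis
    entering = vy > mag_thresh        # pixels moving into the cabinet
    leaving = vy < -mag_thresh        # pixels moving out of the cabinet
    if entering.sum() > area_thresh:  # region size gates spurious flow
        return 'enter', np.column_stack(np.nonzero(entering))
    if leaving.sum() > area_thresh:
        return 'leave', np.column_stack(np.nonzero(leaving))
    return None, None                 # no pick-and-place trigger
```

The returned pixel coordinates of the flow region correspond to the pick-and-place area of this trigger.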
Optionally, referring to FIG. 9, the system further comprises a target detection unit 15 connected to the image acquisition unit 12; the target detection unit 15 is also connected to the article detection unit 14.
The image acquisition unit 12 is used for sending the acquired image related to the article taking and placing moment to the target detection unit 15. The target detection unit 15 is configured to perform target detection on that image to obtain the local image data of the area where the article is located and the coordinates of the local image data, and to send the local image data and its coordinates to the article detection unit 14 as the target image data. The article detection unit 14 is used for determining the article pick-and-place information based on the information of the pick-and-place area, the weight data, the local image data of the area where the article is located, and the coordinates of the local image data.
Since the target detection unit 15 sends only the local image data of the area where the article is located, together with its coordinates, to the article detection unit 14 as the target image data after performing target detection, the amount of transmitted data is further reduced compared with sending all the image data of the image acquired by the image acquisition unit 12 directly to the article detection unit 14; this reduces the amount of computation and improves detection efficiency. Optionally, the processors in the target detection unit 15 and the triggering unit 11 may be different processors, or the same processor, which is not limited in the embodiments of the present application.
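A sketch of the payload reduction the target detection unit performs, where detector stands for any hypothetical model returning (x, y, w, h) boxes and frame is a NumPy image (both names are assumptions for illustration):

```python
def build_target_image_data(frame, detector):
    """Forward only the detected article crops and their coordinates,
    rather than the full frame, as the target image data."""
    payload = []
    for (x, y, w, h) in detector(frame):
        crop = frame[y:y + h, x:x + w].copy()
        payload.append({'crop': crop, 'coords': (x, y, w, h)})
    return payload
```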
Optionally, referring to FIG. 10, in a local-plus-cloud deployment, the system further includes a communication unit 16; the triggering unit 11, the image acquisition unit 12, the weight acquisition unit 13 and the communication unit 16 are deployed locally at the article taking and placing cabinet, and the article detection unit 14 is deployed in the cloud.
In the configuration where the system includes the communication unit 16 and the article detection unit 14 is deployed in the cloud: the triggering unit 11 is configured to determine, based on the article pick-and-place trigger data of the cabinet, that the cabinet is in the article pick-and-place triggering state, determine the pick-and-place area based on the trigger data, send the information of the pick-and-place area to the communication unit 16, and send trigger signals to the image acquisition unit 12 and the weight acquisition unit 13, triggering them to acquire image data and weight data respectively. The image acquisition unit 12 is used for acquiring the image related to the article taking and placing moment under the trigger of the triggering unit 11 and sending the target image data obtained from that image to the communication unit 16. The weight acquisition unit 13 is used for acquiring the weight data related to the article taking and placing moment under the trigger of the triggering unit 11 and sending it to the communication unit 16. The communication unit 16 sends the information of the pick-and-place area, the weight data and the target image data to the article detection unit 14, and the article detection unit 14 determines the article pick-and-place information based on them.
In the configuration where the system further includes the target detection unit 15, with the article detection unit 14 deployed in the cloud: the triggering unit 11 behaves as above, determining the triggering state and the pick-and-place area based on the article pick-and-place trigger data, sending the information of the pick-and-place area to the communication unit 16, and sending trigger signals to the image acquisition unit 12 and the weight acquisition unit 13. The image acquisition unit 12 sends the acquired image related to the article taking and placing moment to the target detection unit 15. The target detection unit 15 performs target detection on that image to obtain the local image data of the area where the article is located and its coordinates, and sends them to the communication unit 16 as the target image data. The weight acquisition unit 13 acquires the weight data related to the article taking and placing moment under the trigger of the triggering unit 11 and sends it to the communication unit 16. The communication unit 16 sends the information of the pick-and-place area, the weight data and the target image data to the article detection unit 14, and the article detection unit 14 determines the article pick-and-place information based on them.
Optionally, referring to FIG. 11, an embodiment of the present application further provides an article taking and placing cabinet system. Besides the article detection system, the system further comprises an access control unit, a payment unit and a display unit. For the functions of the article detection system, refer to the description above, which is not repeated here. The access control unit is used for controlling access to the article taking and placing cabinet. For example, when the user needs to open the cabinet to take or place articles, access card information can be entered by swiping an access card; the access control unit verifies the card information, and the cabinet is allowed to open only after verification passes. Besides card swiping, access control information can also be entered directly, and the cabinet can be opened only after the access control unit verifies that information. Optionally, the access control information may be a user name, a password, and the like. Of course, access verification can also be performed by face recognition, in which case the cabinet can be opened only after face recognition passes.
Optionally, the article taking and placing cabinet system further comprises an identity recognition unit for verifying the identity of the user. After the user passes the verification of the access control unit, the identity recognition unit can be triggered to perform identity recognition, and the cabinet is opened after the user passes that recognition. During identity recognition, the user can be guided to enter user information, such as identity document information or a password; the present application does not limit the manner of identity verification. It should be understood that the article taking and placing cabinet system may be provided with either the identity recognition unit or the access control unit, which is not limited in the embodiments of the present application.
Optionally, the payment unit is configured to perform the payment operation for the picked articles after the article pick-and-place operation is completed, and the display unit is used for displaying the information of the articles to be paid for and the payment information. For example, after the article detection system detects the pick-and-place information, the amount to be paid can be calculated from the article information to obtain the payment information. The display unit displays the payment information and the information of the articles to be paid for, and the user can pay through the payment unit based on the displayed information. The embodiment of the present application limits neither the specific information displayed by the display unit nor the payment operation process of the payment unit.
In this regard, referring to fig. 12, an embodiment of the present application provides an article detection method, which is applied to the article detection system. As shown in fig. 12, the method includes the following steps.
Step 1201, acquiring article taking and placing triggering data for the article taking and placing cabinet.
Based on the above three detection modes, the step 1201 includes, but is not limited to, the following three detection cases:
in the first case: the two sides of the entrance and the exit of the article taking and placing cabinet are provided with infrared correlation units;
acquiring the article taking and placing trigger data for the article taking and placing cabinet includes:
acquiring an infrared signal transmitted by an infrared correlation unit;
after acquiring the article taking and placing trigger data for the article taking and placing cabinet, the method further includes:
detecting an infrared cut-off signal based on an infrared signal emitted by an infrared correlation unit;
and when the change of the quantity of the infrared cutoff signals is detected, determining that the article taking and placing cabinet is in an article taking and placing triggering state.
In the second case: the entrance and the exit of the article taking and placing cabinet are provided with cameras, the field angle of the cameras covers the entrance and the exit, and the optical axis direction of the cameras is parallel to the entrance and the exit;
acquiring the article taking and placing trigger data for the article taking and placing cabinet includes:
acquiring a current image of an entrance and an exit acquired by a camera;
after acquiring the article taking and placing trigger data for the article taking and placing cabinet, the method further includes:
acquiring an optical flow vector based on a current image of an entrance and an exit acquired by a camera;
and when the light stream vector of the taking and placing operation direction appears in the entrance and exit area, determining that the article taking and placing cabinet is in an article taking and placing triggering state.
In the third case: the edges of the entrance and exit of the article taking and placing cabinet are provided with markers, a camera is provided at the entrance and exit, and the field angle of the camera covers the entrance and exit.
Acquiring the article taking and placing trigger data for the article taking and placing cabinet includes:
acquiring a current image of an entrance and an exit of the article taking and placing cabinet acquired by a camera;
after acquiring the article taking and placing trigger data for the article taking and placing cabinet, the method further includes:
detecting marker information in a current image;
and determining whether the article taking and placing cabinet is in an article taking and placing triggering state or not based on the detection result.
It should be noted that, the specific processes of the three cases may refer to the related contents in the article detection system, and are not described in detail here.
Step 1202, when the article taking and placing cabinet is determined to be in an article taking and placing triggering state based on the article taking and placing triggering data, determining a taking and placing area based on the article taking and placing triggering data; and acquiring an image and weight data related to the article taking and placing time, and acquiring target image data based on the image related to the article taking and placing time.
When it is determined, based on the article taking and placing trigger data, that the cabinet is in the article pick-and-place triggering state, the manner of determining the pick-and-place area from the trigger data can be found in the description of the article detection system above; different manners of acquiring the trigger data correspond to different manners of determining the pick-and-place area. If the local-plus-cloud manner is adopted, then regardless of which manner of acquiring the trigger data is used, once the cabinet is determined to be in the triggering state based on the trigger data, the determined information of the pick-and-place area can be uploaded to the cloud, and article detection is performed by the cloud.
When the cabinet is determined to be in the article pick-and-place triggering state based on the trigger data, the image related to the article taking and placing moment can be acquired by the image acquisition unit and then obtained from it. Optionally, acquiring the image related to the article taking and placing moment includes acquiring the image at the article taking and placing moment itself, or acquiring a reference number of images before and after that moment. The reference number may be set based on the application scenario or experience, and is not limited in the embodiment of the present application.
In addition, when the weight data related to the article taking and placing moment is acquired by the weight acquisition unit according to the trigger data, the weight acquisition unit may continuously acquire weight data in real time, and the weight data within a period before and after the article taking and placing moment is stored for subsequent article detection.
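A sketch of such a continuously sampled weight buffer (the window length and class name are illustrative assumptions):

```python
import time
from collections import deque

class WeightBuffer:
    """Keep a short rolling history of weight samples so that, on a trigger,
    readings before and after the pick-and-place moment are both available."""
    def __init__(self, seconds=5.0):
        self.seconds = seconds
        self.samples = deque()           # (timestamp, grams) pairs

    def push(self, grams):
        now = time.time()
        self.samples.append((now, grams))
        while self.samples and now - self.samples[0][0] > self.seconds:
            self.samples.popleft()       # drop samples older than the window

    def delta(self):
        """Weight change across the buffered window (latest minus earliest)."""
        return self.samples[-1][1] - self.samples[0][1] if self.samples else 0.0
```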
When the target image data is acquired based on the image related to the article taking and placing time, the method includes, but is not limited to, the following two modes:
the first mode: all image data in the image is used as the target image data.
In this mode, after the image acquisition unit acquires the image related to the article taking and placing moment, all the image data in the image is directly used as the target image data. If the local-plus-cloud manner is adopted, all the image data is uploaded to the cloud as the target image data, and the cloud performs article detection accordingly.
The second mode: article detection is performed on the image related to the article taking and placing moment to obtain the local image data of the area where the article is located and the coordinates of the local image data, and the local image data together with its coordinates is used as the target image data.
In this mode, after the image acquisition unit acquires the image related to the article taking and placing moment, it sends the image to the target detection unit, and the target detection unit detects articles in the image based on means such as deep learning. For example, a classifier is trained in advance by deep learning or similar means; if the target detection unit includes this classifier, the image data related to the article taking and placing moment is input into the classifier, which identifies whether the image contains articles and which articles it contains, so that the local image data possibly containing articles is extracted, i.e., the local image data of the area where the article is located is obtained. The position of that local image data in the original image (the image related to the article taking and placing moment) is then determined, giving the coordinates of the local image data.
The local image data of the area where the article is located and its coordinates are used as the target image data and uploaded to the article detection unit, so that the article detection unit performs article detection accordingly. Compared with uploading all the image data of the whole image, the amount of uploaded data is small, which can further improve article detection efficiency.
It should be noted that if the article taking and placing cabinet has multiple layers, a weight acquisition unit may be disposed on each layer. When the cabinet is determined to be in the article pick-and-place triggering state based on the trigger data, the weight data acquired by the weight acquisition unit of every layer is obtained for subsequent article correction. Alternatively, after the pick-and-place area is determined, the layer of the cabinet in which the operation takes place may be determined, so that only the weight data acquired by that layer's weight acquisition unit is obtained. Which alternative is selected is not limited in the present application.
Step 1203, determining article taking and placing information based on the information of the taking and placing area, the weight data and the target image data.
Optionally, the method provided by the embodiment of the present application may be implemented entirely locally, that is, all processes run locally, or implemented locally in combination with the cloud. For example, the information of the pick-and-place area, the weight data and the target image data are sent to the cloud, and the article pick-and-place information is determined in the cloud based on them. Optionally, whether implemented locally or in combination with the cloud, the article pick-and-place information is determined based on the information of the pick-and-place area, the weight data and the target image data in, but not limited to, the following three cases:
in the first case: the target image data comprises all the image data in the image related to the article taking and placing moment. Determining the article pick-and-place information based on the information of the pick-and-place area, the weight data and the target image data includes: determining the types and quantity of the articles located in the pick-and-place area based on the positions in the detected article information and the information of the pick-and-place area; correcting the quantity of the articles in the pick-and-place area based on the weight data, and obtaining the article pick-and-place information from the corrected quantity and the types of the articles in the pick-and-place area.
Optionally, after determining the type and the number of the articles located in the pick-and-place area based on the position in the article information and the information of the pick-and-place area, the method further includes: rechecking the number of articles located in the pick-and-place area based on the weight data; and when the rechecking is passed, taking the types and the quantity of the articles in the taking and placing area as article taking and placing information.
For example, as shown in FIG. 13, locally at the article taking and placing cabinet, the triggering unit performs trigger analysis using the image data provided by the image acquisition unit; when a pick-and-place operation occurs, it outputs the corresponding trigger data, determines the pick-and-place area, and sends the information of the pick-and-place area to the article detection unit. The triggering unit then triggers the image acquisition unit to record an image related to the article taking and placing moment, and triggers the weight acquisition unit to acquire the weight data related to that moment. Under this trigger, the image acquisition unit acquires the image and sends all the image data in it to the article detection unit as the target image data; the weight acquisition unit acquires the weight data and sends it to the article detection unit.
The article detection unit detects the target image data to obtain the positions, types and quantities of all articles it contains; the types and quantity of the articles located in the pick-and-place area are then determined based on the positions in the article information and the information of the pick-and-place area, i.e., the pick-and-place area indicated by that information and the positions detected by the article detection unit are used to filter out articles that do not belong to this pick-and-place operation. The article detection unit then rechecks the number of articles in the pick-and-place area based on the weight data: if the types and quantity can be matched with the weight data acquired by the weight acquisition unit, the recheck passes, and the types and quantity of the articles in the pick-and-place area are used as the article pick-and-place information. If they cannot be matched, the quantity of articles in the pick-and-place area is corrected, and the article pick-and-place information is obtained from the corrected quantity and the types of the articles in the pick-and-place area.
When determining whether the types and quantity of the articles in the pick-and-place area match the weight data acquired by the weight acquisition unit, a weight value can be computed from the types and quantity, while the acquired weight data yields the weight change before and after the pick-and-place operation, i.e., a weight difference. If the weight computed from the types and quantity is consistent with the weight difference, or the error is within a reference range, the recheck is determined to pass; if it is inconsistent with the weight difference and the error exceeds the reference range, the recheck is determined to fail.
Optionally, when the recheck fails and correction is required, different quantities may be enumerated based on the types of articles in the pick-and-place area and solved as simultaneous equations, so as to obtain a quantity that satisfies both the article types in the pick-and-place area and the weight difference; this is used as the corrected quantity. The corrected quantity and the recognized types of the articles in the pick-and-place area are then used as the article pick-and-place information. Of course, corrections may also be performed in ways other than the above, which is not limited in the embodiment of the present application.
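A toy version of this enumeration-based correction, assuming each recognized type has a known unit weight in the article library (the unit weights, search bound and tolerance are illustrative assumptions):

```python
from itertools import product

def correct_counts(types, unit_weights, delta_w, max_count=5, tol=5.0):
    """Enumerate candidate quantities per type and keep the combination whose
    total weight best matches the measured weight change |delta_w|."""
    best, best_err = None, float('inf')
    for counts in product(range(max_count + 1), repeat=len(types)):
        total = sum(c * unit_weights[t] for c, t in zip(counts, types))
        err = abs(total - abs(delta_w))
        if err < best_err:
            best, best_err = counts, err
    return dict(zip(types, best)) if best_err <= tol else None

# e.g. a measured change of -660 g with colas (330 g) and water (550 g) detected:
print(correct_counts(['cola', 'water'], {'cola': 330.0, 'water': 550.0}, -660.0))
# -> {'cola': 2, 'water': 0}
```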
In the second case: the target image data comprises the local image data of the area where the article is located, obtained by performing article detection on the image related to the article taking and placing moment, together with the coordinates of that local image data. Determining the article pick-and-place information based on the information of the pick-and-place area, the weight data and the target image data includes: filtering the local image data based on the information of the pick-and-place area and the coordinates of the local image data, and comparing the filtered local image data with the article sample library; when the comparison result shows that the filtered local image data contains articles, determining the types and quantity of the articles it contains; rechecking the quantity based on the weight data, and determining the article pick-and-place information from the recheck result. Optionally, as in the first case, determining the article pick-and-place information from the recheck result includes: rechecking the number of articles in the pick-and-place area based on the weight data; when the recheck passes, using the types and quantity of the articles contained in the filtered local image data as the article pick-and-place information; when the recheck fails, correcting the quantity based on the weight data and using the types of the articles in the filtered local image data together with the corrected quantity as the article pick-and-place information. For the correction manner, refer to the related description of the first case, which is not repeated here.
In this case, the article detection system further includes a target detection unit deployed locally, which only performs article target detection and does not determine article types. The target detection unit may be a binary classifier (article target or not); its algorithmic complexity is low, the classifier is insensitive to specific article types, and its model does not require frequent updates and maintenance, making it suitable for local deployment. The data transmitted to the cloud server need not include the whole image related to the article taking and placing moment, only the local image data (small images) of the areas where articles are located, further reducing the amount of transmitted data. This overall design balances computation cost, data transmission cost, and operation and maintenance cost, further reducing the cost of the whole system scheme.
In addition, an article sample library may be established in advance, storing the article information of the article taking and placing cabinet, including but not limited to article types, quantities, positions and the like. Optionally, the article sample library may be stored in the cloud and contain the article information of all cabinets managed by the cloud. Therefore, for the cabinet currently requiring article detection, the method provided by the embodiment of the application can also upload the identification information of that cabinet to the cloud, so that the cloud locates the information related to that cabinet in the article sample library. Optionally, the identification information includes but is not limited to position information, codes and the like; this is not limited in the embodiment of the present application, as long as the corresponding cabinet can be identified.
For example, taking the case where the on-off state of the light curtain determines whether the cabinet is in the article pick-and-place triggering state, as shown in FIG. 14, with the infrared correlation light curtain as the trigger mechanism, the overall article detection flow is essentially the same as that of the first case shown in FIG. 13. The difference from FIG. 13 is that the article detection unit compares all the small images provided by the target detection unit with the article sample library to determine whether the target in each small image is an article and which article it is, and counts the types and quantities of all articles.
Locally at the article taking and placing cabinet, the target detection unit detects the image related to the article taking and placing moment and outputs the local images (small images) of possible article target areas together with their coordinates in the original image, i.e., the local image data of the areas where articles are located and the coordinates of that local image data; the trigger state, the information of the pick-and-place area, the local image data and its coordinates are then transmitted to the article detection unit. The article detection unit may be located in a cloud server or locally, which is not limited in the embodiment of the present application.
The article detection unit filters the local image data based on the information of the pick-and-place area and the coordinates of the local image data: using the pick-and-place area indicated by that information and the coordinates of the local image data in the original image, articles that do not belong to this pick-and-place operation are filtered out, leaving the filtered local image data that contains the picked or placed articles. The filtered local image data is compared with the article sample library; when the comparison result shows that it contains articles, the types and quantity of the articles it contains are determined; the quantity is then corrected based on the weight data, and the article pick-and-place information is determined from the correction result.
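A sketch of this filtering step, reusing the crop payload format from the earlier sketch and assuming the 1-based light-curtain span has already been mapped into pixel coordinates (both are illustrative assumptions):

```python
def overlaps(coords, area_span):
    """Half-open interval overlap between a crop's horizontal extent and the
    pick-and-place area span (lo, hi) in pixel coordinates."""
    (x, _, w, _), (lo, hi) = coords, area_span
    return x < hi and x + w > lo

def filter_crops(payload, area_span):
    """Keep only crops that intersect the triggered pick-and-place area."""
    return [item for item in payload if overlaps(item['coords'], area_span)]
```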
In the third case: filtering the target image data based on the information of the pick-and-place area to obtain filtered image data; and identifying the article information in the filtered image data based on the weight data and the target image data to obtain article taking and placing information, wherein the article taking and placing information comprises types and quantity.
Optionally, identifying article information in the filtered image data based on the weight data and the target image data to obtain article pick-and-place information, including: rechecking the quantity in the identified article information through the weight data; when the rechecking is passed, the identified article information is used as article taking and placing information; and when the rechecking fails, correcting the identified article information, and obtaining article taking and placing information according to a correction result.
For the manner of correcting the identified article information, refer to the description of the first case, which is not repeated here. In this case, the image acquired by the image acquisition unit may contain, besides the picked or placed article, the hand performing the operation and other articles outside the cabinet; filtering the target image data based on the information of the pick-and-place area removes the hand and the unrelated articles, so the filtered image data contains less noise, which reduces the computation of subsequent data processing and improves detection efficiency and accuracy.
Based on the above article detection process, the embodiment of the application may treat operation state 0 as the start and operation state 3 as the end, and take these two states, together with the intervening operation states other than the invalid state 5, as one complete article pick-and-place event. As shown in FIG. 15, article detection may be implemented entirely locally, or locally in combination with the cloud.
If processing is entirely local, the weight change of this operation is obtained by analyzing the change in the weight data before and after the event. The target image data related to the time the event occurred is detected and searched to obtain the article types and corresponding quantities in it. It is then checked whether the article types and quantities obtained from the image data correspond to the weight change before and after the pick-and-place operation. If they do, the image recognition result, i.e., the visual recognition result, is correct and can be used as the final result of this operation. If the types and quantities obtained from the image data differ greatly from the weight change, different article quantities are enumerated using the type information recognized from the image data and solved as simultaneous equations, to find quantity values that simultaneously satisfy the article types and the total weight change; these values are used as the article types and quantities corresponding to this pick-and-place operation.
Taking a self-service shopping scenario as an example, the above method determines the final pick-and-place information of each operation; after the article types and quantities are obtained, the results of all operation events between the opening and closing of the article taking and placing cabinet are summed to obtain the information of the articles purchased in this shopping session. The article information can then be output to the display unit for display, and the payment unit performs the payment operation, completing the self-service purchase.
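A minimal sketch of this per-event aggregation, under the assumption that each event is summarized as signed per-type counts (positive for items taken out, negative for items put back):

```python
from collections import Counter

def aggregate_events(events):
    """Sum signed per-type counts over all events of one shopping session."""
    total = Counter()
    for event in events:                 # e.g. {'cola': 2} or {'cola': -1}
        total.update(event)              # Counter.update adds the counts
    return {k: v for k, v in total.items() if v != 0}

print(aggregate_events([{'cola': 2}, {'cola': -1, 'water': 1}]))
# -> {'cola': 1, 'water': 1}
```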
If local-plus-cloud processing is used, the steps corresponding to 1-3 in FIG. 15 are performed locally, the corresponding data is transmitted to the cloud, and the steps corresponding to 4 and 5 are performed in the cloud. The cloud then feeds the detected article pick-and-place information back to the local side, where the display unit shows it and the payment unit performs payment settlement and other functions.
It can be seen that the technical scheme provided by the embodiment of the application fuses all weight changes and image recognition results between the opening and closing of the article taking and placing cabinet. The trigger hardware divides the entire shopping process into individual events. The weight change before and after each event is rechecked against the image recognition result; if the recheck fails, a joint solution combining image recognition and weight is computed. Because a single event involves few article types and quantities, the joint solution is highly accurate. Dividing a shopping session into independent events that do not affect one another prevents a computation error in one event from corrupting the whole shopping result, improving accuracy.
According to the method provided by the embodiment of the application, when the article taking and placing cabinet is determined to be in the article taking and placing triggering state based on the triggering data, the image and the weight data related to the article taking and placing time are obtained, and the target image data is obtained based on the image related to the article taking and placing time, so that the article taking and placing information is determined based on the information of the taking and placing area, the target image data and the weight data.
Based on the same technical concept, the embodiment of the application provides an article detection device, and the device is applied to the article detection system. Referring to fig. 16, the apparatus includes:
the triggering component 161 is configured to acquire item taking and placing triggering data for the item taking and placing cabinet, and send the triggering data to the first processor 162;
the first processor 162 is configured to determine a pick-and-place area based on the article pick-and-place trigger data when it is determined that the article pick-and-place cabinet is in an article pick-and-place trigger state based on the trigger data; acquiring an image and weight data related to the article taking and placing time, and acquiring target image data based on the image related to the article taking and placing time; and determining article taking and placing information based on the information of the taking and placing area, the weight data and the target image data.
Optionally, the triggering assembly 161 includes an infrared correlation unit, and the infrared correlation unit is disposed at two sides of the entrance and exit of the article taking and placing cabinet;
a first processor 162 for detecting an infrared cut signal based on the infrared signal emitted from the infrared correlation unit; and when the change of the quantity of the infrared cutoff signals is detected, determining that the article taking and placing cabinet is in an article taking and placing triggering state.
Optionally, the infrared correlation unit includes an infrared emission end and an infrared receiving end, the infrared emission end is disposed at a lower side of an entrance of the article taking and placing cabinet, and the infrared receiving end is disposed at an upper side of the entrance of the article taking and placing cabinet.
Optionally, the triggering component 161 includes a camera, the camera is disposed at an entrance of the article taking and placing cabinet, a field angle of the camera covers the entrance, and an optical axis direction of the camera is parallel to the entrance;
a first processor 162, configured to obtain an optical flow vector based on a current image of the entrance and exit acquired by the camera; and when the light stream vector of the taking and placing operation direction appears in the entrance and exit area, determining that the article taking and placing cabinet is in an article taking and placing triggering state.
Optionally, the triggering component 161 includes a camera, the camera is disposed at an entrance of the article picking and placing cabinet, a field angle of the camera covers the entrance, and an edge of the entrance of the article picking and placing cabinet has a marker;
a first processor 162 for detecting marker information in the current image of the doorway collected by the camera; and determining whether the article taking and placing cabinet is in an article taking and placing triggering state or not based on the detection result.
Optionally, the first processor 162 is configured to acquire an image of the article pick-and-place time, or an image of a reference quantity before and after the article pick-and-place time.
Optionally, the first processor 162 is configured to send the information of the pick-and-place area, the weight data and the target image data to the cloud, where the article pick-and-place information is determined based on them.
Optionally, the first processor 162 is configured to filter the target image data based on the information of the pick-and-place area, so as to obtain filtered image data;
and identifying article taking and placing information in the filtered image data based on the weight data and the target image data, wherein the article taking and placing information comprises types and quantities.
Optionally, the first processor 162 is configured to acquire all image data in the image related to the article pick-and-place time, and use all the image data as the target image data.
Optionally, the first processor 162 is configured to perform article detection on the image related to the article pick-and-place time, obtain local image data of the area where the article is located and coordinates of the local image data, and use the local image data of the area where the article is located and the coordinates of the local image data as target image data.
Optionally, the first processor 162 is configured to determine the type and number of the articles located in the pick-and-place area based on the position in the article information and the information of the pick-and-place area; and correcting the quantity of the articles in the pick-and-place area based on the weight data, and obtaining article pick-and-place information according to the corrected quantity of the articles and the types of the articles in the pick-and-place area.
Optionally, the first processor 162 is configured to filter the local image data based on the information of the pick-and-place area and the coordinates of the local image data, and compare the filtered local image data with the article sample library; when the filtered local area image data comprise the articles according to the comparison result, determining the types and the quantity of the articles contained in the filtered local area image data; and correcting the quantity based on the weight data, and determining the article taking and placing information according to the correction result.
Optionally, the first processor 162 is further configured to review the number of articles located in the pick-and-place area based on the weight data; and when the rechecking is passed, taking the types and the quantity of the articles in the taking and placing area as article taking and placing information.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
In an example embodiment, a computer device is also provided that includes a processor and a memory having at least one instruction stored therein. The at least one instruction is configured to be executed by one or more processors to implement any of the article detection methods described above.
Fig. 17 is a schematic structural diagram of a computer device according to an embodiment of the present invention. The device may be a terminal, for example: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. A terminal may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
Generally, a terminal includes: a processor 1701 and a memory 1702.
The processor 1701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1701 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1702 may include one or more computer-readable storage media, which may be non-transitory. The memory 1702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1702 is used to store at least one instruction for execution by the processor 1701 to implement the article detection method provided by the method embodiments of the present application.
In some embodiments, the terminal may further include: a peripheral interface 1703 and at least one peripheral. The processor 1701, memory 1702 and peripheral interface 1703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 1703 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1704, a touch display screen 1705, a camera 1706, an audio circuit 1707, a positioning component 1708, and a power source 1709.
The peripheral interface 1703 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1701 and the memory 1702. In some embodiments, the processor 1701, memory 1702, and peripheral interface 1703 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1701, the memory 1702, and the peripheral interface 1703 may be implemented on separate chips or circuit boards, which are not limited in this embodiment.
The radio frequency circuit 1704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1704 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1704 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1704 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1704 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1705 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 1705 is a touch display screen, it can also capture touch signals on or above its surface; such a touch signal may be input to the processor 1701 as a control signal for processing. In this case, the display screen 1705 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1705, disposed on the front panel of the terminal; in other embodiments, there may be at least two display screens 1705, disposed on different surfaces of the terminal or in a folded design; in still other embodiments, the display screen 1705 may be a flexible display disposed on a curved or folded surface of the terminal. The display screen 1705 may even be arranged as a non-rectangular irregular figure, that is, a specially shaped screen. The display screen 1705 may be made of a material such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1706 is used to capture images or video. Optionally, the camera assembly 1706 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1706 may also include a flash. The flash may be a monochrome-temperature flash or a dual-color-temperature flash; the latter combines a warm-light flash with a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 1707 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 1701 for processing or to the radio frequency circuit 1704 for voice communication. For stereo collection or noise reduction, multiple microphones may be arranged at different parts of the terminal. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 1701 or the radio frequency circuit 1704 into sound waves. The speaker may be a traditional film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal not only into sound waves audible to humans, but also into sound waves inaudible to humans, for example for distance measurement. In some embodiments, the audio circuit 1707 may also include a headphone jack.
The positioning component 1708 is used to locate the current geographic position of the terminal to implement navigation or LBS (Location Based Service). The positioning component 1708 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1709 is used to supply power to the components of the terminal. The power supply 1709 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 1709 includes a rechargeable battery, the battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, the terminal also includes one or more sensors 1710. The one or more sensors 1710 include, but are not limited to: acceleration sensor 1711, gyro sensor 1712, pressure sensor 1713, fingerprint sensor 1714, optical sensor 1715, and proximity sensor 1716.
The acceleration sensor 1711 can detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the terminal. For example, the acceleration sensor 1711 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1701 may control the touch display screen 1705 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1711. The acceleration sensor 1711 may also be used to collect motion data of a game or of the user.
The gyroscope sensor 1712 can detect the body direction and rotation angle of the terminal, and can cooperate with the acceleration sensor 1711 to collect the user's 3D actions on the terminal. Based on the data collected by the gyroscope sensor 1712, the processor 1701 can implement the following functions: motion sensing (for example, changing the UI according to the user's tilting operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1713 may be disposed on a side frame of the terminal and/or under the touch display screen 1705. When the pressure sensor 1713 is disposed on a side frame, it can detect the user's grip signal on the terminal, and the processor 1701 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1713. When the pressure sensor 1713 is disposed under the touch display screen 1705, the processor 1701 controls the operability controls on the UI according to the pressure of the user's operation on the touch display screen 1705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1714 is used to collect the user's fingerprint; the processor 1701 identifies the user based on the fingerprint collected by the fingerprint sensor 1714, or the fingerprint sensor 1714 itself identifies the user based on the collected fingerprint. When the user's identity is recognized as trusted, the processor 1701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1714 may be provided on the front, back, or side of the terminal. When a physical key or a vendor logo is provided on the terminal, the fingerprint sensor 1714 may be integrated with the physical key or the vendor logo.
The optical sensor 1715 is used to collect the ambient light intensity. In one embodiment, the processor 1701 may control the display brightness of the touch display screen 1705 based on the ambient light intensity collected by the optical sensor 1715: when the ambient light intensity is high, the display brightness of the touch display screen 1705 is turned up; when the ambient light intensity is low, the display brightness is turned down. In another embodiment, the processor 1701 may also dynamically adjust the shooting parameters of the camera assembly 1706 according to the ambient light intensity collected by the optical sensor 1715.
The proximity sensor 1716, also known as a distance sensor, is typically provided on the front panel of the terminal and is used to collect the distance between the user and the front of the terminal. In one embodiment, when the proximity sensor 1716 detects that this distance is gradually decreasing, the processor 1701 controls the touch display screen 1705 to switch from the bright-screen state to the screen-off state; when the proximity sensor 1716 detects that the distance is gradually increasing, the processor 1701 controls the touch display screen 1705 to switch from the screen-off state back to the bright-screen state.
Those skilled in the art will appreciate that the structure shown in Fig. 17 does not limit the terminal; the terminal may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium is further provided, in which at least one instruction is stored; when executed by a processor of a computer device, the at least one instruction implements any one of the article detection methods described above.
In a possible embodiment of the present application, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It should be understood that "a plurality" herein means two or more. "And/or" describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (34)

1. An article detection method, comprising:
acquiring article taking and placing trigger data for an article taking and placing cabinet;
when the article taking and placing cabinet is determined to be in an article taking and placing triggering state based on the trigger data, determining a taking and placing area based on the article taking and placing trigger data;
acquiring an image and weight data related to the article taking and placing time, and acquiring target image data based on the image related to the article taking and placing time;
and determining article taking and placing information based on the information of the taking and placing area, the weight data and the target image data.
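For illustration only (it forms no part of the claims), the following minimal Python sketch wires the four steps of claim 1 together; every function and field name here is a hypothetical stand-in, since the claim prescribes no particular data structures.

    def is_triggered(trigger_data):
        # Hypothetical: at least one infrared beam across the doorway is
        # newly cut off (cf. claim 2).
        return trigger_data.get("blocked_beams", 0) > 0

    def locate_area(trigger_data):
        # Hypothetical: map which beams fired to a shelf region (x, y, w, h).
        return trigger_data.get("region", (0, 0, 0, 0))

    def detect_pick_and_place(trigger_data, images, weight_delta):
        """Claim 1: trigger data -> taking and placing area -> image and
        weight data -> article taking and placing information."""
        if not is_triggered(trigger_data):
            return None
        area = locate_area(trigger_data)
        target_image_data = images  # cf. claim 9: use the full frames
        return {"area": area,
                "frames": len(target_image_data),
                "weight_delta": weight_delta}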
2. The method according to claim 1, wherein the article taking and placing cabinet is provided with an infrared correlation unit at two sides of an entrance and exit;
the acquiring article taking and placing trigger data for the article taking and placing cabinet comprises:
acquiring an infrared signal emitted by the infrared correlation unit;
after acquiring the article taking and placing trigger data for the article taking and placing cabinet, the method further comprises the following steps:
detecting an infrared cut-off signal based on the infrared signal emitted by the infrared correlation unit;
and when the change of the quantity of the infrared cutoff signals is detected, determining that the article taking and placing cabinet is in an article taking and placing triggering state.
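As a minimal sketch of the trigger test in claim 2, assuming each infrared receiver reports one boolean per beam (True meaning the beam is cut off); the names are illustrative:

    def count_cut_off(beams):
        return sum(1 for blocked in beams if blocked)

    def beam_trigger_state(prev_beams, curr_beams):
        # Claim 2: a change in the number of cut-off infrared beams across
        # the doorway indicates an article taking or placing in progress.
        return count_cut_off(curr_beams) != count_cut_off(prev_beams)

    # Example: a hand entering the doorway blocks two of eight beams.
    assert beam_trigger_state([False] * 8, [True, True] + [False] * 6)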
3. The method according to claim 2, wherein the infrared correlation unit comprises an infrared emitting end and an infrared receiving end, the infrared emitting end is disposed at a lower side of the entrance of the article taking and placing cabinet, and the infrared receiving end is disposed at an upper side of the entrance of the article taking and placing cabinet.
4. The method according to claim 1, wherein an entrance and exit of the article taking and placing cabinet is provided with a camera, the field angle of the camera covers the entrance and exit, and the optical axis direction of the camera is parallel to the entrance and exit;
the acquiring article taking and placing trigger data for the article taking and placing cabinet comprises:
acquiring a current image of the entrance and exit collected by the camera;
after acquiring the article taking and placing trigger data for the article taking and placing cabinet, the method further comprises the following steps:
acquiring an optical flow vector based on the current image of the entrance and exit acquired by the camera;
and when an optical flow vector in the taking and placing operation direction appears in the entrance and exit area, determining that the article taking and placing cabinet is in an article taking and placing triggering state.
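One way to realize the test in claim 4 is dense optical flow. The sketch below uses OpenCV's Farneback flow as an assumed implementation (the claim names no algorithm) and assumes the taking and placing direction maps to the image y-axis:

    import cv2

    def entrance_flow_triggers(prev_gray, curr_gray, roi, min_mag=1.0):
        # Dense optical flow between two grayscale frames of the doorway.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        x, y, w, h = roi  # entrance and exit region within the image
        patch = flow[y:y + h, x:x + w]
        mean_dx = float(patch[..., 0].mean())
        mean_dy = float(patch[..., 1].mean())
        # Trigger when motion through the doorway is strong enough and
        # dominates sideways motion.
        return abs(mean_dy) > min_mag and abs(mean_dy) > abs(mean_dx)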
5. The method according to claim 1, wherein an entrance and exit of the article taking and placing cabinet is provided with a camera, the field angle of the camera covers the entrance and exit, and an edge of the entrance and exit of the article taking and placing cabinet is provided with a marker;
the acquiring article taking and placing trigger data for the article taking and placing cabinet comprises:
acquiring a current image of the entrance and exit of the article taking and placing cabinet collected by the camera;
after acquiring the article taking and placing trigger data for the article taking and placing cabinet, the method further comprises the following steps:
detecting marker information in the current image;
and determining whether the article taking and placing cabinet is in an article taking and placing triggering state or not based on the detection result.
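Claim 5 leaves the marker type open. One plausible reading is that markers fixed along the doorway edge disappear from view whenever a hand or article crosses the opening; the sketch below assumes ArUco tags (an assumption, using the opencv-contrib aruco module):

    import cv2

    ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    def marker_trigger_state(current_image, expected_ids):
        # Detect which edge markers are still visible in the doorway image.
        _corners, ids, _rejected = cv2.aruco.detectMarkers(current_image,
                                                           ARUCO_DICT)
        visible = set(ids.flatten().tolist()) if ids is not None else set()
        # Any expected marker missing -> something is crossing the opening.
        return not expected_ids.issubset(visible)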
6. The method of claim 1, wherein the acquiring an image related to the article taking and placing time comprises:
acquiring an image at the article taking and placing moment, or acquiring a reference number of images before and after the article taking and placing moment.
7. The method of claim 1, wherein determining article pick and place information based on the pick and place area information, the weight data, and the target image data comprises:
sending the information of the taking and placing area, the weight data, and the target image data to a cloud, where the article taking and placing information is determined based on the information of the taking and placing area, the weight data, and the target image data.
8. The method of any one of claims 1-7, wherein determining article pick-and-place information based on the pick-and-place area information, the weight data, and the target image data comprises:
filtering the target image data based on the information of the pick-and-place area to obtain filtered image data;
and identifying article taking and placing information in the filtered image data based on the weight data and the target image data, wherein the article taking and placing information comprises types and quantities.
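As a minimal sketch of the filtering step in claim 8, assuming the detector emits axis-aligned boxes and the taking and placing area is itself one box (x, y, w, h); all names are illustrative:

    def boxes_overlap(box, area):
        bx, by, bw, bh = box
        ax, ay, aw, ah = area
        return bx < ax + aw and ax < bx + bw and by < ay + ah and ay < by + bh

    def filter_by_area(detections, area):
        # Keep only detections whose bounding box intersects the taking
        # and placing area; each detection is a (box, label) pair.
        return [det for det in detections if boxes_overlap(det[0], area)]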
9. The method according to any one of claims 1 to 7, wherein obtaining target image data based on the image related to the article pick-and-place time comprises:
and acquiring all image data of the image related to the article taking and placing time, and taking all of the image data as the target image data.
10. The method according to any one of claims 1 to 7, wherein obtaining target image data based on the image related to the article pick-and-place time comprises:
and carrying out article detection on the image related to the article taking and placing moment to obtain local image data of an area where the article is located and coordinates of the local image data, and taking the local image data of the area where the article is located and the coordinates of the local image data as target image data.
11. The method of claim 9, wherein determining article pick-and-place information based on the information of the pick-and-place area, the weight data, and the target image data comprises:
identifying article information in the target image data, wherein the article information comprises a position, a type, and a quantity;
determining the type and the number of articles in the pick-and-place area based on the position in the article information and the information of the pick-and-place area;
and correcting the quantity of the articles in the pick-and-place area based on the weight data, and obtaining article pick-and-place information according to the corrected quantity of the articles and the types of the articles in the pick-and-place area.
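The correction in claim 11 can be read as snapping the visually counted quantity to the quantity implied by the weight change. A minimal sketch, assuming the per-item weight is known (for example from the cabinet's stocking record):

    def correct_count(visual_count, weight_delta, unit_weight, tol=0.1):
        # Quantity implied by the measured weight change.
        weighed = round(abs(weight_delta) / unit_weight)
        residual = abs(abs(weight_delta) - weighed * unit_weight)
        # Trust the scale when the change is close to a whole number of
        # items; otherwise keep the visual count.
        return weighed if residual <= tol * unit_weight else visual_count

    # Example: the camera counted 2 bottles, but 1497 g left a shelf
    # stocked with 499 g bottles, so the corrected count is 3.
    assert correct_count(2, -1497.0, 499.0) == 3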
12. The method of claim 11, wherein after determining the type and number of the articles located in the pick-and-place area based on the position in the article information and the information of the pick-and-place area, further comprising:
rechecking the number of articles in the pick-and-place area based on the weight data;
and when the rechecking is passed, taking the types and the quantity of the articles in the pick-and-place area as article pick-and-place information.
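The recheck in claim 12 differs from the correction above in that the weight is only used to confirm or reject the visual count; a minimal sketch under the same per-item-weight assumption:

    def recheck(visual_count, weight_delta, unit_weight, tol=0.1):
        # Pass when the measured weight change matches the visually
        # counted quantity to within the tolerance.
        expected = visual_count * unit_weight
        return abs(abs(weight_delta) - expected) <= tol * expected

    assert recheck(3, -1497.0, 499.0)      # confirmed
    assert not recheck(2, -1497.0, 499.0)  # one item was miscounted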
13. The method of claim 10, wherein determining article pick-and-place information based on the information of the pick-and-place area, the weight data, and the target image data comprises:
filtering the local image data based on the information of the pick-and-place area and the coordinates of the local image data, and comparing the filtered local image data with an article sample library;
when the comparison result indicates that the filtered local image data contains articles, determining the types and quantity of the articles contained in the filtered local image data;
and correcting the quantity based on the weight data, and determining article taking and placing information according to a correction result.
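Claim 13 compares the filtered crops against an article sample library. One common realization (an assumption; the claim fixes no matching scheme) is nearest-neighbor search over feature vectors:

    import numpy as np

    def nearest_item(crop_feature, library, threshold=0.8):
        # library maps article name -> feature vector produced by the same
        # (hypothetical) feature extractor as crop_feature.
        best_name, best_sim = None, 0.0
        q = crop_feature / (np.linalg.norm(crop_feature) + 1e-9)
        for name, feat in library.items():
            sim = float(np.dot(q, feat / (np.linalg.norm(feat) + 1e-9)))
            if sim > best_sim:
                best_name, best_sim = name, sim
        # Below the threshold the crop is treated as containing no article.
        return (best_name, best_sim) if best_sim >= threshold else (None, best_sim)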
14. An article detection device, the device comprising:
the triggering assembly is used for acquiring object taking and placing triggering data aiming at the object taking and placing cabinet and sending the triggering data to the first processor;
the first processor is used for determining a picking and placing area based on the article picking and placing triggering data when the article picking and placing cabinet is determined to be in an article picking and placing triggering state based on the triggering data; acquiring an image and weight data related to the article taking and placing time, and acquiring target image data based on the image related to the article taking and placing time; and determining article taking and placing information based on the information of the taking and placing area, the weight data and the target image data.
15. The device according to claim 14, wherein the triggering component comprises an infrared correlation unit, and the infrared correlation unit is arranged at two sides of an entrance and an exit of the article taking and placing cabinet;
the first processor is used for detecting an infrared cut-off signal based on the infrared signal emitted by the infrared correlation unit; and when the change of the quantity of the infrared cutoff signals is detected, determining that the article taking and placing cabinet is in an article taking and placing triggering state.
16. The apparatus according to claim 15, wherein the infrared correlation unit comprises an infrared emitting end and an infrared receiving end, the infrared emitting end is disposed at a lower side of the entrance of the article taking and placing cabinet, and the infrared receiving end is disposed at an upper side of the entrance of the article taking and placing cabinet.
17. The apparatus according to claim 14, wherein the triggering component comprises a camera, the camera is disposed at an entrance of the article taking and placing cabinet, a field angle of the camera covers the entrance, and an optical axis direction of the camera is parallel to the entrance;
the first processor is used for acquiring an optical flow vector based on the current image of the entrance and exit collected by the camera; and when an optical flow vector in the taking and placing operation direction appears in the entrance and exit area, determining that the article taking and placing cabinet is in an article taking and placing triggering state.
18. The apparatus of claim 14, wherein the triggering component comprises a camera, the camera is disposed at an entrance of the article taking and placing cabinet, a field angle of the camera covers the entrance, and an edge of the entrance of the article taking and placing cabinet has a marker;
the first processor is used for detecting marker information in the current image of the entrance and exit acquired by the camera; and determining whether the article taking and placing cabinet is in an article taking and placing triggering state or not based on the detection result.
19. The apparatus of claim 14, wherein the first processor is configured to obtain the image of the article pick-and-place time or a reference number of images before and after the article pick-and-place time.
20. The apparatus of claim 14, wherein the first processor is configured to send the pick-and-place area information, the weight data, and the target image data to a cloud, where article pick-and-place information is determined based on the pick-and-place area information, the weight data, and the target image data.
21. The apparatus according to any one of claims 14 to 20, wherein the first processor is configured to filter the target image data based on the information of the pick-and-place area to obtain filtered image data;
and identifying article taking and placing information in the filtered image data based on the weight data and the target image data, wherein the article taking and placing information comprises types and quantities.
22. The apparatus according to any one of claims 14 to 20, wherein the first processor is configured to obtain all image data of the image related to the article pick-and-place time, and use all of the image data as the target image data.
23. The apparatus according to any one of claims 14 to 20, wherein the first processor is configured to perform article detection on the image related to the article pick-and-place time, obtain local image data of an area where the article is located and coordinates of the local image data, and use the local image data of the area where the article is located and the coordinates of the local image data as target image data.
24. The apparatus of claim 22, wherein the first processor is configured to determine a type and a number of the articles located in the pick-and-place area based on the position in the article information and the information of the pick-and-place area; and correcting the quantity of the articles in the pick-and-place area based on the weight data, and obtaining article pick-and-place information according to the corrected quantity of the articles and the types of the articles in the pick-and-place area.
25. The apparatus of claim 23, wherein the first processor is configured to filter the local image data based on the information of the pick-and-place area and the coordinates of the local image data, and compare the filtered local image data with an article sample library; when the filtered local area image data comprise articles according to the comparison result, determining the types and the quantity of the articles contained in the filtered local area image data; and correcting the quantity based on the weight data, and determining article taking and placing information according to a correction result.
26. The apparatus of claim 25, wherein the first processor is further configured to review the number of items located in the pick-and-place area based on the weight data; and when the rechecking is passed, taking the types and the quantity of the articles in the pick-and-place area as article pick-and-place information.
27. An article detection system, comprising: a trigger unit, an image acquisition unit, a weight acquisition unit, and an article detection unit; the trigger unit, the weight acquisition unit, and the image acquisition unit are all connected with the article detection unit, and the image acquisition unit and the weight acquisition unit are also connected with the trigger unit;
the trigger unit is used for acquiring article taking and placing trigger data for the article taking and placing cabinet, determining a taking and placing area based on the article taking and placing trigger data when the article taking and placing cabinet is determined to be in an article taking and placing trigger state based on the trigger data, sending information of the taking and placing area to the article detection unit, sending trigger signals to the image acquisition unit and the weight acquisition unit, triggering the image acquisition unit to acquire image data, and triggering the weight acquisition unit to acquire weight data;
the image acquisition unit is used for acquiring images related to the article taking and placing time based on the trigger signal; transmitting target image data obtained based on the image to the article detection unit;
the weight acquisition unit is used for acquiring weight data related to the article taking and placing time based on the trigger signal; sending the weight data to the item detection unit;
the article detection unit is used for determining article taking and placing information based on the information of the taking and placing area, the weight data and the target image data.
28. The system of claim 27, wherein the trigger unit comprises an infrared correlation unit and a processor, the infrared correlation unit being arranged at an entrance and exit of the article taking and placing cabinet;
the processor is used for detecting an infrared cut-off signal based on the infrared signal emitted by the infrared correlation unit; and when the change of the quantity of the infrared cutoff signals is detected, determining that the article taking and placing cabinet is in an article taking and placing triggering state.
29. The system of claim 28, wherein the infrared correlation unit comprises an infrared emitting end and an infrared receiving end, the infrared emitting end is disposed at a lower side of the entrance of the article taking and placing cabinet, and the infrared receiving end is disposed at an upper side of the entrance of the article taking and placing cabinet.
30. The system according to claim 27, wherein the image acquisition unit is further configured to collect a current image of an entrance and exit of the article taking and placing cabinet, and send the current image to the trigger unit as trigger data;
the trigger unit is used for acquiring an optical flow vector based on the current image; and when an optical flow vector in the taking and placing operation direction appears in the entrance and exit area, determining that the article taking and placing cabinet is in an article taking and placing triggering state.
31. The system of claim 27, wherein the access edge of the article pick-and-place cabinet has a marker;
the image acquisition unit is also used for acquiring the current image of the entrance and the exit of the article taking and placing cabinet and sending the current image of the entrance and the exit of the article taking and placing cabinet to the trigger unit as trigger data;
the trigger unit is used for detecting marker information in the current image; and determining whether the article taking and placing cabinet is in an article taking and placing triggering state or not based on the detection result.
32. The system according to any one of claims 27-31, wherein the image acquisition unit comprises: the camera is arranged at the entrance and the exit of the article taking and placing cabinet, the monitoring area of the camera covers the whole entrance and the exit, and the optical axis direction of the camera is parallel to the entrance and the exit;
or, the image acquisition unit includes a plurality of cameras, the monitoring area of each camera covers a part of the entrance and exit of the article taking and placing cabinet, the monitoring areas of the plurality of cameras together cover the whole entrance and exit of the article taking and placing cabinet, and the optical axis direction of each camera is parallel to the entrance and exit.
33. The system of claim 27, further comprising: a target detection unit connected with the image acquisition unit;
the image acquisition unit is used for sending the acquired image related to the article taking and placing time to the target detection unit;
the object detection unit is used for carrying out object detection on the image related to the object taking and placing time to obtain local image data of an area where the object is located and coordinates of the local image data, and sending the local image data of the area where the object is located and the coordinates of the local image data to the object detection unit as object image data;
the article detection unit is used for determining article taking and placing information based on the information of the taking and placing area, the weight data, the local image data of the area where the article is located and the coordinates of the local image data.
34. The system according to any one of claims 27-31 and 33, further comprising: a communication unit;
the triggering unit, the image acquisition unit, the weight acquisition unit and the communication unit are arranged in an article taking and placing cabinet, and the article detection unit is arranged in a cloud.
CN201910493006.XA 2019-06-06 2019-06-06 Article detection method, device and system Pending CN112052708A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910493006.XA CN112052708A (en) 2019-06-06 2019-06-06 Article detection method, device and system

Publications (1)

Publication Number Publication Date
CN112052708A (en) 2020-12-08

Family

ID=73608719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910493006.XA Pending CN112052708A (en) 2019-06-06 2019-06-06 Article detection method, device and system

Country Status (1)

Country Link
CN (1) CN112052708A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113850960A (en) * 2021-09-07 2021-12-28 深圳市智莱科技股份有限公司 Article detection device and method

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103500335A (en) * 2013-09-09 2014-01-08 华南理工大学 Photo shooting and browsing method and photo shooting and browsing device based on gesture recognition
CN103971131A (en) * 2014-05-13 2014-08-06 华为技术有限公司 Preset facial expression recognition method and device
CN104202560A (en) * 2014-08-14 2014-12-10 胡月明 Image recognition based video monitoring system and method
US20170372159A1 (en) * 2016-06-22 2017-12-28 United States Postal Service Item tracking using a dynamic region of interest
CN107915102A (en) * 2017-11-02 2018-04-17 浙江新再灵科技股份有限公司 A kind of elevator based on video analysis blocks the detecting system and detection method of a behavior
CN108320379A (en) * 2018-02-28 2018-07-24 成都果小美网络科技有限公司 Good selling method, device and the self-service machine compared based on image
CN108335408A (en) * 2018-03-02 2018-07-27 北京京东尚科信息技术有限公司 For the item identification method of automatic vending machine, device, system and storage medium
CN108885813A (en) * 2018-06-06 2018-11-23 深圳前海达闼云端智能科技有限公司 Intelligent sales counter, article identification method, apparatus, server and storage medium
CN109360331A (en) * 2017-12-29 2019-02-19 广州Tcl智能家居科技有限公司 A kind of automatic vending method and automatic vending machine based on article identification
CN109711337A (en) * 2018-12-26 2019-05-03 苏州浪潮智能软件有限公司 A method of realizing object using Background matching, whether there is or not detections
CN109767557A (en) * 2018-12-29 2019-05-17 合肥美的智能科技有限公司 Container system

Similar Documents

Publication Publication Date Title
US11798190B2 (en) Position and pose determining method, apparatus, smart device, and storage medium
CN108682038B (en) Pose determination method, pose determination device and storage medium
CN109859102B (en) Special effect display method, device, terminal and storage medium
CN111444887A (en) Mask wearing detection method and device, storage medium and electronic equipment
CN110839128B (en) Photographing behavior detection method and device and storage medium
CN111447389B (en) Video generation method, device, terminal and storage medium
CN109886208B (en) Object detection method and device, computer equipment and storage medium
CN110166691A (en) A kind of image pickup method and terminal device
CN110874905A (en) Monitoring method and device
CN112749613A (en) Video data processing method and device, computer equipment and storage medium
CN111754386A (en) Image area shielding method, device, equipment and storage medium
CN109241832A (en) A kind of method and terminal device of face In vivo detection
CN110827195A (en) Virtual article adding method and device, electronic equipment and storage medium
CN111586279B (en) Method, device and equipment for determining shooting state and storage medium
CN112396076A (en) License plate image generation method and device and computer storage medium
CN112052701B (en) Article taking and placing detection system, method and device
CN111127541A (en) Vehicle size determination method and device and storage medium
CN112749590B (en) Object detection method, device, computer equipment and computer readable storage medium
CN111241869B (en) Material checking method and device and computer readable storage medium
CN111931712A (en) Face recognition method and device, snapshot machine and system
CN112052708A (en) Article detection method, device and system
CN112991439A (en) Method, apparatus, electronic device, and medium for positioning target object
CN111860064A (en) Target detection method, device and equipment based on video and storage medium
CN111754564A (en) Video display method, device, equipment and storage medium
CN112541940B (en) Article detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination