CN116384912A - Contactless unmanned article vending method, device and storage medium - Google Patents
Contactless unmanned article vending method, device and storage medium
- Publication number
- CN116384912A (application CN202310257049.4A)
- Authority
- CN
- China
- Prior art keywords
- target
- grabbing
- target object
- image
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/103—Workflow collaboration or project management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a contactless unmanned article vending method, device and storage medium, relating to the field of article vending. The method comprises: receiving and parsing an order task to obtain target item information, the target item information comprising target item type information and target item weight information; sending an identification instruction and a grab-and-store instruction, wherein the identification instruction is used to identify the target item, and the grab-and-store instruction is used to grab the corresponding weight of the target item and place the grabbed item into a storage box; and judging whether the order task is complete: if so, storing the order task; if not, repeatedly sending the identification instruction and the grab-and-store instruction until the order task is complete. The entire process requires no manual participation, realizes fully automatic fruit vending, reduces the operating cost of fruit shops, and avoids the risk of transmitting or contracting influenza while purchasing fruit.
Description
Technical Field
The invention relates to the technical field of commodity purchasing, and in particular to a contactless unmanned article vending method, device and storage medium.
Background
Most current intelligent vending systems in fruit shops are semi-automatic: in use, a worker must place the fruit in a designated area before the system can automatically identify the fruit type, weigh it and calculate the price, and the worker then handles order settlement and other follow-up work.
The current equipment has the following disadvantages: 1. a worker must assist on site, so fully automatic fruit vending cannot be achieved; 2. during an influenza outbreak, the risk of cross infection between people cannot be reduced, because unmanned fruit vending cannot be realized; 3. the operating cost of the fruit shop cannot be reduced. How to provide a vending method that sells fruit fully automatically without manual intervention, so as to avoid cross infection of influenza between people and reduce the operating cost of fruit shops, is a problem to be solved.
Disclosure of Invention
The invention aims to provide a contactless unmanned article vending method, device and storage medium, so as to solve the problems in the prior art that fruit cannot be sold fully automatically and that the operating cost of fruit shops is high.
To achieve the above purpose, the invention adopts the following technical scheme:
in a first aspect, the invention discloses a contactless unmanned article vending method, comprising the following steps:
receiving and parsing an order task to obtain target item information, the target item information comprising target item type information and target item weight information;
sending an identification instruction and a grab-and-store instruction, wherein the identification instruction is used to identify the target item, and the grab-and-store instruction is used to grab the corresponding weight of the target item and place the grabbed item into a storage box;
judging whether the order task is complete; if so, storing the order task; if not, repeatedly sending the identification instruction and the grab-and-store instruction until the order task is complete.
Further, identifying the target item comprises the following steps:
controlling the robot to move to a first placing position of the article rack, and using an image recognition algorithm to identify whether the item at the first placing position is the target item;
if the item at the first placing position is not the target item, continuing to move the robot to the next placing position until the target item is identified.
Further, using the image recognition algorithm to identify whether the item at the first placing position is the target item comprises:
converting the captured image of the item at the first placing position into a grayscale image; setting a threshold and binarizing the grayscale image;
applying several rounds of dilation filtering to the binarized image;
applying Canny contour extraction to the dilated image to obtain an HSV image of the item at the first placing position;
processing the HSV image with a real-time classification algorithm to obtain the type of the item at the first placing position;
and comparing that type with the target item information to judge whether the item at the first placing position is the target item.
Further, grabbing the corresponding weight of the target item comprises the following steps:
acquiring the net weight of the robot's mechanical arm;
controlling the mechanical arm to grab the target item and, once the electronic scale is in a stable state, calculating the grabbed weight of the target item by fitting a weight curve, the electronic scale being considered stable when consecutive arm weight readings fall within a preset range;
and repeating until the grabbed weight and the target item weight information satisfy a preset condition.
Further, the method further comprises the following steps:
driving the robot around the space and constructing an environment map with the Gmapping mapping algorithm;
planning the robot's motion path with the Dijkstra global path planning algorithm and the TEB local path planning algorithm;
and driving the robot along the planned path both to identify the target item and to place the grabbed target item into the storage box.
Further, storing the order task comprises: writing the order task into several sectors of the security chip, and judging the order task to be successfully stored when the data in all of those sectors are identical.
In a second aspect, the invention discloses a contactless article vending device comprising a host computer and an industrial Pi board in communication with each other, wherein the host computer is used to send order tasks;
the industrial Pi is used to receive and parse the order task to obtain target item information, the target item information comprising target item type information and target item weight information;
the industrial Pi is further used to send an identification instruction and a grab-and-store instruction, wherein the identification instruction is used to identify the target item, and the grab-and-store instruction is used to grab the corresponding weight of the target item and place it into a storage box;
and the industrial Pi is further used to judge whether the order task is complete; if so, it stores the order task; if not, it repeatedly sends the identification instruction and the grab-and-store instruction until the order task is complete.
Further, the industrial Pi comprises an image recognition module used to:
convert the captured image of the item at the first placing position into a grayscale image; set a threshold and binarize the grayscale image;
apply several rounds of dilation filtering to the binarized image;
apply Canny contour extraction to the dilated image to obtain an HSV image of the item at the first placing position;
process the HSV image with a real-time classification algorithm to obtain the type of the item at the first placing position;
and compare that type with the target item information to judge whether the item at the first placing position is the target item.
Further, the image recognition module further comprises a DSP acceleration unit;
the DSP acceleration unit is used to receive the start address and length of a physical buffer, the physical buffer storing the binarized image compressed according to a byte-to-pixel correspondence;
and the DSP acceleration unit is further used to write the data back to the same physical buffer after applying several rounds of dilation filtering to the image.
In a third aspect, the invention discloses a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the contactless article vending method of any of the preceding aspects.
According to the above technical scheme, the invention has the following beneficial effects: the method receives and parses an order task to obtain target item information, sends an identification instruction and a grab-and-store instruction to identify the target item, grabs the corresponding weight of it and places it into the storage box, and judges whether the order task is complete, storing the task if so and repeatedly sending the two instructions until completion if not. The whole process requires no manual participation; fruit of the required type and weight is grabbed automatically according to the order task, realizing fully automatic fruit vending, reducing the operating cost of fruit shops, and avoiding the risk of transmitting or contracting influenza while purchasing fruit.
Drawings
FIG. 1 is a schematic diagram of the overall flow of the vending method of the present invention;
FIG. 2 is a flow chart of the present invention for identifying a target item;
FIG. 3 is a flow chart of the present invention for identifying a target item using an image recognition algorithm;
FIG. 4 is a schematic diagram of the decision tree model of the present invention;
FIG. 5 is a flow chart of the present invention capturing a corresponding weight target item;
FIG. 6 is a flow chart of the path planning of the present invention;
FIG. 7 is a schematic view of a robot positioning of the present invention;
FIG. 8 is a schematic diagram of the overall layout of the vending device of the present invention;
FIG. 9 is a schematic diagram of the communication connection between the industrial Pi and the host computer according to the present invention;
FIG. 10 is a schematic view of a robot according to the present invention;
FIG. 11 is a schematic diagram of a security chip according to the present invention;
FIG. 12 is a schematic diagram of an interface of a host computer according to the present invention.
Detailed Description
The invention is further described below with reference to specific embodiments, so that the technical means, creative features, objectives and effects of the invention are easy to understand.
As shown in fig. 1 to 7, the invention discloses a contactless unmanned article vending method, comprising the following steps.
Step 1: receive and parse an order task to obtain target item information; the target item information includes target item type information and target item weight information.
Step 2: send an identification instruction and a grab-and-store instruction, where the identification instruction is used to identify the target item and the grab-and-store instruction is used to grab the corresponding weight of the target item and place the grabbed item into the storage box.
Step 3: judge whether the order task is complete; if so, store the order task; if not, repeatedly send the identification instruction and the grab-and-store instruction until the order task is complete.
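Steps 1 to 3 amount to a simple control loop. The following is a minimal sketch under stated assumptions: `OrderTask`, `process_order`, `identify`, and `grab_store` are hypothetical names standing in for the patent's actual instruction interfaces.

```python
from dataclasses import dataclass

@dataclass
class OrderTask:
    item_type: str        # target item type information
    target_weight: float  # target item weight information, in grams

def process_order(task: OrderTask, identify, grab_store) -> float:
    """Repeat identify + grab-and-store until the ordered weight is reached."""
    grabbed = 0.0
    while grabbed < task.target_weight:
        identify(task.item_type)               # identification instruction
        grabbed += grab_store(task.item_type)  # grab-and-store instruction
    return grabbed  # order complete; the caller then stores the order task
```

In the real device these callbacks would drive the robot and the mechanical arm; here they are pure functions so the loop logic can be exercised in isolation.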
By receiving and parsing the order task to obtain the target item information, sending the identification and grab-and-store instructions, grabbing the corresponding weight of the identified target item and placing it into the storage box, and repeating the two instructions until the order task is complete, the method requires no manual participation in the whole process: fruit of the required type and weight is grabbed automatically according to the order task, fully automatic fruit vending is realized, the operating cost of the fruit shop is reduced, and the risk of spreading or contracting influenza while purchasing fruit is avoided.
In this application, the items sold may be fruit or other kinds of merchandise, such as roasted seeds and nuts, or vegetables.
In some embodiments, the method further comprises path planning. Path planning ensures that the robot can identify items and place grabbed items into the storage box without colliding with obstacles; it also optimizes the robot's walking route, shortening travel time and improving recognition and grabbing efficiency.
The path planning process comprises: driving the robot around the space and constructing an environment map with the Gmapping mapping algorithm; then planning the robot's motion path with the Dijkstra global path planning algorithm and the TEB local path planning algorithm.
Specifically, the robot must navigate and avoid obstacles inside the fruit shop, and therefore needs a map of its surroundings in advance. The invention constructs the environment map with the Gmapping mapping algorithm while the robot is driven manually. Once the map is built, the system uses the Dijkstra global path planning algorithm and the TEB local path planning algorithm to realize path planning, navigation, obstacle avoidance, and related functions. Dijkstra searches the known map for the shortest path between the robot and the target point, while the TEB local planner acquires the robot's local environment in real time along that path, giving the robot good obstacle avoidance capability.
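As a hedged illustration of the global-planning role described above, here is a toy Dijkstra search on a 4-connected occupancy grid. The actual system runs the ROS Dijkstra global planner on the Gmapping map, not this sketch.

```python
import heapq

def dijkstra(grid, start, goal):
    """grid: rows of 0 (free) / 1 (obstacle); returns shortest path length or -1."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return -1  # goal unreachable
```

With unit edge costs this behaves like breadth-first search, but the same structure accepts weighted costs, which is why the global planner uses it.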
The robot's pose is obtained mainly by fusing IMU and odometer data with AMCL particle-filter localization, but as the robot's travel distance and trip count grow, drift in the pose estimate is unavoidable. The position is therefore corrected with accurate lidar ranging: the lidar measures the distance from the robot to the walls in four directions, the robot's exact position and heading in the environment are computed from those distances, and the current pose estimate is updated accordingly. The high-precision positioning of the robot is shown in fig. 7.
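A minimal sketch of this wall-distance correction, under the simplifying assumption of a rectangular room of known width and depth with the robot's axes aligned to the walls; the function name and tolerance are illustrative, not from the patent.

```python
def correct_position(d_left, d_right, d_back, d_front, room_w, room_h, tol=0.05):
    """Return (x, y) from the four wall distances, or None if inconsistent.

    The origin is taken at the back-left corner of the room; opposite
    distances must sum to the room dimension within `tol` meters.
    """
    if abs((d_left + d_right) - room_w) > tol:
        return None  # readings do not add up to the room width
    if abs((d_back + d_front) - room_h) > tol:
        return None  # readings do not add up to the room depth
    return (d_left, d_back)
```

The consistency check is what lets the corrected pose safely overwrite the drifting fused estimate: inconsistent readings (e.g. an obstacle between the robot and a wall) are rejected rather than applied.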
After path planning, the target item is identified along the planned path, as shown in fig. 2 and fig. 3. The process is as follows: control the robot to move to the first placing position of the article rack and use the image recognition algorithm to identify whether the item there is the target item; if it is not, continue moving the robot to the next placing position until the target item is identified.
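The shelf-scan loop above can be sketched as follows; `find_target` and `recognize` are hypothetical names, with recognition reduced to a callback so the control flow stands alone.

```python
def find_target(positions, recognize, target_type):
    """Visit placing positions in order; stop at the first that holds the target."""
    for pos in positions:
        if recognize(pos) == target_type:  # run image recognition at this position
            return pos
    return None  # target item not found on the rack
```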
Further, using the image recognition algorithm to identify whether the item at a placing position is the target item includes the following steps:
converting the captured image of the item into a grayscale image; setting a threshold and binarizing the grayscale image; applying several rounds of dilation filtering to the binarized image; applying Canny contour extraction to the dilated image to obtain an HSV image of the item; processing the HSV image with a real-time classification algorithm to obtain the item's type; and comparing that type with the target item information to judge whether the item is the target item.
For image recognition, the invention uses a traditional OpenCV pipeline together with a decision tree model. The image captured by the camera is first converted to grayscale and binarized with a suitable threshold. The binarized image is then dilated several times to filter out noise that would interfere with recognition. Canny filtering is used for contour extraction; the extracted contours are screened so that only the contour of the current fruit is kept, from which the fruit's HSV image is obtained. The fruit's features (the values of its HSV image in each channel) are then extracted and fed into the real-time classification algorithm to determine the fruit type.
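The early stages of this pipeline can be sketched in plain NumPy so each step is explicit. The patent's implementation uses OpenCV (`cv2.threshold`, `cv2.dilate`, `cv2.Canny`); the Canny/contour stage is omitted here, and the mask is assumed to already isolate the fruit.

```python
import colorsys

import numpy as np

def to_gray(rgb):
    """Luminance conversion of an HxWx3 uint8 image."""
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def binarize(gray, thresh=128):
    """Fixed-threshold binarization: 1 = foreground, 0 = background."""
    return (gray > thresh).astype(np.uint8)

def dilate(binary, iterations=2):
    """3x3 dilation via shifted maxima, closing small noise holes."""
    out = binary
    for _ in range(iterations):
        padded = np.pad(out, 1)
        out = np.max([padded[i:i + out.shape[0], j:j + out.shape[1]]
                      for i in range(3) for j in range(3)], axis=0)
    return out

def hsv_mean(rgb, mask):
    """Mean H, S, V over the object mask: the features fed to the classifier."""
    pix = rgb[mask.astype(bool)] / 255.0
    return np.array([colorsys.rgb_to_hsv(*p) for p in pix]).mean(axis=0)
```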
The real-time classification algorithm is a decision tree, an inductive machine learning method: a tree model is built from known training samples, and the extracted fruit features are fed into the tree for prediction. The decision tree model is shown in fig. 4.
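As an illustrative stand-in for the trained decision tree, here is a hand-built two-level tree over mean HSV features (hue in [0, 1)). The thresholds and fruit classes are invented for the example; the patent builds the real tree from labeled training samples.

```python
def classify_fruit(hue, sat, val):
    """Toy decision tree over mean HSV features of the segmented fruit."""
    if sat < 0.2:                # washed-out color: treat as a pale fruit
        return "pear"
    if hue < 0.06 or hue > 0.9:  # red hues wrap around the ends of the range
        return "apple" if val > 0.5 else "litchi"
    if hue < 0.2:                # orange-yellow band
        return "orange"
    return "unknown"
```

A learned tree has exactly this shape, a nest of threshold tests on feature values, which is why it can run in real time on the embedded board.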
The invention also uses a DSP acceleration unit to hardware-accelerate the dilation operation, the most time-consuming part of the OpenCV image processing. ARM-to-DSP communication uses the MessageQ message queue mechanism. To reduce the data volume exchanged between the ARM core and the DSP core, the byte-to-pixel correspondence is compressed from one byte per pixel to one byte per eight pixels, reducing the time spent on inter-core communication. Hardware acceleration with the DSP core proceeds as follows:
1) The ARM side captures a frame with OpenCV, converts it to grayscale and binarizes it, compresses the byte-to-pixel correspondence, stores the compressed image data in a physical buffer, and sends the buffer's start address and length to the DSP via MessageQ.
2) The DSP side receives the buffer's start address and length via MessageQ, reads the image data, dilates the image several times, writes the result back to the same buffer, and sends a MessageQ message back to the ARM.
3) The ARM side receives the buffer's start address and length via MessageQ, reads the data, expands the storage layout back to one byte per pixel, and then performs the subsequent Canny edge extraction, image feature extraction, and other operations.
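The byte-to-pixel compression in steps 1 and 3 can be sketched with NumPy's bit packing; this is only a host-side illustration of the layout, not the ARM/DSP code, and the function names are assumptions.

```python
import numpy as np

def pack_pixels(binary_image):
    """HxW array of 0/1 -> flat uint8 buffer, eight pixels per byte."""
    return np.packbits(binary_image.astype(np.uint8))

def unpack_pixels(buffer, shape):
    """Inverse: restore the one-byte-per-pixel layout for Canny and features."""
    flat = np.unpackbits(buffer)[: shape[0] * shape[1]]
    return flat.reshape(shape)
```

Packing cuts the buffer sent over MessageQ to one eighth of its original size, which is exactly the communication saving the text describes.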
Once the target fruit has been identified, the corresponding weight of it must be grabbed. This specifically includes: acquiring the net weight of the robot's mechanical arm; controlling the arm to grab the target item and, when the electronic scale is in a stable state, calculating the grabbed weight by fitting a weight curve (the scale is considered stable when consecutive arm weight readings fall within a preset range); and repeating until the grabbed weight and the target weight information satisfy the preset condition.
The preset condition on the grabbed weight and the target weight can be set according to the fruit type; in general it is satisfied when the weight difference is less than 15%. If the tolerance is set too small, heavier fruit such as apples and pears cannot be grabbed to within it; if it is set too large, grabbing small fruit such as litchis and strawberries incurs a large error.
Specifically, as shown in fig. 5, the mechanical arm weighing process is as follows. Before grabbing, the arm's net weight is read and stored. After grabbing, the system waits for the electronic scale to stabilize: the scale is read every 0.5 seconds, and it is considered stable when 10 consecutive readings differ by less than 0.1 gram. Once the reading is stable, the weight of the current fruit is calculated from the reading difference and the fitted weight curve. If that weight exceeds a threshold (3 g), it is output, and grabbing continues until fruit of the required weight has been taken; otherwise the change is attributed to shaking of the machine body and the arm's net weight is updated.
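The weighing logic above can be sketched as follows. This is a simplification under stated assumptions: the fitted weight curve is reduced to a plain subtraction, the 0.5-second timing loop is omitted, and all function names are illustrative.

```python
def is_stable(readings, window=10, tol=0.1):
    """Scale is stable once `window` consecutive readings span less than `tol` g."""
    if len(readings) < window:
        return False
    last = readings[-window:]
    return max(last) - min(last) < tol

def fruit_weight(stable_reading, arm_net_weight, min_valid=3.0):
    """Grabbed fruit weight, or None if under the 3 g threshold (treated as
    body shake; the caller should then update the stored arm net weight)."""
    w = stable_reading - arm_net_weight
    return w if w > min_valid else None

def meets_target(grabbed, target, tol=0.15):
    """Preset condition: grabbed weight within 15% of the ordered weight."""
    return abs(grabbed - target) / target < tol
```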
After the order task is completed, it is stored as follows: the order task is written into several sectors of the security chip, and storage is judged successful when the data in all of those sectors are identical.
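The redundant-sector scheme can be sketched with an in-memory stub; `SecurityChipStub` is a stand-in for the real security chip, whose bus interface the patent does not detail.

```python
class SecurityChipStub:
    """In-memory stand-in for the security chip's redundant order storage."""

    def __init__(self, n_sectors=3):
        self.sectors = [b""] * n_sectors

    def write_order(self, record: bytes) -> bool:
        for i in range(len(self.sectors)):
            self.sectors[i] = record  # one copy per sector
        return self.verify()

    def verify(self) -> bool:
        # storage counts as successful only if every sector reads back identical
        return all(s == self.sectors[0] for s in self.sectors)
```

Holding several copies and requiring them to agree lets a later read detect corruption of any single sector, which is the point of spreading the order across sectors.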
Embodiment 2
Based on the same inventive concept as Embodiment 1, this embodiment provides a contactless article vending device, as shown in fig. 8, comprising a host computer and an industrial Pi board in communication with each other, the industrial Pi being mounted on a robot. The host computer sends order tasks; the industrial Pi receives and parses each order task to obtain target item information, comprising target item type information and target item weight information; the industrial Pi also sends the identification instruction for identifying the target item and the grab-and-store instruction for grabbing the corresponding weight of the target item and placing it into the storage box; and the industrial Pi judges whether the order task is complete, storing it if so, and repeatedly sending the identification and grab-and-store instructions until it is complete if not.
The operating environment of the system device is a closed environment, as shown in fig. 8. Fruits are orderly placed on the fruit rack (the articles A, B, C, D, E, F and G can be used for placing fruits), the robot is placed at the initial position, and the upper computer and the fruit basket are placed at the selling window. The main hardware modules of the system are all concentrated on the robot, and comprise a wheat wheel trolley chassis, an AM5708 industrial pie, a laser radar, a camera, a mechanical arm, a pressure sensor and the like. The upper computer is an independent development part of the vending system.
In this application, the fruit positions need not follow fig. 8 exactly; multiple rows and columns may be arranged in a store. The arrangement of those rows and columns must be orderly, however, and the fruit placed at each position is not fixed.
In some further embodiments, the industrial Pi includes an image recognition module that converts the acquired image of the item on the first storage level into a grayscale image; sets a threshold and binarizes the grayscale image; applies dilation filtering to the binarized image a plurality of times; processes the dilated image with Canny contour extraction to obtain an HSV image of the item on the first storage level; processes the HSV image with a real-time classification algorithm to obtain the type information of the item on the first storage level; and compares that type information with the target article information to judge whether the item on the first storage level is the target article.
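The thresholding and dilation stages of this pipeline can be sketched in plain NumPy (the 3x3 kernel, threshold value and iteration count are illustrative assumptions, not values from the patent; in the actual system these steps correspond to OpenCV's threshold and dilate operations):

```python
import numpy as np

def binarize(gray, threshold=128):
    """Fixed-threshold binarization of a grayscale image (uint8)."""
    return np.where(gray > threshold, 255, 0).astype(np.uint8)

def dilate(binary, iterations=3):
    """Morphological dilation with a 3x3 all-ones structuring element,
    repeated `iterations` times — the step the patent offloads to the DSP."""
    img = binary.copy()
    h, w = img.shape
    for _ in range(iterations):
        padded = np.pad(img, 1)  # zero border so edges dilate correctly
        out = np.zeros_like(img)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                # take the maximum over each 3x3 neighbourhood
                out = np.maximum(out, padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
        img = out
    return img
```

A single bright pixel grows into a 3x3 block after one iteration, which is what makes repeated dilation the most expensive step for large frames.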
Specifically, the system mainly comprises an upper computer monitoring module, an image recognition module, a mechanical arm grabbing and weighing module, and a map building navigation module.
The upper computer monitoring module is used for man-machine interaction and mainly provides order issuing and settlement, image monitoring, robot running state display and other functions.
The image recognition module consists of OpenCV image processing, a decision-tree classification algorithm and a DSP acceleration unit.
The mechanical arm grabbing and weighing module couples a flexible mechanical arm with a planar membrane pressure sensor, so that the weight of the fruit can be measured directly after grasping.
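The stable-state check and tare subtraction described for this module (and formalized in claim 4) might look as follows; the window size, tolerance and function names here are hypothetical illustrations, not taken from the patent:

```python
def stable_weight(readings, window=5, tolerance=2.0):
    """Hypothetical stability check: the scale is considered stable once
    the last `window` readings all lie within `tolerance` grams of each
    other, in which case their mean is returned; otherwise None."""
    if len(readings) < window:
        return None
    recent = readings[-window:]
    if max(recent) - min(recent) <= tolerance:
        return sum(recent) / window
    return None

def grabbed_weight(readings, arm_tare):
    """Grabbed weight = stable scale reading minus the arm's own (net) weight."""
    w = stable_weight(readings)
    return None if w is None else w - arm_tare
```

The design point is simply that weight is only read once consecutive samples agree, so vibration from the grab itself never enters the measurement.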
The map building navigation module senses the surrounding environment and builds the map using a lidar and the Gmapping mapping algorithm, and realizes path planning, autonomous navigation, obstacle avoidance and other functions of the fruit vending robot using the Dijkstra global path planning algorithm and the TEB local path planning algorithm. A system scheme block diagram is shown in fig. 3.
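As one illustration of the global-planning step, here is a minimal Dijkstra search over a 4-connected occupancy grid (the grid representation and unit-cost model are assumptions for the sketch; the real system plans on the Gmapping-built map and refines the route with the TEB local planner):

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on an occupancy grid (0 = free, 1 = obstacle),
    4-connected moves of unit cost. Returns the path as a list of
    (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    if goal not in dist:
        return None
    path, cur = [goal], goal
    while cur != start:
        cur = prev[cur]
        path.append(cur)
    return path[::-1]
```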
The operation flow of the system is as follows. The user selects the fruit to purchase and places an order at the selling window via the upper computer. After receiving and analyzing the order information, the robot moves to the first fruit position on the rack and begins identification. If the current fruit appears in the order, the mechanical arm grabs and weighs it; otherwise the robot moves to the next fruit position to identify. After grabbing and weighing are completed, the fruit is placed into the fruit basket, and the system checks whether all ordered fruit has been collected. Once collection is complete, the order information is stored in the TI security chip and the robot returns to the initial position to await the next order; otherwise the robot moves to the next fruit position and continues executing the order.
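The flow above can be condensed into a control-loop sketch; `identify`, `grab_and_weigh` and `store_order` are hypothetical callbacks standing in for the vision, arm and secure-storage subsystems, and are not names from the patent:

```python
def fulfil_order(order, shelf_slots, identify, grab_and_weigh, store_order):
    """Walk the rack slot by slot; grab fruits named in the order until
    every requested weight is satisfied, then store the order.
    `order` maps fruit name -> weight required."""
    remaining = dict(order)
    for slot in shelf_slots:
        if not remaining:
            break  # order already satisfied
        fruit = identify(slot)
        if fruit in remaining:
            remaining[fruit] -= grab_and_weigh(slot, remaining[fruit])
            if remaining[fruit] <= 0:
                del remaining[fruit]
    if not remaining:
        store_order(order)  # persist only completed orders
    return not remaining
```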
After the image recognition part is completed, the invention further uses the DSP acceleration unit to hardware-accelerate the dilation operation, the most time-consuming step of the OpenCV image processing. ARM-to-DSP communication uses the MessageQ message queue mechanism. To reduce the volume of data exchanged between the ARM core and the DSP core, the invention compresses the byte-to-pixel mapping from one byte per pixel to one byte per eight pixels, cutting the time spent on inter-core communication. The hardware acceleration procedure on the DSP core is as follows:
1) The ARM side captures a frame through OpenCV, converts it to grayscale and binarizes it, compresses the byte-to-pixel mapping, stores the compressed image data in a physical buffer, and sends the first address and length of that buffer to the DSP via MessageQ.
2) The DSP side receives the first address and length of the physical buffer via MessageQ, reads the image data from the buffer, dilates the image a plurality of times, writes the result back to the same buffer, and then sends a MessageQ message to the ARM.
3) The ARM side receives the first address and length of the physical buffer via MessageQ, reads the data from the buffer, restores the storage layout to one byte per pixel, and then performs the subsequent Canny boundary extraction, image feature extraction and other operations.
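The byte-to-pixel compression in steps 1) and 3) amounts to bit-packing a binary image, which NumPy can express directly (a sketch assuming a 0/255 binary image; the real transfer additionally carries the buffer address and length over MessageQ):

```python
import numpy as np

def pack_binary_image(binary):
    """Compress a binarized image (uint8, values 0/255) so that one byte
    carries eight pixels — an 8x reduction in inter-core traffic."""
    bits = (binary.ravel() > 0).astype(np.uint8)
    return np.packbits(bits), binary.shape

def unpack_binary_image(packed, shape):
    """Restore the one-byte-per-pixel layout on the receiving side."""
    bits = np.unpackbits(packed)[: shape[0] * shape[1]]
    return (bits * 255).astype(np.uint8).reshape(shape)
```

The round trip is lossless for a strictly binary image, which is why the compression is applied only after the thresholding step.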
In order to improve the integration level of the whole system, the invention designs a hardware structure coupling the flexible mechanical arm with the planar membrane pressure sensor, so that the mechanical arm can measure the weight of an object directly after grasping. The weighing hardware structure of the mechanical arm is shown in fig. 10.
To protect user data, the invention uses the security features of the TI security chip RM57 for self-checking and protection. The RM57 provides dual-core lockstep CPUs, real-time memory error-correcting code (ECC) diagnostics, hardware-based CPU logic built-in self-test (LBIST) and SRAM programmable built-in self-test (PBIST). The chip can also check whether the on-chip system or software has been tampered with, resisting attacks. In addition, the invention uses a multi-sector storage mechanism: the same data is written into three FLASH sectors of the security chip RM57, and the data is considered correct only when all three sectors agree, further safeguarding business privacy and data security. The security features of the TI security chip RM57 are shown in fig. 11; the multi-sector storage mechanism is shown in table 1.
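A minimal sketch of the multi-sector scheme, with a Python dict standing in for the RM57's FLASH sectors (the real implementation uses hardware-specific FLASH driver calls, which are not shown here):

```python
def write_multisector(sectors, record, copies=3):
    """Write the same record into `copies` sectors, as the patent does
    with three FLASH sectors of the RM57."""
    for i in range(copies):
        sectors[i] = bytes(record)

def read_multisector(sectors, copies=3):
    """Return the record only if every sector copy agrees; a mismatch
    means the storage is treated as failed or tampered with."""
    values = [sectors.get(i) for i in range(copies)]
    if all(v is not None and v == values[0] for v in values):
        return values[0]
    return None
```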
To ensure a good user experience and let the merchant monitor the robot conveniently, the invention develops a QT-ROS-based upper computer interface. The upper computer mainly comprises a fruit purchase interface and a robot running state monitoring interface. The fruit purchase interface displays the fruits on sale, the unit price of each fruit, and the fruit currently being grasped; the running state interface displays the global image, the recognition image, the robot's current running state and other information. The upper computer interface is shown in fig. 12.
Example 3
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be appreciated by those skilled in the art that the present invention can be carried out in other embodiments without departing from the spirit or essential characteristics thereof. Accordingly, the above disclosed embodiments are illustrative in all respects, and not exclusive. All changes that come within the scope of the invention or equivalents thereto are intended to be embraced therein.
Claims (10)
1. A method of vending an item in a contact-less manner, comprising:
receiving and analyzing order tasks to obtain target article information; the target article information comprises target article type information and target article weight information;
sending an identification instruction and a grabbing and storing instruction, wherein the identification instruction is used for identifying a target article, and the grabbing and storing instruction is used for grabbing the target article of the corresponding weight and placing the grabbed target article into a storage box;
judging whether an order task is completed or not, if yes, storing the order task; if not, repeatedly sending the identification instruction and the grabbing and storing instruction until the order task is completed.
2. The method of claim 1, wherein identifying the target item comprises the steps of:
controlling the robot to move to a first storage level of the article rack, and identifying whether an article on the first storage level is a target article or not by adopting an image identification algorithm;
if the object on the first storage level is not the target object, continuing to move the robot to the next storage level until the target object is identified.
3. The method of claim 2, wherein identifying whether the item on the first storage level is the target item using an image recognition algorithm comprises:
converting the acquired image of the item on the first storage level into a grayscale image; setting a threshold and binarizing the grayscale image;
performing dilation filtering on the binarized image a plurality of times;
processing the dilated image with Canny contour extraction to obtain an HSV image of the item on the first storage level;
processing the HSV image with a real-time classification algorithm to obtain the type information of the item on the first storage level; and
comparing the type information of the item on the first storage level with the target article information to judge whether the item on the first storage level is the target article.
4. The method of claim 1, wherein the grasping the target item of the corresponding weight comprises the steps of:
acquiring the net weight of a mechanical arm of the robot;
controlling the mechanical arm to complete grabbing of the target article, and, when the electronic scale is in a stable state, calculating the grabbed weight of the target article by fitting a weight curve of the target article; wherein the electronic scale is in a stable state when consecutive weight readings of the mechanical arm remain within a preset range;
and completing grabbing the target object with corresponding weight until the grabbing weight and the target object weight information meet preset conditions.
5. The unmanned contactless article vending method of claim 1, further comprising:
driving the robot to travel and constructing an environment map using the Gmapping mapping algorithm;
planning a motion path of the robot by adopting a Dijkstra global path planning algorithm and a TEB local path planning algorithm;
and driving the robot to move according to the motion path to identify the target object, and driving the robot to move according to the motion path to place the grabbed target object into the storage box.
6. The method of claim 1, wherein storing the order tasks comprises: and writing the order task into a plurality of sectors in the security chip respectively, and judging that the order task is successfully stored when the data of the plurality of sectors are the same.
7. An unmanned contactless article vending device, characterized by comprising an upper computer and an industrial Pi in communication connection, wherein the upper computer is used for sending order tasks;
the industrial Pi is used for receiving and analyzing an order task to obtain target article information; the target article information comprises target article type information and target article weight information; the industrial Pi is further used for
sending an identification instruction and a grabbing and storing instruction, wherein the identification instruction is used for identifying a target article, and the grabbing and storing instruction is used for grabbing the target article of the corresponding weight and placing the grabbed target article into a storage box; the industrial Pi is further used for
judging whether the order task is completed, and if so, storing the order task; if not, repeatedly sending the identification instruction and the grabbing and storing instruction until the order task is completed.
8. The unmanned contactless article vending device of claim 7, wherein the industrial Pi comprises an image recognition module configured for
converting the acquired image of the item on the first storage level into a grayscale image; setting a threshold and binarizing the grayscale image;
performing dilation filtering on the binarized image a plurality of times;
processing the dilated image with Canny contour extraction to obtain an HSV image of the item on the first storage level;
processing the HSV image with a real-time classification algorithm to obtain the type information of the item on the first storage level; and
comparing the type information of the item on the first storage level with the target article information to judge whether the item on the first storage level is the target article.
9. The unmanned contactless article vending device of claim 8, wherein the image recognition module further comprises a DSP acceleration unit;
the DSP acceleration unit is used for receiving the first address and length information of a physical buffer, wherein the physical buffer stores the binarized image compressed according to the byte-to-pixel mapping;
the DSP acceleration unit is further used for writing the data back to the same physical buffer after performing dilation filtering on the image a plurality of times.
10. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the contactless article vending method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310257049.4A CN116384912A (en) | 2023-03-16 | 2023-03-16 | Unmanned contact type article selling method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116384912A true CN116384912A (en) | 2023-07-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |