CN114933176A - 3D vision stacking system adopting artificial intelligence - Google Patents


Info

Publication number
CN114933176A
CN114933176A (application CN202210525053.XA)
Authority
CN
China
Prior art keywords
stacking
mechanical arm
image processing
module
palletizing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210525053.XA
Other languages
Chinese (zh)
Inventor
张旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Institute of Economic and Trade Technology
Original Assignee
Jiangsu Institute of Economic and Trade Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Institute of Economic and Trade Technology filed Critical Jiangsu Institute of Economic and Trade Technology
Priority to CN202210525053.XA priority Critical patent/CN114933176A/en
Publication of CN114933176A publication Critical patent/CN114933176A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B65CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65GTRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G61/00Use of pick-up or transfer devices or of manipulators for stacking or de-stacking articles not otherwise provided for
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B65CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65GTRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G43/00Control devices, e.g. for safety, warning or fault-correcting
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B65CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65GTRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G2203/00Indexing code relating to control or detection of the articles or the load carriers during conveying
    • B65G2203/02Control or detection
    • B65G2203/0208Control or detection relating to the transported articles
    • B65G2203/0233Position of the article
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B65CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65GTRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G2203/00Indexing code relating to control or detection of the articles or the load carriers during conveying
    • B65G2203/04Detection means
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Manipulator (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a 3D vision palletizing system adopting artificial intelligence, relating in particular to the field of industrial robot design. The control processing unit comprises a manager instruction input unit, image processing module 1, image processing module 2, a control module, and an operation module. In the 3D vision palletizing system, a monitoring facility is added to monitor the article stacking area and the target stacking area: on one hand this enables scene judgment for the palletizing task, and on the other hand the palletizing work is monitored so that scattered articles are grabbed and stacked again, making the palletizing work more precise.

Description

3D vision palletizing system adopting artificial intelligence
Technical Field
The invention relates to the field of industrial robot design, in particular to a 3D visual stacking system adopting artificial intelligence.
Background
Palletizing operations are unavoidable in manufacturing and transportation. At present, most palletizing is done manually, which is inefficient, costly, and prone to stacking errors; heavier packing boxes can even cause accidents, which is detrimental to safe production. With the transformation and upgrading of the manufacturing industry, the demand for robots to replace human labor keeps growing. As an indispensable link in this shift, machine vision has met an opportunity for rapid development: a machine vision system comprises imaging technology and image processing technology, and can realize detection, identification, measurement, and positioning functions in industrial processes. According to the CCID white paper on the development of China's industrial machine vision industry (2021), the market scale of China's industrial machine vision in 2021 was approximately 25 billion yuan.
The necessity of machine vision lies in widening the application scenarios of robots: if the robot is compared to the human arm, machine vision is equivalent to the human eyes and brain. At this stage, machine vision is in an era of transition from 2D to 3D. Compared with 2D vision, 3D vision has advantages in measurement accuracy, speed, interference resistance, ease of operation, and so on.
Chinese patents CN111360847A and CN111232664A disclose, respectively, a delivery robot and an industrial robot for automatically storing and retrieving materials, using a soft-package unstacking, unloading, and palletizing device and a corresponding method. Both adopt 3D vision acquisition devices, which improves grabbing/placing efficiency and precision; however, both schemes acquire three-dimensional images throughout the whole process, leading to a large computational load and high hardware requirements. If the hardware standard is lowered, the computing speed cannot keep up, which affects execution efficiency.
Chinese patent CN112047113A discloses a 3D vision palletizing system and method based on artificial intelligence technology, comprising a 3D structured-light camera, a conveyor line, a mechanical arm, a moving platform, and a control processing unit. It reads packing information from an article's two-dimensional (QR) code to derive a grabbing path, realizing fast grabbing, and combines article information with knowledge of the palletizing area's position, improving the stability and safety of palletizing to a certain extent. In practical application, however, occlusion, covering, or fouling can make the QR code unscannable, which interferes with the palletizing operation.
In practice, there are often specific requirements on stack size for ease of storage and transportation, and the above inventions provide no management of stack size.
Disclosure of Invention
In order to overcome the defects in the prior art, the embodiment of the invention provides a 3D vision palletizing system adopting artificial intelligence. By combining 2D and 3D imaging, the amount of computation is kept within a reasonable range; through a deep learning algorithm, the system can quickly and accurately identify materials such as cartons and sacks and derive grabbing paths and action instructions. It can meet the requirements of various stacking patterns, automatically generate a stacking pattern according to the size requirement, and perform mixed palletizing as needed, thereby solving the problems described in the background art.
In order to achieve this purpose, the invention provides the following technical scheme: a 3D vision palletizing system adopting artificial intelligence, comprising a monitoring facility, a 3D vision scanning device, a palletizing robot, and a control processing unit. The control processing unit comprises a manager instruction input unit, image processing module 1, image processing module 2, a control module, and an operation module. The modules of the 3D vision palletizing system are connected as follows: the monitoring facility inputs information to image processing module 1; the 3D vision scanning device is connected with the palletizing robot and inputs information to image processing module 2; the image processing modules and the control module input information to the operation module; the control module inputs instructions to the operation module and the palletizing robot; and the manager instruction input unit inputs instructions to the control module.
In a preferred embodiment, the 3D vision scanning device is installed on the palletizing robot and used for shooting 3D images to realize scanning and positioning. The shot 3D images are point cloud information; after noise reduction and feature extraction by the image processing module, the point cloud is transmitted to the operation module, which performs three-dimensional reconstruction on it, and the built-in artificial neural network then analyzes the three-dimensional model.
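The patent does not specify the noise-reduction algorithm; a common choice for point clouds is statistical outlier removal, sketched below as a minimal illustration. The function name, the k-nearest-neighbour approach, and the test cloud are our assumptions, not the patent's method; the cloud is taken to be an N×3 NumPy array.

```python
import numpy as np

def remove_outliers(points: np.ndarray, k: int = 8, std_ratio: float = 2.0) -> np.ndarray:
    """Drop points whose mean distance to their k nearest neighbours is
    more than std_ratio standard deviations above the cloud-wide average."""
    # Full pairwise distance matrix (fine for small clouds; use a KD-tree at scale).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)                            # column 0 is the zero self-distance
    mean_knn = d[:, 1:k + 1].mean(axis=1)     # mean distance to k nearest neighbours
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

# A tight cluster of 50 points plus one far-away noise point.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.01, (50, 3)), [[5.0, 5.0, 5.0]]])
clean = remove_outliers(cloud)
```

Because the stray point sits far from every neighbour, its mean neighbour distance exceeds the threshold and it is filtered out while the cluster survives.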
In a preferred embodiment, the palletizing robot comprises a mechanical arm and a moving platform. The mechanical arm is attached to the moving platform by a threaded connection; the moving platform has an automatic navigation function and is used for locomotion, while the mechanical arm, as the physical embodiment of the system's artificial intelligence, is used for grabbing articles. A clamping tool is installed at the end of the mechanical arm; depending on the application scenario, it can be a vacuum suction cup or a magnetic suction cup, and either type connects well with the moving platform.
In a preferred embodiment, the control processing unit comprises a manager instruction input unit, image processing module 1, image processing module 2, a control module, and an operation module. Manager instructions are executed through a programmable logic controller (PLC); the PLC consists of a CPU, instruction and data memory, input/output interfaces, a power supply, digital-analog conversion, and other functional units, and the manager can at any time load control instructions into the memory through the PLC for storage and execution.
In a preferred embodiment, the operation module builds a palletizing model based on deep learning with a built-in artificial neural network, which fine-tunes the mechanical arm and the gripping device when grabbing and stacking products, generates the stacking pattern, and guides the mechanical arm to place articles at the designated positions.
In a preferred embodiment, the specific palletizing steps based on the 3D visual palletizing system are as follows:
S1, judging the working scene: monitoring images of article stacking area A and target stacking area B are acquired by the monitoring facility and transmitted to the control processing unit. Condition one: articles are stacked in area A; condition two: target area B has free space. If both conditions are met, the working scene is a palletizing scene;
S2, issuing the job task: the control processing unit receives the monitoring information and the manager instruction and sends palletizing job information to the palletizing robot, including article stacking area A, target stacking area B, stack size, and so on;
S3, grabbing articles: the palletizing robot receives the job information and moves from the rest/charging area to article stacking area A. The monitoring camera transmits two-dimensional images to image processing module 1 in the control processing unit; after processing, they are passed to the operation module, which predicts the grabbing position with a grasp model obtained by deep learning and sends action instructions to the mechanical arm. The arm grabs the article as instructed and then moves to target stacking area B;
S4, stacking articles: after the palletizing robot carries an article to target stacking area B, the 3D vision scanning device scans the stacking-area environment and shoots a 3D image, which is transmitted to image processing module 2 in the control processing unit. The algorithm in the operation module calculates the placement target, computes the correct placement position according to the package size, and guides the robot to stack the article with an accurate posture;
S5, checking and resetting: the palletizing robot cannot guarantee one hundred percent accuracy, so articles may scatter or drop during palletizing. Scattered articles are detected through the monitoring equipment, and the palletizing robot executes the grabbing and stacking task again. When the stack is orderly and no scattered articles remain in the palletizing area, the task is regarded as complete, and the robot finally returns to the rest area to charge.
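The control flow of steps S1-S5 can be summarised as a small state machine. The sketch below is illustrative only: the scene conditions are reduced to simple parameters, and all names are ours rather than the patent's.

```python
from enum import Enum, auto

class State(Enum):
    JUDGE_SCENE = auto()   # S1: check both scene conditions
    ISSUE_TASK = auto()    # S2: send job info to the robot
    GRAB = auto()          # S3: grab one article in area A
    STACK = auto()         # S4: place it in area B
    CHECK = auto()         # S5: re-grab scattered items or finish
    DONE = auto()

def palletize(articles_in_a: int, space_in_b: bool) -> list:
    """Run the S1-S5 loop and return the sequence of states visited."""
    trace, state, remaining = [], State.JUDGE_SCENE, articles_in_a
    while state is not State.DONE:
        trace.append(state)
        if state is State.JUDGE_SCENE:
            # Palletizing scene only if area A has articles AND area B has space.
            state = State.ISSUE_TASK if (remaining > 0 and space_in_b) else State.DONE
        elif state is State.ISSUE_TASK:
            state = State.GRAB
        elif state is State.GRAB:
            remaining -= 1
            state = State.STACK
        elif state is State.STACK:
            state = State.CHECK
        elif state is State.CHECK:
            state = State.GRAB if remaining > 0 else State.DONE
    trace.append(State.DONE)
    return trace

trace = palletize(articles_in_a=2, space_in_b=True)
```

With two articles in area A, the loop passes through GRAB/STACK/CHECK twice before terminating in DONE.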
The invention has the technical effects and advantages that:
1. A monitoring facility is added to monitor the article stacking area and the target stacking area; on one hand this enables scene judgment for palletizing tasks, and on the other hand the palletizing work is monitored so that scattered articles are grabbed and stacked again, making the palletizing work more refined;
2. The clamping tool on the palletizing robot's mechanical arm can be a vacuum suction cup or a magnetic suction cup, suiting different application scenarios;
3. The control processing unit comprises a manager instruction input unit, so the manager can at any time load control instructions into memory through a programmable logic controller (PLC) for storage and execution;
4. By adding a stack-size parameter model to the operation module, the requirements of various stacking patterns can be met, and a stacking pattern can be generated automatically according to the size;
5. The palletizing robot is charged by wireless induction, reducing human workload and raising the degree of intelligence.
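Advantage 4 mentions generating a stacking pattern automatically from the size requirement. The patent gives no algorithm for this; the sketch below shows one deliberately naive way such a generator could work for a single layer, trying both box orientations on a rectangular pallet. All names and dimensions are illustrative assumptions.

```python
def layer_pattern(pallet_w: float, pallet_l: float,
                  box_w: float, box_l: float) -> list:
    """Return (x, y) origins for one pallet layer, trying both box
    orientations and keeping whichever fits more boxes."""
    def grid(bw: float, bl: float) -> list:
        nx, ny = int(pallet_w // bw), int(pallet_l // bl)
        return [(i * bw, j * bl) for i in range(nx) for j in range(ny)]
    a = grid(box_w, box_l)   # boxes in original orientation
    b = grid(box_l, box_w)   # boxes rotated 90 degrees
    return a if len(a) >= len(b) else b

# Example: a 1200 x 1000 mm pallet with 400 x 300 mm boxes.
pattern = layer_pattern(1200, 1000, 400, 300)
```

Here the unrotated orientation fits 3 x 3 = 9 boxes per layer versus 4 x 2 = 8 rotated, so the generator returns the nine-box layout. A production system would also consider mixed orientations and interlocking between layers for stack stability.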
Drawings
Fig. 1 is a schematic diagram of a 3D vision palletizing system according to the present invention.
Fig. 2 is a schematic diagram of a 3D point cloud effect according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
This embodiment provides a 3D vision palletizing system adopting artificial intelligence and a method of use, so that the agent can, relying on visual state input, autonomously choose to grab articles and stack them neatly in another area.
The structure of the 3D vision palletizing system in this embodiment is shown in fig. 1. It comprises a monitoring facility, a 3D vision scanning device, a palletizing robot, and a control processing unit; the control processing unit comprises a manager instruction input unit, image processing module 1, image processing module 2, a control module, and an operation module. The modules are connected as follows: the monitoring facility inputs information to image processing module 1; the 3D vision scanning device is connected with the palletizing robot and inputs information to image processing module 2; the image processing modules and the control module input information to the operation module; the control module inputs instructions to the operation module and the palletizing robot; and the manager instruction input unit inputs instructions to the control module.
The monitoring facilities are positioned above article stacking area A and target stacking area B and are used for monitoring both areas.
The 3D vision scanning device is installed on the palletizing robot and used for shooting 3D images to realize scanning and positioning. The shot 3D images are point cloud information; after noise reduction and feature extraction by the image processing module, the point cloud is transmitted to the operation module, which performs three-dimensional reconstruction on it, and the built-in artificial neural network then analyzes the three-dimensional model. The 3D point cloud effect is shown in fig. 2.
The palletizing robot comprises a mechanical arm and a moving platform. The mechanical arm is attached to the moving platform by a threaded connection; the moving platform has an automatic navigation function and is used for locomotion, while the mechanical arm, as the physical embodiment of the system's artificial intelligence, is used for grabbing articles. A clamping tool is installed at the end of the mechanical arm; depending on the application scenario, it can be a vacuum suction cup or a magnetic suction cup, and either type connects well with the moving platform.
The moving platform is an AGV (automated guided vehicle) equipped with an automatic navigation device. It can travel along a specified navigation path, has safety protection and various transfer functions, uses a rechargeable storage battery as its power source, and has its travel path and behavior controlled by the control unit.
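The patent does not describe how the AGV's navigation path is computed; in the simplest case, travel between the rest area and the stacking areas could be planned with a breadth-first search over an occupancy grid. The sketch below is a generic illustration under that assumption, not the navigation method the patent specifies.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = blocked).
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A wall forces the AGV to detour around the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

Real AGVs layer this kind of planner under localization, path smoothing, and the safety-protection functions the embodiment mentions.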
The control processing unit comprises a manager instruction input unit, image processing module 1, image processing module 2, a control module, and an operation module. Manager instructions are executed through a programmable logic controller (PLC); the PLC consists of a CPU, instruction and data memory, input/output interfaces, a power supply, digital-analog conversion, and other functional units, and the manager can at any time load control instructions into the memory through the PLC for storage and execution.
The operation module builds a palletizing model based on deep learning with a built-in artificial neural network, which fine-tunes the mechanical arm and the gripping device when grabbing and stacking products, generates the stacking pattern, and guides the mechanical arm to place articles at the designated positions. The artificial neural network is a convolutional neural network (CNN) used to adjust the mechanical arm and gripping device for grabbing and stacking. The convolutional neural network is constructed by imitating the visual perception mechanism of living organisms and can perform both supervised and unsupervised learning, here learning palletizing grasping, stacking operation, and stack-size design. Parameter sharing of the convolution kernels in the hidden layers and the sparsity of inter-layer connections allow a convolutional neural network to learn grid-like features with a smaller amount of computation.
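The patent names a CNN but gives no architecture. The essential idea it relies on, sliding a shared convolution kernel over an image and reading the strongest response as a candidate grasp position, can be shown in a few lines of NumPy. This is a stand-in for a trained network, not the patent's model; the single hand-written kernel merely plays the role of a learned filter.

```python
import numpy as np

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D cross-correlation: the core shared-weight operation
    of a convolutional layer, written out explicitly."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def grasp_position(img: np.ndarray, kernel: np.ndarray) -> tuple:
    """Location of the strongest filter response, read as a grasp-point
    prediction (the argmax head of a toy grasp network)."""
    fmap = conv2d(img, kernel)
    return tuple(np.unravel_index(np.argmax(fmap), fmap.shape))

# A bright 3x3 "box" on a dark background; a matching 3x3 averaging kernel.
img = np.zeros((8, 8))
img[2:5, 3:6] = 1.0
pos = grasp_position(img, np.ones((3, 3)) / 9)
```

The response peaks where the kernel fully overlaps the box, so the predicted grasp point lands on the box's top-left corner in feature-map coordinates. A real system would train many such kernels end to end and regress a full grasp pose rather than a single index.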
The embodiment further provides a stacking method based on the 3D vision stacking system, and the stacking method specifically comprises the following steps:
S1, judging the working scene: monitoring images of article stacking area A and target stacking area B are acquired by the monitoring facility and transmitted to the control processing unit. Condition one: articles are stacked in area A; condition two: target area B has free space. If both conditions are met, the working scene is a palletizing scene;
S2, issuing the job task: the control processing unit receives the monitoring information and the manager instruction and sends palletizing job information to the palletizing robot, including article stacking area A, target stacking area B, stack size, and so on;
S3, grabbing articles: the palletizing robot receives the job information and moves from the rest/charging area to article stacking area A. The monitoring camera transmits two-dimensional images to image processing module 1 in the control processing unit; after processing, they are passed to the operation module, which predicts the grabbing position with a grasp model obtained by deep learning and sends action instructions to the mechanical arm. The arm grabs the article as instructed and then moves to target stacking area B;
S4, stacking articles: after the palletizing robot carries an article to target stacking area B, the 3D vision scanning device scans the stacking-area environment and shoots a 3D image, which is transmitted to image processing module 2 in the control processing unit. The algorithm in the operation module calculates the placement target, computes the correct placement position according to the package size, and guides the robot to stack the article with an accurate posture;
S5, checking and resetting: the palletizing robot cannot guarantee one hundred percent accuracy, so articles may scatter or drop during palletizing. Scattered articles are detected through the monitoring equipment, and the palletizing robot executes the grabbing and stacking task again. When the stack is orderly and no scattered articles remain in the palletizing area, the task is regarded as complete, and the robot finally returns to the rest area to charge.
In S5, the palletizing robot may be charged by wireless induction, with its charging state and battery level transmitted to the control unit.
Examples of the experiments
The system of the present application was evaluated in both a simulated environment and a real scene, demonstrating that it can execute grabbing and palletizing tasks. Boxes of different sizes and colors were randomly piled on empty ground, and the palletizing robot had to grab and stack them one by one to meet the size requirement, palletizing according to the specific steps of the embodiment. The method achieved a palletizing success rate of 90% (36/40) on the stacking task.
If the system functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the portion of it that substantially contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. The aforementioned storage media include a USB flash drive, a removable hard disk, read-only memory, random-access memory, an optical disk, and other media that can store program code.
Finally, the above description covers only preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (7)

1. A 3D vision palletizing system adopting artificial intelligence, comprising a monitoring facility, a 3D vision scanning device, a palletizing robot, and a control processing unit, characterized in that: the control processing unit comprises a manager instruction input unit, image processing module 1, image processing module 2, a control module, and an operation module, and the modules of the 3D vision palletizing system are connected as follows: the monitoring facility inputs information to image processing module 1; the 3D vision scanning device is connected with the palletizing robot and inputs information to image processing module 2; the image processing modules and the control module input information to the operation module; the control module inputs instructions to the operation module and the palletizing robot; and the manager instruction input unit inputs instructions to the control module.
2. The 3D vision palletizing system adopting artificial intelligence according to claim 1, characterized in that: the 3D vision scanning device is installed on the palletizing robot and used for shooting 3D images to realize scanning and positioning; the shot 3D images are point cloud information, which is transmitted to the operation module after noise reduction and feature extraction by the image processing module; the operation module performs three-dimensional reconstruction on the received point cloud, and a built-in artificial neural network then analyzes the three-dimensional model.
3. The 3D vision palletizing system adopting artificial intelligence according to claim 1, characterized in that: the palletizing robot comprises a mechanical arm and a moving platform; the mechanical arm is attached to the moving platform by a threaded connection; the moving platform has an automatic navigation function and is used for locomotion, while the mechanical arm, as the physical embodiment of the system's artificial intelligence, is used for grabbing articles; a clamping tool is installed at the end of the mechanical arm, which can be a vacuum suction cup or a magnetic suction cup depending on the application scenario, and either type connects well with the moving platform.
4. The 3D vision palletizing system adopting artificial intelligence according to claim 1, characterized in that: the control processing unit comprises a manager instruction input unit, image processing module 1, image processing module 2, a control module, and an operation module; manager instructions are executed through a programmable logic controller (PLC), which consists of a CPU, instruction and data memory, input/output interfaces, a power supply, digital-analog conversion, and other functional units, and the manager can at any time load control instructions into the memory through the PLC for storage and execution.
5. The 3D vision palletizing system adopting artificial intelligence according to claim 1, characterized in that: the operation module builds a palletizing model based on deep learning with a built-in artificial neural network, which fine-tunes the mechanical arm and the gripping device when grabbing and stacking products, generates the stacking pattern, and guides the mechanical arm to place articles at the designated positions; the artificial neural network is a convolutional neural network (CNN) used to adjust the mechanical arm and gripping device for grabbing and stacking; the convolutional neural network is constructed by imitating the visual perception mechanism of living organisms and can perform both supervised and unsupervised learning, and parameter sharing of the convolution kernels in the hidden layers and the sparsity of inter-layer connections allow it to learn grid-like features with a smaller amount of computation.
6. The 3D vision palletizing system adopting artificial intelligence according to claim 1, characterized in that the specific palletizing steps based on the system are as follows:
S1, judging the working scene: monitoring images of article stacking area A and target stacking area B are acquired by the monitoring facility and transmitted to the control processing unit; condition one: articles are stacked in area A; condition two: target area B has free space; if both conditions are met, the working scene is a palletizing scene;
S2, issuing the job task: the control processing unit receives the monitoring information and the manager instruction and sends palletizing job information to the palletizing robot, including article stacking area A, target stacking area B, stack size, and so on;
S3, grabbing articles: the palletizing robot receives the job information and moves from the rest/charging area to article stacking area A; the monitoring camera transmits two-dimensional images to image processing module 1 in the control processing unit, which processes them and passes the result to the operation module; the operation module predicts the grabbing position with a grasp model obtained by deep learning and sends action instructions to the mechanical arm, which grabs the article as instructed and then moves to target stacking area B;
S4, stacking articles: after the palletizing robot carries an article to target stacking area B, the 3D vision scanning device scans the stacking-area environment and shoots a 3D image, which is transmitted to image processing module 2 in the control processing unit; the algorithm in the operation module calculates the placement target, computes the correct placement position according to the package size, and guides the robot to stack the article with an accurate posture;
S5, checking and resetting: the palletizing robot cannot guarantee one hundred percent accuracy, so articles may scatter or drop during palletizing; scattered articles are detected through the monitoring equipment, and the palletizing robot executes the grabbing and stacking task again; when the stack is orderly and no scattered articles remain in the palletizing area, the task is regarded as complete, and the robot finally returns to the rest area to charge.
7. The system of claim 6, wherein in the step S5 the palletizing robot is charged in a wireless inductive manner and transmits its charging state and battery level information to the control unit.
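The charging-status report of claim 7 could be sketched as below; the field names and the low-battery threshold are illustrative assumptions, not values taken from the patent.

```python
# Hypothetical sketch of the wireless-charging status message (claim 7)
# that the robot sends to the control unit. The 20% low-battery threshold
# and all field names are assumptions for illustration only.

def charging_report(battery_pct, on_charger):
    """Build the status message reporting charging state and battery level."""
    return {
        "charging": bool(on_charger),
        "battery_pct": max(0, min(100, battery_pct)),  # clamp to 0..100
        "needs_charge": battery_pct < 20 and not on_charger,
    }
```

For example, `charging_report(15, False)` flags the robot as needing to return to the charging area, while `charging_report(80, True)` reports an ongoing charge.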
CN202210525053.XA 2022-05-14 2022-05-14 3D vision stacking system adopting artificial intelligence Pending CN114933176A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210525053.XA CN114933176A (en) 2022-05-14 2022-05-14 3D vision stacking system adopting artificial intelligence


Publications (1)

Publication Number Publication Date
CN114933176A true CN114933176A (en) 2022-08-23

Family

ID=82865304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210525053.XA Pending CN114933176A (en) 2022-05-14 2022-05-14 3D vision stacking system adopting artificial intelligence

Country Status (1)

Country Link
CN (1) CN114933176A (en)


Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2015064107A1 (en) * 2013-10-31 2017-03-09 日本電気株式会社 Management system, list creation device, data structure and print label
CN107265129A (en) * 2017-06-02 2017-10-20 成都福莫斯智能系统集成服务有限公司 Using the system of image recognition auxiliary robot stacking
CN109279373A (en) * 2018-11-01 2019-01-29 西安中科光电精密工程有限公司 A kind of flexible de-stacking robot palletizer system and method based on machine vision
CN109353833A (en) * 2018-11-27 2019-02-19 深圳市汇川技术股份有限公司 Robot stacking point generation method, equipment and computer-readable memory
CN110222862A (en) * 2018-03-02 2019-09-10 北京京东尚科信息技术有限公司 Palletizing method and device
CN110490524A (en) * 2019-08-21 2019-11-22 赖辉 A kind of de-stacking method based on stacking data, de-stacking device and de-stacking system
CN110569792A (en) * 2019-09-09 2019-12-13 吉林大学 Method for detecting front object of automatic driving automobile based on convolutional neural network
CN111099363A (en) * 2020-01-09 2020-05-05 湖南视比特机器人有限公司 Stacking method, stacking system and storage medium
CN111243017A (en) * 2019-12-24 2020-06-05 广州中国科学院先进技术研究所 Intelligent robot grabbing method based on 3D vision
CN111232664A (en) * 2020-03-18 2020-06-05 上海载科智能科技有限公司 Industrial robot applied soft package unstacking, unloading and stacking device and method for unstacking, unloading and stacking
CN111360847A (en) * 2020-04-17 2020-07-03 江苏茂屹科技发展有限公司 Automatic delivery robot of access material
US10759054B1 (en) * 2020-02-26 2020-09-01 Grey Orange Pte. Ltd. Method and system for handling deformable objects
CN112047113A (en) * 2020-08-26 2020-12-08 苏州中科全象智能科技有限公司 3D visual stacking system and method based on artificial intelligence technology
CN113222257A (en) * 2021-05-17 2021-08-06 广东工业大学 Online mixed stacking method based on buffer area
CN113264369A (en) * 2021-06-21 2021-08-17 江苏经贸职业技术学院 Automatic palletizing device for efficient logistics unloading
CN113547525A (en) * 2021-09-22 2021-10-26 天津施格机器人科技有限公司 Control method of robot controller special for stacking
CN113967914A (en) * 2021-10-30 2022-01-25 江苏建筑职业技术学院 Stacking method for industrial robot
CN114084683A (en) * 2021-12-02 2022-02-25 长沙长泰智能装备有限公司 Method and device for determining a shape of a pile


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG XU: "Improved underwater target pattern recognition method based on correlation dimension features", Bulletin of Science and Technology (《科技通报》) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116757331A (en) * 2023-08-11 2023-09-15 山东捷瑞数字科技股份有限公司 Method, device, equipment and medium for generating stacking scheme based on industrial vision
CN117022971A (en) * 2023-10-09 2023-11-10 南通知力机械科技有限公司 Intelligent logistics stacking robot control system
CN117022971B (en) * 2023-10-09 2023-12-22 南通知力机械科技有限公司 Intelligent logistics stacking robot control system
CN117142156A (en) * 2023-10-30 2023-12-01 深圳市金环宇电线电缆有限公司 Cable stacking control method, device, equipment and medium based on automatic positioning
CN117142156B (en) * 2023-10-30 2024-02-13 深圳市金环宇电线电缆有限公司 Cable stacking control method, device, equipment and medium based on automatic positioning
CN117735201A (en) * 2023-12-26 2024-03-22 杭州三奥智能科技有限公司 Automatic feeding stacking mechanism of vision guiding robot

Similar Documents

Publication Publication Date Title
CN114933176A (en) 3D vision stacking system adopting artificial intelligence
US11383380B2 (en) Object pickup strategies for a robotic device
DE102019130048B4 (en) A robotic system with a sack loss management mechanism
US9649767B2 (en) Methods and systems for distributing remote assistance to facilitate robotic object manipulation
US9205558B1 (en) Multiple suction cup control
US9630316B2 (en) Real-time determination of object metrics for trajectory planning
Ellekilde et al. Motion planning efficient trajectories for industrial bin-picking
CN112850186B (en) Mixed pile-dismantling method based on 3D vision
US20230044001A1 (en) Systems and methods for object detection
JP7175487B1 (en) Robotic system with image-based sizing mechanism and method for operating the robotic system
CN115321090A (en) Method, device, equipment, system and medium for automatically receiving and taking luggage in airport
CN116728399A (en) System and method for a robotic system with object handling
Jia et al. Robot online 3D bin packing strategy based on deep reinforcement learning and 3D vision
US20230305574A1 (en) Detecting empty workspaces for robotic material handling
Kumar et al. Design and development of an automated robotic pick & stow system for an e-commerce warehouse
CN115556094A (en) Material taking method and device based on three-axis manipulator and computer readable storage medium
CN114800512A (en) Robot pushing and pulling boxing method and system based on deep reinforcement learning
CA3190171A1 (en) A selector for robot-retrievable items
US20230182315A1 (en) Systems and methods for object detection and pick order determination
US20230286156A1 (en) Motion planning and control for robots in shared workspace employing staging poses
US20230025647A1 (en) Robotic system with object update mechanism and methods for operating the same
CN116652974A (en) Control unit and method for controlling robot gripping object
CN116061192A (en) System and method for a robotic system with object handling
Yesudasu et al. Depalletisation humanoid torso: Real-time cardboard package detection based on deep learning and pose estimation algorithm
Rutishauser et al. From vision to action: grasping unmodeled objects from a heap

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220823