CN116100551A - AI vision technology-based robot virtual simulation application method and system

AI vision technology-based robot virtual simulation application method and system

Info

Publication number
CN116100551A
Authority
CN
China
Prior art keywords
robot
intelligent
box body
sorting
riveting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202310139493.6A
Other languages
Chinese (zh)
Inventor
Zhu Yuehao
Guo Xiaochen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Haiyi Intelligent Technology Co ltd
Original Assignee
Suzhou Haiyi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Haiyi Intelligent Technology Co ltd
Priority to CN202310139493.6A
Publication of CN116100551A
Legal status: Withdrawn

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
    • B25J9/161 - Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 - Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • B25J9/1679 - Programme controls characterised by the tasks executed
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a robot virtual simulation application method and system based on AI vision technology, and relates to the field of virtual simulation. The method includes a sorting and assembly workstation: based on an intelligent path planning algorithm for the robot in a simulation scene, the intelligent robot uses a dedicated quick-change clamp to place rivet nuts on the welded box body and anchor them in the hole sites of the welded box body; based on the poses of scattered screws recognized by a 3D smart camera, the intelligent robot uses the quick-change clamp to place the screws in a secondary positioning device, then switches to a screw-locking clamp and drives the screws into the rivet nuts of the welded box body. The method further includes a mobile manipulator arm: using deep learning that fuses object classification and localization, the box body is conveyed from the material table to the sorting and assembly workstation, and once nut riveting and screw locking on the box body are complete, the finished box body is conveyed to a stereoscopic warehouse. The method improves control precision and ensures the stability of the production line.

Description

AI vision technology-based robot virtual simulation application method and system
Technical Field
The invention relates to the field of virtual simulation, in particular to a robot virtual simulation application method and system based on an AI vision technology.
Background
Artificial intelligence is developing rapidly, and demand for AI talent is growing with the boom. Published figures show that in the first ten months of 2017 the demand for AI talent reached nearly twice that of 2016 and 5.3 times that of 2015, with year-on-year growth exceeding 200%. To help address this talent shortage, 3D vision is integrated into conventional intelligent-robot teaching equipment. In actual production, when the welded box body is assembled, an incorrect grasping angle or position makes the robot prone to collisions with the workpiece, which reduces production efficiency, degrades installation quality and strongly disturbs the production process. A robot virtual simulation application method and system are therefore needed that improve the control precision of robot virtual simulation and ensure the stability of the production line.
Disclosure of Invention
The invention aims to provide a robot virtual simulation application method based on AI vision technology in which a reinforcement-learning-based motion planning algorithm lets the robot automatically select a suitable grasping angle and position when grasping. This improves the virtual simulation control precision of the robot, avoids losses in production efficiency and installation quality, and ensures the stability of the production line.
A further aim of the invention is to provide a robot virtual simulation application system based on AI vision technology which, through the same reinforcement-learning-based motion planning algorithm, automatically selects a suitable grasping angle and position when the robot grasps, thereby improving the virtual simulation control precision of the robot, avoiding losses in production efficiency and installation quality, and ensuring the stability of the production line.
Embodiments of the present invention are implemented as follows:
In a first aspect, an embodiment of the present application provides a robot virtual simulation application method based on AI vision technology, including the following steps. Sorting and assembly workstation: based on an intelligent path planning algorithm for the robot in a simulation scene, the intelligent robot uses a dedicated quick-change clamp to place rivet nuts on the welded box body, and a riveting machine anchors them in the hole sites of the welded box body; based on the poses of scattered screws recognized by a 3D smart camera, the intelligent robot uses the quick-change clamp to place the screws in a secondary positioning device, then switches to a screw-locking clamp and drives the screws into the rivet nuts of the welded box body; the 3D smart camera mounted on the sorting and assembly workstation collects a specified number of images, and their labeling is completed. Mobile manipulator arm: using deep learning that fuses object classification and localization, the welded box body is conveyed from the box material table to the sorting and assembly workstation, and once nut riveting and screw locking on the welded box body are complete, the finished box body is conveyed to a stereoscopic warehouse; the mobile manipulator arm comprises a mobile robot, a collaborative robot mounted on the mobile robot, a clamp mounted on the collaborative robot, and a 2D smart camera mounted on the collaborative robot. The mobile robot is driven through the whole operating scene to generate a dedicated planar map model of the scene for the mobile robot; the intelligent robot is likewise driven through the whole operating scene to generate its own dedicated planar map model; and the microphone array of the intelligent robot collects the operator's voice data, after which the voice dialogue prompt words of the intelligent robot are set.
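By way of illustration only (the disclosure contains no source code), the following minimal Python sketch shows how the sorting-and-assembly sequence of this first aspect could be orchestrated: camera-reported screw poses drive placement into the secondary positioning device, a quick-change tool swap follows, and the screw is driven into the rivet nut. All class, function and parameter names are assumptions introduced for the sketch, not part of the patent.

```python
# Illustrative sketch only: the patent does not publish source code.
# All names (Pose, RobotArm, recognize_screw_poses, ...) are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Pose:
    x: float          # position in the workstation frame (mm)
    y: float
    z: float
    rx: float         # orientation as Euler angles (deg)
    ry: float
    rz: float

class RobotArm:
    def __init__(self) -> None:
        self.tool = "quick_change_gripper"

    def switch_tool(self, tool: str) -> None:
        # Quick-change coupling: swap between pick-and-place and screw-locking tools.
        self.tool = tool

    def move_to(self, pose: Pose) -> None:
        print(f"[{self.tool}] moving to {pose}")

def recognize_screw_poses() -> List[Pose]:
    # Placeholder for the 3D smart camera's pose-estimation output.
    return [Pose(120.0, 40.0, 15.0, 0.0, 180.0, 30.0)]

def assemble_one_box(arm: RobotArm, secondary_fixture: Pose, nut_hole: Pose) -> None:
    for screw_pose in recognize_screw_poses():
        arm.switch_tool("quick_change_gripper")
        arm.move_to(screw_pose)          # grasp the scattered screw
        arm.move_to(secondary_fixture)   # re-seat it in the secondary positioning device
        arm.switch_tool("screw_locking_clamp")
        arm.move_to(nut_hole)            # drive the screw into the riveted nut

if __name__ == "__main__":
    assemble_one_box(RobotArm(),
                     secondary_fixture=Pose(300, 0, 50, 0, 180, 0),
                     nut_hole=Pose(450, 80, 60, 0, 180, 0))
```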
In some embodiments of the invention, the robot virtual simulation application method based on AI vision technology further includes the following steps: importing the training set data collected by the 3D smart camera, setting the training parameters, and training a workpiece pose recognition model; deploying the trained model, running pose recognition tests on workpieces, and scoring the model according to its accuracy over repeated recognitions; writing a robot program with graphical programming software to control the intelligent robot to grasp, reorient and place screws; and writing a robot program with graphical programming software that, combined with the recognition results of the 3D smart camera, controls the intelligent robot to sort the scattered workpieces in the material box.
In some embodiments of the invention, the method further includes the following steps: programming the mobile robot, setting its working point and automatic charging point, and realizing autonomous motion planning of the mobile robot; programming the 2D smart camera to recognize the box body; and programming the collaborative robot so that the mobile manipulator arm operates as a coordinated whole and completes the positioning and grasping of the box workpiece.
In some embodiments of the invention, the method further includes the following steps: setting the working point and automatic inspection route of the intelligent robot in the planar map according to the actual scene; programming the intelligent robot to recognize the pose of the screw material box and grasp it; and programming the intelligent robot so that inspection runs can be started and controlled by voice.
In some embodiments of the invention, the method further includes the following steps: writing a robot program with graphical programming software that, combined with the recognition results of the 3D smart camera, controls the intelligent robot through the complete sorting, riveting and screw-locking process; writing a master control program that establishes online communication between the units of the technical platform; and writing a master control program that controls each unit of the technical platform individually.
In some embodiments of the invention, the method further includes the following steps, which fuse artificial intelligence with autonomous robot planning to carry out a complete, typical industrial-scene task: a) the complete process is started by voice, the mobile manipulator arm conveys a semi-finished box body to the sorting and assembly bench, and the intelligent robot conveys screw workpieces to the sorting and assembly bench; b) the intelligent robot performs the sorting, riveting and screw-locking tasks, the mobile manipulator arm automatically returns to the charging pile to charge, and the intelligent robot performs its automatic inspection task; c) the mobile manipulator arm finishes charging, conveys the next semi-finished box body, and transfers the finished box body to the stereoscopic warehouse, while the intelligent robot continues the sorting, riveting and screw-locking tasks.
In a second aspect, an embodiment of the present application provides a robot virtual simulation application system based on AI vision technology, which implements any one of the robot virtual simulation application methods of the first aspect.
Compared with the prior art, the embodiment of the invention has at least the following advantages or beneficial effects:
For the first and second aspects: the invention is realized on the basis of the intelligent path planning algorithm for the robot in the simulation scene, whereby the robot uses a dedicated quick-change clamp to place the rivet nuts accurately on the welded box body and anchor them in its hole sites. Then, based on the poses of the scattered screws recognized by the 3D smart camera, the robot uses the quick-change clamp to place the screws in the secondary positioning device, switches to the screw-locking clamp, and drives the screws into the rivet nuts of the box body. The artificial intelligence technologies involved are image semantic segmentation based on deep learning, which applies deep learning to two-dimensional images; object pose estimation based on 3D point cloud information, which applies end-to-end deep learning to point clouds; and intelligent robot path planning in high-dimensional spaces, which applies reinforcement learning. Through the reinforcement-learning-based motion planning algorithm, the robot automatically selects a suitable grasping angle and position when grasping, which improves the virtual simulation control precision of the robot, avoids losses in production efficiency and installation quality, and ensures the stability of the production line.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the invention and therefore should not be regarded as limiting its scope; a person skilled in the art can obtain other related drawings from these drawings without inventive effort.
FIG. 1 is a flowchart of a robot virtual simulation application method based on AI vision technology according to embodiment 1 of the present invention;
FIG. 2 is a control flow chart of the robot in embodiment 1 of the present invention;
FIG. 3 is a flow chart of the control application in embodiment 1 of the present invention;
FIG. 4 is a flowchart of an example scenario in embodiment 1 of the present invention;
FIG. 5 is a schematic diagram of an electronic device according to embodiment 2 of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments are described below completely and clearly with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments, as generally described and illustrated in the figures, may be arranged and designed in a wide variety of different configurations.
Example 1
Referring to FIGS. 1 to 4, which illustrate a robot virtual simulation application method based on AI vision technology according to an embodiment of the present application, the method includes the following steps. Sorting and assembly workstation: based on an intelligent path planning algorithm for the robot in a simulation scene, the intelligent robot uses a dedicated quick-change clamp to place rivet nuts on the welded box body, and a riveting machine anchors them in the hole sites of the welded box body; based on the poses of scattered screws recognized by the 3D smart camera, the intelligent robot uses the quick-change clamp to place the screws in a secondary positioning device, then switches to a screw-locking clamp and drives the screws into the rivet nuts of the welded box body; the 3D smart camera mounted on the sorting and assembly workstation collects a specified number of images, and their labeling is completed. Mobile manipulator arm: using deep learning that fuses object classification and localization, the welded box body is conveyed from the box material table to the sorting and assembly workstation, and once nut riveting and screw locking on the welded box body are complete, the finished box body is conveyed to a stereoscopic warehouse; the mobile manipulator arm comprises a mobile robot, a collaborative robot mounted on the mobile robot, a clamp mounted on the collaborative robot, and a 2D smart camera mounted on the collaborative robot. The mobile robot is driven through the whole operating scene to generate a dedicated planar map model of the scene for the mobile robot; the intelligent robot is likewise driven through the whole operating scene to generate its own dedicated planar map model; and the microphone array of the intelligent robot collects the operator's voice data, after which the voice dialogue prompt words of the intelligent robot are set.
The method is realized on the basis of the intelligent path planning algorithm for the robot in the simulation scene: the robot uses a dedicated quick-change clamp to place the rivet nuts accurately on the welded box body and anchor them in its hole sites. Then, based on the poses of the scattered screws recognized by the 3D smart camera, the robot uses the quick-change clamp to place the screws in the secondary positioning device, switches to the screw-locking clamp, and drives the screws into the rivet nuts of the box body. The artificial intelligence technologies involved are image semantic segmentation based on deep learning, which applies deep learning to two-dimensional images; object pose estimation based on 3D point cloud information, which applies end-to-end deep learning to point clouds; and intelligent robot path planning in high-dimensional spaces, which applies reinforcement learning. Through the reinforcement-learning-based motion planning algorithm, the robot automatically selects a suitable grasping angle and position when grasping, which improves the virtual simulation control precision of the robot, avoids losses in production efficiency and installation quality, and ensures the stability of the production line.
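As a purely illustrative sketch of the reinforcement-learning idea (the patent does not disclose the algorithm's internals), the toy Python program below learns a grasp angle by tabular, single-state Q-learning against an invented simulation reward; the angle set, success probabilities and hyperparameters are all assumptions made for this example.

```python
# Toy Q-learning sketch of grasp-angle selection in simulation.
# The environment model (success probabilities per angle) is invented for illustration.
import random

GRASP_ANGLES = [0, 30, 60, 90, 120, 150]          # candidate grasp angles (deg)
SUCCESS_PROB = {0: 0.2, 30: 0.5, 60: 0.9, 90: 0.95, 120: 0.6, 150: 0.3}

def simulate_grasp(angle: int) -> float:
    """Reward +1 for a collision-free grasp, -1 for a collision, in simulation."""
    return 1.0 if random.random() < SUCCESS_PROB[angle] else -1.0

def train(episodes: int = 2000, alpha: float = 0.1, epsilon: float = 0.2) -> dict:
    q = {a: 0.0 for a in GRASP_ANGLES}            # single-state Q table
    for _ in range(episodes):
        if random.random() < epsilon:             # epsilon-greedy exploration
            angle = random.choice(GRASP_ANGLES)
        else:
            angle = max(q, key=q.get)
        reward = simulate_grasp(angle)
        q[angle] += alpha * (reward - q[angle])   # incremental value update
    return q

if __name__ == "__main__":
    q_table = train()
    best = max(q_table, key=q_table.get)
    print(f"learned grasp angle: {best} deg, Q-values: {q_table}")
```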
The mobile robot conveys the welded box body from the box material table to the sorting and assembly bench and, once nut riveting and screw locking on the box body are complete, conveys the finished box body to the stereoscopic warehouse. The artificial intelligence technology used here is object detection based on deep learning, specifically deep learning that fuses object classification and localization. The material area is divided into a screw pick-and-place area and an assembly-box pick-and-place area. In the screw pick-and-place area, the upper layer holds several bins, each containing a different screw box, while the lower layer is the intelligent robot's pick-and-place platform; different screw boxes can be placed manually at arbitrary positions on the platform as required, and the intelligent robot retrieves the required screw box by visual recognition. The assembly-box pick-and-place area is likewise divided into two layers: the upper layer holds finished products and the lower layer holds semi-finished products.
According to the task requirements, robot perception data are acquired with the robot's internal and external sensors and preprocessed; the workpiece data collected by the 3D smart camera are labeled; working-scene maps are built separately for the mobile robot and the intelligent robot; the intelligent robot's voice dialogue data are labeled; and multi-sensor data fusion, processing and analysis are carried out.
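The multi-sensor acquisition and labeling step could be organized along the lines of the following sketch, in which samples from the 3D camera, 2D camera and microphone are stored with annotations and grouped by timestamp; the record fields and the fusion window are assumptions, not part of the disclosure.

```python
# Illustrative data-collection sketch; field names and fusion rule are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SensorSample:
    timestamp: float
    sensor: str               # e.g. "3d_camera", "2d_camera", "microphone"
    payload: Dict             # raw reading (point cloud path, image path, audio path, ...)
    label: str = ""           # annotation added by the operator

@dataclass
class Dataset:
    samples: List[SensorSample] = field(default_factory=list)

    def add(self, sample: SensorSample) -> None:
        self.samples.append(sample)

    def fuse(self, window: float = 0.05) -> List[List[SensorSample]]:
        # Naive fusion: group samples whose timestamps fall within `window` seconds.
        groups, current = [], []
        for s in sorted(self.samples, key=lambda s: s.timestamp):
            if current and s.timestamp - current[0].timestamp > window:
                groups.append(current)
                current = []
            current.append(s)
        if current:
            groups.append(current)
        return groups

if __name__ == "__main__":
    ds = Dataset()
    ds.add(SensorSample(0.00, "3d_camera", {"file": "cloud_0001.ply"}, label="screw_m6"))
    ds.add(SensorSample(0.02, "2d_camera", {"file": "img_0001.png"}, label="box_half"))
    print(f"{len(ds.fuse())} fused frame group(s)")
```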
In some embodiments of the invention, the robot virtual simulation application method based on AI vision technology further includes the following steps: importing the training set data collected by the 3D smart camera, setting the training parameters, and training a workpiece pose recognition model; deploying the trained model, running pose recognition tests on workpieces, and scoring the model according to its accuracy over repeated recognitions; writing a robot program with graphical programming software to control the intelligent robot to grasp, reorient and place screws; and writing a robot program with graphical programming software that, combined with the recognition results of the 3D smart camera, controls the intelligent robot to sort the scattered workpieces in the material box.
According to the task requirements, a given convolutional neural network model is selected, the training parameters are set, the training data set is fed into the model for training, and the training results and model file are output. Using the trained model and the workpieces provided by the technical platform, a graphical robot program is written so that the training result can be verified intuitively.
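A minimal training-loop sketch is given below. It assumes PyTorch (the patent names no framework) and substitutes random tensors for the annotated 3D-camera data set, so it only illustrates the shape of the workflow: choose a convolutional model, set the training parameters, train, and save the model file for deployment.

```python
# Minimal PyTorch training-loop sketch; the patent does not name a framework,
# and the synthetic tensors stand in for the annotated 3D-camera dataset.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Fake data: 64 depth crops (1x64x64) with 6-DoF pose targets (x, y, z, rx, ry, rz).
images = torch.randn(64, 1, 64, 64)
poses = torch.randn(64, 6)
loader = DataLoader(TensorDataset(images, poses), batch_size=8, shuffle=True)

model = nn.Sequential(                 # small CNN regressor for workpiece pose
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 6),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # training parameters
criterion = nn.MSELoss()

for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

torch.save(model.state_dict(), "pose_model.pt")   # model file for later deployment
```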
In some embodiments of the invention, the method further includes the following steps: programming the mobile robot, setting its working point and automatic charging point, and realizing autonomous motion planning of the mobile robot; programming the 2D smart camera to recognize the box body; and programming the collaborative robot so that the mobile manipulator arm operates as a coordinated whole and completes the positioning and grasping of the box workpiece.
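The work-point and charging-point behaviour of the mobile robot could be expressed as in the sketch below; the waypoint coordinates, battery model and thresholds are invented for illustration and do not come from the disclosure.

```python
# Hypothetical waypoint/charging logic for the mobile manipulator; not vendor code.
from dataclasses import dataclass

@dataclass
class Waypoint:
    name: str
    x: float
    y: float

WORK_POINT = Waypoint("sorting_station", 4.2, 1.5)
CHARGE_POINT = Waypoint("charging_dock", 0.0, 0.0)

class MobileRobot:
    def __init__(self, battery: float = 1.0) -> None:
        self.battery = battery          # state of charge, 0..1

    def navigate(self, wp: Waypoint) -> None:
        print(f"navigating to {wp.name} ({wp.x}, {wp.y})")
        self.battery -= 0.15            # crude energy model for the sketch

    def step(self) -> None:
        # Return to the dock when low, otherwise keep servicing the work point.
        if self.battery < 0.3:
            self.navigate(CHARGE_POINT)
            self.battery = 1.0          # assume charging completes
        else:
            self.navigate(WORK_POINT)

if __name__ == "__main__":
    robot = MobileRobot()
    for _ in range(8):
        robot.step()
```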
In some embodiments of the invention, the method further includes the following steps: setting the working point and automatic inspection route of the intelligent robot in the planar map according to the actual scene; programming the intelligent robot to recognize the pose of the screw material box and grasp it; and programming the intelligent robot so that inspection runs can be started and controlled by voice.
According to the task requirements, the platform's robots are programmed and planned to realize automatic decision making, execution of specified motions and tasks, human-robot collaboration, robot self-diagnosis and remote intelligent maintenance. The results of artificial intelligence model learning and training are then fused with the robot's autonomous planning function: applying the robot's artificial intelligence application technology, the corresponding functional components are called, operated, programmed and debugged, and a complete, typical industrial-scene task is carried out on the operating platform.
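The voice-triggered task dispatch described above might look like the following sketch, where speech recognition is assumed to have already produced a text string and the prompt words and task names are illustrative placeholders.

```python
# Sketch of a voice-command dispatcher; speech recognition itself is out of scope here,
# so the recognized text is passed in as a plain string.
from typing import Callable, Dict

def start_patrol() -> str:
    return "intelligent robot: starting automatic inspection route"

def start_sorting() -> str:
    return "intelligent robot: starting sorting / riveting / screw-locking cycle"

def return_to_charge() -> str:
    return "mobile manipulator: returning to charging dock"

COMMANDS: Dict[str, Callable[[], str]] = {
    "start patrol": start_patrol,
    "start sorting": start_sorting,
    "go charge": return_to_charge,
}

def dispatch(recognized_text: str) -> str:
    # Match the recognized phrase against the configured prompt words.
    for phrase, action in COMMANDS.items():
        if phrase in recognized_text.lower():
            return action()
    return "no matching voice command"

if __name__ == "__main__":
    print(dispatch("Please start patrol of the workshop"))
    print(dispatch("Robot, go charge now"))
```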
In some embodiments of the invention, the method further includes the following steps: writing a robot program with graphical programming software that, combined with the recognition results of the 3D smart camera, controls the intelligent robot through the complete sorting, riveting and screw-locking process; writing a master control program that establishes online communication between the units of the technical platform; and writing a master control program that controls each unit of the technical platform individually.
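The master control program could be structured as in the sketch below, which coordinates the platform units through an in-process message queue; a real deployment would replace the queue with the platform's actual fieldbus or network link, which the patent does not specify.

```python
# In-process sketch of a master control loop coordinating the platform units;
# a real deployment would use a fieldbus or TCP link, which is omitted here.
import queue
from typing import Dict

class Unit:
    def __init__(self, name: str) -> None:
        self.name = name

    def handle(self, command: str) -> str:
        return f"{self.name}: acknowledged '{command}'"

class MasterController:
    def __init__(self) -> None:
        self.units: Dict[str, Unit] = {}
        self.inbox: "queue.Queue[tuple[str, str]]" = queue.Queue()

    def register(self, unit: Unit) -> None:
        self.units[unit.name] = unit

    def send(self, unit_name: str, command: str) -> None:
        self.inbox.put((unit_name, command))

    def run_once(self) -> None:
        # Drain pending commands and forward each to the addressed unit.
        while not self.inbox.empty():
            unit_name, command = self.inbox.get()
            print(self.units[unit_name].handle(command))

if __name__ == "__main__":
    master = MasterController()
    for name in ("intelligent_robot", "mobile_manipulator", "riveting_machine"):
        master.register(Unit(name))
    master.send("mobile_manipulator", "deliver semi-finished box")
    master.send("intelligent_robot", "start sorting cycle")
    master.run_once()
```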
In some embodiments of the invention, the method further includes the following steps, which fuse artificial intelligence with autonomous robot planning to carry out a complete, typical industrial-scene task: a) the complete process is started by voice, the mobile manipulator arm conveys a semi-finished box body to the sorting and assembly bench, and the intelligent robot conveys screw workpieces to the sorting and assembly bench; b) the intelligent robot performs the sorting, riveting and screw-locking tasks, the mobile manipulator arm automatically returns to the charging pile to charge, and the intelligent robot performs its automatic inspection task; c) the mobile manipulator arm finishes charging, conveys the next semi-finished box body, and transfers the finished box body to the stereoscopic warehouse, while the intelligent robot continues the sorting, riveting and screw-locking tasks.
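For reference, the three phases a) to c) can be laid out as a simple workflow table in code, as in the following sketch; each step is only a printed placeholder for the corresponding unit command.

```python
# Sketch of the three-phase workflow (a)-(c); step implementations are placeholders.
WORKFLOW = [
    ("a", ["voice command starts the full process",
           "mobile manipulator delivers semi-finished box to sorting/assembly bench",
           "intelligent robot delivers screw workpieces to the bench"]),
    ("b", ["intelligent robot performs sorting, riveting and screw locking",
           "mobile manipulator returns to the charging pile",
           "intelligent robot runs its automatic inspection route"]),
    ("c", ["mobile manipulator finishes charging and delivers the next semi-finished box",
           "finished box is transferred to the stereoscopic warehouse",
           "intelligent robot continues sorting, riveting and screw locking"]),
]

def run_workflow() -> None:
    for phase, steps in WORKFLOW:
        print(f"--- phase {phase} ---")
        for step in steps:
            print(f"  {step}")   # in a real system each step would dispatch a unit command

if __name__ == "__main__":
    run_workflow()
```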
Example 2
Referring to FIG. 5, which is a schematic block diagram of an electronic device according to an embodiment of the present application, the electronic device comprises a memory 101, a processor 102 and a communication interface 103, which are electrically connected to one another directly or indirectly to transmit or exchange data. For example, the components may be electrically connected via one or more communication buses or signal lines. The memory 101 may be used to store software programs and modules, such as the program instructions/modules corresponding to the robot virtual simulation application method based on AI vision technology provided in embodiment 1 of the present application, and the processor 102 executes the software programs and modules stored in the memory 101, thereby carrying out the various functional applications and data processing. The communication interface 103 may be used to exchange signaling or data with other node devices.
The memory 101 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), and the like.
The processor 102 may be an integrated circuit chip with signal processing capability. It may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like, or a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The apparatus embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the part of it that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions that cause a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In summary, the embodiment of the application provides a method and a system for virtual simulation application of a robot based on an AI vision technology:
the intelligent path planning method is realized based on the intelligent path planning algorithm of the robot in the simulation scene, and the robot utilizes a specific quick-change clamp to accurately place the riveting nut on the welding box body and anchor the riveting nut in the hole site of the welding box body. Then, based on the scattered screw pose of 3D intelligent camera discernment, the robot utilizes specific quick change anchor clamps to place the screw in secondary positioner to switch the screw lock and pay anchor clamps, lock the screw to the riveting nut of box. The related artificial intelligence technology comprises an image semantic segmentation technology based on a deep learning technology, and a two-dimensional image-oriented deep learning technology is applied; the object pose estimation technology based on the 3D point cloud information applies the point cloud-oriented end-to-end deep learning technology; the robot intelligent path planning technology facing to the high-dimensional space applies the reinforcement learning technology. According to the invention, through a motion planning algorithm based on reinforcement learning, when the robot performs grabbing, a proper grabbing angle and a proper position can be automatically selected, so that the virtual simulation control precision of the robot is improved, the problems of production efficiency reduction and installation quality are solved, and the stability of a production line is ensured.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.

Claims (7)

1. A robot virtual simulation application method based on AI vision technology, characterized by comprising the following steps:
sorting and assembly workstation: based on an intelligent path planning algorithm for the robot in a simulation scene, an intelligent robot uses a dedicated quick-change clamp to place rivet nuts on a welded box body, and a riveting machine anchors the rivet nuts in the hole sites of the welded box body; based on the poses of scattered screws recognized by a 3D smart camera, the intelligent robot uses the quick-change clamp to place the screws in a secondary positioning device, then switches to a screw-locking clamp and drives the screws into the rivet nuts of the welded box body;
the 3D smart camera mounted on the sorting and assembly workstation collects a specified number of images, and their labeling is completed;
mobile manipulator arm: using deep learning that fuses object classification and localization, the welded box body is conveyed from the box material table to the sorting and assembly workstation, and once nut riveting and screw locking on the welded box body are complete, the finished box body is conveyed to a stereoscopic warehouse;
the mobile manipulator arm comprises a mobile robot, a collaborative robot mounted on the mobile robot, a clamp mounted on the collaborative robot, and a 2D smart camera mounted on the collaborative robot;
controlling the mobile robot to traverse the whole operating scene to generate a dedicated planar map model of the scene for the mobile robot; controlling the intelligent robot to traverse the whole operating scene to generate a dedicated planar map model of the scene for the intelligent robot; and collecting the operator's voice data with the microphone array of the intelligent robot and setting the voice dialogue prompt words of the intelligent robot.
2. The robot virtual simulation application method based on AI vision technology according to claim 1, further comprising the steps of: importing the training set data collected by the 3D smart camera, setting the training parameters, and training a workpiece pose recognition model; deploying the trained model, running pose recognition tests on workpieces, and scoring the model according to its accuracy over repeated recognitions; writing a robot program with graphical programming software to control the intelligent robot to grasp, reorient and place screws; and writing a robot program with graphical programming software that, combined with the recognition results of the 3D smart camera, controls the intelligent robot to sort the scattered workpieces in the material box.
3. The robot virtual simulation application method based on AI vision technology according to claim 1, further comprising the steps of: programming the mobile robot, setting its working point and automatic charging point, and realizing autonomous motion planning of the mobile robot; programming the 2D smart camera to recognize the box body; and programming the collaborative robot so that the mobile manipulator arm operates as a coordinated whole and completes the positioning and grasping of the box workpiece.
4. The robot virtual simulation application method based on AI vision technology according to claim 1, further comprising the steps of: setting the working point and automatic inspection route of the intelligent robot in the planar map according to the actual scene; programming the intelligent robot to recognize the pose of the screw material box and grasp it; and programming the intelligent robot so that inspection runs are started and controlled by voice.
5. The robot virtual simulation application method based on AI vision technology according to claim 1, further comprising the steps of: writing a robot program with graphical programming software that, combined with the recognition results of the 3D smart camera, controls the intelligent robot through the complete sorting, riveting and screw-locking process; writing a master control program that establishes online communication between the units of the technical platform; and writing a master control program that controls each unit of the technical platform individually.
6. The robot virtual simulation application method based on AI vision technology according to claim 1, further comprising the following steps, which fuse artificial intelligence with autonomous robot planning to carry out a complete, typical industrial-scene task: a) the complete process is started by voice, the mobile manipulator arm conveys a semi-finished box body to the sorting and assembly bench, and the intelligent robot conveys screw workpieces to the sorting and assembly bench; b) the intelligent robot performs the sorting, riveting and screw-locking tasks, the mobile manipulator arm automatically returns to the charging pile to charge, and the intelligent robot performs its automatic inspection task; c) the mobile manipulator arm finishes charging, conveys the next semi-finished box body, and transfers the finished box body to the stereoscopic warehouse, while the intelligent robot continues the sorting, riveting and screw-locking tasks.
7. A robot virtual simulation application system based on AI vision technology, characterized in that the system implements the robot virtual simulation application method based on AI vision technology according to any one of claims 1 to 6.
CN202310139493.6A 2023-02-20 2023-02-20 AI vision technology-based robot virtual simulation application method and system Withdrawn CN116100551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310139493.6A CN116100551A (en) 2023-02-20 2023-02-20 AI vision technology-based robot virtual simulation application method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310139493.6A CN116100551A (en) 2023-02-20 2023-02-20 AI vision technology-based robot virtual simulation application method and system

Publications (1)

Publication Number Publication Date
CN116100551A true CN116100551A (en) 2023-05-12

Family

ID=86259677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310139493.6A Withdrawn CN116100551A (en) 2023-02-20 2023-02-20 AI vision technology-based robot virtual simulation application method and system

Country Status (1)

Country Link
CN (1) CN116100551A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
    Inventor after: Guo Xiaochen
    Inventor before: Zhu Yuehao
    Inventor before: Guo Xiaochen
WW01 Invention patent application withdrawn after publication
    Application publication date: 20230512