CN117124332A - Mechanical arm control method and system based on AI vision grabbing - Google Patents

Mechanical arm control method and system based on AI vision grabbing

Info

Publication number
CN117124332A
CN117124332A (application CN202311346308.7A)
Authority
CN
China
Prior art keywords
obstacle
coordinates
grabbed
laser radar
grabbing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311346308.7A
Other languages
Chinese (zh)
Inventor
季丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Communications Institute of Technology
Original Assignee
Nanjing Communications Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Communications Institute of Technology filed Critical Nanjing Communications Institute of Technology
Priority to CN202311346308.7A priority Critical patent/CN117124332A/en
Publication of CN117124332A publication Critical patent/CN117124332A/en
Pending legal-status Critical Current

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 - Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666 - Avoiding collision or forbidden zones
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 - Controls for manipulators
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The application provides a mechanical arm control method and system based on AI vision grabbing, and relates to the technical field of data processing. The application integrates laser radar and AI recognition technology to accurately extract the object to be grabbed, and converts between the pixel coordinates and the laser radar coordinates of the object to be grabbed for accurate positioning, which reduces error, solves the positioning error caused by the laser radar and the camera not being in the same position, and allows the camera and the laser radar to be arranged freely. Meanwhile, the operation route is determined based on the second coordinates in the positioning and recognition result, so that the mechanical arm bypasses obstacles, collisions are avoided, and the operating safety of the mechanical arm is improved.

Description

Mechanical arm control method and system based on AI vision grabbing
Technical Field
The application relates to the technical field of data processing, in particular to a mechanical arm control method and system based on AI vision grabbing.
Background
The mechanical arm is an automatic device that replaces manual labor in industrial production for monotonous, frequent and repetitive long-duration operations, and performs monitoring, grabbing and carrying work or tool-handling operations according to set programs, trajectories and requirements.
The installation state of the mechanical arm usually defaults to ground mounting, so the safe movement range is generally set on the basis of ground installation. However, in the course of implementing the application, the inventor found that the prior art has at least the following technical problem: when a user changes the installation state of the mechanical arm according to the actual use situation, the default safe working range of the mechanical arm may no longer meet the actual requirement, causing accidents such as collisions.
Disclosure of Invention
The application aims to provide a mechanical arm control method and a mechanical arm control system based on AI vision grabbing so as to solve the problems. In order to achieve the above purpose, the technical scheme adopted by the application is as follows:
in a first aspect, the present application provides a mechanical arm control method based on AI vision grabbing, including:
detecting obstacles in a preset distance space of a region to be worked based on a laser radar to obtain an obstacle set;
identifying the obstacle set based on an AI image identification technology to determine an object to be grabbed;
converting the pixel coordinates and the laser radar coordinates of the object to be grabbed in the AI image to obtain a first coordinate;
calculating based on the first coordinates and second coordinates corresponding to each obstacle to obtain an operation route, wherein the second coordinates are obtained by converting laser radar coordinates corresponding to each obstacle into pixel coordinates;
and carrying out grabbing control on the object to be grabbed based on the running route.
In a second aspect, the application further provides a mechanical arm control system based on AI vision grabbing, which comprises a detection module, an extraction module, a conversion module, a planning module and an execution module, wherein:
the detection module is used for detecting obstacles within a preset distance space of the area to be worked based on a laser radar to obtain an obstacle set;
the extraction module is used for identifying the obstacle set based on an AI image identification technology to determine an object to be grabbed;
the conversion module is used for converting between the pixel coordinates and the laser radar coordinates of the object to be grabbed in the AI image to obtain first coordinates;
the planning module is used for calculating an operation route based on the first coordinates and second coordinates corresponding to each obstacle, wherein the second coordinates are obtained by converting the laser radar coordinates corresponding to each obstacle into pixel coordinates;
and the execution module is used for carrying out grabbing control on the object to be grabbed based on the operation route.
The beneficial effects of the application are as follows:
according to the application, the radar and AI recognition technology is integrated to accurately extract the object to be grabbed, and the pixel coordinates and the laser radar coordinates of the object to be grabbed are converted to accurately position, so that the problem of error in positioning the object to be grabbed caused by the fact that the laser radar and the camera are not in the same position is solved while the error is reduced, and the free device of the camera and the laser radar is realized. Meanwhile, the operation route is determined based on the second coordinate in the positioning identification result, so that the mechanical arm bypasses an obstacle, collision is avoided, and the operation safety of the mechanical arm is improved.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block flow diagram of a robotic arm control method based on AI visual capture in an embodiment of the application;
FIG. 2 is a block diagram of a robotic arm control system based on AI visual capture in an embodiment of the application;
fig. 3 is a block diagram of an AI-vision-grabbing-based mechanical arm control device in an embodiment of the present application.
The marks in the figure: 710-a detection module; 720-an extraction module; 721-a first acquisition unit; 722-a matching unit; 7221-a second acquisition unit; 7222-classification unit; 7223-a second judging unit; 723-a first judgment unit; 730-a conversion module; 731-a third acquisition unit; 732-a first computing unit; 733-a second calculation unit; 740-a planning module; 741-dividing unit; 742-a tag unit; 743-query unit; 750-an execution module; 800-an AI vision grabbing-based mechanical arm control device; 801-a processor; an 802-memory; 803-multimedia component; 804-I/O interface; 805-a communication component.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Example 1:
referring to fig. 1, fig. 1 is a flow chart of a control method of a mechanical arm based on AI visual capturing in the present embodiment. The embodiment provides a mechanical arm control method based on AI vision grabbing, which comprises the steps of S1, S2, S3, S4 and S5.
Step S1, detecting obstacles in a preset distance space of a region to be worked based on a laser radar to obtain an obstacle set.
It can be understood that in this step, all obstacles within the preset distance are detected by the laser radar with the mechanical arm as the center, so that all objects in the area to be worked that may be grabbing targets can be captured quickly and the objects to be grabbed are screened preliminarily. Moreover, the laser radar can work in scenes with low visibility and detect all obstacles within the preset range without being affected by the external environment, which improves the accuracy of the final grabbing.
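As a non-authoritative illustration of step S1 (not part of the patent text), the following Python sketch filters a lidar point cloud down to the preset distance around the mechanical arm and groups the remaining points into an obstacle set with a naive Euclidean clustering; the array layout, the 2.0 m range and the 0.05 m cluster distance are assumed values chosen only for the example.

```python
import numpy as np

def detect_obstacles(points, arm_origin, max_range=2.0, cluster_dist=0.05):
    """Filter lidar points to a preset radius around the arm base and group them
    into obstacle clusters by simple Euclidean region growing.

    points: (N, 3) lidar points in the lidar frame.
    arm_origin: (3,) position of the arm base in the same frame.
    Returns a list of (M_i, 3) arrays, one per obstacle."""
    pts = np.asarray(points, dtype=float)
    # Keep only the points inside the preset working distance.
    dist = np.linalg.norm(pts - np.asarray(arm_origin, dtype=float), axis=1)
    pts = pts[dist <= max_range]

    # Naive single-linkage clustering: grow clusters point by point.
    unvisited = set(range(len(pts)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        stack, members = [seed], [seed]
        while stack:
            i = stack.pop()
            near = [j for j in list(unvisited)
                    if np.linalg.norm(pts[i] - pts[j]) < cluster_dist]
            for j in near:
                unvisited.remove(j)
                stack.append(j)
                members.append(j)
        clusters.append(pts[members])
    return clusters
```

Each returned cluster is one candidate obstacle that the later AI recognition step examines.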
And S2, identifying the obstacle set based on an AI image identification technology to determine an object to be grabbed.
It can be appreciated that in this step, AI image recognition technology is used to identify which objects in the obstacle set belong to the grabbing task to be performed, so that the objects to be grabbed are determined quickly.
In order to reduce the influence of the environmental factors of the working scene on the image acquisition quality, the step S2 further includes a step S21, a step S22 and a step S23.
And S21, acquiring image information corresponding to each obstacle in the obstacle set based on an infrared camera.
It will be appreciated that in this step, the use of an infrared camera to obtain image information may address the effect of ambient light on image acquisition.
Step S22, the image information is detected and matched against preset labels based on an AI image recognition technology to obtain an AI recognition result and a corresponding mark value, wherein the preset labels are images associated with different mark values, and the mark values take different numerical values for different types of objects.
It can be understood that in this step, the preset label and each image information are respectively identified and matched by using the AI image identification technology, so as to obtain an object classification result (i.e., AI identification result) corresponding to each image information, and label assignment is performed on each image information according to the label value corresponding to the preset label.
Further, in order to better adapt to the working scene with lower visibility, the step S22 includes a step S221, a step S222, and a step S223.
Step S221, the laser radar scattering cross-section (RCS) information corresponding to each obstacle is acquired based on a laser radar detection technology, and the target radius is determined.
It is understood that in this step the radar cross section (RCS) is a physical quantity measuring the intensity of the millimeter wave reflected by a target detected by the millimeter-wave laser radar, and its value depends strongly on the size, shape, structure, material and so on of the target's cross section. Introducing it into obstacle identification yields different numerical values (i.e. target radii) for different kinds of targets such as people, vehicles and wall surfaces, and these target radii then facilitate the later clustering into classification results. Introducing this physical quantity therefore improves the ability of the clustering algorithm to distinguish different kinds of targets.
Step S222, obtaining an obstacle classification result based on the target radius and a preset classification condition, wherein the preset classification condition is a score threshold value calculated by different obstacles based on the target radius.
It can be understood that in this step the preset classification condition is determined according to formula (1); the classification condition of each type of obstacle can be calculated adaptively, and the RCS is used as a supplement to the classification threshold, which improves the clustering accuracy:
L ≤ R + λ_rcs (1)
wherein: L is the target radius corresponding to a given object type; R is the fixed target radius corresponding to that object; λ_rcs is the value corresponding to the RCS.
And step S223, when the AI identification result and the obstacle classification result are the same object, correspondingly obtaining the marking value.
It can be understood that in the step, the obstacle is identified by combining the data fusion of the AI image identification result and the laser radar identification result, so that the influence of severe weather such as heavy fog, sand storm and the like on AI perception and laser radar perception is avoided, the effectiveness of the object to be grabbed is verified and extracted, and the identification accuracy and the anti-interference capability of the classification result are improved.
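To make steps S221 to S223 concrete, here is a small hedged sketch (my own illustration; the class names, fixed radii and λ_rcs offsets are invented and do not appear in the patent) of checking formula (1) for each candidate class and only assigning a mark value when the radar-based classification agrees with the AI recognition result.

```python
# Illustrative values only; the patent does not publish concrete radii or RCS offsets.
CLASS_TABLE = {
    # class name: (fixed target radius R in meters, lambda_rcs offset)
    "person":  (0.35, 0.10),
    "vehicle": (1.80, 0.40),
    "wall":    (3.00, 0.60),
}

def classify_by_rcs(target_radius):
    """Return every class whose threshold satisfies formula (1): L <= R + lambda_rcs."""
    return [name for name, (r, lam) in CLASS_TABLE.items()
            if target_radius <= r + lam]

def fuse_labels(ai_label, target_radius, mark_values):
    """Step S223 sketch: the mark value is assigned only when the AI recognition
    result and the radar-based classification agree on the same object type."""
    radar_classes = classify_by_rcs(target_radius)
    if ai_label in radar_classes:
        return mark_values[ai_label]
    # Treated as "not an object to be grabbed" (an infinitely small mark value).
    return float("-inf")
```

Whether a candidate with a small target radius should match several classes, or only the tightest one, is a design choice the patent leaves open; the sketch simply applies the inequality as stated.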
And S23, judging whether the marking value is within a preset score, if so, determining that the object is to be grabbed, and if not, determining that the object is not to be grabbed.
It can be understood that in this step, the preset score may be set to be greater than zero, and the marking value corresponding to the non-object to be grabbed is set to be infinitely small, and then the object to be grabbed is determined according to the marking value corresponding to each image information. And according to the importance of the grabbing task, priorities and corresponding values can be set for different types of objects to be grabbed, and the objects are ordered according to the values so that the robot can execute important or urgent tasks.
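A minimal sketch of the selection logic in step S23, assuming mark values have already been assigned as above; the preset score of zero and the ordering by mark value follow the description, while the data layout is an assumption.

```python
def select_grab_targets(marked_obstacles, preset_score=0.0):
    """Step S23 sketch: keep obstacles whose mark value exceeds the preset score
    and order them so higher-priority (larger mark value) objects are grabbed first.

    marked_obstacles: list of (obstacle_id, mark_value) pairs."""
    targets = [(oid, v) for oid, v in marked_obstacles if v > preset_score]
    return sorted(targets, key=lambda t: t[1], reverse=True)
```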
And step S3, converting the pixel coordinates and the laser radar coordinates of the object to be grabbed in the AI image to obtain a first coordinate.
It can be understood that this step obtains an accurate position of the object to be grabbed relative to the mechanical arm and solves the positioning error caused by the laser radar and the camera not being in the same position.
In detail, the step S3 includes a step S31, a step S32, and a step S33.
And S31, calibrating the camera based on a checkerboard calibration method, and acquiring internal parameters and distortion coefficients of the camera.
And step S32, calculating with the EPnP (efficient perspective-n-point) algorithm according to a plurality of groups of data pairs to obtain a rotation matrix and a translation matrix respectively, wherein the data pairs are the laser radar coordinates and the pixel coordinates corresponding to the same object to be grabbed.
It can be understood that in this step the camera and the laser radar simultaneously collect the pixel coordinates and the point cloud coordinates of the edge corner points of the checkerboard, forming data pairs. Constraint equations are then established for the plurality of data pairs with the EPnP algorithm and solved to obtain the rotation matrix and the translation matrix.
And step S33, calculating based on the rotation matrix, the translation matrix, the internal parameters and the distortion coefficients to obtain a first coordinate.
It will be appreciated that in this step the first coordinates are calculated according to equation (2), wherein: d_x, d_y, m_0 and n_0 are the camera internal parameters, namely the horizontal-axis focal length and vertical-axis focal length of the camera and the horizontal and vertical pixel coordinates of the image center point; m and n are the horizontal and vertical pixel coordinates of the object to be grabbed; P is the rotation matrix; U is the translation matrix; A_j is the distortion coefficient; A_k, B_k and C_k are the three-dimensional laser radar coordinates of the object to be grabbed.
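The calibration and conversion chain of steps S31 to S33 can be sketched with OpenCV, which provides checkerboard calibration (cv2.calibrateCamera), an EPnP solver (cv2.solvePnP with the SOLVEPNP_EPNP flag) and point projection (cv2.projectPoints). This is only an illustrative pipeline under assumed input formats, not the patent's exact equation (2).

```python
import cv2
import numpy as np

def calibrate_and_register(obj_points, img_points, image_size, lidar_pts, pixel_pts):
    """Steps S31-S33 sketch.
    obj_points / img_points: checkerboard correspondences for intrinsic calibration
        (lists of (N, 3) and (N, 2) float32 arrays, one per calibration image).
    lidar_pts / pixel_pts: the 'data pairs', i.e. lidar 3D coordinates and pixel
        coordinates of the same checkerboard corner points.
    Returns camera matrix K, distortion coefficients, rotation matrix P and
    translation vector U from the lidar frame to the camera frame."""
    # S31: checkerboard calibration -> internal parameters and distortion coefficients.
    _, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                           image_size, None, None)
    # S32: EPnP over the lidar/pixel data pairs -> rotation and translation.
    _, rvec, tvec = cv2.solvePnP(np.asarray(lidar_pts, np.float32),
                                 np.asarray(pixel_pts, np.float32),
                                 K, dist, flags=cv2.SOLVEPNP_EPNP)
    P, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
    return K, dist, P, tvec

def lidar_to_pixel(lidar_xyz, K, dist, P, tvec):
    """S33 sketch: project a lidar point (A_k, B_k, C_k) into pixel coordinates (m, n)."""
    rvec, _ = cv2.Rodrigues(P)
    pix, _ = cv2.projectPoints(np.asarray([lidar_xyz], np.float32),
                               rvec, tvec, K, dist)
    return pix.reshape(2)
```

The same rotation matrix, translation matrix, internal parameters and distortion coefficients are reused to convert each obstacle's laser radar coordinates into the pixel frame, which is how the second coordinates of step S4 are obtained.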
And S4, calculating based on the first coordinates and second coordinates corresponding to each obstacle to obtain an operation route, wherein the second coordinates are obtained by converting the laser radar coordinates corresponding to each obstacle into pixel coordinates.
It can be understood that in this step, the mechanical arm bypasses the obstacle according to the second coordinate, so that collision is avoided, and operation safety of the mechanical arm is improved.
In order to improve the working efficiency of the robot, step S4 includes step S41, step S42, and step S43.
And S41, performing three-dimensional space segmentation on the to-be-tested working area to obtain a plurality of cell bodies with the same size.
It can be understood that in this step, the three-dimensional space covered by the working area to be measured is divided into cubes of the same size, and is discretized into cellular bodies.
And S42, marking the corresponding cellular bodies according to each second coordinate to obtain an obstacle space and a traffic space.
It can be understood that in this step, the corresponding cellular body is marked according to the three-dimensional positioning information of the second coordinate corresponding to the obstacle, so as to form an obstacle space, and the place where no mark is made is the traffic space. According to the application, the obstacle space and the passing space are classified according to the second coordinates, so that the passable path of the mechanical arm can be obtained, the occurrence of obstacle collision event is avoided, and the operation safety is improved.
And S43, based on the traffic space, taking the cell bodies corresponding to the gripper of the mechanical arm and to the first coordinate as the path starting point and path end point, and calculating with the Dijkstra algorithm, with the shortest distance as the objective, to obtain the operation route.
It can be understood that in this step the Dijkstra algorithm is used to find the shortest collision-free operating path of the mechanical arm to the first coordinate, so that the mechanical arm can effectively avoid obstacles and the operating efficiency of the machine is improved.
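Steps S41 to S43 can be illustrated with the following hedged sketch: the workspace is discretized into equal cell bodies, cells hit by obstacle coordinates are marked as obstacle space, and Dijkstra's algorithm searches the remaining traffic space from the gripper's cell to the cell of the first coordinate. Grid resolution, 6-connectivity and data formats are assumptions made only for the example.

```python
import heapq
import numpy as np

def plan_route(workspace_min, workspace_max, voxel, obstacle_pts, start_pt, goal_pt):
    """Steps S41-S43 sketch: voxelize the workspace, mark obstacle cells,
    and find the shortest collision-free cell path with Dijkstra."""
    lo = np.asarray(workspace_min, float)
    shape = np.ceil((np.asarray(workspace_max, float) - lo) / voxel).astype(int)
    blocked = np.zeros(shape, dtype=bool)

    def to_cell(p):
        # Map a 3D point to the index of the cell body containing it.
        return tuple(np.clip(((np.asarray(p) - lo) / voxel).astype(int), 0, shape - 1))

    for p in obstacle_pts:            # S42: mark the obstacle space
        blocked[to_cell(p)] = True

    start, goal = to_cell(start_pt), to_cell(goal_pt)
    moves = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:                       # S43: Dijkstra over the traffic space
        d, cell = heapq.heappop(heap)
        if cell == goal:
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]         # cells from the gripper to the target
        if d > dist.get(cell, float("inf")):
            continue
        for dx, dy, dz in moves:
            nxt = (cell[0] + dx, cell[1] + dy, cell[2] + dz)
            if not all(0 <= nxt[i] < shape[i] for i in range(3)) or blocked[nxt]:
                continue
            nd = d + 1.0
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = cell
                heapq.heappush(heap, (nd, nxt))
    return None                       # no collision-free route found
```

In practice the returned cell path would be converted back to Cartesian waypoints and smoothed before being sent to the arm controller; that post-processing is outside the scope of this sketch.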
And S5, carrying out grabbing control on the object to be grabbed based on the running route.
It can be understood that the application integrates laser radar and AI recognition technology to accurately extract the object to be grabbed, and converts between the pixel coordinates and the laser radar coordinates of the object to be grabbed for accurate positioning, which reduces error, solves the positioning error caused by the laser radar and the camera not being in the same position, and allows the camera and the laser radar to be arranged freely. Meanwhile, the operation route is determined based on the second coordinates in the positioning and recognition result, so that the mechanical arm bypasses obstacles, collisions are avoided, and the operating safety of the mechanical arm is improved.
Example 2:
referring to fig. 2, fig. 2 is a block diagram of an AI vision gripping-based robotic arm control system, including a detection module 710, an extraction module 720, a conversion module 730, a planning module 740, and an execution module 750, according to an example embodiment, wherein:
detection module 710: the method comprises the steps of detecting obstacles in a preset distance space of a region to be worked based on a laser radar to obtain an obstacle set;
extraction module 720: the method comprises the steps of identifying the obstacle set based on an AI image identification technology to determine an object to be grabbed;
preferably, the extraction module 720 includes a first acquisition unit 721, a matching unit 722, and a first determination unit 723, wherein:
the first acquisition unit 721: the method comprises the steps of acquiring image information corresponding to each obstacle in the obstacle set based on an infrared camera;
matching unit 722: the method comprises the steps of detecting and matching the image information with a preset label based on an AI image recognition technology to obtain an AI recognition result and a corresponding mark value, wherein the preset label is an image with different mark values, and the mark values are different numerical values according to different types of objects;
preferably, the matching unit 722 includes a second acquisition unit 7221, a classification unit 7222, and a second judgment unit 7223, wherein:
the second acquisition unit 7221: the method comprises the steps of respectively obtaining the information of the cross section area of the laser radar corresponding to each obstacle based on a laser radar detection technology, and determining the radius of a target;
classification unit 7222: the obstacle classification method comprises the steps of obtaining an obstacle classification result based on the target radius and a preset classification condition, wherein the preset classification condition is a score threshold value calculated by different obstacles based on the target radius;
the second judgment unit 7223: and the marker value is correspondingly obtained when the AI identification result and the obstacle classification result are the same object.
The first judgment unit 723: and the method is used for judging whether the marking value is within a preset score, if so, the object is to be grabbed, and if not, the object is not to be grabbed.
Conversion module 730: used for converting between the pixel coordinates and the laser radar coordinates of the object to be grabbed in the AI image to obtain the first coordinates;
preferably, the conversion module 730 includes a third obtaining unit 731, a first calculating unit 732, and a second calculating unit 733, wherein:
third acquisition unit 731: used for calibrating the camera based on the checkerboard calibration method and obtaining the internal parameters and distortion coefficients of the camera;
the first calculation unit 732: used for calculating with the EPnP (efficient perspective-n-point) algorithm according to a plurality of groups of data pairs to obtain a rotation matrix and a translation matrix respectively, wherein the data pairs are the laser radar coordinates and the pixel coordinates corresponding to the same object to be grabbed;
the second calculation unit 733: used for calculating the first coordinates based on the rotation matrix, the translation matrix, the internal parameters and the distortion coefficients.
Planning module 740: used for calculating the operation route based on the first coordinates and the second coordinates corresponding to each obstacle, wherein the second coordinates are obtained by converting the laser radar coordinates corresponding to each obstacle into pixel coordinates;
preferably, the planning module 740 includes a segmentation unit 741, a labeling unit 742, and a query unit 743, wherein:
the dividing unit 741: the method comprises the steps of carrying out three-dimensional space segmentation on the to-be-detected working area to obtain a plurality of cell bodies with consistent sizes;
a marking unit 742: the method comprises the steps of marking corresponding cellular bodies according to each second coordinate to obtain an obstacle space and a traffic space;
query unit 743: and the operation route is obtained by taking the cell body corresponding to the first coordinate as a path starting point and a path ending point and taking the shortest distance as a target through Dijkstra algorithm based on the passing space.
Execution module 750: used for carrying out grabbing control on the object to be grabbed based on the operation route.
It should be noted that, regarding the system in the above embodiment, the specific manner in which the respective modules perform the operations has been described in detail in the embodiment regarding the method, and will not be described in detail herein.
Example 3:
corresponding to the above method embodiment, the present embodiment further provides an AI-vision-grabbing-based mechanical arm control device 800, and an AI-vision-grabbing-based mechanical arm control device 800 described below and an AI-vision-grabbing-based mechanical arm control method described above may be referred to correspondingly.
Fig. 3 is a block diagram of a robotic arm control device 800 based on AI visual capture, according to an example embodiment. As shown in fig. 3, the AI-vision grabbing-based robot arm control apparatus 800 may include: a processor 801, a memory 802. The AI-vision-grabbing-based robotic arm control device 800 may also include one or more of a multimedia component 803, an I/O interface 804, and a communication component 805.
The processor 801 is configured to control the overall operation of the AI-vision-grabbing-based robotic arm control device 800, so as to complete all or part of the steps in the AI-vision-grabbing-based robotic arm control method. The memory 802 is used to store various types of data to support operation of the AI-vision-grabbing-based robotic arm control device 800, which may include, for example, instructions for any application or method operating on the AI-vision-grabbing-based robotic arm control device 800, as well as application-related data, such as contact data, messages, pictures, audio, video, and so forth. The memory 802 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM for short), Electrically Erasable Programmable Read-Only Memory (EEPROM for short), Erasable Programmable Read-Only Memory (EPROM for short), Programmable Read-Only Memory (PROM for short), Read-Only Memory (ROM for short), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 803 may include a screen and an audio component, wherein the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signals may be further stored in the memory 802 or transmitted through the communication component 805. The audio component further comprises at least one speaker for outputting audio signals. The I/O interface 804 provides an interface between the processor 801 and other interface modules, such as a keyboard, a mouse or buttons, where the buttons may be virtual buttons or physical buttons. The communication component 805 is configured to perform wired or wireless communication between the AI-vision-grabbing-based robotic arm control device 800 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC for short), 2G, 3G or 4G, or a combination of one or more of them, and the corresponding communication component 805 may therefore include a Wi-Fi module, a Bluetooth module and an NFC module.
In an exemplary embodiment, the AI vision gripping based robotic arm control device 800 may be implemented by one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), digital signal processors (Digital Signal Processor, abbreviated as DSP), digital signal processing devices (Digital Signal Processing Device, abbreviated as DSPD), programmable logic devices (Programmable Logic Device, abbreviated as PLD), field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), controllers, microcontrollers, microprocessors, or other electronic components for performing the AI vision gripping based robotic arm control method described above.
In another exemplary embodiment, a computer storage medium is also provided that includes program instructions that, when executed by a processor, implement the steps of the AI-vision-grabbing-based robotic arm control method described above. For example, the computer storage medium may be the memory 802 including the program instructions described above, which are executable by the processor 801 of the AI-vision-grabbing-based robotic arm control device 800 to perform the AI-vision-grabbing-based robotic arm control method described above.
Example 4:
corresponding to the above method embodiment, a storage medium is further provided in this embodiment, and a storage medium described below and a robotic arm control method based on AI visual capture described above may be referred to correspondingly.
A storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the method for controlling a robotic arm based on AI vision gripping of the above-described method embodiment.
The storage medium may be a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, etc. that can store various program codes.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (10)

1. The mechanical arm control method based on AI vision grabbing is characterized by comprising the following steps:
detecting obstacles in a preset distance space of a region to be worked based on a laser radar to obtain an obstacle set;
identifying the obstacle set based on an AI image identification technology to determine an object to be grabbed;
converting the pixel coordinates and the laser radar coordinates of the object to be grabbed in the AI image to obtain a first coordinate;
calculating based on the first coordinates and second coordinates corresponding to each obstacle to obtain an operation route, wherein the second coordinates are obtained by converting laser radar coordinates corresponding to each obstacle into pixel coordinates;
and carrying out grabbing control on the object to be grabbed based on the running route.
2. The AI-vision-grabbing-based mechanical arm control method of claim 1, wherein identifying the set of obstacles based on AI image recognition techniques to determine an object to be grabbed comprises:
acquiring image information corresponding to each obstacle in the obstacle set based on an infrared camera;
detecting and matching the image information with a preset label based on an AI image recognition technology to obtain an AI recognition result and a corresponding mark value, wherein the preset label is an image with different mark values, and the mark values are different numerical values according to different types of objects;
and judging whether the marking value is within a preset score, if so, determining that the object is to be grabbed, and if not, determining that the object is not to be grabbed.
3. The AI-vision-grabbing-based mechanical arm control method according to claim 2, wherein detecting that the image information is matched with a preset label based on AI image recognition technology, and obtaining an AI recognition result and a label value corresponding to the AI recognition result comprises:
acquiring laser radar scattering cross-section area information corresponding to each obstacle based on a laser radar detection technology, and determining a target radius;
obtaining an obstacle classification result based on the target radius and a preset classification condition, wherein the preset classification condition is a score threshold value calculated by different obstacles based on the target radius;
and when the AI identification result and the obstacle classification result are the same object, correspondingly obtaining the marking value.
4. The AI-vision-grabbing-based mechanical arm control method according to claim 1, wherein the converting based on the pixel coordinates and the laser radar coordinates of the object to be grabbed in the AI image to obtain the first coordinates comprises:
calibrating the camera based on a checkerboard calibration method, and acquiring internal parameters and distortion coefficients of the camera;
calculating with the EPnP (efficient perspective-n-point) algorithm according to a plurality of groups of data pairs to obtain a rotation matrix and a translation matrix respectively, wherein the data pairs are the laser radar coordinates and the pixel coordinates corresponding to the same object to be grabbed;
and calculating based on the rotation matrix, the translation matrix, the internal parameters and the distortion coefficient to obtain a first coordinate.
5. The AI-vision-grabbing-based mechanical arm control method according to claim 1, wherein calculating, based on the first coordinates and the second coordinates corresponding to each obstacle, a running route includes:
dividing the to-be-measured working area in three-dimensional space to obtain a plurality of cell bodies with consistent sizes;
marking the corresponding cellular bodies according to each second coordinate to obtain an obstacle space and a traffic space;
based on the traffic space, the cell bodies corresponding to the gripper of the mechanical arm and to the first coordinate are taken as the path starting point and path end point, and the operation route is obtained by calculating with the Dijkstra algorithm with the shortest distance as the objective.
6. A mechanical arm control system based on AI vision grabbing, characterized by comprising:
a detection module, used for detecting obstacles within a preset distance space of the area to be worked based on a laser radar to obtain an obstacle set;
an extraction module, used for identifying the obstacle set based on an AI image identification technology to determine an object to be grabbed;
a conversion module, used for converting between the pixel coordinates and the laser radar coordinates of the object to be grabbed in the AI image to obtain first coordinates;
a planning module, used for calculating an operation route based on the first coordinates and second coordinates corresponding to each obstacle, wherein the second coordinates are obtained by converting the laser radar coordinates corresponding to each obstacle into pixel coordinates;
and an execution module, used for carrying out grabbing control on the object to be grabbed based on the operation route.
7. The AI-vision-grabbing-based robotic arm control system of claim 6, wherein the extraction module comprises:
a first acquisition unit, used for acquiring the image information corresponding to each obstacle in the obstacle set based on an infrared camera;
a matching unit, used for detecting and matching the image information against preset labels based on an AI image recognition technology to obtain an AI recognition result and a corresponding mark value, wherein the preset labels are images associated with different mark values, and the mark values take different numerical values for different types of objects;
a first judgment unit, used for judging whether the mark value is within the preset score; if so, the object is an object to be grabbed, and if not, it is not an object to be grabbed.
8. The AI-vision-grabbing-based robotic arm control system of claim 7, wherein the matching unit comprises:
a second acquisition unit, used for respectively acquiring the laser radar scattering cross-section information corresponding to each obstacle based on a laser radar detection technology and determining the target radius;
a classification unit, used for obtaining an obstacle classification result based on the target radius and a preset classification condition, wherein the preset classification condition is a score threshold calculated for different obstacles based on the target radius;
a second judgment unit, used for correspondingly obtaining the mark value when the AI recognition result and the obstacle classification result indicate the same object.
9. The AI-vision-grabbing-based robotic arm control system of claim 6, wherein the conversion module comprises:
a third acquisition unit, used for calibrating the camera based on the checkerboard calibration method and obtaining the internal parameters and distortion coefficients of the camera;
a first calculation unit, used for calculating with the EPnP (efficient perspective-n-point) algorithm according to a plurality of groups of data pairs to obtain a rotation matrix and a translation matrix respectively, wherein the data pairs are the laser radar coordinates and the pixel coordinates corresponding to the same object to be grabbed;
a second calculation unit, used for calculating the first coordinates based on the rotation matrix, the translation matrix, the internal parameters and the distortion coefficients.
10. The AI-vision-grabbing-based robotic arm control system of claim 6, wherein the planning module comprises:
a dividing unit, used for performing three-dimensional space segmentation on the working area to be measured to obtain a plurality of cell bodies of consistent size;
a marking unit, used for marking the corresponding cell bodies according to each second coordinate to obtain an obstacle space and a traffic space;
a query unit, used for obtaining the operation route based on the traffic space by taking the cell bodies corresponding to the gripper of the mechanical arm and to the first coordinate as the path starting point and path end point and calculating with the Dijkstra algorithm with the shortest distance as the objective.
CN202311346308.7A 2023-10-17 2023-10-17 Mechanical arm control method and system based on AI vision grabbing Pending CN117124332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311346308.7A CN117124332A (en) 2023-10-17 2023-10-17 Mechanical arm control method and system based on AI vision grabbing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311346308.7A CN117124332A (en) 2023-10-17 2023-10-17 Mechanical arm control method and system based on AI vision grabbing

Publications (1)

Publication Number Publication Date
CN117124332A true CN117124332A (en) 2023-11-28

Family

ID=88856674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311346308.7A Pending CN117124332A (en) 2023-10-17 2023-10-17 Mechanical arm control method and system based on AI vision grabbing

Country Status (1)

Country Link
CN (1) CN117124332A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117463645A (en) * 2023-12-28 2024-01-30 东屹半导体科技(江苏)有限公司 Automatic control method and system for semiconductor sorting integrated machine
CN117463645B (en) * 2023-12-28 2024-04-02 东屹半导体科技(江苏)有限公司 Automatic control method and system for semiconductor sorting integrated machine

Similar Documents

Publication Publication Date Title
US10311719B1 (en) Enhanced traffic detection by fusing multiple sensor data
US9123242B2 (en) Pavement marker recognition device, pavement marker recognition method and pavement marker recognition program
CN103559791B (en) A kind of vehicle checking method merging radar and ccd video camera signal
CN102792314B (en) Cross traffic collision alert system
JP5822255B2 (en) Object identification device and program
TWI651686B (en) Optical radar pedestrian detection method
CN111222568A (en) Vehicle networking data fusion method and device
CN111016918B (en) Library position detection method and device and model training device
CN108711172B (en) Unmanned aerial vehicle identification and positioning method based on fine-grained classification
CN117124332A (en) Mechanical arm control method and system based on AI vision grabbing
CN110136186B (en) Detection target matching method for mobile robot target ranging
CN116310679A (en) Multi-sensor fusion target detection method, system, medium, equipment and terminal
CN108764338B (en) Pedestrian tracking method applied to video analysis
CN114998276B (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
CN111382637A (en) Pedestrian detection tracking method, device, terminal equipment and medium
CN110324583A (en) A kind of video monitoring method, video monitoring apparatus and computer readable storage medium
CN112927303A (en) Lane line-based automatic driving vehicle-mounted camera pose estimation method and system
CN110866428A (en) Target tracking method and device, electronic equipment and storage medium
CN114898319A (en) Vehicle type recognition method and system based on multi-sensor decision-level information fusion
CN115147587A (en) Obstacle detection method and device and electronic equipment
CN114359865A (en) Obstacle detection method and related device
CN113158779A (en) Walking method and device and computer storage medium
CN115880673A (en) Obstacle avoidance method and system based on computer vision
Miseikis et al. Joint human detection from static and mobile cameras
CN110689556A (en) Tracking method and device and intelligent equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination