CN116704017B - Mechanical arm pose detection method based on visual mixing - Google Patents


Info

Publication number
CN116704017B
CN116704017B (application CN202310998840.0A)
Authority
CN
China
Prior art keywords
mechanical arm
information
data
data set
training
Prior art date
Legal status
Active
Application number
CN202310998840.0A
Other languages
Chinese (zh)
Other versions
CN116704017A (en)
Inventor
刘兆伟
龚子航
苏航
阎维青
刘昊
Current Assignee
Yantai New And Old Kinetic Energy Conversion Research Institute And Yantai Demonstration Base For Transfer And Transformation Of Scientific And Technological Achievements
Yantai University
Original Assignee
Yantai New And Old Kinetic Energy Conversion Research Institute And Yantai Demonstration Base For Transfer And Transformation Of Scientific And Technological Achievements
Yantai University
Priority date
Filing date
Publication date
Application filed by Yantai New And Old Kinetic Energy Conversion Research Institute And Yantai Demonstration Base For Transfer And Transformation Of Scientific And Technological Achievements and Yantai University
Priority to CN202310998840.0A
Publication of CN116704017A
Application granted
Publication of CN116704017B
Legal status: Active


Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T — CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 — Road transport of goods or passengers
    • Y02T 10/10 — Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 — Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a vision-mixing-based mechanical arm pose detection method comprising the following steps. S1: acquire target detection data and key point detection data of the mechanical arm and preprocess them. S2: select the mechanical arm target detection frame by training a neural network model on the PASCAL VOC data set corresponding to mechanical arm target detection, and frame-select the overall information of the mechanical arm. S3: detect the mechanical arm skeleton information by training a neural network model on the COCO data set corresponding to mechanical arm key point detection, obtaining the skeleton information of the mechanical arm. S4: measure and calculate the three-dimensional angles of the mechanical arm skeleton, using the three-dimensional spatial information of the mechanical arm to measure the bending-angle information of the arm under different poses. The vision-mixing-based mechanical arm pose detection method can detect the target-detection frame-selection information and skeleton information of the mechanical arm in real time, and calculates the bending angles of the mechanical arm under different poses from synchronously obtained three-dimensional spatial information.

Description

Mechanical arm pose detection method based on visual mixing
Technical Field
The invention relates to the fields of vision and target pose detection, and in particular to a vision-mixing-based mechanical arm pose detection method.
Background
The mechanical arm plays an important role in human–machine fusion environments, assisting human workers to complete more complex, efficient and accurate work tasks. This cooperation mode not only improves work efficiency and quality but also reduces personnel injury and error rates, brings great convenience, and is of significance in promoting the development of industrial manufacturing. However, the mechanical arm may be affected by various uncontrollable factors and run abnormally, causing a series of potential safety hazards, so real-time detection is required. Most conventional research monitors the object in real time either with intelligent-sensor-based detection or with computer-vision-based target detection. The intelligent-sensor-based method usually detects only local behavior characteristics of the monitored object, so it cannot easily achieve a good effect on overall detection; moreover, compared with computer-vision-based detection, intelligent sensors are expensive and unsuitable for large-scale deployment. The target detection method, in turn, often merely judges whether the monitored object is operating normally, so it cannot detect abnormal operation of the mechanical arm at a fine-grained level, which greatly limits its application.
Disclosure of Invention
The technical problem to be solved by the invention is that existing mechanical arm pose detection methods have defects and shortcomings. The mechanical arm may be affected by various uncontrollable factors and run abnormally, causing a series of potential safety hazards, so real-time detection is required. Most conventional research monitors the object in real time either with intelligent-sensor-based detection or with computer-vision-based target detection; the intelligent-sensor-based method usually detects only local behavior characteristics of the monitored object, so it cannot easily achieve a good effect on overall detection, and compared with computer-vision-based detection, intelligent sensors are expensive and unsuitable for large-scale deployment.
In order to solve the technical problems, the invention adopts the following technical means:
a visual mixing-based mechanical arm pose detection method is characterized by comprising the following steps of: the method comprises the following steps:
s1, acquiring data: acquiring target detection data and key point detection data of the mechanical arm, acquiring the target detection data and the key point detection data from the acquired multi-pose video information of the mechanical arm, and performing preprocessing operation, wherein the preprocessing operation comprises the steps of randomly overturning and randomly rotating an image of the mechanical arm;
s2, frame selection information: selecting a mechanical arm target detection frame, namely dividing an acquired PASCAL VOC data set of the mechanical arm according to a proportion of 8:1:1, processing the PASCAL VOC data set corresponding to the detected mechanical arm target by using a Faster-RCNN-ResNet50 network model, and selecting integral information of the mechanical arm in a frame manner;
s3, acquiring skeleton information: the method comprises the steps of detecting mechanical arm skeleton information, dividing an obtained COCO data set of the mechanical arm in proportion, and processing the COCO data set corresponding to the key point detection of the mechanical arm after division by using a ResNet50-FPN network model to obtain the skeleton information of the mechanical arm;
s4, measuring and calculating three-dimensional angles of the framework: and (3) measuring and calculating the three-dimensional angle of the mechanical arm framework, selecting a target detection frame of the mechanical arm and detecting framework information to obtain two-dimensional coordinates of the mechanical arm, transmitting infrared pulse light to the mechanical arm by using an infrared camera to obtain three-dimensional space information of the mechanical arm, and obtaining the bending angles of the mechanical arm under different poses by using an angle measuring and calculating formula.
Preferably, the invention further adopts the technical scheme that:
the target detection data and the key point detection data of the mechanical arm are acquired from the acquired data, wherein the target detection data and the key point detection data comprise a PASCAL VOC data set and a COCO data set of the mechanical arm; the probability of flipping and the probability of rotation in the preprocessing operation are both set to 0.5.
In the frame-selection information, the target detection PASCAL VOC data set is divided so that 4/5 of the data is used for training, 1/10 for testing, and the remaining 1/10 for prediction, i.e., an 8:1:1 split.
In the frame-selection information, the divided mechanical arm target detection data set is processed with a neural network model: the PASCAL VOC data set of the mechanical arm is input into the network and trained with the Faster-RCNN-ResNet50 network, i.e., deep learning on the S2 data; the mechanical arm is then detected and tested with the obtained model weight parameters, and the overall information of the mechanical arm is frame-selected.
The network training in the frame selection information is aimed at S2 data deep learning, wherein the learning steps are as follows:
step 1: acquiring a data set: acquiring a final mechanical arm target detection data set in the frame selection information to obtain a PASCAL VOC data set of the mechanical arm;
step 2: and (3) establishing a model: selecting a better data set as a training set and a testing set, and training to obtain a deep learning network model;
step 3: performing frame selection and prediction: and carrying out real-time frame selection and prediction on the target mechanical arm through the trained deep learning network model.
The formula for generating the candidate boxes for frame selection in the deep learning is:

(p_k(i,j), t_k(i,j)) = RPN(F)

where F is the input feature map; p_k(i,j) and t_k(i,j) are, respectively, the probability that the k-th anchor box at pixel (i,j) contains a target and its adjustment offset. p_k(i,j) is calculated by a sigmoid activation function, and t_k(i,j) is generated by a 4-dimensional regressor network that, for each anchor box, predicts the offset of its corresponding target and the offset (Δx, Δy, Δw, Δh) needed to adjust the anchor box to the bounding box actually containing the target, where (x_a, y_a) are the coordinates of the anchor-box center, (w_a, h_a) are the anchor box's width and height, and Δx, Δy, Δw, Δh are the differences between the true bounding box's center and size and the anchor box's center and size. The RPN then scores all anchor boxes according to the predicted p_k(i,j) and t_k(i,j), and outputs the top N highest-scoring anchors as candidate boxes.
In acquiring the skeleton information, the divided mechanical arm key point detection data set is processed with a neural network model: the COCO data set of the mechanical arm is input into the network and trained with the ResNet50-FPN network, i.e., deep learning on the S3 data; after training, the mechanical arm is detected and tested with the obtained model weight parameters to obtain the skeleton information of the mechanical arm.
The network training in the obtained skeleton information is deep learning for S3 data, wherein the learning steps are as follows:
step 1: obtaining a data set: acquiring a COCO data set of the mechanical arm by utilizing the detection data set of the key points of the mechanical arm in the skeleton information;
step 2: obtaining a model: selecting a better data set as a training set and a testing set, and training to obtain a deep learning network model;
step 3: real-time skeleton detection: and carrying out real-time skeleton detection on the mechanical arm through the trained deep learning network model.
In the skeleton three-dimensional angle measurement, the target detection frame of the mechanical arm is selected and the skeleton information is detected to obtain the two-dimensional coordinates of the mechanical arm; the infrared camera emits infrared pulsed light toward the mechanical arm, the Z-axis information of the three-dimensional space is combined with the two-dimensional skeleton information to obtain the three-dimensional coordinates of the mechanical arm skeleton points, and angle measurement on those three-dimensional coordinates yields the bending angles of the mechanical arm under different poses.
The angle measurement formula in the skeleton three-dimensional angle measurement is:

θ = arccos( (d · n) / (|d| |n|) )

where d is the direction vector of the arm after the mechanical arm swings, n is the normal vector of the robot body plane, θ is the swing angle of the mechanical arm, d · n denotes the dot product of the mechanical arm direction vector and the robot-body-plane normal vector, and |d| |n| denotes the product of the moduli of the two vectors.
The visual mixing-based mechanical arm pose detection method can detect target detection frame selection information and skeleton information of the mechanical arm in real time, and calculate the bending angles of the mechanical arm under different poses through three-dimensional space information obtained synchronously.
Drawings
Fig. 1: Overview of the vision-mixing-based mechanical arm pose detection method of the invention.
Fig. 2: Original image of the mechanical arm before detection.
Fig. 3: Visualized detection result of the mechanical arm skeleton after detection.
Detailed Description
The invention will be further illustrated with reference to the following examples.
An embodiment of the present invention is described with reference to figs. 1 and 2:
a mechanical arm pose detection method based on visual mixing comprises the following steps:
s1, acquiring target detection data and key point detection data of a mechanical arm, acquiring the target detection data and the key point detection data from acquired multi-pose video information of the mechanical arm, and performing preprocessing operation;
s2, selecting a mechanical arm target detection frame, dividing the acquired PASCAL VOC data set of the mechanical arm according to a proportion, processing the divided mechanical arm target detection corresponding PASCAL VOC data set by using a Faster-RCNN-ResNet50 network model, and selecting integral information of the mechanical arm by the frame;
s3, detecting mechanical arm skeleton information, namely dividing the obtained COCO data set of the mechanical arm in proportion, and processing the COCO data set corresponding to the key point detection of the mechanical arm after division by using a ResNet50-FPN network model to obtain the skeleton information of the mechanical arm;
s4, measuring and calculating the three-dimensional angle of the mechanical arm framework, selecting a target detection frame of the mechanical arm, detecting framework information, acquiring two-dimensional coordinates of the mechanical arm, simultaneously transmitting infrared pulse light to the mechanical arm by using an infrared camera, acquiring three-dimensional space information of the mechanical arm, and obtaining the bending angles of the mechanical arm under different poses by using an angle measuring and calculating formula.
The S1 specifically comprises the following steps:
acquiring multi-pose image data of the mechanical arm by utilizing multi-angle shooting of a camera, and acquiring target detection data and key point detection data of the mechanical arm by calibration, wherein the target detection data and the key point detection data comprise a PASCAL VOC data set and a COCO data set of the mechanical arm; the preprocessing operation comprises the steps of randomly overturning and randomly rotating the mechanical arm image, wherein the overturning probability and the rotating probability are set to be 0.5.
The mechanical arm target detection frame selection of the S2 specifically comprises the following steps:
dividing the target detection PASCAL VOC data set obtained in the step S1 according to the ratio of 8:1:1, obtaining 4/5 data for training, 1/10 data for testing and the rest 1/10 data for prediction. The method comprises the steps of processing a divided mechanical arm target detection data set by adopting an improved target detection convolutional neural network model, inputting a PASCAL VOC 4/5 training data set of the mechanical arm by the network, predicting by using a 1/10 prediction data set after training, detecting and testing the mechanical arm by using an obtained model weight parameter after prediction, detecting and testing the mechanical arm by using a model weight parameter obtained after training by using a Faster-RCNN-ResNet50 network, performing real-time frame selection on the target mechanical arm by using a trained deep learning network model, and performing classified prediction by using a softmax layer.
The formula for generating and selecting the candidate boxes in S2 is:

(p_k(i,j), t_k(i,j)) = RPN(F)

where F is the input feature map; p_k(i,j) and t_k(i,j) are, respectively, the probability that the k-th anchor box at pixel (i,j) contains a target and its adjustment offset. Specifically, p_k(i,j) is calculated by a sigmoid activation function, and t_k(i,j) is generated by a 4-dimensional regressor network that, for each anchor box, predicts the offset of its corresponding target and the offset (Δx, Δy, Δw, Δh) needed to adjust the anchor box to the bounding box actually containing the target, where (x_a, y_a) are the coordinates of the anchor-box center, (w_a, h_a) are the anchor box's width and height, and Δx, Δy, Δw, Δh are the differences between the true bounding box's center and size and the anchor box's center and size. Finally, the RPN (region proposal network) scores all anchor boxes according to the predicted p_k(i,j) and t_k(i,j), and outputs the top N highest-scoring anchors as candidate boxes.
The mechanical arm framework information detection of the S3 specifically comprises the following steps:
and processing the divided mechanical arm key point detection data set by adopting a neural network model, inputting the COCO data set of the mechanical arm by the network, and detecting and testing the mechanical arm by using the obtained model weight parameters after training to obtain the skeleton information of the mechanical arm.
To obtain the key-point-detection deep-learning training data, the above method is used to build the final mechanical arm key point detection data set, yielding the COCO data set of the mechanical arm. A better subset is selected as training and testing sets, training is performed with the ResNet50-FPN network to obtain the deep learning network model and its trained parameters, real-time skeleton detection of the mechanical arm is performed with the trained model, and the two-dimensional coordinate information of the mechanical arm skeleton points is acquired at the same time.
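Keypoint heads on ResNet50-FPN backbones typically emit one heatmap per skeleton point; reading off the 2D coordinates can be sketched as follows. This is an illustrative decoding step, assuming a heatmap output format that the patent does not specify.

```python
import numpy as np

def heatmap_to_keypoints(heatmaps):
    """Given per-joint heatmaps of shape (K, H, W), return the (x, y)
    pixel with the strongest response for each skeleton point."""
    keypoints = []
    for hm in np.asarray(heatmaps):
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        keypoints.append((int(x), int(y)))
    return keypoints
```

The resulting list of (x, y) pairs is the two-dimensional skeleton-point information that S4 combines with depth.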
The three-dimensional angle measurement and calculation of the mechanical arm framework of the S4 specifically comprises the following steps:
and selecting a target detection frame of the mechanical arm, detecting skeleton information, acquiring two-dimensional coordinates of the mechanical arm, transmitting infrared pulse light to the mechanical arm by using an infrared camera, combining Z-axis information of a three-dimensional space with the two-dimensional information of the skeleton to obtain three-dimensional coordinate information of skeleton points of the mechanical arm, and measuring and calculating angles of the three-dimensional coordinate information to obtain bending angles of the mechanical arm under different poses.
The angle measurement formula in S4 is:

θ = arccos( (d · n) / (|d| |n|) )

where d is the direction vector of the arm after the mechanical arm swings, n is the normal vector of the robot body plane, θ is the swing angle of the mechanical arm, d · n denotes the dot product of the mechanical arm direction vector and the robot-body-plane normal vector, and |d| |n| denotes the product of the moduli of the two vectors.
The above description covers only specific embodiments of the present invention; the protection of the invention is not limited thereto, and any equivalent change or substitution of the technical features of the invention that would be apparent to those skilled in the art falls within the scope of the invention.

Claims (9)

1. A vision-mixing-based mechanical arm pose detection method, characterized by comprising the following steps:
S1, acquiring data: acquire the target detection data and key point detection data of the mechanical arm from the collected multi-pose video information of the mechanical arm, and perform a preprocessing operation, where the preprocessing operation comprises randomly flipping and randomly rotating the mechanical arm images;
S2, frame-selection information: select the mechanical arm target detection frame, i.e., divide the acquired PASCAL VOC data set of the mechanical arm in the proportion 8:1:1, process the divided PASCAL VOC data set corresponding to mechanical arm target detection with a Faster-RCNN-ResNet50 network model, and frame-select the overall information of the mechanical arm;
S3, acquiring skeleton information: detect the mechanical arm skeleton information, i.e., divide the obtained COCO data set of the mechanical arm proportionally, and process the divided COCO data set corresponding to mechanical arm key point detection with a ResNet50-FPN network model to obtain the skeleton information of the mechanical arm;
s4, measuring and calculating three-dimensional angles of the framework: the three-dimensional angle measurement and calculation of the mechanical arm framework is carried out, when the two-dimensional coordinates of the mechanical arm are obtained through the target detection frame selection and framework information detection of the mechanical arm, the infrared camera is utilized to emit infrared pulse light to the mechanical arm, the three-dimensional space information of the mechanical arm is obtained, the bending angles of the mechanical arm under different poses are obtained through an angle measurement and calculation formula, and the angle measurement and calculation formula is as follows:the direction vector of the arm after the swinging of the mechanical arm is d, the normal vector of the robot body plane is n, θ is the swinging angle of the mechanical arm, d.n represents the point multiplication operation of the direction vector of the mechanical arm and the normal vector of the robot body plane, and d|n| represents the modulus of the direction vector of the mechanical arm and the normal vector of the robot body plane.
2. The vision-mixing-based mechanical arm pose detection method according to claim 1, characterized in that: the target detection data and key point detection data of the mechanical arm, obtained from the collected data, comprise a PASCAL VOC data set and a COCO data set of the mechanical arm; both the flip probability and the rotation probability in the preprocessing operation are set to 0.5.
3. The vision-mixing-based mechanical arm pose detection method according to claim 1, characterized in that: in the frame-selection information, the target detection PASCAL VOC data set is divided so that 4/5 of the data is used for training, 1/10 for testing, and the remaining 1/10 for prediction, i.e., an 8:1:1 split.
4. The vision-mixing-based mechanical arm pose detection method according to claim 1, characterized in that: in the frame-selection information, the divided mechanical arm target detection data set is processed with a neural network model: the PASCAL VOC data set of the mechanical arm is input into the network and trained with the Faster-RCNN-ResNet50 network, i.e., deep learning on the S2 data; the mechanical arm is then detected and tested with the obtained model weight parameters, and the overall information of the mechanical arm is frame-selected.
5. The vision-mixing-based mechanical arm pose detection method as claimed in claim 4, wherein the method comprises the following steps: the network training in the frame selection information is aimed at S2 data deep learning, wherein the learning steps are as follows:
step 1: acquiring a data set: acquiring a final mechanical arm target detection data set in the frame selection information to obtain a PASCAL VOC data set of the mechanical arm;
step 2: and (3) establishing a model: selecting a better data set as a training set and a testing set, and training to obtain a deep learning network model;
step 3: performing frame selection and prediction: and carrying out real-time frame selection and prediction on the target mechanical arm through the trained deep learning network model.
6. The vision-mixing-based mechanical arm pose detection method according to claim 5, characterized in that: the formula for generating the candidate boxes for frame selection in the deep learning is:

(p_k(i,j), t_k(i,j)) = RPN(F)

where F is the input feature map; p_k(i,j) and t_k(i,j) are, respectively, the probability that the k-th anchor box contains a target and its adjustment offset on pixel (i,j); p_k(i,j) is calculated by a sigmoid activation function, and t_k(i,j) is generated by a 4-dimensional regressor network which, for each anchor box, predicts the offset (x, y, w, h) of its corresponding target and adjusts the anchor box by the offset (Δx, Δy, Δw, Δh) to the bounding box actually containing the target, where (x_a, y_a) are the coordinates of the anchor-box center, (w_a, h_a) are the anchor box's width and height, and Δx, Δy, Δw, Δh are the differences between the true bounding box's center and size and the anchor box's center and size; the RPN then scores all anchor boxes according to the predicted p_k(i,j) and t_k(i,j) and outputs the top N highest-scoring anchors as candidate boxes.
7. The vision-mixing-based mechanical arm pose detection method according to claim 1, characterized in that: in acquiring the skeleton information, the divided mechanical arm key point detection data set is processed with a neural network model: the COCO data set of the mechanical arm is input into the network and trained with the ResNet50-FPN network, i.e., deep learning on the S3 data; after training, the mechanical arm is detected and tested with the obtained model weight parameters to obtain the skeleton information of the mechanical arm.
8. The vision-mixing-based mechanical arm pose detection method as claimed in claim 7, wherein the method comprises the following steps: the network training in the obtained skeleton information is deep learning for S3 data, wherein the learning steps are as follows:
step 1: obtaining a data set: acquiring a COCO data set of the mechanical arm by utilizing the detection data set of the key points of the mechanical arm in the skeleton information;
step 2: obtaining a model: selecting a better data set as a training set and a testing set, and training to obtain a deep learning network model;
step 3: real-time skeleton detection: and carrying out real-time skeleton detection on the mechanical arm through the trained deep learning network model.
9. The vision-mixing-based mechanical arm pose detection method according to claim 1, characterized in that: in the skeleton three-dimensional angle measurement, the target detection frame of the mechanical arm is selected and the skeleton information is detected to obtain the two-dimensional coordinates of the mechanical arm; the infrared camera emits infrared pulsed light toward the mechanical arm; the Z-axis information of the three-dimensional space is combined with the two-dimensional skeleton information to obtain the three-dimensional coordinates of the mechanical arm skeleton points; and angle measurement on those coordinates yields the bending angles of the mechanical arm under different poses.
CN202310998840.0A 2023-08-09 2023-08-09 Mechanical arm pose detection method based on visual mixing Active CN116704017B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310998840.0A CN116704017B (en) 2023-08-09 2023-08-09 Mechanical arm pose detection method based on visual mixing


Publications (2)

Publication Number Publication Date
CN116704017A (en) 2023-09-05
CN116704017B 2023-11-14

Family

ID=87841919

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310998840.0A Active CN116704017B (en) 2023-08-09 2023-08-09 Mechanical arm pose detection method based on visual mixing

Country Status (1)

Country Link
CN (1) CN116704017B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152670A (en) * 2023-10-31 2023-12-01 江西拓世智能科技股份有限公司 Behavior recognition method and system based on artificial intelligence

Citations (9)

Publication number Priority date Publication date Assignee Title
CN110147547A (en) * 2019-04-09 2019-08-20 苏宁易购集团股份有限公司 A kind of intelligence auxiliary mask method and system based on iterative study
CN111260649A (en) * 2020-05-07 2020-06-09 常州唯实智能物联创新中心有限公司 Close-range mechanical arm sensing and calibrating method
CN111523486A (en) * 2020-04-24 2020-08-11 重庆理工大学 Mechanical arm grabbing detection method based on improved CenterNet
CN111723782A (en) * 2020-07-28 2020-09-29 北京印刷学院 Deep learning-based visual robot grabbing method and system
WO2021165628A1 (en) * 2020-02-17 2021-08-26 Ariel Ai Ltd Generating three-dimensional object models from two-dimensional images
CN113894481A (en) * 2021-09-09 2022-01-07 中国科学院自动化研究所 Method and device for adjusting welding pose of complex space curve welding seam
CN114693661A (en) * 2022-04-06 2022-07-01 上海麦牙科技有限公司 Rapid sorting method based on deep learning
CN114998432A (en) * 2022-05-31 2022-09-02 杭州电子科技大学 YOLOv 5-based circuit board detection point positioning method
CN115816460A (en) * 2022-12-21 2023-03-21 苏州科技大学 Manipulator grabbing method based on deep learning target detection and image segmentation

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN110900598B (en) * 2019-10-15 2022-09-23 合肥工业大学 Robot three-dimensional motion space action simulation learning method and system
CN115335872A (en) * 2021-02-26 2022-11-11 京东方科技集团股份有限公司 Training method of target detection network, target detection method and device
US20230042756A1 (en) * 2021-10-09 2023-02-09 Southeast University Autonomous mobile grabbing method for mechanical arm based on visual-haptic fusion under complex illumination condition


Non-Patent Citations (3)

Title
A Reinforcement Learning-Based Incentive Mechanism for Task Allocation Under Spatiotemporal Crowdsensing; Zhaowei Liu et al.; IEEE Transactions on Computational Social Systems (Early Access); full text *
Research on arm motion recognition within Kinect-based human posture recognition under a LabVIEW environment; Liu Weiling; Liu Chang; Zhang Yaoyin; Chang Xiaoming; Yang Lingzhen; Electronic Measurement Technology (No. 23); full text *
An improved mechanical arm grasping method based on convolutional neural networks; Cai Chen; Wei Guoliang; Computer & Digital Engineering (No. 01); full text *

Also Published As

Publication number Publication date
CN116704017A (en) 2023-09-05

Similar Documents

Publication Publication Date Title
EP3407292B1 (en) Neural network point cloud generation system
EP3776462B1 (en) System and method for image-based target object inspection
CN116704017B (en) Mechanical arm pose detection method based on visual mixing
CN109840900B (en) Fault online detection system and detection method applied to intelligent manufacturing workshop
CN107064170A (en) One kind detection phone housing profile tolerance defect method
CN108182689A (en) The plate workpiece three-dimensional recognition positioning method in polishing field is carried applied to robot
CN109800689A (en) A kind of method for tracking target based on space-time characteristic fusion study
CN107186752A (en) A kind of compensation of undulation fishing robot system
CN107253192A (en) It is a kind of based on Kinect without demarcation human-computer interactive control system and method
CN116259002A (en) Human body dangerous behavior analysis method based on video
CN115330734A (en) Automatic robot repair welding system based on three-dimensional target detection and point cloud defect completion
CN112684797B (en) Obstacle map construction method
CN112785564B (en) Pedestrian detection tracking system and method based on mechanical arm
CN112377332B (en) Rocket engine polarity testing method and system based on computer vision
KR101141686B1 (en) rotation angle estimation apparatus and method for rotation angle estimation thereof
CN115464651A (en) Six groups of robot object grasping system
CN115494074A (en) Online detection method for surface defects of continuous casting slab
CN112598738A (en) Figure positioning method based on deep learning
CN113359738A (en) Mobile robot path planning method based on deep learning
EP4318394A1 (en) System and method for generating training image data for supervised machine learning, and program
WO2023100282A1 (en) Data generation system, model generation system, estimation system, trained model production method, robot control system, data generation method, and data generation program
US20230418257A1 (en) A method of monitoring industrial processing processes, corresponding apparatus and computer program product
EP4197711A1 (en) Cooperation system
Jin et al. Determination of defects for dynamic objects using instance segmentation
EP4339908A2 (en) Information processing device and information processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20230905

Assignee: Yantai Zhongneng Environmental Technology Co.,Ltd.

Assignor: Yantai new and old kinetic energy conversion Research Institute and Yantai demonstration base for the transfer and transformation of scientific and technological achievements

Contract record no.: X2024980007216

Denomination of invention: A visual hybrid based pose detection method for robotic arms

Granted publication date: 20231114

License type: Common License

Record date: 20240618

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20230905

Assignee: SHANDONG HENGHAO INFORMATION TECHNOLOGY Co.,Ltd.

Assignor: Yantai new and old kinetic energy conversion Research Institute and Yantai demonstration base for the transfer and transformation of scientific and technological achievements

Contract record no.: X2024980007899

Denomination of invention: A visual hybrid based pose detection method for robotic arms

Granted publication date: 20231114

License type: Common License

Record date: 20240625