CN111798516B - Method for detecting running state quantity and analyzing errors of bridge crane equipment

Info

Publication number
CN111798516B
CN111798516B (application CN202010618093.XA)
Authority
CN
China
Prior art keywords
coordinates
error
camera
mask
cnn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010618093.XA
Other languages
Chinese (zh)
Other versions
CN111798516A (en)
Inventor
梁敏健
刘桂雄
杨宁祥
戚政武
苏宇航
陈英红
杨帆
李继承
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Guangdong Inspection and Research Institute of Special Equipment Zhuhai Inspection Institute
Original Assignee
South China University of Technology SCUT
Guangdong Inspection and Research Institute of Special Equipment Zhuhai Inspection Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT and Guangdong Inspection and Research Institute of Special Equipment Zhuhai Inspection Institute
Priority to CN202010618093.XA
Publication of CN111798516A
Application granted
Publication of CN111798516B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/73 Image analysis: determining position or orientation of objects or cameras using feature-based methods
    • G06F 17/16 Complex mathematical operations: matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06N 3/045 Neural networks: combinations of networks
    • G06N 3/08 Neural networks: learning methods
    • G06T 7/80 Image analysis: analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V 10/267 Image preprocessing: segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 20/20 Scenes: scene-specific elements in augmented reality scenes
    • Y02T 10/40 Climate change mitigation technologies related to transportation: engine management systems


Abstract

The invention discloses a method for detecting the running state quantities of bridge crane equipment and analyzing their errors, which comprises the following steps: using the deep learning model Mask R-CNN, predict in each camera frame the locating boxes of crane components, the instance segmentation of component regions, and the key points of components and of natural calibration objects; establish a world coordinate system from the natural-calibration-object key points on the crane, take the pixel coordinates of the corresponding positions identified by Mask R-CNN as inputs for solving a PnP problem, and solve for the homography matrix of the coordinate transformation, i.e. the transformation matrix from the camera coordinates of the current frame to world coordinates; combine the coordinate transformation matrix with the key-position pixel coordinates predicted by Mask R-CNN, and solve for the world coordinates of the key points from the projective transformation properties of the camera; calculate the running state quantities of the crane from the computed world coordinates of the key points; and analyze the computation error of the key-point world coordinates by the differential-approximation method, obtaining an analytic expression of the error.

Description

Method for detecting running state quantity and analyzing errors of bridge crane equipment
Technical Field
The invention relates to the technical field of pattern recognition, and in particular to a method for identifying the running state quantities of a bridge crane based on monocular vision.
Background
Most existing techniques for measuring the running state quantities of bridge cranes rely on direct measurement by a variety of sensors. The sensors must be installed at key positions of the crane, which makes mounting and dismounting inconvenient, adds installation cost, and reduces the reusability of the sensors on other cranes. The detection method for the running state quantities of bridge crane equipment described here combines a deep learning algorithm with a monocular-vision solution of the camera-pose PnP problem. It makes full use of the advanced semantic understanding, high robustness and strong generalization of deep learning on images, so that the various running state quantities of a bridge crane can be obtained from monocular vision alone. Compared with the traditional scheme of installing a large number of sensors, the method needs only a monocular camera, which saves equipment cost; and because of the strong generalization of deep learning, no specific requirements are placed on the camera position, which improves the usability of the measuring device.
Disclosure of Invention
In order to solve the above technical problems, the invention aims to provide a measurement and error analysis method for the running state quantities of a bridge crane.
The aim of the invention is achieved by the following technical scheme:
a method for detecting and analyzing the running state quantity and the error of bridge crane equipment comprises the following steps:
a, predicting a positioning frame of a crane component, an example segmentation of a component area and a component key point and a natural calibration object key point in each frame of picture of a camera by using a deep learning model Mask R-CNN;
b, establishing a world coordinate system by utilizing natural calibration object key points on the crane, and taking pixel coordinates of corresponding positions identified by Mask R-CNN as input values for solving a PnP problem, and solving a homography matrix for obtaining coordinate transformation, namely obtaining a coordinate transformation matrix from camera coordinates of a current frame to world coordinates;
c, combining the coordinate transformation matrix and the key position pixel coordinates predicted by Mask R-CNN, and solving the world coordinates of the key points corresponding to the pixel coordinates by utilizing the projection transformation property of the camera;
d, calculating the crane running state quantity according to the calculated world coordinates of the key points;
and E, analyzing and solving the calculation error of the world coordinates of the key points by using a differential approximation error method, and obtaining an analysis expression of the calculation error.
One or more embodiments of the invention may have the following advantages over the prior art:
Compared with the traditional measurement scheme of installing a large number of sensors, the method needs only a monocular camera, which saves equipment cost; and because of the strong generalization of deep learning, no specific requirements are placed on the camera position, which improves the usability of the measuring device.
Drawings
FIG. 1 is a flow chart of the method for detecting the running state quantities of bridge crane equipment and analyzing their errors.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following examples and the accompanying drawings.
As shown in FIG. 1, the flow of the method for detecting the running state quantities of bridge crane equipment and analyzing their errors comprises a sample-set preparation and algorithm training stage, a real-time camera-pose calculation stage, a key-point world-coordinate calculation stage, a running-state-quantity calculation stage, and an error calculation stage. The method specifically comprises the following steps:
Step 10: using the deep learning model Mask R-CNN, predict in each camera frame the locating boxes of crane components, the instance segmentation of component regions, and the key points of components and of natural calibration objects.
Step 20: establish a world coordinate system from the natural-calibration-object key points on the crane, take the pixel coordinates of the corresponding positions identified by Mask R-CNN as inputs for solving a PnP problem, and solve for the homography matrix of the coordinate transformation, i.e. the transformation matrix from the camera coordinates of the current frame to world coordinates.
Step 30: combine the coordinate transformation matrix with the key-position pixel coordinates predicted by Mask R-CNN, and compute the world coordinates corresponding to the pixel coordinates from the projective transformation properties of the camera.
Step 40: calculate the crane running state quantities, such as the spreader inclination, the sling inclination, the trolley position and the cart position, from the computed world coordinates of the key points.
Step 50: analyze the computation error of the key-point world coordinates by the differential-approximation method and obtain an analytic expression of the error.
The step 10 specifically includes:
and (3) manufacturing a small sample data set of important position key points, component positioning frames and component area example segmentation of the crane. The key points comprise three or more points which are easy to identify and have constant relative positions on the crane, and world coordinates of the points are measured to serve as natural calibration objects. And performing migration learning on the Mask R-CNN by using the prepared small sample data set. The backbone network of Mask R-CNN is chosen to be a pre-trained ResNet-50 on the ImageNet dataset, combined with a feature pyramid (Feature Pyramid Networks, FPN)The structure is used for fully extracting the characteristic information of the images under different resolutions. The feature map output by the ResNet-50FPN backbone network and an anchor block (anchor) on the feature map are used as the input of a regional providing network layer (Region Proposal Network, RPN), the layer judges whether an anchor belongs to a positive class (the region contains a target object) or a negative class (the region does not contain the target object) through softmax, and the precise regression block is obtained by utilizing the coordinate error of the vertex of the regression anchor block, wherein the loss function of the vertex regression of the anchor block is shown as the following formula, and phi (A i ) Is the characteristic vector composed of the characteristic map of the anchor point frame region, W T * Is a parameter that needs to be learned and,the vertex coordinate value of the true anchor point frame is as follows:
and obtaining a characteristic diagram with a fixed size of 7 multiplied by 7 through RoIALign by using the characteristic diagram of the regression frame part, wherein the characteristic diagram is used as input of a positioning frame regression and classification branch, an example segmentation prediction branch and a key point prediction branch in the subsequent steps. Wherein, the Roialign layer reverse formula is as follows, x i Representing pixel points on the characteristic diagram before pooling, y rj The j-th point representing the pooled r-th candidate region, d () represents the distance between the two points, i * (r, j) represents the coordinates of the point where the maximum pixel value selected at the time of maximum pooling is located, and Δh and Δw represent x i And i * (r, j) difference in abscissa and ordinate:
the positioning frame regression and classification branch consists of two paths of full-connection layers, and one path of output 8-dimensional vector respectively represents the offset of each vertex coordinate component of the positioning frame; the other path of output vector dimension is the number of categories and is used for predicting the categories of the objects in the positioning frame. One-way sub-branch for regression offsetIts loss function L box The least square error of the true value and the predicted value is the same as the loss function of the peak regression of the anchor point frame; the classifying sub-branch loss function adopts the following cross entropy loss function L cls
Wherein p is out The output representing the classification sub-branch is a function of the network weight; n (N) ap Representing the number of samples; p is p n Representing a true class label.
The instance-segmentation branch and the key-point prediction branch both consist of deconvolution layers and output a spatial probability distribution map that corresponds to the original image at a scaled resolution, representing the probability that each position belongs to a given class. Their loss functions $L_{mask}$ and $L_{point}$ are cross-entropy losses identical in form to the classification sub-branch loss.
The total loss function of the whole Mask R-CNN network is $L = L_{box} + L_{cls} + L_{mask} + L_{point}$. On the annotated small-sample bridge crane data set, an optimizer based on a gradient-descent algorithm, such as Adam or SGD, is used to minimize the total loss, yielding the optimal Mask R-CNN network parameters and completing model training.
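A sketch of this training stage: in training mode, torchvision's Mask R-CNN returns the per-branch losses as a dictionary, so minimizing their sum with Adam corresponds to minimizing the total loss $L$ above. The data-loader format (image list plus detection target dicts) follows torchvision's convention, and the learning rate is an assumption.

```python
import torch

def train_one_epoch(model, data_loader, device, lr=1e-4):
    model.train()
    optimizer = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)    # dict of branch losses (box, cls, mask, RPN)
        total_loss = sum(loss_dict.values())  # plays the role of L above
        optimizer.zero_grad()
        total_loss.backward()
        optimizer.step()
```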
The step 20 specifically includes:
the key point pixel coordinates predicted by Mask R-CNN and serving as natural calibration objects have the following relation with corresponding world coordinates:
wherein (w, h, 1) represents the predicted point pixel coordinates, d x 、d y Pixel equivalent of x-axis and y-axis of camera sensor, respectively, f represents camera image Fang Jiaoju, (w) 0 ,h 0 ) For the imaging center pixel coordinates,z is a transformation matrix of the camera coordinate system with respect to the world coordinate system c Representing the z-axis component of the camera coordinate system origin in the world coordinate system.
According to this relation, when the number of predicted key points is n ≥ 3, the homography matrix on the right side of the equation can be solved, giving the transformation matrix of the calibrated camera with respect to world coordinates at any moment; this completes one solution of the PnP problem. The equation is solved with the native OpenCV API solvePnPRansac; when the number of selected natural-calibration coordinate points is far more than 3, the solver is considerably more robust to noise.
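A minimal sketch of one PnP solution with the OpenCV API named above; the point coordinates and intrinsics are illustrative assumptions, and note that solvePnPRansac requires at least four point pairs:

```python
import cv2
import numpy as np

# Measured world coordinates (metres) of the natural-calibration key points.
object_pts = np.array([[0.0, 0.0, 0.0],
                       [1.2, 0.0, 0.0],
                       [1.2, 0.8, 0.0],
                       [0.0, 0.8, 0.3]], dtype=np.float64)
# Pixel coordinates of the same points as predicted by Mask R-CNN.
image_pts = np.array([[412.0, 305.0],
                      [698.0, 311.0],
                      [702.0, 498.0],
                      [407.0, 489.0]], dtype=np.float64)
K = np.array([[1200.0, 0.0, 640.0],   # intrinsics from camera calibration
              [0.0, 1200.0, 360.0],
              [0.0,    0.0,   1.0]])
dist = np.zeros(5)                    # assume negligible lens distortion

ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)            # 3x3 rotation, world to camera
# (R, tvec) is the per-frame camera transformation used in the next steps.
```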
The step 30 specifically includes:
With the camera transformation matrix of the current moment available, the world coordinates of any key point whose motion-surface parameters in the world coordinate system are known can be recovered by inverting the camera projection rule; such a key point has only two degrees of freedom of motion. The pixel coordinates and world coordinates of any two-degree-of-freedom key point satisfy the following relations, where, for brevity and without loss of generality, the motion surface is assumed to be a plane:

$$z_c \,\tilde{p} = K \begin{pmatrix} R & \vec{T} \end{pmatrix} \begin{pmatrix} \vec{P}_w \\ 1 \end{pmatrix}, \qquad \vec{n}^{\mathrm{T}} \vec{P}_w = h$$

where $\tilde{p}$ denotes the homogeneous pixel coordinates, $K$ the camera intrinsic matrix defined above, $\vec{P}_w$ the corresponding world coordinates, $\vec{n}$ the normal vector of the key point's motion plane in the world coordinate system, and $h$ the intercept of the plane equation. Together these relations express the whole process of computing $\vec{P}_w$ from $\tilde{p}$. The crane key-point positions predicted by Mask R-CNN, i.e. the predicted pixel coordinates of the key points, combined with the parameters of the motion planes of those points in the world coordinate system, are substituted into this system of equations to obtain the world coordinates of the key points at any moment.
The step 40 specifically includes:
calculating the running state quantity of the bridge crane on the basis of the calculated world coordinates of the key points: trolley position, trolley speed, spreader inclination, and sling inclination.
The trolley position, represented by the y-axis of the average point of its profile vertices, the velocity is then obtained by differentiating the y-axis: v sc =fps×Δy, where FPS represents the frame rate, determined by the camera sampling rate, the algorithm calculation speed. When the outline vertex lacks a pre-test value or the prediction error is larger, the y-axis component y of the centroid world coordinate of the result is segmented by using the trolley region Mask R-CNN example m Instead of computing.
The cart position is represented by the z component of the translation transformation vector T, and the running speed is calculated by: v bc =FPS×Δz。
The inclinations of the spreader and the sling are calculated as the angle between the vertical axis and the direction of the line through the predicted key points:

$$\theta = \arccos \frac{\left| \left( \vec{P}_a - \vec{P}_b \right)_y \right|}{\left\| \vec{P}_a - \vec{P}_b \right\|}$$

where $\vec{P}_a$ and $\vec{P}_b$ are taken from the series of predicted key points on the line of the sling, or from the series of key points on the centre line of the spreader; $\|\cdot\|$ denotes the vector norm, and $(\cdot)_x$ and $(\cdot)_y$ denote the x and y components of a vector.
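A sketch of these state-quantity formulas, assuming the positions and key points are already expressed in the world frame defined above; the axis conventions (y for the trolley travel and vertical reference, z for the cart travel) and the example values are assumptions of this illustration:

```python
import numpy as np

def speed(prev_coord, curr_coord, fps):
    # v = FPS * delta, as in v_sc = FPS * dy (trolley) and v_bc = FPS * dz (cart).
    return fps * (curr_coord - prev_coord)

def inclination_deg(p_a, p_b):
    """Tilt of the sling/spreader key-point line from the y axis, in degrees."""
    d = np.asarray(p_a, dtype=float) - np.asarray(p_b, dtype=float)
    return np.degrees(np.arccos(abs(d[1]) / np.linalg.norm(d)))

# Example: two spreader centre-line key points in world coordinates.
print(inclination_deg([0.10, 2.0, 0.0], [0.00, 0.0, 0.0]))  # ~2.9 degrees
```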
The step 50 specifically includes:
the error of the world coordinates corresponding to the two-degree-of-freedom key points is obtained by the pixel coordinates through the current analysis, and the output error can be approximately represented by a function differential value when the input error is considered to be smaller:
wherein,dh is the measurement error of the motion plane parameters of the key points; dR, & lt + & gt>For the pose calculation errors of the camera, solvers such as a solvePnPRansac and the like give corresponding algorithm solving errors which can be used as pose calculation errors to take values; />Calibrating errors for internal parameters of the camera; />The prediction error of the key point can be the prediction probability of Mask R-CNN on the key point.
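When the analytic partial derivatives are cumbersome to expand, the same first-order estimate can be evaluated numerically. A minimal sketch follows, assuming all inputs (pixel coordinates, pose, intrinsics, plane parameters) are stacked into one vector; that stacking, the function name and the step size are assumptions of this illustration:

```python
import numpy as np

def differential_error(f, x0, dx, eps=1e-6):
    """First-order error propagation dP ~= J dx for y = f(x).

    f  : maps the stacked input vector to the key-point world coordinates;
    x0 : nominal input vector; dx : vector of input errors.
    The Jacobian J is approximated by central finite differences.
    """
    x0 = np.asarray(x0, dtype=float)
    y0 = np.atleast_1d(np.asarray(f(x0), dtype=float))
    J = np.zeros((y0.size, x0.size))
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step[i] = eps
        J[:, i] = (np.asarray(f(x0 + step)) - np.asarray(f(x0 - step))) / (2 * eps)
    return J @ np.asarray(dx, dtype=float)  # approximate world-coordinate error
```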
Although the embodiments of the present invention are described above, the embodiments are only used for facilitating understanding of the present invention, and are not intended to limit the present invention. Any person skilled in the art can make any modification and variation in form and detail without departing from the spirit and scope of the present disclosure, but the scope of the present disclosure is still subject to the scope of the appended claims.

Claims (8)

1. A method for detecting the running state quantities of bridge crane equipment and analyzing their errors, characterized by comprising the following steps:
A. using the deep learning model Mask R-CNN, predicting in each camera frame the locating boxes of crane components, the instance segmentation of component regions, and the key points of components and of natural calibration objects;
B. establishing a world coordinate system from the natural-calibration-object key points on the crane, taking the pixel coordinates of the corresponding positions identified by Mask R-CNN as inputs for solving a PnP problem, and solving for the homography matrix of the coordinate transformation, i.e. the transformation matrix from the camera coordinates of the current frame to world coordinates;
C. combining the coordinate transformation matrix with the key-position pixel coordinates predicted by Mask R-CNN, and solving for the world coordinates of the key points from the projective transformation properties of the camera;
D. calculating the crane running state quantities from the computed world coordinates of the key points;
E. analyzing the computation error of the key-point world coordinates by the differential-approximation method and obtaining an analytic expression of the error.
2. The method for detecting the running state quantities and analyzing the errors of bridge crane equipment according to claim 1, wherein step A specifically comprises:
preparing a small-sample data set of the key points at important positions of the crane, the component locating boxes, and the instance segmentation of the component regions; the key points comprising three or more easily identifiable points whose relative positions on the crane are constant, the world coordinates of these points being measured so that they serve as natural calibration objects;
performing transfer learning on Mask R-CNN with the prepared small-sample data set; the backbone network of Mask R-CNN being a ResNet-50 pre-trained on the ImageNet data set, combined with a feature pyramid structure to extract the feature information of the image at different resolutions; the feature map output by the ResNet-50-FPN backbone, together with the anchor boxes on it, being fed into the region proposal network layer, which judges through softmax whether an anchor belongs to the positive or the negative class and then obtains an accurate regression box by regressing the coordinate errors of the anchor-box vertices, the loss function of the anchor-box vertex regression being

$$L_{\mathrm{reg}} = \sum_i \left\| t_i^* - W_*^{\mathrm{T}} \phi(A_i) \right\|^2$$

where $\phi(A_i)$ is the feature vector formed from the feature map of the anchor-box region, $W_*^{\mathrm{T}}$ are the parameters to be learned, and $t_i^*$ are the vertex coordinates of the true anchor box;
passing the feature map of the regression-box region through RoIAlign to obtain a feature map of fixed size 7 × 7, used as the input to the locating-box regression and classification branch, the instance-segmentation prediction branch, and the key-point prediction branch; the backward formula of the RoIAlign layer being

$$\frac{\partial L}{\partial x_i} = \sum_r \sum_j \left[\, d\big(i, i^*(r,j)\big) < 1 \,\right] \left(1 - \Delta h\right)\left(1 - \Delta w\right) \frac{\partial L}{\partial y_{rj}}$$

where $x_i$ denotes a pixel on the feature map before pooling, $y_{rj}$ denotes the $j$-th point of the $r$-th pooled candidate region, $d(\cdot)$ denotes the distance between two points, $i^*(r,j)$ denotes the coordinates of the point holding the maximum pixel value selected during max pooling, and $\Delta h$ and $\Delta w$ denote the differences between the abscissae and ordinates of $x_i$ and $i^*(r,j)$.
3. The method for detecting the running state quantities and analyzing the errors of bridge crane equipment according to claim 2, wherein:
the locating-box regression and classification branch consists of two fully connected paths, one path outputting an 8-dimensional vector whose components are the offsets of the vertex coordinates of the locating box, and the dimension of the other path's output vector equalling the number of categories, used to predict the category of the object inside the locating box; for the sub-branch regressing the offsets, the loss function $L_{box}$ is the least-squares error between the true and predicted values, the same form as the anchor-box vertex-regression loss; the classification sub-branch adopts the cross-entropy loss $L_{cls}$:

$$L_{cls} = -\frac{1}{N_{ap}} \sum_{n=1}^{N_{ap}} \left[\, p_n \log p_{out} + (1 - p_n) \log(1 - p_{out}) \,\right]$$

where $p_{out}$ denotes the output of the classification sub-branch, a function of the network weights; $N_{ap}$ denotes the number of samples; and $p_n$ denotes the true class label.
4. The method for detecting the running state quantities and analyzing the errors of bridge crane equipment according to claim 2, wherein:
the instance-segmentation branch and the key-point prediction branch both consist of deconvolution layers and output a spatial probability distribution map corresponding to the original image at a scaled resolution, representing the probability that each position belongs to a given class; their loss functions $L_{mask}$ and $L_{point}$ are cross-entropy losses identical in form to the classification sub-branch loss;
the total loss function of the whole Mask R-CNN network is $L = L_{box} + L_{cls} + L_{mask} + L_{point}$; on the annotated small-sample bridge crane data set, an Adam optimizer based on gradient descent is used to minimize the total loss function, yielding the optimal Mask R-CNN network parameters and completing the training of the model.
5. The method for detecting the running state quantities and analyzing the errors of bridge crane equipment according to claim 1, wherein in step B:
the pixel coordinates of the key points that Mask R-CNN predicts as natural calibration objects are related to the corresponding world coordinates by

$$z_c \begin{pmatrix} w \\ h \\ 1 \end{pmatrix} = \begin{pmatrix} f/d_x & 0 & w_0 \\ 0 & f/d_y & h_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} R & \vec{T} \end{pmatrix} \begin{pmatrix} x_w \\ y_w \\ z_w \\ 1 \end{pmatrix}$$

where $(w, h, 1)$ are the homogeneous pixel coordinates of the predicted point; $d_x$ and $d_y$ are the pixel equivalents of the camera sensor along the x and y axes; $f$ is the image-side focal length of the camera; $(w_0, h_0)$ are the pixel coordinates of the imaging centre; $(R \;\; \vec{T})$ is the transformation matrix of the camera coordinate system with respect to the world coordinate system; and $z_c$ is the z-axis component of the point in the camera coordinate system;
according to this relation, when the number of predicted key points is n ≥ 3, the homography matrix on the right side of the equation can be solved, giving the transformation matrix of the calibrated camera with respect to world coordinates at any moment and completing one solution of the PnP problem.
6. The method for detecting the running state quantities and analyzing the errors of bridge crane equipment according to claim 1, wherein step C specifically comprises:
on the basis of the obtained camera transformation matrix of the current moment, recovering, by inverting the camera projection rule, the world coordinates of key points whose motion-surface parameters in the world coordinate system are known, such key points having only two degrees of freedom of motion; the pixel coordinates and world coordinates of any two-degree-of-freedom key point satisfying the following relations, where, for brevity and without loss of generality, the motion surface is assumed to be a plane:

$$z_c \,\tilde{p} = K \begin{pmatrix} R & \vec{T} \end{pmatrix} \begin{pmatrix} \vec{P}_w \\ 1 \end{pmatrix}, \qquad \vec{n}^{\mathrm{T}} \vec{P}_w = h$$

these relations expressing the whole process of computing $\vec{P}_w$ from $\tilde{p}$, where $\tilde{p}$ denotes the homogeneous pixel coordinates, $K$ the camera intrinsic matrix, $\vec{P}_w$ the corresponding world coordinates, $\vec{n}$ the normal vector of the key point's motion plane in the world coordinate system, and $h$ the intercept of the plane equation;
the crane key-point positions predicted by Mask R-CNN, i.e. the predicted pixel coordinates of the key points, combined with the parameters of the motion planes of those points in the world coordinate system, being substituted into this system of equations to obtain the world coordinates of the key points at any moment.
7. The method for detecting the running state quantities and analyzing the errors of bridge crane equipment according to claim 1, wherein step D specifically comprises:
calculating the running state quantities of the bridge crane on the basis of the computed key-point world coordinates, the running state quantities comprising the trolley position, trolley speed, spreader inclination and sling inclination;
the trolley position being represented by the y-axis component of the average of its contour vertices, and the trolley speed being obtained by differencing that component between frames: $v_{sc} = \mathrm{FPS} \times \Delta y$, where FPS is the frame rate, determined by the camera sampling rate and the computation speed of the algorithm; when contour vertices lack predicted values, or the prediction error is large, the y-axis component $y_m$ of the centroid world coordinate of the Mask R-CNN trolley-region instance-segmentation result being used in the calculation instead;
the cart position being represented by the z component of the translation vector $\vec{T}$, and the cart speed being calculated as $v_{bc} = \mathrm{FPS} \times \Delta z$;
the inclinations of the spreader and the sling being calculated as

$$\theta = \arccos \frac{\left| \left( \vec{P}_a - \vec{P}_b \right)_y \right|}{\left\| \vec{P}_a - \vec{P}_b \right\|}$$

where $\vec{P}_a$ and $\vec{P}_b$ are taken from the series of predicted key points on the line of the sling, or from the series of key points on the centre line of the spreader; $\|\cdot\|$ denotes the vector norm, and $(\cdot)_x$ and $(\cdot)_y$ denote the x and y components of a vector.
8. The method for detecting the running state quantities and analyzing the errors of bridge crane equipment according to claim 1, wherein step E specifically comprises:
analyzing the error of the world coordinates corresponding to the two-degree-of-freedom key points as obtained from the pixel coordinates; when the input errors are small, the output error being approximated by the function differential:

$$d\vec{P}_w = \frac{\partial \vec{P}_w}{\partial \tilde{p}}\, d\tilde{p} + \frac{\partial \vec{P}_w}{\partial R}\, dR + \frac{\partial \vec{P}_w}{\partial \vec{T}}\, d\vec{T} + \frac{\partial \vec{P}_w}{\partial K}\, dK + \frac{\partial \vec{P}_w}{\partial \vec{n}}\, d\vec{n} + \frac{\partial \vec{P}_w}{\partial h}\, dh$$

where $d\vec{n}$ and $dh$ are the measurement errors of the key point's motion-plane parameters; $dR$ and $d\vec{T}$ are the pose-calculation errors of the camera, for which solvers such as solvePnPRansac give corresponding algorithmic solution errors that can be used as values; $dK$ is the calibration error of the camera intrinsics; and $d\tilde{p}$ is the prediction error of the key point, which can be taken from the prediction probability that Mask R-CNN assigns to the key point.
CN202010618093.XA 2020-07-01 2020-07-01 Method for detecting running state quantity and analyzing errors of bridge crane equipment Active CN111798516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010618093.XA CN111798516B (en) 2020-07-01 2020-07-01 Method for detecting running state quantity and analyzing errors of bridge crane equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010618093.XA CN111798516B (en) 2020-07-01 2020-07-01 Method for detecting running state quantity and analyzing errors of bridge crane equipment

Publications (2)

Publication Number Publication Date
CN111798516A (en) 2020-10-20
CN111798516B (en) 2023-12-22

Family

ID=72810767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010618093.XA Active CN111798516B (en) 2020-07-01 2020-07-01 Method for detecting running state quantity and analyzing errors of bridge crane equipment

Country Status (1)

Country Link
CN (1) CN111798516B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232279B (en) * 2020-11-04 2023-09-05 杭州海康威视数字技术股份有限公司 Personnel interval detection method and device
CN112816496B (en) * 2021-01-05 2022-09-23 广州市华颉电子科技有限公司 Automatic optical detection method and device for interface assembly quality of automobile domain controller

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019192397A1 (en) * 2018-04-04 2019-10-10 华中科技大学 End-to-end recognition method for scene text in any shape
CN110930454A (en) * 2019-11-01 2020-03-27 北京航空航天大学 Six-degree-of-freedom pose estimation algorithm based on boundary box outer key point positioning

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019192397A1 (en) * 2018-04-04 2019-10-10 华中科技大学 End-to-end recognition method for scene text in any shape
CN110930454A (en) * 2019-11-01 2020-03-27 北京航空航天大学 Six-degree-of-freedom pose estimation algorithm based on boundary box outer key point positioning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Vision-based pose measurement of an excavator working device; 马伟; 宫乐; 冯浩; 殷晨波; 周俊静; 曹东辉; Machine Design and Research (No. 05); full text *
Artificial-landmark-assisted pose estimation method for autonomous flight of underground unmanned aerial vehicles; 单春艳; 杨维; 耿翠博; Journal of China Coal Society (No. S1); full text *

Also Published As

Publication number Publication date
CN111798516A (en) 2020-10-20

Similar Documents

Publication Publication Date Title
CN112380952B (en) Power equipment infrared image real-time detection and identification method based on artificial intelligence
CN112199993B (en) Method for identifying transformer substation insulator infrared image detection model in any direction based on artificial intelligence
CN106023257B (en) A kind of method for tracking target based on rotor wing unmanned aerial vehicle platform
CN113139453B (en) Orthoimage high-rise building base vector extraction method based on deep learning
CN107871119A (en) A kind of object detection method learnt based on object space knowledge and two-stage forecasting
CN108734143A (en) A kind of transmission line of electricity online test method based on binocular vision of crusing robot
CN112288758B (en) Infrared and visible light image registration method for power equipment
CN113920107A (en) Insulator damage detection method based on improved yolov5 algorithm
CN111461213B (en) Training method of target detection model and target rapid detection method
CN113435282B (en) Unmanned aerial vehicle image ear recognition method based on deep learning
CN111798516B (en) Method for detecting running state quantity and analyzing errors of bridge crane equipment
CN112330593A (en) Building surface crack detection method based on deep learning network
CN112395972B (en) Unmanned aerial vehicle image processing-based insulator string identification method for power system
CN105957107A (en) Pedestrian detecting and tracking method and device
WO2020093631A1 (en) Antenna downtilt angle measurement method based on depth instance segmentation network
CN115937626B (en) Automatic generation method of paravirtual data set based on instance segmentation
CN115639248A (en) System and method for detecting quality of building outer wall
CN114998251A (en) Air multi-vision platform ground anomaly detection method based on federal learning
CN113313107A (en) Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge
CN117351499B (en) Split-combined indication state identification method, system, computer equipment and medium
CN112069997B (en) Unmanned aerial vehicle autonomous landing target extraction method and device based on DenseHR-Net
CN116051808A (en) YOLOv 5-based lightweight part identification and positioning method
CN109272021A (en) A kind of intelligent mobile robot air navigation aid based on width study
CN114463628A (en) Deep learning remote sensing image ship target identification method based on threshold value constraint
CN110910450A (en) Method for carrying out 3D target detection based on mixed feature perception neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant