CN115849202A - Intelligent crane operation target identification method based on digital twin technology - Google Patents

Intelligent crane operation target identification method based on digital twin technology

Info

Publication number
CN115849202A
CN115849202A
Authority
CN
China
Prior art keywords
target
image
crane operation
digital twin
identification method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310157211.5A
Other languages
Chinese (zh)
Other versions
CN115849202B (en)
Inventor
景阔
孟红军
王鹏飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Nuclear Xudong Electric Co., Ltd.
Original Assignee
Henan Nuclear Xudong Electric Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Nuclear Xudong Electric Co., Ltd.
Priority to CN202310157211.5A
Publication of CN115849202A
Application granted
Publication of CN115849202B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This scheme provides an intelligent crane operation target identification method based on digital twin technology, which automatically detects images of target facilities in the real environment and obtains their positions, so that virtual mappings of the target facilities are reconstructed in the digital twin virtual environment. The scheme uses an optimized neural network model for the automatic image detection of the target facilities; the method can automatically extract targets from the real environment and complete the target position calculation, thereby improving the modeling efficiency of the digital twin virtual environment.

Description

Intelligent crane operation target identification method based on digital twin technology
Technical Field
The invention belongs to the technical field of automatic control of cranes, and particularly relates to an intelligent crane operation target identification method based on a digital twin technology.
Background
China's industrial construction has passed through several development stages of mechanization, automation, and digitization; production processes and plant management efficiency have advanced rapidly and made a great contribution to China's industry and urban development. In recent years, with the progress of projects such as smart cities, smart industry, and Digital China, social development has placed higher demands on plant managers. At present the industry as a whole remains a labor-intensive traditional one with a low level of industrial modernization, suffering from long construction periods, high resource and energy consumption, low production efficiency, and low technological content. Under the great wave of Industry 4.0, how to further raise the level of industrialization and automation and make factory operation more intelligent, so as to achieve safer, more efficient, and more energy-saving operation, has become a new direction for development and research.
By constructing an intelligent crane operation education platform based on digital twin technology, and by improving crane operation teaching and research measures based on digital information, automatic control, equipment, communication transmission, and an AI intelligent analysis model, the experimental teaching level of related specialties such as mechanical design and manufacturing and automation can be comprehensively improved.
In the process of establishing a virtualized, abstracted digital twin model for crane operation, crane operation target identification is the key to the model. An automatic method is needed to identify the target facilities present in the real environment, such as fire-fighting facilities, power systems, air-conditioning systems, security systems, valves, lighting, power equipment, IT equipment, office supplies, buildings, toilets, and landmarks, and to provide information such as the coordinates of this equipment to the virtual reality scene. An operator can then carry out crane operation training in a virtual environment whose operating experience is similar to that of the real environment, achieving the purpose of training.
Disclosure of Invention
In order to solve one or more of the above problems, the invention provides an intelligent crane operation target identification method based on digital twin technology, which automatically detects images of target facilities in the real environment and obtains their positions, so that virtual mappings of the target facilities are reconstructed in the digital twin virtual environment.
An intelligent crane operation target identification method based on digital twin technology, in which:

Cameras $C_1, C_2, \ldots, C_n$ capture images recorded as $I_1, I_2, \ldots, I_n$ respectively. Applying the template operation to each image sample $O$ yields the sample's response maps in 8 directions, $G_1, \ldots, G_8$, namely:

$$G_k = O * T_k, \quad k = 1, \ldots, 8$$

Singular value decomposition is performed on each response map to obtain its singular values arranged from large to small, $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$, where $n$ is the rank of the response map $G_k$; they are normalized to the interval [0, 1] as:

$$\bar{\sigma}_m = \sigma_m / \sigma_1, \quad m = 1, \ldots, n$$

Further, the dimension characteristic value $d$ of the image sample is calculated from the normalized singular values of the eight response maps; the dimension characteristic value reflects the difference between the response maps of the image sample in different directions.

A neural network model is constructed, in which a fully connected layer $F$ is established after the feature extraction layer $F_0$ of the neural network model:

$$F = f(W F_0 + b)$$

where $W$ is the parameter matrix of the fully connected layer $F$, $b$ is the corresponding linear bias parameter, and $f$ is a nonlinear activation function.

When $d$ is below the threshold $\theta$, the following operation is performed:

$$W = V \Lambda V^{-1}$$

where $V$ is the matrix of eigenvectors of $W$ and $\Lambda$ is the diagonal matrix formed by its eigenvalues:

$$\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_Q)$$

where $Q$ is the total number of eigenvalues of $W$, and $\lambda_1, \ldots, \lambda_Q$ are arranged in order from large to small. When the dimension characteristic value is below the threshold, the last $Q - q$ eigenvalues $\lambda_{q+1}, \ldots, \lambda_Q$ are set to 0, reducing the parameter count of the original matrix $W$.
An output layer is connected after the fully connected layer, classifying the input image sample $O$.
The positions of the different types of targets in the images collected by the cameras are acquired; taking the camera coordinates of the targets as reference, virtual targets are drawn at the corresponding positions in the virtual scene, and the digital twin model of the crane operation targets is established.
The method further comprises obtaining the image coordinates of a target facility in the images according to step 1 and calculating the coordinates of the target facility in the real environment, namely the coordinates in the camera coordinate system, providing the basis for establishing a model of the target facility at the corresponding position in the virtual environment.
Suppose a target facility has image coordinates $(u_i, v_i)$ in camera $C_i$, obtained according to step 1, and image coordinates $(u_j, v_j)$ in camera $C_j$, obtained according to step 1:

$$s_i (u_i, v_i, 1)^T = K [R_i \mid t_i] P, \qquad s_j (u_j, v_j, 1)^T = K [R_j \mid t_j] P$$

After eliminating the scale factors $s_i$ and $s_j$, the only unknown parameters in the above equations are the camera coordinates $x$, $y$, $z$ of the target, which are obtained by solving the equation system.
The method further comprises establishing a virtual mapping of the target facility at the coordinates $(x, y, z)$ in the virtual environment.
When the target in the real scene changes, the coordinates of the target are updated by adopting the method, and the target in the virtual scene is redrawn, so that the linkage of the target in the real scene and the target in the virtual scene is realized.
The neural network model is a 4-layer structure.
Four of the templates are templates in the axial direction of the image.
Four of the templates are templates in the diagonal direction of the image.
A computer apparatus for implementing the method described above.
The invention has the advantages that:
1. The method uses an optimized neural network model to automatically detect images of target facilities in the real environment and obtain their positions, so that virtual mappings of the target facilities are reconstructed in the digital twin virtual environment. The method can automatically extract targets from the real environment and complete the target position calculation, improving the modeling efficiency of the digital twin virtual environment.
2. The invention provides a layered space-invariant target detection model and method, which use optimized templates to perform a dimension test on image samples of different target types and select the network model according to the feature dimension of the target. If the complexity is low, fewer network layers are connected when the network model is constructed; otherwise, more network layers are connected. This reduces the overall complexity of the network and improves detection performance.
3. A spatial structure mapping layer is added as the first layer of the model to handle the differences in target appearance caused by different shooting angles and to extract the spatial structure features of the input image, so as to better cope with the appearance deformation caused by observing the target from different angles, balancing recognition efficiency and accuracy.
Detailed Description
Step 1: Carry out target labeling according to the visual appearance of the various target facilities in the real environment and determine the positions of the target facilities in the image, realizing the hierarchical visual target labeling method.
S1.1: Preparation.
Collect known sample images of the several categories of key target facilities in the real environment, such as fire-fighting facilities, power systems, air-conditioning systems, security systems, valves, lighting, power equipment, IT equipment, office supplies, buildings, toilets, landmarks, and the like, and use them as labeled training samples.
S1.2: Configure the image acquisition environment.
Two or more cameras are arranged in the field environment of the training site so that each target facility to be identified can be captured by at least two cameras, in order to complete the reconstruction process in the subsequent steps.
The camera coordinate system of one of the cameras is taken as the reference, establishing the camera coordinate system datum. The reference camera is recorded as $C_1$, and the other cameras are numbered in turn as $C_2, \ldots, C_n$.

The coordinates of a target in the coordinate system of the reference camera $C_1$ are expressed in homogeneous form as:

$$P_1 = (x_1, y_1, z_1, 1)^T$$

The coordinates of the target in the other cameras can then be derived as:

$$P_j = \begin{bmatrix} R_j & t_j \\ \mathbf{0}^T & 1 \end{bmatrix} P_1, \quad j = 2, \ldots, n \quad (1)$$

where $R_j$ is a 3×3 rotation matrix and $t_j$ is a 3×1 translation vector; together they express the relative relationship between camera $C_j$ and camera $C_1$. The rotation matrix and translation vector of each camera can be obtained by calibration in advance.
The coordinates of a target in the image captured by camera $C_j$ are represented in homogeneous form as:

$$p_j = (u_j, v_j, 1)^T$$

and the image coordinates and the coordinates in the corresponding camera coordinate system satisfy a linear relationship:

$$s\, p_j = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_j & t_j \end{bmatrix} P_1 \quad (2)$$

where $R_j$ and $t_j$ are the rotation matrix and translation vector of the corresponding camera; $f_x$, $f_y$, $c_x$, $c_y$ are the internal parameters of the camera, which are related to the optical parameters of the camera lens and to the imaging device, and cameras of the same model can be used so that the internal parameters of all cameras are approximately equal; $s$ is a scale factor. The internal parameters are likewise obtained by calibration.
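As an illustration of this camera model, the following minimal Python sketch projects a target point from the reference frame of $C_1$ into another camera's image. The standard pinhole form of Equation 2 is assumed, and all numeric values (intrinsics, pose, target position) are invented for the example:

```python
import numpy as np

# Intrinsic matrix built from the four internal parameters fx, fy, cx, cy
# (values here are illustrative, not from the patent).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_c1, R, t):
    """Project a 3D point given in the reference camera C1's frame into
    the image of a camera with extrinsics (R, t) relative to C1."""
    p = R @ point_c1 + t          # transform into the camera's own frame (Eq. 1)
    uvw = K @ p                   # apply the intrinsics (Eq. 2)
    s = uvw[2]                    # scale factor
    return uvw[:2] / s            # inhomogeneous image coordinates (u, v)

# Example: a target 5 m in front of the reference camera, seen by a second
# camera rotated 10 degrees about the vertical axis and shifted 0.5 m sideways.
theta = np.deg2rad(10.0)
R2 = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
               [ 0.0,           1.0, 0.0          ],
               [-np.sin(theta), 0.0, np.cos(theta)]])
t2 = np.array([-0.5, 0.0, 0.0])

target = np.array([0.2, -0.1, 5.0])
print(project(target, np.eye(3), np.zeros(3)))  # image coordinates in C1
print(project(target, R2, t2))                  # image coordinates in C2
```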
S1.3: Capture environment images and detect the relevant targets in the images.
The images captured by cameras $C_1, C_2, \ldots, C_n$ are recorded as $I_1, I_2, \ldots, I_n$ respectively. The various types of targets are detected, and their positions output, in each of the images $I_1, I_2, \ldots, I_n$.
The image target detection method commonly used in industry at present is the convolutional neural network, which builds a neural network model around convolution kernels, can largely overcome the interference of noise on the target image, and detects targets with high precision. However, the method faces two key problems in the application scenario of the present invention. First, the crane operation scene is complex, and the targets to be detected are numerous and widely distributed, so the angles at which the cameras capture the targets differ greatly and the appearance of a target varies greatly between images; a two-dimensional convolutional network has low robustness to appearance differences caused by shooting angle, such as spatial rotation. Second, a convolutional neural network controls detection and identification precision through the size of the convolution kernels and the number of convolutional layers, and larger kernels and more layers increase the computational load while improving precision. Because the invention involves many target types, some with high feature dimensions and some with low, and the target types with low feature dimensions do not require a very complicated network structure, balancing network complexity against performance is also a problem to be considered in the application scenario of the invention.
In order to solve these two problems, the invention provides a layered space-invariant target detection model and method. First, a dimension test is performed on the image samples of the different target types, and the network model is selected according to the feature dimension of the target. Second, a spatial structure mapping layer is added as the first layer of the model to handle the differences in target appearance caused by different shooting angles.
S1.3.1: Dimension characteristic value and dimension test
The dimension test of the target samples evaluates the complexity of a target: if the complexity is low, fewer network layers are connected for that target when the network model is constructed; otherwise, more network layers are connected. This reduces the overall complexity of the network and improves detection performance.
The dimension test templates $T_1, \ldots, T_8$ are defined as the following matrices:

[Equation 3: the four templates in the image axial directions]

[Equation 4: the four templates in the image diagonal directions]

where each template $T_1$–$T_8$ is equal in size to the target training sample images obtained in S1.1, $(x, y)$ denotes the image coordinates of a pixel within a template, and the image center coordinates are $(0, 0)$. The four templates of Equation 3 are templates in the image axial directions, and the four templates of Equation 4 are templates in the image diagonal directions.
For each image sample $O$, applying the above templates yields the sample's response maps in 8 directions, $G_1, \ldots, G_8$, namely:

$$G_k = O * T_k, \quad k = 1, \ldots, 8 \quad (5)$$
Singular value decomposition is performed on each response map to obtain its singular values arranged from large to small, $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$, where $n$ is the rank of the response map $G_k$; they are normalized to the interval [0, 1] as:

$$\bar{\sigma}_m = \sigma_m / \sigma_1, \quad m = 1, \ldots, n \quad (6)$$
Further, the dimension characteristic value of the image sample is calculated from the normalized singular values of the eight response maps:

[Equation 7: definition of the dimension characteristic value $d$]
The dimension characteristic value reflects the difference between the response maps of the image sample in different directions. If the difference is small, a change of direction has little influence on the appearance of the image sample and a simpler network model can be adopted; otherwise the influence is large and a more complex network model is needed.
A threshold $\theta$, set according to experimental empirical values, is used to select between the different network models: when $d \ge \theta$, the complex network model is selected; otherwise, the simple network model is selected.
The dimension testing process ends.
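As a concrete illustration of this dimension test, the sketch below convolves a sample with eight directional templates, takes the SVD of each response map, and scores the spread of the normalized singular-value spectra across directions. The patent's exact template matrices (Equations 3 and 4), the exact form of Equation 7, and the threshold value survive only as images in the source, so the kernels, the spread measure, and `theta` below are all assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

# Eight directional templates: 4 axial, 4 diagonal. Simple 3x3 directional
# difference kernels stand in for the patent's (unrecovered) matrices.
TEMPLATES = [
    np.array([[0, 0, 0], [-1, 1, 0], [0, 0, 0]]),   # axial: left
    np.array([[0, 0, 0], [0, 1, -1], [0, 0, 0]]),   # axial: right
    np.array([[0, -1, 0], [0, 1, 0], [0, 0, 0]]),   # axial: up
    np.array([[0, 0, 0], [0, 1, 0], [0, -1, 0]]),   # axial: down
    np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 0]]),   # diagonal kernels
    np.array([[0, 0, -1], [0, 1, 0], [0, 0, 0]]),
    np.array([[0, 0, 0], [0, 1, 0], [-1, 0, 0]]),
    np.array([[0, 0, 0], [0, 1, 0], [0, 0, -1]]),
]

def dimension_characteristic_value(sample: np.ndarray) -> float:
    """Dimension test for one image sample: convolve with the 8 templates,
    SVD each response map, normalize the singular values by the largest,
    and score how much the normalized spectra differ across directions.
    The spread measure used here is an assumption standing in for Eq. 7."""
    spectra = []
    for t in TEMPLATES:
        g = convolve(sample.astype(float), t.astype(float))
        s = np.linalg.svd(g, compute_uv=False)  # singular values, descending
        spectra.append(s / s[0])                # normalized to [0, 1]
    spectra = np.stack(spectra)                 # shape (8, n)
    return float(spectra.std(axis=0).mean())    # cross-direction difference

sample = np.random.rand(64, 64)
d = dimension_characteristic_value(sample)
theta = 0.05                                    # empirical threshold (assumed)
print("complex model" if d >= theta else "simple model")
```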
S1.3.2: Image detection and construction of the neural network model for detection
The neural network model is a mathematical computation model that takes an image as input and a detection result as output. Between input and output, a number of hidden nodes are connected according to a certain logical relationship; each hidden node represents a certain operation, and the parameters of the hidden-node operation formulas are determined through training (learning).
A spatial structure mapping layer is established after the image input; it extracts the spatial structure features of the input image so as to better cope with the appearance deformation caused by observing the target from different angles.
The spatial structure mapping layer $R$ is defined as:

[Equation 8: definition of the spatial structure mapping layer $R$]

$R$ consists of 8 images of the same size as the original image; each represents a mapping of the original image after rotation by a certain angle, which effectively handles the target appearance deformation caused by rotation. $\pi$ denotes the circumference-ratio parameter and $|\cdot|$ denotes the absolute value.
Further, a spatial structure mapping layer $E$ is defined to measure the appearance deformation of the target caused by spatial scaling, namely:

[Equation 9: definition of the scaling layer $E$]

Equation 9 contains 3 scaling parameters and a linear scaling window that performs local window scaling on the previous layer. As a preferred experimental case, the size of the scaling window and the scaling value associated with each of the three parameters are set to fixed empirical values.
The spatial structure mapping layers $R$ and $E$ together establish the appearance deformation model used to handle the appearance deformation of the target.
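The following sketch illustrates such an appearance-deformation front end. Because Equations 8 and 9 survive only as images in the source, the rotation angles (multiples of 45°) and the three scaling factors used here are assumptions:

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def spatial_structure_maps(image: np.ndarray):
    """Assumed form of the R layer: 8 rotated copies of the input,
    one per 45-degree step, each kept at the original size."""
    return [rotate(image, angle=45 * k, reshape=False, order=1)
            for k in range(1, 9)]

def scale_maps(image: np.ndarray, factors=(0.8, 1.0, 1.25)):
    """Assumed form of the E layer: 3 scaling parameters applied as a
    local rescaling, cropped/padded back to the input size."""
    h, w = image.shape
    out = []
    for f in factors:
        z = zoom(image, f, order=1)
        canvas = np.zeros_like(image)
        zh, zw = min(h, z.shape[0]), min(w, z.shape[1])
        canvas[:zh, :zw] = z[:zh, :zw]
        out.append(canvas)
    return out

img = np.random.rand(64, 64)
features = spatial_structure_maps(img) + scale_maps(img)
print(len(features))  # 8 rotation maps + 3 scale maps
```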
Further, the feature extraction layer $F_0$ is defined as follows:

$$F_0 = g(W_0 S + b_0) \quad (10)$$
where $W_0$ is a linear parameter of the neural network that maps the mapping-layer results $S$ into the feature extraction layer $F_0$, completing the feature extraction, and $b_0$ is a linear bias parameter; the feature extraction layer maps the high-dimensional image data into a one-dimensional feature space, reducing the data dimension. $g$ is a nonlinear excitation function that enables the neural network to process nonlinear sample data; the $g$ function is defined as follows:

[Equation 11: three-segment piecewise definition of the activation function $g$]
Adopting the three-segment piecewise function of Equation 11 further improves the classification performance of the activation function.
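For illustration only, a three-segment piecewise activation of the general kind Equation 11 describes can be written as below; the segment boundaries and slopes are assumptions, since the exact definition survives only as an image in the source:

```python
import numpy as np

def g(x: np.ndarray) -> np.ndarray:
    """A three-segment piecewise activation; breakpoints and slopes here
    are assumed stand-ins for the patent's Equation 11."""
    return np.piecewise(
        x.astype(float),
        [x < 0.0, (x >= 0.0) & (x <= 1.0), x > 1.0],
        [lambda v: 0.1 * v,                  # small slope for negative inputs
         lambda v: v,                        # identity in the middle segment
         lambda v: 1.0 + 0.1 * (v - 1.0)],   # compressed upper segment
    )

print(g(np.array([-2.0, 0.5, 3.0])))
```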
A fully connected layer $F$ is established after the feature extraction layer:

$$F = f(W F_0 + b) \quad (12)$$
where $W$ is the parameter matrix of the fully connected layer $F$ and $b$ is the corresponding linear bias parameter; $f$ is the nonlinear activation function, defined as above. $W$ is in matrix form, and it is combined with the aforementioned dimension characteristic value: when the dimension characteristic value is small, the parameters of $W$ can be further optimized and the parameter count reduced, thereby optimizing the neural network model. From the principle of eigenvalue decomposition in linear algebra:

$$W = V \Lambda V^{-1} \quad (13)$$

where $V$ is the matrix of eigenvectors of $W$ and $\Lambda$ is the diagonal matrix formed by its eigenvalues. By reducing the number of eigenvalues (i.e., setting the smaller eigenvalues to 0), the parameter count of the original matrix $W$ is decreased. In the present invention it is set experimentally that, when the dimension characteristic value is below the threshold, the first $q$ eigenvalues are kept, so that the parameter count of the fully connected layer $W$ is reduced relative to the unoptimized $W$. Specifically:

$$\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_Q)$$

where $Q$ is the total number of eigenvalues of $W$, and $\lambda_1, \ldots, \lambda_Q$ are arranged in order from large to small. When the dimension characteristic value is below the threshold, the last $Q - q$ eigenvalues $\lambda_{q+1}, \ldots, \lambda_Q$ are set to 0, reducing the parameter count of the original matrix $W$.
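This parameter-reduction step can be sketched as follows, assuming a square, diagonalizable weight matrix $W$ and an illustrative choice of $q$:

```python
import numpy as np

def truncate_parameters(W: np.ndarray, q: int) -> np.ndarray:
    """Eigenvalue truncation of a weight matrix W = V @ diag(lam) @ inv(V):
    keep the q largest-magnitude eigenvalues, zero the rest (Eq. 13 and the
    diagonal truncation above), and rebuild a lower-parameter approximation."""
    lam, V = np.linalg.eig(W)
    order = np.argsort(-np.abs(lam))        # eigenvalues, large to small
    lam_kept = np.zeros_like(lam)
    lam_kept[order[:q]] = lam[order[:q]]    # zero the last Q - q eigenvalues
    W_low = V @ np.diag(lam_kept) @ np.linalg.inv(V)
    return W_low.real

W = np.random.rand(8, 8)
W_opt = truncate_parameters(W, q=3)
print(np.linalg.matrix_rank(W_opt))         # at most 3
```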
An output layer is connected after the fully connected layer, classifying the input image sample $O$:

$$y = f(W_{\mathrm{out}} F + b_{\mathrm{out}}) \quad (14)$$

where $W_{\mathrm{out}}$ denotes the linear parameters of the output layer and $b_{\mathrm{out}}$ the corresponding linear bias parameters; $f$ is the nonlinear activation function, defined as above.
The neural network model (i.e., the model defined by Equations 8–14) is trained to determine its parameters, including the linear scaling window, the linear parameters, and the linear bias parameters. The image samples used for training are manually annotated with classification values, which serve as the training ground truth and are denoted $\hat{y}$. A cost function $E$ is defined as the difference between the neural network output values and the training ground truth:

$$E = \frac{1}{2} \sum_i \left\| y_i - \hat{y}_i \right\|^2 \quad (15)$$
The neural network model can be iteratively optimized with the BP (back-propagation) algorithm; the goal is to make the cost function converge, and the parameter values of the model are obtained upon convergence, completing the training.
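As a toy illustration of this training loop, the sketch below fits only the output layer of Equation 14 by gradient descent on the quadratic cost of Equation 15; the data, the sigmoid stand-in for $f$, and the learning rate are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 100 feature vectors (as the layer F would produce) and
# one-hot labels over 3 classes; real inputs would come from the layers above.
X = rng.normal(size=(100, 16))
labels = rng.integers(0, 3, size=100)
Y = np.eye(3)[labels]

W = rng.normal(scale=0.1, size=(16, 3))
b = np.zeros(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    y = sigmoid(X @ W + b)                 # forward pass (output layer, Eq. 14)
    E = 0.5 * np.sum((y - Y) ** 2)         # quadratic cost (Eq. 15)
    grad = (y - Y) * y * (1.0 - y)         # back-propagated error signal
    W -= lr * X.T @ grad / len(X)          # gradient-descent updates
    b -= lr * grad.mean(axis=0)
print("final cost:", E)
```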
After training, the neural network model can be used to detect the captured images and obtain the positions of the different types of targets in the images collected by the cameras.
Step 2: Calculate the real-environment coordinates of the target facility. The image coordinates of a target facility are obtained according to Step 1, and the coordinates of the target facility in the real environment (i.e., the coordinates in the camera coordinate system) are calculated, providing the basis for establishing a model of the target facility at the corresponding position in the virtual environment.
Suppose a target facility has image coordinates $(u_i, v_i)$ in camera $C_i$, obtained according to Step 1, and image coordinates $(u_j, v_j)$ in camera $C_j$, obtained according to Step 1. According to Equation 2 of Step 1:

$$s_i (u_i, v_i, 1)^T = K [R_i \mid t_i] P, \qquad s_j (u_j, v_j, 1)^T = K [R_j \mid t_j] P \quad (16)$$

where $K$ is the camera intrinsic matrix and $P = (x, y, z, 1)^T$. After eliminating the scale factors $s_i$ and $s_j$, the only unknown parameters in the above equations are the camera coordinates $x$, $y$, $z$ of the target, which are obtained by solving the equation system.
A virtual mapping of the target facility is then established at the coordinates $(x, y, z)$ in the virtual environment.
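A minimal sketch of this solve step is given below, using the standard direct-linear-transform elimination of the scale factors; the intrinsics and the second camera's pose are invented for the example:

```python
import numpy as np

def triangulate(p_i, p_j, P_i, P_j):
    """Recover the 3D camera-frame coordinates (x, y, z) of a target from
    its image coordinates in two cameras, by eliminating the scale factors
    of Eq. 16 and solving the resulting linear system (standard DLT).
    p_i, p_j: (u, v) image coordinates; P_i, P_j: 3x4 projection matrices
    K @ [R | t] of the two cameras."""
    A = np.stack([
        p_i[0] * P_i[2] - P_i[0],
        p_i[1] * P_i[2] - P_i[1],
        p_j[0] * P_j[2] - P_j[0],
        p_j[1] * P_j[2] - P_j[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                              # null-space solution, homogeneous
    return X[:3] / X[3]                     # inhomogeneous (x, y, z)

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R2 = np.eye(3)
t2 = np.array([[-0.5], [0.0], [0.0]])
P2 = K @ np.hstack([R2, t2])

target = np.array([0.2, -0.1, 5.0, 1.0])    # ground truth, homogeneous
p1 = P1 @ target; p1 = p1[:2] / p1[2]
p2 = P2 @ target; p2 = p2[:2] / p2[2]
print(triangulate(p1, p2, P1, P2))          # ~ [0.2, -0.1, 5.0]
```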
A virtualized, abstracted digital twin model of crane operation is then established.
First, the targets to be operated on by the crane are identified by the method of Step 1 of the invention, and the coordinates of the targets in the images are obtained.
Further, the image coordinates in several images are obtained according to Step 1, and the camera coordinates of the corresponding target facility are calculated by the method of Step 2. Taking the camera coordinates of the target as reference, the virtual target is drawn at the corresponding position in the virtual scene.
When a target in the real scene changes, its coordinates are updated by the above method and the target in the virtual scene is redrawn, realizing target linkage between the real scene and the virtual scene, so that operation trainees can carry out crane operation training in a virtual environment whose operating experience is similar to that of the real environment, achieving the purpose of training.
The invention provides an intelligent crane operation target identification method based on digital twin technology, which automatically detects images of target facilities in the real environment and obtains their positions, so that virtual mappings of the target facilities are reconstructed in the digital twin virtual environment. Compared with a traditional neural network model, the layered space-invariant target detection model and method can better handle the target deformation caused by a wide range of shooting angles, improving both target detection accuracy and detection efficiency. The comparison with classical neural network models is shown in Table 1; the method achieves higher detection accuracy and higher recognition efficiency, and thus completes the intelligent crane operation target identification task of the digital twin technology more effectively.
TABLE 1

Reference model | Target detection success rate (error < 3 pixels) | Target identification time (average of 50 targets)
AlexNet         | 71.7%                                            | 23 seconds
YOLO            | 83.1%                                            | 101 seconds
ResNet          | 85.4%                                            | 355 seconds
The invention   | 90.7%                                            | 11 seconds

Claims (10)

1. An intelligent crane operation target identification method based on digital twin technology, characterized in that:
cameras $C_1, C_2, \ldots, C_n$ capture images recorded as $I_1, I_2, \ldots, I_n$ respectively; applying the template operation to each image sample $O$ yields the sample's response maps in 8 directions, $G_1, \ldots, G_8$, namely:

$$G_k = O * T_k, \quad k = 1, \ldots, 8$$

singular value decomposition is performed on each response map to obtain its singular values arranged from large to small, $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$, where $n$ is the rank of the response map $G_k$, normalized to the interval [0, 1] as:

$$\bar{\sigma}_m = \sigma_m / \sigma_1, \quad m = 1, \ldots, n$$

further, the dimension characteristic value $d$ of the image sample is calculated from the normalized singular values of the eight response maps; the dimension characteristic value reflects the difference between the response maps of the image sample in different directions;

a neural network model is constructed, in which a fully connected layer $F$ is established after the feature extraction layer $F_0$ of the neural network model:

$$F = f(W F_0 + b)$$

where $W$ is the parameter matrix of the fully connected layer $F$, $b$ is the corresponding linear bias parameter, and $f$ is a nonlinear activation function;

when $d$ is below the threshold $\theta$, the following operation is performed:

$$W = V \Lambda V^{-1}$$

where $V$ is the matrix of eigenvectors of $W$ and $\Lambda$ is the diagonal matrix formed by its eigenvalues:

$$\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_Q)$$

where $Q$ is the total number of eigenvalues of $W$, and $\lambda_1, \ldots, \lambda_Q$ are arranged in order from large to small; when the dimension characteristic value is below the threshold, the last $Q - q$ eigenvalues $\lambda_{q+1}, \ldots, \lambda_Q$ are set to 0, reducing the parameter count of the original matrix $W$;
an output layer is connected after the fully connected layer, classifying the input image sample $O$;
the positions of the different types of targets in the images collected by the cameras are acquired; and, taking the camera coordinates of the targets as reference, virtual targets are drawn at the corresponding positions in the virtual scene, establishing the digital twin model of the crane operation targets.
2. The intelligent crane operation target identification method based on digital twin technology according to claim 1, characterized in that: the method further comprises obtaining the image coordinates of a target facility in the images according to step 1 and calculating the coordinates of the target facility in the real environment, namely the coordinates in the camera coordinate system, so as to provide the basis for establishing a model of the target facility at the corresponding position in the virtual environment.
3. The intelligent crane operation target identification method based on digital twin technology according to claim 2, characterized in that: a target facility has image coordinates $(u_i, v_i)$ in camera $C_i$, obtained according to step 1, and image coordinates $(u_j, v_j)$ in camera $C_j$, obtained according to step 1.
4. The intelligent crane operation target identification method based on digital twin technology according to claim 3, characterized in that:

$$s_i (u_i, v_i, 1)^T = K [R_i \mid t_i] P, \qquad s_j (u_j, v_j, 1)^T = K [R_j \mid t_j] P$$

where $K$ is the camera intrinsic matrix and $P = (x, y, z, 1)^T$; after eliminating the scale factors $s_i$ and $s_j$, the only unknown parameters in the above equations are the camera coordinates $x$, $y$, $z$ of the target, which are obtained by solving the equation system.
5. The intelligent crane operation target identification method based on digital twin technology according to claim 4, characterized in that: the method further comprises establishing a virtual mapping of the target facility at the coordinates $(x, y, z)$ in the virtual environment.
6. The intelligent crane operation target identification method based on digital twin technology according to claim 5, characterized in that: when a target in the real scene changes, the coordinates of the target are updated by the intelligent crane operation target identification method based on digital twin technology, and the target in the virtual scene is redrawn, realizing linkage between the target in the real scene and the target in the virtual scene.
7. An intelligent crane operation target identification method based on a digital twin technology as claimed in any one of claims 1-6, wherein: the neural network model is a 4-layer structure.
8. The intelligent crane operation target identification method based on the digital twin technology as claimed in claim 1, wherein: four of the templates are templates of the image axial direction.
9. The intelligent crane operation target identification method based on the digital twin technology as claimed in claim 1, wherein: four of the templates are templates in the diagonal direction of the image.
10. A computer apparatus for implementing the intelligent crane operation target identification method based on the digital twin technology as claimed in any one of claims 1 to 9.
CN202310157211.5A 2023-02-23 2023-02-23 Intelligent crane operation target identification method based on digital twin technology Active CN115849202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310157211.5A CN115849202B (en) 2023-02-23 2023-02-23 Intelligent crane operation target identification method based on digital twin technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310157211.5A CN115849202B (en) 2023-02-23 2023-02-23 Intelligent crane operation target identification method based on digital twin technology

Publications (2)

Publication Number Publication Date
CN115849202A true CN115849202A (en) 2023-03-28
CN115849202B CN115849202B (en) 2023-05-16

Family

ID=85658765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310157211.5A Active CN115849202B (en) 2023-02-23 2023-02-23 Intelligent crane operation target identification method based on digital twin technology

Country Status (1)

Country Link
CN (1) CN115849202B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524030A (en) * 2023-07-03 2023-08-01 新乡学院 Reconstruction method and system for digital twin crane under swinging condition

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110334701A * 2019-07-11 2019-10-15 郑州轻工业学院 Data acquisition method based on deep learning and multi-camera vision in a digital twin environment
US20190325876A1 (en) * 2016-09-20 2019-10-24 Allstate Insurance Company Personal information assistant computing system
CN111339975A (en) * 2020-03-03 2020-06-26 华东理工大学 Target detection, identification and tracking method based on central scale prediction and twin neural network
CN111563446A (en) * 2020-04-30 2020-08-21 郑州轻工业大学 Human-machine interaction safety early warning and control method based on digital twin
CN112418103A (en) * 2020-11-24 2021-02-26 中国人民解放军火箭军工程大学 Bridge crane hoisting safety anti-collision system and method based on dynamic binocular vision
CN113741442A (en) * 2021-08-25 2021-12-03 中国矿业大学 Monorail crane automatic driving system and method based on digital twin driving
CN114049422A (en) * 2021-11-11 2022-02-15 上海交通大学 Data enhancement method and system based on digital twinning and image conversion
CN114155299A (en) * 2022-02-10 2022-03-08 盈嘉互联(北京)科技有限公司 Building digital twinning construction method and system
CN114329747A (en) * 2022-03-08 2022-04-12 盈嘉互联(北京)科技有限公司 Building digital twin oriented virtual and real entity coordinate mapping method and system
CN114818312A (en) * 2022-04-21 2022-07-29 浙江三一装备有限公司 Modeling method, modeling system and remote operation system for hoisting operation
CN114898285A (en) * 2022-04-11 2022-08-12 东南大学 Method for constructing digital twin model of production behavior
CN115272888A (en) * 2022-07-22 2022-11-01 三峡大学 Digital twin-based 5G + unmanned aerial vehicle power transmission line inspection method and system
CN115303946A (en) * 2022-09-16 2022-11-08 江苏省特种设备安全监督检验研究院 Digital twin-based tower crane work monitoring method and system
CN115457479A (en) * 2022-09-28 2022-12-09 江苏省特种设备安全监督检验研究院 Crane operation monitoring method and system based on digital twinning
US20220402732A1 (en) * 2021-06-14 2022-12-22 Manitowoc Crane Group France Method for securing a crane to the occurrence of an exceptional event
US20220413452A1 (en) * 2021-06-28 2022-12-29 Applied Materials, Inc. Reducing substrate surface scratching using machine learning
US20220411229A1 (en) * 2020-01-16 2022-12-29 Inventio Ag Method for the digital documentation and simulation of components in a personnel transport installation
CN115620121A (en) * 2022-10-24 2023-01-17 中国人民解放军战略支援部队航天工程大学 Photoelectric target high-precision detection method based on digital twinning

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190325876A1 (en) * 2016-09-20 2019-10-24 Allstate Insurance Company Personal information assistant computing system
CN110334701A * 2019-07-11 2019-10-15 郑州轻工业学院 Data acquisition method based on deep learning and multi-camera vision in a digital twin environment
US20220411229A1 (en) * 2020-01-16 2022-12-29 Inventio Ag Method for the digital documentation and simulation of components in a personnel transport installation
CN111339975A (en) * 2020-03-03 2020-06-26 华东理工大学 Target detection, identification and tracking method based on central scale prediction and twin neural network
CN111563446A (en) * 2020-04-30 2020-08-21 郑州轻工业大学 Human-machine interaction safety early warning and control method based on digital twin
CN112418103A (en) * 2020-11-24 2021-02-26 中国人民解放军火箭军工程大学 Bridge crane hoisting safety anti-collision system and method based on dynamic binocular vision
US20220402732A1 (en) * 2021-06-14 2022-12-22 Manitowoc Crane Group France Method for securing a crane to the occurrence of an exceptional event
US20220413452A1 (en) * 2021-06-28 2022-12-29 Applied Materials, Inc. Reducing substrate surface scratching using machine learning
CN113741442A (en) * 2021-08-25 2021-12-03 中国矿业大学 Monorail crane automatic driving system and method based on digital twin driving
CN114049422A (en) * 2021-11-11 2022-02-15 上海交通大学 Data enhancement method and system based on digital twinning and image conversion
CN114155299A (en) * 2022-02-10 2022-03-08 盈嘉互联(北京)科技有限公司 Building digital twinning construction method and system
CN114329747A (en) * 2022-03-08 2022-04-12 盈嘉互联(北京)科技有限公司 Building digital twin oriented virtual and real entity coordinate mapping method and system
CN114898285A (en) * 2022-04-11 2022-08-12 东南大学 Method for constructing digital twin model of production behavior
CN114818312A (en) * 2022-04-21 2022-07-29 浙江三一装备有限公司 Modeling method, modeling system and remote operation system for hoisting operation
CN115272888A (en) * 2022-07-22 2022-11-01 三峡大学 Digital twin-based 5G + unmanned aerial vehicle power transmission line inspection method and system
CN115303946A (en) * 2022-09-16 2022-11-08 江苏省特种设备安全监督检验研究院 Digital twin-based tower crane work monitoring method and system
CN115457479A (en) * 2022-09-28 2022-12-09 江苏省特种设备安全监督检验研究院 Crane operation monitoring method and system based on digital twinning
CN115620121A (en) * 2022-10-24 2023-01-17 中国人民解放军战略支援部队航天工程大学 Photoelectric target high-precision detection method based on digital twinning

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524030A (en) * 2023-07-03 2023-08-01 新乡学院 Reconstruction method and system for digital twin crane under swinging condition
CN116524030B (en) * 2023-07-03 2023-09-01 新乡学院 Reconstruction method and system for digital twin crane under swinging condition

Also Published As

Publication number Publication date
CN115849202B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN108985238B (en) Impervious surface extraction method and system combining deep learning and semantic probability
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN112836610B (en) Land use change and carbon reserve quantitative estimation method based on remote sensing data
CN107688856B (en) Indoor robot scene active identification method based on deep reinforcement learning
CN111126308B (en) Automatic damaged building identification method combining pre-disaster remote sensing image information and post-disaster remote sensing image information
CN109145836A (en) Ship target video detection method based on deep learning network and Kalman filtering
CN114332385A (en) Monocular camera target detection and spatial positioning method based on three-dimensional virtual geographic scene
CN112232328A (en) Remote sensing image building area extraction method and device based on convolutional neural network
CN108932474B (en) Remote sensing image cloud judgment method based on full convolution neural network composite characteristics
CN115849202A (en) Intelligent crane operation target identification method based on digital twin technology
CN114926511A (en) High-resolution remote sensing image change detection method based on self-supervision learning
CN112949407A (en) Remote sensing image building vectorization method based on deep learning and point set optimization
CN112288758A (en) Infrared and visible light image registration method for power equipment
CN114266967A (en) Cross-source remote sensing data target identification method based on symbolic distance characteristics
CN111104850A (en) Remote sensing image building automatic extraction method and system based on residual error network
CN111222576B (en) High-resolution remote sensing image classification method
CN116386042A (en) Point cloud semantic segmentation model based on three-dimensional pooling spatial attention mechanism
CN115841557A (en) Intelligent crane operation environment construction method based on digital twinning technology
CN113011506B (en) Texture image classification method based on deep fractal spectrum network
CN114998251A (en) Air multi-vision platform ground anomaly detection method based on federal learning
CN113192204B (en) Three-dimensional reconstruction method for building in single inclined remote sensing image
Sun et al. A flower recognition system based on MobileNet for smart agriculture
CN114494850A (en) Village unmanned courtyard intelligent identification method and system
CN116309849B (en) Crane positioning method based on visual radar
CN112380967A (en) Spatial artificial target spectrum unmixing method and system based on image information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant