CN115849202A - Intelligent crane operation target identification method based on digital twin technology - Google Patents
- Publication number: CN115849202A (application CN202310157211.5A)
- Authority: CN (China)
- Prior art keywords: target, image, crane operation, digital twin, identification method
- Legal status: Granted
Landscapes
- Image Analysis (AREA)
- Processing Or Creating Images (AREA)
Abstract
The scheme provides an intelligent crane operation target identification method based on digital twin technology, which automatically detects images of target facilities in the real environment and obtains their positions, so that virtual mappings of the facilities are reconstructed in the digital twin virtual environment. An optimized neural network model performs the automatic image detection; the method can automatically extract targets from the real environment and complete the target position calculation, improving the modeling efficiency of the digital twin virtual environment.
Description
Technical Field
The invention belongs to the technical field of automatic control of cranes, and particularly relates to an intelligent crane operation target identification method based on a digital twin technology.
Background
China's industrial construction has gone through several development stages of mechanization, automation, and digitization, during which factory production processes and management efficiency have developed rapidly, contributing greatly to China's industry and urban development. In recent years, with the construction of projects such as smart cities, smart industry, and Digital China, social development has placed higher demands on plant managers. At present, the industry as a whole remains labor-intensive and traditional: the level of industrial modernization is not high, and problems such as long construction periods, high resource and energy consumption, low production efficiency, and low technological content persist. In the wave of Industry 4.0, how to further raise the level of industrialization and automation and make factory operation more intelligent, so as to achieve safer, more efficient, and more energy-saving production, has become a new direction of development and research.
By constructing an intelligent crane operation education platform based on a digital twin technology and perfecting crane operation teaching and research measures based on digital information, automatic control, equipment, communication transmission and an AI intelligent analysis model, the experimental teaching level of relevant specialties such as mechanical design and manufacture, automation and the like is comprehensively improved.
In establishing a virtualized, abstracted digital twin model of crane operation, crane operation target identification is the key to the model. An automatic method is needed to identify the target facilities present in the real environment, such as fire-fighting facilities, power systems, air conditioning systems, security systems, valves, lighting, power facilities, IT equipment, office supplies, buildings, toilets, and landmarks, so as to provide information such as the coordinates of this equipment to the virtual reality scene. Operators can then perform crane operation training in a virtual environment whose operating experience is similar to that of the real environment, achieving the purpose of training.
Disclosure of Invention
In order to solve one or more problems, the invention provides an intelligent crane operation target identification method based on a digital twin technology, which is used for automatically detecting images of a target facility in a real environment and acquiring the position of the target facility, so that the virtual mapping of the target facility is reconstructed in a digital twin virtual environment.
An intelligent crane operation target identification method based on digital twin technology, comprising:
The images captured by cameras C0, C1, …, Cm are recorded as I0, I1, …, Im respectively. Applying a template operation to each image sample O yields the sample's response maps R1, …, R8 in 8 directions.
Singular value decomposition is performed on each response map, obtaining its singular values σ1 ≥ σ2 ≥ … ≥ σn arranged from large to small (n denotes the number of singular values of the response map), normalized to the interval 0-1.
Further, the dimension characteristic value D of the image sample is calculated.
The dimension characteristic value reflects the difference between the response maps of the image sample in different directions.
A neural network model is constructed, and a fully connected layer F is established after the feature extraction layer of the neural network model, where W is the linear parameter (weight) matrix of the fully connected layer F, b is the corresponding linear bias parameter, and f is a nonlinear activation function.
When the dimension characteristic value D is below the threshold Td, the following operation is performed: W = VΛV⁻¹, where V is the matrix of eigenvectors of W and Λ is the diagonal matrix formed by the eigenvalues of W.
Q is the total number of eigenvalues of W, λ1, …, λQ, arranged from large to small. When the dimension characteristic value is below the threshold, the smallest Q − q eigenvalues are set to 0, which reduces the parameter amount of the original matrix W.
An output layer is connected after the fully connected layer, which classifies the input image sample O.
The positions of the different types of targets in the images acquired by the cameras are obtained; the virtual targets are drawn at the corresponding positions in the virtual scene with the camera coordinates of the targets as reference, and a digital twin model of the crane operation targets is established.
The method further comprises obtaining the image coordinates of a given target facility according to step 1 and calculating the coordinates of the target facility in the real environment, i.e. its coordinates in the camera coordinate system, so as to provide the basis for establishing a model of the target facility at the corresponding position in the virtual environment.
Suppose a target facility has image coordinates (ui, vi) in camera Ci, acquired according to step 1, and image coordinates (uj, vj) in camera Cj, also acquired according to step 1.
After the scale factor s is eliminated, the only unknown parameters remaining in the above equations are the camera coordinates x, y, z of the target; solving the equation system yields the target coordinates.
The method further comprises establishing a virtual mapping of the target facility according to its coordinates (x, y, z) in the virtual environment.
When the target in the real scene changes, the coordinates of the target are updated by adopting the method, and the target in the virtual scene is redrawn, so that the linkage of the target in the real scene and the target in the virtual scene is realized.
The neural network model is a 4-layer structure.
Four of the templates are templates in the axial direction of the image.
Four of the templates are templates in the diagonal direction of the image.
A computer apparatus for implementing the method described above.
The invention has the advantages that:
1. The method uses an optimized neural network model to automatically detect images of target facilities in the real environment and obtain their positions, so that virtual mappings of the facilities are reconstructed in the digital twin virtual environment. The method automatically extracts targets from the real environment and completes the target position calculation, improving the modeling efficiency of the digital twin virtual environment.
2. The invention provides a layered, spatially invariant target detection model and method, in which an optimized template performs a dimension test on image samples of different target types and the model is selected according to the feature dimension of the target. If the complexity is low, fewer network layers are connected when the network model is built; otherwise, more network layers are connected. This reduces the overall complexity of the network and improves detection performance.
3. A spatial structure mapping layer is added as the first layer of the model to handle differences in target appearance caused by different shooting angles and to extract the spatial structure features of the input image, better coping with the appearance deformation that arises when the target is observed from different angles, thereby balancing recognition efficiency and accuracy.
Detailed Description
Step 1: Target labeling is performed according to the visual appearance of the various target facilities in the real environment, and the positions of the target facilities in the image are determined, implementing a graded visual target labeling method.
S1.1 Preparation.
Known sample images of the categories of key target facilities in the real environment, such as fire-fighting facilities, power systems, air conditioning systems, security systems, valves, lighting, power facilities, IT equipment, office supplies, buildings, toilets, and landmarks, are collected and used as labeled training samples.
S1.2 Configuring the image acquisition environment.
Two or more cameras are arranged in the field environment of the training site, so that each target facility to be identified can be captured by at least two cameras; this is required to complete the reconstruction process in the subsequent steps.
The camera coordinate system of one of the cameras is taken as the reference, establishing the camera coordinate system datum. The reference camera is recorded as C0, and the other cameras are numbered C1, …, Cm in sequence.
The coordinates of a target in the coordinate system of the reference camera C0 are expressed in homogeneous form as X0 = (x, y, z, 1)ᵀ.
The coordinates of the target in the other cameras can then be derived as follows (equation 1): Xk = [Rk | tk]·X0,
where Rk is a 3×3 rotation matrix and tk is a 3×1 translation vector; together they reflect the relative relationship between camera C0 and camera Ck. The rotation matrix and translation vector of each camera can be obtained by calibration in advance.
The image coordinates and the coordinates in the corresponding camera coordinate system satisfy the linear relationship (equation 2): s·(u, v, 1)ᵀ = K·Xk,
where Rk and tk are the rotation matrix and translation vector of the corresponding camera, and the internal parameters forming the matrix K (the focal lengths and principal-point coordinates) are related to the optical parameters of the lens and of the imaging device; cameras of the same model can be adopted so that the internal parameters of all cameras are approximately equal. s is a scale factor. The internal parameters are likewise obtained by calibration.
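The two calibration relations above can be sketched in code. This is a minimal illustration, not part of the patent: the intrinsic matrix K, the rotation R_k, the translation t_k, and the example point are assumed values chosen only to show how a point in the reference camera's frame is mapped into another camera's frame (equation 1) and projected to image coordinates after eliminating the scale factor s (equation 2).

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])      # assumed intrinsic matrix (same-model cameras)

R_k = np.eye(3)                      # 3x3 rotation of camera Ck relative to C0
t_k = np.array([0.5, 0.0, 0.0])      # 3x1 translation of Ck relative to C0

X0 = np.array([1.0, 0.2, 4.0])       # target in reference-camera (C0) coordinates

Xk = R_k @ X0 + t_k                  # equation 1: point in camera Ck's frame
uvw = K @ Xk                         # equation 2: homogeneous image point, s = uvw[2]
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]   # eliminate the scale factor s
```

In practice both the rotation/translation pair and the intrinsic entries would come from the calibration the text describes.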
S1.3 Capturing environment images and detecting the relevant targets in them.
The images captured by cameras C0, C1, …, Cm are recorded as I0, I1, …, Im respectively. The various types of targets are detected in each of the images I0, I1, …, Im, and their positions are output.
The image target detection method commonly used in industry at present is the convolutional neural network, which builds a neural network model on convolution kernels, can well overcome the interference of noise on the target image, and detects targets with high precision. However, this method faces two key problems in the application scenario of the invention. First, the crane operation scene is complex, and the targets to be detected are numerous and widely distributed, so the cameras photograph the targets from widely differing angles and the appearance of a target varies greatly between images; a two-dimensional convolutional network has low robustness to such appearance differences caused by shooting angle, for example it lacks spatial rotation invariance. Second, a convolutional neural network controls detection and identification precision through the size of the convolution kernels and the number of convolutional layers, and larger kernels and more layers increase the computation cost along with the precision. Because the invention involves many target types, some with higher and some with lower feature dimensions, and the target types with lower feature dimensions do not require a very complex network structure, balancing network complexity against performance is also a problem to be considered in this application scenario.
To solve these two problems, the invention provides a layered, spatially invariant target detection model and method. First, a dimension test evaluates the complexity of each target type so that the network structure can be chosen accordingly; second, a spatial structure mapping layer is added as the first layer of the model to handle the differences in target appearance caused by different shooting angles.
S1.3.1 dimension characteristic value and dimension test
The dimension test of the target sample evaluates the complexity of the target: if the complexity is low, fewer network layers are connected for this target when the network model is constructed; otherwise, more layers are connected. This reduces the overall complexity of the network and improves detection performance.
Defining the dimension test template as the following matrix:
where each template T1, …, T8 is equal in size to the target training sample images obtained in S1.1, (x, y) represents the image coordinates of a pixel in the template, and the image center coordinates are (0, 0). The four templates in equation 3 are templates in the image axial directions, and the four templates in equation 4 are templates in the image diagonal directions.
Applying the templates above to each image sample O yields the sample's response maps R1, …, R8 in 8 directions, namely:
Singular value decomposition is performed on each response map, obtaining its singular values σ1 ≥ σ2 ≥ … ≥ σn arranged from large to small (n denotes the number of singular values of the response map), normalized to the interval 0-1:
Further, the dimension characteristic value D of the image sample is calculated as:
The dimension characteristic value reflects the difference between the response maps of the image sample in different directions: if the difference is small, a change of direction has little influence on the appearance of the image sample and a simpler network model can be adopted; if it is large, the influence is greater and a more complex network model is needed.
A threshold Td for selecting between the network models is set according to experimental empirical values: when D > Td, the complex network model is selected; otherwise, the simple network model is selected.
The dimension testing process ends.
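The dimension test can be sketched as follows. The patent's eight templates and its exact formula for the dimension characteristic value are not given in the recovered text, so small directional gradient kernels and a simple spread measure over the 8 directions stand in for them; only the overall flow (directional response maps, SVD, normalization to 0-1, a scalar D compared against a threshold) follows the text.

```python
import numpy as np

def convolve_valid(img, kernel):
    # plain 'valid' 2-D correlation, enough for a small demonstration
    h, w = kernel.shape
    H, W = img.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * kernel)
    return out

def normalized_singular_values(response):
    s = np.linalg.svd(response, compute_uv=False)  # already in descending order
    return s / s[0]                                # normalize to the interval 0-1

# 8 directional templates: 4 axial + 4 diagonal (illustrative stand-ins).
axial = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
diagonal = np.array([[0.0, 1.0, 2.0], [-1.0, 0.0, 1.0], [-2.0, -1.0, 0.0]])
templates = [np.rot90(axial, k) for k in range(4)]
templates += [np.rot90(diagonal, k) for k in range(4)]

rng = np.random.default_rng(0)
sample = rng.random((32, 32))                      # stand-in image sample O

spectra = np.array([
    normalized_singular_values(convolve_valid(sample, T)) for T in templates
])                                                 # one spectrum per direction

D = float(spectra.std(axis=0).mean())              # spread across the 8 directions
T_d = 0.05                                         # assumed threshold value
use_complex_model = D > T_d                        # model-selection rule from S1.3.1
```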
S1.3.2 image detection and neural network model construction for detection
The neural network model is a mathematical computation model taking an image as input and a detection result as output, the input and the output are formed by a plurality of hidden nodes according to a certain logical relationship, each hidden node represents a certain operation method, and parameters of a hidden node operation formula are determined through training (learning).
And establishing a spatial structure mapping layer after the image is input, wherein the spatial structure mapping layer is used for extracting spatial structure characteristics of the input image so as to better cope with appearance deformation caused by observing the target from different angles.
In the above formula, S1 is the spatial structure mapping layer. It consists of 8 images of the same size as the original image, each representing the mapping of the original image after rotation by a certain angle, which effectively handles the target appearance deformation caused by rotation. π denotes the circumference ratio (pi), and |·| denotes the absolute value.
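A sketch of such a rotation-based mapping layer, under stated assumptions: since the exact mapping formula is not recoverable from the text, the 8 symmetries of a square image (4 right-angle rotations and their mirrored versions) stand in for the 8 rotated mappings, as they require no interpolation.

```python
import numpy as np

def spatial_structure_layer(img):
    # 8 mappings of the input image: 4 right-angle rotations plus their
    # mirrored versions (the 8 symmetries of a square image)
    rotations = [np.rot90(img, k) for k in range(4)]
    reflections = [np.rot90(img.T, k) for k in range(4)]
    return np.stack(rotations + reflections)   # shape (8, H, W) for square input

img = np.arange(16, dtype=float).reshape(4, 4)
S1 = spatial_structure_layer(img)              # the layer's 8 mapped images
```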
Further, a second spatial structure mapping layer S2 is defined to measure the appearance deformation of the target caused by spatial scaling, namely:
In the above formula, c1, c2, c3 represent 3 scaling parameters, and G is a linear scaling window used to scale local windows of the previous layer. As a preferred case in testing, the size of G is set according to the scaling parameter in use, so that each scaling parameter selects a correspondingly sized local window.
The spatial structure mapping layers S1 and S2 jointly establish an appearance deformation model for handling the appearance deformation of the target.
In the above formula, W3 is a linear parameter of the neural network used to map the results of the S2 layer to the feature extraction layer S3, completing the feature extraction, and b3 is a linear bias parameter. The feature extraction layer maps the high-dimensional image data into a one-dimensional feature space, reducing the data dimension. f is a nonlinear excitation (activation) function that allows the neural network to process nonlinear sample data. The function f is defined as follows:
The activation function adopts a three-segment piecewise function, which further improves classification performance.
In the above formula, W is the linear parameter matrix of the fully connected layer F, and b is the corresponding linear bias parameter; the nonlinear activation function f is defined as above. W is in matrix form and, combined with the aforementioned dimension characteristic value, can be further optimized when the dimension characteristic value is small, reducing the parameter amount and thereby optimizing the neural network model. From the principle of linear-algebra eigenvalue decomposition:
W = VΛV⁻¹, where V is the matrix of eigenvectors of W and Λ is the diagonal matrix formed by the eigenvalues of W. By decreasing the number of eigenvalues (i.e., setting the smaller eigenvalues to 0), the parameter amount of the original matrix W decreases. In the invention, it is set experimentally that when the dimension characteristic value is below the threshold, only the largest q eigenvalues are kept, so that the parameter amount of the fully connected layer is reduced relative to the unoptimized W.
Specifically, it is provided that:
where Q is the total number of eigenvalues of W, λ1, …, λQ, arranged from large to small. When the dimension characteristic value is below the threshold, the smallest Q − q eigenvalues λq+1, …, λQ are set to 0, i.e., the parameter amount of the original matrix W is reduced.
An output layer is connected after the fully connected layer, which classifies the input image sample O:
In the above formula, Wo represents the linear parameters of the output layer and bo the corresponding linear bias parameters; the nonlinear activation function f is defined as above.
The neural network model (defined by equations 8-14) is trained to determine its parameters, including the linear scaling window, the linear parameters, and the linear bias parameters. The image samples used for training are manually labeled with classification values, which serve as the ground truth of training, denoted y. A cost function E is defined as the difference between the neural network output value and the training ground truth:
The neural network model can be iteratively optimized with the BP (backpropagation) algorithm; the goal is to make the cost function converge, and the parameter values obtained at convergence complete the training.
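The training objective can be sketched with a stand-in model: a single linear layer trained by gradient descent on a sum-of-squares cost E, in place of the full model of equations 8-14. The data, learning rate, and iteration count are assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 8))     # stand-in features of labelled training samples
w_true = rng.standard_normal(8)
y = X @ w_true                       # manually labelled values (training ground truth)

w = np.zeros(8)                      # model parameters to be learned
lr = 0.005                           # assumed learning rate
for _ in range(1000):
    err = X @ w - y                  # network output minus training truth
    cost = 0.5 * float(err @ err)    # cost function E (sum of squared differences)
    w -= lr * (X.T @ err)            # gradient step: the BP update for this layer
```

The loop stops when the cost has converged; for the full layered model the same update is propagated backward through each layer.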
After training, the neural network model can be used to detect captured images and obtain the positions of the different types of targets in the images collected by the cameras.
Step 2: Calculating the real-environment coordinates of a target facility. The image coordinates of the target facility in the images are obtained according to step 1, and the coordinates of the target facility in the real environment (i.e., its coordinates in the camera coordinate system) are calculated, providing the basis for establishing a model of the target facility at the corresponding position in the virtual environment.
Suppose a target facility has image coordinates (ui, vi) in camera Ci, acquired according to step 1, and image coordinates (uj, vj) in camera Cj, also acquired according to step 1. According to equation 2 in step 1:
After the scale factor s is eliminated, the only unknown parameters remaining in the above equations are the camera coordinates x, y, z of the target; solving the equation system yields the target coordinates.
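The solve described above can be sketched as a standard two-view triangulation: eliminating s from equation 2 for each camera leaves two linear equations per view in the unknown coordinates (x, y, z), which are stacked and solved. The calibration values and camera placement below are illustrative assumptions, not from the patent.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])               # assumed shared intrinsic matrix

def projection_matrix(R, t):
    return K @ np.hstack([R, t.reshape(3, 1)])  # 3x4 projection P = K [R | t]

# Camera Ci at the reference origin; camera Cj translated along x (assumed setup).
P_i = projection_matrix(np.eye(3), np.zeros(3))
P_j = projection_matrix(np.eye(3), np.array([-0.5, 0.0, 0.0]))

X_true = np.array([1.0, 0.2, 4.0, 1.0])       # ground-truth homogeneous point
ui = P_i @ X_true; ui = ui[:2] / ui[2]        # image coordinates (ui, vi) in Ci
uj = P_j @ X_true; uj = uj[:2] / uj[2]        # image coordinates (uj, vj) in Cj

# Eliminating s gives two linear rows per view of the homogeneous system A X = 0.
A = np.vstack([
    ui[0] * P_i[2] - P_i[0], ui[1] * P_i[2] - P_i[1],
    uj[0] * P_j[2] - P_j[0], uj[1] * P_j[2] - P_j[1],
])
_, _, Vt = np.linalg.svd(A)
X = Vt[-1]
xyz = X[:3] / X[3]                            # recovered camera coordinates (x, y, z)
```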
According to the coordinates (x, y, z), a virtual mapping of the target facility can be established in the virtual environment.
A virtualized, abstracted digital twin model of the crane operation is then established as follows.
First, the targets to be operated by the crane are identified by the method of step 1, and the coordinates of the targets in the images are obtained.
Further, coordinates in several images are obtained according to step 1, and the camera coordinates of the corresponding target facilities are calculated by the method of step 2. The virtual targets are drawn at the corresponding positions in the virtual scene with the camera coordinates of the targets as reference.
When a target in the real scene changes, its coordinates are updated with the same method and the target in the virtual scene is redrawn, achieving linkage between targets in the real scene and targets in the virtual scene. Operation trainees can thus carry out crane operation training in a virtual environment whose operating experience is similar to that of the real environment, achieving the purpose of training.
The invention provides an intelligent crane operation target identification method based on digital twin technology, which automatically detects images of target facilities in the real environment and obtains their positions, so that virtual mappings of the facilities are reconstructed in the digital twin virtual environment. Compared with traditional neural network models, the layered, spatially invariant target detection model and method better handle the target deformation caused by the large range of shooting angles, improving both target detection accuracy and target detection efficiency. The comparison with classical neural network models is shown in Table 1: the method achieves higher detection accuracy and higher recognition efficiency, and thus completes the intelligent crane operation target identification task of the digital twin technology more effectively.
TABLE 1
| Reference model | Target detection success rate (error < 3 pixel) | Target identification time (average target number 50) |
|---|---|---|
| AlexNet | 71.7% | 23 seconds |
| YOLO | 83.1% | 101 seconds |
| ResNet | 85.4% | 355 seconds |
| The invention | 90.7% | 11 seconds |
Claims (10)
1. An intelligent crane operation target identification method based on digital twin technology, characterized in that:
the images captured by cameras C0, C1, …, Cm are recorded as I0, I1, …, Im respectively; applying a template operation to each image sample O yields the sample's response maps R1, …, R8 in 8 directions, namely:
singular value decomposition is performed on each response map, obtaining its singular values σ1 ≥ σ2 ≥ … ≥ σn arranged from large to small (n denotes the number of singular values of the response map), normalized to the interval 0-1:
further, the dimension characteristic value D of the image sample is calculated as:
the dimension characteristic value reflects the difference between the response maps of the image sample in different directions;
a neural network model is constructed, in which a fully connected layer F is established after the feature extraction layer of the neural network model:
W is the linear parameter (weight) matrix of the fully connected layer F, b is the corresponding linear bias parameter, and f is a nonlinear activation function;
W = VΛV⁻¹, where V is the matrix of eigenvectors of W and Λ is the diagonal matrix formed by the eigenvalues of W;
Q is the total number of eigenvalues of W, λ1, …, λQ, arranged from large to small; when the dimension characteristic value is below the threshold, the smallest Q − q eigenvalues λq+1, …, λQ are set to 0, i.e., the parameter amount of the original matrix W is reduced;
an output layer is connected after the fully connected layer, which classifies the input image sample O;
the positions of the different types of targets in the images acquired by the cameras are obtained; the virtual targets are drawn at the corresponding positions in the virtual scene with the camera coordinates of the targets as reference, and a digital twin model of the crane operation targets is established.
2. The intelligent crane operation target identification method based on the digital twin technology as claimed in claim 1, wherein: the method further comprises the steps of obtaining image coordinates of a certain target facility in the image according to the step 1, and calculating the coordinates of the target facility in the real environment, namely the coordinates in the camera coordinate system, so as to provide basis for establishing a model of the target facility at the corresponding position in the virtual environment.
3. The intelligent crane operation target identification method based on the digital twin technology as claimed in claim 2, wherein: a target facility has image coordinates (ui, vi) in camera Ci, acquired according to step 1, and image coordinates (uj, vj) in camera Cj, also acquired according to step 1.
4. An intelligent crane operation target identification method based on digital twin technology as claimed in claim 3, wherein:
6. The intelligent crane operation target identification method based on the digital twin technology as claimed in claim 5, wherein: when the target in the real scene changes, the coordinate of the target is updated by adopting the intelligent crane operation target identification method based on the digital twinning technology, and the target in the virtual scene is redrawn, so that the target in the real scene and the target in the virtual scene are linked.
7. An intelligent crane operation target identification method based on a digital twin technology as claimed in any one of claims 1-6, wherein: the neural network model is a 4-layer structure.
8. The intelligent crane operation target identification method based on the digital twin technology as claimed in claim 1, wherein: four of the templates are templates of the image axial direction.
9. The intelligent crane operation target identification method based on the digital twin technology as claimed in claim 1, wherein: four of the templates are templates in the diagonal direction of the image.
10. A computer apparatus for implementing the intelligent crane operation target identification method based on the digital twin technology as claimed in any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310157211.5A CN115849202B (en) | 2023-02-23 | 2023-02-23 | Intelligent crane operation target identification method based on digital twin technology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115849202A (en) | 2023-03-28
CN115849202B (en) | 2023-05-16
Family
ID=85658765
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310157211.5A Active CN115849202B (en) | 2023-02-23 | 2023-02-23 | Intelligent crane operation target identification method based on digital twin technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115849202B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116524030A (en) * | 2023-07-03 | 2023-08-01 | 新乡学院 | Reconstruction method and system for digital twin crane under swinging condition |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110334701A (en) * | 2019-07-11 | 2019-10-15 | 郑州轻工业学院 | Collecting method based on deep learning and multi-vision visual under the twin environment of number |
US20190325876A1 (en) * | 2016-09-20 | 2019-10-24 | Allstate Insurance Company | Personal information assistant computing system |
CN111339975A (en) * | 2020-03-03 | 2020-06-26 | 华东理工大学 | Target detection, identification and tracking method based on central scale prediction and twin neural network |
CN111563446A (en) * | 2020-04-30 | 2020-08-21 | 郑州轻工业大学 | Human-machine interaction safety early warning and control method based on digital twin |
CN112418103A (en) * | 2020-11-24 | 2021-02-26 | 中国人民解放军火箭军工程大学 | Bridge crane hoisting safety anti-collision system and method based on dynamic binocular vision |
CN113741442A (en) * | 2021-08-25 | 2021-12-03 | 中国矿业大学 | Monorail crane automatic driving system and method based on digital twin driving |
CN114049422A (en) * | 2021-11-11 | 2022-02-15 | 上海交通大学 | Data enhancement method and system based on digital twinning and image conversion |
CN114155299A (en) * | 2022-02-10 | 2022-03-08 | 盈嘉互联(北京)科技有限公司 | Building digital twinning construction method and system |
CN114329747A (en) * | 2022-03-08 | 2022-04-12 | 盈嘉互联(北京)科技有限公司 | Building digital twin oriented virtual and real entity coordinate mapping method and system |
CN114818312A (en) * | 2022-04-21 | 2022-07-29 | 浙江三一装备有限公司 | Modeling method, modeling system and remote operation system for hoisting operation |
CN114898285A (en) * | 2022-04-11 | 2022-08-12 | 东南大学 | Method for constructing digital twin model of production behavior |
CN115272888A (en) * | 2022-07-22 | 2022-11-01 | 三峡大学 | Digital twin-based 5G + unmanned aerial vehicle power transmission line inspection method and system |
CN115303946A (en) * | 2022-09-16 | 2022-11-08 | 江苏省特种设备安全监督检验研究院 | Digital twin-based tower crane work monitoring method and system |
CN115457479A (en) * | 2022-09-28 | 2022-12-09 | 江苏省特种设备安全监督检验研究院 | Crane operation monitoring method and system based on digital twinning |
US20220402732A1 (en) * | 2021-06-14 | 2022-12-22 | Manitowoc Crane Group France | Method for securing a crane to the occurrence of an exceptional event |
US20220413452A1 (en) * | 2021-06-28 | 2022-12-29 | Applied Materials, Inc. | Reducing substrate surface scratching using machine learning |
US20220411229A1 (en) * | 2020-01-16 | 2022-12-29 | Inventio Ag | Method for the digital documentation and simulation of components in a personnel transport installation |
CN115620121A (en) * | 2022-10-24 | 2023-01-17 | 中国人民解放军战略支援部队航天工程大学 | Photoelectric target high-precision detection method based on digital twinning |
Application Events
- 2023-02-23: CN application CN202310157211.5A, patent CN115849202B (en), status Active
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116524030A (en) * | 2023-07-03 | 2023-08-01 | 新乡学院 | Reconstruction method and system for digital twin crane under swinging condition |
CN116524030B (en) * | 2023-07-03 | 2023-09-01 | 新乡学院 | Reconstruction method and system for digital twin crane under swinging condition |
Also Published As
Publication number | Publication date |
---|---|
CN115849202B (en) | 2023-05-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108985238B (en) | Impervious surface extraction method and system combining deep learning and semantic probability | |
CN108961235B (en) | Defective insulator identification method based on YOLOv3 network and particle filter algorithm | |
CN112836610B (en) | Land use change and carbon reserve quantitative estimation method based on remote sensing data | |
CN107688856B (en) | Indoor robot scene active identification method based on deep reinforcement learning | |
CN111126308B (en) | Automatic damaged building identification method combining pre-disaster remote sensing image information and post-disaster remote sensing image information | |
CN109145836A (en) | Ship target video detection method based on deep learning network and Kalman filtering | |
CN114332385A (en) | Monocular camera target detection and spatial positioning method based on three-dimensional virtual geographic scene | |
CN112232328A (en) | Remote sensing image building area extraction method and device based on convolutional neural network | |
CN108932474B (en) | Remote sensing image cloud judgment method based on full convolution neural network composite characteristics | |
CN115849202A (en) | Intelligent crane operation target identification method based on digital twin technology | |
CN114926511A (en) | High-resolution remote sensing image change detection method based on self-supervision learning | |
CN112949407A (en) | Remote sensing image building vectorization method based on deep learning and point set optimization | |
CN112288758A (en) | Infrared and visible light image registration method for power equipment | |
CN114266967A (en) | Cross-source remote sensing data target identification method based on symbolic distance characteristics | |
CN111104850A (en) | Remote sensing image building automatic extraction method and system based on residual error network | |
CN111222576B (en) | High-resolution remote sensing image classification method | |
CN116386042A (en) | Point cloud semantic segmentation model based on three-dimensional pooling spatial attention mechanism | |
CN115841557A (en) | Intelligent crane operation environment construction method based on digital twinning technology | |
CN113011506B (en) | Texture image classification method based on deep fractal spectrum network | |
CN114998251A (en) | Air multi-vision platform ground anomaly detection method based on federal learning | |
CN113192204B (en) | Three-dimensional reconstruction method for building in single inclined remote sensing image | |
Sun et al. | A flower recognition system based on MobileNet for smart agriculture | |
CN114494850A (en) | Village unmanned courtyard intelligent identification method and system | |
CN116309849B (en) | Crane positioning method based on visual radar | |
CN112380967A (en) | Spatial artificial target spectrum unmixing method and system based on image information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||