CN115849202B - Intelligent crane operation target identification method based on digital twin technology - Google Patents
- Publication number: CN115849202B (application CN202310157211.5A)
- Authority: CN (China)
- Legal status: Active
Abstract
An intelligent crane operation target identification method based on digital twin technology automatically detects images of target facilities in a real environment and acquires their positions, so that virtual mappings of the facilities can be rebuilt in the digital twin virtual environment. An optimized neural network model detects each target facility in the camera images and its position is resolved; targets in the real environment are thus extracted automatically and their positions computed, which improves the modeling efficiency of the digital twin virtual environment.
Description
Technical Field
The invention belongs to the technical field of crane automatic control, and particularly relates to an intelligent crane operation target identification method based on a digital twin technology.
Background
Industrial construction in China has passed through stages of mechanization, automation and digitalization; production processes and factory management efficiency have developed rapidly and contributed greatly to industry and urban development in China. In recent years, with the continuing construction of smart cities, smart industry, Digital China and similar projects, social development has placed higher demands on plant managers. At present the industry as a whole is still a labor-intensive traditional one whose level of modernization is not high: construction periods are long, resource and energy consumption is high, production efficiency is low and the technological content is limited. Under the tide of Industry 4.0, how to further raise the level of industrialization and automation and make factory operation smarter, safer, more efficient and more energy-saving has become a new direction of development research.
By constructing an intelligent crane operation education platform based on digital twin technology, and by improving the teaching and research of crane operation through digital information, automatic control, equipment, communication transmission and AI intelligent analysis models, the experimental teaching level of related majors such as mechanical design and manufacturing, and automation, can be comprehensively improved.
In building a digital twin model that virtualizes and abstracts crane operation, identifying the crane's operation targets is a key step. An automated method is required to identify the target facilities present in the real environment, such as fire-fighting facilities, electric power systems, air conditioning systems, security systems, valves, lighting, power facilities, IT equipment, office supplies, buildings, toilets and landmarks, and to provide their coordinates and related information to the virtual reality scene. An operator can then carry out crane operation training in a virtual environment whose operating experience is close to that of the real one, achieving the training purpose.
Disclosure of Invention
To solve one or more of these problems, the invention provides an intelligent crane operation target identification method based on digital twin technology, which automatically detects images of target facilities in a real environment and acquires their positions, thereby reconstructing virtual mappings of the target facilities in the digital twin virtual environment.
An intelligent crane operation target identification method based on digital twin technology comprises the following steps.

The images captured by cameras C_1, C_2, …, C_r are recorded as I_1, I_2, …, I_r. Applying a template operation to each image sample yields its response maps in 8 directions:

Υ_k(i, j) = S(i, j) × T_k(i, j), k = 1, …, 8

where S(i, j) is the sample image and T_k(i, j) is a template. Singular value decomposition of each response map Υ_k yields its singular values γ_{k,n}, arranged from large to small (n indexes the n-th singular value of Υ_k), normalized to the interval 0-1. From them the dimension characteristic value of the image sample is calculated; it reflects the difference between the response maps of the sample in different directions.

A neural network model is constructed in which a full connection layer h_4 is established after the feature extraction layer h_3, with Θ the linear parameter of h_4, β_2 the corresponding linear bias parameter and σ a nonlinear activation function. With Θ = V Γ V⁻¹, where V is an orthogonal matrix of the eigenvectors of Θ and Γ a diagonal matrix of its eigenvalues τ_1, …, τ_Q arranged from large to small (Q their total number), the trailing eigenvalues up to τ_Q are set to 0 when the dimension characteristic value is below the threshold, reducing the number of parameters of the original matrix Θ. The output connected after the full connection layer produces the classification O of the input image sample.
The positions of the different targets in the images acquired by the cameras are obtained; taking the camera coordinates of each target as the reference, a virtual target is drawn at the corresponding position in the virtual scene, and the digital twin model of the crane operation targets is established.
From the image coordinates of a certain target facility in the images, obtained according to step 1, the coordinates of the facility in the real environment, i.e. its coordinates in the camera coordinate system, are calculated, providing the basis for establishing a model of the facility at the corresponding position in the virtual environment.

Suppose a target facility has image coordinates (u_1, v_1) in camera C_1 and (u_2, v_2) in camera C_2, both obtained according to step 1. After the scale factors are eliminated, the only unknown parameters in the projection equations are the camera coordinates X, Y and Z of the facility, and solving the equation system yields them.

The method also includes establishing a virtual mapping of the target facility in the virtual environment according to the coordinates (X, Y, Z).
When a target in the real scene changes, its coordinates are updated with the same method and the target in the virtual scene is redrawn, realizing linkage between the targets of the real and virtual scenes.
The neural network model is a 4-layer structure.
Four of the templates lie in the axial directions of the image, and the other four in its diagonal directions.
A computer device implementing the above method.
The invention has the following technical effects:
1. The invention uses an optimized neural network model to automatically detect images of target facilities in the real environment and to acquire their positions, so that virtual mappings of the facilities are reconstructed in the digital twin virtual environment. The method extracts targets in the real environment automatically and completes the target position calculation, improving the modeling efficiency of the digital twin virtual environment.
2. The invention provides a layered, spatially invariant target detection model and method. When a target's complexity is low, fewer network layers are connected for it when the network model is constructed; otherwise more layers are connected. This reduces the overall complexity of the network while improving detection performance.
3. A spatial structure mapping layer is added as the first layer of the model to process the differences in target appearance caused by different shooting angles and to extract the spatial structure features of the input image, so that appearance deformation caused by observing a target from different angles is handled better, balancing recognition efficiency and accuracy.
Detailed Description
Step 1. Hierarchical marking of visual targets: according to the visual appearance of the various target facilities in the real environment, the targets are marked and their positions in the images are determined.
S1.1 Preparation.
For a plurality of types of key target facilities in a real environment, such as fire-fighting facilities, electric power systems, air-conditioning systems, security systems, valves, lighting, power facilities, IT equipment, office supplies, buildings, toilets, landmarks and the like, known sample images of the key target facilities are collected and used as marked training samples.
S1.2 Configuring the image acquisition environment.
Two or more cameras are arranged in the field environment of the training station so that each target facility to be identified can be captured by at least two cameras; this is required to complete the reconstruction in a subsequent step.
The camera coordinate system of one of the cameras is taken as the reference. This reference camera is denoted C_1, and the other cameras are numbered in order C_2, …, C_r.
The coordinates of a target in the coordinate system of the reference camera C_1 are expressed as homogeneous coordinates P = (X, Y, Z, 1)ᵀ. The coordinates of the target in any other camera C_i can then be estimated as

P_i = R_i (X, Y, Z)ᵀ + t_i (formula 1)

where R_i is a 3×3 rotation matrix and t_i a 3×1 translation vector; together they reflect the relative pose between camera C_i and the reference camera C_1. The rotation matrix and translation vector of each camera can be obtained by calibration in advance.
The image coordinates and the coordinates in the corresponding camera coordinate system satisfy the linear pinhole relation

s (u, v, 1)ᵀ = K [R | t] (X, Y, Z, 1)ᵀ (formula 2)

where R and t are the rotation matrix and translation vector of the corresponding camera, s is a scale factor, and the intrinsic matrix K collects the four internal parameters of the camera, which are related to its lens optics and imaging device; by using cameras of the same model, the internal parameters of all cameras can be taken as approximately equal. The internal parameters are likewise obtained by calibration.
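As an illustration of this projection relation, the sketch below implements a basic pinhole model; the symbols f_x, f_y, c_x, c_y for the internal parameters and the example numbers are assumptions of this sketch, not values taken from the patent.

```python
import numpy as np

def project_point(P_w, R, t, fx, fy, cx, cy):
    """Project a 3-D point (reference-camera coordinates) into an image.

    R (3x3) and t (3,) map reference-camera coordinates into the target
    camera's frame; fx, fy, cx, cy are the shared internal parameters.
    Returns pixel coordinates (u, v) and the scale factor s (the depth).
    """
    P_c = R @ P_w + t          # camera-frame coordinates (formula 1)
    s = P_c[2]                 # scale factor = depth along the optical axis
    u = fx * P_c[0] / s + cx   # formula 2, after dividing out s
    v = fy * P_c[1] / s + cy
    return u, v, s

# A point 5 m straight ahead of the reference camera projects to the
# principal point (cx, cy) with scale factor 5.
u, v, s = project_point(np.array([0.0, 0.0, 5.0]),
                        np.eye(3), np.zeros(3),
                        800.0, 800.0, 320.0, 240.0)
```

The same function with a calibrated (R_i, t_i) pair gives the pixel position of the target in any non-reference camera.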
S1.3 Capturing images of the environment and detecting the related targets in them.
The images captured by cameras C_1, C_2, …, C_r are recorded as I_1, I_2, …, I_r. In each image I_1, I_2, …, I_r the various targets are detected and their positions are output.
The conventional image target detection method in industry is the convolutional neural network, which builds a neural network model on convolution kernels; it copes well with noise on the target image and detects targets with high precision. This approach, however, faces two key problems in the application scenario of the invention. First, the crane operation scene is complex and a large number of widely distributed targets must be detected, so the angles at which the cameras capture the targets differ greatly and so does the appearance of the targets in the images, while a two-dimensional convolutional network has low robustness to appearance differences caused by shooting angle, such as spatial rotation invariance. Second, a convolutional neural network controls its detection and identification precision through the size of the convolution kernels and the number of convolutional layers: larger kernels and more layers improve precision but increase the amount of computation. Because the invention involves targets of many categories, some with high feature dimensionality and some with low, a very complex network structure is unnecessary for the low-dimensional target types, so balancing network complexity against performance is also a problem to be considered in this application scenario.
To solve these two problems, the invention provides a layered, spatially invariant target detection model and method. First, a dimension test selects a network of suitable complexity for each target type; second, a spatial structure mapping layer is added as the first layer of the model to handle the differences in target appearance caused by different shooting angles.
S1.3.1 dimension eigenvalue and dimension test
The dimension test of a target sample evaluates the complexity of the target: if the complexity is low, fewer network layers are connected for that target when the network model is constructed; otherwise more layers are connected. This reduces the overall complexity of the network and improves detection performance.
The dimension test templates are defined as eight matrices (formulas 3 and 4). Each template T_k(i, j) is an image of the same size as the target training samples obtained in S1.1, where (i, j) are the image coordinates of a pixel in the template and the image center has coordinates (0, 0). The four templates of formula 3 lie in the axial directions of the image, and the four templates of formula 4 in its diagonal directions.
Applying the above templates to each image sample S yields its response maps in 8 directions:

Υ_k(i, j) = S(i, j) × T_k(i, j), k = 1, …, 8
Singular value decomposition is performed on each response map to obtain its singular values γ_{k,n}, arranged from large to small, where n indexes the n-th singular value of the response map Υ_k; they are normalized to the interval 0-1.
Further, the dimension characteristic value of the image sample is calculated from the normalized singular values. It reflects the difference between the response maps of the sample in different directions: if the difference is small, the appearance of the sample is little affected by changes of direction and a simpler network model can be adopted; otherwise it is strongly affected and a more complex network model is needed.
A threshold is set for selecting between the network models: when the dimension characteristic value reaches or exceeds the threshold, a complex network model is selected; otherwise a simple network model is selected.
The dimension test process ends.
S1.3.2 image detection and neural network model construction for detection
Here a neural network model means a mathematical model that takes an image as input and a detection result as output, built from hidden nodes connected according to a logical structure; each hidden node represents an operation, and the parameters of the node operations are determined by training (learning).
After the input image, a spatial structure mapping layer is established to extract the spatial structure features of the input image, so as to better cope with the appearance deformation caused by observing a target from different angles.
In the above formula, the spatial structure mapping layer h_1 consists of 8 images of the same size as the original image, each representing a mapping of the original image rotated by a certain angle, so that the target appearance deformation caused by rotation can be handled effectively; π denotes the circular constant and ‖·‖ the absolute value.
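As an illustration of such a mapping layer, the sketch below produces 8 rotated copies of an input image; the rotation angles (multiples of 2π/8) and the nearest-neighbour sampling are assumptions of this sketch rather than details taken from the patent.

```python
import numpy as np

def rotation_mapping_layer(img, n_dirs=8):
    """Spatial-structure mapping layer: n_dirs rotated copies of img.

    Each copy is img rotated about its centre by k*2*pi/n_dirs using
    nearest-neighbour sampling; pixels rotated in from outside the
    frame are filled with 0.  Returns shape (n_dirs, H, W).
    """
    H, W = img.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    ys, xs = np.mgrid[0:H, 0:W]
    out = np.zeros((n_dirs, H, W), dtype=img.dtype)
    for k in range(n_dirs):
        a = 2.0 * np.pi * k / n_dirs
        # Inverse rotation: for every destination pixel, find its source.
        sx = np.cos(a) * (xs - cx) + np.sin(a) * (ys - cy) + cx
        sy = -np.sin(a) * (xs - cx) + np.cos(a) * (ys - cy) + cy
        si, sj = np.rint(sy).astype(int), np.rint(sx).astype(int)
        ok = (si >= 0) & (si < H) & (sj >= 0) & (sj < W)
        out[k][ok] = img[si[ok], sj[ok]]
    return out

maps = rotation_mapping_layer(np.arange(25, dtype=float).reshape(5, 5))
```

The k = 0 copy reproduces the input and the k = 4 copy is the 180° rotation, which makes the layer easy to sanity-check.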
Further, a spatial structure mapping layer h_2 is defined to measure the deformation of the target appearance caused by spatial scaling. In its formula, three scaling parameters are applied through a linear scaling window that performs local window scaling on the preceding layer; preferred values of the scaling parameters were fixed by experiment.
The spatial structure mapping layers h_1 and h_2 jointly establish an appearance deformation model for processing the appearance deformation of the target.
In the above formula, the linear parameter of the neural network maps the result of the mapping layers to the feature extraction layer h_3 to complete the feature extraction, together with a linear bias parameter; the feature extraction layer maps the high-dimensional image data into a one-dimensional feature space, reducing the data dimension. σ is a nonlinear excitation function that enables the neural network to process nonlinear sample data; it is defined as a three-segment piecewise function, which further improves the classification performance of the activation function.
In the above, Θ is the linear parameter of the full connection layer h_4, β_2 is the corresponding linear bias parameter, and the nonlinear activation function σ is defined as before. Θ has the form of a matrix; combined with the dimension characteristic value above, when the dimension characteristic value is small, Θ is optimized to reduce the number of parameters and streamline the neural network model. By the principle of linear-algebraic eigenvalue decomposition,

Θ = V Γ V⁻¹

where V is an orthogonal matrix composed of the eigenvectors of Θ and Γ is a diagonal matrix of its eigenvalues. By reducing the number of eigenvalues (i.e. setting the smaller ones to 0), the number of parameters of the original matrix Θ can be decreased. In the present invention, when the dimension characteristic value is below the threshold, only the leading eigenvalues, the number of which was fixed by experiment, are retained, and the number of effective elements of the full connection layer h_4 is reduced accordingly.
Specifically, let Γ = diag(τ_1, …, τ_Q), where Q is the total number of eigenvalues of Θ and τ_1, …, τ_Q are arranged in order from large to small. When the dimension characteristic value is below the threshold, the trailing eigenvalues up to τ_Q are set to 0, i.e. the number of parameters of the original matrix Θ is reduced.
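This eigenvalue reduction can be sketched in a few lines; the sketch assumes a symmetric Θ so that the eigenvector matrix V is orthogonal and V⁻¹ = Vᵀ (the patent states only that V is orthogonal), and the matrix values are invented for the example.

```python
import numpy as np

def truncate_theta(theta, keep):
    """Reduce the parameter content of a layer matrix Θ.

    Θ is eigendecomposed as V Γ V^{-1}; all but the `keep`
    largest-magnitude eigenvalues are zeroed and Θ is rebuilt,
    mirroring the Θ = V Γ V^{-1} reduction described above.
    """
    vals, vecs = np.linalg.eigh(theta)      # eigenvalues in ascending order
    order = np.argsort(-np.abs(vals))       # largest magnitude first
    gamma = np.zeros_like(vals)
    gamma[order[:keep]] = vals[order[:keep]]  # keep leading eigenvalues
    return vecs @ np.diag(gamma) @ vecs.T     # V Γ V^{-1}, V orthogonal

theta = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
approx = truncate_theta(theta, keep=1)      # rank-1 reduced version of Θ
```

Keeping all Q eigenvalues reproduces Θ exactly, while keeping fewer yields a lower-rank matrix with correspondingly fewer effective parameters.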
The full connection layer is followed by the output layer, which produces the classification O of the input image sample:

O = σ(W_o h_4 + b_o)

where W_o denotes the linear parameter of the output layer and b_o the corresponding linear bias parameter (symbols chosen here for readability), and σ is the nonlinear activation function defined as before.
The neural network model (defined by formulas 8-14) is trained to determine its parameters, including the linear scaling windows, the linear parameters and the linear bias parameters. The image samples used in training are manually labelled with classification values, which serve as the training ground truth O*. A cost function is defined as the difference between the neural network's output value and the training truth; the BP algorithm iteratively optimizes the network model with the objective of making this cost function converge, and the parameter values obtained at convergence complete the training.
After training, the neural network model can be used for detecting the shot images to obtain the positions of different targets in the images acquired by the camera.
Step 2. Real-environment coordinate resolution for target facilities: the image coordinates of a certain target facility, obtained according to step 1, are used to calculate the coordinates of the facility in the real environment (i.e. in the camera coordinate system), providing the basis for establishing a model of the facility at the corresponding position in the virtual environment.
Suppose a target facility has image coordinates (u_1, v_1) in camera C_1 and (u_2, v_2) in camera C_2, both obtained according to step 1. Substituting them into formula 2 of step 1 gives one projection equation per camera. After the scale factors s_1 and s_2 are eliminated, the only unknown parameters in these equations are the camera coordinates X, Y and Z of the facility; solving the equation system yields them.
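This coordinate resolution can be sketched as a standard two-view linear triangulation; the DLT-style elimination of the scale factors below is one common formulation, with an assumed shared intrinsic matrix K and an illustrative stereo configuration, not the patent's exact derivation.

```python
import numpy as np

def triangulate(uv1, uv2, K, R2, t2):
    """Recover camera-frame coordinates (X, Y, Z) from two pixel observations.

    uv1: pixel in the reference camera (identity pose); uv2: pixel in a
    second camera with pose (R2, t2) relative to the reference; K: shared
    3x3 intrinsic matrix.  Eliminating the scale factors turns the two
    projection equations into a homogeneous linear system A p = 0,
    solved via SVD.
    """
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # reference camera
    P2 = K @ np.hstack([R2, t2.reshape(3, 1)])          # second camera
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    p = Vt[-1]                     # null vector = homogeneous solution
    return p[:3] / p[3]            # de-homogenise to (X, Y, Z)

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R2, t2 = np.eye(3), np.array([-0.5, 0.0, 0.0])   # 0.5 m stereo baseline
P = np.array([0.2, -0.1, 4.0])                   # ground-truth point
uv1 = (K @ P)[:2] / P[2]
uv2 = (K @ (R2 @ P + t2))[:2] / (R2 @ P + t2)[2]
P_hat = triangulate(uv1, uv2, K, R2, t2)
```

With noiseless observations the recovered point matches the ground truth; with noisy detections the SVD solution is the least-squares compromise between the two rays.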
A virtual mapping of the target facility is then established in the virtual environment at the coordinates (X, Y, Z).
The method establishes a digital twin model for the operation virtualization and abstraction of the crane through the related steps.
First, the objects to be operated on by the crane are identified by the method of step 1 of the invention, and their coordinates in the images are obtained.
Further, the coordinates in the several images are obtained according to step 1, and the camera coordinates of the corresponding target facilities are calculated by the method described in step 2. Taking the camera coordinates of each target as the reference, the virtual target is drawn at the corresponding position in the virtual scene.
When the targets in the real scene change, their coordinates are updated with the same method and the targets in the virtual scene are redrawn, realizing linkage between the targets of the real and virtual scenes; an operator can thus carry out crane operation training in a virtual environment whose operating experience is close to that of the real environment, achieving the training purpose.
The invention provides an intelligent crane operation target identification method based on digital twin technology that automatically detects images of target facilities in a real environment and acquires their positions, so that virtual mappings of the facilities are rebuilt in the digital twin virtual environment. Compared with conventional neural network models, it copes better with the target deformation caused by the wide range of shooting angles, raising both target detection accuracy and efficiency. Table 1 compares the method with classical neural network models: it achieves higher detection accuracy and faster recognition, and thus completes the intelligent crane operation target identification task of the digital twin technology more effectively.
TABLE 1

| Reference model | Target detection success rate (error < 3 pixel) | Target recognition time (average of 50 targets) |
| --- | --- | --- |
| AlexNet | 71.7% | 23 seconds |
| YOLO | 83.1% | 101 seconds |
| ResNet | 85.4% | 355 seconds |
| The invention | 90.7% | 11 seconds |
Claims (3)
1. The intelligent crane operation target identification method based on the digital twin technology is characterized by comprising the following steps of:
the images captured by cameras C_1, C_2, …, C_r are recorded as I_1, I_2, …, I_r respectively; applying the template operation to each image sample yields its response maps in 8 directions Υ_k:

Υ_k(i, j) = S(i, j) × T_k(i, j), k = 1, …, 8

wherein S(i, j) is the sample image and T_k(i, j) is a template;

singular value decomposition is performed on each response map to obtain its singular values γ_{k,n} arranged from large to small, n indexing the n-th singular value of the response map Υ_k, normalized to the interval 0-1;
further, the dimension characteristic value of the image sample is calculated; it reflects the difference between the response maps of the image sample in different directions;
a neural network model is constructed, wherein after the feature extraction layer h_3 of the neural network model a full connection layer h_4 is established:

Θ is the linear parameter of the full connection layer h_4, β_2 is the corresponding linear bias parameter, and σ is a nonlinear activation function;
Θ = V Γ V⁻¹

wherein V is an orthogonal matrix composed of the eigenvectors of Θ, and Γ is a diagonal matrix composed of the eigenvalues of Θ;

wherein Q is the total number of eigenvalues of Θ, and τ_1, …, τ_Q are arranged in order from large to small; when the dimension characteristic value is below the threshold, the trailing eigenvalues up to τ_Q are set to 0, i.e. the number of parameters of the original matrix Θ is reduced;
the full connection layer is followed by the output layer;
acquiring the positions of different targets in images acquired by a camera; and drawing a virtual target at a corresponding position in the virtual scene by taking the camera coordinates of the target as a reference, and establishing a digital twin model of the crane operation target.
2. The intelligent crane operation target identification method based on the digital twin technology according to claim 1, characterized in that: the method further comprises obtaining the image coordinates of a certain target facility in the images and calculating the coordinates of the facility in the real environment, i.e. its coordinates in the camera coordinate system, providing the basis for building a model of the target facility at the corresponding position in the virtual environment.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202310157211.5A | 2023-02-23 | 2023-02-23 | Intelligent crane operation target identification method based on digital twin technology |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN115849202A | 2023-03-28 |
| CN115849202B | 2023-05-16 |
Legal Events

| Code | Title |
| --- | --- |
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |