CN109800708A - Deep-learning-based method for intelligent recognition of damage in aero-engine borescope images - Google Patents
Deep-learning-based method for intelligent recognition of damage in aero-engine borescope images
- Publication number
- CN109800708A CN109800708A CN201910048264.7A CN201910048264A CN109800708A CN 109800708 A CN109800708 A CN 109800708A CN 201910048264 A CN201910048264 A CN 201910048264A CN 109800708 A CN109800708 A CN 109800708A
- Authority
- CN
- China
- Prior art keywords
- image
- aero
- neural networks
- convolutional neural
- damage
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a deep-learning-based method for intelligent recognition of damage in aero-engine borescope images, belonging to the field of aero-engine non-destructive testing. The method includes: obtaining the network weights of a fully convolutional neural network that reaches a preset accuracy requirement on a test set, the test set consisting of multiple labeled aero-engine borescope images; loading the network weights to initialize the fully convolutional neural network; acquiring an aero-engine borescope image; preprocessing the borescope image to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network; and processing the preprocessed image with the initialized fully convolutional neural network to obtain the damage regions in the borescope image and the damage class corresponding to each region. Through the above technical solution, the present invention can intelligently recognize the damage regions in borescope images and their corresponding classes, thereby improving the efficiency and precision of borescope inspection and reducing the influence of subjective human factors in the inspection process.
Description
Technical field
The invention belongs to the technical field of aero-engine damage recognition, and in particular relates to a deep-learning-based method for intelligent recognition of damage in aero-engine borescope images.
Background art
As the core component of an aircraft, the engine has a major influence on flight safety. During engine operation the internal temperature is high and the pressure is large, so the internal structure of the engine often develops various kinds of damage, such as cracks and burn-through. If such damage is not discovered in time, it poses a great threat to civil aviation flight safety. Airlines therefore use various inspection measures to discover the safety hazards of the engine structure in time.
Borescope inspection is one of the important means of engine inspection. A borescope technician inserts a borescope camera into the engine, captures photos and videos of the engine interior, finds damage such as cracks and burns in those photos and videos, and finally produces a borescope inspection report that guides further maintenance and repair work. However, borescope inspection is time-consuming and labor-intensive: the inspection of a single engine can take tens of hours. Moreover, its accuracy is limited because it is influenced by the subjective factors of the inspectors. With China's economic development and accelerated urbanization, domestic and overseas airlines have grown rapidly in recent years. Because of its limited efficiency and precision and its high labor cost, traditional borescope inspection can no longer satisfy the ever-increasing demand for engine borescope inspection.
Summary of the invention
To solve the above-mentioned problems, the present invention provides a deep-learning-based method for intelligent recognition of damage in aero-engine borescope images, comprising: obtaining the network weights of a fully convolutional neural network that reaches a preset accuracy requirement on a test set, the test set consisting of multiple labeled aero-engine borescope images, each labeled aero-engine borescope image being one in which experienced inspectors have marked the damage regions and the damage class corresponding to each region; loading the network weights to initialize the fully convolutional neural network; acquiring an aero-engine borescope image; preprocessing the aero-engine borescope image to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network; and processing the preprocessed image with the initialized fully convolutional neural network to obtain the damage regions of the aero-engine borescope image and the damage class corresponding to each region.
In the method as described above, preferably, processing the preprocessed image with the initialized fully convolutional neural network to obtain the damage regions of the aero-engine borescope image and the corresponding damage classes specifically includes: performing feature extraction on the preprocessed image with the convolutional part of the initialized fully convolutional neural network to obtain an image feature tensor; upsampling the image feature tensor with the deconvolutional part of the initialized network to obtain, for each pixel of the borescope image, the probability that the pixel belongs to each damage class; obtaining the damage class of each pixel from these per-class probabilities; and obtaining the damage regions of the borescope image and their corresponding damage classes from the per-pixel damage classes.
In the method as described above, preferably, obtaining the network weights of the fully convolutional neural network that reaches the preset accuracy requirement on the test set specifically includes: obtaining multiple labeled aero-engine borescope images; dividing the labeled images proportionally into a test set and a training set, and preprocessing the labeled images in the training set; constructing and initializing the fully convolutional neural network; training the initialized network with the preprocessed training set to obtain trained network weights; and verifying with the test set whether the network updated with the trained weights is effective — if the verification passes, the trained network weights are taken as the network weights of the fully convolutional neural network that reaches the preset accuracy requirement on the test set.
In the method as described above, preferably, after the multiple labeled aero-engine borescope images are divided proportionally into a test set and a training set, the method also includes: applying data augmentation to each labeled aero-engine borescope image in the training set to obtain augmented aero-engine borescope images. Accordingly, the images in the training set include: the labeled aero-engine borescope images and the augmented images corresponding to them.
In the method as described above, preferably, constructing and initializing the fully convolutional neural network specifically includes: building the convolutional part of the network, which performs feature extraction on the received labeled aero-engine borescope image to obtain an image feature tensor; building the deconvolutional part of the network, which upsamples the received image feature tensor to obtain the probability that each pixel of the labeled borescope image belongs to each damage class; initializing the convolutional part with pre-trained weights, the pre-trained weights being obtained by training the convolutional part on a public image dataset; and initializing the deconvolutional part.
In the method as described above, preferably, the convolutional part includes multiple convolution blocks, each comprising convolutional layers with a first activation function and a pooling layer; the deconvolutional part includes a deconvolution layer and a convolutional layer with a second activation function; and the first and second activation functions are different activation functions.
In the method as described above, preferably, the convolutional part includes 5 convolution blocks, each being two consecutive convolutional layers with the relu activation function followed by one pooling layer; and the deconvolutional part includes a deconvolution layer and one convolutional layer with the sigmoid activation function.
In the method as described above, preferably, preprocessing the aero-engine borescope image specifically includes: scaling the size of the borescope image to meet the input size requirement of the fully convolutional neural network; and standardizing the scaled image so that the mean of all its pixels becomes 0 and the variance becomes 1.
In the method as described above, preferably, training the initialized fully convolutional neural network with the preprocessed training set to obtain the trained network weights specifically includes: dividing the training set into batches, each batch containing N labeled aero-engine borescope images; and repeating a training step on the initialized network until all batches have been traversed and the value of the objective function meets a preset condition, the network weights corresponding to that value of the objective function being taken as the trained network weights. The training step specifically includes: predicting, for each labeled aero-engine borescope image in a batch, the probability that each pixel belongs to each damage class; obtaining the predicted damage class of each pixel from these per-class probabilities; obtaining a value that expresses the gap between the predicted damage class of each pixel and the damage class labeled by the inspectors; taking as the objective function the average of these gap values over all pixels of all labeled borescope images in the batch; and, based on backpropagation, computing from the objective function the gradient of each weight of the fully convolutional neural network and using an optimization method to adjust the value of each weight according to the computed gradients. N is a positive integer greater than or equal to 1.
In the method as described above, preferably, verifying with the test set whether the fully convolutional neural network updated with the trained network weights is effective, and, if the verification passes, taking the trained network weights as the network weights of the fully convolutional neural network that reaches the preset accuracy requirement on the test set, specifically includes: presetting evaluation indexes — for the prediction result on one labeled aero-engine borescope image, two evaluation indexes are used, the pixel accuracy PA and the mean intersection-over-union mIOU:

PA = Σ_i n_ii / Σ_i t_i,  mIOU = (1/n_cl) Σ_i n_ii / (t_i + Σ_j n_ji − n_ii)

where n_ab is the number of pixels whose labeled damage class is a and which the fully convolutional neural network predicts as damage class b (a and b stand for i or j in the formulas), t_i is the number of pixels the inspectors labeled as damage class i and satisfies t_i = Σ_j n_ij, n_cl is the number of damage classes in the label set, Σ_j n_ji is the number of all pixels predicted as the i-th damage class, n_ii / (t_i + Σ_j n_ji − n_ii) is, for damage class i, the overlap between the inspector-labeled damage region and the predicted damage region, and (1/n_cl) Σ_i denotes averaging over all damage classes; preprocessing each labeled aero-engine borescope image in the test set to obtain preprocessed images that meet the input requirements of the fully convolutional neural network; predicting each preprocessed image with the fully convolutional neural network updated with the trained network weights to obtain the probability that each pixel of the labeled borescope image belongs to each damage class, and then obtaining the predicted damage class of each pixel; computing the PA and mIOU of each labeled borescope image from the labeled and predicted damage classes of its pixels; judging whether the average PA and mIOU over all labeled borescope images in the test set meet preset threshold requirements; and, if so, taking the weights of the fully convolutional neural network at this moment as the network weights of the fully convolutional neural network that reaches the preset accuracy requirement on the test set.
The technical solution provided by the embodiments of the present invention has the following benefits: using a fully convolutional neural network to intelligently recognize the damage regions in aero-engine borescope images effectively improves the working efficiency and accuracy of the existing manual recognition method. During borescope inspection it not only helps the inspectors locate damage and improves inspection efficiency, but can also help them find damage that is hard for humans to discover or is often overlooked (i.e., it can assist in identifying damage regions humans have missed), which can further improve the precision of the inspection process and reduce the influence of subjective human factors. It can also work efficiently for long periods, reducing the consumption of manpower, lowering the probability that staff misjudge or miss damage under fatigue, and improving recognition precision.
Brief description of the drawings
Fig. 1 is a flow diagram of a deep-learning-based method for intelligent recognition of damage in aero-engine borescope images provided by an embodiment of the present invention.
Fig. 2 is a flow diagram of another deep-learning-based method for intelligent recognition of damage in aero-engine borescope images provided by an embodiment of the present invention.
Specific embodiments
To make the object, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below in conjunction with the attached drawings.
It should be understood that the damage classes below include both the classes that have damage, such as crack and burn, and the class without damage, i.e., undamaged.
Referring to Fig. 1, one embodiment of the invention provides a deep-learning-based method for intelligent recognition of damage in aero-engine borescope images, comprising the following steps:
Step 101: obtain the network weights of the fully convolutional neural network that reaches the preset accuracy requirement on the test set, where the test set consists of multiple labeled aero-engine borescope images, a labeled aero-engine borescope image being one in which the inspectors have marked the damage regions and the damage class corresponding to each region.
Step 102: load the network weights to initialize the fully convolutional neural network.
Step 103: acquire an aero-engine borescope image.
Step 104: preprocess the aero-engine borescope image to obtain a preprocessed image that meets the input requirements of the fully convolutional neural network.
Step 105: process the preprocessed image with the initialized fully convolutional neural network to obtain the damage regions of the aero-engine borescope image and the damage class corresponding to each region.
By using a fully convolutional neural network to intelligently recognize the damage regions in aero-engine borescope images and their corresponding damage classes, this embodiment effectively improves the working efficiency and accuracy of the existing manual recognition method. During borescope inspection it not only helps the inspectors locate damage and improves inspection efficiency, but can also help them find damage that is hard for humans to discover or is often overlooked (i.e., it can assist in identifying damage regions humans have missed), further improving the precision of the inspection process and reducing the influence of subjective human factors. It can also work efficiently for long periods, reducing the consumption of manpower, lowering the probability that staff misjudge or miss damage under fatigue, and improving recognition precision.
Referring to Fig. 2, another embodiment of the present invention provides a deep-learning-based method for intelligent recognition of damage in aero-engine borescope images, comprising the following steps:
Step 201: obtain the test set and the training set, and preprocess the images in the training set to obtain images that meet the input requirements of the fully convolutional neural network.
Specifically, first, multiple aero-engine borescope images are obtained and their damage regions and corresponding damage classes are marked; an image after labeling is called a labeled aero-engine borescope image. For example: the inspectors obtain multiple aero-engine borescope images by shooting on site or collecting historical images, and in each image mark the damage regions and their corresponding damage classes with geometric shapes; a geometric shape can be a polygon or another shape. When labeling, the inspectors (or skilled borescope practitioners, or experts in the field of aero-engine borescope image damage recognition) mark the vertices of a polygon one by one and join the vertices head to tail to obtain a polygonal damage region. The class of a labeled damage region may be, for example, crack or burn-through. It should be noted that multiple damage regions may appear in one image, and damage regions of different kinds may appear simultaneously. In general, damage regions do not overlap; if they do, the overlapping region is labeled as crack damage. During labeling, the inspectors only mark the regions that have damage; unlabeled image regions are set as undamaged regions by default.
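A pixel's membership in such a polygonal label (vertices joined head to tail) can be decided with the standard even-odd ray-casting test. A minimal sketch, not part of the patent, of rasterizing such a polygon label:

```python
def point_in_polygon(x, y, verts):
    """Even-odd test: cast a ray from (x, y) and count crossings of the edges
    of the polygon obtained by joining the labeled vertices head to tail."""
    inside = False
    n = len(verts)
    for k in range(n):
        x1, y1 = verts[k]
        x2, y2 = verts[(k + 1) % n]
        # Edge straddles the scanline through y, and the crossing lies to the right of (x, y)
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside
```

Applying this test to every pixel turns a vertex list into a per-pixel damage mask of the kind the network is trained against.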
Then, the multiple labeled aero-engine borescope images are divided proportionally into a test set and a training set, i.e., one part of the labeled borescope images forms the test set and another part forms the training set. The ratio can be 80% to 20%, or another ratio; this embodiment does not limit it.
Next, each image in the training set is preprocessed to obtain images that meet the input requirements of the fully convolutional neural network described below. This includes: first scaling the size of each labeled aero-engine borescope image to meet the input size requirement of the network. This is illustrated taking as an example a network whose input size must be a multiple of 32, e.g., a size (height × width) of 576 × 768. Then the scaled image is standardized so that the mean of all its pixels becomes 0 and the variance becomes 1. For example, the pixel values are adjusted with the formula x' = x/127.5 − 1, where x is the pixel value of each point of the image before standardization and x' is the pixel value of each point after standardization.
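The scaling and standardization above can be sketched in a few lines of Python (an illustration, not part of the patent; rounding each dimension to the nearest multiple of 32 is an assumed policy — the patent only requires the scaled size to be such a multiple):

```python
def fit_to_multiple(size, multiple=32):
    """Round a (height, width) pair to the nearest nonzero multiple of
    `multiple`, so the scaled image meets the FCN input-size requirement."""
    return tuple(max(multiple, round(d / multiple) * multiple) for d in size)

def normalize(pixels):
    """Apply the patent's formula x' = x/127.5 - 1, mapping 8-bit pixel
    values in [0, 255] to [-1, 1] (roughly zero mean, unit variance)."""
    return [x / 127.5 - 1 for x in pixels]

print(fit_to_multiple((580, 770)))  # -> (576, 768), the size used in the example
```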
In order to optimize the training result of the fully convolutional neural network, the method also includes: applying data augmentation to each labeled aero-engine borescope image in the training set to obtain augmented aero-engine borescope images. After data augmentation, the training set contains both the labeled aero-engine borescope images and the augmented images corresponding to them. The data augmentation can be flipping the labeled images: horizontal flipping and/or vertical flipping and/or horizontal-vertical flipping, where horizontal-vertical flipping means performing a vertical flip after a horizontal flip. In other embodiments it can also be rotation; this embodiment does not limit it.
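The three flips can be sketched on an image stored as a list of pixel rows (an illustration only; in practice the same flip would also be applied to the corresponding label mask so image and annotation stay aligned):

```python
def flip_h(img):
    """Horizontal flip: mirror each pixel row."""
    return [row[::-1] for row in img]

def flip_v(img):
    """Vertical flip: reverse the order of the rows."""
    return img[::-1]

def flip_hv(img):
    """Horizontal-vertical flip: a vertical flip after a horizontal flip."""
    return flip_v(flip_h(img))
```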
Step 202: construct and initialize the fully convolutional neural network.
Specifically, first, the convolutional part of the fully convolutional neural network is built; it performs feature extraction on the labeled aero-engine borescope image to obtain an image feature tensor. The convolutional part includes multiple convolution blocks, each comprising convolutional layers with a first activation function and a pooling layer. The convolutional part is illustrated below with 5 convolution blocks and 2 convolutional layers per block. The convolutional part comprises 5 convolution blocks, each with the structure conv+relu+conv+relu+pooling, i.e., two consecutive convolutional layers with the relu activation function followed by one pooling layer. The kernel size and stride of the convolutional layers are 3 × 3 and 1 respectively, the kernel size and stride of the pooling layers are 2 × 2 and 2 respectively, and the first activation function is relu. The 5 convolution blocks reduce the input image size by a factor of 32. It should be noted that the specific composition of the convolutional part can be adjusted according to the actual situation; this embodiment limits neither the number of convolution blocks nor their specific structure.
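The size arithmetic under these hyperparameters can be checked with a short sketch (illustration only; the patent does not state the convolution padding, so "same" padding — which keeps the 3 × 3, stride-1 convolutions size-preserving — is an assumption):

```python
def block_out(h, w):
    """One convolution block: the two 3x3, stride-1 convolutions keep the
    spatial size (assuming 'same' padding) and the 2x2, stride-2 pooling
    halves it."""
    return h // 2, w // 2

def encoder_out(h, w, blocks=5):
    """Spatial size of the feature tensor after the convolutional part."""
    for _ in range(blocks):
        h, w = block_out(h, w)
    return h, w

print(encoder_out(576, 768))  # -> (18, 24): the input reduced 32 times
```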
Then, the deconvolutional part of the fully convolutional neural network is built; it upsamples the image feature tensor to obtain the probability that each pixel of the labeled borescope image belongs to each damage class. The deconvolutional part includes a deconvolution layer and a convolutional layer with a second activation function, the second activation function differing from the first. The deconvolutional part is illustrated below with 1 convolutional layer. The deconvolutional part comprises a deconvolution layer and one convolutional layer with the sigmoid activation function. The kernel size and stride of the deconvolution layer are 64 × 64 and 32 respectively, the kernel size and stride of the convolutional layer are 1 × 1 and 1 respectively, and the second activation function is sigmoid. The deconvolutional part enlarges the input image size by a factor of 32. It should be noted that the specific composition of the deconvolutional part can be adjusted according to the actual situation; this embodiment limits neither the number of its layers nor their specific structure.
Next, the fully convolutional neural network is initialized, which includes initializing its convolutional part and initializing its deconvolutional part. The convolutional part can be initialized by setting each of its weights with random noise; the deconvolution layer of the deconvolutional part can be initialized with a bilinear-interpolation transformation matrix; and the convolutional layer of the deconvolutional part can be initialized by setting its weights with random noise. The random noise can be normally distributed.
In order to make the fully convolutional neural network converge quickly, the weights of the convolutional part are instead initialized with pre-trained weights. The pre-trained weights are obtained by training the convolutional part on a public image dataset, for example ImageNet, a large-scale image dataset used for research on visual object recognition algorithms.
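The bilinear-interpolation initialization mentioned above is commonly built from a 1-D triangular weight profile whose outer product with itself gives the 2-D deconvolution kernel. A sketch of the standard construction (an assumption about the exact recipe; the patent only names bilinear interpolation):

```python
def bilinear_kernel_1d(factor):
    """1-D bilinear upsampling weights for an integer upsampling factor.
    For factor 32 this yields 64 weights, matching the 64x64 deconvolution
    kernel of the embodiment; the 2-D kernel is the outer product of this
    vector with itself."""
    size = 2 * factor - factor % 2
    center = (size - 1) / 2 if size % 2 == 1 else factor - 0.5
    return [1 - abs(i - center) / factor for i in range(size)]
```

For factor 2 this gives the familiar [0.25, 0.75, 0.75, 0.25] profile.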
Step 203: train the initialized fully convolutional neural network with the preprocessed training set to obtain the trained network weights.
Specifically, first, all preprocessed labeled aero-engine borescope images in the training set are divided into batches of N images each, where N is a natural number greater than or equal to 1. When N = 1 each batch contains a single image, which amounts to not dividing the preprocessed training set into batches.
Then, the training step is executed. It includes: predicting, for each labeled aero-engine borescope image in a batch, the probability that each pixel belongs to each damage class; obtaining the damage class of each pixel from these per-class probabilities — the class obtained at this point is the predicted damage class; for example, the class with the highest probability is chosen as the damage class of the pixel; obtaining a value that expresses the gap between the predicted damage class and the labeled damage class of each pixel — this value can be computed with the cross-entropy, the labeled damage class being the one marked by the inspectors; taking as the objective function the average of these gap values over all pixels of all labeled borescope images in the batch — when the gap is computed with the cross-entropy, this objective function can be called the cross-entropy function; and, based on backpropagation, computing from the objective function the gradient of each weight of the fully convolutional neural network, then using an optimization method to update (modify or adjust) the value of each weight according to the computed gradients. The optimization method is an optimization method from machine learning, and may be stochastic gradient descent, RMSPROP, or ADAM.
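The per-pixel class choice and the cross-entropy gap described in this training step can be sketched as follows (illustrative helper functions, not code from the patent):

```python
import math

def pixel_class(probs):
    """Predicted damage class of one pixel: the class of maximum probability."""
    return max(range(len(probs)), key=lambda c: probs[c])

def cross_entropy(label_onehot, probs, eps=1e-12):
    """Gap between the inspector label (one-hot) and the predicted class
    probabilities of one pixel; averaging this over all pixels of a batch
    gives the objective function."""
    return -sum(y * math.log(p + eps) for y, p in zip(label_onehot, probs))
```

With probabilities [0.1, 0.7, 0.2], `pixel_class` picks class 1 and the gap to a one-hot label on class 1 is −ln 0.7.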
The objective function L can be expressed as:

L = (1/(N·H·W)) Σ_n Σ_i Σ_j Σ_c d(Y_nijc, Ŷ_nijc)

where N is the number of images per batch, H is the image height, W is the image width, and C is the number of image channels; d(·,·) computes the gap between its two inputs, i.e., it is the gap function; and Y_nijc and Ŷ_nijc respectively denote the inspector-labeled and the network-predicted damage class of the pixel at position (i, j) in channel c of the n-th image of the batch.
Next, the above training step is executed on the batches one after another until the value of the objective function meets a preset condition, and the network weights corresponding to that value of the objective function are taken as the trained network weights. The preset condition can be that the value of the objective function no longer decreases, or that it is smaller than a preset target value, such as 10⁻⁵ or 10⁻⁶.
Step 204: verify with the test set whether the fully convolutional neural network updated with the trained network weights is effective; if the verification passes, the trained network weights are taken as the network weights of the fully convolutional neural network that reaches the preset accuracy requirement on the test set.
Specifically, the test set and preset evaluation indexes are used to verify whether the fully convolutional neural network updated with the trained network weights is effective. The evaluation indexes can be the pixel accuracy PA and the mean intersection-over-union mIOU described below, but other evaluation indexes can also be used to measure the effectiveness of the network according to different application demands, for example the precision or the recall computed from the per-pixel prediction results. The verification process is described in detail below taking the pixel accuracy PA and the mean intersection-over-union mIOU as an example.
First, the evaluation indexes are preset: pixel accuracy and mean intersection over union. The pixel accuracy PA (Pixel Accuracy) is PA = Σ_i n_ii / Σ_i t_i. The mean intersection over union mIOU (mean Intersection over Union) denotes the average, over all damage classes, of the overlap between the predicted damage region and the actually marked (i.e. inspector-labelled) damage region: mIOU = (1/n_cl) · Σ_i n_ii / (t_i + Σ_j n_ji − n_ii). Here, n_ab is the number of pixels of damage class a that the full convolutional neural network predicts as damage class b (a and b take the values i or j in the formulas); t_i is the number of pixels labelled as damage class i by the inspectors and satisfies t_i = Σ_j n_ij; n_cl is the number of damage classes contained in the label set; Σ_j n_ji denotes the number of all pixels predicted as the i-th damage class; n_ii / (t_i + Σ_j n_ji − n_ii) denotes the overlap for damage class i; and Σ_i denotes summation over all damage classes.
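The two indexes above can be computed directly from a confusion matrix. The following numpy sketch (an illustration, not the patent's code) uses n[a, b] to count pixels of labelled class a predicted as class b, so that t_i = Σ_j n_ij is a row sum and Σ_j n_ji is a column sum:

```python
import numpy as np

# PA = sum_i n_ii / sum_i t_i
# mIOU = (1/n_cl) * sum_i n_ii / (t_i + sum_j n_ji - n_ii)

def pixel_accuracy(n):
    # sum_i t_i equals the total pixel count n.sum()
    return np.trace(n) / n.sum()

def mean_iou(n):
    t = n.sum(axis=1)          # t_i: pixels labelled as class i
    pred = n.sum(axis=0)       # sum_j n_ji: pixels predicted as class i
    diag = np.diag(n)          # n_ii: correctly predicted pixels of class i
    return np.mean(diag / (t + pred - diag))

# toy 2-class confusion matrix
n = np.array([[8.0, 2.0],
              [1.0, 9.0]])
```

For this toy matrix, PA = 17/20 and mIOU averages the per-class IoU values 8/11 and 9/12.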
Then, each aero-engine borescope label image in the test set is pre-processed to obtain pre-processed images that meet the input requirements of the full convolutional neural network. The full convolutional neural network updated with the trained network weights predicts, for each pre-processed image, the probability that each pixel in the borescope label image belongs to each damage class, and the predicted damage class of each pixel is then obtained. For a detailed description of this step, refer to the related content of steps 201–203 above, which is not repeated here.
Next, according to the labelled damage class and the predicted damage class of each pixel in an aero-engine borescope label image, the PA and mIOU of that image are calculated, and it is judged whether the average PA and mIOU over all images in the test set meet their respective preset threshold requirements; the thresholds are usually set by the inspectors. If the judgment is that they are met, the current weights of the full convolutional neural network are taken as the network weights of the full convolutional neural network that reaches the preset accuracy requirement on the test set, and the full convolutional neural network is considered usable.
If the judgment is that they are not met, the hyper-parameters are reselected and the network is retrained: the above training step is executed on each of the batches in turn until the value of the objective function meets the preset condition, the network weights corresponding to the value of the objective function that meets the preset condition are taken as the trained network weights, and step 204 is then executed again. The hyper-parameters include the batch size, the choice of optimization method and the parameters of the optimization method. For example, N may be adjusted from 2 to 3, or the optimization method may be changed from stochastic gradient descent to the ADAM method, with the parameters of the optimization method modified accordingly. If adjusting the hyper-parameters still fails to yield network weights of the full convolutional neural network that reach the preset accuracy requirement on the test set, more training data are collected on the basis of the original training set, i.e. additional aero-engine borescope images are collected, and the network is then trained again, i.e. steps 203–204 are executed again.
Step 205: the network weights are loaded to initialize the full convolutional neural network.
Step 206: an aero-engine borescope image is acquired and pre-processed to obtain a pre-processed image that meets the input requirements of the full convolutional neural network. For a detailed description of this step, refer to the related content of steps 201–203 above, which is not repeated here.
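The pre-processing in step 206 (detailed in claim 8: scale to the network's input size, then standardize to mean 0 and variance 1) can be sketched as follows. Nearest-neighbour scaling is used here only to keep the sketch dependency-free; a real implementation would more likely use a library resampler:

```python
import numpy as np

# Sketch of pre-processing: scale the borescope image to the input size
# required by the full convolutional neural network, then standardize so
# that all pixels have mean 0 and variance 1.

def preprocess(img, out_h, out_w):
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h     # nearest-neighbour row indices
    cols = np.arange(out_w) * w // out_w     # nearest-neighbour column indices
    scaled = img[rows][:, cols].astype(float)
    return (scaled - scaled.mean()) / scaled.std()
```

The output array then feeds directly into the network; the choice of nearest-neighbour interpolation is an assumption of this sketch, not prescribed by the patent.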
Step 207: the pre-processed image is processed with the initialized full convolutional neural network to obtain the damage regions of the aero-engine borescope image and the damage class corresponding to each damage region.
Specifically, first, feature extraction is performed on the pre-processed image with the convolution structure of the initialized full convolutional neural network to obtain an image feature tensor; the deconvolution structure of the initialized full convolutional neural network then up-samples the image feature tensor to obtain, for each pixel of the aero-engine borescope image, the probability that the pixel belongs to each damage class; and the damage class of each pixel is obtained from these probabilities. For a detailed description of this step, refer to the related content of steps 201–203 above, which is not repeated here.
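Turning the per-pixel probabilities produced by the deconvolution structure into a damage class per pixel amounts to taking, for each pixel, the class with the highest probability. A minimal numpy sketch (the H × W × C layout and the use of argmax are assumptions of this illustration):

```python
import numpy as np

# probs: H x W x C array, one probability per pixel per damage class.
# The damage class of each pixel is the class with the highest probability.

def pixel_classes(probs):
    return probs.argmax(axis=-1)     # H x W map of damage-class indices

# toy 2x2 image with 2 classes
probs = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.6, 0.4], [0.3, 0.7]]])
```

Each entry of the resulting map is the index of the predicted damage class for that pixel.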
Second, the damage regions of the aero-engine borescope image and the damage class corresponding to each region are obtained from the damage classes of the pixels. For example: from the damage class of each pixel, the distribution of the various damage classes over the borescope image is obtained, and the pixels of the same damage class are extracted to give the region corresponding to that damage class.
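The region-extraction step just described — grouping pixels of the same damage class into that class's region — can be sketched as below. Treating class 0 as "no damage" and representing a region by its pixel coordinates are assumptions of this sketch; a real implementation might also run connected-component labelling to split disjoint regions of the same class:

```python
import numpy as np

# class_map: H x W array of per-pixel damage-class indices.
# Returns, for each damage class, the list of (row, col) pixels in its region.

def damage_regions(class_map, background=0):
    regions = {}
    for c in np.unique(class_map):
        if c == background:          # assumption: class 0 means no damage
            continue
        ys, xs = np.nonzero(class_map == c)
        regions[int(c)] = list(zip(ys.tolist(), xs.tolist()))
    return regions
```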
By using a full convolutional neural network to intelligently identify the damage regions in aero-engine borescope images and the damage class of each region, the present embodiment effectively improves the working efficiency and accuracy of the existing manual identification method. During borescope inspection it not only helps the inspector locate damage, improving inspection efficiency, but can also help the inspector find damage that is hard to spot or easily overlooked (i.e. assist in identifying damage regions that would otherwise go unnoticed), which further improves the precision of the inspection process and reduces the influence of subjective human factors during inspection. It can work efficiently over long periods, reduces the consumption of labour, lowers the probability of misjudged or missed damage caused by inspector fatigue, and improves identification accuracy.
An embodiment of the invention further provides a deep-learning-based intelligent damage identification apparatus for aero-engine borescope images, configured to execute the above intelligent identification method and specifically comprising:
a first acquisition module for acquiring the network weights of the full convolutional neural network that reaches the preset accuracy requirement on the test set, wherein the test set consists of a plurality of aero-engine borescope label images, each aero-engine borescope label image being an aero-engine borescope image in which inspectors have marked damage regions and the damage class corresponding to each damage region;
a second acquisition module for acquiring an aero-engine borescope image;
a pre-processing module for pre-processing the aero-engine borescope image to obtain a pre-processed image that meets the input requirements of the full convolutional neural network; and
a full convolutional neural network module for loading the network weights to initialize the full convolutional neural network, and processing the pre-processed image with the initialized full convolutional neural network to obtain the damage regions of the aero-engine borescope image and the damage class corresponding to each damage region.
Preferably, when processing the pre-processed image with the initialized full convolutional neural network to obtain the damage regions of the aero-engine borescope image and the damage class corresponding to each damage region, the full convolutional neural network module is specifically configured to: perform feature extraction on the pre-processed image with the convolution structure of the initialized full convolutional neural network to obtain an image feature tensor; up-sample the image feature tensor with the deconvolution structure of the initialized full convolutional neural network to obtain, for each pixel of the aero-engine borescope image, the probability that the pixel belongs to each damage class; obtain the damage class of each pixel from these probabilities; and obtain the damage regions of the aero-engine borescope image and the damage class corresponding to each damage region from the damage classes of the pixels.
For the implementation of the first acquisition module, the second acquisition module, the pre-processing module and the full convolutional neural network module, refer to the related descriptions of steps 101–105 and steps 201–207 in the above embodiments, which are not repeated here.
By using a full convolutional neural network to intelligently identify the damage regions in aero-engine borescope images, the embodiment of the invention effectively improves the working efficiency and accuracy of the existing manual identification method; during borescope inspection it not only helps the inspector locate damage, improving inspection efficiency, but can also help the inspector find damage that is hard to spot or easily overlooked (i.e. assist in identifying damage regions that would otherwise go unnoticed), further improving the precision of the inspection process and reducing the influence of subjective human factors during inspection; it can work efficiently over long periods, reduces the consumption of labour, lowers the probability of misjudged or missed damage caused by inspector fatigue, and improves identification accuracy. It should be noted that the division of the above functional modules in the intelligent identification apparatus of the above embodiment is only illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, i.e. the internal structure of the system may be divided into different functional modules to complete all or part of the functions described above, for example the pre-processing module and the full convolutional neural network module may be combined into a single full convolutional neural network module. In addition, the intelligent identification apparatus of the above embodiment and the intelligent identification method belong to the same concept; the specific implementation process is detailed in the method embodiment and is not repeated here.
An embodiment of the invention further provides a deep-learning-based intelligent damage identification apparatus for aero-engine borescope images, specifically comprising: an image acquisition device, a processor, and a memory for storing instructions executable by the processor.
The processor is configured to: acquire the network weights of the full convolutional neural network that reaches the preset accuracy requirement on the test set, wherein the test set consists of a plurality of aero-engine borescope label images, each being an aero-engine borescope image in which inspectors have marked damage regions and the damage class corresponding to each damage region; load the network weights to initialize the full convolutional neural network; acquire an aero-engine borescope image through the image acquisition device; pre-process the aero-engine borescope image to obtain a pre-processed image that meets the input requirements of the full convolutional neural network; and process the pre-processed image with the initialized full convolutional neural network to obtain the damage regions of the aero-engine borescope image and the damage class corresponding to each damage region. The image acquisition device may be a camera.
For a detailed description of the image acquisition device and the processor, refer to the related content of steps 101–105 and 201–207 in the above embodiments, which is not repeated here.
An embodiment of the invention further provides a storage medium. When the instructions in the storage medium are executed by the processing component of the deep-learning-based intelligent damage identification apparatus for aero-engine borescope images, the intelligent identification apparatus is enabled to execute the above deep-learning-based intelligent damage identification method for aero-engine borescope images. The processing component includes a processor.
As will be understood from the relevant technical knowledge, the present invention may be realized by other embodiments without departing from its spirit or essential features. Therefore, the embodiments disclosed above are, in all respects, merely illustrative and not exclusive. All changes that fall within the scope of the present invention, or that are equivalent to the scope of the present invention, are included in the present invention.
Claims (10)
1. A deep-learning-based intelligent damage identification method for aero-engine borescope images, characterized in that the method comprises:
acquiring the network weights of a full convolutional neural network that reaches a preset accuracy requirement on a test set, wherein the test set consists of a plurality of aero-engine borescope label images, each aero-engine borescope label image being an aero-engine borescope image in which inspectors have marked damage regions and the damage class corresponding to each damage region;
loading the network weights to initialize the full convolutional neural network;
acquiring an aero-engine borescope image;
pre-processing the aero-engine borescope image to obtain a pre-processed image that meets the input requirements of the full convolutional neural network; and
processing the pre-processed image with the initialized full convolutional neural network to obtain the damage regions of the aero-engine borescope image and the damage class corresponding to each damage region.
2. The method according to claim 1, characterized in that processing the pre-processed image with the initialized full convolutional neural network to obtain the damage regions of the aero-engine borescope image and the damage class corresponding to each damage region specifically comprises:
performing feature extraction on the pre-processed image with the convolution structure of the initialized full convolutional neural network to obtain an image feature tensor;
up-sampling the image feature tensor with the deconvolution structure of the initialized full convolutional neural network to obtain, for each pixel of the aero-engine borescope image, the probability that the pixel belongs to each damage class;
obtaining the damage class of each pixel from the probability that the pixel belongs to each damage class; and
obtaining the damage regions of the aero-engine borescope image and the damage class corresponding to each damage region from the damage classes of the pixels.
3. The method according to claim 1, characterized in that acquiring the network weights of the full convolutional neural network that reaches the preset accuracy requirement on the test set specifically comprises:
acquiring a plurality of aero-engine borescope label images;
dividing the plurality of aero-engine borescope label images proportionally into a test set and a training set, and pre-processing the aero-engine borescope label images in the training set;
constructing and initializing the full convolutional neural network;
training the initialized full convolutional neural network with the pre-processed training set to obtain trained network weights; and
verifying, with the test set, whether the full convolutional neural network updated with the trained network weights is valid, and, if it is verified as valid, taking the trained network weights as the network weights of the full convolutional neural network that reaches the preset accuracy requirement on the test set.
4. The method according to claim 3, characterized in that, after dividing the plurality of aero-engine borescope label images proportionally into the test set and the training set, the method further comprises:
performing data-augmentation processing on each aero-engine borescope label image in the training set to obtain aero-engine borescope augmented images;
accordingly, the images in the training set comprise: the aero-engine borescope label images and the aero-engine borescope augmented images corresponding to the aero-engine borescope label images.
5. The method according to claim 3, characterized in that constructing and initializing the full convolutional neural network specifically comprises:
building the convolution structure of the full convolutional neural network, the convolution structure being used to perform feature extraction on a received aero-engine borescope label image to obtain an image feature tensor;
building the deconvolution structure of the full convolutional neural network, the deconvolution structure being used to up-sample the received image feature tensor to obtain, for each pixel of the aero-engine borescope label image, the probability that the pixel belongs to each damage class;
initializing the convolution structure with pre-trained weights, the pre-trained weights being obtained by training the convolution structure on a public image data set; and
initializing the deconvolution structure.
6. The method according to claim 5, characterized in that:
the convolution structure comprises a plurality of convolution blocks, each convolution block comprising convolutional layers with a first activation function and a pooling layer;
the deconvolution structure comprises deconvolution layers and a convolutional layer with a second activation function; and
the first activation function and the second activation function are different activation functions.
7. The method according to claim 6, characterized in that the convolution structure comprises 5 convolution blocks, each convolution block being a structure in which two consecutive convolutional layers with a relu activation function are followed by one pooling layer; and
the deconvolution structure comprises deconvolution layers and one convolutional layer paired with a sigmoid activation function.
8. The method according to claim 1, characterized in that pre-processing the aero-engine borescope image specifically comprises:
scaling the aero-engine borescope image to a size that meets the input size requirement of the full convolutional neural network; and
standardizing the scaled image so that the mean of all pixels of the scaled image becomes 0 and the variance becomes 1.
9. The method according to claim 3, characterized in that training the initialized full convolutional neural network with the pre-processed training set to obtain the trained network weights specifically comprises:
dividing the training set into a plurality of batches, each batch comprising N aero-engine borescope label images; and
repeating a training step on the initialized full convolutional neural network to traverse the batches until the value of an objective function meets a preset condition, and taking the network weights corresponding to the value of the objective function that meets the preset condition as the trained network weights;
the training step specifically comprising:
predicting, for each aero-engine borescope label image in a batch, the probability that each pixel belongs to each of the different damage classes;
obtaining the predicted damage class of each pixel from the probability that the pixel belongs to each of the different damage classes;
obtaining a value representing the gap between the predicted damage class of each pixel and the damage class labelled by the inspectors;
taking, as the objective function, the average of the values of the gap between the predicted damage class and the inspector-labelled damage class over all pixels of all aero-engine borescope label images in the batch; and
calculating, based on back propagation, the gradient of the objective function with respect to each weight of the full convolutional neural network, and adjusting the value of each weight of the full convolutional neural network according to the calculated gradient values using an optimization method;
wherein N is a positive integer greater than or equal to 1.
10. The method according to claim 3, characterized in that verifying, with the test set, whether the full convolutional neural network updated with the trained network weights is valid and, if it is verified as valid, taking the trained network weights as the network weights of the full convolutional neural network that reaches the preset accuracy requirement on the test set specifically comprises:
presetting evaluation indexes: for the prediction result of one aero-engine borescope label image, there are two evaluation indexes, the pixel accuracy PA = Σ_i n_ii / Σ_i t_i and the mean intersection over union mIOU = (1/n_cl) · Σ_i n_ii / (t_i + Σ_j n_ji − n_ii),
wherein n_ab is the number of pixels of damage class a that the full convolutional neural network predicts as damage class b, a and b take the values i or j in the formulas, t_i is the number of pixels labelled as damage class i by the inspectors and satisfies t_i = Σ_j n_ij, n_cl is the number of damage classes contained in the label set, Σ_j n_ji denotes the number of all pixels predicted as the i-th damage class, n_ii / (t_i + Σ_j n_ji − n_ii) denotes the overlap, for damage class i, between the inspector-labelled damage region and the predicted damage region, and Σ_i denotes summation over all damage classes;
pre-processing each aero-engine borescope label image in the test set to obtain pre-processed images that meet the input requirements of the full convolutional neural network;
predicting the pre-processed images with the full convolutional neural network updated with the trained network weights to obtain the probability that each pixel in each aero-engine borescope label image belongs to each damage class, and then obtaining the predicted damage class of each pixel;
calculating the PA and mIOU of each aero-engine borescope label image from the labelled damage class and the predicted damage class of each pixel, and judging whether the average PA and mIOU over all aero-engine borescope label images in the test set meet preset threshold requirements; and
if the judgment is that they are met, taking the current weights of the full convolutional neural network as the network weights of the full convolutional neural network that reaches the preset accuracy requirement on the test set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/095290 WO2020119103A1 (en) | 2018-12-13 | 2019-07-09 | Aero-engine hole detection image damage intelligent identification method based on deep learning |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811526577 | 2018-12-13 | ||
CN2018115265770 | 2018-12-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109800708A true CN109800708A (en) | 2019-05-24 |
Family
ID=66559637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910048264.7A Pending CN109800708A (en) | 2018-12-13 | 2019-01-18 | Visit image lesion intelligent identification Method in aero-engine hole based on deep learning |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109800708A (en) |
WO (1) | WO2020119103A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020119103A1 (en) * | 2018-12-13 | 2020-06-18 | 程琳 | Aero-engine hole detection image damage intelligent identification method based on deep learning |
CN111598879A (en) * | 2020-05-18 | 2020-08-28 | 湖南大学 | Method, system and equipment for evaluating structural fatigue accumulated damage |
CN112529899A (en) * | 2020-12-28 | 2021-03-19 | 内蒙动力机械研究所 | Nondestructive testing method for solid rocket engine based on machine learning and computer vision |
CN112561892A (en) * | 2020-12-22 | 2021-03-26 | 东华大学 | Defect detection method for printed and jacquard fabric |
CN112581430A (en) * | 2020-12-03 | 2021-03-30 | 厦门大学 | Deep learning-based aeroengine nondestructive testing method, device, equipment and storage medium |
CN112643618A (en) * | 2020-12-21 | 2021-04-13 | 东风汽车集团有限公司 | Intelligent adjusting device and method for flexible engine warehousing tool |
CN113687282A (en) * | 2021-08-20 | 2021-11-23 | 吉林建筑大学 | Magnetic detection system and method for magnetic nano material |
CN114120317A (en) * | 2021-11-29 | 2022-03-01 | 哈尔滨工业大学 | Optical element surface damage identification method based on deep learning and image processing |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111965183B (en) * | 2020-08-17 | 2023-04-18 | 沈阳飞机工业(集团)有限公司 | Titanium alloy microstructure detection method based on deep learning |
CN113034599B (en) * | 2021-04-21 | 2024-04-12 | 南京航空航天大学 | Hole detection device and method for aeroengine |
CN113744230B (en) * | 2021-08-27 | 2023-09-05 | 中国民航大学 | Unmanned aerial vehicle vision-based intelligent detection method for aircraft skin damage |
CN114240948B (en) * | 2021-11-10 | 2024-03-05 | 西安交通大学 | Intelligent segmentation method and system for structural surface damage image |
CN115114860B (en) * | 2022-07-21 | 2024-03-01 | 郑州大学 | Data modeling amplification method for concrete pipeline damage identification |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107909564A (en) * | 2017-10-23 | 2018-04-13 | 昆明理工大学 | A kind of full convolutional network image crack detection method based on deep learning |
CN108074231A (en) * | 2017-12-18 | 2018-05-25 | 浙江工业大学 | A kind of magnetic sheet detection method of surface flaw based on convolutional neural networks |
WO2018125014A1 (en) * | 2016-12-26 | 2018-07-05 | Argosai Teknoloji Anonim Sirketi | A method for foreign object debris detection |
CN108345911A (en) * | 2018-04-16 | 2018-07-31 | 东北大学 | Surface Defects in Steel Plate detection method based on convolutional neural networks multi-stage characteristics |
CN108416394A (en) * | 2018-03-22 | 2018-08-17 | 河南工业大学 | Multi-target detection model building method based on convolutional neural networks |
CN108492281A (en) * | 2018-03-06 | 2018-09-04 | 陕西师范大学 | A method of fighting Bridge Crack image detection of obstacles and the removal of network based on production |
CN108562589A (en) * | 2018-03-30 | 2018-09-21 | 慧泉智能科技(苏州)有限公司 | A method of magnetic circuit material surface defect is detected |
CN108717554A (en) * | 2018-05-22 | 2018-10-30 | 复旦大学附属肿瘤医院 | A kind of thyroid tumors histopathologic slide image classification method and its device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109800708A (en) * | 2018-12-13 | 2019-05-24 | 程琳 | Visit image lesion intelligent identification Method in aero-engine hole based on deep learning |
-
2019
- 2019-01-18 CN CN201910048264.7A patent/CN109800708A/en active Pending
- 2019-07-09 WO PCT/CN2019/095290 patent/WO2020119103A1/en active Application Filing
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020119103A1 (en) * | 2018-12-13 | 2020-06-18 | 程琳 | Aero-engine hole detection image damage intelligent identification method based on deep learning |
CN111598879A (en) * | 2020-05-18 | 2020-08-28 | 湖南大学 | Method, system and equipment for evaluating structural fatigue accumulated damage |
CN112581430A (en) * | 2020-12-03 | 2021-03-30 | 厦门大学 | Deep learning-based aeroengine nondestructive testing method, device, equipment and storage medium |
CN112643618A (en) * | 2020-12-21 | 2021-04-13 | 东风汽车集团有限公司 | Intelligent adjusting device and method for flexible engine warehousing tool |
CN112561892A (en) * | 2020-12-22 | 2021-03-26 | 东华大学 | Defect detection method for printed and jacquard fabric |
CN112529899A (en) * | 2020-12-28 | 2021-03-19 | 内蒙动力机械研究所 | Nondestructive testing method for solid rocket engine based on machine learning and computer vision |
CN113687282A (en) * | 2021-08-20 | 2021-11-23 | 吉林建筑大学 | Magnetic detection system and method for magnetic nano material |
CN114120317A (en) * | 2021-11-29 | 2022-03-01 | 哈尔滨工业大学 | Optical element surface damage identification method based on deep learning and image processing |
CN114120317B (en) * | 2021-11-29 | 2024-04-16 | 哈尔滨工业大学 | Optical element surface damage identification method based on deep learning and image processing |
Also Published As
Publication number | Publication date |
---|---|
WO2020119103A1 (en) | 2020-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109800708A (en) | Visit image lesion intelligent identification Method in aero-engine hole based on deep learning | |
CN107092870B (en) | A kind of high resolution image Semantic features extraction method | |
Li et al. | Road network extraction via deep learning and line integral convolution | |
CN110188720A (en) | A kind of object detection method and system based on convolutional neural networks | |
CN108764308A (en) | A kind of recognition methods again of the pedestrian based on convolution loop network | |
CN108985238A (en) | The high-resolution remote sensing image impervious surface extracting method and system of combined depth study and semantic probability | |
CN109446925A (en) | A kind of electric device maintenance algorithm based on convolutional neural networks | |
CN109165660A (en) | A kind of obvious object detection method based on convolutional neural networks | |
CN106991666B (en) | A kind of disease geo-radar image recognition methods suitable for more size pictorial informations | |
CN111126308B (en) | Automatic damaged building identification method combining pre-disaster remote sensing image information and post-disaster remote sensing image information | |
CN109583322A (en) | A kind of recognition of face depth network training method and system | |
Xu et al. | Pavement crack detection algorithm based on generative adversarial network and convolutional neural network under small samples | |
CN114299380A (en) | Remote sensing image semantic segmentation model training method and device for contrast consistency learning | |
CN108229589A (en) | A kind of ground cloud atlas sorting technique based on transfer learning | |
CN113705580B (en) | Hyperspectral image classification method based on deep migration learning | |
CN109858389A (en) | Vertical ladder demographic method and system based on deep learning | |
CN108288269A (en) | Bridge pad disease automatic identifying method based on unmanned plane and convolutional neural networks | |
CN110378232A (en) | The examination hall examinee position rapid detection method of improved SSD dual network | |
CN109919246A (en) | Pedestrian's recognition methods again based on self-adaptive features cluster and multiple risks fusion | |
CN109087305A (en) | A kind of crack image partition method based on depth convolutional neural networks | |
CN112395958A (en) | Remote sensing image small target detection method based on four-scale depth and shallow layer feature fusion | |
Lin et al. | Optimal CNN-based semantic segmentation model of cutting slope images | |
CN109472790A (en) | A kind of machine components defect inspection method and system | |
CN106096622A (en) | Semi-supervised Classification of hyperspectral remote sensing image mask method | |
CN108711150A (en) | A kind of end-to-end pavement crack detection recognition method based on PCA |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190524 |