CN108960256A - Method, device and equipment for determining the degree of damage to a component - Google Patents

Method, device and equipment for determining the degree of damage to a component

Info

Publication number
CN108960256A
CN108960256A (application number CN201810689304.1A)
Authority
CN
China
Prior art keywords
image
component
detected
target component
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810689304.1A
Other languages
Chinese (zh)
Inventor
徐丽丽
王宇飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp
Priority to CN201810689304.1A
Publication of CN108960256A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 40/00: Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q 40/08: Insurance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/56: Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Finance (AREA)
  • Multimedia (AREA)
  • Accounting & Taxation (AREA)
  • Data Mining & Analysis (AREA)
  • General Business, Economics & Management (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Development Economics (AREA)
  • Technology Law (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application disclose a method, device and equipment for determining the degree of damage to a component. An image recognition model is trained in advance with target component images, where each target component image contains a target component and the target component is an undamaged component. A to-be-detected component image containing the component to be detected is obtained and input into the image recognition model, which outputs the recognition probability that the component to be detected is the target component; the degree of damage of the component to be detected is then determined from this probability. Because the image recognition model is trained only on images of undamaged components, and the trained model is then used to determine the degree of damage of the component to be detected, the training workload is reduced and the accuracy of the damage determination is improved.

Description

Method, device and equipment for determining the degree of damage to a component
Technical field
This application relates to the field of image processing, and in particular to a method, device and equipment for determining the degree of damage to a component.
Background art
For an insurance company, loss assessment means determining the degree of damage to equipment so that a repair plan and a repair price can be derived from the assessment result. Loss assessment is one of the most important steps in claims settlement. Performing it manually takes considerable time, so most companies use algorithms to carry out loss assessment in order to reduce cost and improve efficiency.
At present, algorithmic loss assessment typically works as follows: a large number of images of damaged equipment are collected and annotated, for example with the degree of damage of each component; a loss-assessment model is then trained on the annotated images; finally, a to-be-detected component image containing the component to be detected is obtained and input into the assessment model, which outputs the degree of damage of the component in the image.
This approach requires a large number of damaged-equipment images covering components with all kinds of damage, and the degree of damage of each component must be annotated manually. Its accuracy therefore depends on the image set: if the number of images is small, the damage levels covered are not comprehensive enough, or the manual annotation of damage levels is inaccurate, the accuracy of the assessment suffers. In addition, the assessment workload is large and the training of the assessment model is inefficient.
Summary of the invention
To address the low accuracy of the loss-assessment process and the low efficiency of model training in the prior art, embodiments of the present application provide a method, device and equipment for determining the degree of damage to a component.
In a method for determining the degree of damage to a component provided by the embodiments of the present application, an image recognition model is trained in advance with a target component image, where the target component image contains a target component and the target component is an undamaged component;
the method comprises:
obtaining a to-be-detected component image, where the to-be-detected component image contains a component to be detected;
inputting the to-be-detected component image into the image recognition model to obtain the recognition probability that the component to be detected is the target component; and
determining the degree of damage of the component to be detected according to the recognition probability.
The embodiments of the present application provide a device for determining the degree of damage to a component, the device comprising:
a model training unit, configured to train an image recognition model in advance with a target component image, where the target component image contains a target component and the target component is an undamaged component;
a to-be-detected component image acquiring unit, configured to obtain a to-be-detected component image, where the to-be-detected component image contains a component to be detected;
a recognition probability acquiring unit, configured to input the to-be-detected component image into the image recognition model to obtain the recognition probability that the component to be detected is the target component; and
a degree-of-damage acquiring unit, configured to determine the degree of damage of the component to be detected according to the recognition probability.
The embodiments of the present application further provide equipment for determining the degree of damage to a component, the equipment comprising a processor and a memory;
the memory is configured to store instructions; and
the processor is configured to execute the instructions in the memory so as to perform the method for determining the degree of damage to a component provided by the embodiments of the present application.
The embodiments of the present application further provide a computer-readable storage medium containing instructions which, when run on a computer, cause the computer to perform the method for determining the degree of damage to a component provided by the embodiments of the present application.
In the method, device and equipment for determining the degree of damage to a component provided by the embodiments of the present application, an image recognition model is trained in advance with target component images, where each target component image contains a target component and the target component is an undamaged component. Because undamaged components are far less varied than components with every possible degree of damage, relatively few target component images are needed, so the image recognition model can be trained with a small image set. A to-be-detected component image containing the component to be detected is then obtained and input into the model, which outputs the recognition probability that the component to be detected is the target component. This probability reflects how closely the features of the component to be detected match the features of the target component in the target component image: the higher the probability, the better the match and the smaller the degree of damage. The degree of damage of the component to be detected can therefore be determined from the recognition probability. In this way the model is trained only on images of undamaged components and the trained model is then used to determine the degree of damage, which reduces the amount of training data and the training workload, improves training accuracy, and, since no manual annotation of damage levels is required, eliminates annotation errors and improves the accuracy of the final damage determination.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of a method for determining the degree of damage to a component according to an embodiment of the present application;
Fig. 2 is a schematic diagram of an image recognition model according to an embodiment of the present application;
Fig. 3 is a flow chart of another method for determining the degree of damage to a component according to an embodiment of the present application;
Fig. 4 is a structural block diagram of a device for determining the degree of damage to a component according to an embodiment of the present application;
Fig. 5 is a structural block diagram of equipment for determining the degree of damage to a component according to an embodiment of the present application.
Detailed description of embodiments
To help those skilled in the art better understand the solutions of the present application, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only a part, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort shall fall within the protection scope of the present application.
In the prior art, the degree of damage to equipment is usually determined algorithmically: a large number of damaged-equipment images are collected, the degree of damage of the components in those images is annotated, a loss-assessment model is trained on the annotated images, and a to-be-detected component image is then input into the assessment model to obtain the degree of damage of the component it contains. This approach requires many damaged-equipment images covering components with all kinds of damage, annotated manually, so its accuracy depends on the image set: too few images, insufficient coverage of damage levels, or inaccurate manual annotation all reduce the accuracy of the assessment. The workload is also large and the training of the assessment model is inefficient.
To solve these problems, the embodiments of the present application provide a method, device and equipment for determining the degree of damage to a component. An image recognition model is trained in advance with target component images, each containing an undamaged target component. Because undamaged components are far less varied than components with every possible degree of damage, relatively few training images are needed. A to-be-detected component image containing the component to be detected is then obtained and input into the model, which outputs the recognition probability that the component to be detected is the target component. This probability reflects how closely the features of the component to be detected match the features of the target component in the target component image; the larger the probability, the better the match and the smaller the degree of damage, so the degree of damage can be determined from it. Training only on images of undamaged components and then using the trained model to determine the degree of damage reduces the training workload and improves training accuracy; and because no damage levels need to be annotated, annotation errors are avoided and the accuracy of the damage determination is improved.
Referring to Fig. 1, which is a flow chart of a method for determining the degree of damage to a component according to an embodiment of the present application, the method may comprise S101 to S103. Optionally, the image recognition model may be trained before S101; specifically, S100 may be performed:
S100: train an image recognition model in advance with target component images.
A target component image is an image that contains the target component and may include views of the target component from multiple directions. It may be a color image or a grayscale image.
The target component is an undamaged component, i.e. a component without damage. Taking vehicle loss assessment as an example, the target component may be at least one of the roof, front bumper, rear bumper, left/right front fender, left/right front door, left/right rear door, left/right rear fender, trunk lid, hood, lights, rear-view mirrors and similar components. These target components may belong to the same vehicle model or to several different vehicle models.
In the embodiments of the present application, image augmentation may also be applied to the target component images. Specifically, geometric transformations such as rotation, translation, scaling or flipping may be applied to a target component image to produce augmented images, and the image recognition model is then trained with both the augmented images and the original target component images. This increases the number of training images and improves the accuracy of the image recognition model, as illustrated by the sketch below.
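A minimal sketch of the geometric augmentation step, assuming PyTorch/torchvision are used (the patent names no framework); the rotation angle, translation range, scale range and number of copies are illustrative values, not values taken from the patent.

```python
# Minimal augmentation sketch (assumption: torchvision; parameter values are illustrative).
# Each undamaged target-component image is rotated, shifted, scaled and flipped to
# enlarge the training set, as described above.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                    # rotate
    transforms.RandomAffine(degrees=0,
                            translate=(0.1, 0.1),             # offset
                            scale=(0.9, 1.1)),                # scale
    transforms.RandomHorizontalFlip(p=0.5),                   # flip
])

def augment_target_image(path: str, copies: int = 4):
    """Return the original image plus `copies` augmented variants."""
    img = Image.open(path).convert("RGB")
    return [img] + [augment(img) for _ in range(copies)]
```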
The image recognition model is a deep convolutional neural network model and may include convolutional layers, pooling layers and a fully connected layer. In the embodiments of the present application the model may include multiple convolutional layers and multiple pooling layers; as shown in Fig. 2, in the order in which the target component image is processed, from left to right, these are the first convolutional layer, the first pooling layer, the second convolutional layer, the second pooling layer, the third convolutional layer, the third pooling layer, ..., the k-th convolutional layer and the k-th pooling layer.
In the embodiments of the present application, the dimension of the parameter matrix may decrease layer by layer from the first convolutional layer to the k-th convolutional layer. For example, if the parameter matrix A1 of the first convolutional layer has dimension (n, n) and the window of the first pooling layer has dimension (2, 2) with stride 2, then the dimension of the parameter matrix A2 of the second convolutional layer is half that of A1, i.e. (n/2, n/2); by analogy, the dimension of the parameter matrix A3 of the third convolutional layer may be half that of A2, ..., and the dimension of the parameter matrix Ak of the k-th convolutional layer may be half that of the parameter matrix Ak-1 of the (k-1)-th convolutional layer.
Training the image recognition model with the target component images means, specifically, training the parameter matrices of the convolutional layers so as to obtain parameter matrices with which image recognition can be performed.
After the parameter matrices of the convolutional layers are obtained, the features of the input image can be extracted and convolved with the parameter matrix of each convolutional layer to obtain the output feature of that layer. Specifically, the feature can be multiplied point by point with the parameter matrix, i.e. each element of the input feature is multiplied by the element of the parameter matrix at the corresponding position. For example, if the input feature of the i-th convolutional layer is Fi_in and its parameter matrix is Ai, the output feature of the i-th convolutional layer is Fi_out = Fi_in ⊙ Ai, where ⊙ denotes element-wise multiplication and i is an integer in the range 1 to k.
In a specific implementation, as shown in Fig. 2, the image recognition model may also include a fully connected layer with an internal activation function, where the fully connected layer weights the output features of the convolutional layers above. Suppose F1, F2, ..., Fk are the output features of the first, second, ..., k-th convolutional layers. Then w1 is the coefficient of F1, and multiplying every element of F1 by w1 and summing gives the first feature S1; w2 is the coefficient of F2, and multiplying every element of F2 by w2 and summing gives the second feature S2; ...; wk is the coefficient of Fk, and multiplying every element of Fk by wk and summing gives the k-th feature Sk. Adding S1, S2, ..., Sk gives the output of the fully connected layer, which the activation function then maps to a probability in the range 0 to 1.
Therefore, training the image recognition model with the target component images means, specifically, training the parameter matrices of the convolutional layers and the weight coefficients of the fully connected layer so that the model can recognize images. A sketch of such a model is given below.
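A minimal sketch of a model with this structure, assuming PyTorch (the patent specifies no framework); the number of layers (k = 3), channel counts, input size and the sigmoid output are illustrative choices, not the patent's exact architecture.

```python
# Minimal model sketch (assumption: PyTorch; k = 3 and all sizes are illustrative).
# Alternating convolution / pooling layers followed by a fully connected layer whose
# output is mapped by an activation function to a probability in [0, 1], as described above.
import torch
import torch.nn as nn

class ComponentRecognizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),      # first pooling layer, window (2, 2), stride 2
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        self.fc = nn.Linear(32 * 16 * 16, 1)            # weighted sum of the convolutional output features

    def forward(self, x):                               # x: (batch, 1, 128, 128) grayscale contour image
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.fc(h))                # activation maps the output to a probability in [0, 1]

# model = ComponentRecognizer()
# prob = model(torch.rand(1, 1, 128, 128))              # recognition probability for the target component
```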
In the embodiments of the present application, an image recognition model may be trained for each target component, and that model then recognizes only the corresponding target component. It should be understood that components at different positions of the same equipment can be treated as different target components; for example, the roof and the front bumper of the same vehicle are different target components. Components at the same position on different equipment can also be treated as different target components; for example, the roofs of two different vehicle models are two different target components. Specifically, a roof recognition model can be trained with images of the roof of a certain vehicle model, and this model can then identify whether the component in an image is the roof of that vehicle model.
In the embodiments of the present application, one image recognition model can also be trained with images of multiple target components. In that case the target component images used for training are multiple images corresponding to different target components, and each target component image carries an identifier of its target component. The target components may, for example, be several of the roof, front bumper, rear bumper, left/right front fender, left/right front door, left/right rear door, left/right rear fender, trunk lid, hood, lights and rear-view mirrors, or undamaged components of other equipment. The target component identifier may be a code of the target component, its name, a combination of the code of the equipment the component belongs to and the code of the component, or a combination of the equipment name and the component name; for example, the identifier may be "roof of vehicle A". Training the image recognition model in advance with target component images then means, specifically, training it with target component images that carry target component identifiers.
It should be understood that once the image recognition model has been trained, the operation of inputting an image to be recognized into the model can be performed repeatedly; the model does not need to be retrained every time an image is input.
S101: obtain a to-be-detected component image.
After the image recognition model has been trained, a to-be-detected component image can be obtained as the image to be recognized by the model.
A to-be-detected component image is an image that contains the component to be detected and may include views of the component from several directions. It may be uploaded by the user or retrieved from a storage address provided by the user, and it may be a color image or a grayscale image. The obtained image may contain the complete component to be detected, only a part of it, or other components besides the component to be detected.
In the embodiments of the present application, the image uploaded by the user may show an undamaged component or a damaged component, where a damaged component is an undamaged component whose shape or color has changed as a result of a collision, abrasion or the like. The damaged component may, for example, correspond to at least one of the roof, front bumper, rear bumper, left/right front fender, left/right front door, left/right rear door, left/right rear fender, trunk lid, hood, lights and rear-view mirrors, or it may be a damaged component of other equipment.
After the to-be-detected component image is obtained, it can also be checked whether its sharpness meets the requirements; if not, the image can be re-acquired.
The to-be-detected component image may or may not carry an identifier of the component to be detected. Specifically, the identifier may be entered by the user, or the user may be offered several candidate identifiers and the identifier is determined according to the user's choice.
S102: input the to-be-detected component image into the image recognition model to obtain the recognition probability that the component to be detected is the target component.
If the image recognition model was trained with target component images that carry no target component identifier, it is normally used to recognize one kind of component, namely the corresponding target component. After the to-be-detected component image is input, the model outputs the probability that the component in the image is that target component. For example, the hood can be used as the target component and the model trained with hood images that carry no component identifier; the trained model can then recognize hoods, and its output may be, for instance, that the probability that the component to be detected is the hood of vehicle A is 80%.
It should be understood that a to-be-detected component image that carries no component identifier can be assigned manually to the corresponding image recognition model, which then performs the recognition.
If the image recognition model was trained with target component images that carry target component identifiers, it can generally be used to recognize several components, all of them related to the target components. The to-be-detected component image input into the model may or may not carry an identifier of the component to be detected.
If the to-be-detected component image carries a component identifier, the result of inputting it into the image recognition model can be the probability that the component to be detected is the target component whose identifier matches that component identifier. For example, if the component in the input image is identified as a hood, the output may be that the probability that the component to be detected is the hood of vehicle A is 80%.
If the to-be-detected component image carries no component identifier, inputting it can yield several recognition probabilities, one for each target component. For example, the output may be: the probability that the component to be detected is the roof of vehicle A is 2%, the probability that it is the front bumper of vehicle B is 0%, and the probability that it is the hood of vehicle C is 80%.
During the recognition of the to-be-detected image, it should be understood that the closer the features of the component to be detected are to those of the target component, the smaller its degree of damage and the higher the recognition probability; conversely, the more its features differ from those of the target component, the larger its degree of damage and the lower the recognition probability.
As a possible implementation of the embodiments of the present application, before the to-be-detected component image is input into the image recognition model, multiple to-be-detected regions can be determined in the image, the regions being located at different positions within the to-be-detected component image. The sizes of the regions may all be the same, not all the same, or all different. Multiple region images are then formed from the determined regions; these region images are images of different areas of the to-be-detected component image.
The region images are input into the image recognition model to obtain, for each region image, the region probability that the component it contains is the target component, and the maximum of these region probabilities is taken as the recognition probability.
For example, if in a rectangular to-be-detected component image the component to be detected is in the lower right corner, the image can be divided into four parts, each of which is input into the image recognition model as a region image, giving four region probabilities. The region probability of the lower-right region image is then the maximum and can be used as the recognition probability.
Optionally, the to-be-detected regions can be chosen at random in the to-be-detected component image. They may cover the whole image, only a part of the component to be detected, or none of the component. Their sizes may all be the same, all different, or not all the same.
Optionally, the to-be-detected regions can also be determined by a preset rule. For example, the region of the to-be-detected component image can be shrunk repeatedly with a step size of 1 in several directions, e.g. successively from the top, bottom, left and right; the image region obtained after each shrinking operation is used as a to-be-detected region. A sketch of the region-based recognition is given below.
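A minimal sketch of the region-based variant under the same PyTorch assumption as above; the quadrant cropping scheme is simply the four-part example from the text, and `ComponentRecognizer` refers to the illustrative model sketched earlier.

```python
# Minimal sketch of region-based recognition (assumption: PyTorch, quadrant crops).
# Several region images are cut from the to-be-detected component image, each is scored
# by the model, and the maximum region probability is used as the recognition probability.
import torch

def recognition_probability(model, image: torch.Tensor) -> float:
    """image: (1, H, W) grayscale tensor; returns the maximum region probability."""
    _, h, w = image.shape
    regions = [                                           # four quadrants, as in the example above
        image[:, :h // 2, :w // 2], image[:, :h // 2, w // 2:],
        image[:, h // 2:, :w // 2], image[:, h // 2:, w // 2:],
    ]
    probs = []
    with torch.no_grad():
        for r in regions:
            r = torch.nn.functional.interpolate(          # resize each region to the model's input size
                r.unsqueeze(0), size=(128, 128), mode="bilinear", align_corners=False)
            probs.append(model(r).item())
    return max(probs)                                     # maximum region probability = recognition probability
```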
S103: determine the degree of damage of the component to be detected according to the recognition probability.
The degree of damage of the component to be detected can be expressed as a damage grade, e.g. in words such as "low", "medium" and "high" or grades such as "level 1", "level 2" and "level 3"; as a damaged area, e.g. 1 square centimeter of damage; or as a damage percentage, e.g. 50% damaged.
Since the closer the features of the component to be detected are to those of the target component, the higher the recognition probability and the lower the degree of damage, the recognition probability and the degree of damage are correlated. Specifically, the correlation can be a mapping between ranges of the recognition probability and damage grades, or a linear or non-linear relationship between the recognition probability and the damage percentage; it should be understood that the relationship between recognition probability and damage percentage is usually a negative correlation. As a possible implementation, the degree of damage can be expressed as a damage percentage and computed as: degree of damage = 1 - recognition probability.
For example, if the recognition probability that the component to be detected is the target component is 0, the component to be detected is not the target component; it can then be considered that the corresponding undamaged component is not among the target components, or in other words that the damage is so severe that the component cannot be recognized, and the degree of damage is therefore determined to be 100%. If the recognition probability is 100%, the degree of damage can be considered to be 0. If the recognition probability is 80%, the degree of damage can be considered to be 20%.
If multiple recognition probabilities are obtained, one for each target component, the degree of damage can be determined from the maximum of the recognition probabilities, because the maximum probability indicates the target component most likely to be the same as the component in the to-be-detected component image. For example, if the output is that the probability that the component to be detected is the roof of vehicle A is 2% and the probability that it is the hood of vehicle B is 80%, the degree of damage is determined from the 80% recognition probability, and the hood of vehicle B can be considered to be 20% damaged. A short sketch of this mapping follows.
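A minimal sketch of mapping the per-component probabilities to a damage percentage, assuming the "degree of damage = 1 - recognition probability" rule given above; the component names are just the example values from the text.

```python
# Minimal sketch: pick the most likely target component and apply
# degree of damage = 1 - recognition probability (percentage form).
def degree_of_damage(recognition_probs: dict[str, float]) -> tuple[str, float]:
    component, prob = max(recognition_probs.items(), key=lambda kv: kv[1])
    return component, 1.0 - prob

# Example values from the text:
# degree_of_damage({"roof of vehicle A": 0.02, "hood of vehicle B": 0.80})
# -> ("hood of vehicle B", 0.20), i.e. the hood of vehicle B is about 20% damaged.
```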
In the method for determining the degree of damage to a component provided by the embodiments of the present application, an image recognition model is trained in advance with target component images, each containing an undamaged target component. Because undamaged components are far less varied than components with every possible degree of damage, relatively few training images are needed. A to-be-detected component image containing the component to be detected is obtained and input into the model, which outputs the recognition probability that the component is the target component; this probability reflects how closely the features of the component to be detected match those of the target component, so the larger the probability, the smaller the degree of damage, and the degree of damage can be determined from it. Training only on images of undamaged components and then using the trained model to determine the degree of damage reduces the number of training images and the training workload, improves training accuracy, and, because damage levels need not be annotated, avoids manual annotation errors and improves the accuracy of the damage determination.
To further reduce the workload of training the image recognition model, the embodiments of the present application also provide another method for determining the degree of damage to a component. In this method the target component images can be processed before the model is trained, and the to-be-detected component image can be processed before it is input into the model, so as to improve the training efficiency and the runtime efficiency of the image recognition model. Specifically, as shown in Fig. 3, the method comprises the following steps.
S201: obtain a target component image in advance and extract its contour features to form a target contour image.
A target component image is an image that contains the target component and may show views of the target component from multiple directions.
Because the contours of different target components usually differ considerably, the contour features of the target component image can be extracted after the image is obtained, forming a target contour image that contains the contour features of the target component. Training the image recognition model with target contour images reduces the amount of data and improves the training efficiency of the model.
Specifically, edge detection can be performed on the target component image, for example by convolving the target component image with a Laplacian operator to obtain its contour features.
In the embodiments of the present application, edge detection can also be performed on the target component image by other algorithms, which are not enumerated here.
If the target component image is a color image, it can also be converted to grayscale before edge detection, and edge detection is then performed on the grayscale image. Compared with a color image, a grayscale image contains less data (usually about one third of the data volume of the color image), so the training efficiency of the image recognition model can be improved further.
Before edge detection, the target component image can also be smoothed to remove noise, and edge detection is then performed on the smoothed image. Smoothing reduces speckle in the target component image, which likewise reduces the amount of data and improves the training efficiency of the image recognition model.
Specifically, the smoothing can be implemented by convolving the target component image with a Gaussian operator; other smoothing algorithms can also be used and are not enumerated here.
Both the grayscale conversion and the smoothing are performed before the edge detection, but the embodiments of the present application do not restrict their order: the target component image can first be converted to grayscale, then smoothed and then edge-detected, or it can first be smoothed, then converted to grayscale and then edge-detected; either order works. A sketch of this preprocessing pipeline follows.
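A minimal sketch of the grayscale / smoothing / edge-detection preprocessing, assuming OpenCV and NumPy (the patent names no library); the Gaussian window and the 4-neighbour Laplacian kernel are common defaults and are assumptions, since the patent's own Laplacian formula did not survive extraction.

```python
# Minimal preprocessing sketch (assumption: OpenCV + NumPy; kernel choices are illustrative).
# Grayscale conversion -> Gaussian smoothing -> Laplacian edge detection, producing the
# contour image used to train and query the image recognition model.
import cv2
import numpy as np

LAPLACIAN_4 = np.array([[0,  1, 0],
                        [1, -4, 1],
                        [0,  1, 0]], dtype=np.float32)        # standard 4-neighbour Laplacian (assumed)

def contour_image(bgr_image: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)        # grayscale: roughly 1/3 of the color data volume
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)                # Gaussian smoothing removes noise and speckle
    edges = cv2.filter2D(smooth.astype(np.float32), -1, LAPLACIAN_4)  # convolve with the Laplacian operator
    return cv2.convertScaleAbs(edges)                         # back to an 8-bit contour image

# target_contour = contour_image(cv2.imread("target_component.jpg"))
```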
S202: train the image recognition model with the target contour image.
The image recognition model is a deep convolutional neural network model and may include convolutional layers, pooling layers and a fully connected layer. It is trained with the target contour images so that it can recognize images; the training process can refer to the description of S100 and is not repeated here.
Before the image recognition model is trained with the target contour images, the target contour images can also be thresholded. Specifically, gray values in the target contour image that are greater than or equal to a first preset gray threshold can be replaced by a first preset gray value to obtain an enhanced target image, and the enhanced target image is then used to train the image recognition model. Thresholding sets the lighter positions of the target contour image to the preset gray value and thus increases the contrast of the image. For example, the first preset gray threshold may be the mean gray value of the image and the first preset gray value may be 255, so that all gray values greater than or equal to the mean are replaced by 255; that is, the lighter positions of the target contour image are turned white.
Thresholding the target contour image can also mean replacing gray values below a third preset gray threshold with a third gray value to obtain the enhanced target image; the third preset gray threshold may be 50 and the third gray value may be 0.
The purpose of thresholding is to increase the contrast of the target contour image; the two thresholding operations above can be applied together or individually, as in the sketch below.
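A minimal sketch of the two thresholding operations, assuming NumPy; using the image mean as the first threshold and the values 255, 50 and 0 follow the examples given in the text.

```python
# Minimal thresholding sketch (assumption: NumPy; example threshold values from the text).
# Light pixels (>= mean) are pushed to white and very dark pixels (< 50) to black,
# increasing the contrast of the contour image before training or recognition.
import numpy as np

def enhance_contours(contour_img: np.ndarray,
                     high_value: int = 255,      # first preset gray value
                     low_threshold: int = 50,    # third preset gray threshold
                     low_value: int = 0) -> np.ndarray:
    out = contour_img.copy()
    high_threshold = out.mean()                  # first preset gray threshold: mean gray value of the image
    out[out >= high_threshold] = high_value
    out[out < low_threshold] = low_value
    return out
```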
It should be understood that, as in S201 and S202, once the image recognition model has been trained the operation of inputting an image to be recognized into the model can be performed repeatedly, without retraining every time an image is input.
S203: obtain a to-be-detected component image.
The process of obtaining the to-be-detected component image can refer to S101 and is not repeated here.
S204: extract the contour features of the to-be-detected component image to form a component contour image.
Extracting the contour features of the to-be-detected component image to form a component contour image can specifically mean performing edge detection on the to-be-detected component image to extract its contour features, the resulting component contour image containing the contour features of the component to be detected. Compared with recognizing the to-be-detected component image directly, recognizing the component contour image, which contains only the contour features of the component, reduces the amount of data the recognition has to process and improves the runtime efficiency of the image recognition model.
The edge detection of the to-be-detected component image can refer to the edge detection of the target component image in S201. Before the edge detection, the to-be-detected component image can also be converted to grayscale and/or smoothed; these operations can likewise refer to the grayscale conversion and smoothing of the target component image in S201 and are not repeated here. Both operations are performed before the edge detection, and the embodiments of the present application do not restrict their order.
S205: input the component contour image into the image recognition model to obtain the recognition probability that the component to be detected is the target component.
This step can refer to S102 and is not repeated here.
Optionally, before the component contour image is input into the image recognition model it can also be thresholded. Specifically, gray values in the component contour image that are greater than or equal to a second preset gray threshold can be replaced by a second preset gray value to obtain an enhanced component image, and the enhanced component image is then input into the image recognition model. Thresholding sets the lighter positions of the component contour image to the preset gray value and thus reduces the processing workload. For example, the second preset gray threshold may be the mean gray value of the image and the second preset gray value may be 255, so that all gray values greater than or equal to the mean are replaced by 255; that is, the lighter positions of the component contour image are turned white, which reduces the work of recognizing those positions.
S206: determine the degree of damage of the component to be detected according to the recognition probability.
As a possible implementation, the determination of the degree of damage of the component to be detected can refer to S103 and is not repeated here.
As another possible implementation, in order to determine the degree of damage more accurately when both the target component image and the to-be-detected component image have been converted to grayscale, it can also be judged whether the recognition probability is less than or equal to a preset probability value, e.g. 98%. This is because an image recognition model trained with grayscale target component images is more accurate at recognizing grayscale images, and in particular at recognizing the contour features in grayscale images.
If the recognition probability is less than or equal to the preset probability value, the contour features of the component to be detected can be considered to differ considerably from those of the target component, i.e. the component to be detected is significantly deformed. The degree of damage can then be determined from the recognition probability and the correlation between recognition probability and degree of damage, which can refer to S103.
If the recognition probability is greater than the preset probability value, the deformation of the component to be detected is small: the component may only be slightly deformed, or only its color may have changed because paint came off in a scrape or collision. In that case the to-be-detected component image can be input into an image similarity comparison model to obtain the similarity between the to-be-detected component image and the target component image, and the degree of damage of the component to be detected is determined from that similarity. The image similarity comparison model can be a deep convolutional neural network trained with target component images, where both the to-be-detected component image and the target component images are color images.
That is, in the embodiments of the present application an image similarity comparison model can also be trained with color target component images. When the recognition probability is greater than the preset probability value, the color to-be-detected component image is input into the image similarity comparison model to obtain the similarity between the to-be-detected component image and the target component image, and the degree of damage is determined from the similarity. Determining the degree of damage from this similarity makes it possible to recognize color changes of the component and thus to determine the degree of damage more accurately.
Specifically, the degree of damage of the component to be detected can be obtained from the similarity and the correlation between similarity and degree of damage. In general, the higher the similarity, the closer the features of the to-be-detected component image are to those of the target component image, and the lower the degree of damage. For example, when the degree of damage is expressed as a damage percentage, it can be determined as: degree of damage = 1 - similarity. A sketch of the combined decision is given below.
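A minimal sketch of the two-branch decision described above, assuming the 98% preset probability value from the text and the "1 - x" rules for both branches; `similarity_model` is a placeholder for the unspecified color similarity comparison model.

```python
# Minimal sketch of the combined decision (assumption: preset probability value 0.98 from
# the text; `similarity_model` stands in for the color similarity comparison model).
def determine_damage(recognition_prob: float,
                     color_image,
                     similarity_model,
                     preset_prob: float = 0.98) -> float:
    if recognition_prob <= preset_prob:
        # contour differs noticeably from the target component: use the probability branch
        return 1.0 - recognition_prob
    # contour almost matches: fall back to the color similarity comparison model
    similarity = similarity_model(color_image)
    return 1.0 - similarity
```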
In the method for determining the degree of damage to a component provided by the embodiments of the present application, the target component images are processed in advance, e.g. by contour extraction, grayscale conversion, smoothing and thresholding, and the processed target component images are used to train the image recognition model, which further reduces the amount of data needed for training and improves training efficiency. Likewise, the to-be-detected component image is processed, e.g. by contour extraction, grayscale conversion, smoothing, thresholding and region division, and the processed image is input into the image recognition model to obtain the recognition probability that the component to be detected is the target component, which reduces the amount of data the model has to process and improves its runtime efficiency. Determining the degree of damage from the similarity between the to-be-detected component image and the target component image makes it possible to recognize color changes of the component and thus to determine the degree of damage more accurately.
Based on the method for determining the degree of damage to a component provided by the above embodiments, the embodiments of the present application also provide a device for determining the degree of damage to a component, whose working principle is described in detail below with reference to the accompanying drawings. Referring to Fig. 4, which is a structural block diagram of a device for determining the degree of damage to a component according to an embodiment of the present application, the device comprises:
a model training unit 110, configured to train an image recognition model in advance with a target component image, where the target component image contains a target component and the target component is an undamaged component;
a to-be-detected component image acquiring unit 120, configured to obtain a to-be-detected component image, where the to-be-detected component image contains a component to be detected;
a recognition probability acquiring unit 130, configured to input the to-be-detected component image into the image recognition model to obtain the recognition probability that the component to be detected is the target component; and
a degree-of-damage acquiring unit 140, configured to determine the degree of damage of the component to be detected according to the recognition probability.
Optionally, the model training unit comprises:
a target component image acquiring unit, configured to obtain a target component image in advance;
a first edge detection unit, configured to perform edge detection on the target component image, extract its contour features and form a target contour image; and
a model training subunit, configured to train the image recognition model with the target contour image;
and the recognition probability acquiring unit comprises:
a second edge detection unit, configured to perform edge detection on the to-be-detected component image, extract its contour features and form a component contour image; and
a recognition probability acquiring subunit, configured to input the component contour image into the image recognition model to obtain the recognition probability that the component to be detected is the target component.
Optionally, the target component image is a color image, and the first edge detection unit is specifically configured to:
convert the target component image to grayscale, perform edge detection on the converted image, extract the contour features of the target component image and form a target contour image;
and/or
the to-be-detected component image is a color image, and the second edge detection unit is specifically configured to:
convert the to-be-detected component image to grayscale, perform edge detection on the converted image, extract the contour features of the to-be-detected component image and form a component contour image.
Optionally, the model training subunit is specifically configured to:
replace gray values in the target contour image that are greater than or equal to a first preset gray threshold with a first preset gray value to obtain an enhanced target image, and
train the image recognition model with the enhanced target image;
and/or
the recognition probability acquiring subunit is specifically configured to:
replace gray values in the component contour image that are greater than or equal to a second preset gray threshold with a second preset gray value to obtain an enhanced component image, and
input the enhanced component image into the image recognition model to obtain the recognition probability that the component to be detected is the target component.
Optionally, the first edge detection unit is specifically configured to:
smooth the target component image, perform edge detection on the smoothed target component image, and extract the contour features of the target component image to form the target contour image;
and/or
the second edge detection unit is specifically configured to:
smooth the image of the component to be detected, perform edge detection on the smoothed image of the component to be detected, and extract the contour features of the image of the component to be detected to form the component contour image.
Optionally, the identification probability acquiring unit includes:
a to-be-detected region acquiring unit, configured to determine multiple to-be-detected regions in the image of the component to be detected and form multiple to-be-detected region images from the multiple to-be-detected regions, the multiple to-be-detected regions being located at different positions in the image of the component to be detected;
a region probability acquiring unit, configured to input the multiple to-be-detected region images into the image recognition model and respectively obtain, for each to-be-detected region image, a region probability that the component to be detected is the target component;
an identification probability acquiring subunit, configured to determine the maximum value among the region probabilities as the identification probability.
Optionally, the sizes of the multiple to-be-detected regions are not all identical.
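Below is a minimal sketch of the region division described above, assuming the preprocessed image is a NumPy array and that `model_predict` returns, for a region image, the probability that it shows the target component; the window sizes, stride and function name are illustrative assumptions.

```python
import numpy as np

def max_region_probability(image, model_predict,
                           window_sizes=((128, 128), (192, 192)), stride=64):
    """Divide the image of the component to be detected into regions of
    several (not all identical) sizes at different locations, score each
    region with the image recognition model, and return the maximum
    region probability as the identification probability."""
    h, w = image.shape[:2]
    best = 0.0
    for win_h, win_w in window_sizes:
        for top in range(0, max(h - win_h, 0) + 1, stride):
            for left in range(0, max(w - win_w, 0) + 1, stride):
                region = image[top:top + win_h, left:left + win_w]
                best = max(best, float(model_predict(region)))
    return best
```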
Optionally, there are multiple target component images, the multiple target component images correspond to different target components, each target component image carries a target component identifier, and the target component identifiers corresponding to different target components are different; the model training unit is specifically configured to:
train the image recognition model in advance with the target component images carrying the target component identifiers;
the image of the component to be detected carries a to-be-detected component identifier, and the identification probability acquiring unit is specifically configured to:
input the image of the component to be detected carrying the to-be-detected component identifier into the image recognition model, and obtain the identification probability that the component to be detected is the target component whose target component identifier is identical to the to-be-detected component identifier.
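A minimal sketch of this identifier-based routing, under the assumption that a separate prediction function has been trained per target component identifier and stored in a dictionary; the `models` mapping, the identifier strings and `model_predict` are illustrative assumptions rather than part of the original disclosure.

```python
def identification_probability_by_id(image, component_id, models):
    """Select the image recognition model whose target component identifier
    matches the identifier carried by the image of the component to be
    detected, and return its identification probability."""
    model_predict = models[component_id]   # e.g. {"front_bumper": predict_fn, ...}
    return float(model_predict(image))
```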
Optionally, there are multiple target component images, the multiple target component images correspond to different target components, each target component image carries a target component identifier, and the target component identifiers corresponding to different target components are different; the model training unit is specifically configured to:
train the image recognition model in advance with the target component images carrying the target component identifiers;
the identification probability acquiring unit is specifically configured to:
input the image of the component to be detected into the image recognition model and obtain, for each of the target components, the identification probability that the component to be detected is that target component;
the damage degree acquiring unit is specifically configured to:
determine the damage degree of the component to be detected according to the maximum value of the multiple identification probabilities.
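As a hedged sketch of this variant, assume a single prediction function `model_predict_all` that returns, for each target component identifier, the probability that the component to be detected is that target component; the function name and return format are assumptions.

```python
def identification_probability_over_targets(image, model_predict_all):
    """Obtain the identification probability of the component to be detected
    for each target component and keep the maximum value, which is then
    used to determine the damage degree."""
    probabilities = model_predict_all(image)   # e.g. {"door": 0.82, "hood": 0.31, ...}
    return max(probabilities.values())
```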
Optionally, the identification probability acquiring unit is specifically configured to:
determine, if the identification probability is less than or equal to a preset probability value, the damage degree of the component to be detected according to the identification probability and the correlation between identification probability and damage degree.
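The embodiments do not specify the exact correlation between identification probability and damage degree. The sketch below assumes, purely for illustration, a monotone mapping in which a lower identification probability corresponds to a higher damage degree, together with an assumed preset probability value of 0.9.

```python
def damage_degree_from_probability(identification_probability,
                                   preset_probability=0.9):
    """If the identification probability does not exceed the preset value,
    map it to a damage degree; here the damage degree is assumed to
    decrease linearly as the identification probability increases."""
    if identification_probability <= preset_probability:
        return 1.0 - identification_probability / preset_probability
    return 0.0  # above the preset value, the similarity comparison branch applies
```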
Optionally, the identification probability acquiring unit is specifically configured to:
input, if the identification probability is greater than the preset probability value, the image of the component to be detected into an image similarity comparison model to obtain the similarity between the image of the component to be detected and the target component image, where the image similarity comparison model is obtained by training with the target component image, and the image of the component to be detected and the target component image are both color images;
and determine the damage degree of the component to be detected according to the similarity between the image of the component to be detected and the target component image.
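The image similarity comparison model itself is not detailed here. As an illustrative stand-in that at least captures the color changes mentioned in the embodiments, the sketch below compares HSV color histograms of the two color images and maps the similarity to a damage degree; the histogram comparison and the linear mapping are assumptions, not the disclosed model.

```python
import cv2

def damage_degree_from_similarity(detected_bgr, target_bgr, bins=32):
    """Compare the color image of the component to be detected with the
    target component image via HSV color histograms, and map the
    similarity (1 = identical) to a damage degree (assumed 1 - similarity)."""
    def hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        return cv2.normalize(h, h).flatten()

    similarity = cv2.compareHist(hist(detected_bgr), hist(target_bgr),
                                 cv2.HISTCMP_CORREL)
    return max(0.0, 1.0 - float(similarity))
```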
Optionally, the image recognition model includes multiple convolutional layers, and the dimensions of the parameter matrices of the multiple convolutional layers decrease successively along the processing order of the image of the component to be detected.
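A minimal sketch of such a model, assuming PyTorch and a single-channel contour image as input; the layer count, the channel sizes (64, 32, 16, decreasing along the processing order) and the sigmoid output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ComponentRecognitionModel(nn.Module):
    """Convolutional layers whose parameter dimensions shrink in
    processing order (64 -> 32 -> 16 output channels)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 1)

    def forward(self, x):
        # x: (batch, 1, H, W) preprocessed contour image
        h = self.features(x).flatten(1)
        # Identification probability that the component is the target component
        return torch.sigmoid(self.classifier(h))
```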
The apparatus for determining a component damage degree provided by the embodiments of the present application trains the image recognition model in advance with a target component image, where the target component image includes a target component and the target component is an undamaged component. Since undamaged components are relatively few in number compared with components to be detected that have various damage degrees, the image recognition model can be trained with fewer target component images. An image of a component to be detected, which includes the component to be detected, is obtained and input into the image recognition model to obtain the identification probability that the component to be detected is the target component. The identification probability indicates the correlation between the features of the component to be detected in its image and the features of the target component in the target component image: the larger the identification probability, the better the correlation and the smaller the corresponding damage degree. The damage degree of the component to be detected can therefore be determined from the identification probability. This way of determining the component damage degree trains the image recognition model with images of undamaged components and then determines the damage degree of the component to be detected with the trained image recognition model. The number of target component images required for training is reduced, which reduces the training workload and improves the training accuracy of the image recognition model; moreover, the damage degree does not need to be labeled manually, which avoids the mistakes that may occur when damage degrees are labeled by hand and improves the accuracy of determining the damage degree of the component to be detected.
Based on the above method for determining a component damage degree, the embodiments of the present application further provide a device for determining a component damage degree. As shown in Fig. 5, the device includes a processor and a memory;
the memory is configured to store instructions,
and the processor is configured to execute the instructions in the memory to perform the method for determining a component damage degree provided above.
The embodiments of the present application further provide a computer-readable storage medium including instructions which, when run on a computer, cause the computer to perform the method for determining a component damage degree provided above.
When introducing elements of the various embodiments of the present application, the articles "a", "an", "the" and "said" are intended to mean that there are one or more of the elements. The terms "include", "comprise" and "have" are inclusive and mean that there may be additional elements other than the listed elements.
It should be noted that, as those of ordinary skill in the art will appreciate, all or part of the processes in the above method embodiments may be implemented by instructing relevant hardware through a computer program. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus embodiment is substantially similar to the method embodiment, it is described relatively simply, and reference may be made to the description of the method embodiment for the relevant parts. The apparatus embodiments described above are merely illustrative; the units and modules described as separate parts may or may not be physically separate. Some or all of the units and modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
The above are only specific embodiments of the present application. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.

Claims (10)

1. A method for determining a component damage degree, characterized in that an image recognition model is trained in advance with a target component image, the target component image includes a target component, and the target component is an undamaged component;
the method includes:
obtaining an image of a component to be detected, the image of the component to be detected including the component to be detected;
inputting the image of the component to be detected into the image recognition model to obtain an identification probability that the component to be detected is the target component;
and determining the damage degree of the component to be detected according to the identification probability.
2. The method according to claim 1, characterized in that the training of the image recognition model in advance with the target component image comprises:
obtaining the target component image in advance;
performing edge detection on the target component image, and extracting contour features of the target component image to form a target contour image;
training the image recognition model with the target contour image;
and the inputting of the image of the component to be detected into the image recognition model comprises:
performing edge detection on the image of the component to be detected, and extracting contour features of the image of the component to be detected to form a component contour image;
inputting the component contour image into the image recognition model.
3. The method according to claim 2, characterized in that the target component image is a color image, and the performing of edge detection on the target component image comprises:
converting the target component image to grayscale, and performing edge detection on the grayscale target component image;
and/or
the image of the component to be detected is a color image, and the performing of edge detection on the image of the component to be detected comprises:
converting the image of the component to be detected to grayscale, and performing edge detection on the grayscale image of the component to be detected.
4. The method according to claim 3, characterized in that the training of the image recognition model with the target contour image comprises:
replacing, in the target contour image, gray values greater than or equal to a first preset gray threshold with a first preset gray value to obtain an enhanced target image,
and training the image recognition model with the enhanced target image;
and/or
the inputting of the component contour image into the image recognition model comprises:
replacing, in the component contour image, gray values greater than or equal to a second preset gray threshold with a second preset gray value to obtain an enhanced component image,
and inputting the enhanced component image into the image recognition model.
5. The method according to claim 1, characterized in that the inputting of the image of the component to be detected into the image recognition model to obtain the identification probability that the component to be detected is the target component comprises:
determining multiple to-be-detected regions in the image of the component to be detected, and forming multiple to-be-detected region images from the multiple to-be-detected regions, the multiple to-be-detected regions being located at different positions in the image of the component to be detected;
inputting the multiple to-be-detected region images into the image recognition model, and respectively obtaining, for each to-be-detected region image, a region probability that the component to be detected is the target component;
determining the maximum value among the region probabilities as the identification probability.
6. The method according to claim 1, characterized in that there are multiple target component images, the multiple target component images correspond to different target components, each target component image carries a target component identifier, and the target component identifiers corresponding to different target components are different; the training of the image recognition model in advance with the target component image comprises:
training the image recognition model in advance with the target component images carrying the target component identifiers;
the image of the component to be detected carries a to-be-detected component identifier, and the inputting of the image of the component to be detected into the image recognition model to obtain the identification probability that the component to be detected is the target component comprises:
inputting the image of the component to be detected carrying the to-be-detected component identifier into the image recognition model, and obtaining the identification probability that the component to be detected is the target component whose target component identifier is identical to the to-be-detected component identifier.
7. The method according to claim 1, characterized in that there are multiple target component images, the multiple target component images correspond to different target components, each target component image carries a target component identifier, and the target component identifiers corresponding to different target components are different; the training of the image recognition model in advance with the target component image comprises:
training the image recognition model in advance with the target component images carrying the target component identifiers;
the inputting of the image of the component to be detected into the image recognition model to obtain the identification probability that the component to be detected is the target component comprises:
inputting the image of the component to be detected into the image recognition model, and obtaining, for each of the target components, the identification probability that the component to be detected is that target component;
and the determining of the damage degree of the component to be detected according to the identification probability comprises:
determining the damage degree of the component to be detected according to the maximum value of the multiple identification probabilities.
8. The method according to claim 1, characterized in that the determining of the damage degree of the component to be detected according to the identification probability comprises:
determining, if the identification probability is less than or equal to a preset probability value, the damage degree of the component to be detected according to the identification probability and the correlation between identification probability and damage degree.
9. The method according to claim 1, characterized in that the determining of the damage degree of the component to be detected according to the identification probability comprises:
inputting, if the identification probability is greater than a preset probability value, the image of the component to be detected into an image similarity comparison model to obtain the similarity between the image of the component to be detected and the target component image, where the image similarity comparison model is obtained by training with the target component image, and the image of the component to be detected and the target component image are both color images;
determining the damage degree of the component to be detected according to the similarity between the image of the component to be detected and the target component image.
10. The method according to claim 1, characterized in that the image recognition model includes multiple convolutional layers, and the dimensions of the parameter matrices of the multiple convolutional layers decrease successively along the processing order of the image of the component to be detected.
CN201810689304.1A 2018-06-28 2018-06-28 A kind of determination method, device and equipment of components damage degree Pending CN108960256A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810689304.1A CN108960256A (en) 2018-06-28 2018-06-28 A kind of determination method, device and equipment of components damage degree

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810689304.1A CN108960256A (en) 2018-06-28 2018-06-28 A kind of determination method, device and equipment of components damage degree

Publications (1)

Publication Number Publication Date
CN108960256A true CN108960256A (en) 2018-12-07

Family

ID=64487677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810689304.1A Pending CN108960256A (en) 2018-06-28 2018-06-28 A kind of determination method, device and equipment of components damage degree

Country Status (1)

Country Link
CN (1) CN108960256A (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102081045A (en) * 2010-03-24 2011-06-01 上海海事大学 Structural damage identification method based on laser television holographic technique
CN102136007A (en) * 2011-03-31 2011-07-27 石家庄铁道大学 Small world property-based engineering information organization method
DE102012009783B3 (en) * 2012-05-18 2013-08-14 Khs Gmbh Method and device for inspection of empty bottles
CN104469305A (en) * 2014-12-04 2015-03-25 国家电网公司 Fault detecting method and device of power grid video monitoring device
CN105868722A (en) * 2016-04-05 2016-08-17 国家电网公司 Identification method and system of abnormal power equipment images
CN106023185A (en) * 2016-05-16 2016-10-12 国网河南省电力公司电力科学研究院 Power transmission equipment fault diagnosis method
CN106096668A (en) * 2016-08-18 2016-11-09 携程计算机技术(上海)有限公司 The recognition methods of watermarked image and the system of identification
CN106650770A (en) * 2016-09-29 2017-05-10 南京大学 Mura defect detection method based on sample learning and human visual characteristics
CN106815835A (en) * 2017-01-10 2017-06-09 北京邮电大学 Damnification recognition method and device
CN106959662A (en) * 2017-05-10 2017-07-18 东北大学 A kind of electric melting magnesium furnace unusual service condition identification and control method
CN107782733A (en) * 2017-09-30 2018-03-09 中国船舶重工集团公司第七〇九研究所 Image recognition the cannot-harm-detection device and method of cracks of metal surface
CN108178037A (en) * 2017-12-30 2018-06-19 武汉大学 A kind of elevator faults recognition methods based on convolutional neural networks

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
RICHARD GREEN et al.: "MULTI-SCALE RIGID REGISTRATION TO DETECT DAMAGE IN MICRO-CT IMAGES OF PROGRESSIVELY LOADED BONES", 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro *
TINGTING YAO et al.: "Image Based Obstacle Detection For Automatic Train Supervision", 2012 5th International Congress on Image and Signal Processing *
NIU LIN: "Theory and Computational Methods of Structural Vibration Control", 30 April 2016, Beijing: China Science and Technology Press *
FAN WENBING et al.: "Fault Image Recognition and Matching Method Based on the Relief Algorithm", Ordnance Industry Automation *
CHEN HUIYAN (ed.): "Introduction to Driverless Vehicles", 31 July 2014, Beijing: Beijing Institute of Technology Press *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363756A (en) * 2019-07-18 2019-10-22 佛山市高明金石建材有限公司 A kind of wear detecting system and detection method for bistrique
CN110969183A (en) * 2019-09-20 2020-04-07 北京方位捷讯科技有限公司 Method and system for determining damage degree of target object according to image data
CN110969183B (en) * 2019-09-20 2023-11-21 北京方位捷讯科技有限公司 Method and system for determining damage degree of target object according to image data
CN111242070A (en) * 2020-01-19 2020-06-05 上海眼控科技股份有限公司 Target object detection method, computer device, and storage medium

Similar Documents

Publication Publication Date Title
CN111723860B (en) Target detection method and device
CN110569837B (en) Method and device for optimizing damage detection result
CN105938559B (en) Use the Digital Image Processing of convolutional neural networks
Riese et al. Soil texture classification with 1D convolutional neural networks based on hyperspectral data
CN108537102B (en) High-resolution SAR image classification method based on sparse features and conditional random field
CN108647588A (en) Goods categories recognition methods, device, computer equipment and storage medium
CN105138993A (en) Method and device for building face recognition model
CN104866868A (en) Metal coin identification method based on deep neural network and apparatus thereof
CN110135318B (en) Method, device, equipment and storage medium for determining passing record
CN108960256A (en) A kind of determination method, device and equipment of components damage degree
CN110163294B (en) Remote sensing image change region detection method based on dimension reduction operation and convolution network
CN113269257A (en) Image classification method and device, terminal equipment and storage medium
Öztürk et al. Transfer learning and fine‐tuned transfer learning methods' effectiveness analyse in the CNN‐based deep learning models
CN107609507B (en) Remote sensing image target identification method based on characteristic tensor and support tensor machine
CN115240037A (en) Model training method, image processing method, device and storage medium
CN112200191B (en) Image processing method, image processing device, computing equipment and medium
CN114219936A (en) Object detection method, electronic device, storage medium, and computer program product
CN111968087B (en) Plant disease area detection method
CN111401415A (en) Training method, device, equipment and storage medium of computer vision task model
KR101821770B1 (en) Techniques for feature extraction
CN115630660B (en) Barcode positioning method and device based on convolutional neural network
Wong et al. Efficient multi-structure robust fitting with incremental top-k lists comparison
Khan et al. Segmentation of single and overlapping leaves by extracting appropriate contours
Krupiński et al. Improved two-step binarization of degraded document images based on Gaussian mixture model
CN112862767A (en) Measurement learning-based surface defect detection method for solving difficult-to-differentiate unbalanced samples

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181207