CN109657716B - Vehicle appearance damage identification method based on deep learning - Google Patents

Vehicle appearance damage identification method based on deep learning

Info

Publication number
CN109657716B
CN109657716B (application CN201811521006.8A)
Authority
CN
China
Prior art keywords
network
vehicle appearance
model
damage
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811521006.8A
Other languages
Chinese (zh)
Other versions
CN109657716A (en)
Inventor
朱向雷
郭维明
刘森
朱倩倩
赵子豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Automotive Technology and Research Center Co Ltd
Automotive Data of China Tianjin Co Ltd
Original Assignee
China Automotive Technology and Research Center Co Ltd
Automotive Data of China Tianjin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Automotive Technology and Research Center Co Ltd, Automotive Data of China Tianjin Co Ltd filed Critical China Automotive Technology and Research Center Co Ltd
Priority to CN201811521006.8A priority Critical patent/CN109657716B/en
Publication of CN109657716A publication Critical patent/CN109657716A/en
Application granted granted Critical
Publication of CN109657716B publication Critical patent/CN109657716B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08 Insurance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Business, Economics & Management (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a vehicle appearance damage identification method based on deep learning, which comprises the steps of: obtaining actual vehicle appearance damage images and marking the damage type and position; building a deep convolutional neural network; training the model to obtain a trained model; and carrying out vehicle appearance damage identification and model evaluation with the trained model. The method identifies the damage type and degree of vehicle appearance parts in complex environments based on a deep convolutional neural network model, increasing the algorithm's running speed while preserving its accuracy.

Description

Vehicle appearance damage identification method based on deep learning
Technical Field
The invention belongs to the field of artificial intelligence, and particularly relates to a vehicle appearance damage identification method based on deep learning.
Background
In recent years, with the continued rapid development of China's economy and society, the number of motor vehicles in China has grown rapidly, and vehicle traffic accidents occur frequently. Generally, after a vehicle is involved in a traffic accident, a professional claims adjuster from the insurance company must assess the vehicle damage manually, which leads to low case-processing efficiency for the insurance company and long waiting times for the vehicle owner.
A search of the prior art shows that Chinese patent document CN105678622A, published 2016-06-15 and entitled "Analysis method and system of vehicle insurance claim settlement photos", discloses a method for analyzing accident photos uploaded from a mobile terminal with a conventional convolutional neural network, identifying damaged parts, and generating reminder information based on the analysis result. That method only locates the damaged part of the vehicle and cannot identify the specific damage type. In addition, its damage assessment result still needs manual verification, so labor costs remain high.
Further, Chinese patent document CN107358596A, published 2017-11-17, discloses an image-based vehicle damage assessment method, apparatus, electronic device, and system. In that patent, damage to vehicle appearance parts is identified by training a network model composed of convolutional layers (CNN) and a region proposal layer (RPN) on samples. The method adopts multi-scale, multi-aspect-ratio reference boxes, which effectively improves detection of damage at unconventional scales and proportions. The model runs in two stages: the RPN first screens coarse candidate regions on the feature map, and a convolutional neural network then classifies and regresses the candidate regions. However, the process is complex and computationally heavy, so detection is slow and real-time performance is difficult to achieve.
Disclosure of Invention
In view of this, the invention aims to provide a vehicle appearance damage identification method based on deep learning that identifies the damage type and degree of vehicle appearance parts in complex environments with a deep convolutional neural network model, increasing the algorithm's running speed while preserving its accuracy.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
A vehicle appearance damage identification method based on deep learning comprises the following steps:
Step one: acquiring an actual vehicle appearance damage image and marking the damage type and position;
step two: building a deep convolutional neural network;
step three: carrying out model training to obtain a trained model;
step four: and carrying out vehicle appearance damage identification and model evaluation by using the model obtained by training.
Further, in step one, a data set is built in-house, and the acquired vehicle appearance damage images, covering multiple shooting angles, vehicle types, and environments, are stored in the data set.
The acquired images cover distant and close views of damaged vehicle appearance for vehicle types such as cars, SUVs, MPVs, and cross-type passenger vehicles, from multiple directions such as the front and rear of the vehicle, both in daytime and at night.
An image marking tool is used to label damage to vehicle appearance parts, such as scratches, dents, and the like.
Further, in step one, the labeled data set is divided into a training set, a validation set, and a test set.
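As an illustration of step one (a sketch, not part of the patented method), the snippet below shows one way the labeled data set could be represented and divided into the three subsets; the file names, damage classes, and 70/15/15 split ratio are assumptions.

```python
import random

# Each annotation pairs an image with a damage type and a box position,
# e.g. as exported from an image marking tool. The entries are examples.
annotations = [
    {"image": "car_0001.jpg", "damage_type": "scratch", "bbox": [120, 340, 260, 410]},
    {"image": "car_0002.jpg", "damage_type": "dent", "bbox": [88, 150, 300, 330]},
    # ... one entry per labeled damage region
]

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle labeled samples and divide them into train/val/test subsets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_set, val_set, test_set = split_dataset(annotations)
```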
Further, in step two, a backbone network is first built to perform feature extraction based on a convolutional neural network (CNN) model, and parameters such as the network threshold and the maximum number of iterations are set;
a candidate box generation network is built, taking the extracted feature maps as input, to generate candidate boxes based on a CNN model; the candidate boxes include both foreground and background boxes, and the generated candidate boxes are fed directly into the next part of the network without preliminary screening, which greatly shortens the model's running time;
a target classification network and a bounding box regression network are built, taking the candidate boxes as input, to classify the targets in the candidate boxes and regress their positions.
Further, in step two, a residual network (ResNet) is used as the backbone network, and the backbone is extended with a feature pyramid network (FPN).
Further, in step two, the deep neural network built is a feedforward network, and model training is completed by constructing a loss function and continuously feeding back and adjusting the network parameters. The traditional one-stage network loss function is the cross-entropy (CE) loss:
$$\mathrm{CE}(p, y) = \begin{cases} -\log(p) & \text{if } y = 1 \\ -\log(1 - p) & \text{otherwise} \end{cases}$$
where $y = 1$ denotes a positive sample, $y = -1$ denotes a negative sample, and $p \in [0, 1]$ is the confidence score. With this function, when a large number of easy samples exist, even though the error from each individual sample is small, the sum of those errors can strongly affect the detector.
Adding a weight function in front of the cross-entropy loss solves the class imbalance problem. Let
$$p_t = \begin{cases} p & \text{if } y = 1 \\ 1 - p & \text{otherwise} \end{cases}$$
Then $\mathrm{CE}(p, y) = \mathrm{CE}(p_t) = -\log(p_t)$. Multiplying by the weight function $(1 - p_t)^\gamma$ gives the new loss function:

$$\mathrm{NCE}(p_t) = -(1 - p_t)^\gamma \log(p_t),$$

where $\gamma$ is a modulating factor and $\gamma > 0$.
the new loss function solves the class imbalance problem of the traditional one-stage network, namely the situation that a large number of simple negative samples overwhelm the detector in the training process due to foreground-background class imbalance in the training process is solved.
Further, in step three, the parameters to be trained in the network are first initialized, and the training set is input into the initialized network for forward propagation; using the loss function from step two and the feedforward nature of the convolutional neural network, the network parameters are adjusted until the loss value falls below a set threshold or the maximum number of iterations is reached, finally yielding a trained network model for identifying vehicle appearance damage.
Further, in step three, each training sample in the training set includes the original image, the damage position, and the damage type information.
Further, in step three, the model outputs the appearance damage level in the vehicle picture, including the damage type and the damage position.
Compared with the prior art, the vehicle appearance damage identification method based on deep learning has the following advantages:
according to the vehicle appearance damage identification method based on deep learning, the identification of the vehicle appearance damage is realized through the built deep convolutional neural network model, the vehicle appearance damage is accurate in positioning and comprises the damage type and the damage position;
the model algorithm solves the problem of low detection precision caused by the fact that the category is unbalanced in practice by improving the loss function of the algorithm, and realizes the function of quickly and accurately identifying the appearance damage of the vehicle;
the method and the system are beneficial to improving the efficiency and the accuracy of identifying the damage of the vehicle part by an insurance company, and really solve the problems of high labor calling cost, long waiting time of a vehicle owner and the like in actual claim settlement.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the invention without limiting it.
In the drawings:
Fig. 1 is a flowchart of a vehicle appearance damage identification method based on deep learning according to an embodiment of the present invention;
Fig. 2 is a network model structure diagram of a vehicle appearance damage identification method based on deep learning according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; they can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or may denote communication between the interiors of two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific situation.
The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
As shown in figs. 1-2, a vehicle appearance damage identification method based on deep learning comprises:
Step one: acquiring an actual vehicle appearance damage image and marking the damage type and position;
step two: building a deep convolutional neural network;
step three: carrying out model training to obtain a trained model;
step four: and carrying out vehicle appearance damage identification and model evaluation by using the model obtained by training.
As shown in figs. 1-2, in step one, a data set is built in-house, and the acquired vehicle appearance damage images, covering multiple shooting angles, vehicle types, and environments, are stored in the data set.
The acquired images cover distant and close views of damaged vehicle appearance for vehicle types such as cars, SUVs, MPVs, and cross-type passenger vehicles, from multiple directions such as the front and rear of the vehicle, both in daytime and at night.
An image marking tool is used to label damage to vehicle appearance parts, such as scratches, dents, and the like.
As shown in figs. 1-2, in step one, the labeled data set is divided into a training set, a validation set, and a test set.
As shown in figs. 1-2, in step two, a backbone network is first built to perform feature extraction based on a convolutional neural network (CNN) model, and parameters such as the network threshold and the maximum number of iterations are set;
a candidate box generation network is built, taking the extracted feature maps as input, to generate candidate boxes based on a CNN model; the candidate boxes include both foreground and background boxes, and the generated candidate boxes are fed directly into the next part of the network without preliminary screening, which greatly shortens the model's running time;
a target classification network and a bounding box regression network are built, taking the candidate boxes as input, to classify the targets in the candidate boxes and regress their positions.
As shown in figs. 1-2, in step two, a residual network (ResNet) with strong feature expression capability, such as ResNet-50 or ResNet-101, is used as the backbone; extending the backbone with a feature pyramid network (FPN), e.g. ResNet-101 + FPN, lets the feature extraction network work better across multiple scales.
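As a sketch of such a backbone (an assumption, not code from the patent), torchvision ships a helper that builds a ResNet expanded with an FPN; the exact keyword names vary across torchvision releases (older versions take `pretrained` instead of `weights`):

```python
import torch
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

# ResNet-50 expanded with a feature pyramid network; swap in "resnet101"
# for the ResNet-101 + FPN variant mentioned above.
backbone = resnet_fpn_backbone(backbone_name="resnet50", weights=None)

image = torch.randn(1, 3, 800, 800)
features = backbone(image)                   # dict of multi-scale feature maps
for level, fmap in features.items():
    print(level, tuple(fmap.shape))          # pyramid levels '0'..'3' and 'pool'
```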
As shown in figs. 1-2, in step two, the deep neural network built is a feedforward network, and model training is completed by constructing a loss function and continuously feeding back and adjusting the network parameters. The traditional one-stage network loss function is the cross-entropy (CE) loss:
$$\mathrm{CE}(p, y) = \begin{cases} -\log(p) & \text{if } y = 1 \\ -\log(1 - p) & \text{otherwise} \end{cases}$$
where $y = 1$ denotes a positive sample, $y = -1$ denotes a negative sample, and $p \in [0, 1]$ is the confidence score. With this function, when a large number of easy samples exist, even though the error from each individual sample is small, the sum of those errors can strongly affect the detector.
Adding a weight function in front of the cross-entropy loss solves the class imbalance problem. Let
$$p_t = \begin{cases} p & \text{if } y = 1 \\ 1 - p & \text{otherwise} \end{cases}$$
Then $\mathrm{CE}(p, y) = \mathrm{CE}(p_t) = -\log(p_t)$. Multiplying by the weight function $(1 - p_t)^\gamma$ gives the new loss function:

$$\mathrm{NCE}(p_t) = -(1 - p_t)^\gamma \log(p_t),$$

where $\gamma$ is a modulating factor and $\gamma > 0$.
the new loss function solves the class imbalance problem of the traditional one-stage network, namely the situation that a large number of simple negative samples overwhelm the detector in the training process due to foreground-background class imbalance in the training process is solved.
As shown in figs. 1-2, in step three, the parameters to be trained in the network are first initialized; in this embodiment, ResNet-101 parameters are used as initial values for the convolutional part of the network. The training set is input into the initialized network for forward propagation; using the loss function from step two and the feedforward nature of the convolutional neural network, the network parameters are adjusted until the loss value falls below a set threshold or the maximum number of iterations is reached, finally yielding a trained network model for identifying vehicle appearance damage.
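A minimal sketch of this training loop (the optimizer, learning rate, loss threshold, and iteration limit are illustrative assumptions; `model` stands for the detection network sketched earlier and `train_loader` yields batches built from the labeled training set):

```python
import torch
import torch.nn.functional as F

def detection_loss(cls_logits, box_deltas, cls_targets, box_targets, gamma=2.0):
    """NCE-style classification loss plus smooth-L1 box regression."""
    p = torch.sigmoid(cls_logits)
    p_t = torch.where(cls_targets == 1, p, 1.0 - p)
    cls_loss = (-((1.0 - p_t) ** gamma) * torch.log(p_t + 1e-8)).mean()
    return cls_loss + F.smooth_l1_loss(box_deltas, box_targets)

def train(model, train_loader, max_iters=10000, loss_threshold=0.05):
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
    for it, (images, cls_targets, box_targets) in enumerate(train_loader, 1):
        cls_logits, box_deltas = model(images)             # forward propagation
        loss = detection_loss(cls_logits, box_deltas, cls_targets, box_targets)
        optimizer.zero_grad()
        loss.backward()                                    # feed back errors
        optimizer.step()                                   # adjust parameters
        # stop once the loss is below the threshold or iterations run out
        if loss.item() < loss_threshold or it >= max_iters:
            break
    return model
```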
As shown in figs. 1-2, in step three, each training sample in the training set includes the original image, the damage position, and the damage type information.
As shown in figs. 1-2, in step three, the model outputs the appearance damage level in the vehicle picture, including the damage type and the damage position.
In this embodiment, the trained network is applied: one or more images to be detected are input to the trained network, and the vehicle damage type (including degree) and damage position corresponding to each image are output, where the output image region corresponds to the vehicle appearance damage position in the image. Specifically, if an input image contains one vehicle appearance damage (covered by the damage labels of the training samples), one appearance damage and its corresponding position are output; if there are k appearance damages (covered by the trained appearance label types), then k appearance damages and their corresponding positions are output.
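An illustrative sketch of this application stage, assuming a `model` whose post-processed output is decoded boxes, class ids, and scores per image; the class names and the score threshold are assumptions, not values from the patent:

```python
import torch

DAMAGE_CLASSES = ["scratch", "dent", "crack", "deformation", "glass_damage"]

@torch.no_grad()
def identify_damage(model, images, score_threshold=0.5):
    """Return, for each input image, the detected damage types and positions."""
    results = []
    for image in images:
        # assumed post-processed output: boxes (k, 4), labels (k,), scores (k,)
        boxes, labels, scores = model(image.unsqueeze(0))
        keep = scores >= score_threshold
        results.append([
            {"damage_type": DAMAGE_CLASSES[int(l)],
             "bbox": [float(v) for v in b],
             "score": float(s)}
            for b, l, s in zip(boxes[keep], labels[keep], scores[keep])
        ])
    return results   # k appearance damages in an image -> k entries for it
```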
Comparison with a model using a region proposal network (RPN) shows that the running speed is effectively improved while model accuracy is maintained.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A vehicle appearance damage identification method based on deep learning, characterized in that the method comprises the following steps:
Step one: acquiring an actual vehicle appearance damage image and marking the damage type and position;
step two: building a deep convolutional neural network;
step three: carrying out model training to obtain a trained model;
step four: carrying out vehicle appearance damage identification and model evaluation by using the model obtained by training;
in step two, a residual network (ResNet) is used as the backbone network, and the backbone is extended with a feature pyramid network (FPN);
in step two, the deep neural network built is a feedforward network, and model training is completed by constructing a new loss function and continuously feeding back and adjusting the network parameters; the loss function is:
$$\mathrm{NCE}(p_t) = -(1 - p_t)^\gamma \log(p_t)$$

where $\gamma$ is a modulating factor, $\gamma > 0$, and

$$p_t = \begin{cases} p & \text{if } y = 1 \\ 1 - p & \text{otherwise} \end{cases}$$

with $y = 1$ denoting a positive sample, $y = -1$ denoting a negative sample, and $p \in [0, 1]$ the confidence score.
2. The vehicle appearance damage identification method based on deep learning of claim 1, wherein: in step one, a data set is built in-house, and the acquired vehicle appearance damage images, covering multiple shooting angles, vehicle types, and environments, are stored in the data set.
3. The vehicle appearance damage identification method based on deep learning according to claim 2, characterized in that: in step one, the labeled data set is divided into a training set, a validation set, and a test set.
4. The vehicle appearance damage identification method based on deep learning of claim 3, wherein: in step two, a backbone network is first built to perform feature extraction based on a convolutional neural network (CNN) model;
a candidate box generation network is built, taking the extracted feature maps as input, to generate candidate boxes based on a CNN model;
and a target classification network and a bounding box regression network are built, taking the candidate boxes as input, to classify the targets in the candidate boxes and regress their positions.
5. The vehicle appearance damage identification method based on deep learning of claim 4, wherein: in step three, the parameters to be trained in the network are first initialized, and the training set is input into the initialized network for forward propagation; using the loss function from step two and the feedforward nature of the convolutional neural network, the network parameters are adjusted until the loss value falls below a set threshold or the maximum number of iterations is reached, finally yielding a trained network model for identifying vehicle appearance damage.
6. The vehicle appearance damage identification method based on deep learning of claim 1, wherein: in step three, the training sample data in the training set includes the original image, the damage position, and the damage type information.
7. The vehicle appearance damage identification method based on deep learning of claim 1, wherein: in step three, the model outputs the appearance damage level in the vehicle picture, including the damage type and the damage position.
CN201811521006.8A 2018-12-12 2018-12-12 Vehicle appearance damage identification method based on deep learning Active CN109657716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811521006.8A CN109657716B (en) 2018-12-12 2018-12-12 Vehicle appearance damage identification method based on deep learning


Publications (2)

Publication Number Publication Date
CN109657716A CN109657716A (en) 2019-04-19
CN109657716B true CN109657716B (en) 2020-12-29

Family

ID=66114222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811521006.8A Active CN109657716B (en) 2018-12-12 2018-12-12 Vehicle appearance damage identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN109657716B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135437B (en) * 2019-05-06 2022-04-05 北京百度网讯科技有限公司 Loss assessment method and device for vehicle, electronic equipment and computer storage medium
US10885625B2 (en) 2019-05-10 2021-01-05 Advanced New Technologies Co., Ltd. Recognizing damage through image analysis
CN110569703B (en) * 2019-05-10 2020-09-01 阿里巴巴集团控股有限公司 Computer-implemented method and device for identifying damage from picture
CN110196160A (en) * 2019-05-29 2019-09-03 国电联合动力技术有限公司 A kind of wind turbine gearbox monitoring method based on residual error network
CN110349124A (en) * 2019-06-13 2019-10-18 平安科技(深圳)有限公司 Vehicle appearance damages intelligent detecting method, device and computer readable storage medium
CN110363238A (en) * 2019-07-03 2019-10-22 中科软科技股份有限公司 Intelligent vehicle damage identification method, system, electronic equipment and storage medium
WO2021217853A1 (en) * 2020-04-30 2021-11-04 平安科技(深圳)有限公司 Intelligent loss assessment method and apparatus for damage image, electronic device, and storage medium
CN111523615B (en) * 2020-05-08 2024-03-26 北京深智恒际科技有限公司 Assembly line closed-loop flow method for realizing vehicle appearance professional damage labeling
WO2021151277A1 (en) * 2020-05-26 2021-08-05 平安科技(深圳)有限公司 Method and apparatus for determining severity of damage on target object, electronic device, and storage medium
US11328402B2 (en) 2020-09-29 2022-05-10 Hong Kong Applied Science and Technology Research Institute Company Limited Method and system of image based anomaly localization for vehicles through generative contextualized adversarial network
CN112907576B (en) * 2021-03-25 2024-02-02 平安科技(深圳)有限公司 Vehicle damage grade detection method and device, computer equipment and storage medium
CN113240641B (en) * 2021-05-13 2023-06-16 大连海事大学 Container damage real-time detection method based on deep learning
CN113506244A (en) * 2021-06-05 2021-10-15 北京超维世纪科技有限公司 Indicator light detection and color identification generalization capability improvement algorithm based on deep learning
CN113657409A (en) * 2021-08-16 2021-11-16 平安科技(深圳)有限公司 Vehicle loss detection method, device, electronic device and storage medium
CN113780435B (en) * 2021-09-15 2024-04-16 平安科技(深圳)有限公司 Vehicle damage detection method, device, equipment and storage medium
CN114359717B (en) * 2021-12-17 2023-04-25 华南理工大学 Vehicle damage identification method based on multi-view correlation deep learning
CN114493903B (en) * 2022-02-17 2024-04-09 平安科技(深圳)有限公司 Loss model optimization method in human cold risk assessment and related equipment
CN115994910B (en) * 2023-03-24 2023-06-06 邦邦汽车销售服务(北京)有限公司 Method and system for determining damage degree of automobile based on data processing
CN116012075B (en) * 2023-03-29 2023-05-30 邦邦汽车销售服务(北京)有限公司 Method and system for determining vehicle repair points based on machine learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127747A (en) * 2016-06-17 2016-11-16 史方 Car surface damage classifying method and device based on degree of depth study
CN107247930A (en) * 2017-05-26 2017-10-13 西安电子科技大学 SAR image object detection method based on CNN and Selective Attention Mechanism
CN107358596A (en) * 2017-04-11 2017-11-17 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device, electronic equipment and system
CN107463919A (en) * 2017-08-18 2017-12-12 深圳市唯特视科技有限公司 A kind of method that human facial expression recognition is carried out based on depth 3D convolutional neural networks

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180284735A1 (en) * 2016-05-09 2018-10-04 StrongForce IoT Portfolio 2016, LLC Methods and systems for industrial internet of things data collection in a network sensitive upstream oil and gas environment
TWI753034B (en) * 2017-03-31 2022-01-21 香港商阿里巴巴集團服務有限公司 Method, device and electronic device for generating and searching feature vector
CN108629309A (en) * 2018-04-28 2018-10-09 成都睿码科技有限责任公司 Foundation pit surrounding people's method for protecting
CN108564555B (en) * 2018-05-11 2021-09-21 中北大学 NSST and CNN-based digital image noise reduction method


Also Published As

Publication number Publication date
CN109657716A (en) 2019-04-19

Similar Documents

Publication Publication Date Title
CN109657716B (en) Vehicle appearance damage identification method based on deep learning
US11144786B2 (en) Information processing apparatus, method for controlling information processing apparatus, and storage medium
US11127200B1 (en) Photo deformation techniques for vehicle repair analysis
CN109816024B (en) Real-time vehicle logo detection method based on multi-scale feature fusion and DCNN
CN108229509B (en) Method and device for identifying object class and electronic equipment
CN104700099B (en) The method and apparatus for recognizing traffic sign
CN108898047B (en) Pedestrian detection method and system based on blocking and shielding perception
CN109087510B (en) Traffic monitoring method and device
CN109882019B (en) Automobile electric tail door opening method based on target detection and motion recognition
CN111507989A (en) Training generation method of semantic segmentation model, and vehicle appearance detection method and device
CN110866430B (en) License plate recognition method and device
CN105574550A (en) Vehicle identification method and device
CN108960124B (en) Image processing method and device for pedestrian re-identification
CN112016605B (en) Target detection method based on corner alignment and boundary matching of bounding box
CN113033604A (en) Vehicle detection method, system and storage medium based on SF-YOLOv4 network model
CN111274926B (en) Image data screening method, device, computer equipment and storage medium
CN112329881B (en) License plate recognition model training method, license plate recognition method and device
CN112613344B (en) Vehicle track occupation detection method, device, computer equipment and readable storage medium
CN112287896A (en) Unmanned aerial vehicle aerial image target detection method and system based on deep learning
CN111860652B (en) Method, device, equipment and medium for measuring animal body weight based on image detection
CN104615986A (en) Method for utilizing multiple detectors to conduct pedestrian detection on video images of scene change
CN110889421A (en) Target detection method and device
CN113822247A (en) Method and system for identifying illegal building based on aerial image
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN112307840A (en) Indicator light detection method, device, equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 12-17, Block B1, New City Center, No. 3 Wanhua Road, Zhongbei Town, Xiqing District, Tianjin

Applicant after: Sinotruk data (Tianjin) Co.,Ltd.

Address before: 300393 room B1, room B1, new city center, No. 3, Wan Hui Road, Beizhen, Xiqing District, Tianjin

Applicant before: TIANJIN KADAKE DATA Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20201208

Address after: Room 12-17, Block B1, New City Center, No. 3 Wanhua Road, Zhongbei Town, Xiqing District, Tianjin

Applicant after: Sinotruk data (Tianjin) Co.,Ltd.

Applicant after: CHINA AUTOMOTIVE TECHNOLOGY AND RESEARCH CENTER Co.,Ltd.

Address before: Room 12-17, Block B1, New City Center, No. 3 Wanhua Road, Zhongbei Town, Xiqing District, Tianjin

Applicant before: Sinotruk data (Tianjin) Co.,Ltd.

GR01 Patent grant