WO2021218020A1 - Vehicle damage picture processing method and apparatus, computer device, and storage medium - Google Patents

Vehicle damage picture processing method and apparatus, computer device, and storage medium

Info

Publication number
WO2021218020A1
WO2021218020A1 (PCT/CN2020/118063, CN2020118063W)
Authority
WO
WIPO (PCT)
Prior art keywords
damage
data
detection frame
intensity
detection
Prior art date
Application number
PCT/CN2020/118063
Other languages
English (en)
French (fr)
Inventor
刘莉红
刘玉宇
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021218020A1 publication Critical patent/WO2021218020A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Definitions

  • This application relates to the field of image recognition, and in particular to a method, device, computer equipment and storage medium for processing vehicle damage pictures.
  • The other existing method uses an object detection algorithm to detect damage pictures, establishes the correlation between damage shapes and damage pictures by manual annotation, and then builds a prediction model of the damage shape. Due to the diversity of damage shapes and the complexity of actual application scenarios, the image processing results for damage shapes face a bottleneck in application. For example, this method cannot accurately delimit the damaged area, which affects the accurate determination of the repair plan. For another example, the confidence distributions of detection frames differ significantly across the damage types used in this method. The inventors realized that although the image processing result can be optimized through later manual intervention, the poor robustness of its parameters (such as thresholds) and discrimination rules leads to low accuracy of the image processing result.
  • a method for processing vehicle damage pictures including:
  • the damage intensity graphic data is input into a preset damage intensity classification model for processing, and the processing result output by the preset damage intensity classification model is obtained.
  • a vehicle damage image processing device including:
  • the detection module is used to process the damage image of the vehicle through a preset detector to obtain the damage detection result of the damage image of the vehicle;
  • the graphics module is used to generate damage intensity graphic data according to the damage detection result;
  • the model processing module is configured to input the damage intensity graphic data into a preset damage intensity classification model for processing, and obtain a processing result output by the preset damage intensity classification model.
  • a computer device includes a memory, a processor, and computer-readable instructions stored in the memory and capable of running on the processor.
  • the processor implements the following steps when executing the computer-readable instructions: processing the damage image of the vehicle through a preset detector to obtain the damage detection result of the damage image of the vehicle;
  • the damage intensity graphic data is input into a preset damage intensity classification model for processing, and the processing result output by the preset damage intensity classification model is obtained.
  • One or more readable storage media storing computer readable instructions, when the computer readable instructions are executed by one or more processors, the one or more processors execute the following steps:
  • the damage intensity graphic data is input into a preset damage intensity classification model for processing, and the processing result output by the preset damage intensity classification model is obtained.
  • the aforementioned vehicle damage picture processing method, device, computer equipment and storage medium process the vehicle damage picture through a preset detector to obtain the damage detection result of the vehicle damage picture, yielding the detection results of multiple detection frames that characterize the degree of vehicle damage; generate damage intensity graphic data according to the damage detection result, converting the discrete detection-frame results into a continuous intensity distribution and improving the robustness of the detection result; and input the damage intensity graphic data into a preset damage intensity classification model for processing, obtaining the processing result output by the preset damage intensity classification model, so that a neural network model is applied once more to the damage intensity graphic data to obtain the optimal repair plan.
  • the present application can improve the processing capability of vehicle damage pictures, enhance the robustness of image processing, and at the same time improve the accuracy of image processing results.
  • FIG. 1 is a schematic diagram of an application environment of a method for processing a vehicle damage image in an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for processing damage images of a vehicle in an embodiment of the present application
  • FIG. 3 is an unprocessed vehicle damage picture and a vehicle damage picture processed by a preset detector in an embodiment of the present application
  • FIG. 4 is a schematic flowchart of a method for processing damage images of a vehicle in an embodiment of the present application
  • FIG. 5 is a schematic flowchart of a method for processing damage images of a vehicle in an embodiment of the present application
  • FIG. 6 is a schematic flowchart of a method for processing damage images of a vehicle in an embodiment of the present application
  • FIG. 7 is an image formed after the damage intensity graphic data is superimposed on the original picture and the image directly converted from the damage intensity graphic data in an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a structure of a vehicle damage image processing device in an embodiment of the present application.
  • Fig. 9 is a schematic diagram of a computer device in an embodiment of the present application.
  • the vehicle damage image processing method provided in this embodiment can be applied in an application environment as shown in FIG. 1, in which the client communicates with the server.
  • the client includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
  • the server can be implemented with an independent server or a server cluster composed of multiple servers.
  • a method for processing a vehicle damage image is provided. Taking the method applied to the server in FIG. 1 as an example for description, the method includes the following steps:
  • S10 Process the damage image of the vehicle by a preset detector to obtain a damage detection result of the damage image of the vehicle.
  • the preset detector can be obtained by training using an existing detection algorithm based on deep learning.
  • the detection algorithm can use faster R-CNN (a fast image detection algorithm), SSD+FPN (Single Shot MultiBox Detector, Feature Pyramid Networks), etc.
  • the preset detector can be trained using an annotated damage shape detection training set.
  • the preset detector can be trained in the following way: 1. initialize the model parameters (these differ with the detection algorithm); 2. take a mini-batch of samples, perform forward propagation through the deep neural network of the detection algorithm, and compute the loss value of the loss function using the shape labels of the damage shape detection training set; 3. perform backpropagation based on the computed loss value to obtain the gradient of each network parameter, then update the model parameters according to the stochastic gradient descent algorithm and the learning rate obtained by tuning; 4. repeat the above process, iteratively updating the model parameters until the loss value computed by the loss function meets the preset convergence condition. A sketch of this loop is given below.
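  • A minimal sketch of that training loop, assuming a torchvision-style detector that returns a dict of losses in training mode; the model, dataset wrapper and hyper-parameters are illustrative assumptions rather than values from the application.

```python
import torch
from torch.utils.data import DataLoader

def train_detector(model, dataset, epochs=20, lr=0.005, batch_size=4, device="cuda"):
    # Step 1: model parameters are assumed to be initialized by the detector itself.
    model.to(device).train()
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True,
                        collate_fn=lambda batch: tuple(zip(*batch)))
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    for epoch in range(epochs):
        for images, targets in loader:                    # Step 2: mini-batch forward pass
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)            # loss computed against the shape labels
            loss = sum(loss_dict.values())

            optimizer.zero_grad()
            loss.backward()                               # Step 3: backpropagation
            optimizer.step()                              # SGD parameter update
        # Step 4: in practice, stop once the loss meets the preset convergence condition.
    return model
```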
  • the detection effects of multiple detectors can be evaluated through the mAP (mean Average Precision) algorithm, and the detector with the highest mAP value is finally determined as the preset detector.
  • These detectors are detection models saved under different iteration times, and their loss values all meet the preset convergence conditions. In general, the detector with the highest mAP value has the best detection performance.
  • the damage detection result of the damage image can be obtained.
  • the damage detection result may include multiple detection frame data.
  • the detection frame data includes the size of the detection frame, the position of the center point, and the confidence level.
  • Fig. 3-a is an unprocessed vehicle damage picture
  • Fig. 3-b is a vehicle damage picture processed by a preset detector.
  • the damage detection result includes multiple detection frame data.
  • the detection frame data can be expressed in many forms, such as (x_min, x_max, y_min, y_max, score), (x_min, y_min, x_max, y_max, score) or (x_min, y_min, w, h, score), where score is the confidence level of the detection frame. A small conversion helper is sketched below.
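  • For example, an illustrative helper (not from the application) converting between two of the listed encodings:

```python
def to_xywh(box):
    """Convert (x_min, y_min, x_max, y_max, score) to (x_min, y_min, w, h, score)."""
    x_min, y_min, x_max, y_max, score = box
    return (x_min, y_min, x_max - x_min, y_max - y_min, score)

print(to_xywh((40.0, 60.0, 220.0, 180.0, 0.87)))  # -> (40.0, 60.0, 180.0, 120.0, 0.87)
```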
  • the data of each detection frame can be converted into damage intensity data of the detection frame.
  • the damage intensity graphic data may be the result of superimposing damage intensity data of multiple detection frames.
  • Converting damage detection results of different shapes into damage intensity graphic data overcomes the shortcoming that the original damage detection results have low accuracy: the results are converted into a probabilistic expression of the damage intensity over the car-body picture, and the discrete detection-frame results are turned into a continuous intensity distribution, improving the robustness of the detection results.
  • step S20 includes:
  • S203: Generate the damage intensity graphic data from the multiple pieces of detection frame damage intensity data.
  • the probability density of a coordinate point within a single detection frame can be calculated by the following Gaussian formula (the formula itself appears only as an image in the original publication), in which:
  • (x+δx, y+δy) is the coordinate of a point in the detection frame, and f(x+δx, y+δy) is the intensity value of the point at (x+δx, y+δy);
  • σ is the first adjustment parameter, which adjusts the center-focus distribution of the detection frame damage intensity data;
  • s is the central intensity of the detection frame, which can be set according to actual needs;
  • W is the second adjustment coefficient, whose value is related to the width of the detection frame; H is the third adjustment coefficient, whose value is related to the height of the detection frame;
  • x_max, x_min, y_max and y_min are the right, left, upper and lower boundaries of the detection frame respectively, and w and h are the width and height of the detection frame.
  • through the Gaussian distribution model, the detection frame data can be converted into detection frame damage intensity data.
  • the damage intensity graphic data may be the result of superimposing the damage intensity data of multiple detection frames.
  • F_n(x, y) is the damage intensity graphic data generated based on the damage detection result;
  • W_img is the width of the vehicle damage picture and H_img is the height of the vehicle damage picture;
  • CN_n is the set of center points of the detection frames contained in the n pieces of detection frame data, and f_i is the Gaussian function corresponding to the i-th piece of detection frame data;
  • i is the index of the detection frame data, with values in [1, n];
  • (x_i, y_i) is the coordinate of the center point of the detection frame contained in the i-th piece of detection frame data;
  • one piece of detection frame data corresponds to one damage shape.
  • D is the data matrix (n × W × H) formed by stacking the multiple channels of data, and R denotes the real number space. A sketch of this conversion is given below.
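  • The exact Gaussian used in the application is shown only as an image in the source, so the sketch below assumes a common isotropic form, f = s * exp(-((δx/W)^2 + (δy/H)^2) / (2σ^2)), with the central intensity s taken from the detection confidence; it is meant only to illustrate how the per-frame Gaussians are superimposed into the per-class channels of D.

```python
import numpy as np

def box_gaussian(img_w, img_h, box, sigma=0.33):
    """Damage intensity of one detection frame (x_min, y_min, x_max, y_max, score).

    Assumed form: f = s * exp(-((dx/W)**2 + (dy/H)**2) / (2*sigma**2)),
    with W = 1.5*w, H = 1.5*h and s derived from the confidence (illustrative).
    """
    x_min, y_min, x_max, y_max, score = box
    w, h = x_max - x_min, y_max - y_min
    W, H = 1.5 * w, 1.5 * h                                  # second / third adjustment coefficients
    xc, yc = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0    # detection frame center
    s = score                                                # illustrative choice of central intensity
    ys, xs = np.mgrid[0:img_h, 0:img_w]
    dx, dy = xs - xc, ys - yc
    return s * np.exp(-((dx / W) ** 2 + (dy / H) ** 2) / (2 * sigma ** 2))

def intensity_channels(img_w, img_h, boxes_by_class):
    """Stack one superimposed channel per damage class into the n x H_img x W_img matrix D."""
    channels = []
    for boxes in boxes_by_class.values():
        channel = np.zeros((img_h, img_w))
        for box in boxes:
            channel += box_gaussian(img_w, img_h, box)       # superimpose per-frame intensity data
        channels.append(channel)
    return np.stack(channels)
```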
  • step S20 includes:
  • S204: Adjust the center-focus distribution of the detection frame damage intensity data through the first adjustment coefficient;
  • S205: Set the central intensity of the detection frame according to the confidence of the detection frame data and the first adjustment coefficient.
  • the first adjustment coefficient can be denoted by σ and is used to adjust the center-focus distribution of the detection frame damage intensity data.
  • the preferred value range of σ is 0.3 to 0.35.
  • the damage intensity graphic data generated within this preferred range matches the damage degree distribution of the picture inside the detection frame, so the accuracy of the finally obtained processing result is also high.
  • most preferably, σ = 0.33.
  • the second adjustment coefficient can be denoted by W and the third adjustment coefficient by H; when σ = 0.33, W = 1.5 × w and H = 1.5 × h.
  • the central intensity of the detection frame can be denoted by s; in one embodiment it is set from the confidence of the detection frame data and the first adjustment coefficient (the specific formula appears only as an image in the original publication).
  • after step S20, the method further includes:
  • a visual image can be generated according to the obtained damage intensity graphic data.
  • the visualized image can be an image directly converted from damage intensity graphic data, as shown in Figure 7-b; it can also be an image formed by superimposing the original image on damage intensity graphic data, as shown in Figure 7-a.
  • the designated terminal can be computer equipment used by algorithm engineers or surveyors.
  • the algorithm engineer can compare the visualized image with the original image, judge whether the generated visualized image correctly reflects the damaged area and degree of damage of the car body, and then decide, based on that judgment, whether the setting parameters of the preset detector or the Gaussian distribution model need to be adjusted.
  • for the surveyor, the damage to the car body can be evaluated based on the visualized image, avoiding false alarms and missed detections.
  • the preset damage intensity classification model may be a classification model obtained by training on a training set containing maintenance plan annotations. A data set containing maintenance plan annotations can be constructed; in this data set, the maintenance plan annotations include, but are not limited to, spray (repaint), repair and replace. These annotations can be produced manually by assessment experts on the original damage pictures corresponding to the samples, and each sample also contains the damage intensity graphic data generated from that damage picture.
  • the data set containing maintenance plan annotations can be divided into three subsets in a certain proportion, namely a training set, a validation set and a test set; the division ratio can be set to 10:1:1. The three sets are used to train, validate and test the damage intensity classification model respectively, finally obtaining a preset damage intensity classification model that meets the preset requirements.
  • the preset damage intensity classification model can use a general object classification model based on deep neural networks, such as ResNet (Residual Network) or a VGG network (Visual Geometry Group Network). A sketch of this training pipeline is given below.
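  • A sketch of that pipeline: the 10:1:1 split and the ResNet backbone come from the text, while the label encoding, dataset wrapper and hyper-parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision.models import resnet18

PLANS = ["spray", "repair", "replace"]           # maintenance plan annotations

def split_dataset(dataset):
    # 10:1:1 split into training, validation and test sets
    n = len(dataset)
    n_val = n_test = n // 12
    return random_split(dataset, [n - n_val - n_test, n_val, n_test])

def build_classifier(num_channels):
    # ResNet backbone with the first convolution adapted to the n-channel intensity map input
    model = resnet18(num_classes=len(PLANS))
    model.conv1 = nn.Conv2d(num_channels, 64, kernel_size=7, stride=2, padding=3, bias=False)
    return model

def train(model, train_set, epochs=10, lr=1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()      # y is the index of the annotated plan in PLANS
            optimizer.step()
    # validation and test passes are omitted; keep the checkpoint with the best validation accuracy
    return model
```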
  • all the above-mentioned data can also be stored in a node of a blockchain.
  • the processing results and image data output by the preset damage intensity classification model can all be stored in the blockchain node.
  • a blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of that information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • This solution can be applied in the field of smart transportation to promote the construction of smart cities.
  • the processing result output by the preset damage intensity classification model may include a repair plan for the damage picture, and may also include rating data used to evaluate the damage degree of the damage picture.
  • a vehicle damage image processing device is provided, and the vehicle damage image processing device corresponds to the vehicle damage image processing method in the above-mentioned embodiment in a one-to-one correspondence.
  • the vehicle damage image processing device includes a detection module 10, a graphics module 20 and a model processing module 30.
  • the detailed description of each functional module is as follows:
  • the detection module 10 is configured to process the damage image of the vehicle through a preset detector to obtain the damage detection result of the damage image of the vehicle;
  • the graphics module 20 is used to generate damage intensity graphic data according to the damage detection result
  • the model processing module 30 is configured to input the damage intensity graphic data into a preset damage intensity classification model for processing, and obtain a processing result output by the preset damage intensity classification model.
  • the graphical module 20 includes:
  • a detection frame data acquiring unit which is used to acquire the detection frame data in the damage detection result
  • the generating intensity graphic data unit is used to generate the damage intensity graphic data according to the multiple damage intensity data of the detection frame.
  • the detection frame intensity data generating unit includes:
  • a distribution adjusting subunit, used to adjust the center-focus distribution of the detection frame damage intensity data through the first adjustment coefficient;
  • a central intensity setting subunit, used to set the central intensity of the detection frame according to the confidence of the detection frame data and the first adjustment coefficient.
  • the intensity graphic data generating unit generates the damage intensity graphic data by using the following formula (shown as an image in the original publication), in which:
  • F_n(x, y) is the damage intensity graphic data generated based on the damage detection result; W_img and H_img are the width and height of the vehicle damage picture;
  • CN_n is the set of center points of the detection frames contained in the n pieces of detection frame data; f_i is the Gaussian function corresponding to the i-th piece of detection frame data;
  • i is the index of the detection frame data, with values in [1, n]; (x_i, y_i) is the coordinate of the center point of the detection frame contained in the i-th piece of detection frame data.
  • the vehicle damage image processing device further includes:
  • the visualization module is used to generate a visualization image according to the damage intensity graphic data
  • the image sending module is used to send the visual image to a designated terminal.
  • each module in the above vehicle damage image processing device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 9.
  • the computer equipment includes a processor, a memory, a network interface, and a database connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a readable storage medium and an internal memory.
  • the readable storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer readable instructions in the readable storage medium.
  • the database of the computer equipment is used to store the data involved in the above-mentioned vehicle damage image processing method.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection. When the computer readable instruction is executed by the processor, a method for processing a vehicle damage image is realized.
  • the readable storage medium provided in this embodiment includes a non-volatile readable storage medium and a volatile readable storage medium.
  • a computer device including a memory, a processor, and computer-readable instructions stored on the memory and capable of running on the processor, and the processor implements the following steps when the processor executes the computer-readable instructions:
  • the damage intensity graphic data is input into a preset damage intensity classification model for processing, and the processing result output by the preset damage intensity classification model is obtained.
  • one or more readable storage media storing computer readable instructions are provided.
  • the readable storage media provided in this embodiment include non-volatile readable storage media and volatile readable storage media; the readable storage medium stores computer readable instructions, and when the computer readable instructions are executed by one or more processors, the one or more processors implement the following steps:
  • the damage intensity graphic data is input into a preset damage intensity classification model for processing, and the processing result output by the preset damage intensity classification model is obtained.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to the field of image recognition in artificial intelligence and discloses a vehicle damage picture processing method, apparatus, computer device and storage medium. The method includes: processing a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture; generating damage intensity graphic data according to the damage detection result; and inputting the damage intensity graphic data into a preset damage intensity classification model for processing to obtain the processing result output by the preset damage intensity classification model. This application can improve the ability to process vehicle damage pictures, enhance the robustness of image processing, and at the same time improve the accuracy of image processing results. The application also involves blockchain technology, and it can be applied in the field of smart transportation to promote the construction of smart cities.

Description

Vehicle damage picture processing method and apparatus, computer device, and storage medium
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on April 29, 2020 under application number 202010357524.1, entitled "Vehicle damage picture processing method and apparatus, computer device, and storage medium", the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the field of image recognition, and in particular to a vehicle damage picture processing method, apparatus, computer device and storage medium.
Background
With the development of artificial intelligence, image-based intelligent analysis methods have become widespread in vehicle damage assessment. There are two main existing intelligent analysis methods. The first detects damage pictures with an object detection algorithm, establishes the association between repair plans and damage pictures through manual annotation, and then builds a prediction model for the repair plan. This intelligent analysis method is efficient, but because it does not learn to distinguish damage shapes, the model's robustness and generalization ability are poor.
The second detects damage pictures with an object detection algorithm, establishes the association between damage shapes and damage pictures through manual annotation, and then builds a prediction model for the damage shape. Owing to the diversity of damage shapes and the complexity of real application scenarios, the image processing results for damage shapes face a bottleneck in application. For example, this method cannot accurately delimit the damaged area, which affects the accurate determination of the repair plan. For another example, the confidence distributions of detection frames differ significantly across damage types in this method. The inventors realized that although the image processing results can be optimized through later manual intervention, the poor robustness of its parameters (such as thresholds) and discrimination rules leads to low accuracy of the image processing results.
Summary
On this basis, it is necessary, in view of the above technical problems, to provide a vehicle damage picture processing method, apparatus, computer device and storage medium that improve the ability to process vehicle damage pictures, enhance the robustness of image processing, and at the same time improve the accuracy of image processing results.
A vehicle damage picture processing method, including:
processing a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture;
generating damage intensity graphic data according to the damage detection result; and
inputting the damage intensity graphic data into a preset damage intensity classification model for processing, and obtaining a processing result output by the preset damage intensity classification model.
A vehicle damage picture processing apparatus, including:
a detection module, configured to process a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture;
a graphics module, configured to generate damage intensity graphic data according to the damage detection result; and
a model processing module, configured to input the damage intensity graphic data into a preset damage intensity classification model for processing and obtain a processing result output by the preset damage intensity classification model.
A computer device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer-readable instructions: processing a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture;
generating damage intensity graphic data according to the damage detection result; and
inputting the damage intensity graphic data into a preset damage intensity classification model for processing, and obtaining a processing result output by the preset damage intensity classification model.
One or more readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
processing a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture;
generating damage intensity graphic data according to the damage detection result; and
inputting the damage intensity graphic data into a preset damage intensity classification model for processing, and obtaining a processing result output by the preset damage intensity classification model.
In the above vehicle damage picture processing method, apparatus, computer device and storage medium, the vehicle damage picture is processed through a preset detector to obtain the damage detection result of the vehicle damage picture, yielding the detection results of multiple detection frames that characterize the degree of vehicle damage; damage intensity graphic data is generated according to the damage detection result, converting the discrete detection-frame results into a continuous intensity distribution and improving the robustness of the detection result; and the damage intensity graphic data is input into a preset damage intensity classification model for processing and the processing result output by the model is obtained, so that a neural network model is applied once more to the damage intensity graphic data to obtain the optimal repair plan. This application can improve the ability to process vehicle damage pictures, enhance the robustness of image processing, and at the same time improve the accuracy of image processing results.
The details of one or more embodiments of this application are set forth in the drawings and the description below; other features and advantages of this application will become apparent from the specification, the drawings and the claims.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of an application environment of a vehicle damage picture processing method in an embodiment of this application;
FIG. 2 is a schematic flowchart of a vehicle damage picture processing method in an embodiment of this application;
FIG. 3 shows an unprocessed vehicle damage picture and the vehicle damage picture processed by a preset detector in an embodiment of this application;
FIG. 4 is a schematic flowchart of a vehicle damage picture processing method in an embodiment of this application;
FIG. 5 is a schematic flowchart of a vehicle damage picture processing method in an embodiment of this application;
FIG. 6 is a schematic flowchart of a vehicle damage picture processing method in an embodiment of this application;
FIG. 7 shows an image formed by superimposing the damage intensity graphic data on the original picture and an image converted directly from the damage intensity graphic data in an embodiment of this application;
FIG. 8 is a schematic structural diagram of a vehicle damage picture processing apparatus in an embodiment of this application;
FIG. 9 is a schematic diagram of a computer device in an embodiment of this application.
Detailed Description of the Embodiments
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this application.
The vehicle damage picture processing method provided in this embodiment can be applied in the application environment shown in FIG. 1, in which a client communicates with a server. The client includes, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices. The server can be implemented as an independent server or a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, a vehicle damage picture processing method is provided. Taking the application of the method to the server in FIG. 1 as an example, the method includes the following steps:
S10: Process a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture.
In this embodiment, the preset detector can be obtained by training with an existing detection algorithm based on deep learning. The detection algorithm can be faster R-CNN (a fast image detection algorithm), SSD+FPN (Single Shot MultiBox Detector + Feature Pyramid Networks), and the like. The preset detector can be trained with an annotated damage shape detection training set.
The preset detector can be trained as follows: 1. initialize the model parameters (these differ with the detection algorithm); 2. take a mini-batch of samples, perform forward propagation through the deep neural network of the detection algorithm, and compute the loss value of the loss function using the shape labels of the damage shape detection training set; 3. perform backpropagation based on the computed loss value to obtain the gradient of each network parameter, and then update the model parameters according to the stochastic gradient descent algorithm and the learning rate obtained by tuning; 4. repeat the above process, iteratively updating the model parameters until the loss value computed by the loss function meets the preset convergence condition.
Here, the detection performance of multiple detectors can be evaluated with the mAP (mean Average Precision) metric, and the detector with the highest mAP value is finally selected as the preset detector. These detectors are detection models saved at different iteration counts, all of whose loss values meet the preset convergence condition. In general, the detector with the highest mAP value also has the best detection performance. A sketch of this selection is given below.
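A sketch of the checkpoint selection, where `evaluate_map` is a hypothetical helper that computes the mAP of a saved checkpoint on a validation set and the file layout is illustrative:

```python
from pathlib import Path

def select_best_detector(checkpoint_dir, val_set, evaluate_map):
    """Among converged checkpoints saved at different iteration counts,
    pick the one with the highest mAP on the validation set."""
    best_path, best_map = None, -1.0
    for ckpt in sorted(Path(checkpoint_dir).glob("*.pth")):
        score = evaluate_map(ckpt, val_set)      # mean Average Precision of this checkpoint
        if score > best_map:
            best_path, best_map = ckpt, score
    return best_path, best_map
```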
After the preset detector processes the damage picture, the damage detection result of the damage picture can be obtained. The damage detection result may include multiple pieces of detection frame data. The detection frame data includes the size of the detection frame, the position of its center point, and the confidence.
In the example of FIG. 3, FIG. 3-a is an unprocessed vehicle damage picture and FIG. 3-b is the vehicle damage picture processed by the preset detector.
S20: Generate damage intensity graphic data according to the damage detection result.
In this embodiment, the damage detection result includes multiple pieces of detection frame data. The detection frame data can be expressed in many forms, such as (x_min, x_max, y_min, y_max, score), (x_min, y_min, x_max, y_max, score) or (x_min, y_min, w, h, score), where score is the confidence of the detection frame.
Each piece of detection frame data can be converted into detection frame damage intensity data. The damage intensity graphic data may be the result of superimposing the damage intensity data of multiple detection frames.
Converting damage detection results of different shapes into damage intensity graphic data overcomes the low accuracy of the original damage detection results: they are converted into a probabilistic expression of the damage intensity over the car-body picture, and the discrete detection-frame results are turned into a continuous intensity distribution, improving the robustness of the detection results.
Optionally, as shown in FIG. 4, step S20 includes:
S201: Acquire the detection frame data in the damage detection result;
S202: Establish a Gaussian distribution model, and process the detection frame data into detection frame damage intensity data based on the Gaussian distribution model;
S203: Generate the damage intensity graphic data from the multiple pieces of detection frame damage intensity data.
In this embodiment, in the Gaussian distribution model, the probability density of a coordinate point within a single detection frame can be calculated by the following formula:
[formula image PCTCN2020118063-appb-000001 in the original: the Gaussian intensity function f(x+δx, y+δy)]
where (x, y) is the center point of the detection frame, that is:
[formula images PCTCN2020118063-appb-000002 to 000005 in the original: the definitions of the center point (x, y), W, H, w and h in terms of x_min, x_max, y_min and y_max]
In the above formula, (x+δx, y+δy) is the coordinate of a point in the detection frame, and f(x+δx, y+δy) is the intensity value of the point at (x+δx, y+δy). σ is the first adjustment parameter and adjusts the center-focus distribution of the detection frame damage intensity data.
s is the central intensity of the detection frame and can be set according to actual needs. W is the second adjustment coefficient, whose value is related to the width of the detection frame. H is the third adjustment coefficient, whose value is related to the height of the detection frame. x_max, x_min, y_max and y_min are the right, left, upper and lower boundaries of the detection frame respectively, and w and h are the width and height of the detection frame.
Through the Gaussian distribution model, the detection frame data can be converted into detection frame damage intensity data. The damage intensity graphic data may be the result of superimposing the damage intensity data of multiple detection frames.
Taking an original image of size W_img × H_img as an example, the preset detector detects n classes of damage shapes; denote the set of detected detection frames by N and the set of detection frame center points by CN. The detection result of each class of damage shape is converted as follows:
[formula image PCTCN2020118063-appb-000006 in the original: the definition of F_n(x, y)]
δx = x - x_i, δy = y - y_i, (x_i, y_i) ∈ CN_n
F_n(x, y) is the damage intensity graphic data generated from the damage detection result, W_img is the width of the vehicle damage picture, H_img is the height of the vehicle damage picture, CN_n is the set of center points of the detection frames contained in the n pieces of detection frame data, f_i is the Gaussian function corresponding to the i-th piece of detection frame data, i is the index of the detection frame data with values in [1, n], and (x_i, y_i) is the coordinate of the center point of the detection frame contained in the i-th piece of detection frame data. One piece of detection frame data corresponds to one damage shape. Through the above calculation, one channel of the damage intensity map is obtained for each damage shape, and the damage intensity map is finally represented as
[formula image PCTCN2020118063-appb-000007 in the original: the stacked representation D of the per-class channels]
D is the data matrix (n × W × H) formed by stacking the multiple channels of data, and R denotes the real number space.
Optionally, as shown in FIG. 5, step S20 includes:
S204: Adjust the center-focus distribution of the detection frame damage intensity data through a first adjustment coefficient;
S205: Set the central intensity of the detection frame according to the confidence of the detection frame data and the first adjustment coefficient.
In this embodiment, the first adjustment coefficient can be denoted by σ and is used to adjust the center-focus distribution of the detection frame damage intensity data. Practical experiments show that the preferred value range of σ is 0.3 to 0.35. The damage intensity graphic data generated in this preferred range matches the damage degree distribution of the picture within the detection frame well, so the accuracy of the final processing result is also high. Most preferably, σ = 0.33. The second adjustment coefficient can be denoted by W and the third adjustment coefficient by H. When σ = 0.33, W = 1.5 × w and H = 1.5 × h.
The central intensity of the detection frame can be denoted by s. In one embodiment,
[formula image PCTCN2020118063-appb-000008 in the original: the setting of s from the confidence of the detection frame data and the first adjustment coefficient]
Optionally, as shown in FIG. 6, after step S20 the method includes:
S21: Generate a visualized image according to the damage intensity graphic data;
S22: Send the visualized image to a designated terminal.
In this embodiment, a visualized image can be generated from the obtained damage intensity graphic data. The visualized image can be an image converted directly from the damage intensity graphic data, as in the example of FIG. 7-b, or an image formed by superimposing the damage intensity graphic data on the original picture, as in the example of FIG. 7-a.
The designated terminal can be a computer device used by an algorithm engineer or a surveyor. After receiving the visualized image, the algorithm engineer can compare it with the original image, judge whether the generated visualized image correctly reflects the damaged area and degree of damage of the car body, and then decide, based on that judgment, whether the setting parameters of the preset detector or the Gaussian distribution model need to be adjusted. The surveyor can evaluate the damage to the car body based on the visualized image, avoiding false alarms and missed detections.
Existing computer vision algorithms often output multiple boxes for an input picture, and although a non-maximum suppression algorithm filters out most highly overlapping predicted detection frames, many overlapping detection frames still remain. As a result, visualization often has to select only part of them by confidence ranking, so some detection results are not displayed. In particular, because of the nature of car-body damage, compound damage is common and a damaged area is prone to multiple overlapping detection frames; these overlapping frames easily interfere with human judgment and make visual assessment of damage images difficult. Converting the detection frames into a damage intensity map greatly alleviates both problems and provides higher-quality visualized images, as sketched below.
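A sketch of such a visualization, assuming OpenCV is available and that an n × H × W intensity map (such as the matrix D described above) has already been computed; the colormap and blending weight are illustrative choices:

```python
import cv2
import numpy as np

def visualize_intensity(image_bgr, intensity_map, alpha=0.5):
    """Produce a FIG. 7-b style heatmap and a FIG. 7-a style overlay on the original picture.

    image_bgr:     original vehicle damage picture, H x W x 3, uint8
    intensity_map: per-class intensity channels, n x H x W, float
    """
    combined = intensity_map.max(axis=0)                               # strongest class per pixel
    norm = (255 * combined / max(float(combined.max()), 1e-6)).astype(np.uint8)
    heatmap = cv2.applyColorMap(norm, cv2.COLORMAP_JET)                # image converted directly from the data
    overlay = cv2.addWeighted(image_bgr, 1 - alpha, heatmap, alpha, 0)  # superimposed on the original picture
    return heatmap, overlay
```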
S30: Input the damage intensity graphic data into a preset damage intensity classification model for processing, and obtain the processing result output by the preset damage intensity classification model.
The preset damage intensity classification model may be a classification model obtained by training on a training set containing maintenance plan annotations. A data set containing maintenance plan annotations can be constructed; in this data set, the maintenance plan annotations include, but are not limited to, spray (repaint), repair and replace. These annotations can be produced manually by assessment experts on the original damage pictures corresponding to the samples, and each sample also contains the damage intensity graphic data generated from that damage picture. The data set containing maintenance plan annotations can be divided into three subsets in a certain proportion, namely a training set, a validation set and a test set; the ratio can be set to 10:1:1. The three sets are used to train, validate and test the damage intensity classification model respectively, finally yielding a preset damage intensity classification model that meets the preset requirements.
Here, the preset damage intensity classification model can use a general object classification model based on deep neural networks, such as ResNet (Residual Network) or a VGG network (Visual Geometry Group Network).
Preferably, to further ensure the privacy and security of all of the above data, all of the above data can also be stored in a node of a blockchain. For example, the processing results and image data output by the preset damage intensity classification model can all be stored in blockchain nodes.
It should be noted that the blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of that information (anti-counterfeiting) and to generate the next block. A blockchain can include an underlying blockchain platform, a platform product service layer and an application service layer.
This solution can be applied in the field of smart transportation to promote the construction of smart cities.
The processing result output by the preset damage intensity classification model may include a repair plan for the damage picture, and may also include rating data used to evaluate the degree of damage in the damage picture.
It should be understood that the numbering of the steps in the above embodiments does not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
In one embodiment, a vehicle damage picture processing apparatus is provided, and the apparatus corresponds one-to-one to the vehicle damage picture processing method in the above embodiments. As shown in FIG. 8, the apparatus includes a detection module 10, a graphics module 20 and a model processing module 30. The functional modules are described in detail as follows:
the detection module 10, configured to process a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture;
the graphics module 20, configured to generate damage intensity graphic data according to the damage detection result; and
the model processing module 30, configured to input the damage intensity graphic data into a preset damage intensity classification model for processing and obtain a processing result output by the preset damage intensity classification model.
Optionally, the graphics module 20 includes:
a detection frame data acquiring unit, configured to acquire the detection frame data in the damage detection result;
a detection frame intensity data generating unit, configured to establish a Gaussian distribution model and process the detection frame data into detection frame damage intensity data based on the Gaussian distribution model; and
an intensity graphic data generating unit, configured to generate the damage intensity graphic data from the multiple pieces of detection frame damage intensity data.
Optionally, the detection frame intensity data generating unit includes:
a distribution adjusting subunit, configured to adjust the center-focus distribution of the detection frame damage intensity data through a first adjustment coefficient; and
a central intensity setting subunit, configured to set the central intensity of the detection frame according to the confidence of the detection frame data and the first adjustment coefficient.
Optionally, the intensity graphic data generating unit generates the damage intensity graphic data by the following formula:
[formula image PCTCN2020118063-appb-000009 in the original: the definition of F_n(x, y)]
δx = x - x_i, δy = y - y_i, (x_i, y_i) ∈ CN_n
F_n(x, y) is the damage intensity graphic data generated from the damage detection result, W_img is the width of the vehicle damage picture, H_img is the height of the vehicle damage picture, CN_n is the set of center points of the detection frames contained in the n pieces of detection frame data, f_i is the Gaussian function corresponding to the i-th piece of detection frame data, i is the index of the detection frame data with values in [1, n], and (x_i, y_i) is the coordinate of the center point of the detection frame contained in the i-th piece of detection frame data.
Optionally, the vehicle damage picture processing apparatus further includes:
a visualization module, configured to generate a visualized image according to the damage intensity graphic data; and
an image sending module, configured to send the visualized image to a designated terminal.
For the specific limitations of the vehicle damage picture processing apparatus, reference can be made to the above limitations of the vehicle damage picture processing method, which are not repeated here. Each module in the above apparatus can be implemented in whole or in part by software, hardware or a combination thereof. The above modules can be embedded in, or be independent of, a processor in a computer device in hardware form, or be stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 9. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a readable storage medium and an internal memory. The readable storage medium stores an operating system, computer-readable instructions and a database. The internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the readable storage medium. The database of the computer device is used to store the data involved in the above vehicle damage picture processing method. The network interface of the computer device is used to communicate with external terminals through a network connection. When executed by the processor, the computer-readable instructions implement a vehicle damage picture processing method. The readable storage medium provided in this embodiment includes a non-volatile readable storage medium and a volatile readable storage medium.
In one embodiment, a computer device is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer-readable instructions:
processing a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture;
generating damage intensity graphic data according to the damage detection result; and
inputting the damage intensity graphic data into a preset damage intensity classification model for processing, and obtaining a processing result output by the preset damage intensity classification model.
In one embodiment, one or more readable storage media storing computer-readable instructions are provided. The readable storage media provided in this embodiment include non-volatile readable storage media and volatile readable storage media; the readable storage media store computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the following steps:
processing a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture;
generating damage intensity graphic data according to the damage detection result; and
inputting the damage intensity graphic data into a preset damage intensity classification model for processing, and obtaining a processing result output by the preset damage intensity classification model.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through computer-readable instructions. The computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example; in practical applications, the above functions can be assigned to different functional units and modules as needed, that is, the internal structure of the apparatus can be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only used to illustrate the technical solutions of this application and not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or make equivalent replacements for some of the technical features; these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all be included within the protection scope of this application.

Claims (20)

  1. A vehicle damage picture processing method, comprising:
    processing a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture;
    generating damage intensity graphic data according to the damage detection result; and
    inputting the damage intensity graphic data into a preset damage intensity classification model for processing, and obtaining a processing result output by the preset damage intensity classification model.
  2. The vehicle damage picture processing method according to claim 1, wherein the generating damage intensity graphic data according to the damage detection result comprises:
    acquiring detection frame data in the damage detection result;
    establishing a Gaussian distribution model, and processing the detection frame data into detection frame damage intensity data based on the Gaussian distribution model; and
    generating the damage intensity graphic data from the multiple pieces of detection frame damage intensity data.
  3. The vehicle damage picture processing method according to claim 2, wherein the establishing a Gaussian distribution model and processing the detection frame data into detection frame damage intensity data based on the Gaussian distribution model comprises:
    adjusting a center-focus distribution of the detection frame damage intensity data through a first adjustment coefficient; and
    setting a central intensity of the detection frame according to a confidence of the detection frame data and the first adjustment coefficient.
  4. The vehicle damage picture processing method according to claim 2, wherein the damage intensity graphic data is generated by the following formula:
    [formula image PCTCN2020118063-appb-100001 in the original: the definition of F_n(x, y)]
    δx = x - x_i, δy = y - y_i, (x_i, y_i) ∈ CN_n,
    wherein F_n(x, y) is the damage intensity graphic data generated based on the damage detection result, W_img is the width of the vehicle damage picture, H_img is the height of the vehicle damage picture, CN_n is the set of center points of the detection frames contained in the n pieces of detection frame data, f_i is the Gaussian function corresponding to the i-th piece of detection frame data, i is the index of the detection frame data with values in [1, n], and (x_i, y_i) is the coordinate of the center point of the detection frame contained in the i-th piece of detection frame data.
  5. The vehicle damage picture processing method according to claim 1, wherein after the generating damage intensity graphic data according to the damage detection result, the method further comprises:
    generating a visualized image according to the damage intensity graphic data; and
    sending the visualized image to a designated terminal.
  6. A vehicle damage picture processing apparatus, comprising:
    a detection module, configured to process a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture;
    a graphics module, configured to generate damage intensity graphic data according to the damage detection result; and
    a model processing module, configured to input the damage intensity graphic data into a preset damage intensity classification model for processing and obtain a processing result output by the preset damage intensity classification model.
  7. The vehicle damage picture processing apparatus according to claim 6, wherein the graphics module comprises:
    a detection frame data acquiring unit, configured to acquire detection frame data in the damage detection result;
    a detection frame intensity data generating unit, configured to establish a Gaussian distribution model and process the detection frame data into detection frame damage intensity data based on the Gaussian distribution model; and
    an intensity graphic data generating unit, configured to generate the damage intensity graphic data from the multiple pieces of detection frame damage intensity data.
  8. The vehicle damage picture processing apparatus according to claim 7, wherein the detection frame intensity data generating unit comprises:
    a distribution adjusting subunit, configured to adjust a center-focus distribution of the detection frame damage intensity data through a first adjustment coefficient; and
    a central intensity setting subunit, configured to set a central intensity of the detection frame according to a confidence of the detection frame data and the first adjustment coefficient.
  9. The vehicle damage picture processing apparatus according to claim 7, wherein the intensity graphic data generating unit generates the damage intensity graphic data by the following formula:
    [formula image PCTCN2020118063-appb-100002 in the original: the definition of F_n(x, y)]
    δx = x - x_i, δy = y - y_i, (x_i, y_i) ∈ CN_n,
    wherein F_n(x, y) is the damage intensity graphic data generated based on the damage detection result, W_img is the width of the vehicle damage picture, H_img is the height of the vehicle damage picture, CN_n is the set of center points of the detection frames contained in the n pieces of detection frame data, f_i is the Gaussian function corresponding to the i-th piece of detection frame data, i is the index of the detection frame data with values in [1, n], and (x_i, y_i) is the coordinate of the center point of the detection frame contained in the i-th piece of detection frame data.
  10. The vehicle damage picture processing apparatus according to claim 6, further comprising:
    a visualization module, configured to generate a visualized image according to the damage intensity graphic data; and
    an image sending module, configured to send the visualized image to a designated terminal.
  11. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
    processing a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture;
    generating damage intensity graphic data according to the damage detection result; and
    inputting the damage intensity graphic data into a preset damage intensity classification model for processing, and obtaining a processing result output by the preset damage intensity classification model.
  12. The computer device according to claim 11, wherein the generating damage intensity graphic data according to the damage detection result comprises:
    acquiring detection frame data in the damage detection result;
    establishing a Gaussian distribution model, and processing the detection frame data into detection frame damage intensity data based on the Gaussian distribution model; and
    generating the damage intensity graphic data from the multiple pieces of detection frame damage intensity data.
  13. The computer device according to claim 12, wherein the establishing a Gaussian distribution model and processing the detection frame data into detection frame damage intensity data based on the Gaussian distribution model comprises:
    adjusting a center-focus distribution of the detection frame damage intensity data through a first adjustment coefficient; and
    setting a central intensity of the detection frame according to a confidence of the detection frame data and the first adjustment coefficient.
  14. The computer device according to claim 12, wherein the damage intensity graphic data is generated by the following formula:
    [formula image PCTCN2020118063-appb-100003 in the original: the definition of F_n(x, y)]
    δx = x - x_i, δy = y - y_i, (x_i, y_i) ∈ CN_n,
    wherein F_n(x, y) is the damage intensity graphic data generated based on the damage detection result, W_img is the width of the vehicle damage picture, H_img is the height of the vehicle damage picture, CN_n is the set of center points of the detection frames contained in the n pieces of detection frame data, f_i is the Gaussian function corresponding to the i-th piece of detection frame data, i is the index of the detection frame data with values in [1, n], and (x_i, y_i) is the coordinate of the center point of the detection frame contained in the i-th piece of detection frame data.
  15. The computer device according to claim 11, wherein after the generating damage intensity graphic data according to the damage detection result, the processor further implements the following steps when executing the computer-readable instructions:
    generating a visualized image according to the damage intensity graphic data; and
    sending the visualized image to a designated terminal.
  16. One or more readable storage media storing computer-readable instructions, wherein the computer-readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:
    processing a vehicle damage picture through a preset detector to obtain a damage detection result of the vehicle damage picture;
    generating damage intensity graphic data according to the damage detection result; and
    inputting the damage intensity graphic data into a preset damage intensity classification model for processing, and obtaining a processing result output by the preset damage intensity classification model.
  17. The readable storage medium according to claim 16, wherein the generating damage intensity graphic data according to the damage detection result comprises:
    acquiring detection frame data in the damage detection result;
    establishing a Gaussian distribution model, and processing the detection frame data into detection frame damage intensity data based on the Gaussian distribution model; and
    generating the damage intensity graphic data from the multiple pieces of detection frame damage intensity data.
  18. The readable storage medium according to claim 17, wherein the establishing a Gaussian distribution model and processing the detection frame data into detection frame damage intensity data based on the Gaussian distribution model comprises:
    adjusting a center-focus distribution of the detection frame damage intensity data through a first adjustment coefficient; and
    setting a central intensity of the detection frame according to a confidence of the detection frame data and the first adjustment coefficient.
  19. The readable storage medium according to claim 17, wherein the damage intensity graphic data is generated by the following formula:
    [formula image PCTCN2020118063-appb-100004 in the original: the definition of F_n(x, y)]
    δx = x - x_i, δy = y - y_i, (x_i, y_i) ∈ CN_n,
    wherein F_n(x, y) is the damage intensity graphic data generated based on the damage detection result, W_img is the width of the vehicle damage picture, H_img is the height of the vehicle damage picture, CN_n is the set of center points of the detection frames contained in the n pieces of detection frame data, f_i is the Gaussian function corresponding to the i-th piece of detection frame data, i is the index of the detection frame data with values in [1, n], and (x_i, y_i) is the coordinate of the center point of the detection frame contained in the i-th piece of detection frame data.
  20. The readable storage medium according to claim 16, wherein after the generating damage intensity graphic data according to the damage detection result, the processor is further configured to perform the following steps:
    generating a visualized image according to the damage intensity graphic data; and
    sending the visualized image to a designated terminal.
PCT/CN2020/118063 2020-04-29 2020-09-27 Vehicle damage picture processing method and apparatus, computer device, and storage medium WO2021218020A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010357524.1A CN111666973B (zh) 2020-04-29 2020-04-29 车辆损伤图片处理方法、装置、计算机设备及存储介质
CN202010357524.1 2020-04-29

Publications (1)

Publication Number Publication Date
WO2021218020A1 true WO2021218020A1 (zh) 2021-11-04

Family

ID=72383009

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/118063 WO2021218020A1 (zh) 2020-04-29 2020-09-27 Vehicle damage picture processing method and apparatus, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN111666973B (zh)
WO (1) WO2021218020A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111666973B (zh) * 2020-04-29 2024-04-09 平安科技(深圳)有限公司 车辆损伤图片处理方法、装置、计算机设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764046A (zh) * 2018-04-26 2018-11-06 平安科技(深圳)有限公司 Apparatus and method for generating vehicle damage classification model, and computer-readable storage medium
CN109344899A (zh) * 2018-09-30 2019-02-15 百度在线网络技术(北京)有限公司 Multi-target detection method and apparatus, and electronic device
CN109815997A (zh) * 2019-01-04 2019-05-28 平安科技(深圳)有限公司 Deep learning-based vehicle damage recognition method and related apparatus
CN110569837A (zh) * 2018-08-31 2019-12-13 阿里巴巴集团控股有限公司 Method and apparatus for optimizing damage detection result
CN111666973A (zh) * 2020-04-29 2020-09-15 平安科技(深圳)有限公司 Vehicle damage picture processing method and apparatus, computer device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446618A (zh) * 2018-03-09 2018-08-24 平安科技(深圳)有限公司 Vehicle damage assessment method and apparatus, electronic device, and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764046A (zh) * 2018-04-26 2018-11-06 平安科技(深圳)有限公司 Apparatus and method for generating vehicle damage classification model, and computer-readable storage medium
CN110569837A (zh) * 2018-08-31 2019-12-13 阿里巴巴集团控股有限公司 Method and apparatus for optimizing damage detection result
CN109344899A (zh) * 2018-09-30 2019-02-15 百度在线网络技术(北京)有限公司 Multi-target detection method and apparatus, and electronic device
CN109815997A (zh) * 2019-01-04 2019-05-28 平安科技(深圳)有限公司 Deep learning-based vehicle damage recognition method and related apparatus
CN111666973A (zh) * 2020-04-29 2020-09-15 平安科技(深圳)有限公司 Vehicle damage picture processing method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN111666973A (zh) 2020-09-15
CN111666973B (zh) 2024-04-09

Similar Documents

Publication Publication Date Title
WO2021135499A1 Damage detection model training and vehicle damage detection method and apparatus, device, and medium
US11710293B2 (en) Target detection method and apparatus, computer-readable storage medium, and computer device
US11842487B2 (en) Detection model training method and apparatus, computer device and storage medium
US11455807B2 (en) Training neural networks for vehicle re-identification
US20210166383A1 (en) Method and device for detecting and locating lesion in medical image, equipment and storage medium
CN111310624B Occlusion recognition method and apparatus, computer device, and storage medium
WO2021017261A1 Recognition model training method, image recognition method and apparatus, device, and medium
WO2020215557A1 Medical image interpretation method and apparatus, computer device, and storage medium
CN109255772B License plate image generation method, apparatus, device, and medium based on style transfer
WO2019218410A1 Image classification method, computer device, and storage medium
WO2021114809A1 Vehicle damage feature detection method and apparatus, computer device, and storage medium
WO2021151336A1 Attention mechanism-based road image target detection method and related device
WO2020119458A1 Facial key point detection method and apparatus, computer device, and storage medium
WO2020248841A1 AU detection method and apparatus for image, electronic device, and storage medium
TW201933276A Image completion method
CN110930417A Training method and apparatus for image segmentation model, and image segmentation method and apparatus
CN110826395B Face rotation model generation method and apparatus, computer device, and storage medium
WO2021114612A1 Target re-identification method and apparatus, computer device, and storage medium
CN110598638A Model training method, face gender prediction method, device, and storage medium
CN112395979A Image-based health status recognition method, apparatus, device, and storage medium
CN111680675B Face liveness detection method, system, and apparatus, computer device, and storage medium
WO2021159748A1 Model compression method and apparatus, computer device, and storage medium
WO2022134354A1 Vehicle damage detection model training and vehicle damage detection method and apparatus, device, and medium
CN113469092B Character recognition model generation method and apparatus, computer device, and storage medium
US20230036338A1 (en) Method and apparatus for generating image restoration model, medium and program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933761

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20933761

Country of ref document: EP

Kind code of ref document: A1