Detailed Description
The following describes the scheme provided in the present specification with reference to the drawings.
The vehicle body color recognition method provided in one or more embodiments of the present disclosure may be applied to a scene as shown in fig. 1. In fig. 1, the vehicle body color recognition module 20 is used to recognize the color of the vehicle in a picture. Specifically, for a photographed picture of a vehicle, the vehicle body color recognition module 20 may detect, according to a target detection algorithm, the areas in which the respective components of the vehicle are located in the photographed picture, thereby obtaining a plurality of areas. From the plurality of areas, at least one candidate region in which a component capable of reflecting the color of the vehicle body is located is then selected. Each candidate region is input into a multi-classification classifier to predict a confidence vector and a most probable color class for that region; the confidence vector consists of the confidences that the candidate region belongs to the respective predefined color classes. Finally, the confidence vector and the most probable color class of each candidate region are input into a vehicle color recognition model to recognize the body color of the vehicle in the photographed picture.
The vehicle damage determination module 40 is configured to determine damage to the vehicle whose color has been recognized by the vehicle body color recognition module 20. For example, a damaged part of the vehicle reflected in the photographed picture and the degree of damage thereof are automatically recognized, and a repair scheme is automatically given.
It should be understood that fig. 1 is only one application scenario of the vehicle body color recognition method provided in one or more embodiments of the present disclosure. In other application scenarios, the photographed picture of the vehicle whose body color has been recognized by the vehicle body color recognition module 20 may be subjected to further processing, such as splitting, which is not limited in this specification.
Fig. 2 is a flowchart of a method for identifying a vehicle body color according to an embodiment of the present disclosure. The method may be executed by any apparatus with processing capability, such as a server, a system, or a module, for example the vehicle body color recognition module 20 of fig. 1. As shown in fig. 2, the method specifically may include:
step 202, a photographed image of a vehicle is obtained.
The above-mentioned photographed image may be a photographed image of a certain vehicle, and the photographed image may cover a plurality of components of the vehicle. Components herein may include, but are not limited to, doors, bumpers, license plates, fenders, headlights, tires, and the like.
step 204, the areas in which the respective components are located are detected in the photographed picture according to a target detection algorithm, thereby obtaining a plurality of areas.
The target detection algorithm herein may include, but is not limited to, the Faster Region-based Convolutional Neural Network (Faster R-CNN), the Region-based Fully Convolutional Network (R-FCN), the Single Shot MultiBox Detector (SSD), and YOLO, among others. Detecting the area in which each component of the vehicle is located by a target detection algorithm can improve the accuracy of area detection.
Optionally, the target detection algorithm may be trained according to a plurality of sample pictures, and then the region where each component is located may be detected in the photographed picture according to the trained target detection algorithm, so that accuracy of region detection may be improved. It should be noted that the above-mentioned one sample picture may cover one or more parts of a vehicle. For the sample picture, the area where each component is located and the category of the area can be manually calibrated in advance. The category herein may refer to the names of the above-mentioned components.
In one example, the resulting plurality of regions may be as shown in fig. 3. In fig. 3, a rectangular frame is used to represent an area where a component of the vehicle is located. In one example, the region may be represented by four-dimensional coordinates, e.g., may be represented as (x, y, w, h), where x is the abscissa of the upper left vertex of the region, y is the ordinate of the upper left vertex of the region, w is the width of the region, and h is the height of the region. In fig. 3, "bumper", "license plate", "fender", "headlight", and "tire" are used to indicate the type of each region.
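As a minimal sketch of the four-dimensional (x, y, w, h) representation described above (the helper name and the example region are illustrative, not taken from the specification):

```python
def region_corners(region):
    """Convert an (x, y, w, h) region, with (x, y) the upper-left vertex,
    into (left, top, right, bottom) corner coordinates."""
    x, y, w, h = region
    return (x, y, x + w, y + h)

# Hypothetical detected "license plate" region
plate = (120, 340, 80, 25)
corners = region_corners(plate)  # (120, 340, 200, 365)
```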
step 206, a candidate region in which at least one designated component is located is selected from the plurality of regions.
The designated component herein may refer to a component having the same or a similar color as the vehicle body, such as a "door", "bumper", "fender", or "hood".
In one implementation, the candidate regions may be selected in conjunction with the category of the region. For example, when the category of a certain region is the name of the above-described designated part, the region may be selected as the candidate region. Taking fig. 3 as an example, the regions A, B and C can be selected as candidate regions. That is, the number of candidate regions in the present specification may be plural.
Here, by selecting the candidate region where the specified component is located, interference of unrelated components can be avoided.
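The selection in step 206 might be sketched as follows, assuming the detector returns (category, region) pairs; the set of designated components follows the examples given above, and all names are illustrative:

```python
# Components assumed to share the body color, per the examples in the text
DESIGNATED_COMPONENTS = {"door", "bumper", "fender", "hood"}

def select_candidate_regions(detections):
    """Keep only regions whose category is a designated, body-colored component."""
    return [(cat, region) for cat, region in detections
            if cat in DESIGNATED_COMPONENTS]

detections = [
    ("bumper", (50, 300, 400, 60)),
    ("license plate", (180, 320, 90, 30)),  # filtered out: not body-colored
    ("fender", (10, 200, 120, 150)),
    ("tire", (60, 360, 100, 100)),          # filtered out
]
candidates = select_candidate_regions(detections)  # keeps "bumper" and "fender"
```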
Step 208, the candidate region is input into a multi-classification classifier to predict the confidence vector and the most probable color class of the candidate region.
The confidence vector here consists of the confidences that the candidate region belongs to the respective predefined color classes. The multi-classification classifier herein may also be referred to as a multi-classification model. The multi-classification classifier can be obtained by training a lightweight neural network model on a plurality of sample areas with color class labels. The lightweight neural network model herein may include, but is not limited to, MobileNet, SqueezeNet, Inception, Xception, ShuffleNet, and the like. Training a lightweight neural network model to obtain the multi-classification classifier can improve prediction efficiency. In addition, the multi-classification classifier trained in this specification can predict the color classes of the candidate areas in which the respective components of the vehicle are located, so that the comprehensiveness of prediction can be improved.
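The specification does not state how the confidences themselves are produced. As one common choice (an assumption here, not part of the disclosed method), a softmax over the classifier's raw per-class scores yields a confidence vector that sums to 1:

```python
import math

# Predefined color classes, following the labels listed in the text
PREDEFINED_COLOR_CLASSES = ["black", "blue", "red", "silver",
                            "white", "other", "indistinguishable"]

def softmax(scores):
    """Turn raw per-class scores into confidences that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for one candidate region
confidence_vector = softmax([1.2, 3.5, 0.4, 2.9, 0.1, -1.0, -2.0])
```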
The definition of the sample area can be referred to as a candidate area, i.e., an area where a component having the same or similar color as the vehicle body is located. The sample area is calibrated from the sample picture. In addition, the color class label of the sample area may be preset manually according to the color of the component corresponding to the sample area, which may include, but is not limited to, black, blue, red, silver, white, other colors, indistinguishable, etc.
It should be noted that, when the multi-classification classifier is trained, the sample area may be detected and calibrated from a sample picture by a pre-trained target detection algorithm; the calibration can also be performed manually in the sample picture.
The definition of the predefined color categories in step 208 may refer to the color category labels of the sample areas. Further, the most probable color category in step 208 may be determined based on the confidences that the candidate region belongs to the respective predefined color categories. For example, in a first mode, the predefined color category corresponding to the greatest confidence may be determined as the most probable color category of the candidate region; in a second mode, a predefined color category whose confidence exceeds a threshold may be determined as the most probable color category of the candidate region.
It should be noted that, in the second mode, each candidate region is still assigned only one most probable color category. For example, the plurality of predefined color categories may be ordered from high to low priority. Whether the confidence corresponding to each predefined color category is greater than the threshold is then judged in sequence; once the confidence of a certain predefined color category is greater than the threshold, that predefined color category is selected as the most probable color category, and the subsequent predefined color categories are no longer judged.
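The two decision modes described above might be sketched as follows; the class names, the priority order, and the threshold value are illustrative assumptions:

```python
PREDEFINED_COLOR_CLASSES = ["black", "blue", "red", "silver",
                            "white", "other", "indistinguishable"]

def most_probable_by_max(confidences):
    """First mode: the class with the greatest confidence."""
    best = max(range(len(confidences)), key=lambda i: confidences[i])
    return PREDEFINED_COLOR_CLASSES[best]

def most_probable_by_priority(confidences, priority_order, threshold=0.5):
    """Second mode: walk the classes from high to low priority and return
    the first whose confidence exceeds the threshold; later classes are
    no longer judged."""
    for color in priority_order:
        if confidences[PREDEFINED_COLOR_CLASSES.index(color)] > threshold:
            return color
    return None  # no class exceeded the threshold

conf = [0.05, 0.62, 0.02, 0.55, 0.03, 0.02, 0.01]
most_probable_by_max(conf)  # "blue" (greatest confidence, 0.62)
most_probable_by_priority(conf, ["silver", "blue", "black", "red",
                                 "white", "other",
                                 "indistinguishable"])  # "silver" (first above 0.5)
```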
It will be appreciated that, when the number of candidate regions is plural, a plurality of confidence vectors and most probable color categories may be predicted.
Step 210, the confidence vector and the most probable color category are input into a vehicle color recognition model to recognize the body color of the vehicle.
The vehicle color recognition model may be obtained by training a decision tree, a support vector machine, or a random forest, using the confidence vectors and most probable color categories of the sample areas of a plurality of sample pictures as input data. The confidence vector of a sample area is composed of the confidences that the sample area belongs to the respective predefined color classes. The definitions of the sample picture and the sample area here are the same as above and are not repeated. It should be noted that each sample picture has a corresponding color category label, which may be manually calibrated in advance. Further, the confidence vector and the most probable color category of a sample area may be obtained by inputting the sample area into the pre-trained multi-classification classifier.
When the number of candidate regions is plural, the confidence vectors and most probable color categories of all of the candidate regions may be input to the vehicle color recognition model. That is, in this specification the body color of the vehicle may be identified according to the most probable color categories of the plurality of candidate areas, so that the accuracy and robustness of vehicle body color identification can be improved.
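The specification does not describe how the per-region predictions are packed into the model's input. One plausible layout, sketched here under stated assumptions (the fixed region count, the padding scheme, and all names are hypothetical), concatenates each region's confidence vector with its most probable class index:

```python
def build_model_input(region_predictions, num_classes=7, max_regions=3):
    """Concatenate each candidate region's confidence vector and most probable
    class index into one fixed-length vector; pad with zeros when fewer than
    max_regions candidate regions were found."""
    features = []
    for confidences, best_index in region_predictions[:max_regions]:
        features.extend(confidences)
        features.append(float(best_index))
    features.extend([0.0] * (max_regions * (num_classes + 1) - len(features)))
    return features

# Two candidate regions, seven predefined color classes each
preds = [
    ([0.05, 0.62, 0.02, 0.20, 0.05, 0.04, 0.02], 1),
    ([0.10, 0.55, 0.05, 0.15, 0.08, 0.04, 0.03], 1),
]
x = build_model_input(preds)  # length 3 * (7 + 1) = 24, last 8 entries padded
```

A fixed-length vector like this could then serve as one training or inference sample for the decision tree, support vector machine, or random forest mentioned above.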
The vehicle body color identified in step 210 may be any of the following: black, blue, red, silver, white, other colors, indistinguishable, etc.
In summary, according to the vehicle body color recognition method provided by one or more embodiments of the present disclosure, detecting the area in which each component of the vehicle is located by a target detection algorithm can improve the accuracy of area detection. In addition, training a lightweight neural network model to obtain the multi-classification classifier can improve prediction efficiency. Finally, the vehicle color recognition model recognizes the body color according to the confidence vectors and most probable color categories of the plurality of candidate areas, rather than depending on the color of any single component, so that the problem of inaccurate body color recognition when the color of a single component changes under the influence of illumination and the like can be avoided.
Corresponding to the above-mentioned vehicle body color recognition method, an embodiment of the present disclosure further provides a vehicle body color recognition device, as shown in fig. 4, which may include:
an acquisition unit 402 for acquiring a photographed picture of the vehicle. The captured picture covers a plurality of components of the vehicle.
And a detection unit 404, configured to detect, in the captured image acquired by the acquisition unit 402, an area where each component is located according to the target detection algorithm, thereby obtaining a plurality of areas.
The target detection algorithm herein may include, but is not limited to, any of the following: the Faster Region-based Convolutional Neural Network (Faster R-CNN), the Region-based Fully Convolutional Network (R-FCN), the Single Shot MultiBox Detector (SSD), and YOLO, among others.
A selecting unit 406, configured to select a candidate area where at least one designated component is located from the plurality of areas detected by the detecting unit 404.
The designated part herein may refer to a part having the same or similar color as the vehicle body among the plurality of parts.
A prediction unit 408, configured to input the candidate region selected by the selection unit 406 into a multi-classification classifier to predict a confidence vector and a most probable color class of the candidate region. The confidence vector is composed of the confidences that the candidate region belongs to the respective predefined color classes.
The multi-classification classifier can be obtained by training a lightweight neural network model according to a plurality of sample areas with color class labels. The sample area corresponds to a component of the vehicle in the sample picture.
The identifying unit 410 is configured to input the confidence vector and the most probable color class predicted by the prediction unit 408 into the vehicle color recognition model to identify the body color of the vehicle.
The vehicle color recognition model can be obtained by training a decision tree, a support vector machine, or a random forest, using the confidence vectors and most probable color classes of the sample areas of a plurality of sample pictures as input data. The confidence vector may be composed of the confidences that the sample area belongs to the respective predefined color classes. The sample picture covers one or more components of the vehicle, and the sample area corresponds to a component of the vehicle in the sample picture.
The functions of the functional modules of the apparatus in the foregoing embodiments of the present disclosure may be implemented by the steps of the foregoing method embodiments, so that the specific working process of the apparatus provided in one embodiment of the present disclosure is not repeated herein.
The present specification provides a vehicle body color recognition device. The acquisition unit 402 acquires a photographed picture of a vehicle. The detection unit 404 detects the areas in which the respective components are located in the photographed picture according to a target detection algorithm, thereby obtaining a plurality of areas. The selection unit 406 selects a candidate region in which at least one designated component is located from the plurality of regions. The prediction unit 408 inputs the candidate regions into a multi-classification classifier to predict confidence vectors and most probable color categories for the candidate regions; each confidence vector is composed of the confidences that the candidate region belongs to the respective predefined color classes. The recognition unit 410 inputs the confidence vectors and most probable color categories into a vehicle color recognition model to recognize the body color of the vehicle. Thus, the accuracy of vehicle body color recognition can be improved.
The vehicle body color recognition device provided in one embodiment of the present disclosure may be a module or unit of the vehicle body color recognition module 20 of fig. 1.
Corresponding to the above-mentioned vehicle body color recognition method, the embodiment of the present disclosure further provides a vehicle body color recognition device, as shown in fig. 5, which may include:
a receiver 502 for acquiring a photographed picture of the vehicle. The captured picture covers a plurality of components of the vehicle.
At least one processor 504 is configured to detect, in the captured picture, the areas in which the respective components are located according to a target detection algorithm, thereby obtaining a plurality of areas; select a candidate region in which at least one designated component is located from the plurality of regions; input the candidate regions into a multi-classification classifier to predict confidence vectors and most probable color categories for the candidate regions, where each confidence vector is composed of the confidences that the candidate region belongs to the respective predefined color classes; and input the confidence vectors and most probable color categories into a vehicle color recognition model to recognize the body color of the vehicle.
The vehicle body color recognition device provided by the embodiment of the specification can improve the accuracy of vehicle body color recognition of a vehicle.
Fig. 5 shows an example in which the vehicle body color recognition device provided in the embodiment of the present disclosure is located in a server. In practical applications, the device may also be located in a terminal, which is not limited in this specification.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware, or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may reside in a server. Alternatively, the processor and the storage medium may reside as discrete components in a server.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing detailed description of the embodiments has further described the objects, technical solutions and advantages of the present specification, and it should be understood that the foregoing description is only a detailed description of the embodiments of the present specification, and is not intended to limit the scope of the present specification, but any modifications, equivalents, improvements, etc. made on the basis of the technical solutions of the present specification should be included in the scope of the present specification.