Disclosure of Invention
One or more embodiments of the present disclosure describe a method, an apparatus, and a device for multi-vehicle recognition, which may implement case-level multi-vehicle recognition.
In a first aspect, a multi-vehicle identification method is provided, including:
acquiring a group of captured pictures of a vehicle corresponding to one case, the group of captured pictures comprising at least one captured picture;
inputting the at least one captured picture into a multi-classification classifier to predict a confidence vector and a maximum-likelihood color category for each captured picture, the confidence vector being composed of the confidences of the captured picture respectively belonging to each predefined color category;
merging the confidence vectors and the maximum-likelihood color categories of the captured pictures; and
inputting the merged confidence vectors and maximum-likelihood color categories into a multi-vehicle classification model to identify whether the group of captured pictures covers vehicles of multiple colors.
In a second aspect, there is provided a multi-vehicle identification device, comprising:
an acquisition unit, configured to acquire a group of captured pictures of a vehicle corresponding to one case, the group of captured pictures comprising at least one captured picture;
a prediction unit, configured to input the at least one captured picture acquired by the acquisition unit into a multi-classification classifier to predict a confidence vector and a maximum-likelihood color category for each captured picture, the confidence vector being composed of the confidences of the captured picture respectively belonging to each predefined color category;
a merging unit, configured to merge the confidence vectors and the maximum-likelihood color categories of the captured pictures predicted by the prediction unit; and
an identification unit, configured to input the confidence vectors and maximum-likelihood color categories merged by the merging unit into a multi-vehicle classification model to identify whether the group of captured pictures covers vehicles of multiple colors.
In a third aspect, there is provided a multi-vehicle identification apparatus, comprising:
a receiver, configured to acquire a group of captured pictures of a vehicle corresponding to one case, the group of captured pictures comprising at least one captured picture; and
at least one processor, configured to input the at least one captured picture into a multi-classification classifier to predict a confidence vector and a maximum-likelihood color category for each captured picture, the confidence vector being composed of the confidences of the captured picture respectively belonging to each predefined color category; merge the confidence vectors and the maximum-likelihood color categories of the captured pictures; and input the merged confidence vectors and maximum-likelihood color categories into a multi-vehicle classification model to identify whether the group of captured pictures covers vehicles of multiple colors.
According to the multi-vehicle identification method, device, and apparatus provided in one or more embodiments of the present disclosure, a group of captured pictures of a vehicle corresponding to one case is acquired, the group comprising at least one captured picture. The at least one captured picture is input into a multi-classification classifier to predict a confidence vector and a maximum-likelihood color category for each captured picture, where the confidence vector is composed of the confidences of the captured picture respectively belonging to each predefined color category. The confidence vectors and maximum-likelihood color categories of the captured pictures are then merged, and the merged result is input into a multi-vehicle classification model to identify whether the group of captured pictures covers vehicles of multiple colors. The multi-vehicle identification method provided in the present specification can thus implement case-level multi-vehicle identification. In addition, since one group, i.e., one case, usually comprises a plurality of captured pictures, the method also implements multi-picture, multi-vehicle identification.
Detailed Description
The following describes the scheme provided in the present specification with reference to the drawings.
The multi-vehicle identification method provided in one or more embodiments of the present disclosure may be applied to the scenario shown in FIG. 1. In FIG. 1, the multi-vehicle recognition module 20 is configured to identify whether the captured pictures of the same case cover vehicles of multiple colors. Specifically, the multi-vehicle recognition module 20 may acquire a group of captured pictures of a vehicle corresponding to one case, the group comprising at least one captured picture. The at least one captured picture is input into a multi-classification classifier to predict a confidence vector and a maximum-likelihood color category for each captured picture, where the confidence vector is composed of the confidences of the captured picture respectively belonging to each predefined color category. The confidence vectors and maximum-likelihood color categories of the captured pictures are then merged, and the merged result is input into a multi-vehicle classification model to identify whether the group of captured pictures covers vehicles of multiple colors.
After the multi-vehicle recognition module 20 obtains the recognition result (e.g., whether the pictures cover vehicles of multiple colors, as described above), the recognition result may be input into the vehicle damage determination module 40. In one implementation, when the recognition result indicates that only one color of vehicle is covered, the vehicle damage determination module 40 determines damage to the vehicle of that color. When the recognition result indicates that vehicles of multiple colors are covered, the vehicle damage determination module 40 determines damage to the vehicle of a target color, or to the vehicles of each color. The damage determination process may be as follows: the damaged parts of the vehicle of a certain color, and their degrees of damage, as reflected in the captured pictures, are automatically recognized, and a repair plan is automatically given.
When the vehicle damage determination module 40 determines damage only to the vehicle of the target color, the captured pictures of vehicles of the other colors may be diverted for separate processing.
FIG. 2 is a flowchart of a multi-vehicle identification method according to an embodiment of the present disclosure. The method may be executed by any device, server, system, or module having processing capabilities, for example, the multi-vehicle recognition module 20 in FIG. 1. As shown in FIG. 2, the method may specifically include:
Step 202: acquire a group of captured pictures of a vehicle corresponding to one case.
The group of captured pictures may include at least one captured picture.
Step 204: input the at least one captured picture into a multi-classification classifier to predict a confidence vector and a maximum-likelihood color category for each captured picture.
The confidence vector here is composed of the confidences of the captured picture respectively belonging to each predefined color category. The multi-classification classifier herein may also be referred to as a multi-classification model. The multi-classification classifier may be obtained by training a lightweight neural network model on a plurality of sample pictures carrying color category labels. The lightweight neural network model herein may include, but is not limited to, MobileNet, SqueezeNet, Inception, Xception, ShuffleNet, and the like. Training a lightweight neural network model to obtain the multi-classification classifier improves prediction efficiency. In addition, because the multi-classification classifier trained in the present specification can predict the color categories of a plurality of captured pictures, the comprehensiveness of prediction can be improved.
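Purely as an illustrative sketch, and not as part of the claimed embodiments, the prediction of step 204 can be pictured as follows. The `logits` are a hypothetical stand-in for the raw output of the trained lightweight network on one captured picture, and the category list mirrors the example labels described in this section:

```python
import math

# Hypothetical set of predefined color categories, mirroring the
# example labels given in this disclosure.
COLOR_CATEGORIES = ["black", "blue", "red", "silver", "white", "other", "indistinguishable"]

def softmax(logits):
    """Convert raw network scores into a confidence vector summing to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Return (confidence vector, maximum-likelihood color category).

    `logits` stands in for the output of the trained lightweight
    network (e.g., MobileNet) on a single captured picture; the
    maximum-likelihood category here is simply the argmax category.
    """
    confidences = softmax(logits)
    best = max(range(len(confidences)), key=lambda i: confidences[i])
    return confidences, COLOR_CATEGORIES[best]

# Hypothetical logits for one captured picture of a red vehicle.
conf_vec, color = predict([0.2, 0.1, 2.5, 0.3, 0.0, -1.0, -2.0])
```

In this sketch `conf_vec` plays the role of the confidence vector of step 204 and `color` the role of the maximum-likelihood color category.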
The sample pictures may be captured pictures of vehicles collected in advance by data collectors (including C-end users and damage assessors of insurance companies). After the plurality of sample pictures are collected, they may be divided into M groups according to case information, where M is a positive integer. Each group may include one or more sample pictures. It will be appreciated that one group corresponds to one case.
It should be noted that the color category labels of the sample pictures may be calibrated manually in advance and may include, but are not limited to, black, blue, red, silver, white, other colors, indistinguishable, and the like.
The predefined color categories in step 204 may be defined with reference to the color category labels of the sample pictures. Furthermore, the maximum-likelihood color category in step 204 may be determined based on the confidences of the captured picture respectively belonging to the predefined color categories. For example, the predefined color category corresponding to the maximum confidence may be determined as the maximum-likelihood color category of the captured picture; alternatively, a predefined color category whose confidence exceeds a threshold may be determined as the maximum-likelihood color category of the captured picture.
It should be noted that, in the second mode, a captured picture is still assigned only one maximum-likelihood color category. For example, the plurality of predefined color categories may be ordered by priority from high to low. Then, for each predefined color category in turn, it is judged whether the corresponding confidence is greater than the threshold; once the confidence of a certain predefined color category is greater than the threshold, that category is selected as the maximum-likelihood color category and the subsequent predefined color categories are no longer examined.
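The priority-ordered selection of the second mode can be sketched as follows. The priority order, the threshold value, and the argmax fallback when no confidence clears the threshold are all hypothetical illustrations, not details specified by this disclosure:

```python
def pick_color_by_priority(confidences, priority, threshold=0.5):
    """Walk the predefined color categories from high to low priority
    and return the first one whose confidence exceeds the threshold;
    later categories are not examined.

    `confidences` maps each predefined color category to its confidence.
    If no confidence clears the threshold, fall back to the argmax
    category (a hypothetical fallback, not specified in the disclosure).
    """
    for category in priority:
        if confidences[category] > threshold:
            return category
    return max(confidences, key=confidences.get)

# Hypothetical confidences for one captured picture, and a
# hypothetical priority order over the predefined color categories.
confidences = {"black": 0.2, "white": 0.6, "red": 0.7}
priority = ["red", "black", "white"]
chosen = pick_color_by_priority(confidences, priority)
```

Here "red" is chosen even though "white" also clears the threshold, because "red" is examined first; this is how a picture is still assigned exactly one maximum-likelihood color category.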
Step 206: merge the confidence vectors and the maximum-likelihood color categories of the captured pictures.
Step 208: input the merged confidence vectors and maximum-likelihood color categories into the multi-vehicle classification model to identify whether the group of captured pictures covers vehicles of multiple colors.
The multi-vehicle classification model described above may also be referred to as a binary classification model. The multi-vehicle classification model may be obtained by training a decision tree, a support vector machine, or a random forest using the confidence vectors and maximum-likelihood color categories of the sample pictures in the plurality of groups as input data, where each confidence vector is composed of the confidences of a sample picture belonging to each predefined color category. Because the confidence vectors and maximum-likelihood color categories of the sample pictures are input into the classification model (a decision tree, a support vector machine, a random forest, or the like) group by group during training, the model can jointly learn the features (the confidence vectors, the maximum-likelihood color categories, and the like) of the plurality of sample pictures in the same group. This fuses the features of the plurality of sample pictures of the same case, and thereby avoids the inaccurate recognition results that arise when whether a case involves multiple vehicles is judged from a single captured picture alone.
It should be noted that each group of sample pictures may carry a sample label indicating whether the group involves multiple vehicles. The sample labels may be calibrated manually in advance. Further, the confidence vector and maximum-likelihood color category of a sample picture may be obtained by inputting the sample picture into the pre-trained multi-classification classifier.
In steps 206 and 208, the confidence vectors and maximum-likelihood color categories of the captured pictures are merged, and the merged result is input into the multi-vehicle classification model, so that the features of all captured pictures of the current case are fused. This improves the accuracy of multi-vehicle identification.
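The merge of step 206 and the case-level decision of step 208 can be sketched as follows. The feature layout of the merge is an assumption, and the decision function is only a stand-in rule: the disclosure's actual model is a decision tree, support vector machine, or random forest trained on grouped, labeled sample pictures:

```python
def merge_features(per_picture):
    """Concatenate the confidence vector and maximum-likelihood color
    category of every captured picture of one case into a single
    case-level feature list (step 206; the exact layout is assumed).

    `per_picture` is a list of (confidence_vector, color_category)
    pairs, one pair per captured picture.
    """
    merged = []
    for conf_vec, color in per_picture:
        merged.extend(conf_vec)
        merged.append(color)
    return merged

def is_multi_vehicle(per_picture):
    """Stand-in for the trained multi-vehicle classification model
    (step 208): flag the case when the pictures' maximum-likelihood
    color categories disagree. The trained model would instead make
    this decision from the merged features."""
    colors = {color for _, color in per_picture}
    return len(colors) > 1

# Hypothetical case: three captured pictures over two color categories.
case = [([0.9, 0.1], "black"), ([0.2, 0.8], "white"), ([0.85, 0.15], "black")]
case_features = merge_features(case)
```

In this sketch the case would be flagged as covering vehicles of multiple colors, since both "black" and "white" appear among the maximum-likelihood categories; a trained model can additionally weigh the full confidence vectors, which is what makes it robust to a single mis-colored picture.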
In summary, the multi-vehicle identification method provided in one or more embodiments of the present disclosure can implement case-level multi-vehicle identification. In addition, since one group usually includes a plurality of captured pictures, the multi-vehicle classification model can fuse the features of the plurality of pictures. This avoids the inaccuracy that arises when whether a case involves multiple vehicles is judged from a single captured picture alone, for example when illumination or similar factors change the apparent color in that picture. Finally, because the method can identify whether the plurality of captured pictures in the same group cover vehicles of multiple colors, it implements multi-picture, multi-vehicle identification.
Corresponding to the above multi-vehicle identification method, an embodiment of the present disclosure further provides a multi-vehicle identification device. As shown in FIG. 3, the device may include:
an acquisition unit 302, configured to acquire a group of captured pictures of a vehicle corresponding to one case, the group of captured pictures comprising at least one captured picture;
a prediction unit 304, configured to input the at least one captured picture acquired by the acquisition unit 302 into a multi-classification classifier to predict a confidence vector and a maximum-likelihood color category for each captured picture, the confidence vector being composed of the confidences of the captured picture respectively belonging to each predefined color category;
a merging unit 306, configured to merge the confidence vectors and the maximum-likelihood color categories of the captured pictures predicted by the prediction unit 304; and
an identification unit 308, configured to input the confidence vectors and maximum-likelihood color categories merged by the merging unit 306 into a multi-vehicle classification model to identify whether the group of captured pictures covers vehicles of multiple colors.
The multi-vehicle classification model may be a binary classification model.
In one example, the multi-vehicle classification model may be obtained by training a decision tree, a support vector machine, or a random forest using the confidence vectors and maximum-likelihood color categories of sample pictures in a plurality of groups as input data. The confidence vector here is composed of the confidences of the sample picture belonging to each predefined color category.
The functions of the functional modules of the apparatus in the foregoing embodiments of the present disclosure may be implemented by the steps of the foregoing method embodiments, so that the specific working process of the apparatus provided in one embodiment of the present disclosure is not repeated herein.
In the multi-vehicle identification device provided in an embodiment of the present disclosure, the acquisition unit 302 acquires a group of captured pictures of a vehicle corresponding to one case, the group comprising at least one captured picture. The prediction unit 304 inputs the at least one captured picture into a multi-classification classifier to predict a confidence vector and a maximum-likelihood color category for each captured picture, where the confidence vector is composed of the confidences of the captured picture respectively belonging to each predefined color category. The merging unit 306 merges the confidence vectors and the maximum-likelihood color categories of the captured pictures. The identification unit 308 inputs the merged confidence vectors and maximum-likelihood color categories into the multi-vehicle classification model to identify whether the group of captured pictures covers vehicles of multiple colors. The accuracy of multi-vehicle case identification can thereby be improved.
The multi-vehicle identification device provided in an embodiment of the present disclosure may be a sub-module or sub-unit of the multi-vehicle recognition module 20 in FIG. 1.
Corresponding to the above multi-vehicle identification method, an embodiment of the present disclosure further provides a multi-vehicle identification apparatus. As shown in FIG. 4, the apparatus may include:
a receiver 402, configured to acquire a group of captured pictures of a vehicle corresponding to one case, the group of captured pictures comprising at least one captured picture; and
at least one processor 404, configured to input the at least one captured picture into the multi-classification classifier to predict a confidence vector and a maximum-likelihood color category for each captured picture, the confidence vector being composed of the confidences of the captured picture respectively belonging to each predefined color category; merge the confidence vectors and the maximum-likelihood color categories of the captured pictures; and input the merged confidence vectors and maximum-likelihood color categories into the multi-vehicle classification model to identify whether the group of captured pictures covers vehicles of multiple colors.
The multi-vehicle identification apparatus provided in the embodiment of the present specification can improve the accuracy of multi-vehicle case identification.
FIG. 4 shows an example in which the multi-vehicle identification apparatus provided in the embodiment of the present disclosure is a server. In practical applications, the apparatus may also be a terminal, which is not limited in the present specification.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware, or in software instructions executed by a processor. The software instructions may consist of corresponding software modules that may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a server. Alternatively, the processor and the storage medium may reside as discrete components in a server.
Those skilled in the art will appreciate that in one or more of the examples described above, the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing detailed description of the embodiments has further described the objects, technical solutions and advantages of the present specification, and it should be understood that the foregoing description is only a detailed description of the embodiments of the present specification, and is not intended to limit the scope of the present specification, but any modifications, equivalents, improvements, etc. made on the basis of the technical solutions of the present specification should be included in the scope of the present specification.