CN110569693A - Vehicle body color identification method and device - Google Patents

Vehicle body color identification method and device Download PDF

Info

Publication number
CN110569693A
Authority
CN
China
Prior art keywords
color
vehicle
confidence
sample
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810936776.2A
Other languages
Chinese (zh)
Other versions
CN110569693B (en)
Inventor
蒋晨
徐娟
程远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced New Technologies Co Ltd
Advantageous New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201810936776.2A priority Critical patent/CN110569693B/en
Publication of CN110569693A publication Critical patent/CN110569693A/en
Application granted granted Critical
Publication of CN110569693B publication Critical patent/CN110569693B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of this specification provides a vehicle body color identification method and device. A captured picture of a vehicle is acquired, and the region where each component is located is detected in the captured picture according to a target detection algorithm, thereby obtaining a plurality of regions. A candidate region where at least one specified component is located is selected from the plurality of regions. The candidate region is input into a multi-classification classifier to predict a confidence vector and a most likely color class of the candidate region, where the confidence vector consists of the confidences with which the candidate region belongs to each predefined color class. The confidence vector and the most likely color class are then input into a vehicle color identification model to identify the body color of the vehicle.

Description

Vehicle body color identification method and device
Technical Field
One or more embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method and an apparatus for recognizing a color of a vehicle body.
Background
When a vehicle is damaged, it is generally necessary to determine whether the vehicle shown in a captured picture is the vehicle whose damage is to be assessed. One way of making this determination is to identify the body color of the vehicle. In the conventional art, the body color can be recognized only from a single reference region of the vehicle, where the reference region corresponds to one component of the vehicle. For example, the region corresponding to the hood above the license plate can be used as the reference region for identifying the body color.
However, if the component corresponding to the reference region is affected by lighting or the like, its apparent color may change, and the body color recognized from that single region then becomes inaccurate. Therefore, it is desirable to provide a more accurate vehicle body color identification method.
Disclosure of Invention
One or more embodiments of this specification describe a vehicle body color identification method and device that can improve the accuracy of body color identification.
In a first aspect, a vehicle body color identification method is provided, including:
acquiring a captured picture of a vehicle, the captured picture covering a plurality of components of the vehicle;
detecting, according to a target detection algorithm, the region where each component is located in the captured picture, thereby obtaining a plurality of regions;
selecting, from the plurality of regions, a candidate region where at least one specified component is located;
inputting the candidate region into a multi-classification classifier to predict a confidence vector and a most likely color class of the candidate region, the confidence vector consisting of the confidences with which the candidate region belongs to each predefined color class; and
inputting the confidence vector and the most likely color class into a vehicle color identification model to identify the body color of the vehicle.
In a second aspect, a vehicle body color identification device is provided, including:
an acquisition unit, configured to acquire a captured picture of a vehicle, the captured picture covering a plurality of components of the vehicle;
a detection unit, configured to detect, according to a target detection algorithm, the region where each component is located in the captured picture acquired by the acquisition unit, so as to obtain a plurality of regions;
a selection unit, configured to select, from the plurality of regions detected by the detection unit, a candidate region where at least one specified component is located;
a prediction unit, configured to input the candidate region selected by the selection unit into a multi-classification classifier to predict a confidence vector and a most likely color class of the candidate region, the confidence vector consisting of the confidences with which the candidate region belongs to each predefined color class; and
an identification unit, configured to input the confidence vector and the most likely color class predicted by the prediction unit into a vehicle color identification model to identify the body color of the vehicle.
In a third aspect, a vehicle body color identification apparatus is provided, including:
a receiver, configured to acquire a captured picture of a vehicle, the captured picture covering a plurality of components of the vehicle; and
a processor, configured to detect, according to a target detection algorithm, the region where each component is located in the captured picture, thereby obtaining a plurality of regions; select, from the plurality of regions, a candidate region where at least one specified component is located; input the candidate region into a multi-classification classifier to predict a confidence vector and a most likely color class of the candidate region, the confidence vector consisting of the confidences with which the candidate region belongs to each predefined color class; and input the confidence vector and the most likely color class into a vehicle color identification model to identify the body color of the vehicle.
With the vehicle body color identification method and device provided by one or more embodiments of this specification, a captured picture of a vehicle is acquired, and the region where each component is located is detected in the captured picture according to a target detection algorithm, thereby obtaining a plurality of regions. A candidate region where at least one specified component is located is selected from the plurality of regions. The candidate region is input into a multi-classification classifier to predict a confidence vector and a most likely color class of the candidate region, where the confidence vector consists of the confidences with which the candidate region belongs to each predefined color class. The confidence vector and the most likely color class are input into a vehicle color identification model to identify the body color of the vehicle. The body color is therefore identified from the candidate region where at least one specified component is located, which improves the accuracy of body color identification.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of this specification, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of this specification, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic view of an application scenario of the vehicle body color identification method provided in this specification;
FIG. 2 is a flowchart of a method for recognizing vehicle body color according to an embodiment of the present disclosure;
FIG. 3 is a schematic illustration of the areas where components are provided herein;
FIG. 4 is a schematic view of a vehicle body color recognition device provided in an embodiment of the present disclosure;
FIG. 5 is a schematic view of a vehicle body color recognition device according to another embodiment of the present disclosure.
Detailed Description
The scheme provided by the specification is described below with reference to the accompanying drawings.
The vehicle body color identification method provided by one or more embodiments of this specification can be applied to the scenario shown in fig. 1. In fig. 1, the body color identification module 20 is configured to identify the color of the vehicle in a picture. Specifically, for a captured picture of the vehicle, the body color identification module 20 may detect, according to a target detection algorithm, the regions where the various components of the vehicle are located in the captured picture, thereby obtaining a plurality of regions. From the plurality of regions, it then selects at least one candidate region where a component capable of reflecting the body color is located. Each candidate region is input into a multi-classification classifier to predict the confidence vector and the most likely color class of that candidate region, where the confidence vector consists of the confidences with which the candidate region belongs to each predefined color class. The confidence vectors and most likely color classes of the candidate regions are then input into a vehicle color identification model to identify the body color of the vehicle in the captured picture.
The vehicle damage assessment module 40 is configured to assess the damage of the vehicle whose body color has been identified by the body color identification module 20, for example by automatically recognizing the damaged components and the degree of damage reflected in the captured picture and automatically giving a maintenance plan.
It should be understood that fig. 1 shows only one application scenario of the vehicle body color identification method provided by one or more embodiments of this specification. In other application scenarios, further processing, such as splitting the captured picture of the vehicle whose body color has been identified by the body color identification module 20, may also be performed, which is not limited in this specification.
Fig. 2 is a flowchart of a vehicle body color identification method according to an embodiment of this specification. The method may be executed by any device, server, system, or module with processing capability, for example the body color identification module 20 in fig. 1. As shown in fig. 2, the method may specifically include:
Step 202, a captured picture of the vehicle is acquired.
It should be noted that the captured picture is a picture taken of a particular vehicle and may cover a plurality of components of that vehicle. Components here may include, but are not limited to, doors, bumpers, license plates, fenders, headlights, tires, and the like.
Step 204, the region where each component is located is detected in the captured picture according to a target detection algorithm, thereby obtaining a plurality of regions.
The target detection algorithm here may include, but is not limited to, the Faster Region-based Convolutional Neural Network (Faster-RCNN), the Region-based Fully Convolutional Network (R-FCN), the Single Shot MultiBox Detector (SSD), YOLO, and the like. Detecting the regions where the components of the vehicle are located with a target detection algorithm improves the accuracy of region detection.
Optionally, the target detection algorithm may be trained on a plurality of sample pictures, and the regions where the components are located may then be detected in the captured picture with the trained algorithm, which further improves detection accuracy. It should be noted that a sample picture may cover one or more components of a vehicle; for each sample picture, the region where each component is located and the category of that region can be manually calibrated in advance, the category being the name of the component.
In one example, the resulting plurality of regions may be as shown in fig. 3, where rectangular boxes indicate the regions where the components of the vehicle are located. A region may be represented by four-dimensional coordinates, e.g., (x, y, w, h), where x is the abscissa of the upper-left vertex of the region, y is the ordinate of the upper-left vertex, w is the width of the region, and h is the height of the region. In fig. 3, "bumper", "license plate", "fender", "headlight", and "tire" indicate the categories of the respective regions.
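Purely as an illustration of the data involved in this step, the following Python sketch shows one way the detected regions might be represented; the detector interface (`detector.predict`) is a hypothetical placeholder for whichever trained target detection algorithm (Faster-RCNN, SSD, etc.) is actually used, not a specific library API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Region:
    """A detected component region: (x, y, w, h) coordinates plus a component category."""
    x: float        # abscissa of the upper-left vertex
    y: float        # ordinate of the upper-left vertex
    w: float        # width of the region
    h: float        # height of the region
    category: str   # component name, e.g. "bumper", "license plate", "fender"

def detect_component_regions(picture, detector) -> List[Region]:
    """Run a trained target detector on the captured picture and collect the regions.

    `detector.predict` is assumed to yield (box, category) pairs, where box is an
    (x, y, w, h) tuple; this interface is illustrative only.
    """
    regions: List[Region] = []
    for box, category in detector.predict(picture):
        x, y, w, h = box
        regions.append(Region(x, y, w, h, category))
    return regions
```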
Step 206, a candidate region where at least one designated component is located is selected from the plurality of regions.
The designated component here may refer to a component whose color is the same as or similar to the body color, such as the "door", "bumper", "fender", and "hood".
In one implementation, the candidate regions may be selected according to the categories of the regions. For example, when the category of a region is the name of one of the designated components, that region may be selected as a candidate region; in fig. 3, for instance, regions A, B, and C may be selected as candidate regions. That is, the number of candidate regions in this specification may be plural.
Here, by selecting a candidate area where the designated component is located, interference of an irrelevant component can be avoided.
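A minimal sketch of this selection step, assuming the `Region` representation from the sketch above and an illustrative set of designated component names taken from the examples in the text:

```python
# Components whose color is assumed to match or approximate the body color
# (an illustrative set based on the examples given above).
DESIGNATED_COMPONENTS = {"door", "bumper", "fender", "hood"}

def select_candidate_regions(regions, designated=DESIGNATED_COMPONENTS):
    """Keep only the regions whose category is one of the designated components."""
    return [region for region in regions if region.category in designated]
```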
Step 208, the candidate region is input into a multi-classification classifier to predict the confidence vector and the most likely color class of the candidate region.
The confidence vector here consists of the confidences with which the candidate region belongs to each predefined color class. The multi-classification classifier here may also be referred to as a multi-classification model. The multi-classification classifier can be obtained by training a lightweight neural network model on a plurality of sample regions with color class labels. The lightweight neural network model here may include, but is not limited to, MobileNet, SqueezeNet, Inception, Xception, ShuffleNet, and the like. Obtaining the multi-classification classifier by training a lightweight neural network model improves prediction efficiency. In addition, the multi-classification classifier trained in this specification can predict the color class of the candidate region where each component of the vehicle is located, which improves the comprehensiveness of the prediction.
A sample region is defined similarly to a candidate region, i.e., it is a region where a component whose color is the same as or similar to the body color is located, except that the sample region is calibrated from a sample picture. In addition, the color class label of a sample region can be manually preset according to the color of the component corresponding to the sample region, and may include, but is not limited to, black, blue, red, silver, white, other colors, undetermined, and the like.
It should be noted that, when training the multi-classification classifier, the sample regions may be detected and calibrated from sample pictures by a pre-trained target detection algorithm, or may be calibrated manually in the sample pictures.
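The specification does not fix a particular framework or backbone. As a hedged sketch only, the code below builds one possible multi-classification classifier by replacing the classification head of torchvision's MobileNetV2 (any of the other lightweight models mentioned above could be substituted) and shows a single supervised training step on cropped, labeled sample regions; the color class list mirrors the labels described above.

```python
import torch.nn as nn
from torchvision import models

# Predefined color classes; "undetermined" covers regions whose color cannot be judged.
COLOR_CLASSES = ["black", "blue", "red", "silver", "white", "other", "undetermined"]

def build_color_classifier(num_classes: int = len(COLOR_CLASSES)) -> nn.Module:
    """One possible lightweight multi-classification classifier: MobileNetV2
    with its final layer replaced so that it outputs one score per color class."""
    model = models.mobilenet_v2(weights=None)
    model.classifier[1] = nn.Linear(model.last_channel, num_classes)
    return model

def train_step(model, optimizer, region_batch, label_batch):
    """A single supervised training step on a batch of cropped sample regions.

    region_batch: (N, 3, H, W) tensor of cropped, resized sample regions.
    label_batch:  (N,) tensor of color class indices (the manual labels)."""
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    logits = model(region_batch)
    loss = criterion(logits, label_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```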
The predefined color classes in step 208 correspond to the color class labels of the sample regions. Further, the most likely color class in step 208 may be determined according to the confidences with which the candidate region belongs to the respective predefined color classes. For example, the predefined color class with the maximum confidence may be determined as the most likely color class of the candidate region; alternatively, a predefined color class whose confidence exceeds a threshold may be determined as the most likely color class.
It should be noted that, when the second method is adopted, a candidate region is still assigned only one most likely color class. For example, the predefined color classes may be ordered by priority from high to low, and their confidences are then checked in that order; the first predefined color class whose confidence exceeds the threshold is selected as the most likely color class, and the remaining predefined color classes are not checked.
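The two strategies described above can be sketched as follows; the softmax step (turning raw classifier outputs into confidences), the priority order, and the 0.5 threshold are illustrative assumptions rather than values fixed by this specification.

```python
import numpy as np

COLOR_CLASSES = ["black", "blue", "red", "silver", "white", "other", "undetermined"]

def confidence_vector(logits):
    """Softmax over the classifier's raw outputs: one confidence per predefined color class."""
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

def most_likely_by_argmax(confidences, classes=COLOR_CLASSES):
    """First strategy: the predefined color class with the maximum confidence."""
    return classes[int(np.argmax(confidences))]

def most_likely_by_priority(confidences, priority_order, classes=COLOR_CLASSES, threshold=0.5):
    """Second strategy: check the classes in priority order (high to low) and return
    the first one whose confidence exceeds the threshold; later classes are not checked."""
    for cls in priority_order:
        if confidences[classes.index(cls)] > threshold:
            return cls
    return "undetermined"  # illustrative fallback when no confidence exceeds the threshold
```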
It is understood that when there are plural candidate regions, plural confidence vectors and most likely color classes are predicted, one pair per candidate region.
Step 210, the confidence vector and the most likely color class are input into a vehicle color identification model to identify the body color of the vehicle.
The vehicle color identification model may be obtained by training a decision tree, a support vector machine, or a random forest, using as input data the confidence vectors and most likely color classes of the sample regions of a plurality of sample pictures. The confidence vector of a sample region consists of the confidences with which the sample region belongs to each predefined color class. The sample pictures and sample regions are defined as above and are not repeated here. It should be noted that each sample picture has a corresponding color class label, which may be manually calibrated in advance. Further, the confidence vector and the most likely color class of a sample region may be obtained by inputting the sample region into the pre-trained multi-classification classifier.
When there are plural candidate regions, the confidence vectors and most likely color classes of all the candidate regions may be input into the vehicle color identification model. That is, in this specification, the body color of the vehicle can be identified according to the most likely color classes of a plurality of candidate regions, which improves the accuracy and robustness of body color identification.
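As one possible realization (the specification allows a decision tree, a support vector machine, or a random forest), the sketch below trains a scikit-learn random forest on feature vectors built by concatenating, for each picture, the confidence vector and the most likely color class index of every candidate (or sample) region. The fixed number of regions per picture and this particular feature layout are illustrative assumptions, not requirements of this specification.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

COLOR_CLASSES = ["black", "blue", "red", "silver", "white", "other", "undetermined"]

def build_features(confidence_vectors, most_likely_classes):
    """Concatenate, for one picture, each region's confidence vector and the
    index of its most likely color class into a single flat feature vector."""
    parts = []
    for conf, cls in zip(confidence_vectors, most_likely_classes):
        parts.extend(conf)
        parts.append(COLOR_CLASSES.index(cls))
    return np.asarray(parts, dtype=float)

def train_vehicle_color_model(feature_matrix, body_color_labels):
    """Train the vehicle color identification model.

    feature_matrix: one row per sample picture, built with build_features from
                    that picture's sample regions.
    body_color_labels: the manually calibrated body color of each sample picture."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(feature_matrix, body_color_labels)
    return model

def identify_body_color(model, confidence_vectors, most_likely_classes):
    """Apply the same feature construction to the candidate regions of a captured
    picture and return the identified body color."""
    features = build_features(confidence_vectors, most_likely_classes).reshape(1, -1)
    return model.predict(features)[0]
```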
The body color identified in step 210 may be any one of the following: black, blue, red, silver, white, another color, undetermined, and the like.
In summary, the vehicle body color identification method provided by one or more embodiments of this specification detects the regions where the components of the vehicle are located through a target detection algorithm, which improves the accuracy of region detection. In addition, obtaining the multi-classification classifier by training a lightweight neural network model improves prediction efficiency. Finally, the vehicle color identification model identifies the body color according to the confidence vectors and most likely color classes of the candidate regions rather than relying on the color of a single component, which avoids the problem that the identified body color is inaccurate when the color of a single component changes under the influence of lighting or the like.
Corresponding to the above vehicle body color identification method, an embodiment of this specification further provides a vehicle body color identification device. As shown in fig. 4, the device may include:
An acquisition unit 402 for acquiring a captured picture of the vehicle. The captured picture covers a plurality of components of the vehicle.
A detecting unit 404, configured to detect, according to a target detection algorithm, the region where each component is located in the captured picture acquired by the acquisition unit 402, so as to obtain a plurality of regions.
The target detection algorithm here may include, but is not limited to, any of the following: the Faster Region-based Convolutional Neural Network (Faster-RCNN), the Region-based Fully Convolutional Network (R-FCN), the Single Shot MultiBox Detector (SSD), and YOLO, among others.
A selecting unit 406, configured to select a candidate region where at least one of the designated components is located from the plurality of regions detected by the detecting unit 404.
The designated component here may refer to a component, among the plurality of components, whose color is the same as or similar to the body color.
A predicting unit 408, configured to input the candidate region selected by the selecting unit 406 into the multi-classification classifier to predict the confidence vector and the most likely color class of the candidate region. The confidence vector consists of the confidences with which the candidate region belongs to each predefined color class.
The multi-classification classifier can be obtained by training a lightweight neural network model on a plurality of sample regions with color class labels. A sample region corresponds to a component of the vehicle in a sample picture.
An identifying unit 410, configured to input the confidence vector and the most likely color class predicted by the predicting unit 408 into the vehicle color identification model to identify the body color of the vehicle.
The vehicle color identification model can be obtained by training a decision tree, a support vector machine, or a random forest, using as input data the confidence vectors and most likely color classes of the sample regions of a plurality of sample pictures. The confidence vector of a sample region consists of the confidences with which the sample region belongs to each predefined color class. A sample picture covers one or more components of a vehicle, and a sample region corresponds to a component of the vehicle in the sample picture.
The functions of the functional modules of the device in the above embodiment of this specification may be implemented through the steps of the above method embodiment; therefore, the specific working process of the device provided in this embodiment of this specification is not repeated here.
In the vehicle body color recognition device according to an embodiment of the present disclosure, the obtaining unit 402 obtains a captured picture of a vehicle. The detection unit 404 detects the area where each component is located in the captured picture according to the target detection algorithm, thereby obtaining a plurality of areas. The selecting unit 406 selects a candidate region in which at least one specified component is located from the plurality of regions. The prediction unit 408 inputs the candidate region into a multi-classification classifier to predict a confidence vector and a most likely color class of the candidate region. The confidence vector is formed by the confidence with which the candidate region belongs to each predefined color class, respectively. The recognition unit 410 inputs the confidence vector and the most probable color class into the vehicle color recognition model to recognize the body color of the vehicle. Therefore, the accuracy of vehicle body color recognition of the vehicle can be improved.
The vehicle body color identification device provided by an embodiment of this specification may be a module or unit of the body color identification module 20 in fig. 1.
Corresponding to the above vehicle body color identification method, an embodiment of this specification further provides a vehicle body color identification device. As shown in fig. 5, the device may include:
A receiver 502, configured to acquire a captured picture of the vehicle. The captured picture covers a plurality of components of the vehicle.
At least one processor 504, configured to detect, according to a target detection algorithm, the region where each component is located in the captured picture, thereby obtaining a plurality of regions. A candidate region where at least one specified component is located is selected from the plurality of regions. The candidate region is input into a multi-classification classifier to predict a confidence vector and a most likely color class of the candidate region. The confidence vector consists of the confidences with which the candidate region belongs to each predefined color class. The confidence vector and the most likely color class are input into a vehicle color identification model to identify the body color of the vehicle.
The vehicle body color recognition device provided by one embodiment of the specification can improve the accuracy of vehicle body color recognition of a vehicle.
Fig. 5 shows an example in which the vehicle body color identification device provided in this embodiment of the specification is located in a server. In practical applications, the device may also be located in a terminal, which is not limited in this specification.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, reference may be made between the embodiments, and each embodiment focuses on its differences from the others. In particular, the apparatus embodiments are described relatively simply because they are substantially similar to the method embodiment; for relevant details, reference may be made to the corresponding description of the method embodiment.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules that may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a server. Of course, the processor and the storage medium may also reside as discrete components in a server.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The foregoing describes the objects, technical solutions, and advantages of this specification in further detail. It should be understood that the above are merely specific embodiments of this specification and are not intended to limit its protection scope; any modification, equivalent substitution, improvement, or the like made on the basis of the technical solutions of this specification shall fall within its protection scope.

Claims (11)

1. A vehicle body color identification method, comprising:
acquiring a captured picture of a vehicle, the captured picture covering a plurality of components of the vehicle;
detecting, according to a target detection algorithm, the region where each component is located in the captured picture, thereby obtaining a plurality of regions;
selecting, from the plurality of regions, a candidate region where at least one specified component is located;
inputting the candidate region into a multi-classification classifier to predict a confidence vector and a most likely color class of the candidate region, the confidence vector consisting of the confidences with which the candidate region belongs to each predefined color class; and
inputting the confidence vector and the most likely color class into a vehicle color identification model to identify the body color of the vehicle.
2. The method according to claim 1, wherein the specified component is a component, among the plurality of components, whose color is the same as or similar to the body color.
3. The method according to claim 1 or 2, wherein the target detection algorithm comprises any one of: the Faster Region-based Convolutional Neural Network (Faster-RCNN), the Region-based Fully Convolutional Network (R-FCN), the Single Shot MultiBox Detector (SSD), and YOLO.
4. The method according to claim 1 or 2, wherein the multi-classification classifier is obtained by training a lightweight neural network model on a plurality of sample regions with color class labels, a sample region corresponding to a component of the vehicle in a sample picture.
5. The method according to claim 1 or 2, wherein the vehicle color identification model is obtained by training a decision tree, a support vector machine, or a random forest, using as input data the confidence vectors and most likely color classes of the sample regions of a plurality of sample pictures; the confidence vector of a sample region consists of the confidences with which the sample region belongs to each predefined color class; a sample picture covers one or more components of a vehicle, and a sample region corresponds to a component of the vehicle in the sample picture.
6. A vehicle body color identification device, comprising:
an acquisition unit, configured to acquire a captured picture of a vehicle, the captured picture covering a plurality of components of the vehicle;
a detection unit, configured to detect, according to a target detection algorithm, the region where each component is located in the captured picture acquired by the acquisition unit, so as to obtain a plurality of regions;
a selection unit, configured to select, from the plurality of regions detected by the detection unit, a candidate region where at least one specified component is located;
a prediction unit, configured to input the candidate region selected by the selection unit into a multi-classification classifier to predict a confidence vector and a most likely color class of the candidate region, the confidence vector consisting of the confidences with which the candidate region belongs to each predefined color class; and
an identification unit, configured to input the confidence vector and the most likely color class predicted by the prediction unit into a vehicle color identification model to identify the body color of the vehicle.
7. The device according to claim 6, wherein the specified component is a component, among the plurality of components, whose color is the same as or similar to the body color.
8. The device according to claim 6 or 7, wherein the target detection algorithm comprises any one of: the Faster Region-based Convolutional Neural Network (Faster-RCNN), the Region-based Fully Convolutional Network (R-FCN), the Single Shot MultiBox Detector (SSD), and YOLO.
9. The device according to claim 6 or 7, wherein the multi-classification classifier is obtained by training a lightweight neural network model on a plurality of sample regions with color class labels, a sample region corresponding to a component of the vehicle in a sample picture.
10. The device according to claim 6 or 7, wherein the vehicle color identification model is obtained by training a decision tree, a support vector machine, or a random forest, using as input data the confidence vectors and most likely color classes of the sample regions of a plurality of sample pictures; the confidence vector of a sample region consists of the confidences with which the sample region belongs to each predefined color class; a sample picture covers one or more components of a vehicle, and a sample region corresponds to a component of the vehicle in the sample picture.
11. A vehicle body color identification device, comprising:
a receiver, configured to acquire a captured picture of a vehicle, the captured picture covering a plurality of components of the vehicle; and
a processor, configured to detect, according to a target detection algorithm, the region where each component is located in the captured picture, thereby obtaining a plurality of regions; select, from the plurality of regions, a candidate region where at least one specified component is located; input the candidate region into a multi-classification classifier to predict a confidence vector and a most likely color class of the candidate region, the confidence vector consisting of the confidences with which the candidate region belongs to each predefined color class; and input the confidence vector and the most likely color class into a vehicle color identification model to identify the body color of the vehicle.
CN201810936776.2A 2018-08-16 2018-08-16 Vehicle body color recognition method and device Active CN110569693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810936776.2A CN110569693B (en) 2018-08-16 2018-08-16 Vehicle body color recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810936776.2A CN110569693B (en) 2018-08-16 2018-08-16 Vehicle body color recognition method and device

Publications (2)

Publication Number Publication Date
CN110569693A true CN110569693A (en) 2019-12-13
CN110569693B CN110569693B (en) 2023-05-12

Family

ID=68772339

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810936776.2A Active CN110569693B (en) 2018-08-16 2018-08-16 Vehicle body color recognition method and device

Country Status (1)

Country Link
CN (1) CN110569693B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325256A (en) * 2020-02-13 2020-06-23 上海眼控科技股份有限公司 Vehicle appearance detection method and device, computer equipment and storage medium
CN111401289A (en) * 2020-03-24 2020-07-10 国网上海市电力公司 Intelligent identification method and device for transformer component
CN112330619A (en) * 2020-10-29 2021-02-05 浙江大华技术股份有限公司 Method, device and equipment for detecting target area and storage medium
WO2022241807A1 (en) * 2021-05-20 2022-11-24 广州广电运通金融电子股份有限公司 Method for recognizing color of vehicle body of vehicle, and storage medium and terminal

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009244946A (en) * 2008-03-28 2009-10-22 Fujitsu Ltd Traffic light recognizing apparatus, traffic light recognizing method, and traffic light recognizing program
CN103440503A (en) * 2013-09-12 2013-12-11 青岛海信网络科技股份有限公司 Vehicle body color detection and identification method
US20150030255A1 (en) * 2013-07-25 2015-01-29 Canon Kabushiki Kaisha Method and apparatus for classifying pixels in an input image and image processing system
CN106384117A (en) * 2016-09-14 2017-02-08 东软集团股份有限公司 Vehicle color recognition method and device
CN107358596A (en) * 2017-04-11 2017-11-17 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device, electronic equipment and system
CN107480676A (en) * 2017-07-28 2017-12-15 浙江大华技术股份有限公司 A kind of vehicle color identification method, device and electronic equipment
US20180046935A1 (en) * 2016-08-09 2018-02-15 Microsoft Technology Licensing, Llc Interactive performance visualization of multi-class classifier
US20180114337A1 (en) * 2016-10-20 2018-04-26 Sun Yat-Sen University Method and system of detecting and recognizing a vehicle logo based on selective search

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009244946A (en) * 2008-03-28 2009-10-22 Fujitsu Ltd Traffic light recognizing apparatus, traffic light recognizing method, and traffic light recognizing program
US20150030255A1 (en) * 2013-07-25 2015-01-29 Canon Kabushiki Kaisha Method and apparatus for classifying pixels in an input image and image processing system
CN103440503A (en) * 2013-09-12 2013-12-11 青岛海信网络科技股份有限公司 Vehicle body color detection and identification method
US20180046935A1 (en) * 2016-08-09 2018-02-15 Microsoft Technology Licensing, Llc Interactive performance visualization of multi-class classifier
CN106384117A (en) * 2016-09-14 2017-02-08 东软集团股份有限公司 Vehicle color recognition method and device
US20180114337A1 (en) * 2016-10-20 2018-04-26 Sun Yat-Sen University Method and system of detecting and recognizing a vehicle logo based on selective search
CN107358596A (en) * 2017-04-11 2017-11-17 阿里巴巴集团控股有限公司 A kind of car damage identification method based on image, device, electronic equipment and system
CN107480676A (en) * 2017-07-28 2017-12-15 浙江大华技术股份有限公司 A kind of vehicle color identification method, device and electronic equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325256A (en) * 2020-02-13 2020-06-23 上海眼控科技股份有限公司 Vehicle appearance detection method and device, computer equipment and storage medium
CN111401289A (en) * 2020-03-24 2020-07-10 国网上海市电力公司 Intelligent identification method and device for transformer component
CN111401289B (en) * 2020-03-24 2024-01-23 国网上海市电力公司 Intelligent identification method and device for transformer component
CN112330619A (en) * 2020-10-29 2021-02-05 浙江大华技术股份有限公司 Method, device and equipment for detecting target area and storage medium
CN112330619B (en) * 2020-10-29 2023-10-10 浙江大华技术股份有限公司 Method, device, equipment and storage medium for detecting target area
WO2022241807A1 (en) * 2021-05-20 2022-11-24 广州广电运通金融电子股份有限公司 Method for recognizing color of vehicle body of vehicle, and storage medium and terminal

Also Published As

Publication number Publication date
CN110569693B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
CN110569693B (en) Vehicle body color recognition method and device
US20210327042A1 (en) Deep learning-based system and method for automatically determining degree of damage to each area of vehicle
TWI698802B (en) Vehicle parts detection method, device and equipment
KR101848019B1 (en) Method and Apparatus for Detecting Vehicle License Plate by Detecting Vehicle Area
US9501703B2 (en) Apparatus and method for recognizing traffic sign board
US11087153B2 (en) Traffic light recognition system and method
US20220027664A1 (en) Method for common detecting, trackng and classifying of objects
CN107194393B (en) Method and device for detecting temporary license plate
CN109670383B (en) Video shielding area selection method and device, electronic equipment and system
CN111027535B (en) License plate recognition method and related equipment
CN108734684B (en) Image background subtraction for dynamic illumination scene
KR101224164B1 (en) Pre- processing method and apparatus for license plate recognition
CN112598922B (en) Parking space detection method, device, equipment and storage medium
CN111783573B (en) High beam detection method, device and equipment
CN112926461B (en) Neural network training and driving control method and device
CN108537223B (en) License plate detection method, system and equipment and storage medium
CN112784675B (en) Target detection method and device, storage medium and terminal
Agarwal et al. Vehicle Characteristic Recognition by Appearance: Computer Vision Methods for Vehicle Make, Color, and License Plate Classification
CN111709377B (en) Feature extraction method, target re-identification method and device and electronic equipment
CN110569692B (en) Multi-vehicle identification method, device and equipment
CN115661131B (en) Image identification method and device, electronic equipment and storage medium
KR101741758B1 (en) A Real-time Face Tracking Method Robust to Occlusion Based on Improved CamShift with Depth Information
JP4784932B2 (en) Vehicle discrimination device and program thereof
US20230342937A1 (en) Vehicle image analysis
US20170336283A1 (en) Method for checking the position of characteristic points in light distributions

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40018832

Country of ref document: HK

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20200928

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Innovative advanced technology Co.,Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant before: Advanced innovation technology Co.,Ltd.

Effective date of registration: 20200928

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman Islands

Applicant after: Advanced innovation technology Co.,Ltd.

Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant
GR01 Patent grant