CN111325211A - Method for automatically recognizing color of vehicle, electronic device, computer apparatus, and medium - Google Patents

Method for automatically recognizing color of vehicle, electronic device, computer apparatus, and medium Download PDF

Info

Publication number
CN111325211A
CN111325211A (application number CN202010090012.3A)
Authority
CN
China
Prior art keywords
vehicle body
vehicle
image
color
body area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010090012.3A
Other languages
Chinese (zh)
Inventor
周康明
申周
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN202010090012.3A priority Critical patent/CN111325211A/en
Publication of CN111325211A publication Critical patent/CN111325211A/en


Classifications

    • G06V 10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/24: Classification techniques
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods (neural networks)
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30248: Vehicle exterior or interior
    • G06V 2201/08: Detecting or categorising vehicles

Abstract

According to the method for automatically recognizing vehicle color, the electronic device, the computer apparatus, and the medium disclosed herein: an image of the vehicle to be detected is acquired; the vehicle body region in the image is recognized by a body semantic segmentation model to obtain a body region image; the body region image is processed and input into a body color classification model to obtain a classification result; and the classification result is compared with the color recorded in the file of the vehicle to be detected to determine whether the two are consistent. The method simplifies the task, improves working efficiency, reduces cost, and ensures the fairness and impartiality of annual vehicle inspection and audit.

Description

Method for automatically recognizing color of vehicle, electronic device, computer apparatus, and medium
Technical Field
The invention relates to the technical field of deep-learning-based object detection, and in particular to a method for automatically recognizing vehicle color, an electronic device, a computer apparatus, and a medium.
Background
With rising living standards, the number of motor vehicles has increased sharply, and so has the annual-inspection workload at vehicle administration stations. Recognizing the exterior color of a vehicle is one of the important inspection items. Traditionally, exterior color is judged manually, which is labor-intensive, highly subjective, and inefficient. The traditional image-processing approach converts the RGB channels to HSV channels and then counts pixels per color, taking the most frequent color as the vehicle color; this approach is strongly affected by the illumination at capture time, and the result is often inaccurate.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, it is an object of the present application to provide a method, an electronic device, a computer apparatus, and a medium for automatically recognizing a color of a vehicle, so as to solve the problems in the prior art.
To achieve the above and other related objects, the present application provides a method for automatically recognizing the color of a vehicle, the method comprising: acquiring an image of the vehicle to be detected; recognizing the vehicle body region in the image with a body semantic segmentation model to obtain a body region image; processing the body region image and inputting the processed image into a body color classification model to obtain a classification result; and comparing the classification result with the color recorded in the file of the vehicle to be detected to determine whether the color represented by the classification result is consistent with the recorded color.
In an embodiment of the present application, the method for obtaining the semantic segmentation model of the vehicle body includes: obtaining vehicle sample images under different shooting conditions, wherein each vehicle sample image comprises a vehicle to be detected; respectively labeling the body area of the vehicle to be detected in each vehicle sample image; and training by using the labeled vehicle sample images to obtain the vehicle body semantic segmentation model.
In an embodiment of the application, the labeling the body area of the vehicle to be detected in each of the vehicle sample images respectively includes: judging whether the vehicle body area of the vehicle to be detected belongs to a smooth area or not; and if so, acquiring a plurality of boundary points of the smooth area, and performing curve fitting according to the boundary points to mark the body area of the vehicle to be detected.
In an embodiment of the application, the training using the labeled vehicle sample images to obtain the semantic segmentation model of the vehicle body includes: inputting each marked vehicle sample image into a semantic segmentation network for training; wherein the initial learning rate of the semantic segmentation network is 0.001; and when the semantic segmentation network is converged, stopping training to obtain the vehicle body semantic segmentation model.
In an embodiment of the application, processing the body region image includes: obtaining the circumscribed rectangle of the body region in the body region image; cropping the body region image to that rectangle so as to increase the proportion of the image occupied by the body region, obtaining a rescaled body region image; and setting the pixel values of the target regions in the rescaled image to the pixel value for black, obtaining the processed body region image; wherein the target regions include any one or more of windows, lights, tires, and emblems.
In an embodiment of the present application, the method for obtaining the color classification model includes: obtaining vehicle body area images obtained under different shooting conditions; respectively labeling the colors in the images of the vehicle body areas; and training by using the marked images of the vehicle body areas to obtain the vehicle body color classification model.
In an embodiment of the application, training with the labeled body region images to obtain the body color classification model includes: replacing ordinary convolutions in a residual network with dilated convolutions; pruning a target number of channels in the residual network and setting the size of the final fully connected layer to the number of body color categories, obtaining an adjusted residual network; training on the labeled body region images with the adjusted residual network; and stopping training when the adjusted network converges, obtaining the body color classification model.
To achieve the above and other related objects, the present application provides an electronic device, comprising: the acquisition module is used for acquiring an image of a vehicle to be detected; the processing module is used for identifying the vehicle body area in the vehicle image to be detected based on the vehicle body semantic segmentation model so as to obtain a vehicle body area image; processing the vehicle body area image, and inputting the processed vehicle body area image into a vehicle body color classification model to obtain a classification result; and comparing the classification result with the color recorded in the file corresponding to the vehicle to be detected so as to judge whether the color represented by the classification result is consistent with the color recorded in the file.
To achieve the above and other related objects, the present application provides a computer apparatus, comprising: a memory, and a processor; the memory is to store computer instructions; the processor executes computer instructions to implement the method as described above.
To achieve the above and other related objects, the present application provides a computer readable storage medium storing computer instructions which, when executed, perform the method as described above.
In summary, according to the method for automatically recognizing vehicle color, the electronic device, the computer apparatus, and the medium: an image of the vehicle to be detected is acquired; the body region in the image is recognized by the body semantic segmentation model to obtain a body region image; the body region image is processed and input into the body color classification model to obtain a classification result; and the classification result is compared with the color recorded in the vehicle's file to determine whether they are consistent.
The approach has the following beneficial effects: it simplifies the task, improves working efficiency, reduces cost, and ensures the fairness and impartiality of annual vehicle inspection and audit.
Drawings
Fig. 1 is a flowchart illustrating an automatic vehicle color identification method according to an embodiment of the present application.
Fig. 2 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is provided by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application is capable of other and different embodiments and its several details are capable of modifications and/or changes in various respects, all without departing from the spirit of the present application. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings so that those skilled in the art to which the present application pertains can easily carry out the present application. The present application may be embodied in many different forms and is not limited to the embodiments described herein.
In order to clearly explain the present application, components that are not related to the description are omitted, and the same reference numerals are given to the same or similar components throughout the specification.
Throughout the specification, when a component is referred to as being "connected" to another component, this includes not only the case of being "directly connected" but also the case of being "indirectly connected" with another element interposed therebetween. In addition, when a component is referred to as "including" a certain constituent element, unless otherwise stated, it means that the component may include other constituent elements, without excluding other constituent elements.
When an element is referred to as being "on" another element, it can be directly on the other element, or intervening elements may also be present. When a component is referred to as being "directly on" another component, there are no intervening components present.
Although the terms first, second, etc. may be used herein to describe various elements in some instances, these elements should not be limited by these terms; the terms are only used to distinguish one element from another, for example a first interface and a second interface. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination; thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a", "an" and "the" include plural forms as long as the words do not expressly indicate a contrary meaning. The term "comprises/comprising" when used in this specification is taken to specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of other features, regions, integers, steps, operations, elements, and/or components.
Terms indicating relative spatial position, such as "lower" and "upper", may be used to describe the relationship of one component to another illustrated in the drawings. Such terms are intended to cover not only the orientation shown in the drawings but also other orientations of the device in use: if the device in the figures is turned over, elements described as "below" other elements would then be oriented "above" them, so the exemplary terms "under" and "beneath" can encompass both above and below. The device may also be rotated 90 degrees or by other angles, and the spatially relative terms are to be interpreted accordingly.
The existing body-color check in annual vehicle inspection is usually performed manually, which is costly and highly subjective. The traditional image-processing approach converts the RGB channels to HSV channels and counts pixels per color, taking the most frequent color as the vehicle color; it is strongly affected by the illumination at capture time, and the result is often inaccurate.
RGB values describe pure additive color, while the HSV model describes a color by hue (H), saturation (S), and value (V). The three-dimensional HSV representation can be derived from the RGB cube: viewed along the cube's main diagonal from the white vertex toward the black vertex, the cube appears as a hexagon. The hexagonal boundary represents hue, the horizontal axis represents saturation, and brightness is measured along the vertical axis.
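As a concrete illustration of the pixel-counting baseline criticized above (the bin count and sample pixel values are my own choices; the patent does not specify them), Python's standard `colorsys` module can vote per coarse hue bin:

```python
import colorsys
from collections import Counter

def dominant_hue_bin(pixels, bins=12):
    """Vote for the most frequent coarse hue bin: the classic
    RGB-to-HSV pixel-counting baseline that the patent criticizes."""
    votes = Counter()
    for r, g, b in pixels:
        h, _s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        votes[int(h * bins) % bins] += 1
    return votes.most_common(1)[0][0]

# A mostly-red patch with a few blue outlier pixels (toy data).
patch = [(200, 10, 10)] * 8 + [(10, 10, 200)] * 2
print(dominant_hue_bin(patch))  # red has hue 0, so the winning bin is 0
```

Because the vote is taken over all pixels, glare, shadows, or background pixels can easily shift the winning bin, which is exactly the weakness the segmentation-plus-classification pipeline below is designed to avoid.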
The application aims to provide a method for identifying the color of a vehicle body for vehicle annual inspection, which adopts a deep learning mode to make the vehicle annual inspection process more intelligent.
This application performs intelligent vehicle-color recognition with deep learning, combining segmentation, post-processing, and classification rather than classifying the whole image directly: with direct classification, pixels outside the body region would influence the model's result. The method highlights the body color that matters in the image and ignores the irrelevant parts, so that this color dominates the classifier's decision. The approach improves working efficiency, reduces cost, and ensures the fairness and impartiality of annual vehicle inspection and audit.
Fig. 1 is a schematic flow chart illustrating a method for automatically identifying a vehicle color according to an embodiment of the present application. As shown, the method comprises:
step S101: and acquiring a license plate image to be detected.
In this embodiment, the license plate image to be detected is an image of a corresponding vehicle body, which is shot by a human or a machine during annual inspection of the vehicle. Namely, the subsequent detection process is carried out according to the existing license plate image to be detected or the normally acquired vehicle image to be detected.
The vehicle images to be detected can be vehicle images under different angles, different lighting conditions, different shooting angles, different types and different colors. And only one vehicle to be detected needs to be contained in one vehicle image to be detected, otherwise, the picture is not standard.
Step S102: and identifying the vehicle body area in the vehicle image to be detected based on the vehicle body semantic segmentation model to obtain a vehicle body area image.
In an embodiment of the present application, the method for obtaining the semantic segmentation model of the vehicle body includes:
training data preparation: and obtaining the vehicle sample images under different shooting conditions, such as obtaining the vehicle sample images with different angles, different lighting conditions, different shooting angles, different types and different colors. That is, vehicle sample images under various conditions, states and parameters are obtained, but each vehicle sample image needs to include a vehicle to be detected, and if an image includes a plurality of vehicles, the image is an irregular image.
Data annotation: the body region of the vehicle to be detected in each of the vehicle sample images is labeled, for example, the body region in the vehicle sample image may be labeled by a polygon.
In the present embodiment, the body region mainly includes: the roof, hood, trunk, and sides of the vehicle, but not the windows, lights, tires, emblems, and the like.
Model training: and training by using the labeled vehicle sample images to obtain the vehicle body semantic segmentation model.
In this embodiment, the detailed steps of obtaining the semantic segmentation model of the vehicle body include:
Firstly, vehicle sample images of vehicles to be detected are captured at the inspection site under different backgrounds, illuminations, shooting angles, vehicle types, and colors.
Next, the body regions in all the data are labeled with polygons: it is judged whether the body region of the vehicle to be detected belongs to a smooth region; if so, several boundary points of the smooth region are taken and a curve is fitted through them to label the body region.
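The curve-fitting step for smooth boundaries can be sketched as a least-squares polynomial fit through sampled boundary points (the polynomial degree and the normal-equations solver here are illustrative choices; the patent does not specify the fitting method):

```python
def fit_poly(points, degree=2):
    """Least-squares polynomial fit through sampled boundary points.
    Returns coefficients c[0] + c[1]*x + c[2]*x**2 + ... using naive
    normal equations plus Gaussian elimination; fine for a handful
    of annotation points."""
    n = degree + 1
    # Normal equations A c = b, with A[i][j] = sum(x**(i+j)).
    A = [[float(sum(x ** (i + j) for x, _ in points)) for j in range(n)]
         for i in range(n)]
    b = [float(sum(y * x ** i for x, y in points)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs

# Boundary points sampled from y = x**2 recover [0, 0, 1] up to float error.
pts = [(x, x * x) for x in range(-3, 4)]
print([round(c, 6) for c in fit_poly(pts)])
```

In practice an annotation tool would densify the fitted curve into polygon vertices; the point here is only that a few clicked boundary points suffice to label a smooth body contour.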
Then the labeled data are augmented; to help the model adapt to multi-scale information, images are randomly padded at the borders, with the padded area filled with the mean pixel value. Each labeled vehicle sample image is then fed into a semantic segmentation network (SegNet) for training; preferably, the initial learning rate is set to 0.001 and decayed in multi-step fashion as the iteration count grows, with SGD as the optimizer.
And finally, stopping training when the semantic segmentation network is converged to obtain the vehicle body semantic segmentation model.
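The multi-step learning-rate decay used for the segmentation network can be sketched in isolation (the milestones and the decay factor gamma are assumed values; the embodiment fixes only the initial rate of 0.001):

```python
def multistep_lr(base_lr, step, milestones, gamma=0.1):
    """Multi-step decay: multiply the base rate by gamma once for
    every milestone iteration already passed."""
    return base_lr * gamma ** sum(1 for m in milestones if step >= m)

# Initial rate 0.001 as in the embodiment; milestones (30, 60) are assumed.
for step in (0, 29, 30, 60):
    print(step, multistep_lr(0.001, step, milestones=(30, 60)))
```

A framework scheduler (e.g. a MultiStepLR-style utility) would normally handle this; the standalone function just makes the decay rule explicit.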
Step S103: and processing the vehicle body area image, and inputting the processed vehicle body area image into a vehicle body color classification model to obtain a classification result.
In an embodiment of the present application, the processing the body region image includes:
A. acquiring a circumscribed rectangle of the vehicle body area in the vehicle body area image;
B. intercepting the vehicle body area image according to the circumscribed rectangle to increase the proportion of the vehicle body area in the vehicle body area image to obtain a vehicle body area image after proportion adjustment;
C. setting the pixel values of the target regions in the rescaled body region image to the pixel value for black to obtain the processed body region image; wherein the target regions include any one or more of windows, lights, tires, and emblems.
In this embodiment, the circumscribed rectangle of the body region in the body region image is obtained first, and the image is cropped to this rectangle so that the body region's share of the picture increases and the body fills as much of the frame as possible. The pixel values of the target regions are then set to black, so that everything except the body is black pixels and the body stands out. The body region mainly comprises the roof, hood, trunk, and vehicle sides, while the target regions in this application are the non-body areas, including but not limited to windows, lights, tires, and emblems. The picture after these two operations serves as the input to the color classification model.
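The two post-processing operations above can be sketched in pure Python on toy data (a minimal illustration; a real implementation would operate on NumPy/OpenCV arrays):

```python
def crop_and_blackout(image, mask):
    """Crop the image to the bounding (circumscribed) rectangle of the
    body mask, then set every non-body pixel inside the crop to black
    so that only body color survives.
    image: H x W list of (r, g, b) tuples; mask: H x W list of 0/1."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    top, bottom, left, right = min(rows), max(rows), min(cols), max(cols)
    return [
        [image[i][j] if mask[i][j] else (0, 0, 0)
         for j in range(left, right + 1)]
        for i in range(top, bottom + 1)
    ]

# 4x4 toy image, body mask covering an L-shaped region.
img = [[(9, 9, 9)] * 4 for _ in range(4)]
msk = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0]]
out = crop_and_blackout(img, msk)
print(len(out), len(out[0]))  # 2 2 -> crop equals the mask's 2x2 bounding box
```

After the crop, the single remaining non-body pixel is black, which is exactly the property the classifier input relies on.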
In an embodiment of the present application, the method for obtaining the color classification model includes:
training data preparation: obtaining vehicle body area images obtained under different shooting conditions;
data annotation: respectively labeling the colors in the images of the vehicle body areas;
model training: and training by using the marked images of the vehicle body areas to obtain the vehicle body color classification model.
In an embodiment of the present application, the detailed steps for obtaining the color classification model are as follows. First, from the body segmentation result, i.e. the body region image, the circumscribed rectangle of the body is obtained, and the corresponding area is cropped from the original image according to the rectangle's position and size to serve as the input picture. The input picture is then preprocessed: the pixel values of all non-body pixels are set to black (0, 0, 0), highlighting the body color in the image. The processed picture is used as the model input for training; the initial learning rate may be set to 0.01 and decayed gradually with the iteration count in poly fashion, where poly is a learning-rate schedule whose curve shape is controlled by the parameter power (set here to 0.5), and Adam is used as the optimizer. The structure of the deep-learning classification model is modified from ResNet-18: since the task solved here is not very complicated, some channel counts are cut in half to increase operation speed, but because the task demands a relatively large receptive field, dilated convolutions are used in place of ordinary convolutions. A total of 10 color categories may be preset, with the numbers 0 to 9 representing black, gray, white, blue, green, red, yellow, brown, purple, and other colors respectively, so the size of the last fully connected layer is changed to 10 accordingly.
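The poly schedule described above is simple to state in code (the base rate 0.01 and power 0.5 follow the embodiment; `max_iter` is an assumed training length):

```python
def poly_lr(base_lr, iteration, max_iter, power=0.5):
    """Poly learning-rate schedule: the curve shape is controlled by
    `power` (0.5 per the embodiment); the rate falls to 0 at max_iter."""
    return base_lr * (1 - iteration / max_iter) ** power

print(poly_lr(0.01, 0, 1000))    # 0.01 at the start
print(poly_lr(0.01, 750, 1000))  # 0.01 * 0.25**0.5 = 0.005
```

With power below 1 the curve decays slowly at first and steeply near the end, which is why power = 0.5 keeps the rate relatively high for most of training.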
Step S104: and comparing the classification result with the color recorded in the file corresponding to the vehicle to be detected so as to judge whether the color represented by the classification result is consistent with the color recorded in the file.
In this embodiment, the color classification result output by the color classification model is compared with the color recorded in the file of the corresponding vehicle to determine whether the two are consistent. If they are consistent, the vehicle's color is qualified and no refitting has occurred; otherwise the vehicle's color is not qualified.
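The final comparison step can be sketched as a lookup against the ten class labels (the 0-9 to color-name mapping follows the embodiment's example ordering; the English names are assumed translations):

```python
# Class labels 0-9 in the order given in the embodiment.
COLOR_NAMES = ["black", "gray", "white", "blue", "green",
               "red", "yellow", "brown", "purple", "other"]

def check_against_file(class_index, recorded_color):
    """Map the classifier's output index to a color name and compare
    it with the color recorded in the vehicle's file."""
    return COLOR_NAMES[class_index] == recorded_color

print(check_against_file(5, "red"))   # True  -> color passes inspection
print(check_against_file(5, "blue"))  # False -> possible repaint/refit
```

In a deployed system the recorded color would come from the vehicle-administration database rather than a string literal.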
In conclusion, the application completes the intelligent body-color recognition task with a deep-learning body semantic segmentation model and a color classification model. Deep-learning image feature extraction keeps the result from being disturbed by illumination, shooting conditions, or surrounding and background objects, and simplifies the task. The approach reduces labor cost and ensures the fairness and impartiality of annual vehicle inspection.
Fig. 2 is a block diagram of an electronic device according to an embodiment of the present invention. As shown, the apparatus 200 includes:
the acquisition module 201 is used for acquiring an image of a vehicle to be detected;
the processing module 202 is configured to identify a vehicle body region in the vehicle image to be detected based on a vehicle body semantic segmentation model to obtain a vehicle body region image; processing the vehicle body area image, and inputting the processed vehicle body area image into a vehicle body color classification model to obtain a classification result; and comparing the classification result with the color recorded in the file corresponding to the vehicle to be detected so as to judge whether the color represented by the classification result is consistent with the color recorded in the file.
It should be noted that, because the contents of information interaction, execution process, and the like between the modules/units of the apparatus are based on the same concept as the method embodiment described in the present application, the technical effect brought by the contents is the same as the method embodiment of the present application, and specific contents may refer to the description in the foregoing method embodiment of the present application, and are not described herein again.
It should be further noted that the division of the modules of the above apparatus is only a logical division; in actual implementation they may be wholly or partially integrated into one physical entity, or physically separated. These modules may all be implemented as software invoked by a processing element, all as hardware, or partly as software invoked by a processing element and partly as hardware. For example, the processing module 202 may be a separate processing element, may be integrated into a chip of the apparatus, or may be stored in the apparatus's memory as program code that a processing element calls to execute its functions; other modules are implemented similarly. In addition, all or some of the modules may be integrated together or implemented independently. The processing element here may be an integrated circuit with signal-processing capability; in implementation, each step of the above method or each module above may be completed by an integrated logic circuit of hardware in the processor element or by instructions in software form.
For example, the above modules may be one or more integrated circuits configured to implement the above method, such as one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present invention. As shown, the computer device 300 includes: a memory 301, and a processor 302; the memory 301 is used for storing computer instructions; the processor 302 executes computer instructions to implement the method described in fig. 1.
In some embodiments, the number of the memories 301 in the computer device 300 may be one or more, the number of the processors 302 may be one or more, and fig. 3 illustrates one example.
In an embodiment of the present application, the processor 302 in the computer device 300 loads one or more instructions corresponding to the processes of the application program into the memory 301 according to the steps described in fig. 1, and the processor 302 runs the application program stored in the memory 301, thereby implementing the method described in fig. 1.
The memory 301 may include a random access memory (RAM) and may also include a non-volatile memory, such as at least one disk storage. The memory 301 stores an operating system and operation instructions, executable modules or data structures, or a subset or an extended set thereof, where the operation instructions may include various instructions for implementing various operations. The operating system may include various system programs for implementing various basic services and handling hardware-based tasks.
The processor 302 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In some specific applications, the various components of the computer device 300 are coupled together by a bus system, which may include a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are all labeled as the bus system in figure 3.
In an embodiment of the present application, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the method described in fig. 1.
As will be appreciated by those of ordinary skill in the art, the embodiments implementing the functions of the above system and units may be realized by hardware in combination with computer programs. The aforementioned computer program may be stored in a computer-readable storage medium; when the program is executed, it performs the embodiments including the functions of the above system and units. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
In summary, the method for automatically identifying the color of a vehicle, the electronic device, the computer device, and the medium provided by the present application operate by: acquiring an image of the vehicle to be detected; recognizing the vehicle body area in the vehicle image to be detected based on a vehicle body semantic segmentation model to obtain a vehicle body area image; processing the vehicle body area image and inputting the processed vehicle body area image into a vehicle body color classification model to obtain a classification result; and comparing the classification result with the color recorded in the file corresponding to the vehicle to be detected, so as to judge whether the color represented by the classification result is consistent with the recorded color.
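The flow summarized above can be illustrated with a minimal Python sketch. Everything here is a hypothetical stand-in: the toy classifier (which just picks the dominant color channel of the body pixels) and the `matches_archive` helper illustrate the compare-with-file step, and are not the patented models.

```python
import numpy as np

def crop_body(image, body_mask):
    """Keep only pixels that a (hypothetical) segmentation model marked as body."""
    return np.where(body_mask[..., None], image, 0)

def classify_body_color(body_pixels):
    """Stand-in for the body color classification model: report the
    dominant primary channel of the non-black pixels."""
    pixels = body_pixels.reshape(-1, 3)
    pixels = pixels[pixels.any(axis=1)]          # drop masked-out (black) pixels
    channel = int(np.argmax(pixels.mean(axis=0)))
    return ["red", "green", "blue"][channel]

def matches_archive(image, body_mask, archived_color):
    """Compare the classifier's result with the color recorded in the file."""
    return classify_body_color(crop_body(image, body_mask)) == archived_color

# Toy example: a 4x4 image whose body region is pure blue.
img = np.zeros((4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
img[mask] = (0, 0, 255)
print(matches_archive(img, mask, "blue"))        # → True
```

A real system would replace `classify_body_color` with the trained residual-network classifier and `body_mask` with the semantic segmentation output.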
The application effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the present application and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall still be covered by the claims of the present application.

Claims (10)

1. A method for automatic identification of vehicle colors, the method comprising:
acquiring an image of a vehicle to be detected;
recognizing a vehicle body area in the vehicle image to be detected based on a vehicle body semantic segmentation model to obtain a vehicle body area image;
processing the vehicle body area image, and inputting the processed vehicle body area image into a vehicle body color classification model to obtain a classification result;
and comparing the classification result with the color recorded in the file corresponding to the vehicle to be detected so as to judge whether the color represented by the classification result is consistent with the color recorded in the file.
2. The method according to claim 1, wherein the method for obtaining the semantic segmentation model of the vehicle body comprises the following steps:
obtaining vehicle sample images under different shooting conditions, wherein each vehicle sample image comprises a vehicle to be detected;
respectively labeling the body area of the vehicle to be detected in each vehicle sample image;
and training by using the labeled vehicle sample images to obtain the vehicle body semantic segmentation model.
3. The method according to claim 2, wherein the step of respectively labeling the body region of the vehicle to be detected in each vehicle sample image comprises the following steps:
judging whether the vehicle body area of the vehicle to be detected belongs to a smooth area or not;
and if so, acquiring a plurality of boundary points of the smooth area, and performing curve fitting according to the boundary points to mark the body area of the vehicle to be detected.
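The curve fitting over sampled boundary points in claim 3 can be sketched with NumPy's least-squares polynomial fit. The sample points and the quadratic degree below are assumptions chosen for illustration; the claim does not specify the curve family.

```python
import numpy as np

# Hypothetical boundary points (x, y) sampled along a smooth body contour.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([0.0, 0.9, 1.6, 2.1, 2.4])

# Fit a low-degree polynomial through the points; degree 2 is an assumption.
coeffs = np.polyfit(xs, ys, deg=2)
fitted = np.polyval(coeffs, xs)

# The fitted curve can then be rasterized to trace the body-area boundary.
max_err = float(np.max(np.abs(fitted - ys)))
print(round(max_err, 3))   # → 0.0 (these sample points lie exactly on a parabola)
```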
4. The method of claim 2, wherein the training using the labeled vehicle sample images to obtain the semantic segmentation model of the vehicle body comprises:
inputting each marked vehicle sample image into a semantic segmentation network for training; wherein the initial learning rate of the semantic segmentation network is 0.001;
and when the semantic segmentation network is converged, stopping training to obtain the vehicle body semantic segmentation model.
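The convergence-based stopping rule of claim 4 can be pictured with a toy gradient-descent loop. The quadratic loss is a stand-in for the real segmentation loss; only the 0.001 learning rate and the stop-on-convergence test mirror the claimed procedure.

```python
# Minimal stand-in training loop: a scalar quadratic "loss" replaces the
# segmentation loss, with the claimed learning rate of 0.001 and a
# convergence-based stopping criterion.
lr = 0.001
w = 5.0                                  # single scalar "parameter"
prev_loss = float("inf")

for step in range(100_000):
    loss = (w - 2.0) ** 2                # minimized at w == 2
    if abs(prev_loss - loss) < 1e-12:    # converged: stop training
        break
    grad = 2.0 * (w - 2.0)
    w -= lr * grad                       # gradient-descent update
    prev_loss = loss

print(round(w, 2))                       # → 2.0
```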
5. The method of claim 1, wherein said processing the body region image comprises:
acquiring a circumscribed rectangle of the vehicle body area in the vehicle body area image;
intercepting the vehicle body area image according to the circumscribed rectangle to increase the proportion of the vehicle body area in the vehicle body area image to obtain a vehicle body area image after proportion adjustment;
modifying the pixel value of a target area in the proportion-adjusted vehicle body area image to the pixel value corresponding to black, so as to obtain the processed vehicle body area image; wherein the target area comprises: any one or more of a window, a light, a tire, and an emblem.
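The cropping and masking of claim 5 — taking the circumscribed rectangle of the body area and blacking out windows, lights, tires, and the emblem — might look as follows in NumPy. The toy image and masks are illustrative only.

```python
import numpy as np

def bounding_rect(mask):
    """Circumscribed (axis-aligned bounding) rectangle of the True pixels."""
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def preprocess(image, body_mask, target_mask):
    """Crop to the body's bounding rectangle (raising the body's proportion
    of the image), then set target regions to black."""
    y0, y1, x0, x1 = bounding_rect(body_mask)
    crop = image[y0:y1, x0:x1].copy()
    crop[target_mask[y0:y1, x0:x1]] = 0
    return crop

# Toy 6x6 image: body occupies rows/cols 1..4, with a "window" pixel at (2, 2).
img = np.full((6, 6, 3), 200, dtype=np.uint8)
body = np.zeros((6, 6), dtype=bool); body[1:5, 1:5] = True
window = np.zeros((6, 6), dtype=bool); window[2, 2] = True

out = preprocess(img, body, window)
print(out.shape)               # → (4, 4, 3): cropped to the body rectangle
print(out[1, 1].tolist())      # → [0, 0, 0]: the window pixel is now black
```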
6. The method according to claim 1, wherein the vehicle body color classification model is obtained by a method comprising:
obtaining vehicle body area images obtained under different shooting conditions;
respectively labeling the colors in the images of the vehicle body areas;
and training by using the marked images of the vehicle body areas to obtain the vehicle body color classification model.
7. The method of claim 6, wherein the training using the labeled images of the body regions to obtain the body color classification model comprises:
determining an ordinary convolution in a residual network, and replacing the ordinary convolution with a dilated convolution;
pruning a target number of channels in the residual network, and modifying the input size of the final fully connected layer in the residual network to a value equal to the number of vehicle body color classes, so as to obtain an adjusted residual network;
training with the labeled images of the vehicle body areas on the adjusted residual network;
and when the adjusted residual network converges, stopping training to obtain the vehicle body color classification model.
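The dilated ("expansion") convolution of claim 7 can be pictured as an ordinary kernel with zeros inserted between its taps: the receptive field grows while the parameter count stays the same. A small NumPy sketch (the 3x3 all-ones kernel is an arbitrary example):

```python
import numpy as np

def dilate_kernel(kernel, dilation):
    """Insert (dilation - 1) zeros between kernel taps; a dilated convolution
    with the original kernel equals an ordinary convolution with this one."""
    k = kernel.shape[0]
    size = dilation * (k - 1) + 1        # effective receptive field
    out = np.zeros((size, size), dtype=kernel.dtype)
    out[::dilation, ::dilation] = kernel
    return out

k = np.ones((3, 3))
print(dilate_kernel(k, 1).shape)         # → (3, 3): ordinary convolution
print(dilate_kernel(k, 2).shape)         # → (5, 5): wider receptive field
print(int(dilate_kernel(k, 2).sum()))    # → 9: still only nine nonzero taps
```

This is why swapping ordinary convolutions for dilated ones enlarges the context each output pixel sees without adding parameters; resizing the final fully connected layer to the number of color classes then yields one score per vehicle body color.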
8. An electronic device, the device comprising:
the acquisition module is used for acquiring an image of a vehicle to be detected;
the processing module is used for identifying the vehicle body area in the vehicle image to be detected based on the vehicle body semantic segmentation model so as to obtain a vehicle body area image; processing the vehicle body area image, and inputting the processed vehicle body area image into a vehicle body color classification model to obtain a classification result; and comparing the classification result with the color recorded in the file corresponding to the vehicle to be detected so as to judge whether the color represented by the classification result is consistent with the color recorded in the file.
9. A computer device, the device comprising: a memory, and a processor; the memory is to store computer instructions; the processor executes computer instructions to implement the method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon computer instructions which, when executed, perform the method of any one of claims 1 to 7.
CN202010090012.3A 2020-02-13 2020-02-13 Method for automatically recognizing color of vehicle, electronic device, computer apparatus, and medium Pending CN111325211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010090012.3A CN111325211A (en) 2020-02-13 2020-02-13 Method for automatically recognizing color of vehicle, electronic device, computer apparatus, and medium

Publications (1)

Publication Number Publication Date
CN111325211A true CN111325211A (en) 2020-06-23

Family

ID=71170980

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190481A (en) * 2018-08-06 2019-01-11 中国交通通信信息中心 A kind of remote sensing image road material extracting method and system
CN110245691A (en) * 2019-05-27 2019-09-17 上海眼控科技股份有限公司 A kind of intelligent identification Method of vehicle appearance color discoloration repacking
CN110297483A (en) * 2018-03-21 2019-10-01 广州极飞科技有限公司 To operating area boundary acquisition methods, device, operation flight course planning method
CN110598523A (en) * 2019-07-22 2019-12-20 浙江工业大学 Combined color classification and grouping method for clothing pictures
CN110729045A (en) * 2019-10-12 2020-01-24 闽江学院 Tongue image segmentation method based on context-aware residual error network

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489143A (en) * 2020-11-30 2021-03-12 济南博观智能科技有限公司 Color identification method, device, equipment and storage medium
WO2022241807A1 (en) * 2021-05-20 2022-11-24 广州广电运通金融电子股份有限公司 Method for recognizing color of vehicle body of vehicle, and storage medium and terminal
CN116563770A (en) * 2023-07-10 2023-08-08 四川弘和数智集团有限公司 Method, device, equipment and medium for detecting vehicle color
CN116563770B (en) * 2023-07-10 2023-09-29 四川弘和数智集团有限公司 Method, device, equipment and medium for detecting vehicle color

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination