CN112927231B - Training method of vehicle body dirt detection model, vehicle body dirt detection method and device

Info

Publication number
CN112927231B
Authority
CN
China
Prior art keywords
image
vehicle body
dirt
training
data set
Prior art date
Legal status
Active
Application number
CN202110514569.XA
Other languages
Chinese (zh)
Other versions
CN112927231A
Inventor
Sun Yue (孙月)
Yan Xiaoning (闫潇宁)
Current Assignee
Shenzhen Anruan Huishi Technology Co., Ltd.
Shenzhen Anruan Technology Co., Ltd.
Original Assignee
Shenzhen Anruan Huishi Technology Co., Ltd.
Shenzhen Anruan Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Anruan Huishi Technology Co., Ltd. and Shenzhen Anruan Technology Co., Ltd.
Priority to CN202110514569.XA
Publication of CN112927231A
Application granted
Publication of CN112927231B
Legal status: Active

Classifications

    • G06T 7/0002 - Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/214 - Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/253 - Pattern recognition; fusion techniques of extracted features
    • G06N 3/04 - Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08 - Neural networks; learning methods
    • G06V 10/25 - Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T 2207/20081 - Special algorithmic details; training; learning
    • G06T 2207/20084 - Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/20104 - Interactive image processing; interactive definition of region of interest [ROI]
    • G06T 2207/20132 - Image segmentation details; image cropping
    • G06T 2207/20221 - Image combination; image fusion; image merging
    • G06V 2201/07 - Target detection
    • G06V 2201/08 - Detecting or categorising vehicles

Abstract

An embodiment of the invention provides a training method for a vehicle body dirt detection model, together with a vehicle body dirt detection method and device. The training method includes: acquiring an initial detection image data set and performing data preprocessing on it to obtain training images for target vehicle body dirt detection; and training a preset neural network model on the training images while extracting high-level feature maps of the training images, to obtain the corresponding vehicle body dirt detection model. Preprocessing the initial detection image data set to obtain the training images includes: determining a cropping box for each target vehicle body dirt region in each image of the initial detection image data set; determining the overlap ratio between the current cropping box and the next cropping box; calculating the cropping stride from the overlap ratio; and cropping the corresponding image according to the cropping stride to obtain the corresponding training images. The invention improves the accuracy of vehicle body dirt detection.

Description

Training method of vehicle body dirt detection model, vehicle body dirt detection method and device
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a training method for a vehicle body dirt detection model, a vehicle body dirt detection method, and a vehicle body dirt detection device.
Background
With the progress of society, camera equipment has become ubiquitous in daily life. To support urban and community environmental management, quickly extracting valuable information from the surveillance images captured by edge monitoring devices is particularly important for office and security personnel. In recent years, intelligent products built around artificial intelligence have gradually entered public view. Computer vision, an important branch of artificial intelligence, has matured considerably, especially object detection based on deep learning. Object detection locates, classifies, and identifies targets in an image; the detection results can be used to lock onto targets so that staff can analyze surveillance video. For vehicle body dirt detection, however, existing detection methods are suited to scenes in which the dirt region on the target vehicle body is relatively large. When the target vehicle body dirt occupies only a small proportion of the image to be detected, the detection accuracy is relatively low.
Disclosure of Invention
An embodiment of the invention provides a training method for a vehicle body dirt detection model that can improve the accuracy of vehicle body dirt detection.
In a first aspect, an embodiment of the present invention provides a training method for a vehicle body dirt detection model, the training method comprising the following steps:
acquiring an initial detection image data set and performing data preprocessing on it to obtain training images for target vehicle body dirt detection, where each training image contains target vehicle body dirt;
training a preset neural network model on the training images and extracting high-level feature maps of the training images to obtain the corresponding vehicle body dirt detection model;
wherein performing data preprocessing on the initial detection image data set to obtain the training images for target vehicle body dirt detection comprises:
determining a cropping box for each target vehicle body dirt region in each image of the initial detection image data set, where each image in the initial detection image data set carries a corresponding original label;
determining, for each image in the initial detection image data set, an overlap ratio between the current cropping box and the next cropping box;
calculating a cropping stride suited to each image in the initial detection image data set from that image's overlap ratio;
and cropping each image in the initial detection image data set according to its cropping stride to obtain the training images for target vehicle body dirt detection, where each training image carries a corresponding training image label obtained by transforming the original label during cropping.
Optionally, each image in the initial detection image data set is an image to be cropped, and cropping the corresponding image according to its cropping stride to obtain the training images for target vehicle body dirt detection comprises:
sliding the cropping box over the corresponding image according to its cropping stride and, at each step, judging whether the last cropping step would exceed the boundary of the image to be cropped;
if the boundary of the image to be cropped is not exceeded, traversing the image to be cropped with the sliding crop to obtain the training images for target vehicle body dirt detection;
and if the boundary of the image to be cropped is exceeded, re-determining the overlap ratio between the current cropping box and the next cropping box to obtain a new overlap ratio, and returning, with the new overlap ratio, to the step of calculating the cropping stride suited to each image in the initial detection image data set.
Optionally, acquiring the initial detection image data set and performing data preprocessing on it to obtain the training images for target vehicle body dirt detection further comprises:
rotating, scaling, and changing the color gamut of images in the initial detection image data set, and combining them in preset positions to obtain a target combined image;
and padding the scaled-down target combined image with black borders according to the calculated scaling ratio, scaled size, and padding amount, to obtain a training image for target vehicle body dirt detection.
Optionally, training the preset neural network model on the training images and extracting the high-level feature maps to obtain the corresponding vehicle body dirt detection model comprises:
slicing the training image for target vehicle body dirt detection to obtain slice feature maps;
concatenating the slice feature maps to obtain a concatenated feature map;
convolving the concatenated feature map to obtain a convolution feature map;
applying batch normalization to the convolution feature map to obtain a normalized feature map;
applying an activation function to the normalized feature map to obtain a target feature map;
and iteratively training the preset neural network model on the slice feature maps, the concatenated feature map, the convolution feature map, the normalized feature map, and the target feature map, extracting the high-level feature map of the training image, to obtain the corresponding vehicle body dirt detection model.
Optionally, the training method of the vehicle body dirt detection model further comprises the steps of:
converting the high-level feature map into a vector to obtain a target image vector corresponding to the training image for target vehicle body dirt detection;
and computing a loss on the target image vector with a preset loss function, iterating the training of the neural network to narrow the gap between predicted and true values.
Optionally, converting the high-level feature map into a vector to obtain the target image vector corresponding to the training image for target vehicle body dirt detection comprises:
applying multi-scale max pooling to the features in the high-level feature map using a preset SPP structure and concatenating the results to obtain high-level features, where the preset SPP structure comprises three different pooling operations;
enhancing the high-level features with a preset FPN structure to handle target vehicle body dirt at different scales;
and fusing the enhanced high-level features with a preset PAN structure to obtain the corresponding target image vector.
Optionally, the training method of the vehicle body dirt detection model further includes:
and screening out the final result through non-maximum suppression, and suppressing repeated predicted coordinate frames and coordinate frames with low probability.
In a second aspect, an embodiment of the present invention further provides a vehicle body dirt detection method, performed with a vehicle body dirt detection model trained by the training method provided in the foregoing embodiment, the vehicle body dirt detection method comprising:
acquiring a vehicle body image to be detected;
inputting the vehicle body image into the vehicle body dirt detection model for detection to obtain a detection result;
judging whether dirt exists in the vehicle body image according to the detection result;
and giving an alarm if dirt is present in the vehicle body image.
Optionally, the vehicle body dirt detection method further comprises the step of:
analyzing the vehicle body image in which dirt is present and storing the corresponding analysis result.
In a third aspect, an embodiment of the present invention further provides a training device for a vehicle body dirt detection model, the training device comprising:
a data preprocessing module, configured to acquire an initial detection image data set and perform data preprocessing on it to obtain training images for target vehicle body dirt detection, where each training image contains target vehicle body dirt;
a training module, configured to train a preset neural network model on the training images and extract high-level feature maps of the training images to obtain the corresponding vehicle body dirt detection model;
wherein the data preprocessing module comprises:
a first determining unit, configured to determine a cropping box for each target vehicle body dirt region in each image of the initial detection image data set, where each image in the initial detection image data set carries a corresponding original label;
a second determining unit, configured to determine an overlap ratio between the current cropping box and the next cropping box for each image in the initial detection image data set;
a calculation unit, configured to calculate a cropping stride suited to each image in the initial detection image data set from that image's overlap ratio;
and a cropping unit, configured to crop the corresponding image in the initial detection image data set according to its cropping stride to obtain the training images for target vehicle body dirt detection, where each training image carries a corresponding training image label obtained by transforming the original label during cropping.
In a fourth aspect, an embodiment of the present invention further provides a vehicle body dirt detection apparatus, which operates based on the model trained by the training device for the vehicle body dirt detection model provided in the above embodiment, the vehicle body dirt detection apparatus comprising:
the acquisition module is used for acquiring an image of a vehicle body to be detected;
the detection module is used for inputting the vehicle body image into the vehicle body dirt detection model for detection to obtain a detection result;
the judging module is used for judging whether dirt exists in the vehicle body image according to the detection result;
and the alarm module is used for giving an alarm if dirt is present in the vehicle body image.
In a fifth aspect, an embodiment of the present invention further provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the training method for the vehicle body dirt detection model provided in the above embodiment and the steps of the vehicle body dirt detection method provided in the above embodiment.
In a sixth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the training method for the vehicle body dirt detection model provided in the above embodiment and the steps of the vehicle body dirt detection method provided in the above embodiment.
In the embodiment of the invention, an initial detection image data set is acquired and preprocessed to obtain training images for target vehicle body dirt detection, each containing target vehicle body dirt; a preset neural network model is trained on the training images, and high-level feature maps of the training images are extracted to obtain the corresponding vehicle body dirt detection model. Preprocessing the initial detection image data set to obtain the training images includes: determining a cropping box for each target vehicle body dirt region in each image, where each image carries a corresponding original label; determining the overlap ratio between the current cropping box and the next cropping box for each image; calculating the cropping stride suited to each image from its overlap ratio; and cropping each image according to its cropping stride to obtain the training images, each carrying a training image label obtained by transforming the original label during cropping. In this way, the acquired initial detection image data set is preprocessed, chiefly by cropping each image, to obtain training images for target vehicle body dirt detection; the high-level feature maps of the training images are then extracted to obtain the corresponding vehicle body dirt detection model. This improves the accuracy of vehicle body dirt detection and suits the detection of small dirt regions in vehicle body dirt detection tasks.
Drawings
To describe the technical solutions in the embodiments of the present invention or the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description show merely some embodiments of the present invention, and those skilled in the art may derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a training method for a vehicle body dirt detection model according to an embodiment of the present invention;
FIG. 2 is a flow chart of another training method for a vehicle body dirt detection model provided by the embodiment of the invention;
FIG. 3 is a flowchart of a method provided in step 101 according to an embodiment of the present invention;
FIG. 4 is a flowchart of another method provided in step 101 of an embodiment of the present invention;
FIG. 5 is a flowchart of a method provided by step 102 in an embodiment of the present invention;
FIG. 6 is a flow chart of another method provided by step 102 in an embodiment of the present invention;
FIG. 7 is a flowchart of a training method for a vehicle body dirt detection model according to an embodiment of the present invention;
FIG. 8 is a flowchart of a method provided in step 201 according to an embodiment of the present invention;
FIG. 9 is a flow chart of a method for detecting fouling of a vehicle body according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a training device for a vehicle body dirt detection model according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a training device for a vehicle body dirt detection model according to another embodiment of the present invention;
FIG. 12 is a schematic diagram of a structure provided by the data preprocessing module according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of a structure provided by the training module in an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of a training device of another vehicle body dirt detection model provided by the embodiment of the invention;
fig. 15 is a schematic structural diagram of a vehicle body dirt detecting apparatus according to an embodiment of the present invention;
fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a training method for a vehicle body dirt detection model according to an embodiment of the present invention. The training method is applied to an object detector built on a neural network model. As shown in fig. 1, the training method of the vehicle body dirt detection model comprises the following steps:
step 101, obtaining an initial detection image data set, and performing data preprocessing on the initial detection image data set to obtain a training image for detecting the dirt of the target vehicle body, wherein the training image comprises the dirt of the target vehicle body.
The initial detection image data set may include a plurality of images, and in the embodiment of the present invention, four images are taken as an example for illustration. Of course, the initial inspection image data set may be obtained from a surveillance video acquired by a monitor or a high definition camera. Each image in the initial detected image dataset includes target vehicle body fouling, and each picture may include a plurality of target vehicle body fouling. The target body dirt may be spots, stains, mud, garbage, etc. on the upper surface of the vehicle body. The dirt on the vehicle body can be dirt on the vehicle body of a garbage vehicle, or dirt on the vehicle body of a private car or dirt on the vehicle body of other vehicles. As long as the dirt on the vehicle body of the vehicle needs to be detected.
Specifically, the initial detection image data set is obtained, and the monitoring video transmitted by the high-definition camera is accessed, the position of the high-definition camera is guaranteed to cover the whole vehicle body (the high-definition camera is arranged at two opposite corners of the top vehicle body and can be used for monitoring and covering the whole vehicle body in a crossed mode), a preprocessing task is performed on the video, the image is mainly extracted by frames, and then the corresponding initial detection image data set is obtained.
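By way of illustration only, the frame-extraction step could be sketched in Python with OpenCV as below; the video path, sampling interval, and output directory are hypothetical and not specified by the patent.

```python
import os
import cv2  # OpenCV

def extract_frames(video_path: str, out_dir: str, every_n: int = 25) -> int:
    """Save every `every_n`-th frame of the surveillance video as a JPEG image."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of video
            break
        if index % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{index:06d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Hypothetical usage: build the initial detection image data set from one camera.
# extract_frames("camera_top_left.mp4", "dataset/initial", every_n=25)
```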
In the embodiment of the present invention, as shown in fig. 2, step 101 includes:
Step 101a: determine a cropping box for each target vehicle body dirt region in each image of the initial detection image data set, where each image carries a corresponding original label.
The cropping box may be determined by customizing a box of suitable size (for example, 640 x 640), or by taking the maximum horizontal and vertical extents of all target vehicle body dirt regions in the current image to be cropped as the final crop size.
Step 101b: determine the overlap ratio between the current cropping box and the next cropping box for each image in the initial detection image data set.
Step 101c: calculate the cropping stride suited to each image in the initial detection image data set from that image's overlap ratio.
Step 101d: crop the corresponding image in the initial detection image data set according to its cropping stride to obtain the training images for target vehicle body dirt detection, where each training image carries a corresponding training image label obtained by transforming the original label during cropping.
Specifically, each image in the initial detection image data set is called an image to be cropped, and step 101d includes:
sliding the cropping box over the corresponding image according to its cropping stride and, at each step, judging whether the last cropping step would exceed the boundary of the image to be cropped;
if the boundary of the image to be cropped is not exceeded, traversing the image to be cropped with the sliding crop to obtain the training images for target vehicle body dirt detection;
and if the boundary of the image to be cropped is exceeded, re-determining the overlap ratio between the current cropping box and the next cropping box to obtain a new overlap ratio, and returning, with the new overlap ratio, to the step of calculating the cropping stride suited to each image in the initial detection image data set.
Specifically, the data preprocessing in step 101 may be cropping each image in the initial detection image data set. For example, the image to be cropped (A), a high-definition 1920 x 1080 image or one of higher resolution, may be cut into 640 x 640 crops (a) for training, and likewise cut into 640 x 640 crops at prediction time. The cropping proceeds as follows. First, the size (s) of the cropping box (k) is determined, either as a custom value or from the maximum horizontal and vertical extents of all target vehicle body dirt in the image to be cropped. Next, the overlap ratio (r) between the current cropping box (k) and the next cropping box (k+1) is determined; for example, with a crop size of 640 x 640 and an overlap ratio of 0.2, the cropping stride is 512 = (1 - 0.2) x 640. Finally, if the set overlap ratio (r) does not let the cropping box tile the whole image (A) exactly, the overlap ratio between the current cropping box (k) and the next cropping box (k+1) is adjusted automatically so that the last cropping step does not run past the image (A). This cropping operation makes small target vehicle body dirt clearly visible in the cropped small image, so that its features (position and so on) are not lost during the convolutional down-sampling. After cropping, the original labels of the image to be cropped (A) are assigned to the 640 x 640 crops that contain them, and the remaining crops (those without target vehicle body dirt) are filtered out as background. Suppose a target box (k1) in image (A) has the original label (Ax1, Ay1, Ax2, Ay2), i.e. the top-left and bottom-right coordinates of the dirt region, and the top-left corner of the current crop (a) lies at (ax1, ay1) in image (A). If the target box (k1) lies inside crop (a), which is judged from the coordinates, its new top-left coordinate in the training image label of crop (a) is (Ax1 - ax1, Ay1 - ay1), and the bottom-right corner is transformed in the same way. The corresponding training images for target vehicle body dirt detection and their training image labels are thus obtained.
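A minimal Python sketch of this sliding-window cropping and label remapping is given below; the 640 crop size, the 0.2 overlap ratio, and the (x1, y1, x2, y2) box format follow the example above, while the function and variable names are illustrative only.

```python
from typing import List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in full-image coordinates

def _starts(length: int, crop: int, stride: int) -> List[int]:
    """Window start offsets, with the last window clamped to the image border."""
    if length <= crop:
        return [0]
    starts = list(range(0, length - crop, stride))
    starts.append(length - crop)   # the final step never runs past the image
    return sorted(set(starts))

def crop_with_overlap(image: np.ndarray, boxes: List[Box],
                      crop: int = 640, overlap: float = 0.2):
    """Slide a crop x crop window with the given overlap ratio and remap labels."""
    h, w = image.shape[:2]
    stride = int((1.0 - overlap) * crop)          # e.g. 512 = (1 - 0.2) * 640
    samples = []
    for y0 in _starts(h, crop, stride):
        for x0 in _starts(w, crop, stride):
            patch = image[y0:y0 + crop, x0:x0 + crop]
            labels = []
            for (x1, y1, x2, y2) in boxes:
                # Keep a box only if it lies entirely inside this crop.
                if x1 >= x0 and y1 >= y0 and x2 <= x0 + crop and y2 <= y0 + crop:
                    labels.append((x1 - x0, y1 - y0, x2 - x0, y2 - y0))
            if labels:   # crops without dirt are treated as background and filtered out
                samples.append((patch, labels, (x0, y0)))
    return samples
```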
In this embodiment of the present invention, as shown in fig. 3, step 101 further includes:
Step 1011: rotate, scale, and change the color gamut of images in the initial detection image data set, and combine them in preset positions to obtain a target combined image.
Step 1012: pad the scaled-down target combined image with black borders according to the calculated scaling ratio, scaled size, and padding amount, to obtain a training image for target vehicle body dirt detection.
The preset positions may be a four-cell layout: top-left, bottom-left, top-right, and bottom-right.
More specifically, the process of step 101 may be the production of an object detection data set. First, the images in the initial detection image data set pass through an image enhancement stage that applies scaling, rotation, color gamut changes, and similar augmentations; several images are then randomly combined into a target combined image, which is subsequently scaled. For the scaled-down image, the scaling ratio and the scaled size are calculated, the black-border padding value is computed from that result, and the blank region is filled with black borders, yielding a new target combined image that serves as the input of the neural network model.
Illustratively, as shown in fig. 4, fig. 4 is a flowchart of another method provided in step 101 in the embodiment of the present invention. Data preprocessing of four images in the initial detection image data set is described as an example. First, the four images are read in the image enhancement stage, and rotation, scaling, color gamut changes, and similar operations are applied to each. The four images are then placed in the four positions top-left, bottom-left, top-right, and bottom-right and merged into a single target combined image. In the image scaling stage, the required scaling ratio, the scaled size, and the black-border padding value are calculated; the scaled-down target combined image is padded with black borders accordingly and output as an image of fixed size, giving a target detection image. Finally, the target detection image is used as the input image of the deep neural network model. It should be noted that the anchor boxes are customizable.
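The scaling-and-padding step can be sketched as the letterbox computation below; the 640 x 640 network input size and centring of the padding are assumptions for illustration, while the black border value follows the description above.

```python
import cv2
import numpy as np

def letterbox(image: np.ndarray, new_size: int = 640) -> np.ndarray:
    """Scale the image to fit new_size x new_size and pad the rest with black borders."""
    h, w = image.shape[:2]
    ratio = min(new_size / w, new_size / h)                            # required scaling ratio
    scaled_w, scaled_h = int(round(w * ratio)), int(round(h * ratio))  # scaled size
    resized = cv2.resize(image, (scaled_w, scaled_h))
    pad_w, pad_h = new_size - scaled_w, new_size - scaled_h            # black-border padding amount
    top, bottom = pad_h // 2, pad_h - pad_h // 2
    left, right = pad_w // 2, pad_w - pad_w // 2
    return cv2.copyMakeBorder(resized, top, bottom, left, right,
                              cv2.BORDER_CONSTANT, value=(0, 0, 0))    # black border
```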
Step 102: train the preset neural network model on the training images and extract high-level feature maps of the training images to obtain the corresponding vehicle body dirt detection model.
The preset neural network model may be a neural network model that has not been trained beforehand.
Specifically, as shown in fig. 5, step 102 includes:
Step 1021: slice the training image for target vehicle body dirt detection to obtain slice feature maps.
Step 1022: concatenate the slice feature maps to obtain a concatenated feature map.
Step 1023: convolve the concatenated feature map to obtain a convolution feature map.
Step 1024: apply batch normalization to the convolution feature map to obtain a normalized feature map.
Step 1025: apply an activation function to the normalized feature map to obtain a target feature map.
Step 1026: iteratively train the preset neural network model on the slice feature maps, the concatenated feature map, the convolution feature map, the normalized feature map, and the target feature map, and extract the high-level feature map of the training image to obtain the corresponding vehicle body dirt detection model.
More specifically, as shown in fig. 6, fig. 6 is a flowchart of another method provided in step 102 in the embodiment of the present invention. The training image for target vehicle body dirt detection is first sliced by a Focus structure, then the slices are concatenated, and iterative convolution, batch normalization, and activation functions (Leaky ReLU and Mish) produce high-level features that serve as the input of the following neural network model. The network is built from several CSP (Cross Stage Partial) structures, and passing the target feature map through this network yields the high-level feature map. The neural network model at this point is the target vehicle body dirt detection model.
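A compact PyTorch sketch of the slice-concatenate-convolve-normalize-activate sequence (the Focus-style stem described above) follows; the channel counts, kernel size, and the choice of Leaky ReLU rather than Mish are assumptions for illustration, not values fixed by the patent.

```python
import torch
import torch.nn as nn

class FocusStem(nn.Module):
    """Slice the input into four sub-images, concatenate them on the channel axis,
    then apply convolution, batch normalization, and an activation function."""
    def __init__(self, in_channels: int = 3, out_channels: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(in_channels * 4, out_channels, kernel_size=3,
                              stride=1, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.LeakyReLU(0.1, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Slice: take every second pixel in four phase offsets (halves H and W).
        sliced = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                            x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.act(self.bn(self.conv(sliced)))

# Example: a 640 x 640 training image becomes a 320 x 320 feature map.
# y = FocusStem()(torch.randn(1, 3, 640, 640))   # y.shape == (1, 32, 320, 320)
```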
In an embodiment of the present invention, as shown in fig. 7, fig. 7 is a flowchart of a training method of a vehicle body dirt detection model according to an embodiment of the present invention. The training method further comprises the following steps:
Step 201: convert the high-level feature map into a vector to obtain a target image vector corresponding to the training image for target vehicle body dirt detection.
Step 202: compute a loss on the target image vector with a preset loss function, and keep iterating the training of the neural network to narrow the gap between predicted and true values.
Specifically, step 201 includes: applying multi-scale max pooling to the features in the high-level feature map using a preset SPP (Spatial Pyramid Pooling) structure and concatenating the results to obtain high-level features, where the preset SPP structure comprises three different pooling operations; enhancing the high-level features with a preset FPN (Feature Pyramid Network) structure to handle target vehicle body dirt at different scales; and fusing the enhanced high-level features with a preset PAN (Path Aggregation Network) structure to obtain the corresponding target image vector. The FPN structure runs top-down, transmitting and fusing high-level feature information by up-sampling to obtain the predicted feature maps; it conveys strong semantic features and is mainly used to determine the category of the target vehicle body dirt. The PAN structure runs bottom-up, transmitting and fusing by down-sampling; it conveys strong localization features and is mainly used to determine the position of the target vehicle body dirt in the image.
More specifically, as shown in fig. 8, fig. 8 is a flowchart of a method provided in step 201 according to an embodiment of the present invention. Step 201 is performed in a module composed of the preset SPP structure and the preset FPN + PAN structure. The preset SPP consists of three different pooling operations (13 x 13, 9 x 9, and 5 x 5); their outputs are concatenated (concat) to obtain a new output, the Feature Pyramid Network (FPN) then enhances the features used to detect target vehicle body dirt at different scales, and the PAN structure fuses the features, finally producing a one-dimensional target image vector. The target image vector comprises (class + confidence + target coordinates) x 3 anchor boxes.
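The SPP block described above can be sketched in PyTorch as three parallel max-pooling layers (13 x 13, 9 x 9, 5 x 5) whose outputs are concatenated; stride 1 with symmetric padding is a standard assumption that keeps the spatial size unchanged so the concatenation is valid.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Multi-scale max pooling (13x13, 9x9, 5x5) whose outputs are concatenated."""
    def __init__(self, kernel_sizes=(13, 9, 5)):
        super().__init__()
        # Stride 1 and padding k // 2 preserve the spatial size, so the three
        # pooled maps can be concatenated along the channel axis.
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([pool(x) for pool in self.pools], dim=1)

# Example: a (1, 256, 20, 20) high-level feature map becomes (1, 768, 20, 20).
# y = SPP()(torch.randn(1, 256, 20, 20))
```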
In step 202, the preset loss function includes binary cross-entropy and logits loss functions.
More specifically, a GIoU loss (Generalized Intersection over Union) is used as the bounding-box loss, the class-probability and objectness-score losses are computed with binary cross-entropy and logits loss functions, and the gap between predicted and true values is narrowed by minimizing these losses.
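The GIoU bounding-box loss can be written out as in the sketch below; the (x1, y1, x2, y2) box layout and the reduction by mean are assumptions made for illustration.

```python
import torch

def giou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Generalized IoU loss for boxes given as (x1, y1, x2, y2), shape (N, 4)."""
    # Intersection.
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    # Union.
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter + eps
    iou = inter / union
    # Smallest enclosing box, which penalizes non-overlapping predictions.
    enc_lt = torch.min(pred[:, :2], target[:, :2])
    enc_rb = torch.max(pred[:, 2:], target[:, 2:])
    enc_wh = (enc_rb - enc_lt).clamp(min=0)
    enc_area = enc_wh[:, 0] * enc_wh[:, 1] + eps
    giou = iou - (enc_area - union) / enc_area
    return (1.0 - giou).mean()
```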
In an embodiment of the present invention, the method further comprises the step of screening out the final result through non-maximum suppression, suppressing repeated predicted coordinate boxes and coordinate boxes with low probability.
Specifically, after the loss computed in step 202 has narrowed the gap between predicted and true values, the final result is screened out through non-maximum suppression, which suppresses repeated predicted coordinate boxes and coordinate boxes with low probability.
In another embodiment of the invention, the prediction with the highest probability is retained through non-maximum suppression, Adam (Adaptive Moment Estimation) or SGD (Stochastic Gradient Descent) is then used as the gradient optimization function to update the weights for the highest-probability prediction when the neural network model is trained, and the detection result is finally drawn on the target detection image to give the prediction result.
It should be noted that the prediction with the highest probability may be selected by sorting the predictions by probability in descending order and applying non-maximum suppression: boxes are compared pairwise, and whenever the intersection-over-union (IoU) of two boxes exceeds 50%, the box with the lower probability is deleted, finally leaving the prediction with the highest probability.
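A plain PyTorch sketch of this probability-sorted non-maximum suppression follows; the 0.5 IoU threshold comes from the 50% figure above, while the tensor layout is an assumption.

```python
import torch

def nms(boxes: torch.Tensor, scores: torch.Tensor, iou_thresh: float = 0.5) -> torch.Tensor:
    """Keep boxes in descending score order, dropping any box whose IoU with an
    already-kept box exceeds iou_thresh. Boxes are (N, 4) in (x1, y1, x2, y2)."""
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(i.item())
        if order.numel() == 1:
            break
        rest = order[1:]
        # IoU of the current highest-probability box with the remaining boxes.
        lt = torch.max(boxes[i, :2], boxes[rest, :2])
        rb = torch.min(boxes[i, 2:], boxes[rest, 2:])
        wh = (rb - lt).clamp(min=0)
        inter = wh[:, 0] * wh[:, 1]
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-7)
        order = rest[iou <= iou_thresh]   # suppress repeated, lower-probability boxes
    return torch.tensor(keep, dtype=torch.long)
```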
When vehicle body dirt is detected, detection is performed on each training image (that is, on each cropped image). When dirt is detected in a cropped image, its training image label can be recorded, the original label restored from the correspondence between the training image label and the original label, and the corresponding image in the initial detection image data set restored at the same time. In other words, the labels can be restored at detection time, so that the vehicle body dirt of every image in the initial detection image data set can be obtained from the detections on the cropped images. At detection time, the original image therefore needs to be cropped and each crop detected separately, but when outputting the result, the detections of the individual crops must be merged into the detection result of the whole original image.
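Merging the per-crop detections back into full-image coordinates is simply the inverse of the label remapping; the sketch below assumes each detection is (x1, y1, x2, y2, score) in crop coordinates and that the crop offsets were recorded during cropping (as in the illustrative `crop_with_overlap` sketch earlier).

```python
from typing import List, Tuple

Detection = Tuple[float, float, float, float, float]  # (x1, y1, x2, y2, score) in crop coordinates

def merge_crop_detections(per_crop: List[Tuple[Tuple[int, int], List[Detection]]]
                          ) -> List[Detection]:
    """Shift each crop's detections by its (x0, y0) offset back into the
    coordinate frame of the original, uncropped image."""
    merged: List[Detection] = []
    for (x0, y0), detections in per_crop:
        for (x1, y1, x2, y2, score) in detections:
            merged.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, score))
    # Overlapping crops can report the same dirt region twice; a final
    # non-maximum suppression pass (as above) removes such duplicates.
    return merged
```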
In the embodiment of the invention, an initial detection image data set is acquired and preprocessed to obtain training images for target vehicle body dirt detection, each containing target vehicle body dirt; a preset neural network model is trained on the training images, and high-level feature maps of the training images are extracted to obtain the corresponding vehicle body dirt detection model. Preprocessing the initial detection image data set to obtain the training images includes: determining a cropping box for each target vehicle body dirt region in each image, where each image carries a corresponding original label; determining the overlap ratio between the current cropping box and the next cropping box for each image; calculating the cropping stride suited to each image from its overlap ratio; and cropping each image according to its cropping stride to obtain the training images, each carrying a training image label obtained by transforming the original label during cropping. In this way, the acquired initial detection image data set is preprocessed, chiefly by cropping each image, to obtain training images for target vehicle body dirt detection; the high-level feature maps of the training images are then extracted to obtain the corresponding vehicle body dirt detection model, which improves the accuracy of vehicle body dirt detection and suits the detection of small dirt regions in vehicle body dirt detection tasks.
In the embodiment of the present invention, referring to fig. 9, fig. 9 is a flowchart of a vehicle body dirt detection method according to an embodiment of the present invention. The vehicle body dirt detection method is performed with a vehicle body dirt detection model obtained by the training method of the foregoing embodiment, and comprises the following steps:
Step 301: acquire a vehicle body image to be detected.
Step 302: input the vehicle body image into the vehicle body dirt detection model for detection to obtain a detection result.
Step 303: judge, according to the detection result, whether dirt is present in the vehicle body image.
Step 304: give an alarm if dirt is present in the vehicle body image.
Specifically, the vehicle body image to be detected may be any of the many frames taken from vehicle body videos; it may show a dirty vehicle body or a clean one. The image may be acquired in real time or acquired in advance and kept in a database; any vehicle body image on which the user needs to detect dirt may be used. After the vehicle body image is obtained, it is fed to the vehicle body dirt detection model as input, the model detects dirt on it, and whether dirt is present is finally judged from the model output. If dirt is present, an alarm is given so that the operator can pay attention to the corresponding vehicle and analyze it; if no dirt is present, the image can be ignored.
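Read end to end, steps 301 to 304 amount to the small inference loop sketched below; the `detect_and_alarm` helper, the assumed output layout, the score threshold, and the alarm hook are all illustrative assumptions rather than an interface defined by the patent.

```python
import torch

def detect_and_alarm(model: torch.nn.Module, image: torch.Tensor,
                     score_thresh: float = 0.5) -> bool:
    """Run the trained dirt detection model on one vehicle body image and
    return True (raising an alarm) if any detection passes the threshold."""
    model.eval()
    with torch.no_grad():
        detections = model(image.unsqueeze(0))    # assumed (N, 6): x1, y1, x2, y2, score, class
    has_dirt = bool((detections[..., 4] > score_thresh).any())
    if has_dirt:
        print("ALARM: vehicle body dirt detected")  # placeholder for the alarm module
    return has_dirt
```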
In an embodiment of the present invention, the vehicle body dirt detection method further comprises the step of analyzing the vehicle body image in which dirt is present and storing the corresponding analysis result.
Specifically, the analysis result of the vehicle body image with dirt is stored in a fixed format to guarantee storage stability and convenient retrieval. The stored information mainly comprises alarm information (basic information of the vehicle on which dirt was detected), analysis data (each captured frame of the dirty vehicle body, for staff to review), and the corresponding detection video.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a training device for a vehicle body dirt detection model according to an embodiment of the present invention. The training device 400 comprises:
a data preprocessing module 401, configured to acquire an initial detection image data set and preprocess it to obtain training images for target vehicle body dirt detection, where each training image contains target vehicle body dirt;
a training module 402, configured to train a preset neural network model on the training images and extract high-level feature maps of the training images to obtain the corresponding vehicle body dirt detection model;
as shown in fig. 11, the data preprocessing module 401 comprises:
a first determining unit 401a, configured to determine a cropping box for each target vehicle body dirt region in each image of the initial detection image data set, where each image carries a corresponding original label;
a second determining unit 401b, configured to determine an overlap ratio between the current cropping box and the next cropping box for each image in the initial detection image data set;
a calculation unit 401c, configured to calculate a cropping stride suited to each image in the initial detection image data set from that image's overlap ratio;
and a cropping unit 401d, configured to crop the corresponding image in the initial detection image data set according to its cropping stride to obtain the training images for target vehicle body dirt detection, where each training image carries a corresponding training image label obtained by transforming the original label during cropping.
Optionally, each image in the initial detection image data set is an image to be cropped, and the cropping unit 401d comprises:
a cropping subunit, configured to slide the cropping box over the corresponding image according to its cropping stride and, at each step, judge whether the last cropping step would exceed the boundary of the image to be cropped;
a circulating subunit, configured to cyclically traverse the image to be cropped with the sliding crop to obtain the training images for target vehicle body dirt detection if the boundary of the image to be cropped is not exceeded;
and a re-determining subunit, configured to re-determine the overlap ratio between the current cropping box and the next cropping box to obtain a new overlap ratio if the boundary of the image to be cropped is exceeded. The calculation unit 401c then recalculates the cropping stride suited to each image from the new overlap ratio, and the cropping unit 401d performs its step again according to the result of the calculation unit 401c.
Optionally, as shown in fig. 12, the data preprocessing module 401 further comprises:
an image combination unit 4011, configured to rotate, scale, and change the color gamut of images in the initial detection image data set, and combine them in preset positions to obtain a target combined image;
and a filling unit 4012, configured to pad the scaled-down target combined image with black borders according to the calculated scaling ratio, scaled size, and padding amount, to obtain a training image for target vehicle body dirt detection.
Optionally, as shown in fig. 13, the training module 402 comprises:
a slicing unit 4021, configured to slice the training image for target vehicle body dirt detection to obtain slice feature maps;
an image splicing unit 4022, configured to concatenate the slice feature maps to obtain a concatenated feature map;
a convolution unit 4023, configured to convolve the concatenated feature map to obtain a convolution feature map;
a normalization unit 4024, configured to apply batch normalization to the convolution feature map to obtain a normalized feature map;
an activation function unit 4025, configured to apply an activation function to the normalized feature map to obtain a target feature map;
and a training unit 4026, configured to iteratively train the preset neural network model on the slice feature maps, the concatenated feature map, the convolution feature map, the normalized feature map, and the target feature map, and extract the high-level feature map of the training image to obtain the corresponding target vehicle body dirt detection model.
Optionally, as shown in fig. 14, the training device 400 for the vehicle body dirt detection model further comprises:
a feature enhancing module 403, configured to convert the high-level feature map into a vector to obtain a target image vector corresponding to the training image for target vehicle body dirt detection;
and a calculating module 404, configured to compute a loss on the target image vector with a preset loss function and keep iterating the training of the neural network to narrow the gap between predicted and true values.
Optionally, the feature enhancing module 403 comprises:
an advanced feature splicing unit, configured to apply multi-scale max pooling to the features in the high-level feature map using a preset SPP structure and concatenate the results to obtain high-level features, where the preset SPP structure comprises three different pooling operations;
a feature enhancement unit, configured to enhance the high-level features with a preset FPN structure to handle target vehicle body dirt at different scales, where the FPN structure runs top-down, transmitting and fusing high-level feature information by up-sampling to obtain the predicted feature maps and conveying strong semantic features;
and a feature fusion unit, configured to down-sample the enhanced high-level features with a preset PAN structure, transmitting strong localization features bottom-up, to obtain the corresponding target image vector.
Optionally, the training device 400 for the vehicle body dirt detection model further comprises:
an updating unit, configured to screen out the final result through non-maximum suppression, suppressing repeated predicted coordinate boxes and coordinate boxes with low probability.
The training device 400 for the vehicle body dirt detection model provided by the embodiment of the present invention can implement each implementation of the above embodiments of the training method for the vehicle body dirt detection model, with the corresponding beneficial effects; to avoid repetition, details are not repeated here.
Referring to fig. 15, fig. 15 is a schematic structural diagram of a vehicle body dirt detection apparatus according to an embodiment of the present invention. The vehicle body dirt detection apparatus 500 operates based on the training device 400 for the vehicle body dirt detection model provided in the above embodiment, and the vehicle body dirt detection apparatus 500 comprises:
the acquiring module 501 is used for acquiring a vehicle body image to be detected;
the detection module 502 is used for inputting the vehicle body image into the vehicle body dirt detection model for detection to obtain a detection result;
a judging module 503, configured to judge whether dirt exists in the vehicle body image according to the detection result;
and the alarm module 504 is used for giving an alarm if dirt is present in the vehicle body image.
Optionally, the vehicle body dirt detecting apparatus 500 further includes:
and the storage module is used for analyzing the vehicle body image with dirt and storing a corresponding analysis result.
The vehicle body dirt detection apparatus 500 provided by the embodiment of the present invention can implement each implementation of the foregoing vehicle body dirt detection method embodiments, with the corresponding beneficial effects; to avoid repetition, details are not repeated here.
Referring to fig. 16, fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, where the electronic device 600 includes: a memory 602, a processor 601 and a computer program stored on the memory 602 and executable on the processor 601. When the processor 601 executes the computer program, the steps in the training method for the vehicle body dirt detection model provided by the above embodiment and in the vehicle body dirt detection method provided by the above embodiment are implemented.
The electronic device 600 provided by the embodiment of the present invention can implement each implementation manner in the above method embodiments and corresponding beneficial effects, and is not described herein again to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium. When the computer program is executed by a processor, each process in the training method for the vehicle body dirt detection model provided by the embodiment of the present invention and in the vehicle body dirt detection method provided by the foregoing embodiment is implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present invention; it is therefore to be understood that it does not limit the scope of the claims of the present invention.

Claims (12)

1. A method of training a vehicle body dirt detection model, the method comprising the steps of:
acquiring an initial detection image data set, and performing data preprocessing on the initial detection image data set to obtain a training image for detecting the dirt of a target vehicle body, wherein the training image comprises the dirt of the target vehicle body;
training a preset neural network model based on the training image, extracting a high-level feature map of the training image, and obtaining a corresponding vehicle body dirt detection model;
the step of carrying out data preprocessing on the initial detection image data set to obtain a training image for detecting the dirt of the target vehicle body comprises the following steps:
determining a cutting frame of each target vehicle body dirt in each image in the initial detection image data set, wherein each image in the initial detection image data set comprises a corresponding original label;
determining an overlap ratio between a current cutting frame and a next cutting frame for each image in the initial detection image data set;
calculating a cutting step number adapted to each image in the initial detection image data set according to the overlap ratio of each image in the initial detection image data set;
performing image cutting processing on the corresponding image in the initial detection image data set according to the cutting step number of each image in the initial detection image data set to obtain the training image for detecting the dirt of the target vehicle body, wherein the training image comprises a corresponding training image label, and the training image label is obtained by processing the original label during the image cutting processing;
wherein each image in the initial detection image data set is an image to be cut, and the step of performing image cutting processing on the corresponding image in the initial detection image data set according to the cutting step number of each image in the initial detection image data set to obtain the training image for detecting the dirt of the target vehicle body comprises:
performing sliding cutting on the corresponding image in the initial detection image data set according to the cutting step number of each image in the initial detection image data set, and judging, during the sliding cutting, whether the last cutting step exceeds the boundary of the image to be cut;
if the boundary of the image to be cut is not exceeded, circularly traversing the image to be cut for sliding cutting to obtain the training image for detecting the dirt of the target vehicle body;
and if the boundary of the image to be cut is exceeded, re-determining the overlap ratio between the current cutting frame and the next cutting frame to obtain a new overlap ratio, and returning, based on the new overlap ratio, to the step of calculating the cutting step number adapted to each image in the initial detection image data set according to the overlap ratio of each image in the initial detection image data set.
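By way of non-limiting illustration, the sliding-cut preprocessing recited in claim 1 may be sketched in Python as follows. The crop size, the initial overlap ratio and the specific rule used to re-determine the ratio when the last cutting step would exceed the image boundary are assumptions, and the remapping of the original labels onto the cut patches is omitted.

import math

def _axis_starts(length, crop_size, overlap_ratio):
    # Start positions of the sliding window along one axis.  If the last
    # cutting step would exceed the boundary, the overlap ratio is
    # re-determined so that the windows fit exactly (assumed rule).
    if crop_size >= length:
        return [0]
    stride = int(crop_size * (1 - overlap_ratio))
    steps = (length - crop_size) / stride + 1  # cutting step number for this ratio
    if steps != int(steps):
        steps = math.ceil(steps)
        stride = (length - crop_size) // (steps - 1)  # new, tighter overlap
    return [i * stride for i in range(int(steps))]

def sliding_crop(image, crop_size=640, overlap_ratio=0.2):
    # image: H x W (x C) array; traverse it and slide-cut training patches.
    height, width = image.shape[:2]
    ys = _axis_starts(height, crop_size, overlap_ratio)
    xs = _axis_starts(width, crop_size, overlap_ratio)
    return [image[y:y + crop_size, x:x + crop_size] for y in ys for x in xs]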
2. The method for training the vehicle body dirt detection model according to claim 1, wherein the step of acquiring an initial detection image data set and performing data preprocessing on the initial detection image data set to obtain the training image for detecting the dirt of the target vehicle body further comprises:
rotating, scaling and changing the color gamut of the images in the initial detection image data set, and combining the images in a preset direction to obtain a target combined image;
and performing black edge filling on the scaled-down target combined image according to the calculated scaling ratio, scaled size and black edge filling amount, so as to obtain the training image for detecting the dirt of the target vehicle body.
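A non-limiting Python (OpenCV) sketch of the combination and black-edge filling recited in claim 2 is given below; the 2x2 arrangement, the 640x640 target size and the centred placement of the padding are assumptions, and the rotation / scaling / color-gamut augmentation is assumed to have been applied to the four input images beforehand.

import cv2
import numpy as np

def combine_four(images, patch=(320, 320)):
    # Combine four augmented images in a preset 2x2 arrangement (assumed layout).
    tiles = [cv2.resize(img, patch) for img in images]
    return np.vstack([np.hstack(tiles[:2]), np.hstack(tiles[2:])])

def letterbox(image, target=(640, 640), pad_value=0):
    # Scale the combined image down and fill the remainder with black edges.
    h, w = image.shape[:2]
    ratio = min(target[0] / h, target[1] / w)                    # scaling ratio
    new_h, new_w = int(round(h * ratio)), int(round(w * ratio))  # scaled size
    resized = cv2.resize(image, (new_w, new_h))
    pad_h, pad_w = target[0] - new_h, target[1] - new_w          # black edge filling amount
    top, left = pad_h // 2, pad_w // 2
    return cv2.copyMakeBorder(resized, top, pad_h - top, left, pad_w - left,
                              cv2.BORDER_CONSTANT, value=pad_value)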
3. The method for training the vehicle body dirt detection model according to claim 2, wherein the step of training a preset neural network model based on the training image, extracting a high-level feature map of the training image, and obtaining a corresponding vehicle body dirt detection model comprises:
slicing the training image for detecting the dirt of the target vehicle body to obtain slice feature maps;
integrating and splicing the slice feature maps to obtain a spliced feature map;
performing convolution processing on the spliced feature map to obtain a convolution feature map;
performing batch normalization processing on the convolution feature map to obtain a normalized feature map;
performing activation function processing on the normalized feature map to obtain a target feature map;
and performing iterative training on the preset neural network model according to the slice feature maps, the spliced feature map, the convolution feature map, the normalized feature map and the target feature map, and extracting the high-level feature map of the training image to obtain the corresponding vehicle body dirt detection model.
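A non-limiting PyTorch sketch of the slice / splice / convolution / batch-normalization / activation chain recited in claim 3 is shown below; slicing every other pixel into four half-resolution maps and the SiLU activation are assumptions, and only the order of the steps follows the claim.

import torch
import torch.nn as nn

class SliceSpliceConv(nn.Module):
    def __init__(self, in_channels=3, out_channels=32):
        super().__init__()
        self.conv = nn.Conv2d(in_channels * 4, out_channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.SiLU()

    def forward(self, x):
        # Slice: take every other pixel to build four half-resolution slice feature maps.
        slices = [x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]]
        spliced = torch.cat(slices, dim=1)  # spliced feature map
        conv = self.conv(spliced)           # convolution feature map
        norm = self.bn(conv)                # normalized feature map
        return self.act(norm)               # target feature map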
4. The training method of the vehicle body dirt detection model according to claim 1, characterized by further comprising the steps of:
performing vector conversion on the high-level feature map to obtain a target image vector corresponding to the training image for detecting the dirt of the target vehicle body;
and performing loss calculation on the target image vector based on a preset loss function, and continuously performing iterative training in the neural network to reduce the difference between the predicted value and the true value.
5. The method for training the vehicle body dirt detection model according to claim 4, wherein the step of performing vector conversion on the high-level feature map to obtain a target image vector corresponding to the training image for detecting the dirt of the target vehicle body includes:
performing multi-scale maximum pooling on the features in the high-level feature map based on a preset SPP structure, and then splicing the pooled results to obtain high-level features, wherein the preset SPP structure comprises three groups of different pooling operations;
enhancing the high-level features based on a preset FPN structure so as to adapt to dirt detection on the target vehicle body scaled to different sizes;
and fusing the enhanced high-level features based on a preset PAN structure to obtain the corresponding target image vector.
6. The training method of the vehicle body dirt detection model according to claim 2, characterized by further comprising:
and screening out the final result through non-maximum suppression, suppressing repeated predicted coordinate frames and coordinate frames with low probability.
7. A vehicle body dirt detection method that is performed based on a vehicle body dirt detection model trained by the vehicle body dirt detection model training method according to any one of claims 1 to 6, the vehicle body dirt detection method comprising:
acquiring a vehicle body image to be detected;
inputting the vehicle body image into the vehicle body dirt detection model for detection to obtain a detection result;
judging whether dirt exists in the vehicle body image according to the detection result;
and giving an alarm if the vehicle body image has dirt.
8. The vehicle body dirt detection method according to claim 7, characterized by further comprising the steps of:
and analyzing the vehicle body image with dirt, and storing a corresponding analysis result.
9. A training device for a vehicle body dirt detection model is characterized by comprising:
the data preprocessing module is used for acquiring an initial detection image data set and performing data preprocessing on the initial detection image data set to obtain a training image for detecting the dirt of a target vehicle body, wherein the training image comprises the dirt of the target vehicle body;
the training module is used for training a preset neural network model based on the training image, extracting a high-level feature map of the training image, and obtaining a corresponding vehicle body dirt detection model;
the data preprocessing module comprises:
the first determining unit is used for determining a cutting frame of each target vehicle body dirt in each image in the initial detection image data set, wherein each image in the initial detection image data set comprises a corresponding original label;
the second determining unit is used for determining an overlap ratio between a current cutting frame and a next cutting frame for each image in the initial detection image data set;
the calculating unit is used for calculating a cutting step number adapted to each image in the initial detection image data set according to the overlap ratio of each image in the initial detection image data set;
and the image cutting unit is used for performing image cutting processing on the corresponding image in the initial detection image data set according to the cutting step number of each image in the initial detection image data set to obtain the training image for detecting the dirt of the target vehicle body, wherein the training image comprises a corresponding training image label, and the training image label is obtained by processing the original label during the image cutting processing;
the image cutting unit comprises:
the cutting subunit is used for performing sliding cutting on the corresponding image in the initial detection image data set according to the cutting step number of each image in the initial detection image data set, and judging, during the sliding cutting, whether the last cutting step exceeds the boundary of the image to be cut;
the circulating subunit is used for, if the boundary of the image to be cut is not exceeded, circularly traversing the image to be cut for sliding cutting to obtain the training image for detecting the dirt of the target vehicle body;
and the re-determining subunit is used for, if the boundary of the image to be cut is exceeded, re-determining the overlap ratio between the current cutting frame and the next cutting frame to obtain a new overlap ratio, and returning, based on the new overlap ratio, to the step of calculating the cutting step number adapted to each image in the initial detection image data set according to the overlap ratio of each image in the initial detection image data set.
10. A vehicle body dirt detection device, which is implemented based on the training device of the vehicle body dirt detection model according to claim 9, the vehicle body dirt detection device comprising:
the acquisition module is used for acquiring an image of a vehicle body to be detected;
the detection module is used for inputting the vehicle body image into the vehicle body dirt detection model for detection to obtain a detection result;
the judging module is used for judging whether dirt exists in the vehicle body image according to the detection result;
and the alarm module is used for giving an alarm if the vehicle body image has dirt.
11. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, wherein when executing the computer program, the processor implements the steps in the training method of the vehicle body dirt detection model according to any one of claims 1 to 6 and in the vehicle body dirt detection method according to any one of claims 7 to 8.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps in the training method of the vehicle body dirt detection model according to any one of claims 1 to 6 and in the vehicle body dirt detection method according to any one of claims 7 to 8.
CN202110514569.XA 2021-05-12 2021-05-12 Training method of vehicle body dirt detection model, vehicle body dirt detection method and device Active CN112927231B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110514569.XA CN112927231B (en) 2021-05-12 2021-05-12 Training method of vehicle body dirt detection model, vehicle body dirt detection method and device

Publications (2)

Publication Number Publication Date
CN112927231A CN112927231A (en) 2021-06-08
CN112927231B true CN112927231B (en) 2021-07-23

Family

ID=76174848

Country Status (1)

Country Link
CN (1) CN112927231B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103996186A (en) * 2014-04-29 2014-08-20 小米科技有限责任公司 Image cutting method and image cutting device
CN110147833A (en) * 2019-05-09 2019-08-20 北京迈格威科技有限公司 Facial image processing method, apparatus, system and readable storage medium storing program for executing
CN110264444A (en) * 2019-05-27 2019-09-20 阿里巴巴集团控股有限公司 Damage detecting method and device based on weak segmentation

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8553088B2 (en) * 2005-11-23 2013-10-08 Mobileye Technologies Limited Systems and methods for detecting obstructions in a camera field of view
US10402696B2 (en) * 2016-01-04 2019-09-03 Texas Instruments Incorporated Scene obstruction detection using high pass filters
KR102565849B1 (en) * 2018-05-14 2023-08-11 한국전자통신연구원 A method and Apparatus for segmentation small objects of moving pictures in real-time
CN110110722A (en) * 2019-04-30 2019-08-09 广州华工邦元信息技术有限公司 A kind of region detection modification method based on deep learning model recognition result

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PP01 Preservation of patent right

Effective date of registration: 20240109

Granted publication date: 20210723