WO2021135552A1 - Segmentation effect assessment method and apparatus based on deep learning, and device and medium - Google Patents

Segmentation effect assessment method and apparatus based on deep learning, and device and medium

Info

Publication number
WO2021135552A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
picture
segmentation effect
training
test
Prior art date
Application number
PCT/CN2020/123255
Other languages
French (fr)
Chinese (zh)
Inventor
史鹏
刘莉红
刘玉宇
肖京
Original Assignee
Ping An Technology (Shenzhen) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology (Shenzhen) Co., Ltd.
Publication of WO2021135552A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Definitions

  • This application relates to the technical field of artificial intelligence and deep learning, and in particular to a method, device, computer equipment, and readable storage medium for evaluating segmentation effects based on deep learning.
  • At present, the judgment of vehicle damage relies mainly on manual estimation: a damage assessor conducts an on-site investigation at the scene of the vehicle accident.
  • Manual classification by the damage assessor takes a great deal of time, incurs high labor costs, and is inefficient.
  • The damage assessor must classify the collected images and judge component damage from them, a process easily affected by subjective factors, so the accuracy of the vehicle damage assessment cannot be guaranteed.
  • As a result, the application of computer-vision image recognition and semantic segmentation technology to vehicle damage assessment came into being.
  • The usual process is to acquire pictures of the vehicle with an image acquisition tool and then use computer image recognition and semantic segmentation technology to automatically determine the damage condition of the vehicle.
  • The applicant realizes that, owing to the complexity of real scenes and the limited robustness of the semantic segmentation model, the component segmentation model may fail when segmenting images, and component segmentation is the basis for subsequent component recognition.
  • The technical problem to be solved by this application is therefore to provide a deep-learning-based segmentation effect evaluation method, device, computer equipment, and readable storage medium that address the segmentation failures that may occur in segmented images of vehicle components in the prior art, accurately evaluate the segmentation effect of the component segmentation model, and filter out invalid images.
  • the first aspect of the present application provides a segmentation effect evaluation method, the method includes:
  • The training labels include labels indicating a good segmentation effect and labels indicating a poor segmentation effect; a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, while a poor segmentation effect means that the segmented picture exhibits speckled dots and/or color crossover;
  • a second aspect of the present application provides a segmentation effect evaluation device, which includes:
  • the first acquisition module is configured to acquire a vehicle picture, and acquire a segmented picture according to the vehicle picture and the pre-trained component segmentation model;
  • The label creation module is configured to create training labels based on the features of the segmented picture.
  • The training labels include labels indicating a good segmentation effect and labels indicating a poor segmentation effect.
  • A good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color; a poor segmentation effect means that the segmented picture exhibits speckled dots and/or color crossover;
  • An annotation module configured to annotate the segmented picture according to the label with good segmentation effect and the label with poor segmentation effect, to obtain the annotated picture
  • the second acquisition module is configured to construct a deep learning model, and acquire a target segmentation effect evaluation model according to the deep learning model and the labeled picture;
  • the evaluation result module is used to obtain the picture to be tested for which the component segmentation has been performed, and obtain the evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
  • A third aspect of the present application provides a computer device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
  • The training labels include labels indicating a good segmentation effect and labels indicating a poor segmentation effect; a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, while a poor segmentation effect means that the segmented picture exhibits speckled dots and/or color crossover;
  • A fourth aspect of the present application provides one or more readable storage media storing computer-readable instructions.
  • When the computer-readable instructions are executed by one or more processors, the one or more processors perform the following steps:
  • The training labels include labels indicating a good segmentation effect and labels indicating a poor segmentation effect; a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, while a poor segmentation effect means that the segmented picture exhibits speckled dots and/or color crossover;
  • The above deep-learning-based segmentation effect evaluation method, device, computer equipment, and readable storage medium are implemented in one solution as follows: obtain segmented pictures generated by the pre-trained component segmentation model; create training labels based on the features of the segmented pictures, the training labels including labels indicating a good segmentation effect and labels indicating a poor segmentation effect.
  • A good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color; a poor segmentation effect means that the segmented picture exhibits speckled dots and/or color crossover.
  • Annotate the segmented pictures to obtain annotated pictures; construct a deep learning model and obtain the target segmentation effect evaluation model from the deep learning model and the annotated pictures; obtain pictures to be tested on which component segmentation has been performed.
  • The evaluation result of the picture to be tested is obtained from the picture to be tested and the target segmentation effect evaluation model.
  • This deep-learning-based segmentation effect evaluation method creates training labels based on the features of pictures segmented by the pre-trained component segmentation model and combines them with a deep learning model for training and testing to obtain the target segmentation effect evaluation model, which can accurately evaluate the segmentation effect of the component segmentation model and filter out pictures for which segmentation failed.
  • FIG. 1 is a schematic flowchart of a segmentation effect evaluation method in an embodiment of the present application
  • Fig. 2 is a schematic diagram of a feature with good segmentation effect in an embodiment of the present application
  • FIG. 3 is a schematic diagram of a feature with poor segmentation effect in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a process for obtaining a target segmentation effect evaluation model in an embodiment of the present application
  • FIG. 5 is a schematic diagram of a process for obtaining an evaluation result of obtaining a picture to be tested in an embodiment of the present application
  • FIG. 6 is a schematic diagram of a structure of a segmentation effect evaluation device in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a structure of a computer device in an embodiment of the present application.
  • the deep learning-based segmentation effect evaluation method provided by the embodiment of the present application can be applied to a scene for segmentation of vehicle parts. Specifically, as shown in FIG. 1, it may include the following steps S10-S50:
  • S10 Obtain a vehicle picture, and obtain a segmented picture according to the vehicle picture and the pre-trained component segmentation model.
  • a vehicle picture is acquired, and segmentation pictures are acquired according to the vehicle picture and the pre-trained component segmentation model.
  • A certain number of vehicle pictures may be acquired first; for example, 200,000 vehicle pictures are acquired and input to the pre-trained component segmentation model to obtain segmented pictures based on the vehicle pictures.
  • more vehicle pictures such as 500,000 vehicle pictures can be obtained, which is not limited here.
  • the pre-trained component segmentation model is a vehicle component segmentation model that has been pre-trained, and the corresponding segmentation picture can be obtained according to the vehicle picture and the pre-trained component segmentation model.
  • S20 Create training labels based on the features of the segmented picture.
  • The training labels include labels indicating a good segmentation effect and labels indicating a poor segmentation effect.
  • A good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color; a poor segmentation effect means that the segmented picture exhibits speckled dots and/or color crossover.
  • The process of segmenting pictures with the pre-trained component segmentation model may suffer from image segmentation failure.
  • When segmentation failure occurs during this process, the segmented pictures generated by the component segmentation model exhibit correspondingly different features.
  • These features can include, for example, shape features and/or color features.
  • Different features of the segmented picture, such as its shape features and/or color features, can be used to create the training labels.
  • For the color features, the component segmentation model is configured in advance to output different colors for different parts of the vehicle.
  • training labels can be created according to the characteristics of the segmented picture. Specifically, training labels can be divided into labels including good segmentation effect labels and poor segmentation effect labels.
  • a good segmentation effect may be that the segmented pictures generated by the component segmentation model have features that conform to a preset shape and/or features that are uniform in color.
  • Conforming to a preset shape can mean matching the preset contour of each component or the preset area of each component. A segmented picture of a vehicle involves different components; when the region of a component in the segmented picture matches that component's preset contour, or the area enclosed by the contour or other preset region features, the segmented picture can be considered to conform to the preset shape.
  • the feature of uniform color may be that the color of a part in the segmented picture has the feature of uniform color and/or single color.
  • A and B can respectively represent different parts of the vehicle displayed in different colors; for example, A represents the door of the vehicle displayed in dark green, and B represents the fender of the vehicle displayed in light green.
  • A poor segmentation effect means that the segmented picture generated by the component segmentation model exhibits speckled dots and/or color crossover.
  • Speckled dots may appear within the region of a component in the segmented picture.
  • The contour of the vehicle can also be used as a shape feature: when the contour of a component shows speckles or jagged edges, or when spots or dots appear in the area enclosed by a component's contour, the segmentation effect of the segmented picture can be considered poor.
  • Color crossover means that the region of a component in the segmented picture displays multiple colors. The pre-trained component segmentation model is configured in advance to output a different color for each vehicle part, so a component in a segmented picture normally corresponds to a single color; if multiple colors appear, or the colors intersect, the segmentation effect of the current segmented picture can be considered poor.
  • C can represent the window of a vehicle
  • C1 indicates the window displayed in gray
  • D can represent the fender of the vehicle
  • C2 indicates the fender displayed in dark red
  • C3 indicates the fender displayed in blue
  • C4 indicates the fender displayed in light red
  • E represents a certain part of the vehicle
  • C5 indicates that part displayed in yellow
  • C6 indicates that part displayed in pink
  • C7 indicates that part displayed in black
  • C8 indicates that part displayed in purple.
  • The contour of the car window C shows jagged-edge-like features, and the contour of the fender D shows speckle- and jagged-edge-like features.
  • The regions of the car window C and the fender D involve multiple colors: for example, the car window C involves C3 blue, and the fender D involves C3 blue and C4 light red; multiple colors appear on part E, such as C6 pink, C7 black, and C8 purple cross-fused together.
  • segmented pictures generated based on the component segmentation model can be defined and classified according to different features corresponding to the segmented pictures, and corresponding training labels can be created accordingly.
  • schematic diagrams shown in FIG. 2 and FIG. 3 are only used for illustration, and do not represent actual divided pictures.
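  • The color-crossover criterion above can be sketched as a simple heuristic. The following is only an illustration, not part of the application: the `region_colors` helper, the RGB values, and the toy region mask are all assumptions.

```python
import numpy as np

def region_colors(seg, region_mask):
    """Collect the distinct colors the segmentation model produced
    inside one component's region. Under the one-color-per-part scheme
    described in the text, more than one distinct color in a region
    suggests color crossover, i.e., a poor segmentation effect."""
    pixels = seg[region_mask]            # (N, 3) RGB pixels of the region
    return {tuple(p) for p in pixels}

# Toy 4x4 "segmented picture": a fender region that mixes two colors
seg = np.zeros((4, 4, 3), dtype=np.uint8)
seg[:, :2] = (139, 0, 0)                 # dark red half
seg[:, 2:] = (0, 0, 255)                 # blue half
mask = np.ones((4, 4), dtype=bool)       # the whole toy image is one region
crossed = len(region_colors(seg, mask)) > 1   # True -> color crossover
```

A real check would also need per-component masks from the segmentation output and some tolerance for anti-aliased boundary pixels.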
  • S30 Annotate the segmented picture according to the label with good segmentation effect and the label with poor segmentation effect to obtain the annotated picture.
  • Based on the different features of the segmented pictures generated by the component segmentation model, labels indicating good and poor segmentation effects can be created, and the pictures segmented by the pre-trained component segmentation model can be annotated with them.
  • Specifically, manual annotation can be used: segmented pictures that conform to the preset shape and/or have uniform color are annotated with the good-segmentation-effect label, and segmented pictures exhibiting speckled dots and/or color crossover are annotated with the poor-segmentation-effect label.
  • In this way, annotated pictures based on the good- and poor-segmentation-effect training labels are obtained.
  • S40 Construct a deep learning model, and obtain a target segmentation effect evaluation model based on the deep learning model and the marked pictures.
  • a deep learning model can be constructed, and a target segmentation effect evaluation model can be obtained according to the deep learning model and the marked pictures.
  • the deep learning model can be the MobilenetV1 model.
  • The MobilenetV1 model, an early lightweight deep learning model developed by Google, is small, computationally cheap, and fast, and can be applied to portable terminal devices.
  • The deep learning model can also be the MobilenetV2 model.
  • The MobilenetV2 model is a newer-generation lightweight deep learning model released by Google. Compared with the earlier MobilenetV1 model, MobilenetV2 improves the accuracy and computing speed of deep learning model training while reducing the number of training parameters, that is, the amount of computation and memory usage, so the deep learning model can be applied to mobile terminal devices.
  • the following embodiments mainly take the MobilenetV2 model as an example, but it is not limited to only the MobilenetV2 model.
  • the target effect evaluation model may also be stored in the blockchain network.
  • In the following, the deep learning model is based on the MobilenetV2 model.
  • the target segmentation effect evaluation model is obtained according to the deep learning model and the labeled pictures. Specifically, as shown in FIG. 4, the following steps may be included S401-S404:
  • S401 Divide the marked pictures into a training set and a test set.
  • the labeled pictures can be divided into training set and test set.
  • The annotated pictures can be divided into a training set and a test set according to a preset ratio.
  • Exemplarily, the preset ratio can be set to 3:1, in which case the annotated pictures are divided into a training set and a test set at a ratio of 3:1. It should be noted that the preset ratio can also be, for example, 3:2, which is not limited here and can be set according to the actual situation.
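  • The 3:1 split can be sketched as follows; the `split_dataset` helper and the fixed seed are illustrative assumptions, not from the application:

```python
import random

def split_dataset(items, ratio=(3, 1), seed=0):
    """Shuffle the annotated pictures and divide them into a training
    set and a test set according to a preset ratio (3:1 by default)."""
    rng = random.Random(seed)            # fixed seed for reproducibility
    items = list(items)
    rng.shuffle(items)
    n_train = len(items) * ratio[0] // sum(ratio)
    return items[:n_train], items[n_train:]

# e.g. 200,000 annotated pictures -> 150,000 training / 50,000 test
train_set, test_set = split_dataset(range(200000), ratio=(3, 1))
```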
  • S402 Perform data amplification on the training set and the test set to obtain an amplified training set and an amplified test set.
  • data amplification can also be performed on the training set and test set in the marked pictures, so that a new training set and test set can be formed.
  • the training set and the test set can be augmented with data.
  • the data augmentation can include, but is not limited to, random cropping, random horizontal flipping, and random rotation, etc.
  • The annotated pictures in the training set and test set can be randomly cropped, randomly horizontally flipped, or randomly rotated, or assigned random contrast, brightness, and saturation, yielding more new training and test samples, that is, an augmented training set and an augmented test set.
  • In this way, the data volume of the training set and test set can be increased by orders of magnitude over the original, which increases the diversity of the training set, helps avoid overfitting, and improves the recognition performance of the subsequent model.
  • data amplification can also be achieved in other ways, such as acquiring more data sets, which is not limited here.
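  • A minimal sketch of one such augmentation pass on a single image array follows (NumPy stands in for a full pipeline; the crop margin, flip probability, and 90-degree rotation granularity are simplifying assumptions):

```python
import numpy as np

def augment(img, rng, crop_margin=8):
    """One random augmentation pass: random horizontal flip, random
    rotation (restricted to multiples of 90 degrees here for
    simplicity), and a random crop. A full pipeline would also jitter
    contrast, brightness, and saturation."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                          # random horizontal flip
    img = np.rot90(img, k=int(rng.integers(0, 4)))  # random rotation
    h, w = img.shape[:2]
    ch, cw = h - crop_margin, w - crop_margin       # crop target size
    y = int(rng.integers(0, h - ch + 1))
    x = int(rng.integers(0, w - cw + 1))
    return img[y:y + ch, x:x + cw]                  # random crop

rng = np.random.default_rng(0)
img = np.zeros((64, 64, 3), dtype=np.uint8)
augmented = augment(img, rng)
```

Applying `augment` several times per annotated picture yields the new samples that make up the augmented training and test sets.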
  • The augmented training set can be input into the MobilenetV2 model for training to obtain the training model.
  • The model can be saved every epoch, that is, the model is trained on the new training set and saved once per epoch.
  • The pictures in the augmented training set can also be preprocessed before being input into the MobilenetV2 model for training.
  • The preprocessing can include, but is not limited to, picture cropping or picture scaling.
  • For example, the pictures in the augmented training set are uniformly scaled to a size of 365*365 and/or uniformly cropped to a size of 320*320 before being input into the MobilenetV2 model for training.
  • In this embodiment, the MobilenetV2 model is preferred as the deep learning model.
  • The MobilenetV2 model is a lightweight deep learning model; compared with the MobilenetV1 model, it greatly reduces the amount of computation while improving the efficiency and accuracy of model training.
  • S404 Input the amplified test set into the training model for testing, and obtain the target segmentation effect evaluation model according to the accuracy of the test.
  • the augmented test set can be input to the training model for testing, and the target segmentation effect evaluation model can be obtained according to the accuracy of the test.
  • Step S404, that is, obtaining the target segmentation effect evaluation model according to the accuracy of the test, may include steps S4041-S4042:
  • S4041 Adjust the model parameters according to the accuracy of the test, and obtain the test models whose accuracy scores rank in the top K.
  • Testing the training model on the augmented test set involves related model parameters.
  • The model parameters can include, but are not limited to, the learning rate, optimization method, regularization term, and batch size, and they can be adjusted continuously according to the accuracy of the test. Illustratively, the learning rate, optimizer, regularization term, batch size, and other model parameters are tuned according to the accuracy on the test set, and training and testing run for enough epochs until the model converges; among the models obtained under the different parameter settings, the test models whose test accuracy ranks in the top K are selected and used as the target segmentation effect model.
  • Because the target segmentation effect model is generated by training on the annotated pictures, it carries the attributes of the good- and poor-segmentation-effect training labels.
  • S4042 Use the test model with the top K rankings in the accuracy score as the target segmentation effect evaluation model.
  • the test models with the top K positions in the accuracy score can be used as the target segmentation effect evaluation model.
  • The top K test models can be selected first; for example, the top 2 or the top 3 test models are selected, which is not limited here and can be chosen according to the actual situation. The top K test models obtained in this way are used as the target segmentation effect evaluation model.
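  • The top-K selection can be sketched as follows; the checkpoint names and accuracy values are hypothetical:

```python
def top_k_models(checkpoints, k=2):
    """Rank checkpoints saved once per epoch by test accuracy and keep
    the top K; these jointly serve as the target segmentation effect
    evaluation model."""
    ranked = sorted(checkpoints, key=lambda c: c[1], reverse=True)
    return ranked[:k]

# hypothetical (checkpoint, test accuracy) pairs
checkpoints = [("epoch_10", 0.81), ("epoch_20", 0.88), ("epoch_30", 0.86)]
best = top_k_models(checkpoints, k=2)
```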
  • S50 Obtain a picture to be tested that has undergone component segmentation, and obtain an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
  • The evaluation result is obtained from the picture to be tested, on which component segmentation has been performed, and the target segmentation effect evaluation model.
  • With the evaluation result of the picture to be tested, the segmentation effect of the component segmentation model can be evaluated accordingly, so training can continue based on the evaluation result to improve the accuracy of the component segmentation model.
  • Segmentation quality is classified as good or poor according to the process defined above, and invalid segmented pictures can also be filtered out in advance according to the evaluation result, thereby improving the accuracy of the obtained segmented pictures.
  • the evaluation result of the picture to be tested is obtained according to the picture to be tested and the target segmentation effect evaluation model. Specifically, as shown in FIG. 5, steps S501-S503 may be included:
  • S501 Scale the picture to be tested to a preset size to obtain a scaled picture.
  • picture preprocessing can be performed on each segmented picture.
  • the picture preprocessing can include, but is not limited to, for example, picture scaling and/or picture cropping, which is not limited here.
  • The pictures to be tested that have undergone component segmentation may come in a variety of sizes, and the target effect evaluation model places certain restrictions on the size of its input pictures, so pictures of different sizes need to be adjusted to a fixed size. Specifically, each picture to be tested can be scaled to a preset size to obtain a scaled picture of that size, where the preset size can be set to, for example, 365*365 or 480*480, which is not limited here.
  • This embodiment takes a preset size of 365*365 as an example: each picture to be tested is scaled to 365*365, yielding scaled pictures based on the preset size of 365*365.
  • In this way, the data volume per sample can be reduced while the number of samples remains unchanged, and adjusting to a fixed, uniform format also facilitates processing in the other steps.
  • S502 Perform edge cropping on the zoomed picture to obtain an image block used to input the target segmentation effect evaluation model to obtain the evaluation result.
  • picture preprocessing may be performed again on the zoomed picture.
  • the zoomed picture may also be edge cropped to obtain an image block used to input the target segmentation effect evaluation model to obtain the evaluation result.
  • The scaled picture can be edge-cropped; it can be understood that the edges represent unimportant features in the scaled picture.
  • The scaled picture is cropped around its center point: the image block extending from the center point outward to the four sides to a size of 320*320 is retained, that is, the scaled picture is cropped into a 320*320 image block.
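  • The center crop from a 365*365 scaled picture to a 320*320 image block can be sketched as (the `center_crop` helper is illustrative; arrays stand in for real pictures):

```python
import numpy as np

def center_crop(img, size=320):
    """Crop a (H, W, C) scaled picture to size x size around its center
    point, discarding the edges, which the text treats as unimportant
    features."""
    h, w = img.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]

scaled = np.zeros((365, 365, 3), dtype=np.uint8)  # already-scaled picture
image_block = center_crop(scaled)                 # 320*320 image block
```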
  • The evaluation result can be obtained from the image block and the target segmentation effect evaluation model. Specifically, a corresponding forward-propagation (inference) program is created, and the preprocessed image block is input into the target segmentation effect model to obtain the evaluation result for the specific picture to be tested.
  • Because the target segmentation effect model is generated from the training labels, that is, it carries the good- and poor-segmentation-effect attributes, when segmented pictures generated by different component segmentation models are input into the target segmentation effect model, one of the two labels, good segmentation effect or poor segmentation effect, is output as the evaluation result.
  • In this way, the segmentation effect of different component segmentation models can be obtained from the target segmentation effect model.
  • Step S503, that is, obtaining the evaluation result according to the image block and the target segmentation effect evaluation model, may further include steps S5031-S5032:
  • S5031 Input the image block into the target segmentation effect evaluation model, and obtain test results corresponding to the target segmentation effect evaluation model.
  • The test models whose accuracy scores rank in the top K are used as the target segmentation effect evaluation model.
  • The two test models with the top-2 accuracy scores can be selected as the target segmentation effect evaluation model, and the test results corresponding to the target segmentation effect evaluation model can be obtained from these two test models and the image blocks.
  • Specifically, after a picture to be tested is preprocessed into an image block with a fixed size of 320*320 or 365*365, the image block can be input into the two test models for prediction, and the corresponding test result of each of the two test models is obtained.
  • Selecting the two test models with the top-2 accuracy scores as the target segmentation effect evaluation model is only an example; the three test models with the top-3 scores, or the four test models with the top-4 scores, can also be selected, which is not limited here.
  • S5032 Perform a probability average calculation on the test result to obtain an evaluation result corresponding to the target segmentation effect evaluation model.
  • A probability average of the corresponding test results can be computed to obtain the evaluation result corresponding to the target segmentation effect evaluation model.
  • For example, the test result generated by one test model is [0.6, 0.5, 0.05, 0.05]
  • the test result generated by the other test model is [0.4, 0.3, 0.05, 0.05]
  • the two are merged by averaging the probabilities
  • and the resulting evaluation is [0.5, 0.4, 0.05, 0.05].
  • Averaging the probabilities of the test results reduces the influence of any single model on the evaluation result, improving the accuracy of the obtained evaluation.
  • Selecting the two test models with the top-2 accuracy scores as the target segmentation effect evaluation model is only an example and is not limiting; the top-3 or top-4 test models can also be selected, and the corresponding probability average of their test results computed to obtain the evaluation result based on the target segmentation effect evaluation model.
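  • The probability-average merge in the worked example above can be sketched as:

```python
def average_predictions(results):
    """Element-wise mean of the per-model probability vectors: with
    [0.6, 0.5, 0.05, 0.05] and [0.4, 0.3, 0.05, 0.05] as inputs, the
    merged evaluation is [0.5, 0.4, 0.05, 0.05]."""
    n = len(results)
    return [sum(vals) / n for vals in zip(*results)]

avg = average_predictions([[0.6, 0.5, 0.05, 0.05],
                           [0.4, 0.3, 0.05, 0.05]])
```

The same function handles three or four test models unchanged, since it averages over however many result vectors it receives.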
  • a segmentation effect evaluation device based on deep learning is provided, and the functions implemented by the segmentation effect evaluation device correspond one-to-one with the steps of the segmentation effect evaluation method in the foregoing embodiment.
• the segmentation effect evaluation device may include a first acquisition module 10, a label creation module 20, an annotation module 30, a second acquisition module 40, and an evaluation result module 50.
  • the detailed description of each functional module is as follows:
• the first acquisition module 10 is configured to acquire a vehicle picture, and to acquire a segmented picture according to the vehicle picture and the pre-trained component segmentation model;
• the label creation module 20 is configured to create training labels according to the features of the segmented picture; the training labels include labels with good segmentation effect and labels with poor segmentation effect;
• the annotation module 30 is used to annotate the segmented pictures according to the labels with good segmentation effect and the labels with poor segmentation effect, to obtain annotated pictures;
• the second acquisition module 40 is used to construct a deep learning model, and to acquire a target segmentation effect evaluation model according to the deep learning model and the annotated pictures;
• the evaluation result module 50 is used to obtain a picture to be tested on which component segmentation has been performed, and to obtain an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
• the deep learning model is the MobilenetV2 model;
• the second acquisition module 40 is also used for:
• inputting the augmented test set into the training models for testing, and obtaining the target segmentation effect evaluation model according to the accuracy of the test.
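The accuracy-based selection just described can be sketched as follows. This is an illustrative Python outline; the function names and the toy stand-in models are hypothetical and not from the application, which uses trained MobilenetV2 instances rather than the trivial callables below.

```python
def select_top_models(models, test_set, k=2):
    """Score each candidate model on the (augmented) test set and keep the
    k models with the highest accuracy as the target evaluation ensemble."""
    def accuracy(model):
        correct = sum(1 for x, label in test_set if model(x) == label)
        return correct / len(test_set)
    return sorted(models, key=accuracy, reverse=True)[:k]

# Toy stand-ins for trained models: each maps an input to a predicted label.
def good(x):  # always right on the toy data below
    return x % 2

def ok(x):    # right half the time
    return 0

def bad(x):   # always wrong on the toy data below
    return 1 - x % 2

test_set = [(0, 0), (1, 1), (2, 0), (3, 1)]
top2 = select_top_models([bad, ok, good], test_set, k=2)
print([m.__name__ for m in top2])  # ['good', 'ok']
```

Passing `k=3` or `k=4` yields the top-3/top-4 variants mentioned elsewhere in the description.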
  • each module in the above-mentioned segmentation effect evaluation device can be implemented in whole or in part by software, hardware and a combination thereof.
• the above-mentioned modules may be embedded in hardware in, or independent of, the processor of the computer device, or may be stored in software in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device 60 of this embodiment includes a processor 61, a memory 62, and computer-readable instructions 63 that are stored in the memory 62 and can run on the processor 61.
• when the processor 61 executes the computer-readable instructions 63, the steps of the segmentation effect evaluation method in the foregoing embodiment are implemented; to avoid repetition, details are not repeated here.
• alternatively, when the processor 61 executes the computer-readable instructions 63, the following steps are implemented:
• creating training labels according to the features of the segmented picture, the training labels including labels with good segmentation effect and labels with poor segmentation effect, where a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture has speckles and/or crossing colors;
  • one or more readable storage media storing computer readable instructions are provided.
• the readable storage media provided in this embodiment include non-volatile readable storage media and volatile readable storage media.
  • the one or more processors execute the following steps:
• creating training labels according to the features of the segmented picture, the training labels including labels with good segmentation effect and labels with poor segmentation effect, where a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture has speckles and/or crossing colors;
• the computer-readable storage medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, etc.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
• RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), Rambus dynamic RAM (RDRAM), etc.
• the blockchain referred to in this application is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms.
• Blockchain is essentially a decentralized database: a chain of data blocks linked using cryptographic methods. Each data block contains a batch of network transaction information, used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.

Abstract

The present application relates to artificial intelligence and deep learning. Disclosed are a segmentation effect assessment method and apparatus based on deep learning, and a device and a storage medium. The method comprises: acquiring a vehicle image, and acquiring segmented images according to the vehicle image and a pre-trained component segmentation model; creating training labels according to features of the segmented images, wherein the training labels comprise good segmentation effect labels indicating that the segmented images have the features of conforming to a preset shape and/or being uniform in color, and bad segmentation effect labels indicating that the segmented images have the features of speckles and/or color crossover; labeling the segmented images according to the training labels, so as to obtain labeled images; acquiring a target segmentation effect assessment model according to a constructed deep learning model and the labeled images; and acquiring an image to be tested, and acquiring an assessment result according to the image to be tested and the target segmentation effect assessment model. By means of the segmentation effect assessment method proposed in the present application, the segmentation effect of a component segmentation model can be assessed, and an image that fails to be segmented by the segmentation model can be filtered out.

Description

Segmentation Effect Evaluation Method, Apparatus, Device and Medium Based on Deep Learning
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on June 28, 2020, with application number 202010599352.9 and entitled "Segmentation effect evaluation method, apparatus, device and medium based on deep learning", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of artificial intelligence and deep learning, and in particular to a deep learning-based segmentation effect evaluation method, apparatus, computer device, and readable storage medium.
Background
With the surge in car ownership, traffic collisions occur frequently, and the vehicle damage problems they bring are increasing. For this reason, insurance companies have successively launched auto insurance services to protect the vehicles and property of the public. When an insured vehicle is involved in a traffic accident, the insurance company needs to arrange a loss assessment of the damaged vehicle and determine the amount of the auto insurance claim according to the severity of the damage; auto insurance loss assessment is a key link in the claims process.
At present, the judgment of vehicle damage relies mainly on manual estimation: a loss assessor conducts an on-site survey and judgment at the scene of the vehicle accident. Manual classification by the assessor takes a great deal of time, requires substantial labor costs, and is inefficient, which is not conducive to the rapid settlement of auto insurance claims. In addition, the assessor needs to classify the different images collected and distinguish the damage to each component from the images, which is easily affected by various subjective factors, so the accuracy of the assessment of the vehicle's damage cannot be guaranteed.
Therefore, to solve the above problems, applying computer vision image recognition and semantic segmentation technology to the vehicle loss assessment scenario came into being. In the prior art, the usual process is: acquire a picture of the vehicle through an image acquisition tool, and then use automatic image recognition and semantic segmentation technology to reflect and judge the damage condition of the vehicle. However, in the current technology, the applicant has realized that, due to the complexity of real scenes and the limited robustness of semantic segmentation models, the image segmentation performed by a component segmentation model may fail, while component segmentation is the basis for subsequent component recognition. If the segmentation effect of the segmentation model cannot be accurately obtained, the subsequent accurate assessment of the damage features of the vehicle's components may be affected; inconsistent loss assessment results may then occur, causing economic losses to vehicle owners or insurance companies. Therefore, the evaluation of the effect of vehicle component segmentation needs further improvement.
Summary of the Invention
The technical problem to be solved by the present application is, in view of the possible segmentation failure of vehicle component segmentation pictures in the prior art, to provide a deep learning-based segmentation effect evaluation method, apparatus, computer device, and readable storage medium, so as to accurately evaluate the segmentation effect of a component segmentation model and filter out pictures for which segmentation has failed.
A first aspect of the present application provides a segmentation effect evaluation method, the method comprising:
acquiring a vehicle picture, and acquiring a segmented picture according to the vehicle picture and a pre-trained component segmentation model;
creating training labels according to the features of the segmented picture, the training labels including labels with good segmentation effect and labels with poor segmentation effect, where a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture has speckles and/or crossing colors;
annotating the segmented picture according to the labels with good segmentation effect and the labels with poor segmentation effect, to obtain an annotated picture;
constructing a deep learning model, and obtaining a target segmentation effect evaluation model according to the deep learning model and the annotated picture;
obtaining a picture to be tested on which component segmentation has been performed, and obtaining an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
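The five steps recited above can be sketched end to end as follows. This is only an illustrative Python outline: every function name, the "uniform_color" feature flag, and the toy stand-in models are hypothetical, and the real method trains a deep learning model rather than the trivial rule used here.

```python
def evaluate_segmentation_effect(vehicle_pictures, part_segmentation_model,
                                 train_evaluator, picture_under_test):
    # Step 1: obtain segmented pictures from the pre-trained component model.
    segmented = [part_segmentation_model(p) for p in vehicle_pictures]

    # Step 2: the two training labels derived from the picture features.
    GOOD, POOR = "segmentation_good", "segmentation_poor"

    # Step 3: annotate (manual in practice; here a stand-in labelling rule
    # keyed on a hypothetical "uniform_color" feature flag).
    annotated = [(pic, GOOD if pic["uniform_color"] else POOR)
                 for pic in segmented]

    # Step 4: train the deep-learning evaluator on the annotated pictures.
    evaluator = train_evaluator(annotated)

    # Step 5: evaluate a new, already-segmented picture.
    return evaluator(picture_under_test)

# Toy stand-ins: a "segmentation model" that flags even-numbered pictures as
# uniformly coloured, and a "trainer" that returns a trivial evaluator.
part_model = lambda p: {"uniform_color": p % 2 == 0}
train_evaluator = lambda data: (
    lambda pic: "segmentation_good" if pic["uniform_color"]
    else "segmentation_poor")

print(evaluate_segmentation_effect([0, 1, 2], part_model, train_evaluator,
                                   {"uniform_color": False}))
# prints "segmentation_poor"
```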
A second aspect of the present application provides a segmentation effect evaluation apparatus, the apparatus comprising:
a first acquisition module, configured to acquire a vehicle picture, and to acquire a segmented picture according to the vehicle picture and a pre-trained component segmentation model;
a label creation module, configured to create training labels according to the features of the segmented picture, the training labels including labels with good segmentation effect and labels with poor segmentation effect, where a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture has speckles and/or crossing colors;
an annotation module, configured to annotate the segmented picture according to the labels with good segmentation effect and the labels with poor segmentation effect, to obtain an annotated picture;
a second acquisition module, configured to construct a deep learning model, and to obtain a target segmentation effect evaluation model according to the deep learning model and the annotated picture;
an evaluation result module, configured to obtain a picture to be tested on which component segmentation has been performed, and to obtain an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
A third aspect of the present application provides a computer device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
acquiring a vehicle picture, and acquiring a segmented picture according to the vehicle picture and a pre-trained component segmentation model;
creating training labels according to the features of the segmented picture, the training labels including labels with good segmentation effect and labels with poor segmentation effect, where a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture has speckles and/or crossing colors;
annotating the segmented picture according to the labels with good segmentation effect and the labels with poor segmentation effect, to obtain an annotated picture;
constructing a deep learning model, and obtaining a target segmentation effect evaluation model according to the deep learning model and the annotated picture;
obtaining a picture to be tested on which component segmentation has been performed, and obtaining an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
A fourth aspect of the present application provides one or more readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
acquiring a vehicle picture, and acquiring a segmented picture according to the vehicle picture and a pre-trained component segmentation model;
creating training labels according to the features of the segmented picture, the training labels including labels with good segmentation effect and labels with poor segmentation effect, where a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture has speckles and/or crossing colors;
annotating the segmented picture according to the labels with good segmentation effect and the labels with poor segmentation effect, to obtain an annotated picture;
constructing a deep learning model, and obtaining a target segmentation effect evaluation model according to the deep learning model and the annotated picture;
obtaining a picture to be tested on which component segmentation has been performed, and obtaining an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
Beneficial Effects
In one solution implemented by the above deep learning-based segmentation effect evaluation method, apparatus, computer device and readable storage medium: segmented pictures produced by a pre-trained component segmentation model are acquired; training labels are created according to the features of the segmented pictures, the training labels including labels with good segmentation effect and labels with poor segmentation effect, where a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture has speckles and/or crossing colors; the segmented pictures are annotated according to the two kinds of labels to obtain annotated pictures; a deep learning model is constructed, and a target segmentation effect evaluation model is obtained according to the deep learning model and the annotated pictures; a picture to be tested on which component segmentation has been performed is obtained, and its evaluation result is obtained according to the picture to be tested and the target segmentation effect evaluation model.
By creating training labels from the features of the pictures segmented by the pre-trained component segmentation model, and testing and training a deep learning model to obtain the target segmentation effect evaluation model, the present application can accurately evaluate the segmentation effect of the component segmentation model and filter out pictures for which segmentation has failed.
The details of one or more embodiments of the present application are set forth in the drawings and description below; other features and advantages of the present application will become apparent from the description, the drawings, and the claims.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a segmentation effect evaluation method in an embodiment of the present application;
Fig. 2 is a schematic diagram of features indicating a good segmentation effect in an embodiment of the present application;
Fig. 3 is a schematic diagram of features indicating a poor segmentation effect in an embodiment of the present application;
Fig. 4 is a schematic flowchart of obtaining a target segmentation effect evaluation model in an embodiment of the present application;
Fig. 5 is a schematic flowchart of obtaining an evaluation result of a picture to be tested in an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a segmentation effect evaluation apparatus in an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a computer device in an embodiment of the present application.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
The deep learning-based segmentation effect evaluation method provided by the embodiments of the present application can be applied to scenarios of vehicle component segmentation. Specifically, as shown in Fig. 1, it may include the following steps S10-S50:
S10: Acquire a vehicle picture, and acquire a segmented picture according to the vehicle picture and a pre-trained component segmentation model.
In one embodiment, a certain number of vehicle pictures can first be acquired, for example 200,000 vehicle pictures, and these 200,000 vehicle pictures are input into the pre-trained component segmentation model to obtain segmented pictures based on the vehicle pictures; of course, more vehicle pictures, for example 500,000, can also be acquired, which is not limited here. In this embodiment, it can be understood that the pre-trained component segmentation model is a vehicle component segmentation model that has been trained in advance, so the corresponding segmented pictures can be obtained according to the vehicle pictures and the pre-trained component segmentation model.
S20: Create training labels according to the features of the segmented pictures. The training labels include labels with good segmentation effect and labels with poor segmentation effect, where a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture has speckles and/or crossing colors.
In one application scenario, due to the complexity of real traffic scenes and the limited robustness of the component segmentation model, the process of segmenting pictures by the pre-trained component segmentation model may suffer from segmentation failure. It can be understood that, if segmentation failure occurs, the segmented pictures produced by the component segmentation model will exhibit different features, which may include, for example, shape features and/or color features; training labels can then be created according to these different features. Here, the color feature can be that the component segmentation model is configured in advance to output different colors for different vehicle components. In one embodiment, training labels can be created according to the features of the segmented pictures; specifically, the training labels can be divided into labels with good segmentation effect and labels with poor segmentation effect.
Based on the features of the segmented pictures, in one embodiment, a good segmentation effect can mean that a segmented picture produced by the component segmentation model conforms to a preset shape and/or has uniform color. Here, conforming to a preset shape can mean conforming to the preset contour of each component or the preset region of each component. It can be understood that a vehicle segmentation picture involves different components: when the region of a component in the segmented picture conforms to the preset contour of that component, the area enclosed by the contour, or other preset regional features, the segmented picture can be considered to conform to the preset shape. Uniform color can mean that the color of a component in the segmented picture is uniform and/or single. Specifically, as shown in Fig. 2, A and B can respectively represent different components of the vehicle displayed in different colors after segmentation; for example, A represents the vehicle door displayed in dark green, and B represents the vehicle fender displayed in light green. Whether the actual segmented picture conforms to a preset shape and/or has uniform color can then be checked; if it does, the segmented picture can be determined to have the features of a good segmentation effect, i.e., the segmented picture shown in Fig. 2 can be determined to have a good segmentation effect.
In one embodiment, a poor segmentation effect means that a segmented picture produced by the component segmentation model has speckles and/or crossing colors. Speckles can mean that the region of a component in the segmented picture shows speckled artifacts; specifically, taking the contours of the vehicle components as shape features, if speckles or jagged edges appear on the contour of a component, or spots or speckles appear in the area enclosed by its contour, the segmentation effect of the picture can be considered poor. Crossing colors can mean that the region of a component in the segmented picture displays multiple different colors, or that different colors cross and blend. It can be understood that, since the color features are set in advance for the pre-trained component segmentation model (i.e., different colors are output for different vehicle components), a given component in a segmented picture normally corresponds to a single color; if multiple colors appear, or colors cross, the segmentation effect of the current picture can be considered poor. Exemplarily, as shown in Fig. 3, C can represent the vehicle window, with C1 representing the window displayed in gray; D can represent the vehicle fender, with C2 representing dark red, C3 blue, and C4 light red; E can represent some part of the vehicle, with C5 representing yellow, C6 pink, C7 black, and C8 purple. Jagged features appear on the contour of window C, speckle-like and jagged features appear on the contour of fender D, and the regions of window C and fender D involve multiple colors (for example, window C involves the blue of C3, and fender D involves the blue of C3 and the light red of C4); multiple colors appear on part E, for example the pink of C6, the black of C7, and the purple of C8 crossing and blending. Based on the features of components C, D, and E, it can be determined that the segmented picture shown in Fig. 3 has the features of a poor segmentation effect.
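The application recognizes these good/poor features with a trained deep learning model rather than a hand-written rule. Purely to illustrate what "uniform color" versus "crossing colors" means for a single component region, the following hypothetical pixel-count heuristic (threshold chosen arbitrarily) flags a region as well segmented when one color dominates:

```python
from collections import Counter

def dominant_color_share(region_pixels):
    """Fraction of the region covered by its most common color."""
    counts = Counter(region_pixels)
    return counts.most_common(1)[0][1] / len(region_pixels)

def looks_well_segmented(region_pixels, threshold=0.95):
    # A single dominant color suggests "uniform color"; many interleaved
    # colors suggest the "crossing colors" failure mode described above.
    return dominant_color_share(region_pixels) >= threshold

door = ["dark_green"] * 98 + ["light_green"] * 2       # near-uniform region
fender = ["blue"] * 40 + ["pink"] * 35 + ["red"] * 25  # colors crossing

print(looks_well_segmented(door))    # True
print(looks_well_segmented(fender))  # False
```

A heuristic of this kind would misjudge many real cases (e.g. legitimate jagged contours); that is precisely why the application learns the distinction from annotated examples instead.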
In the foregoing embodiments, it can be understood that the segmented pictures generated by the component segmentation model can be defined and classified according to the different features of each segmented picture, and the corresponding training labels can be created accordingly. It should be noted that the schematic diagrams shown in FIG. 2 and FIG. 3 are for illustration only and do not represent actual segmented pictures.
S30: Annotate the segmented pictures according to the good-segmentation-effect label and the poor-segmentation-effect label to obtain annotated pictures.
In one application scenario, based on the different features of the segmented pictures generated by the component segmentation model, a good-segmentation-effect label and a poor-segmentation-effect label can be created, and the segmented pictures generated by the pre-trained component segmentation model can be annotated according to these two labels. Specifically, manual annotation may be used: segmented pictures whose features conform to the preset shape and/or show uniform color are annotated with the good-segmentation-effect label, and segmented pictures with flower-dot and/or color-crossing features are annotated with the poor-segmentation-effect label. In this way, annotated pictures carrying the good-segmentation-effect and poor-segmentation-effect training labels are obtained.
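The labeling rule described above amounts to a simple decision: a segmented picture showing flower dots or color crossing receives the poor-effect label, otherwise the good-effect label. A minimal sketch in Python follows; the function name, flag names and label strings are illustrative assumptions, not part of the embodiment:

```python
def assign_training_label(has_flower_dots: bool, has_color_crossing: bool) -> str:
    """Map the observed features of a segmented picture to a training label.

    A picture with flower-dot and/or color-crossing features is labeled as
    having a poor segmentation effect; otherwise the contour conforms to the
    preset shape and each component shows one uniform color, so the picture
    is labeled as having a good segmentation effect.
    """
    if has_flower_dots or has_color_crossing:
        return "poor_segmentation_effect"
    return "good_segmentation_effect"

# Jagged/spotted contour -> poor; clean, uniformly colored picture -> good
print(assign_training_label(True, False))   # poor_segmentation_effect
print(assign_training_label(False, False))  # good_segmentation_effect
```

In practice the two flags would come from a human annotator inspecting the segmented picture, as the embodiment describes.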
S40: Construct a deep learning model, and obtain a target segmentation effect evaluation model according to the deep learning model and the annotated pictures.
In one embodiment, based on artificial intelligence and deep learning techniques, a deep learning model can be constructed, and the target segmentation effect evaluation model can be obtained according to the deep learning model and the annotated pictures. In this embodiment, the deep learning model may be the MobilenetV1 model. It can be understood that the MobilenetV1 model, an early lightweight deep learning model developed by Google, is lightweight, computationally inexpensive and fast, and can be applied to portable terminal devices. In a preferred embodiment, the deep learning model may instead be the MobilenetV2 model, a newer-generation lightweight deep learning model released by Google. Compared with the earlier MobilenetV1 model, the MobilenetV2 model further improves training accuracy and computation speed on the one hand, and reduces the number of training parameters on the other, i.e. it reduces the amount of computation and the memory footprint, so that the deep learning model can be applied to mobile terminal devices. It should be noted that the following embodiments mainly take the MobilenetV2 model as an example, but are not limited to the MobilenetV2 model.
In one embodiment, in order to further ensure the privacy and security of the target segmentation effect evaluation model, the target segmentation effect evaluation model may also be stored in a blockchain network.
In one embodiment, where the deep learning model is the MobilenetV2 model, the above step S40 of obtaining the target segmentation effect evaluation model according to the deep learning model and the annotated pictures may specifically include, as shown in FIG. 4, the following steps S401-S404:
S401: Divide the annotated pictures into a training set and a test set.
Based on the annotated pictures carrying the training labels, the annotated pictures can be divided into a training set and a test set. Specifically, the annotated pictures can be divided into a training set and a test set according to a certain preset ratio. Exemplarily, the preset ratio may be set to 3:1, in which case the annotated pictures are divided into a training set and a test set at a ratio of 3:1. It should be noted that the preset ratio may also be, for example, 3:2; it is not limited here and can be set according to the actual situation.
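The ratio-based split described above can be sketched in a few lines of Python; the helper name and the fixed shuffle seed are illustrative assumptions, not part of the embodiment:

```python
import random

def split_annotated_pictures(pictures, ratio=(3, 1), seed=42):
    """Shuffle the annotated pictures and split them into a training set
    and a test set according to a preset ratio such as 3:1 (or 3:2, etc.)."""
    items = list(pictures)
    random.Random(seed).shuffle(items)  # deterministic shuffle for the sketch
    n_train = len(items) * ratio[0] // (ratio[0] + ratio[1])
    return items[:n_train], items[n_train:]

# With 100 annotated pictures and a 3:1 ratio: 75 for training, 25 for testing
train_set, test_set = split_annotated_pictures(range(100), ratio=(3, 1))
print(len(train_set), len(test_set))  # 75 25
```

Changing `ratio` to `(3, 2)` yields the alternative 3:2 split mentioned in the embodiment.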
S402: Perform data augmentation on the training set and the test set to obtain an augmented training set and an augmented test set.
In one embodiment, based on the divided training set and test set, data augmentation can further be performed on the training set and the test set of annotated pictures, thereby forming a new training set and test set. In one application scenario, for example during the model training phase, the training set and the test set can be augmented, where the augmentation may include, but is not limited to, random cropping, random horizontal flipping, random rotation, and the like. That is, the annotated pictures in the training set and the test set can be transformed in various ways such as random cropping, random horizontal flipping or random rotation, or the annotated pictures can be given randomly varied contrast, brightness, saturation, and so on, so as to obtain more new training and test data, i.e. an augmented training set and an augmented test set. In this embodiment, by augmenting the training set and the test set, the amount of data in the two sets can be grown by multiple orders of magnitude, increasing the diversity of the training set, which not only helps avoid overfitting but also improves the recognition performance of the subsequent model. In addition, it should be noted that data augmentation can also be achieved in other ways, such as acquiring more data sets, which is not limited here.
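Two of the augmentations named above, random horizontal flipping and random rotation, can be sketched on a tiny grayscale image represented as a list of rows. This is a standard-library-only illustration under assumed names; a real pipeline would apply the same idea (plus cropping and contrast/brightness/saturation jitter) to full images:

```python
import random

def augment(image, rng):
    """Produce one augmented variant of a small grayscale image:
    a random horizontal flip followed by a random multiple of 90-degree
    rotation. Each call yields one extra training sample."""
    out = [row[:] for row in image]
    if rng.random() < 0.5:                 # random horizontal flip
        out = [row[::-1] for row in out]
    for _ in range(rng.randrange(4)):      # random 0/90/180/270-degree rotation
        out = [list(col) for col in zip(*out[::-1])]  # rotate 90 deg clockwise
    return out

rng = random.Random(0)
img = [[1, 2],
       [3, 4]]
# Each variant is a new sample; repeating this multiplies the data set size.
variants = [augment(img, rng) for _ in range(4)]
```

The augmentations only rearrange pixels, so every variant keeps the same pixel values as the original image.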
S403: Input the augmented training set into the MobilenetV2 model for training to obtain a trained model.
Based on the obtained augmented training set, the augmented training set can be input into the MobilenetV2 model for training to obtain a trained model, where the model can be saved once per epoch, i.e. the model is saved after each full pass over the new training set, and the final trained model is obtained in this way. Specifically, the augmented training set may also undergo picture preprocessing before being input into the MobilenetV2 model for training, where the picture preprocessing may include, but is not limited to, picture cropping or picture scaling. Exemplarily, the augmented training set may be uniformly scaled to a size of 365*365 and/or uniformly cropped to a size of 320*320 before being input into the MobilenetV2 model for training. This reduces the amount of data per sample while keeping the number of samples unchanged, thereby reducing the computation required by the MobilenetV2 model; adjusting the pictures to a fixed, uniform format also facilitates the processing in later steps. In this embodiment, the MobilenetV2 model is preferred as the deep learning model: as a lightweight deep learning model, compared with the MobilenetV1 model, the MobilenetV2 model can greatly reduce the amount of computation while improving both the efficiency and the accuracy of model training.
S404: Input the augmented test set into the trained model for testing, and obtain the target segmentation effect evaluation model according to the test accuracy.
Based on the trained model obtained by training on the augmented training set, the augmented test set can be input into the trained model for testing, and the target segmentation effect evaluation model can be obtained according to the test accuracy.
In one embodiment, step S404, namely obtaining the target segmentation effect evaluation model according to the test accuracy, may specifically include the following steps S4041-S4042:
S4041: Adjust the model parameters according to the test accuracy, and obtain the test models ranked in the top K by accuracy score.
Testing the trained model on the augmented test set involves the relevant model parameters, which may include, but are not limited to, the learning rate, optimization method, regularization term, batch size, and the like; these model parameters can be adjusted continually according to the test accuracy. Exemplarily, the learning rate, optimizer method, regularization term, batch size and other model parameters can be tuned repeatedly according to the accuracy on the test set, training and testing for enough epochs until the model converges. In this way, the test models whose test accuracy scores rank in the top K under the different model parameters can be selected, and those top-K test models can serve as the target segmentation effect evaluation model. In this embodiment, it can be understood that, since the annotated pictures used for training carry the good-segmentation-effect and poor-segmentation-effect training labels, the target segmentation effect evaluation model generated by training on those annotated pictures also carries the attributes of these two training labels.
S4042: Use the test models ranked in the top K by accuracy score as the target segmentation effect evaluation model.
Based on the selected test models whose test accuracy scores rank in the top K, the top-K test models can be used as the target segmentation effect evaluation model. Specifically, the K highest-ranked test models are preferred; exemplarily, the top 2 test models or the top 3 test models may be selected, which is not limited here and can be chosen according to the actual situation. Having obtained the top-K test models, the top-K test models can be used as the target segmentation effect evaluation model.
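Selecting the top-K test models by accuracy score, as described above, is a simple ranking step. A minimal sketch follows; the model identifiers and accuracy values are hypothetical:

```python
def select_top_k_models(models_with_accuracy, k=2):
    """Rank candidate test models by their test-set accuracy score and
    keep the top K (e.g. top 2 or top 3) as the target segmentation
    effect evaluation model."""
    ranked = sorted(models_with_accuracy, key=lambda m: m[1], reverse=True)
    return [name for name, _accuracy in ranked[:k]]

# Hypothetical checkpoints saved once per epoch, with their test accuracies
candidates = [("epoch_07", 0.91), ("epoch_12", 0.95), ("epoch_20", 0.93)]
print(select_top_k_models(candidates, k=2))  # ['epoch_12', 'epoch_20']
```

Setting `k=3` would instead keep the three highest-scoring checkpoints, matching the top-3 alternative the embodiment mentions.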
S50: Obtain a picture to be tested on which component segmentation has been performed, and obtain an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
In an actual scenario, given the segmented pictures generated by different component segmentation models, when the segmentation effect of a segmentation model needs to be verified or segmented pictures in which segmentation has failed need to be filtered out, a picture to be tested on which component segmentation has been performed can be obtained, and the evaluation result for the picture to be tested can be obtained according to that picture and the target segmentation effect evaluation model. In this embodiment, it can be understood that once the evaluation result for the picture to be tested is obtained, the segmentation effect of the corresponding component segmentation model can be evaluated according to that result, so that training can be continued according to the evaluation results to refine the component segmentation model and improve its accuracy. Further, based on the good and poor segmentation quality defined in the above process, partially failed segmented pictures can also be filtered out in advance according to the evaluation results, thereby improving the accuracy of the obtained segmented pictures.
In one embodiment, obtaining the evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model may specifically include, as shown in FIG. 5, steps S501-S503:
S501: Scale the picture to be tested to a preset size to obtain a scaled picture.
For the segmented pictures that need to be verified, picture preprocessing can be performed on each of them, where the picture preprocessing may include, but is not limited to, picture scaling and/or picture cropping, which is not limited here.
In one application scenario, the pictures to be tested on which component segmentation has been performed may come in a variety of sizes, while the target segmentation effect evaluation model imposes certain restrictions on the size of its input pictures, so these pictures of different sizes need to be adjusted to a fixed size. Specifically, each picture to be tested can be scaled to a preset size to obtain a scaled picture of that size, where the preset size may be set, for example, to 365*365 or 480*480; it is not limited here. The preset size in this embodiment is 365*365 by way of example, i.e. each picture to be tested is scaled to 365*365, obtaining a scaled picture at the preset size of 365*365. In this embodiment, the scaling preprocessing of the pictures to be tested reduces the amount of data per sample while keeping the number of samples unchanged, and adjusting the pictures to a fixed, uniform format also facilitates the other steps.
S502: Perform edge cropping on the scaled picture to obtain an image block to be input into the target segmentation effect evaluation model for obtaining the evaluation result.
In one embodiment, picture preprocessing can be performed again on the scaled picture. Exemplarily, edge cropping can be performed on the scaled picture to obtain the image block to be input into the target segmentation effect evaluation model for obtaining the evaluation result. Given that the scaled picture obtained in step S501 is 365*365, the edges of the scaled picture can be cropped; it can be understood that the edges represent unimportant features of the scaled picture. Specifically, the edges of the scaled picture are cropped around its center point, retaining an image block of size 320*320 extending from the center point toward the four sides, i.e. the scaled picture is cropped into a 320*320 image block. In this embodiment, it can be understood that scaling the picture in step S501 and then cropping the scaled picture further reduces the amount of data per sample. It should be noted that the order of the picture scaling and picture cropping in the above steps is for illustration only, and the sequence is not limited.
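The center crop described above reduces to computing a crop box around the picture's center point. A minimal sketch, assuming the 365*365 scaled size and 320*320 crop size of this embodiment (the function name is illustrative):

```python
def center_crop_box(width, height, crop_w=320, crop_h=320):
    """Compute the (left, top, right, bottom) box that keeps a
    crop_w x crop_h image block around the center of a scaled picture,
    discarding the edges, which carry unimportant features."""
    left = (width - crop_w) // 2
    top = (height - crop_h) // 2
    return left, top, left + crop_w, top + crop_h

# A 365*365 scaled picture cropped to a centered 320*320 image block
print(center_crop_box(365, 365))  # (22, 22, 342, 342)
```

The resulting box could be fed to any image library's crop routine to produce the 320*320 image block that is input into the evaluation model.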
S503: Obtain the evaluation result according to the image block and the target segmentation effect evaluation model.
Based on the obtained target segmentation effect evaluation model, the evaluation result can be obtained according to the image block and the target segmentation effect evaluation model. Specifically, a corresponding forward-propagation program is created, and the preprocessed image block is input into the target segmentation effect evaluation model, whereupon the evaluation result for the specific picture to be tested can be obtained.
In this embodiment, it can be understood that the target segmentation effect evaluation model is generated from the training labels, i.e. it carries the attributes of good segmentation effect and poor segmentation effect. When segmented pictures generated by different component segmentation models are input into this target segmentation effect evaluation model, an evaluation result corresponding to one of the two training labels, good segmentation effect or poor segmentation effect, is output. In this way, the segmentation effect of the different component segmentation models can be obtained from the target segmentation effect evaluation model. At the same time, pictures in which a segmentation model's segmentation has failed can be filtered out, which further improves the accuracy of subsequent models such as a component recognition model.
In one embodiment, step S503 of obtaining the evaluation result according to the image block and the target segmentation effect evaluation model may further include steps S5031-S5032:
S5031: Input the image block into the target segmentation effect evaluation model, and obtain the test results corresponding to the target segmentation effect evaluation model.
Based on the above step S4042, the test models ranked in the top K by accuracy score are used as the target segmentation effect evaluation model. In one embodiment, the two test models ranked top 2 by accuracy score can be selected as the target segmentation effect evaluation model, and the test results corresponding to the target segmentation effect evaluation model can be obtained from these two test models and the image block. Specifically, once the picture to be tested has been preprocessed into an image block with a fixed size of, for example, 320*320 or 365*365, the image block can be input into the two test models separately for prediction, so that the test results corresponding to the two test models are obtained respectively. It should be noted that selecting the two test models ranked top 2 by accuracy score as the target segmentation effect evaluation model is only an example; the three test models ranked top 3, or the four test models ranked top 4, may of course also be selected, which is not limited here.
S5032: Perform probability averaging on the test results to obtain the evaluation result corresponding to the target segmentation effect evaluation model.
Based on the test results corresponding to the two test models obtained in step S5031, in one embodiment, probability averaging can be performed on the corresponding test results to obtain the evaluation result corresponding to the target segmentation effect evaluation model. Specifically, if, for example, one test model produces the test result [0.6, 0.5, 0.05, 0.05] and the other test model produces the test result [0.4, 0.3, 0.05, 0.05], then fusing the two test results by probability averaging yields the evaluation result [0.5, 0.4, 0.05, 0.05]. In this embodiment, it can be understood that training with different model parameters produces different test models, and different test models produce different test results; by performing probability averaging on the test results, the influence of any single model on the evaluation result is reduced, improving the accuracy of the obtained evaluation result. It should be noted that, in the above embodiment, selecting the two test models ranked top 2 by accuracy score as the target segmentation effect evaluation model is only an example and is not limiting; the three test models ranked top 3 or the four test models ranked top 4 may of course also be selected, i.e. the corresponding probability average of their test results can be computed to obtain the evaluation result corresponding to the target segmentation effect evaluation model.
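The probability-averaging fusion described above, using the embodiment's own example numbers, can be sketched as follows; the rounding to four decimals is an assumption added for a clean floating-point display, not part of the embodiment:

```python
def average_predictions(*model_outputs):
    """Fuse per-class scores from several top-K test models by
    element-wise probability averaging, reducing the influence of any
    single model on the final evaluation result.

    Results are rounded to 4 decimals purely for display stability."""
    n = len(model_outputs)
    return [round(sum(scores) / n, 4) for scores in zip(*model_outputs)]

a = [0.6, 0.5, 0.05, 0.05]   # test result from one test model
b = [0.4, 0.3, 0.05, 0.05]   # test result from the other test model
print(average_predictions(a, b))  # [0.5, 0.4, 0.05, 0.05]
```

Passing three or four output vectors instead of two would implement the top-3 or top-4 variants mentioned in the embodiment.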
It should be understood that the magnitudes of the step numbers in the foregoing embodiments do not imply any order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
In one embodiment, a deep-learning-based segmentation effect evaluation apparatus is provided, where the functions implemented by the segmentation effect evaluation apparatus correspond one-to-one with the steps of the segmentation effect evaluation method in the foregoing embodiments. Specifically, as shown in FIG. 6, the segmentation effect evaluation apparatus may include a first acquisition module 10, a label creation module 20, an annotation module 30, a second acquisition module 40 and an evaluation result module 50. The functional modules are described in detail as follows:
The first acquisition module 10 is configured to acquire a vehicle picture, and acquire a segmented picture according to the vehicle picture and the pre-trained component segmentation model;
The label creation module 20 is configured to create training labels according to the features of the segmented picture, where the training labels include a good-segmentation-effect label and a poor-segmentation-effect label;
The annotation module 30 is configured to annotate the segmented picture according to the good-segmentation-effect label and the poor-segmentation-effect label to obtain an annotated picture;
The second acquisition module 40 is configured to construct a deep learning model, and obtain a target segmentation effect evaluation model according to the deep learning model and the annotated picture;
The evaluation result module 50 is configured to obtain a picture to be tested on which component segmentation has been performed, and obtain an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
Preferably, the deep learning model is the MobilenetV2 model, and the second acquisition module 40 is further configured to:
divide the annotated pictures into a training set and a test set;
perform data augmentation on the training set and the test set to obtain an augmented training set and an augmented test set;
input the augmented training set into the MobilenetV2 model for training to obtain a trained model; and
input the augmented test set into the trained model for testing, and obtain the target segmentation effect evaluation model according to the test accuracy.
For the specific limitations of the segmentation effect evaluation apparatus, reference may be made to the above limitations of the segmentation effect evaluation method, which are not repeated here. Each module in the above segmentation effect evaluation apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor in a computer device in the form of hardware, or stored in a memory of the computer device in the form of software, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, as shown in FIG. 7, a computer device is provided. Specifically, the computer device 60 of this embodiment includes: a processor 61, a memory 62, and computer-readable instructions 63 stored in the memory 62 and executable on the processor 61. When the processor 61 executes the computer-readable instructions 63, the steps of the segmentation effect evaluation method of the foregoing embodiments are implemented; to avoid repetition, they are not described here again. Alternatively, when the processor 61 executes the computer-readable instructions 63, the following steps are implemented:
acquiring a vehicle picture, and acquiring a segmented picture according to the vehicle picture and a pre-trained component segmentation model;
creating training labels according to the features of the segmented picture, the training labels including a good-segmentation-effect label and a poor-segmentation-effect label, where a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture has flower-dot and/or color-crossing features;
annotating the segmented picture according to the good-segmentation-effect label and the poor-segmentation-effect label to obtain an annotated picture;
constructing a deep learning model, and obtaining a target segmentation effect evaluation model according to the deep learning model and the annotated picture; and
obtaining a picture to be tested on which component segmentation has been performed, and obtaining an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
In one embodiment, one or more readable storage media storing computer-readable instructions are provided; the readable storage media provided in this embodiment include non-volatile readable storage media and volatile readable storage media. When the computer-readable instructions are executed by one or more processors, the one or more processors execute the following steps:
获取车辆图片,根据所述车辆图片和预训练的部件分割模型获取分割图片;Acquiring a vehicle picture, and acquiring a segmented picture according to the vehicle picture and the pre-trained component segmentation model;
根据所述分割图片的特征创建训练标签,所述训练标签包括分割效果好标签和分割效果差标签;其中,所述分割效果好是指所述分割图片具有符合预设形状和/或颜色均匀的特征,所述分割效果差是指所述分割图片具有花点和/或颜色交叉的特征;Create training tags based on the features of the segmented pictures, the training tags include tags with good segmentation effect and tags with poor segmentation effect; wherein, the segmentation effect is good means that the segmented picture has a predetermined shape and/or uniform color. Feature, the poor segmentation effect means that the segmented picture has the feature of flower dots and/or color crossing;
根据所述分割效果好标签和所述分割效果差标签对所述分割图片进行标注,得到已标注图片;Labeling the segmented picture according to the label with good segmentation effect and the label with poor segmentation effect, to obtain an annotated picture;
构建深度学习模型,根据所述深度学习模型和所述已标注图片获取目标分割效果评估模型;Constructing a deep learning model, and obtaining a target segmentation effect evaluation model according to the deep learning model and the labeled picture;
获取已进行部件分割的待测试图片,根据所述待测试图片和所述目标分割效果评估模型获取所述待测试图片的评估结果。Obtain a picture to be tested that has undergone component segmentation, and obtain an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
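The labelling criterion in the steps above ("good" for conforming shape / uniform colour, "poor" for speckles / colour bleeding) is applied by annotators and comes with no reference code in this application. The sketch below is a toy numpy heuristic standing in for that step; the variance threshold is an illustrative assumption, not a value from the application.

```python
import numpy as np

def label_segment(seg_img, var_threshold=500.0):
    """Toy stand-in for the manual labelling step: a segmented region
    whose colour variance is low is tagged 1 (good segmentation --
    uniform colour), otherwise 0 (poor segmentation -- speckles or
    colour bleeding). The threshold is an assumption for illustration."""
    flat = seg_img.reshape(-1, seg_img.shape[-1]).astype(np.float64)
    return 1 if flat.var(axis=0).mean() < var_threshold else 0
```

In practice the labels come from human inspection of the segmentation output; a heuristic like this could at most pre-sort candidates for review.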
可以理解地,所述计算机可读存储介质可以包括:能够携带所述计算机可读指令代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号和电信信号等。Understandably, the computer-readable storage medium may include: any entity or apparatus capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and the like.
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性计算机可读取存储介质中,该计算机可读指令在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(SynchlinK) DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by computer-readable instructions instructing relevant hardware; the computer-readable instructions can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
本申请所指区块链是分布式数据存储、点对点传输、共识机制、加密算法等计算机技术的新型应用模式。区块链(Blockchain),本质上是一个去中心化的数据库,是一串使用密码学方法相关联产生的数据块,每一个数据块中包含了一批次网络交易的信息,用于验证其信息的有效性(防伪)和生成下一个区块。区块链可以包括区块链底层平台、平台产品服务层以及应用服务层等。The blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association through cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of that information (anti-counterfeiting) and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and so on.
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块、子模块和单元完成,即将所述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is illustrated as an example; in practical applications, the above functions may be allocated to different functional modules, sub-modules, and units as required, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
以上所述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.

Claims (20)

  1. 基于深度学习的分割效果评估方法,其中,所述方法包括: A method for evaluating segmentation effect based on deep learning, wherein the method includes:
        获取车辆图片,根据所述车辆图片和预训练的部件分割模型获取分割图片;Obtain a picture of a vehicle, and obtain a segmented picture according to the vehicle picture and the pre-trained component segmentation model;
    根据所述分割图片的特征创建训练标签,所述训练标签包括分割效果好标签和分割效果差标签;其中,所述分割效果好是指所述分割图片具有符合预设形状和/或颜色均匀的特征,所述分割效果差是指所述分割图片具有花点和/或颜色交叉的特征;Creating training labels according to features of the segmented picture, the training labels including a good-segmentation label and a poor-segmentation label; wherein a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture exhibits speckles and/or color bleeding;
    根据所述分割效果好标签和所述分割效果差标签对所述分割图片进行标注,得到已标注图片;Labeling the segmented picture according to the label with good segmentation effect and the label with poor segmentation effect, to obtain an annotated picture;
    构建深度学习模型,根据所述深度学习模型和所述已标注图片获取目标分割效果评估模型;Constructing a deep learning model, and obtaining a target segmentation effect evaluation model according to the deep learning model and the labeled picture;
    获取已进行部件分割的待测试图片,根据所述待测试图片和所述目标分割效果评估模型获取所述待测试图片的评估结果。Obtain a picture to be tested that has undergone component segmentation, and obtain an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
  2. 如权利要求1所述的分割效果评估方法,其中,所述深度学习模型为MobilenetV2模型,所述根据所述深度学习模型和所述已标注图片获取目标分割效果评估模型,包括: 2. The segmentation effect evaluation method according to claim 1, wherein the deep learning model is a MobilenetV2 model, and the obtaining a target segmentation effect evaluation model according to the deep learning model and the labeled picture comprises:
    将所述已标注图片分为训练集和测试集;Divide the labeled pictures into a training set and a test set;
    对所述训练集和所述测试集进行数据扩增,得到扩增训练集和扩增测试集; Performing data augmentation on the training set and the test set to obtain an augmented training set and an augmented test set;
    将所述扩增训练集输入所述MobilenetV2模型进行训练,获取训练模型;Input the augmented training set into the MobilenetV2 model for training, and obtain a training model;
    将所述扩增测试集输入所述训练模型进行测试;Inputting the augmented test set into the training model for testing;
    根据测试的准确率获取目标分割效果评估模型。Obtain the target segmentation effect evaluation model according to the test accuracy.
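The split/augment/train/test flow of claim 2 can be sketched end to end. Everything below is illustrative: a trivial nearest-centroid classifier stands in for the claimed MobilenetV2 model, and the 80/20 split fraction is an assumption not fixed by the claims.

```python
import numpy as np

rng = np.random.default_rng(0)

def split(images, labels, train_frac=0.8):
    """Shuffle the labelled pictures and split them into training and test sets."""
    idx = rng.permutation(len(images))
    cut = int(len(images) * train_frac)
    return (images[idx[:cut]], labels[idx[:cut]]), (images[idx[cut:]], labels[idx[cut:]])

def train(images, labels):
    """Toy stand-in for MobilenetV2 training: one mean-brightness
    centroid per class (good / poor segmentation)."""
    feats = images.reshape(len(images), -1).mean(axis=1)
    return {int(c): float(feats[labels == c].mean()) for c in np.unique(labels)}

def accuracy(model, images, labels):
    """Test-set accuracy: assign each picture to the nearest centroid."""
    feats = images.reshape(len(images), -1).mean(axis=1)
    preds = np.array([min(model, key=lambda c: abs(model[c] - f)) for f in feats])
    return float((preds == labels).mean())
```

With a real MobilenetV2 the `train`/`accuracy` pair would be replaced by a fine-tuning loop and a forward pass, but the split-train-test skeleton is the same.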
  3. 如权利要求2所述的分割效果评估方法,其中,所述根据测试的准确率获取目标分割效果评估模型,包括: 3. The segmentation effect evaluation method according to claim 2, wherein the obtaining the target segmentation effect evaluation model according to the test accuracy includes:
    根据测试的准确率调整模型参数,获取准确率得分排名前K位的测试模型; Adjusting the model parameters according to the test accuracy, and obtaining the test models whose accuracy scores rank in the top K;
    将所述准确率得分排名前K位的测试模型作为目标分割效果评估模型。Using the test models whose accuracy scores rank in the top K as the target segmentation effect evaluation model.
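The top-K selection of claim 3 is a plain ranking step. A minimal sketch, assuming candidates are recorded as dicts with an `"acc"` field (a representation chosen here for illustration, not specified by the claims):

```python
def top_k_models(candidates, k=3):
    """Rank candidate models (trained under different hyper-parameter
    settings) by test accuracy and keep the top K; these jointly serve
    as the target segmentation effect evaluation model. K=3 is an
    illustrative default -- the claims leave K open."""
    return sorted(candidates, key=lambda m: m["acc"], reverse=True)[:k]
```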
  4. 如权利要求3所述的分割效果评估方法,其中,所述根据所述待测试图片和所述目标分割效果评估模型获取所述待测试图片的评估结果,包括: The segmentation effect evaluation method according to claim 3, wherein said obtaining the evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model comprises:
    将所述待测试图片缩放到预设尺寸,以获取缩放图片;Zooming the picture to be tested to a preset size to obtain a zoomed picture;
    对所述缩放图片进行边缘裁剪,获取用于输入所述目标分割效果评估模型获取评估结果的图像块;Performing edge cropping on the zoomed picture to obtain an image block used to input the target segmentation effect evaluation model to obtain an evaluation result;
    根据所述图像块和所述目标分割效果评估模型获取所述评估结果。Obtaining the evaluation result according to the image block and the target segmentation effect evaluation model.
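The scale-then-crop preprocessing of claim 4 can be sketched in numpy. The 5-crop layout (four corners plus centre) is one plausible reading of "edge cropping", not mandated by the claims, and nearest-neighbour resizing stands in for a proper interpolation routine.

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbour resize to a square preset size (stand-in for a
    library resize such as bilinear interpolation)."""
    h, w = img.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return img[ys][:, xs]

def edge_crops(img, crop):
    """Four corner crops plus a centre crop: the image blocks fed to the
    target segmentation effect evaluation model."""
    h, w = img.shape[:2]
    cy, cx = (h - crop) // 2, (w - crop) // 2
    return [img[:crop, :crop], img[:crop, w - crop:],
            img[h - crop:, :crop], img[h - crop:, w - crop:],
            img[cy:cy + crop, cx:cx + crop]]
```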
  5. 如权利要求4所述的分割效果评估方法,其中,所述根据所述图像块和所述目标分割效果评估模型获取所述评估结果,包括: 5. The segmentation effect evaluation method according to claim 4, wherein said obtaining said evaluation result according to said image block and said target segmentation effect evaluation model comprises:
    将所述图像块输入所述目标分割效果评估模型,获取基于所述目标分割效果评估模型对应的测试结果;Input the image block into the target segmentation effect evaluation model, and obtain a test result corresponding to the target segmentation effect evaluation model;
    对所述测试结果进行概率平均计算,以获取基于所述目标分割效果评估模型对应的评估结果。A probability average calculation is performed on the test result to obtain an evaluation result corresponding to the target segmentation effect evaluation model.
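The probability averaging of claim 5 reduces the per-patch, per-model test results to a single evaluation score. A minimal sketch, assuming each model emits one "good segmentation" probability per image block:

```python
import numpy as np

def averaged_score(patch_probs):
    """patch_probs: probabilities emitted by each of the K evaluation
    models for each image block, shape (n_models, n_patches). The
    per-picture evaluation result is the plain mean over all of them."""
    return float(np.asarray(patch_probs, dtype=np.float64).mean())
```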
  6. 如权利要求2所述的分割效果评估方法,其中,所述数据扩增包括随机裁剪、随机水平翻转和随机旋转。 The segmentation effect evaluation method according to claim 2, wherein the data augmentation includes random cropping, random horizontal flipping, and random rotation.
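The three augmentations of claim 6 can be sketched with plain numpy. Rotation is restricted here to multiples of 90 degrees so no interpolation library is needed; the claims do not fix the rotation angles or crop size, so both are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img, crop=56):
    """One augmentation pass: random crop, random horizontal flip,
    random rotation (in 90-degree steps, an illustrative restriction)."""
    h, w = img.shape[:2]
    y = int(rng.integers(0, h - crop + 1))   # random crop origin
    x = int(rng.integers(0, w - crop + 1))
    out = img[y:y + crop, x:x + crop]
    if rng.random() < 0.5:                   # random horizontal flip
        out = out[:, ::-1]
    return np.rot90(out, k=int(rng.integers(0, 4)))  # random rotation
```

In a training pipeline this would typically be expressed with library transforms instead, applied on the fly to each batch.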
  7. 基于深度学习的分割效果评估装置,其中,所述装置包括: A device for evaluating segmentation effects based on deep learning, wherein the device includes:
    第一获取模块,用于获取车辆图片,根据所述车辆图片和预训练的部件分割模型获取分割图片;The first acquisition module is configured to acquire a vehicle picture, and acquire a segmented picture according to the vehicle picture and the pre-trained component segmentation model;
    标签创建模块,用于根据所述分割图片的特征创建训练标签,所述训练标签包括分割效果好标签和分割效果差标签,其中,所述分割效果好是指所述分割图片具有符合预设形状和/或颜色均匀的特征,所述分割效果差是指所述分割图片具有花点和/或颜色交叉的特征;A label creation module, configured to create training labels according to features of the segmented picture, the training labels including a good-segmentation label and a poor-segmentation label, wherein a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture exhibits speckles and/or color bleeding;
    标注模块,用于根据所述分割效果好标签和所述分割效果差标签对所述分割图片进行标注,得到已标注图片;An annotation module, configured to annotate the segmented picture according to the label with good segmentation effect and the label with poor segmentation effect, to obtain the annotated picture;
    第二获取模块,用于构建深度学习模型,根据所述深度学习模型和所述已标注图片获取目标分割效果评估模型;The second acquisition module is configured to construct a deep learning model, and acquire a target segmentation effect evaluation model according to the deep learning model and the labeled picture;
    评估结果模块,用于获取已进行部件分割的待测试图片,根据所述待测试图片和所述目标分割效果评估模型获取所述待测试图片的评估结果。The evaluation result module is used to obtain the picture to be tested for which the component segmentation has been performed, and obtain the evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
  8. 如权利要求7所述的分割效果评估装置,其中,所述深度学习模型为MobilenetV2模型,所述第二获取模块还用于: 8. The segmentation effect evaluation device according to claim 7, wherein the deep learning model is a MobilenetV2 model, and the second acquisition module is further used for:
    将所述已标注图片分为训练集和测试集;Divide the labeled pictures into a training set and a test set;
    对所述训练集和所述测试集进行数据扩增,得到扩增训练集和扩增测试集; Performing data augmentation on the training set and the test set to obtain an augmented training set and an augmented test set;
    将所述扩增训练集输入所述MobilenetV2模型进行训练,获取训练模型;Input the augmented training set into the MobilenetV2 model for training, and obtain a training model;
    将所述扩增测试集输入所述训练模型进行测试,根据测试的准确率获取目标分割效果评估模型。The augmented test set is input into the training model for testing, and the target segmentation effect evaluation model is obtained according to the test accuracy.
  9. 一种计算机设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机可读指令,其中,所述处理器执行所述计算机可读指令时实现如下步骤: A computer device includes a memory, a processor, and computer-readable instructions that are stored in the memory and can run on the processor, wherein the processor implements the following steps when the processor executes the computer-readable instructions:
    获取车辆图片,根据所述车辆图片和预训练的部件分割模型获取分割图片;Acquiring a vehicle picture, and acquiring a segmented picture according to the vehicle picture and the pre-trained component segmentation model;
    根据所述分割图片的特征创建训练标签,所述训练标签包括分割效果好标签和分割效果差标签;其中,所述分割效果好是指所述分割图片具有符合预设形状和/或颜色均匀的特征,所述分割效果差是指所述分割图片具有花点和/或颜色交叉的特征;Creating training labels according to features of the segmented picture, the training labels including a good-segmentation label and a poor-segmentation label; wherein a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture exhibits speckles and/or color bleeding;
    根据所述分割效果好标签和所述分割效果差标签对所述分割图片进行标注,得到已标注图片;Labeling the segmented picture according to the label with good segmentation effect and the label with poor segmentation effect, to obtain an annotated picture;
    构建深度学习模型,根据所述深度学习模型和所述已标注图片获取目标分割效果评估模型;Constructing a deep learning model, and obtaining a target segmentation effect evaluation model according to the deep learning model and the labeled picture;
    获取已进行部件分割的待测试图片,根据所述待测试图片和所述目标分割效果评估模型获取所述待测试图片的评估结果。Obtain a picture to be tested that has undergone component segmentation, and obtain an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
  10. 如权利要求9所述的计算机设备,其中,所述深度学习模型为MobilenetV2模型,所述根据所述深度学习模型和所述已标注图片获取目标分割效果评估模型,包括以下步骤: 10. The computer device according to claim 9, wherein the deep learning model is a MobilenetV2 model, and the obtaining a target segmentation effect evaluation model according to the deep learning model and the labeled picture comprises the following steps:
    将所述已标注图片分为训练集和测试集;Divide the labeled pictures into a training set and a test set;
    对所述训练集和所述测试集进行数据扩增,得到扩增训练集和扩增测试集; Performing data augmentation on the training set and the test set to obtain an augmented training set and an augmented test set;
    将所述扩增训练集输入所述MobilenetV2模型进行训练,获取训练模型;Input the augmented training set into the MobilenetV2 model for training, and obtain a training model;
    将所述扩增测试集输入所述训练模型进行测试;Inputting the augmented test set into the training model for testing;
    根据测试的准确率获取目标分割效果评估模型。Obtain the target segmentation effect evaluation model according to the test accuracy.
  11. 如权利要求10所述的计算机设备,其中,所述根据测试的准确率获取目标分割效果评估模型,包括以下步骤: 11. The computer device according to claim 10, wherein the obtaining a target segmentation effect evaluation model according to the test accuracy comprises the following steps:
    根据测试的准确率调整模型参数,获取准确率得分排名前K位的测试模型; Adjusting the model parameters according to the test accuracy, and obtaining the test models whose accuracy scores rank in the top K;
    将所述准确率得分排名前K位的测试模型作为目标分割效果评估模型。Using the test models whose accuracy scores rank in the top K as the target segmentation effect evaluation model.
  12. 如权利要求11所述的计算机设备,其中,所述根据所述待测试图片和所述目标分割效果评估模型获取所述待测试图片的评估结果,包括以下步骤: 12. The computer device according to claim 11, wherein the obtaining an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model comprises the following steps:
    将所述待测试图片缩放到预设尺寸,以获取缩放图片;Zooming the picture to be tested to a preset size to obtain a zoomed picture;
    对所述缩放图片进行边缘裁剪,获取用于输入所述目标分割效果评估模型获取评估结果的图像块;Performing edge cropping on the zoomed picture to obtain an image block used to input the target segmentation effect evaluation model to obtain an evaluation result;
    根据所述图像块和所述目标分割效果评估模型获取所述评估结果。Obtaining the evaluation result according to the image block and the target segmentation effect evaluation model.
  13. 如权利要求12所述的计算机设备,其中,所述根据所述图像块和所述目标分割效果评估模型获取所述评估结果,包括以下步骤: The computer device according to claim 12, wherein said obtaining said evaluation result according to said image block and said target segmentation effect evaluation model comprises the following steps:
    将所述图像块输入所述目标分割效果评估模型,获取基于所述目标分割效果评估模型对应的测试结果;Input the image block into the target segmentation effect evaluation model, and obtain a test result corresponding to the target segmentation effect evaluation model;
    对所述测试结果进行概率平均计算,以获取基于所述目标分割效果评估模型对应的评估结果。A probability average calculation is performed on the test result to obtain an evaluation result corresponding to the target segmentation effect evaluation model.
  14. 如权利要求10所述的计算机设备,其中,所述数据扩增包括随机裁剪、随机水平翻转和随机旋转。 14. The computer device of claim 10, wherein the data augmentation includes random cropping, random horizontal flipping, and random rotation.
  15. 一个或多个存储有计算机可读指令的可读存储介质,其中,所述计算机可读指令被一个或多个处理器执行时,使得所述一个或多个处理器执行如下步骤: One or more readable storage media storing computer readable instructions, where when the computer readable instructions are executed by one or more processors, the one or more processors execute the following steps:
    获取车辆图片,根据所述车辆图片和预训练的部件分割模型获取分割图片;Acquiring a vehicle picture, and acquiring a segmented picture according to the vehicle picture and the pre-trained component segmentation model;
    根据所述分割图片的特征创建训练标签,所述训练标签包括分割效果好标签和分割效果差标签;其中,所述分割效果好是指所述分割图片具有符合预设形状和/或颜色均匀的特征,所述分割效果差是指所述分割图片具有花点和/或颜色交叉的特征;Creating training labels according to features of the segmented picture, the training labels including a good-segmentation label and a poor-segmentation label; wherein a good segmentation effect means that the segmented picture conforms to a preset shape and/or has uniform color, and a poor segmentation effect means that the segmented picture exhibits speckles and/or color bleeding;
    根据所述分割效果好标签和所述分割效果差标签对所述分割图片进行标注,得到已标注图片;Labeling the segmented picture according to the label with good segmentation effect and the label with poor segmentation effect, to obtain an annotated picture;
    构建深度学习模型,根据所述深度学习模型和所述已标注图片获取目标分割效果评估模型;Constructing a deep learning model, and obtaining a target segmentation effect evaluation model according to the deep learning model and the labeled picture;
    获取已进行部件分割的待测试图片,根据所述待测试图片和所述目标分割效果评估模型获取所述待测试图片的评估结果。Obtain a picture to be tested that has undergone component segmentation, and obtain an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model.
  16. 如权利要求15所述的可读存储介质,其中,所述深度学习模型为MobilenetV2模型,所述根据所述深度学习模型和所述已标注图片获取目标分割效果评估模型,包括以下步骤: 16. The readable storage medium according to claim 15, wherein the deep learning model is a MobilenetV2 model, and the obtaining a target segmentation effect evaluation model according to the deep learning model and the labeled picture comprises the following steps:
    将所述已标注图片分为训练集和测试集;Divide the labeled pictures into a training set and a test set;
    对所述训练集和所述测试集进行数据扩增,得到扩增训练集和扩增测试集; Performing data augmentation on the training set and the test set to obtain an augmented training set and an augmented test set;
    将所述扩增训练集输入所述MobilenetV2模型进行训练,获取训练模型;Input the augmented training set into the MobilenetV2 model for training, and obtain a training model;
    将所述扩增测试集输入所述训练模型进行测试;Inputting the augmented test set into the training model for testing;
    根据测试的准确率获取目标分割效果评估模型。Obtain the target segmentation effect evaluation model according to the test accuracy.
  17. 如权利要求16所述的可读存储介质,其中,所述根据测试的准确率获取目标分割效果评估模型,包括以下步骤: 17. The readable storage medium of claim 16, wherein the obtaining a target segmentation effect evaluation model according to the test accuracy comprises the following steps:
    根据测试的准确率调整模型参数,获取准确率得分排名前K位的测试模型; Adjusting the model parameters according to the test accuracy, and obtaining the test models whose accuracy scores rank in the top K;
    将所述准确率得分排名前K位的测试模型作为目标分割效果评估模型。Using the test models whose accuracy scores rank in the top K as the target segmentation effect evaluation model.
  18. 如权利要求17所述的可读存储介质,其中,所述根据所述待测试图片和所述目标分割效果评估模型获取所述待测试图片的评估结果,包括以下步骤: 18. The readable storage medium according to claim 17, wherein the obtaining an evaluation result of the picture to be tested according to the picture to be tested and the target segmentation effect evaluation model comprises the following steps:
    将所述待测试图片缩放到预设尺寸,以获取缩放图片;Zooming the picture to be tested to a preset size to obtain a zoomed picture;
    对所述缩放图片进行边缘裁剪,获取用于输入所述目标分割效果评估模型获取评估结果的图像块;Performing edge cropping on the zoomed picture to obtain an image block used to input the target segmentation effect evaluation model to obtain an evaluation result;
    根据所述图像块和所述目标分割效果评估模型获取所述评估结果。Obtaining the evaluation result according to the image block and the target segmentation effect evaluation model.
  19. 如权利要求18所述的可读存储介质,其中,所述根据所述图像块和所述目标分割效果评估模型获取所述评估结果,包括以下步骤: 19. The readable storage medium according to claim 18, wherein the obtaining the evaluation result according to the image block and the target segmentation effect evaluation model comprises the following steps:
    将所述图像块输入所述目标分割效果评估模型,获取基于所述目标分割效果评估模型对应的测试结果;Input the image block into the target segmentation effect evaluation model, and obtain a test result corresponding to the target segmentation effect evaluation model;
    对所述测试结果进行概率平均计算,以获取基于所述目标分割效果评估模型对应的评估结果。A probability average calculation is performed on the test result to obtain an evaluation result corresponding to the target segmentation effect evaluation model.
  20. 如权利要求16所述的可读存储介质,其中,所述数据扩增包括随机裁剪、随机水平翻转和随机旋转。 20. The readable storage medium of claim 16, wherein the data augmentation includes random cropping, random horizontal flipping, and random rotation.
PCT/CN2020/123255 2020-06-28 2020-10-23 Segmentation effect assessment method and apparatus based on deep learning, and device and medium WO2021135552A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010599352.9A CN111753843A (en) 2020-06-28 2020-06-28 Segmentation effect evaluation method, device, equipment and medium based on deep learning
CN202010599352.9 2020-06-28

Publications (1)

Publication Number Publication Date
WO2021135552A1 true WO2021135552A1 (en) 2021-07-08

Family

ID=72676852

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/123255 WO2021135552A1 (en) 2020-06-28 2020-10-23 Segmentation effect assessment method and apparatus based on deep learning, and device and medium

Country Status (2)

Country Link
CN (1) CN111753843A (en)
WO (1) WO2021135552A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116052137A (en) * 2023-01-30 2023-05-02 北京化工大学 Deep learning-based classical furniture culture attribute identification method and system

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN111753843A (en) * 2020-06-28 2020-10-09 平安科技(深圳)有限公司 Segmentation effect evaluation method, device, equipment and medium based on deep learning
CN113139072A (en) * 2021-04-20 2021-07-20 苏州挚途科技有限公司 Data labeling method and device and electronic equipment
CN113436175B (en) * 2021-06-30 2023-08-18 平安科技(深圳)有限公司 Method, device, equipment and storage medium for evaluating vehicle image segmentation quality
CN117058498B (en) * 2023-10-12 2024-02-06 腾讯科技(深圳)有限公司 Training method of segmentation map evaluation model, and segmentation map evaluation method and device

Citations (6)

Publication number Priority date Publication date Assignee Title
CN101777060A (en) * 2009-12-23 2010-07-14 中国科学院自动化研究所 Automatic evaluation method and system of webpage visual quality
WO2018209057A1 (en) * 2017-05-11 2018-11-15 The Research Foundation For The State University Of New York System and method associated with predicting segmentation quality of objects in analysis of copious image data
US20190163949A1 (en) * 2017-11-27 2019-05-30 International Business Machines Corporation Intelligent tumor tracking system
CN111145206A (en) * 2019-12-27 2020-05-12 联想(北京)有限公司 Liver image segmentation quality evaluation method and device and computer equipment
CN111489328A (en) * 2020-03-06 2020-08-04 浙江工业大学 Fundus image quality evaluation method based on blood vessel segmentation and background separation
CN111753843A (en) * 2020-06-28 2020-10-09 平安科技(深圳)有限公司 Segmentation effect evaluation method, device, equipment and medium based on deep learning

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN108764372B (en) * 2018-06-08 2019-07-16 Oppo广东移动通信有限公司 Construction method and device, mobile terminal, the readable storage medium storing program for executing of data set

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN101777060A (en) * 2009-12-23 2010-07-14 中国科学院自动化研究所 Automatic evaluation method and system of webpage visual quality
WO2018209057A1 (en) * 2017-05-11 2018-11-15 The Research Foundation For The State University Of New York System and method associated with predicting segmentation quality of objects in analysis of copious image data
US20190163949A1 (en) * 2017-11-27 2019-05-30 International Business Machines Corporation Intelligent tumor tracking system
CN111145206A (en) * 2019-12-27 2020-05-12 联想(北京)有限公司 Liver image segmentation quality evaluation method and device and computer equipment
CN111489328A (en) * 2020-03-06 2020-08-04 浙江工业大学 Fundus image quality evaluation method based on blood vessel segmentation and background separation
CN111753843A (en) * 2020-06-28 2020-10-09 平安科技(深圳)有限公司 Segmentation effect evaluation method, device, equipment and medium based on deep learning

Non-Patent Citations (2)

Title
GUO LILI: "Research on Convolutional Neural Network Based Image Segmentation Quality Assessment", CHINESE MASTER'S THESES FULL-TEXT DATABASE, TIANJIN POLYTECHNIC UNIVERSITY, CN, 15 December 2019 (2019-12-15), CN, XP055828235, ISSN: 1674-0246 *
SHI WEN: "Research on Convolutional Neural Network Based Image Segmentation Quality Evaluation and Segmentation Repairing Method", CHINESE MASTER'S THESES FULL-TEXT DATABASE, TIANJIN POLYTECHNIC UNIVERSITY, CN, 15 September 2018 (2018-09-15), CN, XP055828233, ISSN: 1674-0246 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN116052137A (en) * 2023-01-30 2023-05-02 北京化工大学 Deep learning-based classical furniture culture attribute identification method and system
CN116052137B (en) * 2023-01-30 2024-01-30 北京化工大学 Deep learning-based classical furniture culture attribute identification method and system

Also Published As

Publication number Publication date
CN111753843A (en) 2020-10-09

Similar Documents

Publication Publication Date Title
WO2021135552A1 (en) Segmentation effect assessment method and apparatus based on deep learning, and device and medium
WO2021135499A1 (en) Damage detection model training and vehicle damage detection methods, device, apparatus, and medium
CN108230339B (en) Stomach cancer pathological section labeling completion method based on pseudo label iterative labeling
CN108776772B (en) Cross-time building change detection modeling method, detection device, method and storage medium
CN110705583B (en) Cell detection model training method, device, computer equipment and storage medium
CN110334585A (en) Table recognition method, apparatus, computer equipment and storage medium
US11354797B2 (en) Method, device, and system for testing an image
WO2021114809A1 (en) Vehicle damage feature detection method and apparatus, computer device, and storage medium
CN109543627A (en) A kind of method, apparatus and computer equipment judging driving behavior classification
WO2021135513A1 (en) Deep learning model-based vehicle loss assessment method and apparatus, device and medium
WO2021217940A1 (en) Vehicle component recognition method and apparatus, computer device, and storage medium
WO2022134354A1 (en) Vehicle loss detection model training method and apparatus, vehicle loss detection method and apparatus, and device and medium
CN111415336B (en) Image tampering identification method, device, server and storage medium
US11756288B2 (en) Image processing method and apparatus, electronic device and storage medium
CN113112416B (en) Semantic-guided face image restoration method
CN113657409A (en) Vehicle loss detection method, device, electronic device and storage medium
CN111461211B (en) Feature extraction method for lightweight target detection and corresponding detection method
JP2023526899A (en) Methods, devices, media and program products for generating image inpainting models
CN112232336A (en) Certificate identification method, device, equipment and storage medium
CN114972771A (en) Vehicle loss assessment and claim settlement method and device, electronic equipment and storage medium
CN117197763A (en) Road crack detection method and system based on cross attention guide feature alignment network
CN114241344B (en) Plant leaf disease and pest severity assessment method based on deep learning
CN117095180B (en) Embryo development stage prediction and quality assessment method based on stage identification
CN117152484B (en) Small target cloth flaw detection method based on improved YOLOv5s
CN111353689B (en) Risk assessment method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20909389

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20909389

Country of ref document: EP

Kind code of ref document: A1