CN111753843A - Segmentation effect evaluation method, device, equipment and medium based on deep learning - Google Patents

Segmentation effect evaluation method, device, equipment and medium based on deep learning Download PDF

Info

Publication number
CN111753843A
Authority
CN
China
Prior art keywords
picture
segmentation
model
label
tested
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010599352.9A
Other languages
Chinese (zh)
Inventor
史鹏
刘莉红
刘玉宇
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010599352.9A priority Critical patent/CN111753843A/en
Publication of CN111753843A publication Critical patent/CN111753843A/en
Priority to PCT/CN2020/123255 priority patent/WO2021135552A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention relates to the field of artificial intelligence, and discloses a segmentation effect evaluation method, device, equipment and storage medium based on deep learning. The method comprises the following steps: obtaining a vehicle picture, and obtaining a segmented picture according to the vehicle picture and a pre-trained component segmentation model; creating training labels according to the features of the segmented picture, wherein the features comprise a first feature of conforming to a preset shape and/or having uniform color and a second feature of exhibiting speckles (flower dots) and/or intersecting colors, and the training labels comprise a first label corresponding to the first feature and a second label corresponding to the second feature; labeling the segmented picture according to the training labels to obtain a labeled picture; obtaining a segmentation feature extraction model according to a constructed deep learning model and the labeled picture; and obtaining a picture to be tested, and obtaining a segmentation effect according to the picture to be tested and the segmentation feature extraction model. The method can evaluate the segmentation effect of the component segmentation model and filter out pictures for which segmentation has failed.

Description

Segmentation effect evaluation method, device, equipment and medium based on deep learning
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a segmentation effect evaluation method and device based on deep learning, computer equipment and a readable storage medium.
Background
With the rapid growth of automobile ownership, traffic collision accidents have become frequent, and vehicle damage is an increasingly common problem. Insurance companies therefore continuously launch vehicle insurance services to protect the vehicle property of the public. When an insured vehicle is involved in a traffic accident, the insurance company must arrange damage assessment for the damaged vehicle and determine the insurance claim amount according to the severity of the damage; vehicle insurance damage assessment is thus a key link in claim settlement.
At present, the judgment of vehicle damage relies mainly on manual estimation: damage assessors conduct on-site surveys at the scene of the accident. Manual classification by assessors takes a great deal of time and requires substantial labor cost, and its low efficiency hinders the rapid settlement of vehicle insurance claims. Moreover, assessors must classify the various collected images and distinguish from them the damage suffered by each part, and are easily influenced by subjective factors, so the accuracy of the assessed damage level cannot be guaranteed.
Therefore, in order to solve the above problems, computer vision image recognition and semantic segmentation technologies have been applied to the scene of vehicle damage assessment. The general process in the prior art is as follows: an image acquisition tool collects images of the vehicle, and a computer automatically recognizes the images and applies semantic segmentation to reflect and judge the damaged condition of the vehicle. However, owing to the complexity of actual scenes and the limited robustness of semantic segmentation models, segmentation failure may occur while a component segmentation model segments an image, and component segmentation is the basis for subsequent component identification. If the segmentation effect of the segmentation model cannot be accurately obtained, the subsequent accurate evaluation of the damage features of each vehicle component may be affected, leading to inconsistent damage assessment results and economic losses for vehicle users or insurance companies. The evaluation of the effect of vehicle component segmentation therefore needs to be improved.
Disclosure of Invention
The invention aims to solve the technical problem in the prior art that segmentation of a vehicle component picture may fail, and provides a segmentation effect evaluation method and device based on deep learning, a computer device and a readable storage medium, so as to accurately evaluate the segmentation effect of a component segmentation model and filter out pictures for which segmentation has failed.
A first aspect of the present invention provides a segmentation effect evaluation method, including:
obtaining a vehicle picture and a pre-trained component segmentation model, and obtaining a segmented picture according to the vehicle picture and the pre-trained component segmentation model;
creating training labels according to the features of the segmented picture, wherein the features of the segmented picture comprise a first feature of conforming to a preset shape and/or having uniform color and a second feature of exhibiting speckles (flower dots) and/or intersecting colors, and the training labels comprise a first label corresponding to the first feature and a second label corresponding to the second feature;
labeling the segmented picture according to the first label and the second label to obtain a labeled picture;
constructing a deep learning model, and acquiring a segmentation feature extraction model according to the deep learning model and the labeled picture;
obtaining a picture to be tested that has undergone component segmentation, namely a picture output by the component segmentation model to be tested, and inputting the picture to be tested into the segmentation feature extraction model to determine a label corresponding to the picture to be tested;
if the corresponding label accords with the first label, determining that the picture to be tested accords with a preset segmentation effect; and if the corresponding label accords with the second label, determining that the picture to be tested does not accord with the preset segmentation effect.
Optionally, the deep learning model is a MobileNetV2 model, and obtaining the segmentation feature extraction model according to the deep learning model and the labeled picture includes:
dividing the labeled pictures into a training set and a test set;
performing data amplification on the training set and the test set to obtain an amplified training set and an amplified test set;
inputting the amplified training set into the MobileNetV2 model for training to obtain a training model;
inputting the amplified test set into the training model for testing to obtain a test result;
and obtaining the segmentation feature extraction model according to the accuracy of the test result.
Optionally, obtaining the segmentation feature extraction model according to the accuracy of the test result includes:
adjusting model parameters according to the accuracy of the test result to obtain test models whose accuracy scores rank in the top K;
and taking the test models whose accuracy scores rank in the top K as the segmentation feature extraction models.
Optionally, obtaining the picture to be tested that has undergone component segmentation, and inputting the picture to be tested into the segmentation feature extraction model to determine the label corresponding to the picture to be tested, includes:
scaling the picture to be tested to a preset size to obtain a scaled picture;
performing edge cropping on the scaled picture to obtain an image block to be input into the segmentation feature extraction model;
and inputting the image block into the segmentation feature extraction model to determine the label corresponding to the picture to be tested.
Optionally, inputting the image block into the segmentation feature extraction model to determine the label corresponding to the picture to be tested includes:
inputting the image block into the top-K segmentation feature extraction models to obtain the label result output by each segmentation feature extraction model;
and performing a probability average calculation on the label results output by the segmentation feature extraction models to determine the label corresponding to the picture to be tested.
Optionally, performing data amplification on the training set and the test set to obtain an amplified training set and an amplified test set includes:
randomly cropping, randomly horizontally flipping, or randomly rotating the labeled pictures in the training set to obtain first amplified pictures;
taking the labeled pictures and the first amplified pictures in the training set as the amplified training set;
randomly cropping, randomly horizontally flipping, or randomly rotating the labeled pictures in the test set to obtain second amplified pictures;
and taking the labeled pictures and the second amplified pictures in the test set as the amplified test set.
A second aspect of the present invention provides a segmentation effect evaluation apparatus, the apparatus including:
the first acquisition module is used for acquiring a vehicle picture and a pre-trained component segmentation model, and acquiring a segmented picture according to the vehicle picture and the pre-trained component segmentation model;
the label creating module is used for creating training labels according to the features of the segmented picture, wherein the features of the segmented picture comprise a first feature of conforming to a preset shape and/or having uniform color and a second feature of exhibiting speckles (flower dots) and/or intersecting colors, and the training labels comprise a first label corresponding to the first feature and a second label corresponding to the second feature;
the labeling module is used for labeling the segmented picture according to the first label and the second label to obtain a labeled picture;
the second acquisition module is used for constructing a deep learning model and acquiring a segmentation feature extraction model according to the deep learning model and the labeled picture;
the third acquisition module is used for acquiring a picture to be tested subjected to component segmentation, and inputting the picture to be tested into the segmentation feature extraction model so as to determine a label corresponding to the picture to be tested;
the segmentation effect evaluation module is used for determining that the picture to be tested accords with a preset segmentation effect if the corresponding label accords with the first label; and if the corresponding label accords with the second label, determining that the picture to be tested does not accord with the preset segmentation effect.
Optionally, the deep learning model is a MobileNetV2 model, and the second obtaining module is further configured to:
divide the labeled pictures into a training set and a test set;
perform data amplification on the training set and the test set to obtain an amplified training set and an amplified test set;
input the amplified training set into the MobileNetV2 model for training to obtain a training model;
input the amplified test set into the training model for testing to obtain a test result;
and obtain the segmentation feature extraction model according to the accuracy of the test result.
A third aspect of the present invention provides a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the deep learning based segmentation effect evaluation method according to the first aspect of the present invention when executing the computer program.
A fourth aspect of the present invention provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the deep learning-based segmentation effect evaluation method according to the first aspect of the present invention.
The invention provides a segmentation effect evaluation method and device based on deep learning, a computer device, and a readable storage medium. The method comprises the following steps: obtaining a vehicle picture and a pre-trained component segmentation model, and obtaining a segmented picture according to the vehicle picture and the pre-trained component segmentation model; creating training labels according to the features of the segmented picture, wherein the features comprise a first feature of conforming to a preset shape and/or having uniform color and a second feature of exhibiting speckles (flower dots) and/or intersecting colors, and the training labels comprise a first label corresponding to the first feature and a second label corresponding to the second feature; labeling the segmented picture according to the first label and the second label to obtain a labeled picture; constructing a deep learning model, and obtaining a segmentation feature extraction model according to the deep learning model and the labeled picture; acquiring a picture to be tested that has undergone component segmentation, and inputting the picture to be tested into the segmentation feature extraction model to determine its corresponding label; if the corresponding label accords with the first label, determining that the picture to be tested accords with the preset segmentation effect; and if it accords with the second label, determining that the picture to be tested does not accord with the preset segmentation effect. In this deep learning-based segmentation effect evaluation method, training labels are created according to the features of pictures segmented by the pre-trained component segmentation model, a segmentation feature extraction model is obtained by training and testing the deep learning model, the picture to be tested is input into the segmentation feature extraction model to obtain its corresponding label, and whether the picture accords with the preset segmentation effect is determined from that label, so that the segmentation effect of the component segmentation model under test can be evaluated accurately and pictures that do not accord with the preset segmentation effect (failed segmentations) can be filtered out.
Drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flow chart illustrating a segmentation effect evaluation method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a first feature in an embodiment of the invention;
FIG. 3 is a schematic view of a second feature in an embodiment of the invention;
FIG. 4 is a flow chart illustrating a process of obtaining a segmentation feature extraction model according to an embodiment of the present invention;
FIG. 5 is a flow chart illustrating an evaluation result of a picture to be tested according to an embodiment of the present invention;
FIG. 6 is a block diagram of a segmentation effect evaluation apparatus according to an embodiment of the present invention;
FIG. 7 is a block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The segmentation effect evaluation method based on deep learning provided by the embodiment of the present invention may be applied to a scene for vehicle component segmentation, and specifically, as shown in fig. 1, the method may include the following steps S10-S60:
S10: acquiring a vehicle picture and a pre-trained component segmentation model, and acquiring a segmented picture according to the vehicle picture and the pre-trained component segmentation model.
In one embodiment, a vehicle picture and a pre-trained component segmentation model are obtained, and a segmented picture is obtained according to them. Specifically, a certain number of vehicle pictures may first be obtained, for example 200,000, and input into the pre-trained component segmentation model to obtain segmented pictures based on the vehicle pictures; of course, more vehicle pictures may also be obtained, for example 500,000, which is not limited here. In this embodiment, it can be understood that since the pre-trained component segmentation model is a vehicle component segmentation model that has already been trained, the corresponding segmented picture can be obtained from the vehicle picture and that model.
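As an illustrative sketch only (the code is not part of the disclosure; the model file name, the loading convention, and the function name are assumptions), the inference step of S10 might look as follows in PyTorch:

```python
import torch
from torchvision import transforms
from PIL import Image

# Hypothetical pre-trained vehicle component segmentation model,
# assumed here to have been saved as a whole torch module.
seg_model = torch.load("part_segmentation_model.pt", map_location="cpu")
seg_model.eval()

to_tensor = transforms.ToTensor()

def segment_vehicle_picture(path):
    """Run one vehicle picture through the component segmentation model."""
    image = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = seg_model(image)            # per-pixel component scores
    return logits.argmax(dim=1).squeeze(0)   # one component index per pixel
```

The per-pixel component indices would then be rendered with one preset color per component to form the segmented picture described above.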
S20: creating training labels according to the features of the segmented picture, wherein the features of the segmented picture comprise a first feature of conforming to a preset shape and/or having uniform color and a second feature of exhibiting speckles (flower dots) and/or intersecting colors, and the training labels comprise a first label corresponding to the first feature and a second label corresponding to the second feature.
In an application scenario, owing to the complexity of actual traffic scenes, the limited robustness of the component segmentation model, and similar reasons, picture segmentation may fail while the pre-trained component segmentation model segments a picture. It can be understood that, depending on whether segmentation succeeds or fails, the segmented pictures produced by the component segmentation model exhibit different features, which may include, for example, shape features and/or color features; training labels can then be created according to these different features of the segmented picture. Regarding the color features, the component segmentation model can be configured in advance to output a different color for each different vehicle component. In one embodiment, training labels may be created according to the features of the segmented picture: specifically, the features may include a first feature of conforming to a preset shape and/or having uniform color and a second feature of exhibiting speckles (flower dots) and/or intersecting colors, and the training labels include a first label corresponding to the first feature and a second label corresponding to the second feature.
Regarding the features of the segmented picture, in one embodiment the segmented picture may exhibit the first feature, i.e., conforming to a preset shape and/or having uniform color, which can be regarded as the mark of an effective segmentation by the component segmentation model. The preset-shape feature may be the preset contour of each component or the preset area of each component: a vehicle segmented picture involves different components, and when the region of a component in the segmented picture conforms to the preset contour of that component, or to the area enclosed by the component contours or another preset area, the segmented picture can be considered to have the preset-shape feature. The uniform-color feature means that the color of a component in the segmented picture is uniform and/or single. Specifically, as shown in fig. 2, A and B may represent different components of the vehicle displayed in different colors after segmentation; for example, A represents a door displayed in dark green and B represents a fender displayed in light green. Whether the actual segmented picture conforms to the preset-shape feature and/or the uniform-color feature can then be checked; if it does, the segmented picture can be determined to have the first feature of a good segmentation effect, i.e., the segmented picture shown in fig. 2 can be determined to exhibit a good segmentation effect.
In one embodiment, the segmented picture may exhibit the second feature, i.e., speckles (flower dots) and/or intersecting colors, which can be regarded as the mark of an ineffective segmentation by the component segmentation model. The speckle feature means that the region of a component in the segmented picture contains speckles: based on the components of the vehicle, the vehicle contours can serve as the shape features, and if speckles or jagged edges appear on the contour of a component, or spots and speckles appear inside the region enclosed by a component's contour, the segmentation effect of the segmented picture can be considered poor. The intersecting-color feature means that the region of a component displays several different colors, or that different colors cross and blend. It can be understood that the pre-trained component segmentation model is configured in advance, per the color features mentioned above, to output a different color for each component, so under normal conditions a component in the segmented picture corresponds to a single color; if several colors or color intersections appear, the segmentation effect of the current segmented picture can be considered poor. Illustratively, as shown in fig. 3, C may represent a vehicle window, with C1 indicating that the window is displayed in gray; D may represent a fender, with C2 indicating deep red, C3 indicating blue, and C4 indicating light red on the fender; and E may represent some part of the vehicle, with C5 indicating yellow, C6 indicating pink, C7 indicating black, and C8 indicating purple on that part. Here the contour of window C shows jagged features, the contour of fender D shows speckle and jagged features, and the regions of window C and fender D involve several colors, for example window C involves the blue of C3, and fender D involves the blue of C3 and the light red of C4; several colors, for example the pink of C6, the black of C7 and the purple of C8, appear on part E and blend across one another. Based on the features of window C, fender D and part E in fig. 3, the segmented picture shown in fig. 3 can therefore be determined to have the second feature of a poor segmentation effect.
From the above embodiments it can be understood that the first feature and the second feature of the segmented pictures produced by the component segmentation model can be defined and classified, and the corresponding training labels created from them: the training labels may include a first label corresponding to the first feature and a second label corresponding to the second feature, where the first label can be regarded as the label of a good segmentation effect and the second label as the label of a poor segmentation effect. It should be noted that the schematic diagrams in fig. 2 and fig. 3 are for illustration only and do not represent actual segmented pictures.
S30: labeling the segmented picture according to the first label and the second label to obtain a labeled picture.
In an application scenario, a first label and a second label can be created based on the different features of the segmented pictures produced by the component segmentation model, and the segmented pictures produced by the pre-trained component segmentation model can be annotated according to these labels. Specifically, manual annotation may be adopted: the first label is attached to segmented pictures with features conforming to a preset shape and/or having uniform color, and the second label is attached to segmented pictures exhibiting speckles and/or intersecting colors, thereby obtaining labeled pictures based on the first and second labels.
S40: constructing a deep learning model, and obtaining a segmentation feature extraction model according to the deep learning model and the labeled picture.
In one embodiment, based on artificial intelligence and deep learning technology, a deep learning model can be constructed, and the segmentation feature extraction model can be obtained from the deep learning model and the labeled pictures. In this embodiment, the deep learning model may be a MobileNetV1 model: as a lightweight deep learning model developed by Google, MobileNetV1 is light, computationally cheap, and fast, and can be applied to portable terminal devices. As a preferred embodiment, the deep learning model may instead be a MobileNetV2 model, the next-generation lightweight deep learning model released by Google. Compared with the earlier MobileNetV1 model, MobileNetV2 further improves the accuracy and speed of deep learning model training while reducing the number of model parameters, and therefore the computation and memory footprint, which suits mobile terminal devices. It should be noted that the following examples mainly illustrate the MobileNetV2 model, but the application is not limited to it.
In one embodiment, the deep learning model is a MobileNetV2 model, and the above step S40, that is, obtaining the segmentation feature extraction model according to the deep learning model and the labeled picture, may specifically include, as shown in fig. 4, the following steps S401 to S404:
S401: dividing the labeled pictures into a training set and a test set.
Based on the labeled pictures carrying the training labels, the labeled pictures can be divided into a training set and a test set according to a preset ratio. For example, the preset ratio may be 3:1, i.e., the labeled pictures are divided into a training set and a test set at a ratio of 3:1. The preset ratio may also be, for example, 3:2; it is not limited here and may be set according to the actual situation.
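As a minimal illustrative sketch (not part of the disclosure; the 3:1 ratio comes from the text above, while the function name and the fixed random seed are assumptions), the split of S401 could be implemented as:

```python
import random

def split_labeled_pictures(labeled_pictures, train_ratio=0.75, seed=42):
    """Shuffle and split the labeled pictures into training and test sets.

    train_ratio=0.75 realizes the 3:1 preset ratio; 3:2 would be 0.6.
    """
    pictures = list(labeled_pictures)
    random.Random(seed).shuffle(pictures)
    cut = int(len(pictures) * train_ratio)
    return pictures[:cut], pictures[cut:]
```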
S402: performing data amplification on the training set and the test set to obtain an amplified training set and an amplified test set. In one embodiment, based on the divided training set and test set, data amplification can be performed on the labeled pictures in both, forming new training and test sets. In an application scenario, for example during the model training phase, the data amplification may include, but is not limited to, random cropping, random horizontal flipping, or random rotation, i.e., the labeled pictures in the training set and the test set can be transformed in several ways. Specifically, the labeled pictures in the training set can be randomly cropped, randomly horizontally flipped, or randomly rotated to obtain first amplified pictures, and the labeled pictures together with the first amplified pictures form the amplified training set; likewise, the labeled pictures in the test set can be randomly cropped, randomly horizontally flipped, or randomly rotated to obtain second amplified pictures, and the labeled pictures together with the second amplified pictures form the amplified test set. In addition to the above, randomly varying the contrast, brightness, saturation, and so on of the labeled pictures yields further new training and test data, which also produces an amplified training set and an amplified test set. In this embodiment, data amplification can increase the data volume of the training and test sets by an order of magnitude or more and increase the diversity of the training set, which helps to avoid overfitting and improves the recognition performance of the subsequent models. Data amplification may also be achieved by other means, such as collecting more data sets, which is not limited here.
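A sketch of such an amplification pipeline with torchvision is shown below; the crop size, rotation range, and jitter strengths are assumed values, since the embodiment names the transformation types but not their parameters:

```python
from torchvision import transforms

# Random cropping, horizontal flipping, and rotation as named above;
# contrast/brightness/saturation jitter as the optional extra amplification.
amplify = transforms.Compose([
    transforms.RandomCrop(320, pad_if_needed=True),  # assumed crop size
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),           # assumed rotation range
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
])
```

Each labeled picture passed through this pipeline yields an amplified picture; keeping the original pictures alongside their amplified versions gives the amplified training set and amplified test set described above.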
S403: and inputting the amplified training set into a MobilenetV2 model for training to obtain a training model.
Based on the obtained amplified training set, the amplified training set can be input into the MobileNetV2 model for training to obtain a training model, where the model can be saved once per epoch, i.e., once per full pass over the new training set, to obtain the final training model. Specifically, the amplified training set may first undergo picture preprocessing, which may include but is not limited to picture cropping or picture scaling; for example, the pictures may be uniformly scaled to 365×365 and/or 320×320 before being input into the MobileNetV2 model for training. This reduces the data volume per sample without changing the number of samples, reduces the computation of the MobileNetV2 model, and the fixed, uniform format also benefits the processing of subsequent steps. In this embodiment, the MobileNetV2 model is preferred as the lightweight deep learning model; compared with the MobileNetV1 model, it greatly reduces the amount of computation and improves both the efficiency and the accuracy of model training.
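For illustration, a training loop matching this description (a two-class head for the first/second label, one checkpoint per epoch) might look as follows in recent torchvision; the learning rate, epoch count, and checkpoint file names are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

# MobileNetV2 with a two-class head: first label (good segmentation)
# versus second label (poor segmentation).
model = models.mobilenet_v2(weights=None)
model.classifier[1] = nn.Linear(model.last_channel, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed
criterion = nn.CrossEntropyLoss()

def train(model, train_loader, epochs=30):
    """Train on the amplified training set, saving one checkpoint per epoch."""
    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:  # images pre-scaled, e.g. 320x320
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        # "the model is saved once per epoch" (S403)
        torch.save(model.state_dict(), f"mobilenetv2_epoch{epoch:03d}.pt")
```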
S404: and inputting the amplification test set into a training model for testing to obtain a test result, and acquiring a segmentation feature extraction model according to the accuracy of the test result.
Based on the training model obtained by training on the amplified training set, the amplified test set can be input into the training model for testing, and the segmentation feature extraction model can be obtained according to the accuracy of the test result.
In one embodiment, step S404, that is, obtaining the segmentation feature extraction model according to the accuracy of the test result, may specifically include the following steps S4041-S4042:
S4041: adjusting model parameters according to the accuracy of the test result to obtain the test models whose accuracy scores rank in the top K.
During the testing of the training model, the amplified test set involves the relevant model parameters, which may include, but are not limited to, the learning rate, the optimization method, the regularization term, the batch size, and the like; the model parameters can then be adjusted continually according to the accuracy of the test result. In this embodiment, it can be understood that, because the labeled pictures carry the training labels for good and poor segmentation effects, the segmentation feature extraction model generated by training on the labeled pictures likewise carries the attributes of those training labels.
S4042: and taking the test model with the accuracy score of K bits before ranking as a segmentation feature extraction model.
Based on the test models whose test accuracy scores rank in the top K, those top-K test models can be used as the segmentation feature extraction models. Specifically, the top-K test models may be selected preferentially; for example, the top 2 or the top 3 test models may be chosen, which is not limited here and may be selected according to the actual situation.
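A sketch of this top-K selection (the dictionary-of-accuracies interface and the function name are assumptions) is:

```python
def top_k_checkpoints(accuracy_by_checkpoint, k=2):
    """Keep the K checkpoints whose test accuracy ranks highest (S4041-S4042)."""
    ranked = sorted(accuracy_by_checkpoint.items(),
                    key=lambda item: item[1], reverse=True)
    return [path for path, _accuracy in ranked[:k]]

# e.g. {"epoch010.pt": 0.94, "epoch020.pt": 0.97, "epoch030.pt": 0.95}
# with k=2 -> ["epoch020.pt", "epoch030.pt"]
```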
S50: acquiring a picture to be tested that has undergone component segmentation, and inputting the picture to be tested into the segmentation feature extraction model to determine the label corresponding to the picture to be tested.
Based on the obtained segmentation feature extraction model: in an actual scene, different component segmentation models produce different segmented pictures. When the segmentation effect of a segmentation model needs to be verified, or segmented pictures with a poor segmentation effect (or failed segmentation) need to be filtered out, a picture to be tested that has undergone component segmentation can be acquired and input into the segmentation feature extraction model to determine the label corresponding to it. In this embodiment, it can be understood that the picture to be tested may be produced by, for example, a pre-trained component segmentation model and can be used to verify the quality of that model's segmentation; specifically, when the picture to be tested is input into the segmentation feature extraction model, its corresponding label can be obtained, and the segmentation effect is then derived from that label.
S60: if the corresponding label accords with the first label, determining that the picture to be tested accords with a preset segmentation effect; and if the corresponding label accords with the second label, determining that the picture to be tested does not accord with the preset segmentation effect.
In one embodiment, after the picture to be tested is acquired, it may be input into the segmentation feature extraction model to obtain its corresponding label, and whether the picture to be tested accords with the preset segmentation effect can then be determined from that label. Specifically, if the corresponding label accords with the first label, the picture to be tested is determined to accord with the preset segmentation effect; if it accords with the second label, the picture is determined not to accord with the preset segmentation effect. Here, the preset segmentation effect may be the good segmentation effect exemplified above, and failing the preset segmentation effect corresponds to the poor segmentation effect exemplified above.
In this embodiment, it can be understood that once the label corresponding to the picture to be tested is obtained, the segmentation effect of the picture (good or poor) can be evaluated from that label, and the component segmentation model can then be trained further according to the evaluated effect, so as to refine it and improve its accuracy. Further, based on the first feature and first label and the second feature and second label defined above, the first label can be regarded as the label of a good segmentation effect and the second label as the label of a poor segmentation effect; segmented pictures in which component segmentation failed (poor segmentation effect) can then be filtered out in advance according to the evaluation result, improving the accuracy of the segmented pictures that are retained.
In an embodiment, the obtaining of the evaluation result of the picture to be tested according to the picture to be tested and the segmentation feature extraction model may specifically include, as shown in fig. 5, steps S501 to S503:
S501: scaling the picture to be tested to a preset size to obtain a scaled picture.
Based on the segmented pictures that need verification, picture preprocessing can be performed on each of them; the preprocessing may include, but is not limited to, picture scaling and/or picture cropping, which is not limited here.
In an application scenario, the pictures to be tested obtained after component segmentation may come in many different sizes, while the segmentation feature extraction model imposes certain restrictions on the size of its input. The pictures to be tested therefore need to be adjusted to a fixed size: each picture can be scaled to a preset size to obtain a scaled picture, where the preset size may be, for example, 365×365 or 480×480 and is not limited here. In this embodiment the preset size is illustrated as 365×365, i.e., each picture to be tested is scaled to 365×365 to obtain a scaled picture of that preset size. Preprocessing the picture to be tested by scaling reduces the data volume per sample without changing the number of samples, and the fixed, uniform format also benefits the processing of the other steps.
S502: performing edge cropping on the scaled picture to obtain an image block to be input into the segmentation feature extraction model to obtain the label corresponding to the picture to be tested.
In one embodiment, the scaled picture may be further preprocessed; for example, its edges may be cropped to obtain the image block that is input into the segmentation feature extraction model to determine the label of the picture to be tested. Based on the 365×365 scaled picture obtained in step S501, edge cropping can be performed on it; the edges can be understood to carry the less important features of the scaled picture. Specifically, the edges are cropped around the center point of the scaled picture, retaining the image block of size 320×320 that extends from the center point toward the four sides, i.e., the scaled picture is cropped to a 320×320 image block. By scaling in step S501 and then cropping the scaled picture, the data volume per sample can be further reduced. It should be noted that the order of scaling and cropping in the above steps is illustrative only and is not limiting.
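Steps S501-S502 correspond to a standard resize-then-center-crop pipeline; a sketch with torchvision (the library choice and default interpolation are assumptions) is:

```python
from torchvision import transforms
from PIL import Image

# S501: scale every picture to the preset size 365x365;
# S502: crop the edges around the center point, keeping a 320x320 image block.
preprocess = transforms.Compose([
    transforms.Resize((365, 365)),
    transforms.CenterCrop(320),
    transforms.ToTensor(),
])

def to_image_block(path):
    """Turn one picture to be tested into a model-ready image block."""
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
```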
S503: and inputting the image block into the segmentation feature extraction model to determine a label corresponding to the picture to be tested.
Based on the obtained segmentation feature extraction model, the image block can be input into it to determine the label corresponding to the picture to be tested. Specifically, a corresponding forward-propagation routine may be created first, and the preprocessed image block is then fed to the segmentation feature extraction model, so that the label corresponding to the picture to be tested can be determined.
In this embodiment, it can be understood that the segmentation feature extraction model is generated from the training labels, i.e., it carries the first label corresponding to the first feature (which can be considered the attribute of a good segmentation effect) and the second label corresponding to the second feature (the attribute of a poor segmentation effect). When pictures to be tested produced by different component segmentation models are input into the segmentation feature extraction model, the label of each picture's corresponding feature can be determined, so the segmentation effect of the picture (good or poor) follows from its specific label. Meanwhile, pictures from a segmentation model that receive the second label (for example, failed segmentation) can be filtered out, which further improves the accuracy of other subsequent models such as the component identification model.
In one embodiment, step S503, inputting the image block into the segmentation feature extraction model to determine the label corresponding to the picture to be tested, may specifically include the following steps S5031-S5032:
S5031: inputting the image block into the top-K segmentation feature extraction models to obtain the label result output by each segmentation feature extraction model.
Based on step S4042 above, the test models whose accuracy scores rank in the top K can be used as the segmentation feature extraction models. In one embodiment, the image block may be input into the top-K segmentation feature extraction models to obtain the label result output by each model. Specifically, the two test models ranking top 2 in accuracy score may be selected as the segmentation feature extraction models, and the label results output by the two corresponding models are then obtained from those two test models and the image block. Concretely, once the picture to be tested has been preprocessed into an image block of fixed size 320×320 or 365×365, the image block can be input into the two test models separately for prediction, yielding one label result per model. It should be noted that selecting the two top-2 test models as the segmentation feature extraction models is only an example; three top-3 test models or four top-4 test models could equally be selected, which is not limited here.
S5032: and performing probability average calculation on the label result output by each segmentation feature extraction model to determine the label corresponding to the picture to be tested.
Based on the label results of the two segmentation feature extraction models obtained in step S5031, in one embodiment a probability average calculation can be performed on the label result output by each model to determine the label corresponding to the picture to be tested more accurately.
In one embodiment, suppose, for example, that the decision threshold for the label results of the first and second labels is [0.5, 0.5, 0.05, 0.05]; one segmentation feature extraction model outputs the first label result [0.75, 0.55, 0.05, 0.05], corresponding to the first label, while the other outputs the second label result [0.45, 0.45, 0.05, 0.05], corresponding to the second label. If only a single segmentation feature extraction model were used, two different labels might thus be obtained. In this case, a probability average can be computed over the label results output by the models: averaging the first and second label results gives [0.6, 0.5, 0.05, 0.05], so the label corresponding to the picture to be tested is the first label. It can be understood that different test models (segmentation feature extraction models) arise from training with different model parameters, produce different label results, and those results correspond to different labels. Again, selecting the two top-2 test models as the segmentation feature extraction models is only an example and not a limitation; three top-3 test models or four top-4 test models could also be selected to obtain the label corresponding to the picture to be tested.
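The following sketch shows the probability-averaging step of S5031-S5032. Note an assumption: the sketch averages softmax outputs, whereas the four-element score vectors in the example above are the patent's own illustration and need not sum to one; the averaging arithmetic is the same either way.

```python
import torch

def ensemble_label(image_block, models):
    """Average the label probabilities of the top-K models (S5031-S5032)
    and take the highest-scoring entry as the label of the picture."""
    with torch.no_grad():
        probs = [torch.softmax(m(image_block), dim=1) for m in models]
    mean_probs = torch.stack(probs).mean(dim=0)  # probability average
    return mean_probs.argmax(dim=1).item()       # e.g. 0: first label, 1: second

# The mean of [0.75, 0.55, 0.05, 0.05] and [0.45, 0.45, 0.05, 0.05] is
# [0.6, 0.5, 0.05, 0.05], so the first entry, i.e. the first label, wins,
# exactly as in the worked example above.
```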
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In one embodiment, a deep learning-based segmentation effect evaluation device is provided, and the functions implemented by the device correspond one-to-one to the steps of the segmentation effect evaluation method in the above embodiments. Specifically, as shown in fig. 6, the segmentation effect evaluation device may include a first obtaining module 10, a label creating module 20, a labeling module 30, a second obtaining module 40, a third obtaining module 50, and a segmentation effect evaluation module 60. Each functional module is described in detail as follows:
the first obtaining module 10 is configured to obtain a vehicle picture and a pre-trained component segmentation model, and to obtain a segmented picture according to the vehicle picture and the pre-trained component segmentation model;
the label creating module 20 is configured to create training labels according to the features of the segmented picture, where the features of the segmented picture comprise a first feature of conforming to a preset shape and/or having uniform color and a second feature of exhibiting speckles (flower dots) and/or intersecting colors, and the training labels comprise a first label corresponding to the first feature and a second label corresponding to the second feature;
the labeling module 30 is configured to label the segmented picture according to the first label and the second label to obtain a labeled picture;
the second obtaining module 40 is configured to construct a deep learning model and obtain a segmentation feature extraction model according to the deep learning model and the labeled picture;
the third obtaining module 50 is configured to obtain a picture output by the component segmentation model to be tested and take that picture as the picture to be tested;
and the segmentation effect evaluation module 60 is configured to input the picture to be tested into the segmentation feature extraction model to obtain an evaluation result based on the picture to be tested, and to evaluate the segmentation effect of the component segmentation model to be tested according to that result.
In one embodiment, the deep learning model is a MobileNetV2 model, and the second obtaining module 40 is further configured to:
divide the labeled pictures into a training set and a test set;
perform data amplification on the training set and the test set to obtain an amplified training set and an amplified test set;
input the amplified training set into the MobileNetV2 model for training to obtain a training model;
input the amplified test set into the training model for testing to obtain a test result;
and obtain the segmentation feature extraction model according to the accuracy of the test result.
For the specific limitations of the segmentation effect evaluation device, reference may be made to the above limitations of the segmentation effect evaluation method, which are not described herein again. The respective modules in the above-described segmentation effect evaluation apparatus may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, it implements the steps of the segmentation effect evaluation method of the above embodiments, which are not repeated here to avoid redundancy. Alternatively, when executed by a processor, the computer program implements the functions of the modules of the segmentation effect evaluation device of the above embodiments, likewise not repeated here. It is to be understood that the computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunications signals, and the like.
In one embodiment, as shown in FIG. 7, a computer device is provided. Specifically, the computer device 60 of this embodiment includes: a processor 61, a memory 62 and a computer program 63 stored in the memory 62 and executable on the processor 61. The steps in the segmentation effect evaluation method according to the above embodiment are implemented when the processor 61 executes the computer program 63, and are not described herein again to avoid repetition. Alternatively, the processor 61 implements the functions of the modules in the segmentation effect evaluation apparatus according to the above embodiment when executing the computer program 63, and the descriptions thereof are omitted here for avoiding redundancy.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed it can include the processes of the above method embodiments. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
A blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, where each data block contains the information of a batch of network transactions and is used to verify the validity (tamper resistance) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, the division of the above functional units and modules is merely illustrative. In practical applications, the above functions may be allocated to different functional modules, sub-modules, and units as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and replacements do not depart from the spirit and scope of the embodiments of the present invention and are intended to fall within the scope of the present invention.

Claims (10)

1. A segmentation effect evaluation method based on deep learning is characterized by comprising the following steps:
obtaining a vehicle picture and a pre-trained component segmentation model, and obtaining a segmentation picture according to the vehicle picture and the pre-trained component segmentation model;
creating training labels according to features of the segmentation picture, wherein the features of the segmentation picture comprise a first feature of conforming to a preset shape and/or having uniform color, and a second feature of containing mottled dots and/or color crossover, and the training labels comprise a first label corresponding to the first feature and a second label corresponding to the second feature;
labeling the segmentation picture according to the first label and the second label to obtain a labeled picture;
constructing a deep learning model, and acquiring a segmentation feature extraction model according to the deep learning model and the labeled picture;
acquiring a picture to be tested that has been subjected to component segmentation, and inputting the picture to be tested into the segmentation feature extraction model to determine a label corresponding to the picture to be tested;
and if the corresponding label matches the first label, determining that the picture to be tested meets a preset segmentation effect; if the corresponding label matches the second label, determining that the picture to be tested does not meet the preset segmentation effect.
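By way of illustration and not limitation, the decision step recited in claim 1 above may be sketched in Python as follows. The sketch assumes a trained two-class classifier built on torchvision's MobileNetV2, with class index 0 standing for the first label (segmentation meets the preset effect) and class index 1 for the second label; the file paths, input sizes, and identifiers such as FIRST_LABEL are illustrative assumptions, not part of the claimed method.

    # Illustrative sketch only; class indices, paths, and sizes are assumptions.
    import torch
    import torch.nn.functional as F
    from torchvision import transforms
    from torchvision.models import mobilenet_v2
    from PIL import Image

    FIRST_LABEL, SECOND_LABEL = 0, 1  # assumed: 0 = meets effect, 1 = does not

    preprocess = transforms.Compose([
        transforms.Resize((256, 256)),   # scale to a preset size (assumed 256)
        transforms.CenterCrop(224),      # edge-clip into an image block (assumed 224)
        transforms.ToTensor(),
    ])

    def evaluate_segmentation(picture_path: str, weights_path: str) -> bool:
        """Return True if the segmented picture meets the preset effect."""
        model = mobilenet_v2(num_classes=2)
        model.load_state_dict(torch.load(weights_path, map_location="cpu"))
        model.eval()
        block = preprocess(Image.open(picture_path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = F.softmax(model(block), dim=1)
        return int(probs.argmax(dim=1)) == FIRST_LABEL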
2. The segmentation effect evaluation method according to claim 1, wherein the deep learning model is a MobilenetV2 model, and the acquiring a segmentation feature extraction model according to the deep learning model and the labeled picture comprises:
dividing the labeled pictures into a training set and a test set;
performing data augmentation on the training set and the test set to obtain an augmented training set and an augmented test set;
inputting the augmented training set into the MobilenetV2 model for training to obtain a training model;
inputting the augmented test set into the training model for testing to obtain a test result;
and acquiring the segmentation feature extraction model according to the accuracy of the test result.
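By way of illustration and not limitation, the training and testing flow of claim 2 may be sketched as follows, assuming the labeled pictures are arranged in an ImageFolder-style directory tree (one sub-folder per label); the directory paths, hyperparameters, and epoch count are assumptions.

    # Illustrative sketch; directory layout, hyperparameters, and sizes are assumed.
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms
    from torchvision.models import mobilenet_v2

    train_tf = transforms.Compose([          # augmentation of the training set
        transforms.Resize((256, 256)),
        transforms.RandomCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(15),
        transforms.ToTensor(),
    ])
    test_tf = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])

    train_set = datasets.ImageFolder("labeled/train", transform=train_tf)
    test_set = datasets.ImageFolder("labeled/test", transform=test_tf)

    model = mobilenet_v2(num_classes=2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    for epoch in range(10):                  # assumed epoch count
        model.train()
        for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()

    model.eval()                             # test to obtain an accuracy score
    correct = total = 0
    with torch.no_grad():
        for x, y in DataLoader(test_set, batch_size=32):
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    print(f"test accuracy: {correct / total:.4f}")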
3. The segmentation effect evaluation method according to claim 2, wherein the acquiring the segmentation feature extraction model according to the accuracy of the test result comprises:
adjusting model parameters according to the accuracy of the test result to obtain the test models whose accuracy scores rank in the top K;
and taking the test models whose accuracy scores rank in the top K as the segmentation feature extraction models.
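The top-K selection of claim 3 amounts to ranking the candidate test models by accuracy score and keeping the first K; a minimal sketch, assuming each candidate has already been scored (the value of K is an assumption):

    # Illustrative sketch; each candidate pairs a trained model with its test accuracy.
    def select_top_k(candidates, k=3):
        """Keep the k test models whose accuracy scores rank highest."""
        ranked = sorted(candidates, key=lambda pair: pair[1], reverse=True)
        return [model for model, _accuracy in ranked[:k]]

    # Usage: top_models = select_top_k([(model_a, 0.91), (model_b, 0.88)], k=2)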
4. The segmentation effect evaluation method according to claim 3, wherein the acquiring a picture to be tested that has been subjected to component segmentation, and inputting the picture to be tested into the segmentation feature extraction model to determine a label corresponding to the picture to be tested, comprises:
scaling the picture to be tested to a preset size to obtain a scaled picture;
performing edge clipping on the scaled picture to obtain an image block;
and inputting the image block into the segmentation feature extraction model to determine the label corresponding to the picture to be tested.
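One plausible reading of the scaling and edge clipping of claim 4 is a resize followed by a center crop, which trims the picture edges into a fixed-size image block; a sketch under assumed sizes (preset size 256, block size 224):

    # Illustrative sketch; the preset size (256) and block size (224) are assumptions.
    from PIL import Image
    from torchvision import transforms

    to_block = transforms.Compose([
        transforms.Resize((256, 256)),  # scale the picture to be tested to a preset size
        transforms.CenterCrop(224),     # edge-clip the scaled picture into an image block
        transforms.ToTensor(),          # tensor ready for the feature extraction model
    ])

    block = to_block(Image.open("picture_under_test.jpg").convert("RGB")).unsqueeze(0)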
5. The segmentation effect evaluation method according to claim 4, wherein the inputting the image block into the segmentation feature extraction model to determine the label corresponding to the picture to be tested comprises:
inputting the image block into each of the top-K segmentation feature extraction models to obtain a label result output by each segmentation feature extraction model;
and averaging the probabilities of the label results output by the segmentation feature extraction models to determine the label corresponding to the picture to be tested.
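The probability averaging of claim 5 corresponds to a soft-voting ensemble over the top-K models; a minimal sketch, assuming the models are already trained and in evaluation mode:

    # Illustrative sketch; `models` holds the top-K trained classifiers in eval mode.
    import torch
    import torch.nn.functional as F

    def ensemble_label(models, block):
        """Average the per-model class probabilities and return the winning label."""
        with torch.no_grad():
            probs = torch.stack([F.softmax(m(block), dim=1) for m in models])
        return int(probs.mean(dim=0).argmax(dim=1))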
6. The segmentation effect evaluation method according to claim 2, wherein the performing data augmentation on the training set and the test set to obtain an augmented training set and an augmented test set comprises:
randomly cropping, randomly horizontally flipping, or randomly rotating the labeled pictures in the training set to obtain first augmented pictures;
taking the labeled pictures in the training set together with the first augmented pictures as the augmented training set;
randomly cropping, randomly horizontally flipping, or randomly rotating the labeled pictures in the test set to obtain second augmented pictures;
and taking the labeled pictures in the test set together with the second augmented pictures as the augmented test set.
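The augmentation of claim 6 can be sketched with torchvision's random transforms, combining each original picture with a randomly transformed copy; the crop size and rotation range are assumptions.

    # Illustrative sketch; crop size and rotation range are assumed.
    import random
    from PIL import Image
    from torchvision import transforms

    augmenters = [
        transforms.RandomCrop(224, pad_if_needed=True),  # random cropping
        transforms.RandomHorizontalFlip(p=1.0),          # horizontal flipping
        transforms.RandomRotation(15),                   # random rotation
    ]

    def augment(pictures):
        """Return the original pictures plus one randomly transformed copy of each."""
        return pictures + [random.choice(augmenters)(p) for p in pictures]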
7. A segmentation effectiveness evaluation apparatus based on deep learning, characterized by comprising:
a first acquisition module, configured to acquire a vehicle picture and a pre-trained component segmentation model, and to acquire a segmentation picture according to the vehicle picture and the pre-trained component segmentation model;
a label creation module, configured to create training labels according to features of the segmentation picture, wherein the features of the segmentation picture comprise a first feature of conforming to a preset shape and/or having uniform color, and a second feature of containing mottled dots and/or color crossover, and the training labels comprise a first label corresponding to the first feature and a second label corresponding to the second feature;
a labeling module, configured to label the segmentation picture according to the first label and the second label to obtain a labeled picture;
a second acquisition module, configured to construct a deep learning model and to acquire a segmentation feature extraction model according to the deep learning model and the labeled picture;
a third acquisition module, configured to acquire a picture to be tested that has been subjected to component segmentation, and to input the picture to be tested into the segmentation feature extraction model to determine a label corresponding to the picture to be tested;
and a segmentation effect evaluation module, configured to determine that the picture to be tested meets a preset segmentation effect if the corresponding label matches the first label, and to determine that the picture to be tested does not meet the preset segmentation effect if the corresponding label matches the second label.
8. The segmentation effect evaluation apparatus according to claim 7, wherein the deep learning model is a MobilenetV2 model, and the second acquisition module is further configured to:
divide the labeled pictures into a training set and a test set;
perform data augmentation on the training set and the test set to obtain an augmented training set and an augmented test set;
input the augmented training set into the MobilenetV2 model for training to obtain a training model;
input the augmented test set into the training model for testing to obtain a test result;
and acquire the segmentation feature extraction model according to the accuracy of the test result.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the segmentation effect evaluation method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the segmentation effect evaluation method according to any one of claims 1 to 6.
CN202010599352.9A 2020-06-28 2020-06-28 Segmentation effect evaluation method, device, equipment and medium based on deep learning Pending CN111753843A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010599352.9A CN111753843A (en) 2020-06-28 2020-06-28 Segmentation effect evaluation method, device, equipment and medium based on deep learning
PCT/CN2020/123255 WO2021135552A1 (en) 2020-06-28 2020-10-23 Segmentation effect assessment method and apparatus based on deep learning, and device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010599352.9A CN111753843A (en) 2020-06-28 2020-06-28 Segmentation effect evaluation method, device, equipment and medium based on deep learning

Publications (1)

Publication Number Publication Date
CN111753843A true CN111753843A (en) 2020-10-09

Family

ID=72676852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010599352.9A Pending CN111753843A (en) 2020-06-28 2020-06-28 Segmentation effect evaluation method, device, equipment and medium based on deep learning

Country Status (2)

Country Link
CN (1) CN111753843A (en)
WO (1) WO2021135552A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116052137B (en) * 2023-01-30 2024-01-30 北京化工大学 Deep learning-based classical furniture culture attribute identification method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101777060B (en) * 2009-12-23 2012-05-23 中国科学院自动化研究所 Webpage classification method and system based on webpage visual characteristics
CN111489328B (en) * 2020-03-06 2023-06-30 浙江工业大学 Fundus image quality evaluation method based on blood vessel segmentation and background separation
CN111753843A (en) * 2020-06-28 2020-10-09 平安科技(深圳)有限公司 Segmentation effect evaluation method, device, equipment and medium based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018209057A1 (en) * 2017-05-11 2018-11-15 The Research Foundation For The State University Of New York System and method associated with predicting segmentation quality of objects in analysis of copious image data
US20190163949A1 (en) * 2017-11-27 2019-05-30 International Business Machines Corporation Intelligent tumor tracking system
WO2019233297A1 (en) * 2018-06-08 2019-12-12 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Data set construction method, mobile terminal and readable storage medium
CN111145206A (en) * 2019-12-27 2020-05-12 Lenovo (Beijing) Co., Ltd. Liver image segmentation quality evaluation method and device and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUO Lili, "Research on Image Segmentation Quality Assessment Methods Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology, No. 12, pages 138-630 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021135552A1 (en) * 2020-06-28 2021-07-08 Ping An Technology (Shenzhen) Co., Ltd. Segmentation effect assessment method and apparatus based on deep learning, and device and medium
CN113139072A (en) * 2021-04-20 2021-07-20 Suzhou Zhito Technology Co., Ltd. Data labeling method and device and electronic equipment
WO2023273296A1 (en) * 2021-06-30 2023-01-05 Ping An Technology (Shenzhen) Co., Ltd. Vehicle image segmentation quality evaluation method and apparatus, device, and storage medium
CN117058498A (en) * 2023-10-12 2023-11-14 Tencent Technology (Shenzhen) Co., Ltd. Training method of segmentation map evaluation model, and segmentation map evaluation method and device
CN117058498B (en) * 2023-10-12 2024-02-06 Tencent Technology (Shenzhen) Co., Ltd. Training method of segmentation map evaluation model, and segmentation map evaluation method and device

Also Published As

Publication number Publication date
WO2021135552A1 (en) 2021-07-08

Similar Documents

Publication Publication Date Title
CN111753843A (en) Segmentation effect evaluation method, device, equipment and medium based on deep learning
WO2021212659A1 (en) Video data processing method and apparatus, and computer device and storage medium
CN109063706A (en) Verbal model training method, character recognition method, device, equipment and medium
CN111666995B (en) Vehicle damage assessment method, device, equipment and medium based on deep learning model
CN110390666A (en) Road damage detecting method, device, computer equipment and storage medium
CN109086652A (en) Handwritten word model training method, Chinese characters recognition method, device, equipment and medium
CN111666990A (en) Vehicle damage characteristic detection method and device, computer equipment and storage medium
CN110378254B (en) Method and system for identifying vehicle damage image modification trace, electronic device and storage medium
CN112613553B (en) Picture sample set generation method and device, computer equipment and storage medium
CN115239644B (en) Concrete defect identification method, device, computer equipment and storage medium
CN111860027A (en) Two-dimensional code identification method and device
CN109726195A (en) A kind of data enhancement methods and device
CN109063720A (en) Handwritten word training sample acquisition methods, device, computer equipment and storage medium
CN113505781A (en) Target detection method and device, electronic equipment and readable storage medium
CN112232336A (en) Certificate identification method, device, equipment and storage medium
CN114241344B (en) Plant leaf disease and pest severity assessment method based on deep learning
CN111768405A (en) Method, device, equipment and storage medium for processing annotated image
CN111899191A (en) Text image restoration method and device and storage medium
CN113963353A (en) Character image processing and identifying method and device, computer equipment and storage medium
CN113706513A (en) Vehicle damage image analysis method, device, equipment and medium based on image detection
CN112836756B (en) Image recognition model training method, system and computer equipment
CN111353689B (en) Risk assessment method and device
CN113901883A (en) Seal identification method, system and storage medium based on deep learning
CN111311601A (en) Segmentation method and device for spliced image
CN113012030A (en) Image splicing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40032358
Country of ref document: HK

SE01 Entry into force of request for substantive examination