CN113223017A - Training method of target segmentation model, target segmentation method and device - Google Patents

Training method of target segmentation model, target segmentation method and device

Info

Publication number
CN113223017A
CN113223017A CN202110540043.9A
Authority
CN
China
Prior art keywords
target segmentation
training
loss function
training sample
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110540043.9A
Other languages
Chinese (zh)
Inventor
王伟农
戴宇荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110540043.9A priority Critical patent/CN113223017A/en
Publication of CN113223017A publication Critical patent/CN113223017A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a training method of a target segmentation model, a target segmentation method, and a device. The training method includes the following steps: obtaining a plurality of training samples, wherein each training sample comprises an image annotated with a target segmentation true value; inputting each of the plurality of training samples into the target segmentation model to obtain a target segmentation predicted value; calculating a loss function for the target segmentation model based on the target segmentation predicted value, the target segmentation true value, and the labeling precision of the target segmentation true value of each training sample; and training the target segmentation model based on the calculated loss function. The labeling precision of the target segmentation true value of a training sample indicates the accuracy and fineness of the labeling of the actual target segmentation region in the training sample.

Description

Training method of target segmentation model, target segmentation method and device
Technical Field
The present disclosure relates generally to the field of artificial intelligence, and more particularly, to a training method and apparatus for a target segmentation model, and a target segmentation method and apparatus.
Background
Image target segmentation is a very important computer vision task with a variety of applications in image retrieval, visual tracking, picture editing, film and television production, and the like.
In the related art, suitable methods are used to improve the image target segmentation effect. For example, when a deep neural network is applied to image target segmentation, the high-level semantic features extracted by the deep neural network can more accurately distinguish a target object from the background in a complex scene, thereby greatly improving the target segmentation effect. In addition, the performance of an image target segmentation model can be improved by exploiting fine details, global semantics, convolutional attention mechanisms, edge information, and the like.
Disclosure of Invention
Exemplary embodiments of the present disclosure provide a training method of a target segmentation model, a target segmentation method, and corresponding apparatuses, which can improve the performance of the target segmentation model and the effect of image target segmentation from the perspective of the labeling precision of the training data set.
According to a first aspect of the embodiments of the present disclosure, there is provided a training method of a target segmentation model, including: obtaining a plurality of training samples, wherein each training sample comprises an image annotated with a target segmentation true value; inputting each of the plurality of training samples into the target segmentation model to obtain a target segmentation predicted value; calculating a loss function for the target segmentation model based on the target segmentation predicted value, the target segmentation true value, and the labeling precision of the target segmentation true value of each training sample; and training the target segmentation model based on the calculated loss function for the target segmentation model, wherein the labeling precision of the target segmentation true value of a training sample indicates the accuracy and fineness of the labeling of the actual target segmentation region in the training sample.
Optionally, the step of calculating a loss function for the target segmentation model based on the target segmentation predicted value, the target segmentation true value, and the labeling precision of the target segmentation true value of each training sample includes: calculating a loss function of each training sample based on the target segmentation predicted value and the target segmentation true value of that training sample; and calculating the loss function for the target segmentation model based on the loss function of each training sample and its degree of contribution to the loss function for the target segmentation model, wherein the higher the labeling precision of the target segmentation true value of a training sample, the greater the contribution of its loss function to the loss function for the target segmentation model.
Optionally, a corresponding weight value is set for the loss function of each training sample according to the labeling precision of the target segmentation true value of that training sample, the contribution degree being characterized by the weight value: the higher the labeling precision of the target segmentation true value of a training sample, the higher the weight value of its loss function. In this case, the step of calculating the loss function for the target segmentation model based on the loss function of each training sample and its degree of contribution includes: calculating the loss function for the target segmentation model based on the loss function of each training sample and its weight value.
Optionally, the training samples are divided into N levels according to the labeling precision, and weight values of loss functions of the training samples in the same level are the same, where N is an integer greater than 0.
Optionally, the plurality of training samples satisfies the following condition: the higher the labeling precision of a level, the greater the number of training samples it contains.
Optionally, the plurality of training samples comprises: training samples whose labeling is completed using a machine learning model.
Optionally, the loss function L for the target segmentation model is as follows:

L = \sum_{j=1}^{N} d_j \sum_{i=1}^{n_j} f(\theta(T_i), y_i)    (1)

where T_i denotes the i-th training sample in the j-th level, θ(T_i) denotes the target segmentation predicted value of T_i, y_i denotes the target segmentation true value of T_i, f(·) denotes a loss function, n_j denotes the number of training samples in the j-th level, and d_j denotes the weight value of the loss function of the training samples in the j-th level.
According to a second aspect of the embodiments of the present disclosure, there is provided a target segmentation method, including: obtaining a sample to be predicted; and inputting the sample to be predicted into a target segmentation model trained by executing the training method described above to obtain a predicted target segmentation result.
According to a third aspect of the embodiments of the present disclosure, there is provided a training apparatus of a target segmentation model, including: a training sample acquisition unit configured to acquire a plurality of training samples, wherein each training sample comprises an image annotated with a target segmentation true value; a predicted value obtaining unit configured to input each of the plurality of training samples into the target segmentation model to obtain a target segmentation predicted value; a loss function calculation unit configured to calculate a loss function for the target segmentation model based on the target segmentation predicted value, the target segmentation true value, and the labeling precision of the target segmentation true value of each training sample; and a training unit configured to train the target segmentation model based on the calculated loss function for the target segmentation model, wherein the labeling precision of the target segmentation true value of a training sample indicates the accuracy and fineness of the labeling of the actual target segmentation region in the training sample.
Optionally, the loss function calculation unit is configured to calculate a loss function of each training sample based on the target segmentation predicted value and the target segmentation true value of that training sample, and to calculate the loss function for the target segmentation model based on the loss function of each training sample and its degree of contribution to the loss function for the target segmentation model, wherein the higher the labeling precision of the target segmentation true value of a training sample, the greater the contribution of its loss function to the loss function for the target segmentation model.
Optionally, a corresponding weight value is set for the loss function of each training sample according to the labeling precision of the target segmentation true value of that training sample, the contribution degree being characterized by the weight value: the higher the labeling precision, the higher the weight value. In this case, the loss function calculation unit is configured to calculate the loss function for the target segmentation model based on the loss function of each training sample and its weight value.
Optionally, the training samples are divided into N levels according to the labeling precision, and weight values of loss functions of the training samples in the same level are the same, where N is an integer greater than 0.
Optionally, the plurality of training samples satisfies the following condition: the higher the labeling precision of a level, the greater the number of training samples it contains.
Optionally, the plurality of training samples comprises: training samples whose labeling is completed using a machine learning model.
Optionally, the loss function L for the target segmentation model is as follows:

L = \sum_{j=1}^{N} d_j \sum_{i=1}^{n_j} f(\theta(T_i), y_i)    (1)

where T_i denotes the i-th training sample in the j-th level, θ(T_i) denotes the target segmentation predicted value of T_i, y_i denotes the target segmentation true value of T_i, f(·) denotes a loss function, n_j denotes the number of training samples in the j-th level, and d_j denotes the weight value of the loss function of the training samples in the j-th level.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a target segmentation apparatus, including: a to-be-predicted sample acquisition unit configured to acquire a sample to be predicted; and a prediction result obtaining unit configured to input the sample to be predicted into the target segmentation model trained by the training apparatus of the target segmentation model as described above, to obtain a predicted target segmentation result.
According to a fifth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform a training method of an object segmentation model as described above and/or an object segmentation method as described above.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein instructions, when executed by at least one processor, cause the at least one processor to perform the training method of the object segmentation model as described above and/or the object segmentation method as described above.
According to a seventh aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when executed by at least one processor, implement the training method of the object segmentation model as described above and/or the object segmentation method as described above.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects. Because the labeling precision of the target segmentation true value of each training sample is taken into account when the loss function for the target segmentation model is calculated during training, the degraded model performance that results from training on low-precision samples when the labeling precision of the training data set is uneven is avoided. Moreover, by limiting the contribution of the loss functions of training samples at different labeling precision levels to the loss function for the target segmentation model, that is, by increasing the contribution of training samples with higher labeling precision and reducing the contribution of training samples with lower labeling precision, effective information can be extracted from training samples at every labeling precision level and each level can play its full role, maximizing the performance of the model algorithm. This improves the performance of the target segmentation model (for example, its prediction quality and stability) and thus the image target segmentation effect.
In addition, since the labeling precision of the target segmentation true value is fully considered when the loss function for the target segmentation model is calculated, training samples with low labeling precision need not be discarded entirely; training samples automatically labeled by a machine learning model can be used to train the target segmentation model without compromising the training effect, which alleviates the problem of an insufficient number of training samples.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 shows a flow chart of a method of training a target segmentation model according to an exemplary embodiment of the present disclosure;
FIG. 2 shows a flow chart of a target segmentation method according to an exemplary embodiment of the present disclosure;
FIG. 3 shows a block diagram of a training apparatus for a target segmentation model according to an exemplary embodiment of the present disclosure;
fig. 4 illustrates a block diagram of a target segmentation apparatus according to an exemplary embodiment of the present disclosure;
fig. 5 illustrates a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Herein, the expression "at least one of the items" covers three parallel cases: "any one of the items", "a combination of any plurality of the items", and "all of the items". For example, "including at least one of A and B" covers the following three parallel cases: (1) including A; (2) including B; (3) including A and B. Likewise, "performing at least one of step one and step two" covers the following three parallel cases: (1) performing step one; (2) performing step two; (3) performing step one and step two.
Fig. 1 shows a flowchart of a training method of a target segmentation model according to an exemplary embodiment of the present disclosure. The object segmentation model is a machine learning model for image object segmentation.
Referring to Fig. 1, in step S101, a plurality of training samples are obtained, wherein each training sample includes an image annotated with a target segmentation true value.
As an example, an image annotated with a target segmentation true value may be an image in which certain pixels are labeled as belonging to the target segmentation region.
In step S102, each of the plurality of training samples is input into the target segmentation model, and a target segmentation prediction value is obtained.
In step S103, a loss function for the target segmentation model is calculated based on the target segmentation predicted value, the target segmentation true value, and the labeling precision of the target segmentation true value of each training sample (hereinafter also referred to as the labeling precision of the training sample).
Here, the labeling precision of the target segmentation true value of a training sample indicates the accuracy and fineness of the labeling of the actual target segmentation region in the training sample.
It should be understood that the labeled target segmentation true value (whether labeled manually or automatically by a model) may contain mislabeled regions and missed regions relative to the actual target segmentation region. In the present disclosure, the accuracy of the labeling of the actual target segmentation region in a training sample can therefore be evaluated, for example, from both the mislabeling and the missed-labeling aspects. Further, for images of the same size, a higher resolution implies a higher fineness and, accordingly, a finer delineation of the edges of the target segmentation region when the target segmentation true value is labeled. In the present disclosure, the fineness of the labeling of the actual target segmentation region in a training sample can therefore be evaluated, for example, in terms of the image resolution.
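By way of a hedged illustration only (the disclosure does not prescribe a concrete metric), the accuracy aspect could be scored against a trusted reference mask by measuring mislabeling (pixels annotated as target that are background in the reference) and missed labeling (target pixels left unannotated). The minimal NumPy sketch below assumes such a reference mask is available; the function and rate names are hypothetical.

```python
import numpy as np

def annotation_accuracy(annotated_mask: np.ndarray, reference_mask: np.ndarray) -> dict:
    """Hypothetical score of an annotated binary mask against a trusted reference.

    Mislabeling : pixels annotated as target that are background in the reference.
    Missed label: target pixels in the reference that were not annotated.
    """
    annotated = annotated_mask.astype(bool)
    reference = reference_mask.astype(bool)
    mislabeled = np.logical_and(annotated, np.logical_not(reference)).sum()
    missed = np.logical_and(np.logical_not(annotated), reference).sum()
    target_area = max(int(reference.sum()), 1)  # guard against an empty reference
    return {"mislabel_rate": float(mislabeled) / target_area,
            "miss_rate": float(missed) / target_area}
```

Lower rates on both counts would correspond to a higher labeling precision in the sense used above; how the two rates are combined into a single precision score is left open here.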
As an example, a loss function of each training sample may be calculated based on the target segmentation predicted value and the target segmentation true value of that training sample, and the loss function for the target segmentation model may then be calculated based on the loss function of each training sample and its degree of contribution to the loss function for the target segmentation model, where the higher the labeling precision of a training sample, the greater the contribution of its loss function to the loss function of the target segmentation model.
As an example, a corresponding weight value may be set for the loss function of each training sample according to the labeling precision of that training sample, the contribution degree being characterized by the weight value: the higher the labeling precision of a training sample, the higher the weight value of its loss function. Accordingly, the step of calculating the loss function for the target segmentation model based on the loss function of each training sample and its degree of contribution may include: calculating the loss function for the target segmentation model based on the loss function of each training sample and its weight value.
As an example, the plurality of training samples may be divided into N levels according to labeling precision, with the weight values of the loss functions of training samples in the same level being equal, where N is an integer greater than 0. In other words, the loss functions of training samples in the same level can be uniformly assigned the same weight value, and the assigned weight value is tied to the labeling precision of the level: the higher the labeling precision, the higher the assigned weight value.
As an example, the N levels may be l_1, l_2, …, l_N, with labeling precision decreasing from left to right; that is, level l_1 contains the training sample subset with the highest labeling precision and level l_N contains the training sample subset with the lowest labeling precision. The numbers of training samples contained in the levels are n_1, n_2, …, n_N in turn, and the total number of training samples is m = n_1 + n_2 + … + n_N. The weight values of the loss functions of the training samples at the different labeling precision levels are denoted d_1, d_2, …, d_N. As an example, 1.0 >= d_1 > d_2 > … > d_N > 0.0 may be satisfied. As another example, 1.0 >= d_1 > d_2 > … > d_N >= 0.5 may be satisfied. As another example, when N = 3, d_1 may be 1.0, d_2 may be 0.8, and d_3 may be 0.5.
As an example, a labeling precision range corresponding to each level may be preset, so that training samples whose labeling precision falls within the range are classified under that level. As another example, the plurality of training samples may be sorted from high to low by labeling precision and divided into levels according to the sorted order.
Further, as an example, the plurality of training samples may satisfy the following condition: the higher the labeling precision of a level, the greater the number of training samples it contains, that is, n_1 >= n_2 >= … >= n_N.
As an example, the training samples used for training the target segmentation model may be screened out of a larger set of candidate training samples based on their labeling precision. For example, the candidate training samples may be divided into N levels according to labeling precision, and some training samples may then be deleted based on the labeling precision of each level to adjust the per-level counts, so that the higher the labeling precision of a level, the greater the number of training samples it contains; the remaining training samples constitute the plurality of training samples, as the sketch following this paragraph illustrates.
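As a hedged sketch of this leveling and screening, the snippet below assigns hypothetical (image, mask, labeling_precision) records to levels using assumed precision thresholds, then randomly drops samples so that n_1 >= n_2 >= … >= n_N holds. The thresholds, record layout, and function names are illustrative assumptions, not details fixed by the disclosure.

```python
import random

def split_into_levels(samples, boundaries=(0.9, 0.7)):
    """Assign each (image, mask, labeling_precision) record to a level.

    boundaries are assumed precision thresholds; level 0 is the most precise,
    giving N = len(boundaries) + 1 levels in total."""
    levels = [[] for _ in range(len(boundaries) + 1)]
    for sample in samples:
        precision = sample[2]
        level = sum(precision < b for b in boundaries)  # count thresholds above it
        levels[level].append(sample)
    return levels

def enforce_monotone_counts(levels, seed=0):
    """Randomly drop samples so each level is no larger than the previous one."""
    rng = random.Random(seed)
    capped, cap = [], None
    for level in levels:
        if cap is not None and len(level) > cap:
            level = rng.sample(level, cap)
        capped.append(level)
        cap = len(level)
    return capped
```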
As an example, the plurality of training samples may include training samples whose labeling was completed using a machine learning model, for example a machine learning model dedicated to labeling samples. According to the exemplary embodiments of the present disclosure, training samples automatically labeled by a machine learning model can be used to train the target segmentation model, and the contribution of each training sample to model training is determined by its labeling precision, so that the problem of an insufficient number of training samples is alleviated while the performance of the trained model is preserved.
As an example, the loss function L for the target segmentation model may be as shown in equation (1):

L = \sum_{j=1}^{N} d_j \sum_{i=1}^{n_j} f(\theta(T_i), y_i)    (1)

where T_i denotes the i-th training sample in the j-th level, θ(T_i) denotes the target segmentation predicted value of T_i, y_i denotes the target segmentation true value of T_i, f(·) denotes a loss function, n_j denotes the number of training samples in the j-th level, and d_j denotes the weight value of the loss function of the training samples in the j-th level.
As an example, the loss function f () may be various types of loss functions suitable for the target segmentation task. For example, it may be a two-class cross entropy loss function.
It should be appreciated that the target segmentation model is a machine learning model employing a suitable machine learning algorithm; for example, a deep neural network may serve as the machine learning algorithm of the target segmentation model. The target segmentation model may be, for example, a machine learning model for salient object segmentation in images, but other types of target segmentation models are also possible.
In step S104, the target segmentation model is trained based on the calculated loss function for the target segmentation model.
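Tying steps S101 to S104 together, the following hedged sketch performs one training step with per-sample weights d_j carried alongside a mixed-level batch, which is an equivalent way to apply equation (1) in mini-batch training. The tiny convolutional model, optimizer choice, and learning rate are placeholders of this sketch, not the architecture of the disclosure.

```python
import torch
import torch.nn.functional as F
from torch import nn, optim

# Hypothetical stand-in for the target segmentation model; the disclosure does
# not fix an architecture, so any network producing per-pixel probabilities fits.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
optimizer = optim.Adam(model.parameters(), lr=1e-4)  # placeholder choice

def training_step(images, true_masks, sample_weights):
    """One pass over steps S102-S104 for a mixed-level batch.

    sample_weights holds the level weight d_j of each sample, so the batch
    loss matches the corresponding terms of equation (1)."""
    optimizer.zero_grad()
    preds = model(images)                          # S102: predicted values
    per_pixel = F.binary_cross_entropy(preds, true_masks, reduction="none")
    per_sample = per_pixel.flatten(1).mean(dim=1)  # f(theta(T_i), y_i)
    loss = (sample_weights * per_sample).sum()     # S103: precision-weighted loss
    loss.backward()                                # S104: update the model
    optimizer.step()
    return loss.item()

# Toy batch: two high-precision samples (d = 1.0) and one low-precision (d = 0.5).
images = torch.rand(3, 3, 64, 64)
masks = torch.randint(0, 2, (3, 1, 64, 64)).float()
print(training_step(images, masks, torch.tensor([1.0, 1.0, 0.5])))
```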
Fig. 2 illustrates a flowchart of a target segmentation method according to an exemplary embodiment of the present disclosure.
Referring to fig. 2, in step S201, a sample to be predicted is acquired.
In step S202, the sample to be predicted is input to the target segmentation model trained by executing the training method according to the above exemplary embodiment, and a predicted target segmentation result is obtained.
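As a minimal sketch of steps S201 and S202, assuming a model trained as above and a 0.5 binarization threshold (both choices of this example, not requirements of the disclosure), inference could look like:

```python
import torch

@torch.no_grad()
def segment(model, image, threshold=0.5):
    """Steps S201-S202: predict per-pixel probabilities for one sample and
    binarize them into a target segmentation mask (threshold is an assumption)."""
    model.eval()
    probs = model(image.unsqueeze(0))      # add the batch dimension
    return (probs.squeeze(0) > threshold)  # boolean segmentation result

# Toy usage with the sketch model from the training example:
# mask = segment(model, torch.rand(3, 64, 64))
```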
Fig. 3 illustrates a block diagram of a training apparatus of a target segmentation model according to an exemplary embodiment of the present disclosure.
As shown in fig. 3, the training apparatus 10 of the target segmentation model according to the exemplary embodiment of the present disclosure includes: a training sample acquisition unit 101, a predicted value acquisition unit 102, a loss function calculation unit 103, and a training unit 104.
Specifically, the training sample acquisition unit 101 is configured to acquire a plurality of training samples, wherein each training sample comprises an image annotated with a target segmentation true value.
The predicted value obtaining unit 102 is configured to input each of the plurality of training samples into the target segmentation model, and obtain a target segmentation predicted value.
The loss function calculation unit 103 is configured to calculate a loss function for the target segmentation model based on the target segmentation predicted value, the target segmentation true value, and the labeling precision of the target segmentation true value of each training sample.
The training unit 104 is configured to train the target segmentation model based on the calculated loss function for the target segmentation model. Here, the labeling precision of the target segmentation true value of a training sample indicates the accuracy and fineness of the labeling of the actual target segmentation region in the training sample.
As an example, the loss function calculation unit 103 may be configured to calculate a loss function of each training sample based on the target segmentation predicted value and the target segmentation true value of that training sample, and to calculate the loss function for the target segmentation model based on the loss function of each training sample and its degree of contribution to the loss function for the target segmentation model, wherein the higher the labeling precision of the target segmentation true value of a training sample, the greater the contribution of its loss function to the loss function for the target segmentation model.
Optionally, a corresponding weight value may be set for the loss function of each training sample according to the labeling precision of the target segmentation true value of that training sample, the contribution degree being characterized by the weight value: the higher the labeling precision, the higher the weight value. In this case, the loss function calculation unit 103 may be configured to calculate the loss function for the target segmentation model based on the loss function of each training sample and its weight value.
Optionally, the training samples may be divided into N levels according to the labeling precision, and weight values of loss functions of training samples in the same level are the same, where N is an integer greater than 0.
Optionally, the plurality of training samples may satisfy the following condition: the higher the labeling precision of a level, the greater the number of training samples it contains.
Optionally, the plurality of training samples may include: training samples whose labeling is completed using a machine learning model.
Optionally, the loss function L for the target segmentation model may be as shown in equation (1).
Fig. 4 illustrates a block diagram of a target segmentation apparatus according to an exemplary embodiment of the present disclosure.
As shown in fig. 4, the target segmentation apparatus 20 according to an exemplary embodiment of the present disclosure includes: a to-be-predicted sample acquisition unit 201 and a prediction result acquisition unit 202.
Specifically, the to-be-predicted sample acquisition unit 201 is configured to acquire a to-be-predicted sample.
The prediction result obtaining unit 202 is configured to input the sample to be predicted to the target segmentation model that is trained by the training apparatus 10 of the target segmentation model as described in the above exemplary embodiment, and obtain a predicted target segmentation result.
With regard to the apparatus in the above-described embodiment, the specific manner in which the respective units perform operations has been described in detail in the embodiment related to the method, and will not be elaborated upon here.
Furthermore, it should be understood that the respective units in the training apparatus 10 of the target segmentation model and in the target segmentation apparatus 20 according to the exemplary embodiments of the present disclosure may be implemented as hardware components and/or software components. For example, depending on the processing each unit performs, those skilled in the art may implement the units using field programmable gate arrays (FPGAs) or application specific integrated circuits (ASICs).
Fig. 5 illustrates a block diagram of an electronic device according to an exemplary embodiment of the present disclosure. Referring to fig. 5, the electronic device 30 includes: at least one memory 301 and at least one processor 302, the at least one memory 301 having stored therein a set of computer-executable instructions that, when executed by the at least one processor 302, perform a method of training a target segmentation model as described in the above exemplary embodiments and/or a method of target segmentation as described in the above exemplary embodiments.
By way of example, the electronic device 30 may be a PC, a tablet device, a personal digital assistant, a smartphone, or another device capable of executing the above set of instructions. The electronic device 30 need not be a single electronic device; it can be any assembly of devices or circuits capable of executing the above instructions (or instruction set) individually or jointly. The electronic device 30 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (e.g., via wireless transmission).
In the electronic device 30, the processor 302 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a special purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processor 302 may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
The processor 302 may execute instructions or code stored in the memory 301, wherein the memory 301 may also store data. The instructions and data may also be transmitted or received over a network via a network interface device, which may employ any known transmission protocol.
The memory 301 may be integrated with the processor 302, for example, by having RAM or flash memory disposed within an integrated circuit microprocessor or the like. Further, memory 301 may comprise a stand-alone device, such as an external disk drive, storage array, or any other storage device usable by a database system. The memory 301 and the processor 302 may be operatively coupled or may communicate with each other, e.g., through I/O ports, network connections, etc., such that the processor 302 is able to read files stored in the memory.
In addition, the electronic device 30 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 30 may be connected to each other via a bus and/or a network.
According to an exemplary embodiment of the present disclosure, a computer-readable storage medium storing instructions may also be provided, wherein the instructions, when executed by at least one processor, cause the at least one processor to perform the training method of the target segmentation model as described in the above exemplary embodiments and/or the target segmentation method as described in the above exemplary embodiments. Examples of the computer-readable storage medium here include: read-only memory (ROM), random-access programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disc storage, hard disk drive (HDD), solid-state drive (SSD), card-type memory (such as a multimedia card, a Secure Digital (SD) card, or an eXtreme Digital (xD) card), magnetic tape, a floppy disk, a magneto-optical data storage device, an optical data storage device, a hard disk, a solid-state disk, and any other device configured to store a computer program and any associated data, data files, and data structures in a non-transitory manner and to provide them to a processor or computer so that the processor or computer can execute the program. The computer program in the computer-readable storage medium described above can run in an environment deployed in computer equipment such as a client, a host, a proxy device, or a server; in one example, the computer program and any associated data, data files, and data structures are distributed across networked computer systems so that they are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
According to an exemplary embodiment of the present disclosure, a computer program product may also be provided, in which instructions are executable by at least one processor to perform a training method of an object segmentation model as described in the above exemplary embodiment and/or an object segmentation method as described in the above exemplary embodiment.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for training a target segmentation model, comprising:
obtaining a plurality of training samples, wherein each training sample comprises an image annotated with a target segmentation true value;
inputting each training sample in the plurality of training samples into the target segmentation model to obtain a target segmentation predicted value;
calculating a loss function for the target segmentation model based on the target segmentation predicted value, the target segmentation true value, and the labeling precision of the target segmentation true value of each training sample; and
training the target segmentation model based on the calculated loss function for the target segmentation model,
wherein the labeling precision of the target segmentation true value of a training sample indicates the accuracy and fineness of the labeling of the actual target segmentation region in the training sample.
2. The method of claim 1, wherein the step of calculating the loss function for the target segmentation model based on the predicted value of the target segmentation, the true value of the target segmentation, and the labeling precision of the true value of the target segmentation of each training sample comprises:
calculating a loss function of each training sample based on the target segmentation predicted value and the target segmentation true value of each training sample;
calculating a loss function for the target segmentation model based on the loss function of each training sample and its degree of contribution to the loss function for the target segmentation model;
wherein the higher the labeling precision of the target segmentation true value of a training sample, the greater the contribution of the loss function of the training sample to the loss function for the target segmentation model.
3. The method according to claim 2, wherein the loss function of each training sample is set with a corresponding weight value according to the labeling precision of the target segmentation true value of the training sample, wherein the contribution degree is characterized by the weight value, and the higher the labeling precision of the target segmentation true value of the training sample is, the higher the weight value of the loss function of the training sample is;
wherein the step of calculating a loss function for the target segmentation model based on the loss function of each training sample and its degree of contribution to the loss function for the target segmentation model comprises: calculating a loss function for the target segmentation model based on the loss function and its weight value for each training sample.
4. The method of claim 3, wherein the plurality of training samples are divided into N levels according to the labeling precision, and weight values of loss functions of training samples in the same level are the same, wherein N is an integer greater than 0.
5. An object segmentation method, comprising:
obtaining a sample to be predicted;
inputting the sample to be predicted into a target segmentation model trained by executing the method of any one of claims 1 to 4 to obtain a predicted target segmentation result.
6. An apparatus for training an object segmentation model, comprising:
a training sample acquisition unit configured to acquire a plurality of training samples, wherein each training sample comprises an image annotated with a target segmentation true value;
a predicted value obtaining unit configured to input each of the plurality of training samples into the target segmentation model to obtain a target segmentation predicted value;
a loss function calculation unit configured to calculate a loss function for the target segmentation model based on the target segmentation predicted value, the target segmentation true value, and the labeling precision of the target segmentation true value of each training sample; and
a training unit configured to train the target segmentation model based on the calculated loss function for the target segmentation model;
wherein the labeling precision of the target segmentation true value of a training sample indicates the accuracy and fineness of the labeling of the actual target segmentation region in the training sample.
7. An object segmentation apparatus, characterized by comprising:
a to-be-predicted sample acquisition unit configured to acquire a to-be-predicted sample;
a prediction result obtaining unit configured to input the sample to be predicted to a target segmentation model trained by the apparatus according to claim 6, and obtain a predicted target segmentation result.
8. An electronic device, comprising:
at least one processor;
at least one memory storing computer-executable instructions,
wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform a training method of an object segmentation model according to any one of claims 1 to 4 and/or an object segmentation method according to claim 5.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by at least one processor, cause the at least one processor to perform a method of training an object segmentation model according to any one of claims 1 to 4 and/or a method of object segmentation according to claim 5.
10. A computer program product comprising computer instructions, characterized in that the computer instructions, when executed by at least one processor, implement a training method of an object segmentation model according to any one of claims 1 to 4 and/or an object segmentation method according to claim 5.
CN202110540043.9A 2021-05-18 2021-05-18 Training method of target segmentation model, target segmentation method and device Pending CN113223017A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110540043.9A CN113223017A (en) 2021-05-18 2021-05-18 Training method of target segmentation model, target segmentation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110540043.9A CN113223017A (en) 2021-05-18 2021-05-18 Training method of target segmentation model, target segmentation method and device

Publications (1)

Publication Number Publication Date
CN113223017A true CN113223017A (en) 2021-08-06

Family

ID=77092647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110540043.9A Pending CN113223017A (en) 2021-05-18 2021-05-18 Training method of target segmentation model, target segmentation method and device

Country Status (1)

Country Link
CN (1) CN113223017A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435545A (en) * 2021-08-14 2021-09-24 北京达佳互联信息技术有限公司 Training method and device of image processing model

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308701A (en) * 2018-08-31 2019-02-05 南京理工大学 The SD-OCT image GA lesion segmentation method of depth cascade model
CN109410185A (en) * 2018-10-10 2019-03-01 腾讯科技(深圳)有限公司 A kind of image partition method, device and storage medium
CN109816111A (en) * 2019-01-29 2019-05-28 北京金山数字娱乐科技有限公司 Reading understands model training method and device
WO2019100844A1 (en) * 2017-11-22 2019-05-31 阿里巴巴集团控股有限公司 Machine learning model training method and device, and electronic device
CN111723856A (en) * 2020-06-11 2020-09-29 广东浪潮大数据研究有限公司 Image data processing method, device and equipment and readable storage medium
WO2020215985A1 (en) * 2019-04-22 2020-10-29 腾讯科技(深圳)有限公司 Medical image segmentation method and device, electronic device and storage medium
CN112528862A (en) * 2020-12-10 2021-03-19 西安电子科技大学 Remote sensing image target detection method based on improved cross entropy loss function


Similar Documents

Publication Publication Date Title
US11593458B2 (en) System for time-efficient assignment of data to ontological classes
US20200356901A1 (en) Target variable distribution-based acceptance of machine learning test data sets
US10671656B2 (en) Method for recommending text content based on concern, and computer device
Bolón-Canedo et al. Feature selection for high-dimensional data
US20180247405A1 (en) Automatic detection and semantic description of lesions using a convolutional neural network
AU2016225947B2 (en) System and method for multimedia document summarization
Pandey et al. Towards understanding human similarity perception in the analysis of large sets of scatter plots
Hou et al. Dataset of segmented nuclei in hematoxylin and eosin stained histopathology images of ten cancer types
US10521567B2 (en) Digital image processing for element removal and/or replacement
US10339642B2 (en) Digital image processing through use of an image repository
CN110633421A (en) Feature extraction, recommendation, and prediction methods, devices, media, and apparatuses
US11687839B2 (en) System and method for generating and optimizing artificial intelligence models
CN114003758B (en) Training method and device of image retrieval model and retrieval method and device
US20230336532A1 (en) Privacy Preserving Document Analysis
CN104516635A (en) Content display management
Popovici et al. Image-based surrogate biomarkers for molecular subtypes of colorectal cancer
US11709885B2 (en) Determining fine-grain visual style similarities for digital images by extracting style embeddings disentangled from image content
Khadangi et al. EM-stellar: benchmarking deep learning for electron microscopy image segmentation
Huang et al. Learning natural colors for image recoloring
CN104937540A (en) Acquiring identification of application lifecycle management entity associated with similar code
CN114565768A (en) Image segmentation method and device
CN113223017A (en) Training method of target segmentation model, target segmentation method and device
CN113656797A (en) Behavior feature extraction method and behavior feature extraction device
CN107430633A (en) The representative content through related optimization being associated to data-storage system
CN114937072A (en) Image processing method and device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination