CN111160411B - Classification model training method, image processing method, device, medium and equipment


Info

Publication number: CN111160411B
Application number: CN201911268187.2A
Authority: CN (China)
Prior art keywords: pixel point, training, loss value, image, target
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other versions: CN111160411A
Other languages: Chinese (zh)
Inventors: 顾文剑, 崔朝辉, 赵立军, 张霞
Current and original assignee: Neusoft Corp
Application filed by Neusoft Corp; priority to CN201911268187.2A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate

Abstract

The disclosure relates to a classification model training method, an image processing method, a device, a medium and equipment, so as to solve the problem of low model recall rate in the related art. The method comprises the following steps: acquiring multiple sets of training data, wherein each set comprises a historical image, first pixel points labeled as belonging to a target category and second pixel points not labeled as belonging to the target category; and performing iterative training on an image classification model according to the multiple sets of training data to obtain a trained image classification model. After each training, if the stop-training condition is not met, the image classification model is updated according to the target loss value of the image classification model in that training, and the next training is performed with the updated model. The target loss value is determined according to the loss value of each first pixel point in the historical image used in that training, and when the initial loss value of a first pixel point is greater than the loss threshold, the loss value of that first pixel point is greater than its initial loss value.

Description

Classification model training method, image processing method, device, medium and equipment
Technical Field
The disclosure relates to the technical field of computers, and in particular to a classification model training method, an image processing method, a device, a medium and equipment.
Background
When classifying images, pixels belonging to preset categories in the images are manually labeled, and model training is performed based on the manually labeled content to obtain a trained model. During manual labeling, labels may be missed, so that pixels which should be labeled as belonging to a preset category are treated as pixels of a non-preset category during training, and the recall rate of the trained model is therefore low.
Disclosure of Invention
The present disclosure aims to provide a classification model training method, an image processing method, a device, a medium and equipment, so as to improve the recall rate of an image classification model.
To achieve the above object, according to a first aspect of the present disclosure, there is provided an image classification model training method, the method comprising:
acquiring multiple sets of training data, wherein each set of training data comprises a historical image, a first pixel marked as belonging to a target category in the historical image and a second pixel not marked as belonging to the target category;
Performing iterative training on the image classification model according to the multiple groups of training data until the training stopping condition is met, so as to obtain a trained image classification model;
in each training process, inputting a historical image of a group of training data into an image classification model used in the current training, updating the image classification model according to a target loss value of the image classification model used in the current training if the training stopping condition is not met after each training, and performing the next training by utilizing the updated image classification model; the target loss value is determined according to the loss value of each first pixel point in the historical image used in the training, and when the initial loss value of the first pixel point is larger than a loss threshold value, the loss value of the first pixel point is larger than the initial loss value of the first pixel point.
Optionally, the loss value of each first pixel point is determined by:
and determining the product of the first weight coefficient corresponding to the first pixel point and the initial loss value of the first pixel point as the loss value of the first pixel point, wherein the first weight coefficient corresponding to the first pixel point and the initial loss value of the first pixel point are in positive correlation.
Optionally, after each training is finished, the image classification model outputs the probability that each pixel point in the historical image used in the training belongs to the target category; and
the first weight coefficient corresponding to the first pixel point is a difference value between the inverse of the probability that the first pixel point belongs to the target category and 1.
Optionally, the initial loss value of the first pixel point i is -log(P_i), where P_i is the probability that the first pixel point i belongs to the target category.
Optionally, the target loss value is determined according to the loss value of each second pixel point in the history image used in the training, and the target loss value is the sum of the loss values of each pixel point in the history image used in the training; and
the loss value of each second pixel point is determined by the following method:
and determining the product of a preset second weight coefficient and the initial loss value of the second pixel point as the loss value of the second pixel point, wherein the preset second weight coefficient lies in the interval (0, 1).
Optionally, after each training is finished, the image classification model outputs the probability that each pixel point in the historical image used in the training belongs to the target category; and
The initial loss value of the second pixel point j is -log(1 - Q_j), where Q_j is the probability that the second pixel point j belongs to the target category.
Optionally, the history images are all medical images or all object images;
if all the history images are medical images, pixels corresponding to the change of body tissues in the medical images belong to the target category;
and if all the historical images are object images, pixels corresponding to object breakage in the object images belong to the target category.
According to a second aspect of the present disclosure, there is provided an image processing method, the method comprising:
acquiring an image to be processed;
inputting the image to be processed into a target image classification model to obtain a target output result of the target image classification model, wherein the target image classification model is obtained by training according to the image classification model training method in the first aspect of the disclosure, and the target output result is used for indicating whether each pixel point in the image to be processed belongs to the target category;
and displaying the target output result, wherein the pixel points belonging to the target category and the pixel points not belonging to the target category have different display modes.
According to a third aspect of the present disclosure, there is provided an image classification model training apparatus, the apparatus comprising:
the first acquisition module is used for acquiring a plurality of groups of training data, wherein each group of training data comprises a historical image, a first pixel point marked as belonging to a target category in the historical image and a second pixel point not marked as belonging to the target category;
the training module is used for carrying out iterative training on the image classification model according to the plurality of groups of training data until the training stopping condition is met so as to obtain a trained image classification model;
in each training process, inputting a historical image of a group of training data into an image classification model used in the current training, updating the image classification model according to a target loss value of the image classification model used in the current training if the training stopping condition is not met after each training, and performing the next training by utilizing the updated image classification model; the target loss value is determined according to the loss value of each first pixel point in the historical image used in the training, and when the initial loss value of the first pixel point is larger than a loss threshold value, the loss value of the first pixel point is larger than the initial loss value of the first pixel point.
Optionally, the loss value of each first pixel point is determined by:
and determining the product of the first weight coefficient corresponding to the first pixel point and the initial loss value of the first pixel point as the loss value of the first pixel point, wherein the first weight coefficient corresponding to the first pixel point and the initial loss value of the first pixel point are in positive correlation.
Optionally, after each training is finished, the image classification model outputs the probability that each pixel point in the historical image used in the training belongs to the target category; and
the first weight coefficient corresponding to the first pixel point is a difference value between the inverse of the probability that the first pixel point belongs to the target category and 1.
Optionally, the initial loss value of the first pixel point i is -log(P_i), where P_i is the probability that the first pixel point i belongs to the target category.
Optionally, the target loss value is determined according to the loss value of each second pixel point in the history image used in the training, and the target loss value is the sum of the loss values of each pixel point in the history image used in the training; and
the loss value of each second pixel point is determined by the following method:
And determining the product of a preset second weight coefficient and the initial loss value of the second pixel point as the loss value of the second pixel point, wherein the preset second weight coefficient lies in the interval (0, 1).
Optionally, after each training is finished, the image classification model outputs the probability that each pixel point in the historical image used in the training belongs to the target category; and
the initial loss value of the second pixel point j is -log(1 - Q_j), where Q_j is the probability that the second pixel point j belongs to the target category.
Optionally, the history images are all medical images or all object images;
if all the history images are medical images, pixels corresponding to the change of body tissues in the medical images belong to the target category;
and if all the historical images are object images, pixels corresponding to object breakage in the object images belong to the target category.
According to a fourth aspect of the present disclosure, there is provided an image processing apparatus including:
the second acquisition module is used for acquiring the image to be processed;
the image processing module is used for inputting the image to be processed into a target image classification model to obtain a target output result of the target image classification model, wherein the target image classification model is obtained by training according to the image classification model training method in the first aspect of the disclosure, and the target output result is used for indicating whether each pixel point in the image to be processed belongs to the target category;
And the display module is used for displaying the target output result, wherein the pixel points belonging to the target category and the pixel points not belonging to the target category have different display modes.
According to a fifth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method according to the first aspect of the present disclosure or which when executed by a processor performs the steps of the method according to the second aspect of the present disclosure.
According to a sixth aspect of the present disclosure, there is provided an electronic device comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method according to the first aspect of the disclosure, or to implement the steps of the method according to the second aspect of the disclosure.
According to the above technical scheme, multiple sets of training data are acquired, and the image classification model is iteratively trained on the multiple sets of training data until the stop-training condition is met, so as to obtain a trained image classification model. Each set of training data comprises a historical image, first pixel points labeled as belonging to a target category in the historical image, and second pixel points not labeled as belonging to the target category. In each training, a historical image of one set of training data is input into the image classification model used in that training; after each training, if the stop-training condition is not met, the image classification model is updated according to the target loss value of the image classification model used in that training, and the next training is performed with the updated model. The target loss value is determined according to the loss value of each first pixel point in the historical image used in that training, and when the initial loss value of a first pixel point is greater than a loss threshold, the loss value of that first pixel point is greater than its initial loss value. That is, when the initial loss value of a first pixel point exceeds the loss threshold, the training method provided by the present disclosure introduces a larger loss value for such a first pixel point and uses it in subsequent model training. This helps subsequent training to more quickly classify the pixel points belonging to the target category correctly, so that more positive samples are correctly predicted as positive samples, thereby improving the recall rate and the overall effect of the image classification model.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate the disclosure and together with the description serve to explain, but do not limit the disclosure. In the drawings:
FIG. 1 is a flow chart of an image classification model training method provided in accordance with an embodiment of the present disclosure;
FIG. 2 is a flow chart of an image processing method provided in accordance with one embodiment of the present disclosure;
FIG. 3 is a block diagram of an image classification model training apparatus provided in accordance with an embodiment of the present disclosure;
FIG. 4 is a block diagram of an image processing apparatus provided according to one embodiment of the present disclosure;
FIG. 5 is a block diagram of an electronic device, shown in accordance with an exemplary embodiment;
FIG. 6 is a block diagram of an electronic device, shown in accordance with an exemplary embodiment;
FIG. 7 is a block diagram of an electronic device, shown in accordance with an exemplary embodiment;
fig. 8 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the disclosure, are not intended to limit the disclosure.
Before describing the solution of the present disclosure, a brief description of model training in the prior art is given first.
In the prior art, training a model generally requires multiple rounds of training until the model meets the stop-training condition; this is an iterative training process. After each round, if the model does not meet the stop-training condition, the loss value of the current model is calculated, the parameters of the model are updated with that loss value, and the updated model is used for the next round, and so on until the model meets the stop-training condition.
In the prior art, the Loss value during image processing can be determined with reference to the following formula:

Loss = Σ(m=1..M) Σ(n=1..N) C(A_mn, R_mn)

where M and N are the numbers of rows and columns of the pixel points forming the image, and A_mn is the label value of pixel point (m, n), which takes one of two values, 0 or 1. When A_mn is 1, pixel point (m, n) belongs to the preset category (i.e., is labeled as belonging to the preset category), corresponding to a "positive sample" in training; when A_mn is 0, pixel point (m, n) is not in the preset category (i.e., is not labeled as belonging to the preset category), corresponding to a "negative sample" in training. R_mn is the probability, output by the model during the current training, that pixel point (m, n) belongs to the preset category, and C(A_mn, R_mn) is the loss value of pixel point (m, n). The model is used to distinguish whether a pixel point belongs to the preset category.

When the label value of pixel point (m, n) is 1, the loss value is calculated as:

C(A_mn, R_mn) = -log(R_mn)

When the label value of pixel point (m, n) is 0, the loss value is calculated as:

C(A_mn, R_mn) = -log(1 - R_mn)
The loss value reflects the difference between the output of the current model and the labeled content: the smaller the loss value, the closer the output of the current model is to the labeled content, i.e., to the "correct result"; conversely, the larger the loss value, the larger the difference between the output of the model and the labeled content, i.e., the farther from the "correct result".
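The per-pixel loss above is the standard binary cross-entropy. As a hedged illustration only (this code is not from the disclosure; function and variable names are ours), it can be computed as follows:

```python
import numpy as np

def pixel_loss(label: np.ndarray, prob: np.ndarray, eps: float = 1e-7) -> np.ndarray:
    """Per-pixel loss C(A_mn, R_mn): -log(R) for label 1, -log(1 - R) for label 0.

    label: A_mn in {0, 1}, shape (M, N)
    prob:  R_mn, the model's per-pixel probability of the preset category, shape (M, N)
    """
    prob = np.clip(prob, eps, 1.0 - eps)  # keep log() finite
    return -(label * np.log(prob) + (1 - label) * np.log(1.0 - prob))

def total_loss(label: np.ndarray, prob: np.ndarray) -> float:
    # Loss: sum of C(A_mn, R_mn) over all pixel points of the image
    return float(pixel_loss(label, prob).sum())
```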
In addition, in the prior art there are many evaluation indexes for evaluating the effect of a model, and recall is a very common one. Recall is a measure of coverage: it measures how many positive samples are correctly classified as positive samples. Its calculation formula is shown below:
recall=TP/(TP+FN)
where TP (True Positive) is the number of positive samples predicted as positive samples, and FN (False Negative) is the number of positive samples predicted as negative samples. During evaluation, more positive samples should be predicted as positive samples, i.e., a high recall should be maintained.
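For concreteness, a minimal sketch (our own helper, with illustrative names) of recall over binary per-pixel labels and predictions:

```python
import numpy as np

def recall(label: np.ndarray, pred: np.ndarray) -> float:
    """recall = TP / (TP + FN) for binary labels and predictions."""
    tp = int(np.sum((label == 1) & (pred == 1)))  # positives predicted positive
    fn = int(np.sum((label == 1) & (pred == 0)))  # positives predicted negative
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0
```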
Fig. 1 is a flow chart of an image classification model training method provided in accordance with an embodiment of the present disclosure. It should be noted that the image classification model trained in the present disclosure is a two-class model, i.e., a binary classifier that identifies whether the pixel points in an image belong to the target category. As shown in fig. 1, the method may include the following steps.
In step 11, a plurality of sets of training data are acquired.
Each set of training data comprises a historical image, a first pixel marked as belonging to the target category in the historical image and a second pixel not marked as belonging to the target category.
Before training the image classification model, the training data required for model training is collected first, in multiple sets. As described above, each set of training data includes a history image, the first pixel points in the history image that are labeled as belonging to the target category, and the second pixel points that are not labeled as belonging to the target category.
The image classification model training method can be applied to change-detection scenarios; accordingly, the trained image classification model is used to detect the changed parts in an image. In the training-data collection stage, an annotator labels the historical image: the pixels that have changed in the historical image are labeled as first pixel points belonging to the target category, and the pixels not labeled as belonging to the target category naturally become second pixel points. Here, a first pixel point is a "positive sample" (belonging to the target category), and a second pixel point is a "negative sample" (not belonging to the target category).
For example, the history images may all be medical images. Correspondingly, the pixel points corresponding to body-tissue changes in the medical images belong to the target category, and when labeling the training data, the labeling personnel mark the pixel points corresponding to body-tissue changes in each medical image of the training data as belonging to the target category.
For another example, the history images may all be object images. Correspondingly, the pixel points corresponding to object damage in the object images belong to the target category, and when labeling the training data, the labeling personnel mark the pixel points corresponding to object damage in each object image of the training data as belonging to the target category.
In step 12, according to the multiple sets of training data, iterative training is performed on the image classification model until the training stopping condition is met, so as to obtain a trained image classification model.
In each training process, a history image of a group of training data is input into an image classification model used in the training. Moreover, the history images used in the two adjacent training processes may be the same history image or different history images, which is not limited in the present disclosure.
And after each training, if the stop-training condition is not met, the image classification model is updated according to the target loss value of the image classification model used in this training, and the next training is performed with the updated image classification model. For example, the stop-training condition may be that the number of trainings reaches a preset number, that the target loss value of the image classification model is smaller than a preset threshold, or the like. The stop-training condition and the updating of the model according to the loss value are consistent with the prior art and are not repeated here.
The image classification model training method provided by the disclosure is different from the prior art in the calculation of the loss value. The target loss value is determined according to the loss value of each first pixel point in the historical image used in the training, and when the initial loss value of the first pixel point is larger than the loss threshold value, the loss value of the first pixel point is larger than the initial loss value of the first pixel point. The loss value of the first pixel point mentioned here refers to the loss value of a single first pixel point.
The initial loss value of the first pixel point is the loss value obtained with the prior-art loss-value calculation described above; this scheme further processes the initial loss value of the first pixel point to obtain the final loss value of the first pixel point. In this new way of calculating the loss value, if the initial loss value of the first pixel point is greater than the loss threshold, the loss value of the first pixel point is greater than its initial loss value. That is, if the initial loss value of a first pixel point (corresponding to a "positive sample") reaches a certain degree, the error on that "positive sample" is large; in order to correct the error more forcefully in subsequent training, the loss-value calculation of the present disclosure introduces a larger loss value for the first pixel point, i.e., the initial loss value is amplified to serve as the final loss value, so that the loss value of the first pixel point is larger than its initial loss value. In this way, subsequent training can achieve correct classification of the positive samples more quickly, so that more positive samples are correctly predicted as positive samples, improving the recall rate of the model.
According to the above technical scheme, multiple sets of training data are acquired, and the image classification model is iteratively trained on the multiple sets of training data until the stop-training condition is met, so as to obtain a trained image classification model. Each set of training data comprises a historical image, first pixel points labeled as belonging to a target category in the historical image, and second pixel points not labeled as belonging to the target category. In each training, a historical image of one set of training data is input into the image classification model used in that training; after each training, if the stop-training condition is not met, the image classification model is updated according to the target loss value of the image classification model used in that training, and the next training is performed with the updated model. The target loss value is determined according to the loss value of each first pixel point in the historical image used in that training, and when the initial loss value of a first pixel point is greater than a loss threshold, the loss value of that first pixel point is greater than its initial loss value. That is, when the initial loss value of a first pixel point exceeds the loss threshold, the training method provided by the present disclosure introduces a larger loss value for such a first pixel point and uses it in subsequent model training, which helps subsequent training to more quickly classify the pixel points belonging to the target category correctly, so that more positive samples are correctly predicted as positive samples, thereby improving the recall rate and the effect of the image classification model.
In order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present disclosure, the corresponding steps and related concepts are described in detail below.
First, a method for calculating the loss value of the first pixel in the disclosure will be described in detail.
In one possible implementation, the loss value of each first pixel point may be determined by:
and determining the product of the first weight coefficient corresponding to the first pixel point and the initial loss value of the first pixel point as the loss value of the first pixel point.
The first weight coefficient corresponding to the first pixel point and the initial loss value of the first pixel point are in positive correlation.
In this embodiment, a first weight coefficient is introduced. The first weight coefficient corresponding to each first pixel point is positively correlated with the initial loss value of that first pixel point; that is, the first weight coefficients of different first pixel points may differ, and the first weight coefficient of a first pixel point grows with its initial loss value. The loss value of the first pixel point is then the product of the first weight coefficient corresponding to the first pixel point and the initial loss value of the first pixel point.
In one possible embodiment, the first weight coefficient corresponding to the first pixel is a difference between the inverse of the probability that the first pixel belongs to the target class and 1.
After each training, the image classification model outputs the probability that each pixel point in the historical image used in the training belongs to the target class, so that the probability that the first pixel point belongs to the target class can be known.
For the first pixel point i, the initial loss value is -log(P_i), where P_i is the probability that the first pixel point i belongs to the target category; this is the same as the prior-art calculation of the "positive sample" loss value given above. Accordingly, in this embodiment, the loss value Loss_i of the first pixel point i can be expressed as:

Loss_i = (1/P_i - 1) * (-log(P_i))

where 1/P_i - 1 is the first weight coefficient of the first pixel point i. As can be seen from the formula, the initial loss value of the first pixel point is determined by the probability, output by the model in this training, that the first pixel point belongs to the target category. The closer P_i is to 1, the closer the first pixel point i is to the correct classification; the closer P_i is to 0, the farther the first pixel point i is from the correct classification. Thus, when P_i falls below a certain value, the initial loss value -log(P_i) of the first pixel point is greater than the loss threshold and the first weight coefficient 1/P_i - 1 is greater than 1, so that when the initial loss value of the first pixel point is greater than the loss threshold, the loss value of the first pixel point is greater than its initial loss value. In this way, when the output of the current image classification model for the first pixel point i differs too much from the label of the first pixel point i, a larger loss value can be introduced for it.
In the above formula, the probability threshold for introducing a larger loss value for the first pixel point i is 0.5: if the current image classification model outputs P_i smaller than 0.5 for the first pixel point i, a larger loss value is introduced for it, and the smaller P_i is, the more the initial loss value is amplified. If the current image classification model outputs P_i greater than or equal to 0.5 for the first pixel point i, the initial loss value of the first pixel point is not amplified, and the larger P_i is (the closer to 1), the smaller the finally determined loss value of the first pixel point. Therefore, the loss value of each pixel point can be flexibly adjusted, improving the model training effect.
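To make the amplification concrete, here is a small numerical sketch (illustrative values only) of Loss_i = (1/P_i - 1) * (-log(P_i)) against the initial loss -log(P_i):

```python
import numpy as np

for p in (0.9, 0.5, 0.2, 0.05):
    initial = -np.log(p)      # prior-art "positive sample" loss -log(P_i)
    weight = 1.0 / p - 1.0    # first weight coefficient (1/P_i - 1)
    print(f"P_i={p:4.2f}  initial={initial:6.3f}  weight={weight:6.2f}  "
          f"Loss_i={weight * initial:7.3f}")
# P_i >= 0.5 gives weight <= 1, so the loss is not amplified;
# P_i <  0.5 gives weight >  1, and the smaller P_i, the stronger the amplification.
```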
In addition, the target loss value is determined according to the loss value of each second pixel point in the historical image used in the current training, and the target loss value is the sum of the loss values of each pixel point in the historical image used in the current training.
In one possible embodiment, the loss value of the second pixel point used in the present disclosure may be calculated in the same way as in the prior art, i.e., in the same way as the prior-art "negative sample" loss value. For the second pixel point j, the initial loss value is -log(1 - Q_j), where Q_j is the probability that the second pixel point j belongs to the target category.
In another possible embodiment, consider that in the process of labeling the historical image, the pixels belonging to the target category are labeled as first pixel points, and the pixels not labeled as belonging to the target category are automatically regarded as second pixel points. In this process, labels may be missed, so that a pixel that should have been labeled as a first pixel point is treated as a second pixel point during training; that is, the second pixel points used in training may not all be true second pixel points, and some may be unlabeled first pixel points. Accordingly, a confidence (i.e., a second weight coefficient) may be introduced on top of the existing initial loss value of the second pixel point to balance the number difference between positive and negative samples (i.e., between first pixel points and second pixel points). In this embodiment, the loss value of each second pixel point may be determined as follows:
And determining the product of the preset second weight coefficient and the initial loss value of the second pixel point as the loss value of the second pixel point.
The preset second weight coefficient lies in the interval (0, 1), and all second pixel points use the same second weight coefficient.
As described above, for the second pixel point j, the initial loss value is -log(1 - Q_j). Accordingly, in this embodiment, the loss value Loss_j of the second pixel point j can be expressed as:

Loss_j = -log(1 - Q_j) * w_j

where w_j is the preset second weight coefficient.
Therefore, in this scheme, after each training is finished, the loss value of each first pixel point in the historical image used in this training is calculated in the manner described above for first pixel points, the loss value of each second pixel point in that historical image is calculated in the manner described above for second pixel points, and finally the loss values of all pixel points in the historical image are summed to obtain the target loss value of the image classification model used in this training.
In this way, the target loss value of the image classification model is refined on both the first pixel points and the second pixel points: for a first pixel point, a larger loss value is introduced when the output of the model differs too much from the labeled content; for a second pixel point, a second weight coefficient (confidence) is introduced. This facilitates subsequent model training, yields a well-performing model more quickly, and improves the model recall rate.
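Putting the two per-pixel losses together, here is a minimal sketch of the target loss value under this embodiment (function and variable names are our own; the example value of w2 is an assumption, as the disclosure only requires it to lie in (0, 1)):

```python
import numpy as np

def target_loss(probs: np.ndarray, is_first: np.ndarray,
                w2: float = 0.5, eps: float = 1e-7) -> float:
    """Target loss value: sum of per-pixel loss values over the historical image.

    probs:    per-pixel probability of the target category, shape (M, N)
    is_first: boolean mask, True where the pixel point is a labeled first pixel point
    w2:       preset second weight coefficient, assumed to lie in (0, 1)
    """
    p = np.clip(probs, eps, 1.0 - eps)
    # First pixel points: initial loss -log(P_i) weighted by (1/P_i - 1)
    loss_first = (1.0 / p - 1.0) * (-np.log(p))
    # Second pixel points: initial loss -log(1 - Q_j) weighted by w2
    loss_second = w2 * (-np.log(1.0 - p))
    return float(np.sum(np.where(is_first, loss_first, loss_second)))
```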
Fig. 2 is a flowchart of an image processing method provided according to one embodiment of the present disclosure. As shown in fig. 2, the method may include the following steps.
In step 21, an image to be processed is acquired.
As described above, the image to be processed may be an object image, a medical image, or the like.
In step 22, the image to be processed is input to the target image classification model to obtain a target output result of the target image classification model.
The target image classification model is trained by the image classification model training method provided by any embodiment of the disclosure, and is the image classification model used to process the image to be processed. For example, if the image to be processed is an object image, the target image classification model is an image classification model trained on historical images that are all object images; if the image to be processed is a medical image, the target image classification model is an image classification model trained on historical images that are all medical images.
And the target output result is used for indicating whether each pixel point in the image to be processed belongs to the target category.
In step 23, the target output result is displayed.
Wherein, the pixel points belonging to the target category and the pixel points not belonging to the target category have different display modes.
For example, pixels belonging to the target class may be displayed in different colors to distinguish between the two. For another example, the pixel points belonging to the target category may be highlighted (for example, the pixel points belonging to the target category are displayed in a solid dot manner, and the pixel points not belonging to the target category are displayed in a hollow dot manner), so as to embody the outline of the pixel points belonging to the target category and improve the intuitiveness of information display.
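As one hedged illustration of such differentiated display (the disclosure does not prescribe a rendering library; matplotlib and the red overlay are our assumptions), target-category pixels could be overlaid in a distinct color:

```python
import numpy as np
import matplotlib.pyplot as plt

def show_result(image: np.ndarray, target_mask: np.ndarray) -> None:
    """image: (M, N) grayscale image to be processed;
    target_mask: boolean (M, N), True where the target output result indicates
    the pixel point belongs to the target category."""
    img = image.astype(float) / max(float(image.max()), 1e-7)  # scale to [0, 1]
    overlay = np.stack([img, img, img], axis=-1)
    overlay[target_mask] = [1.0, 0.0, 0.0]  # target-category pixels shown in red
    plt.imshow(overlay)
    plt.axis("off")
    plt.show()
```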
Through the technical scheme, after the image to be processed is obtained, the image to be processed is input into the target image classification model obtained through training by the image classification model training method provided by the disclosure, the target output result of the target image classification model is obtained, and the target output result is displayed. Therefore, the model training mode provided by the disclosure is utilized for training, so that the target image classification model with excellent effect is obtained and is used for image processing of the image to be processed, whether each pixel point in the image to be processed belongs to the target class is identified, and the image processing effect can be improved. In addition, after the target output result of the target image classification model is obtained, the target output result is displayed, and the pixel points belonging to the target class and the pixel points not belonging to the target class are displayed in a distinguishing mode, so that the display is more visual, and the display effect is improved.
Fig. 3 is a block diagram of an image classification model training apparatus provided in accordance with an embodiment of the present disclosure. As shown in fig. 3, the apparatus 30 may include:
a first obtaining module 31, configured to obtain a plurality of sets of training data, where each set of training data includes a history image, a first pixel marked as belonging to a target class in the history image, and a second pixel not marked as belonging to the target class;
the training module 32 is configured to iteratively train the image classification model according to the multiple sets of training data until the training stopping condition is met, so as to obtain a trained image classification model;
in each training process, inputting a historical image of a group of training data into an image classification model used in the current training, updating the image classification model according to a target loss value of the image classification model used in the current training if the training stopping condition is not met after each training, and performing the next training by utilizing the updated image classification model; the target loss value is determined according to the loss value of each first pixel point in the historical image used in the training, and when the initial loss value of the first pixel point is larger than a loss threshold value, the loss value of the first pixel point is larger than the initial loss value of the first pixel point.
Optionally, the loss value of each first pixel point is determined by:
and determining the product of the first weight coefficient corresponding to the first pixel point and the initial loss value of the first pixel point as the loss value of the first pixel point, wherein the first weight coefficient corresponding to the first pixel point and the initial loss value of the first pixel point are in positive correlation.
Optionally, after each training is finished, the image classification model outputs the probability that each pixel point in the historical image used in the training belongs to the target category; and
the first weight coefficient corresponding to the first pixel point is a difference value between the inverse of the probability that the first pixel point belongs to the target category and 1.
Optionally, the initial loss value of the first pixel point i is -log(P_i), where P_i is the probability that the first pixel point i belongs to the target category.
Optionally, the target loss value is determined according to the loss value of each second pixel point in the history image used in the training, and the target loss value is the sum of the loss values of each pixel point in the history image used in the training; and
the loss value of each second pixel point is determined by the following method:
And determining the product of a preset second weight coefficient and the initial loss value of the second pixel point as the loss value of the second pixel point, wherein the preset second weight coefficient lies in the interval (0, 1).
Optionally, after each training is finished, the image classification model outputs the probability that each pixel point in the historical image used in the training belongs to the target category; and
the initial loss value of the second pixel point j is -log(1 - Q_j), where Q_j is the probability that the second pixel point j belongs to the target category.
Optionally, the history images are all medical images or all object images;
if all the history images are medical images, pixels corresponding to the change of body tissues in the medical images belong to the target category;
and if all the historical images are object images, pixels corresponding to object breakage in the object images belong to the target category.
Fig. 4 is a block diagram of an image processing apparatus provided according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus 40 may include:
a second acquiring module 41, configured to acquire an image to be processed;
the image processing module 42 is configured to input the image to be processed into a target image classification model to obtain a target output result of the target image classification model, where the target image classification model is obtained by training according to an image classification model training method provided by any embodiment of the disclosure, and the target output result is used to indicate whether each pixel point in the image to be processed belongs to the target class;
And a display module 43, configured to display the target output result, where the pixel points belonging to the target category and the pixel points not belonging to the target category have different display modes.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
Fig. 5 is a block diagram of an electronic device, according to an example embodiment. As shown in fig. 5, the electronic device 500 may include: a processor 501, a memory 502. The electronic device 500 may also include one or more of a multimedia component 503, an input/output (I/O) interface 504, and a communication component 505.
Wherein the processor 501 is configured to control the overall operation of the electronic device 500 to perform all or part of the steps in the image classification model training method described above. The memory 502 is used to store various types of data to support operation at the electronic device 500, which may include, for example, instructions for any application or method operating on the electronic device 500, as well as application-related data, such as contact data, messages sent and received, pictures, audio, video, and so forth. The memory 502 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (Static Random Access Memory, SRAM for short), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM for short), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM for short), programmable read-only memory (Programmable Read-Only Memory, PROM for short), read-only memory (ROM for short), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 503 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signals may be further stored in the memory 502 or transmitted through the communication component 505. The audio component further comprises at least one speaker for outputting audio signals. The I/O interface 504 provides an interface between the processor 501 and other interface modules, such as a keyboard, a mouse, or buttons. These buttons may be virtual buttons or physical buttons. The communication component 505 is used for wired or wireless communication between the electronic device 500 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (Near Field Communication, NFC for short), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or the like, or a combination of one or more of them, which is not limited herein. The corresponding communication component 505 may thus comprise: a Wi-Fi module, a Bluetooth module, an NFC module, etc.
In an exemplary embodiment, the electronic device 500 may be implemented by one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated ASIC), digital signal processor (Digital Signal Processor, abbreviated DSP), digital signal processing device (Digital Signal Processing Device, abbreviated DSPD), programmable logic device (Programmable Logic Device, abbreviated PLD), field programmable gate array (Field Programmable Gate Array, abbreviated FPGA), controller, microcontroller, microprocessor, or other electronic components for performing the image classification model training method described above.
In another exemplary embodiment, a computer readable storage medium is also provided comprising program instructions which, when executed by a processor, implement the steps of the image classification model training method described above. For example, the computer readable storage medium may be the memory 502 described above including program instructions executable by the processor 501 of the electronic device 500 to perform the image classification model training method described above.
Fig. 6 is a block diagram of an electronic device, according to an example embodiment. For example, the electronic device 600 may be provided as a server. Referring to fig. 6, the electronic device 600 includes a processor 622, which may be one or more in number, and a memory 632 for storing computer programs executable by the processor 622. The computer program stored in memory 632 may include one or more modules each corresponding to a set of instructions. Further, the processor 622 may be configured to execute the computer program to perform the image classification model training method described above.
In addition, the electronic device 600 may further include a power supply component 626 and a communication component 650; the power supply component 626 may be configured to perform power management of the electronic device 600, and the communication component 650 may be configured to enable communication of the electronic device 600, e.g., wired or wireless communication. In addition, the electronic device 600 may also include an input/output (I/O) interface 658. The electronic device 600 may operate based on an operating system stored in the memory 632, such as Windows Server™, Mac OS X™, Unix™, Linux™, and the like.
In another exemplary embodiment, a computer readable storage medium is also provided comprising program instructions which, when executed by a processor, implement the steps of the image classification model training method described above. For example, the computer readable storage medium may be the memory 632 described above that includes program instructions that are executable by the processor 622 of the electronic device 600 to perform the image classification model training method described above.
In another exemplary embodiment, a computer program product is also provided, the computer program product comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described image classification model training method when executed by the programmable apparatus.
Fig. 7 is a block diagram of an electronic device, according to an example embodiment. As shown in fig. 7, the electronic device 700 may include: a processor 701, a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700 to perform all or part of the steps in the image processing method described above. The memory 702 is used to store various types of data to support operation on the electronic device 700, which may include, for example, instructions for any application or method operating on the electronic device 700, as well as application-related data, such as contact data, messages sent and received, pictures, audio, video, and so forth. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (Static Random Access Memory, SRAM for short), electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM for short), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM for short), programmable read-only memory (Programmable Read-Only Memory, PROM for short), read-only memory (ROM for short), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 703 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signals may be further stored in the memory 702 or transmitted through the communication component 705. The audio component further comprises at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse, or buttons. These buttons may be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, near field communication (Near Field Communication, NFC for short), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or the like, or a combination of one or more of them, which is not limited herein. The corresponding communication component 705 may thus comprise: a Wi-Fi module, a Bluetooth module, an NFC module, etc.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), digital signal processors (Digital Signal Processor, abbreviated as DSP), digital signal processing devices (Digital Signal Processing Device, abbreviated as DSPD), programmable logic devices (Programmable Logic Device, abbreviated as PLD), field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), controllers, microcontrollers, microprocessors, or other electronic components for performing the image processing methods described above.
In another exemplary embodiment, a computer readable storage medium is also provided, comprising program instructions which, when executed by a processor, implement the steps of the image processing method described above. For example, the computer readable storage medium may be the memory 702 including program instructions described above, which are executable by the processor 701 of the electronic device 700 to perform the image processing method described above.
Fig. 8 is a block diagram of an electronic device, according to an example embodiment. For example, the electronic device 800 may be provided as a server. Referring to fig. 8, the electronic device 800 includes a processor 822, which may be one or more in number, and a memory 832 for storing computer programs executable by the processor 822. The computer program stored in memory 832 may include one or more modules each corresponding to a set of instructions. Further, the processor 822 may be configured to execute the computer program to perform the image processing method described above.
In addition, the electronic device 800 may further include a power supply component 826 and a communication component 850; the power supply component 826 may be configured to perform power management of the electronic device 800, and the communication component 850 may be configured to enable communication of the electronic device 800, such as wired or wireless communication. In addition, the electronic device 800 may also include an input/output (I/O) interface 858. The electronic device 800 may operate based on an operating system stored in the memory 832, such as Windows Server™, Mac OS X™, Unix™, Linux™, etc.
In another exemplary embodiment, a computer readable storage medium is also provided, comprising program instructions which, when executed by a processor, implement the steps of the image processing method described above. For example, the computer readable storage medium may be the memory 832 including program instructions described above that are executable by the processor 822 of the electronic device 800 to perform the image processing method described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned image processing method when being executed by the programmable apparatus.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings. The present disclosure is not, however, limited to the specific details of those embodiments: various simple modifications may be made to the technical solutions of the present disclosure within the scope of its technical concept, and all such simple modifications fall within its protection scope.
In addition, the specific features described in the above embodiments may be combined in any suitable manner, provided there is no contradiction. To avoid unnecessary repetition, the various possible combinations are not described separately in this disclosure.
Moreover, the various embodiments of the present disclosure may be combined in any manner that does not depart from its spirit, and such combinations shall likewise be regarded as part of what this disclosure discloses.

Claims (18)

1. A method of training an image classification model, the method comprising:
acquiring multiple sets of training data, wherein each set of training data comprises a historical image, first pixel points in the historical image marked as belonging to a target category, and second pixel points not marked as belonging to the target category;
performing iterative training on the image classification model according to the multiple sets of training data until a training stop condition is met, to obtain a trained image classification model;
wherein in each round of training, a historical image from one set of training data is input into the image classification model used in the current round; after each round of training, if the training stop condition is not met, the image classification model is updated according to a target loss value of the image classification model used in that round, and the next round of training is performed with the updated image classification model; the target loss value is determined according to the loss value of each first pixel point in the historical image used in that round, and when the initial loss value of a first pixel point is greater than a loss threshold, the loss value of that first pixel point is greater than its initial loss value.
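For illustration only (not part of the claims), the iterative procedure of claim 1 can be sketched in Python. The per-channel logistic "model", the finite-difference update, and the loss-below-tolerance stop condition are all illustrative assumptions; the claim fixes none of them. The target_loss argument is the weighted loss of claims 2 to 6, a sketch of which follows claim 6 below.

```python
import numpy as np

def predict(w, image):
    """Toy per-pixel model: logistic score of a linear channel mix."""
    p = 1.0 / (1.0 + np.exp(-(image @ w)))
    return np.clip(p, 1e-7, 1.0 - 1e-7)  # keep the logs in the loss finite

def train(groups, target_loss, lr=0.05, tol=0.1, max_epochs=100):
    """Claim-1 loop: after each round, stop if the condition is met,
    otherwise update the model and run the next round."""
    w = np.zeros(3)  # stand-in for the model's trainable state
    eps = 1e-5
    for _ in range(max_epochs):
        for image, pos, neg in groups:  # one set of training data per round
            loss = target_loss(predict(w, image), pos, neg)
            if loss < tol:              # training stop condition met
                return w                # trained image classification model
            # finite-difference gradient: fine for a sketch, not for practice
            grad = np.array([
                (target_loss(predict(w + eps * e, image), pos, neg) - loss) / eps
                for e in np.eye(3)])
            w -= lr * grad              # update model, then next round
    return w
```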
2. The method of claim 1, wherein the loss value of each first pixel point is determined by:
determining the product of a first weight coefficient corresponding to the first pixel point and the initial loss value of the first pixel point as the loss value of the first pixel point, wherein the first weight coefficient is positively correlated with the initial loss value of the first pixel point.
3. The method according to claim 2, wherein after each round of training, the image classification model outputs, for each pixel point in the historical image used in that round, the probability that the pixel point belongs to the target category; and
the first weight coefficient corresponding to the first pixel point is the reciprocal of the probability that the first pixel point belongs to the target category, minus 1.
4. The method according to claim 3, wherein the initial loss value of a first pixel point i is -log(P_i), where P_i is the probability that the first pixel point i belongs to the target category.
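A quick numerical check (illustrative, not from the patent text) shows how the weight of claims 2 to 4 realizes the threshold behavior of claim 1: with weight 1/P - 1 on an initial loss of -log(P), the weighted loss exceeds the initial loss exactly when P < 0.5, that is, when the initial loss exceeds -log(0.5) ≈ 0.693, so hard pixel points are amplified and easy ones damped.

```python
import numpy as np

for p in (0.9, 0.5, 0.2):                  # easy, borderline, hard pixel
    initial = -np.log(p)                   # claim 4: -log(P_i)
    weighted = (1.0 / p - 1.0) * initial   # claims 2-3: weight is 1/P_i - 1
    print(f"P={p}: initial={initial:.3f}, weighted={weighted:.3f}")
# P=0.9: initial=0.105, weighted=0.012   (easy pixel damped)
# P=0.5: initial=0.693, weighted=0.693   (the implicit loss threshold)
# P=0.2: initial=1.609, weighted=6.438   (hard pixel amplified)
```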
5. The method of claim 1, wherein the target loss value is further determined according to the loss value of each second pixel point in the historical image used in that round of training, the target loss value being the sum of the loss values of all pixel points in that historical image; and
the loss value of each second pixel point is determined by:
determining the product of a preset second weight coefficient and the initial loss value of the second pixel point as the loss value of the second pixel point, wherein the preset second weight coefficient lies within the interval (0, 1).
6. The method of claim 5, wherein after each round of training, the image classification model outputs, for each pixel point in the historical image used in that round, the probability that the pixel point belongs to the target category; and
the initial loss value of a second pixel point j is -log(1 - Q_j), where Q_j is the probability that the second pixel point j belongs to the target category.
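Putting claims 2 to 6 together, the target loss value can be sketched as below; the choice beta = 0.5 is only one example of a preset second weight coefficient in (0, 1). This function plugs directly into the train() sketch given after claim 1.

```python
import numpy as np

def target_loss(p, pos, neg, beta=0.5):
    """Sum of per-pixel losses (claim 5): first pixel points are up-weighted
    by (1/P - 1) on -log(P) (claims 2-4); second pixel points get -log(1 - Q)
    scaled by a preset beta in (0, 1) (claims 5-6)."""
    first = (1.0 / p[pos] - 1.0) * (-np.log(p[pos]))   # marked pixel points
    second = beta * (-np.log(1.0 - p[neg]))            # unmarked pixel points
    return first.sum() + second.sum()

# Usage with the train() sketch above (synthetic data, illustrative only):
rng = np.random.default_rng(0)
image = rng.normal(size=(200, 3))                  # 200 "pixels", 3 channels
pos = (image @ np.array([2.0, -1.0, 0.5])) > 1.0   # pixels of the target category
w = train([(image, pos, ~pos)], target_loss)
```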
7. The method of any one of claims 1-6, wherein the historical images are all medical images or all item images;
if the historical images are all medical images, pixel points corresponding to changes in body tissue in the medical images belong to the target category;
and if the historical images are all item images, pixel points corresponding to item damage in the item images belong to the target category.
8. An image processing method, the method comprising:
acquiring an image to be processed;
inputting the image to be processed into a target image classification model to obtain a target output result of the target image classification model, wherein the target image classification model is trained by the image classification model training method of any one of claims 1-7, and the target output result indicates whether each pixel point in the image to be processed belongs to the target category;
and displaying the target output result, wherein pixel points belonging to the target category and pixel points not belonging to the target category are displayed in different display modes.
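As one way (among many) to realize the "different display modes" of this claim, the sketch below tints pixel points of the target category red and leaves the rest in grayscale; the 0.5 cut-off and the color scheme are illustrative choices, not specified by the claim.

```python
import numpy as np

def display_overlay(gray, probs, threshold=0.5):
    """Return an RGB image: target-category pixel points in red,
    all other pixel points in their original grayscale value."""
    rgb = np.repeat(gray[..., None].astype(np.float32), 3, axis=-1)
    mask = probs >= threshold        # "belongs to the target category"
    rgb[mask] = (1.0, 0.0, 0.0)      # distinct display mode for the target
    return rgb
```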
9. An image classification model training apparatus, the apparatus comprising:
a first acquisition module configured to acquire multiple sets of training data, wherein each set of training data comprises a historical image, first pixel points in the historical image marked as belonging to a target category, and second pixel points not marked as belonging to the target category;
a training module configured to perform iterative training on the image classification model according to the multiple sets of training data until a training stop condition is met, to obtain a trained image classification model;
wherein in each round of training, a historical image from one set of training data is input into the image classification model used in the current round; after each round of training, if the training stop condition is not met, the image classification model is updated according to a target loss value of the image classification model used in that round, and the next round of training is performed with the updated image classification model; the target loss value is determined according to the loss value of each first pixel point in the historical image used in that round, and when the initial loss value of a first pixel point is greater than a loss threshold, the loss value of that first pixel point is greater than its initial loss value.
10. The apparatus of claim 9, wherein the loss value of each first pixel point is determined by:
determining the product of a first weight coefficient corresponding to the first pixel point and the initial loss value of the first pixel point as the loss value of the first pixel point, wherein the first weight coefficient is positively correlated with the initial loss value of the first pixel point.
11. The apparatus of claim 10, wherein after each round of training, the image classification model outputs, for each pixel point in the historical image used in that round, the probability that the pixel point belongs to the target category; and
the first weight coefficient corresponding to the first pixel point is the reciprocal of the probability that the first pixel point belongs to the target category, minus 1.
12. The apparatus of claim 11, wherein the initial loss value of a first pixel point i is -log(P_i), where P_i is the probability that the first pixel point i belongs to the target category.
13. The apparatus of claim 9, wherein the target loss value is further determined according to the loss value of each second pixel point in the historical image used in that round of training, the target loss value being the sum of the loss values of all pixel points in that historical image; and
the loss value of each second pixel point is determined by:
determining the product of a preset second weight coefficient and the initial loss value of the second pixel point as the loss value of the second pixel point, wherein the preset second weight coefficient lies within the interval (0, 1).
14. The apparatus of claim 13, wherein after each round of training, the image classification model outputs, for each pixel point in the historical image used in that round, the probability that the pixel point belongs to the target category; and
the initial loss value of a second pixel point j is -log(1 - Q_j), where Q_j is the probability that the second pixel point j belongs to the target category.
15. The apparatus of any one of claims 9-14, wherein the historical images are all medical images or all item images;
if the historical images are all medical images, pixel points corresponding to changes in body tissue in the medical images belong to the target category;
and if the historical images are all item images, pixel points corresponding to item damage in the item images belong to the target category.
16. An image processing apparatus, the apparatus comprising:
a second acquisition module configured to acquire an image to be processed;
an image processing module configured to input the image to be processed into a target image classification model to obtain a target output result of the target image classification model, wherein the target image classification model is trained by the image classification model training method of any one of claims 1-7, and the target output result indicates whether each pixel point in the image to be processed belongs to the target category;
and a display module configured to display the target output result, wherein pixel points belonging to the target category and pixel points not belonging to the target category are displayed in different display modes.
17. A computer readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the method of any one of claims 1-7, or implements the steps of the method of claim 8.
18. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor configured to execute the computer program in the memory to carry out the steps of the method of any one of claims 1-7 or the steps of the method of claim 8.
CN201911268187.2A 2019-12-11 2019-12-11 Classification model training method, image processing method, device, medium and equipment Active CN111160411B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911268187.2A CN111160411B (en) 2019-12-11 2019-12-11 Classification model training method, image processing method, device, medium and equipment

Publications (2)

Publication Number Publication Date
CN111160411A CN111160411A (en) 2020-05-15
CN111160411B (en) 2023-09-29

Family

ID=70557083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911268187.2A Active CN111160411B (en) 2019-12-11 2019-12-11 Classification model training method, image processing method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN111160411B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784595B (en) * 2020-06-10 2023-08-29 北京科技大学 Dynamic tag smooth weighting loss method and device based on historical record
CN112766389B (en) * 2021-01-26 2022-11-29 北京三快在线科技有限公司 Image classification method, training method, device and equipment of image classification model
CN113239878B (en) * 2021-06-01 2023-09-05 平安科技(深圳)有限公司 Image classification method, device, equipment and medium
CN114663731B (en) * 2022-05-25 2022-09-20 杭州雄迈集成电路技术股份有限公司 Training method and system of license plate detection model, and license plate detection method and system

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109344752A (en) * 2018-09-20 2019-02-15 北京字节跳动网络技术有限公司 Method and apparatus for handling mouth image
CN110084271A (en) * 2019-03-22 2019-08-02 同盾控股有限公司 A kind of other recognition methods of picture category and device
CN110443280A (en) * 2019-07-05 2019-11-12 北京达佳互联信息技术有限公司 Training method, device and the storage medium of image detection model
JP2019197323A (en) * 2018-05-08 2019-11-14 国立研究開発法人情報通信研究機構 Prediction system and prediction method

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP5652227B2 (en) * 2011-01-25 2015-01-14 ソニー株式会社 Image processing apparatus and method, and program
JP6546271B2 (en) * 2015-04-02 2019-07-17 株式会社日立製作所 Image processing apparatus, object detection apparatus, and image processing method
CN109359515A (en) * 2018-08-30 2019-02-19 东软集团股份有限公司 A kind of method and device that the attributive character for target object is identified

Non-Patent Citations (1)

Title
Tai Lingnan; Wang Chunyu; Tian Maozai. Inverse probability multiple weighted quantile regression estimation under missing data and its application. Statistical Research, 2018, (09), pp. 115-120. *

Also Published As

Publication number Publication date
CN111160411A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN111160411B (en) Classification model training method, image processing method, device, medium and equipment
CN108197658B (en) Image annotation information processing method, device, server and system
CN112633384B (en) Object recognition method and device based on image recognition model and electronic equipment
CN110704634A (en) Method and device for checking and repairing knowledge graph link errors and storage medium
KR102002024B1 (en) Method for processing labeling of object and object management server
CN109542289B (en) MES operation method, device, equipment and storage medium
CN108427941B (en) Method for generating face detection model, face detection method and device
US20210390728A1 (en) Object area measurement method, electronic device and storage medium
WO2019167556A1 (en) Label-collecting device, label collection method, and label-collecting program
CN114385869A (en) Method and device for detecting data abnormity, storage medium and computer equipment
CN113344862A (en) Defect detection method, defect detection device, electronic equipment and storage medium
CN110287817B (en) Target recognition and target recognition model training method and device and electronic equipment
CN111385659B (en) Video recommendation method, device, equipment and storage medium
CN114821551A (en) Method, apparatus and storage medium for legacy detection and model training
CN114782769A (en) Training sample generation method, device and system and target object detection method
CN112420150B (en) Medical image report processing method and device, storage medium and electronic equipment
CN112651315A (en) Information extraction method and device of line graph, computer equipment and storage medium
CN112149698A (en) Method and device for screening difficult sample data
US11068716B2 (en) Information processing method and information processing system
CN111427874B (en) Quality control method and device for medical data production and electronic equipment
CN113537192A (en) Image detection method, image detection device, electronic equipment and storage medium
CN108228063B (en) Preference scheme determination method and device and electronic equipment
CN111124862A (en) Intelligent equipment performance testing method and device and intelligent equipment
CN112241448A (en) Response information generation method, device, equipment and storage medium
CN114494818B (en) Image processing method, model training method, related device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant