CN110070113A - Training method and device for a training set - Google Patents
Training method and device for a training set
- Publication number
- CN110070113A (application number CN201910252738.XA)
- Authority
- CN
- China
- Prior art keywords
- training
- training set
- samples pictures
- subset
- training subset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Processing (AREA)
Abstract
An embodiment of the present invention provides a training method and device for a training set. The method comprises: obtaining a training set for training a preset model; filtering interference sample pictures out of all sample pictures to construct a first training set, and retaining all first remaining sample pictures other than the interference sample pictures; filtering first target pictures out of all first remaining sample pictures to construct a second training set, and retaining all second remaining sample pictures other than the interference sample pictures and the first target pictures; obtaining second target pictures from all second remaining sample pictures to construct a third training set; and training the constructed first training set, second training set and third training set separately. The device performs the above method. The training method and device for a training set provided in embodiments of the present invention can improve the reasonableness of training-set construction, so that the training sets are trained more reasonably.
Description
Technical field
Embodiments of the present invention relate to image processing technology, and in particular to a training method and device for a training set.
Background technique
A capsule endoscope has many advantages: it is painless and non-invasive, and the images it captures carry a large amount of information, so it has wide application value.
In the prior art, the original pictures captured by a capsule endoscope are identified and classified manually. To identify the original pictures more accurately and efficiently, a model needs to be constructed; however, a model usually needs to be trained before use, and its training sets must be trained so that the model can recognize pictures more accurately. In existing training methods for training sets, the constructed training sets are not reasonable, so the accuracy of the trained model is not high.
Therefore, how to avoid the above drawbacks, improve the reasonableness of training-set construction, and thereby train the training sets more reasonably, has become a problem to be solved.
Summary of the invention
In view of the problems in the existing technology, embodiments of the present invention provide a training method and device for a training set.
An embodiment of the present invention provides a training method for a training set, comprising:
obtaining a training set for training a preset model; the training set includes a first training set containing interference sample pictures, a second training set corresponding to first target pictures in which the photographed surface does not contain an abnormal feature, and a third training set corresponding to second target pictures in which the photographed surface contains the abnormal feature; the abnormal feature includes a protruding feature and/or a specified color feature;
filtering the interference sample pictures out of all sample pictures to construct the first training set, and retaining all first remaining sample pictures other than the interference sample pictures;
filtering the first target pictures out of all first remaining sample pictures to construct the second training set, and retaining all second remaining sample pictures other than the interference sample pictures and the first target pictures;
obtaining the second target pictures from all second remaining sample pictures to construct the third training set;
training the constructed first training set, second training set and third training set separately.
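The staged construction above can be sketched as a cascade of filters. The following is a minimal illustrative sketch, not the patent's implementation; the three predicate functions are hypothetical stand-ins for whatever recognition logic identifies each picture type.

```python
def build_training_sets(sample_pictures, is_interference, is_first_target,
                        is_second_target):
    """Cascade: filter out interference pictures, then first target
    pictures, then select second target pictures from what remains."""
    # First training set: the interference sample pictures themselves.
    first_set = [p for p in sample_pictures if is_interference(p)]
    first_remaining = [p for p in sample_pictures if not is_interference(p)]

    # Second training set: first target pictures from the first remainder.
    second_set = [p for p in first_remaining if is_first_target(p)]
    second_remaining = [p for p in first_remaining if not is_first_target(p)]

    # Third training set: second target pictures selected from the second remainder.
    third_set = [p for p in second_remaining if is_second_target(p)]
    return first_set, second_set, third_set
```

For example, with sample pictures labeled by type A (interference), B (first target) and C (second target), the function returns the three sets {A}, {B} and {C} in order.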
An embodiment of the present invention provides a training device for a training set, comprising:
an acquiring unit for obtaining a training set for training a preset model; the training set includes a first training set containing interference sample pictures, a second training set corresponding to first target pictures in which the photographed surface does not contain an abnormal feature, and a third training set corresponding to second target pictures in which the photographed surface contains the abnormal feature; the abnormal feature includes a protruding feature and/or a specified color feature;
a first construction unit for filtering the interference sample pictures out of all sample pictures to construct the first training set, and retaining all first remaining sample pictures other than the interference sample pictures;
a second construction unit for filtering the first target pictures out of all first remaining sample pictures to construct the second training set, and retaining all second remaining sample pictures other than the interference sample pictures and the first target pictures;
a third construction unit for obtaining the second target pictures from all second remaining sample pictures to construct the third training set;
a training unit for training the constructed first training set, second training set and third training set separately.
An embodiment of the present invention provides an electronic device, comprising a processor, a memory and a bus, wherein:
the processor and the memory communicate with each other through the bus;
the memory stores program instructions executable by the processor, and by calling the program instructions the processor is able to carry out the following method:
obtaining a training set for training a preset model; the training set includes a first training set containing interference sample pictures, a second training set corresponding to first target pictures in which the photographed surface does not contain an abnormal feature, and a third training set corresponding to second target pictures in which the photographed surface contains the abnormal feature; the abnormal feature includes a protruding feature and/or a specified color feature;
filtering the interference sample pictures out of all sample pictures to construct the first training set, and retaining all first remaining sample pictures other than the interference sample pictures;
filtering the first target pictures out of all first remaining sample pictures to construct the second training set, and retaining all second remaining sample pictures other than the interference sample pictures and the first target pictures;
obtaining the second target pictures from all second remaining sample pictures to construct the third training set;
training the constructed first training set, second training set and third training set separately.
An embodiment of the present invention provides a non-transitory computer-readable storage medium, comprising:
the non-transitory computer-readable storage medium stores computer instructions, and the computer instructions cause a computer to execute the following method:
obtaining a training set for training a preset model; the training set includes a first training set containing interference sample pictures, a second training set corresponding to first target pictures in which the photographed surface does not contain an abnormal feature, and a third training set corresponding to second target pictures in which the photographed surface contains the abnormal feature; the abnormal feature includes a protruding feature and/or a specified color feature;
filtering the interference sample pictures out of all sample pictures to construct the first training set, and retaining all first remaining sample pictures other than the interference sample pictures;
filtering the first target pictures out of all first remaining sample pictures to construct the second training set, and retaining all second remaining sample pictures other than the interference sample pictures and the first target pictures;
obtaining the second target pictures from all second remaining sample pictures to construct the third training set;
training the constructed first training set, second training set and third training set separately.
The training method and device for a training set provided in embodiments of the present invention, by filtering out the interference sample pictures, the first target pictures and the second target pictures successively, stage by stage, constructing the corresponding training sets, and then training the training sets separately, can improve the reasonableness of training-set construction and thereby train the training sets more reasonably.
Detailed description of the invention
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow chart of an embodiment of the training method of a training set of the present invention;
Fig. 2(a)~Fig. 2(h) are screenshots of fully exposed pictures captured in an embodiment of the present invention;
Fig. 3(a)~Fig. 3(h) are screenshots of first target pictures with local shape changes captured in an embodiment of the present invention;
Fig. 4(a)~Fig. 4(h) are screenshots of second target pictures of erosion with a protruding property captured in an embodiment of the present invention;
Fig. 5 is a structural schematic diagram of an embodiment of the training device of a training set of the present invention;
Fig. 6 is a structural schematic diagram of an electronic-device entity provided in an embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments. Obviously, the described embodiments are some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a flow chart of an embodiment of the training method of a training set of the present invention. As shown in Fig. 1, the training method of a training set provided in an embodiment of the present invention comprises the following steps:
S101: obtaining a training set for training a preset model; the training set includes a first training set containing interference sample pictures, a second training set corresponding to first target pictures in which the photographed surface does not contain an abnormal feature, and a third training set corresponding to second target pictures in which the photographed surface contains the abnormal feature; the abnormal feature includes a protruding feature and/or a specified color feature.
Specifically, the device obtains the training set for training the preset model, with the composition described above. It should be understood that the interference sample pictures, the first target pictures and the second target pictures all fall within the scope of sample pictures; sample pictures are pictures selected from the original pictures that can serve as training samples, and the original pictures are captured by a capsule endoscope. The workflow of the capsule endoscope is explained as follows:
The capsule endoscope enters the alimentary canal through the mouth and is later excreted naturally through the anus.
The battery life of the capsule endoscope is limited; its effective working range covers the oral cavity, esophagus, stomach, duodenum, small intestine and part of the large intestine.
Each pass of the capsule endoscope produces both in-domain inspection pictures and out-of-domain inspection pictures.
An in-domain inspection picture is the result of photographing a given section of the alimentary canal.
An out-of-domain inspection picture is a picture the capsule endoscope captured in passing, outside the in-domain inspection pictures.
All pictures can be identified automatically, without any manual intervention (including image preprocessing).
After the images are identified, the pictures captured by the capsule endoscope are divided into six major classes (125 groups in total) and automatically saved into 125 picture folders. The six major classes may be:
First major class: class-1 out-of-domain classification labels (10 categories).
Second major class: class-2 out-of-domain classification labels (13 categories).
Third major class: first-target-picture classification labels based on local structural features (14 categories).
Fourth major class: first-target-picture classification labels for hole-shaped structures (8 categories).
Fifth major class: first-target-picture classification labels based on global structural features (24 categories).
Sixth major class: second-target-picture classification labels (56 categories).
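As a quick sanity check, the per-class group counts listed above can be tallied; a small sketch follows, in which the class names are paraphrases rather than the patent's exact labels:

```python
# Per-class group counts as listed above; the key names are paraphrases.
MAJOR_CLASSES = {
    "class-1 out-of-domain labels": 10,
    "class-2 out-of-domain labels": 13,
    "first-target labels (local structural features)": 14,
    "first-target labels (hole-shaped structure)": 8,
    "first-target labels (global structural features)": 24,
    "second-target labels": 56,
}

# The six major classes together account for the 125 picture folders.
assert sum(MAJOR_CLASSES.values()) == 125
```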
The different parts of the alimentary canal, such as the oral cavity, esophagus, stomach, duodenum, small intestine and large intestine, can be identified automatically.
The number of original pictures each capsule endoscope captures per examination may be 2000 to 3000, i.e., the number of pictures in the picture set obtained from the capsule endoscope.
The original pictures (in JPG format) captured by the capsule endoscope, without any processing, can be exported from the hospital information system. Interference sample pictures can be understood as sample pictures that cannot be used for picture recognition; once identified, they need to be rejected as early as possible to reduce the amount of computation during training of the preset model. It should be understood that interference sample pictures may include fully exposed pictures. Fig. 2(a)~Fig. 2(h) are screenshots of fully exposed pictures captured in an embodiment of the present invention; the figures are mutually independent, each being one manifestation of a fully exposed picture. An abnormal feature may include a protruding feature and/or a specified color feature. Protruding features may include swellings and granular protrusions; specified color features may include red and white, without specific limitation. When the output of the preset model includes a second target picture, a special mark for the abnormal feature can be generated, for example framing the abnormal feature with a box, to prompt the relevant personnel to examine the framed part carefully; that is, the abnormal feature can serve as an intermediate reference feature in certain disease diagnoses, and the abnormal feature alone is not sufficient to diagnose a disease. First target pictures may include first target pictures with local shape changes; the specific shape changes may include folds, cracks, interlacing and the like, without specific limitation. Fig. 3(a)~Fig. 3(h) are screenshots of first target pictures with local shape changes captured in an embodiment of the present invention; the figures are mutually independent, each being one manifestation of a first target picture with local shape changes. Second target pictures may include redness, swelling, erosion, ulcer and the like. Fig. 4(a)~Fig. 4(h) are screenshots of second target pictures of erosion with a protruding property captured in an embodiment of the present invention; the figures are mutually independent, each being one manifestation of a second target picture of erosion with a protruding property.
S102: filtering the interference sample pictures out of all sample pictures to construct the first training set, and retaining all first remaining sample pictures other than the interference sample pictures.
Specifically, the device filters the interference sample pictures out of all sample pictures to construct the first training set, and retains all first remaining sample pictures other than the interference sample pictures. An example: all sample pictures comprise {A, B, C}, where A, B and C are the interference sample pictures, the first target pictures and the second target pictures respectively; it should be understood that each of A, B and C is a set of pictures. This step filters A out of the sample pictures, i.e., constructs the first training set containing A, and retains B and C as all the first remaining sample pictures.
S103: filtering the first target pictures out of all first remaining sample pictures to construct the second training set, and retaining all second remaining sample pictures other than the interference sample pictures and the first target pictures.
Specifically, the device filters the first target pictures out of all first remaining sample pictures to construct the second training set, and retains all second remaining sample pictures other than the interference sample pictures and the first target pictures. Continuing the example above: all first remaining sample pictures comprise {B, C}; this step filters B out of all first remaining sample pictures, i.e., constructs the second training set containing B, and retains C as all the second remaining sample pictures.
S104: obtaining the second target pictures from all second remaining sample pictures to construct the third training set.
Specifically, the device obtains the second target pictures from all second remaining sample pictures to construct the third training set. Continuing the example above: all second remaining sample pictures comprise {C}; this step obtains C from all second remaining sample pictures, i.e., constructs the third training set containing C. It should be noted that, because certain pictures are easily confused, the second remaining sample pictures may also contain pictures of other types besides C; through this step, the constructed third training set can contain, as far as possible, all the second target pictures while excluding pictures of types other than C.
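The caveat above — that the second remainder may still contain confusable pictures of other types — is why S104 positively selects second target pictures rather than keeping everything left over. A hedged sketch, with a hypothetical `looks_like_second_target` predicate standing in for the patent's unspecified recognition logic:

```python
def build_third_training_set(second_remaining, looks_like_second_target):
    """Positively select second target pictures; confusable leftovers of
    other types are simply not taken into the third training set."""
    return [p for p in second_remaining if looks_like_second_target(p)]
```

For instance, if the second remainder holds second target pictures C plus some confusable pictures D, only the C pictures enter the third training set.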
S105: training the constructed first training set, second training set and third training set separately.
Specifically, the device trains the constructed first training set, second training set and third training set separately. The method of training a constructed training set is a mature technique in this field and is not repeated here.
The training method of a training set provided in an embodiment of the present invention, by filtering out the interference sample pictures, the first target pictures and the second target pictures successively, stage by stage, constructing the corresponding training sets, and then training the training sets separately, can improve the reasonableness of training-set construction and thereby train the training sets more reasonably.
On the basis of the above embodiments, the first training set includes a first training subset and a second training subset; correspondingly, filtering the interference sample pictures out of all sample pictures to construct the first training set comprises:
splitting the first training set into the first training subset and the second training subset. The first training subset is the training subset corresponding to the class-1 out-of-domain classification labels, which are determined based on shooting defects of the original pictures and on shooting positions unrelated to the target site to be detected. The second training subset is the training subset corresponding to the class-2 out-of-domain classification labels, which are determined based on original pictures of no medical-judgment value, original pictures with attached coverings, and original pictures containing digestion residue.
Specifically, the device splits the first training set into the first training subset and the second training subset as described above. That is, the first training subset and the second training subset respectively correspond to the first major class and the second major class mentioned earlier. Shooting defects may include fully exposed pictures, all-black pictures, half-exposed pictures, partially exposed pictures, structurally blurred pictures and detail-blurred pictures. The target site to be detected may be the stomach, and the unrelated shooting positions may include pictures taken before the capsule endoscope enters the stomach, pictures taken in the esophagus, oral-cavity pictures and intestinal pictures. Original pictures of no medical-judgment value may include:
Homogeneous whole pictures: the surface of the photographed object is flat and smooth, without significant texture and with uniform color. Although the shooting quality is high, the content is so uniform that the picture loses its medical-judgment value (the location, angle, carrying organ, anatomical features and so on of the photographed object cannot be judged). Such pictures account for about 5.8% of all pictures, a very high proportion. Having lost their medical value, these pictures — while not garbage pictures on the surface — are in practice no different from "garbage pictures" and can be ignored entirely in subsequent processing.
Waterline pictures: the boundary line between air and water appears in the picture, and the picture structure is clear and simple. The part exposed to the air has content similar to that of the homogeneous whole pictures above, with no medical value; the part submerged under water, being covered by a water film, exposes no valuable information and likewise has no medical value. The entire picture can therefore be regarded as a "garbage picture". Such pictures account for about 3.8%.
The coverings attached in original pictures with coverings may include lump-like suspended matter, bubble clusters, mucous bodies and the like; because the photographed object is completely covered, such pictures lose their medical value.
Original pictures of digestion residue: food residue that has not been cleanly drained from the alimentary canal may appear in both the stomach and the intestines; such pictures account for about 1%. In most cases the residue covers a large area, occupying more than 50% of the picture; as long as the uncovered regions are confirmed to show no abnormal feature, it can be guaranteed that the pictures passed through this category are all free of abnormal features. They can likewise be called "garbage pictures" and no longer participate in subsequent processing.
Then, the interference sample pictures respectively corresponding to the first training subset and the second training subset are filtered out of all sample pictures, to construct the first training subset and the second training subset respectively.
Specifically, the device filters, from all sample pictures, the interference sample pictures respectively corresponding to the first training subset and the second training subset, to construct the first training subset and the second training subset respectively. Referring to the example above, A is split into A1 and A2, corresponding respectively to the first and second major classes above: the constructed first training subset corresponds to the first major class A1, and the constructed second training subset corresponds to the second major class A2.
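The split of A into A1 and A2 in the example can be sketched as routing each interference picture by its out-of-domain label. Here `label_of` is a hypothetical classifier, not part of the patent, returning "class-1" (shooting defects, unrelated shooting positions) or "class-2" (no medical value, coverings, digestion residue):

```python
def split_first_training_set(first_set, label_of):
    """Split the first training set (interference pictures) into the
    first training subset (A1) and the second training subset (A2)."""
    a1, a2 = [], []
    for pic in first_set:
        # Route by the picture's out-of-domain classification label.
        (a1 if label_of(pic) == "class-1" else a2).append(pic)
    return a1, a2
```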
The training method of a training set provided in an embodiment of the present invention, by constructing the first training subset and the second training subset separately, can further improve the reasonableness of training-set construction and thereby train the training sets more reasonably.
On the basis of the above embodiments, the second training set includes a third training subset, a fourth training subset and a fifth training subset; correspondingly, filtering the first target pictures out of all first remaining sample pictures to construct the second training set comprises:
splitting the second training set into the third training subset, the fourth training subset and the fifth training subset. The third training subset is the training subset corresponding to the first-target-picture classification labels based on local structural features; the fourth training subset is the training subset corresponding to the classification labels for hole-shaped-structure first target pictures; the fifth training subset is the training subset corresponding to the first-target-picture classification labels based on global structural features.
Specifically, the device splits the second training set into the third, fourth and fifth training subsets as described above. First target pictures based on local structural features may include:
First target pictures with edge defects: a picture with an edge defect is usually the result of photographing the edge of a hole-shaped texture; most of the picture is structurally simple, with only an incomplete hole shape or half-radial structure at the margin. Because many abnormal features also appear in this region, pictures of this category have a strong control (contrast) effect.
Simple linear structures: pictures containing only one or two short spindle-shaped structures, with a small shaded region; the rest of the picture is smooth, without obvious texture. Many pictures containing abnormal features have similar background structures, so pictures of this category form a good control group against abnormal-feature pictures.
Hole-shaped-structure first target pictures: the photographed object presents a hole-shaped structure, which can be distinguished by size into large-hole structures and small-hole structures.
First target pictures based on global structural features may include:
Stomach-angle structure: the stomach angle is the structure formed where the lesser curvature reaches its lowest point, generally a 90° corner, marking the boundary between the gastric body and the pyloric part along the lesser curvature. When the stomach wall contracts, the surface of the stomach angle usually shows thread-like folds; when the stomach wall relaxes, the folds disappear. Pictures of this specific site are therefore grouped as one category.
Dense texture structure: the stomach-wall structure in a wide shot shows densely arranged curved textures, usually captured while the inner wall is contracting. Some structural information may also be superimposed on this texture, making the background of the whole picture extremely complicated, so that finding and identifying abnormal features against such backgrounds becomes exceptionally difficult. Such pictures account for about 4.5%.
Then, the first target pictures respectively corresponding to the third training subset, the fourth training subset and the fifth training subset are filtered out of all first remaining sample pictures, to construct the third, fourth and fifth training subsets respectively.
Specifically, the device filters, from all first remaining sample pictures, the first target pictures respectively corresponding to the third, fourth and fifth training subsets, to construct the third, fourth and fifth training subsets respectively. Referring to the example above, B is split into B1, B2 and B3, corresponding respectively to the third through fifth major classes above: the constructed third training subset corresponds to the third major class B1, the constructed fourth training subset corresponds to the fourth major class B2, and the constructed fifth training subset corresponds to the fifth major class B3.
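The three-way split of B into B1, B2 and B3 generalizes the two-way split used for the first training set; a sketch grouping by a hypothetical `label_of` function (the label names here are the example's, not the patent's):

```python
from collections import defaultdict

def split_by_label(training_set, label_of):
    """Group pictures into subsets keyed by their major-class label,
    e.g. 'B1' (local structure), 'B2' (hole-shaped), 'B3' (global)."""
    groups = defaultdict(list)
    for pic in training_set:
        groups[label_of(pic)].append(pic)
    return dict(groups)
```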
The training method of a training set provided in an embodiment of the present invention, by constructing the third through fifth training subsets separately, can further improve the reasonableness of training-set construction and thereby train the training sets more reasonably.
On the basis of the above embodiments, training the constructed first training set, second training set and third training set respectively comprises:
training, respectively, the constructed first training subset, second training subset, third training subset, fourth training subset, fifth training subset and third training set.
Specifically, the device respectively trains the constructed first training subset, second training subset, third training subset, fourth training subset, fifth training subset and third training set. The method of training a constructed training set is a mature technique in this field and is not repeated here.
In the training method provided by this embodiment of the present invention, training the constructed first to fifth training subsets and the third training set respectively further improves the rationality of training-set construction, so that the training sets can be trained more reasonably.
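Since the embodiments treat the training routine itself as a mature, pluggable technique, the per-set training step with duration bookkeeping (used by the following embodiment) can be sketched as below; `train_fn` is an assumed callable standing in for whatever trainer is chosen.

```python
import time

# Hypothetical sketch: train each constructed training set independently and
# record its training duration (start moment to completion moment), as the
# embodiments describe. `train_fn` is an assumption, not from the patent.
def train_all(training_sets, train_fn):
    """Return {set name: (training result, training duration in seconds)}."""
    results = {}
    for name, pictures in training_sets.items():
        start = time.monotonic()                  # training start moment
        model = train_fn(pictures)                # delegate to the chosen trainer
        results[name] = (model, time.monotonic() - start)
    return results
```
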
On the basis of the above embodiments, the method further comprises:
obtaining the training durations corresponding to the first training subset, the second training subset, the third training subset, the fourth training subset, the fifth training subset and the third training set.
Specifically, the device obtains the training duration corresponding to each of the first to fifth training subsets and the third training set. A training duration can be understood as the time from the moment training starts to the moment training completes.
If it is determined that at least one training duration reaches a preset duration, the target training set corresponding to each target training duration that reaches the preset duration is split, so that the training duration corresponding to each split training set is less than the preset duration.
Specifically, if the device determines that at least one training duration reaches the preset duration, it splits the target training set corresponding to each target training duration that reaches the preset duration, so that the training duration corresponding to each split training set is less than the preset duration. The preset duration can be set independently according to the actual situation. For the case where one training duration reaches the preset duration, for example when only the training duration T1 of the first training subset reaches it, the target training set is the first training subset and the first training subset is split; the concrete splitting manner is not specially limited. For the case where multiple training durations reach the preset duration, for example two of them, i.e. when only the training duration T1 of the first training subset and the training duration T2 of the second training subset reach it, the target training sets are the first training subset and the second training subset, and each of the two is split respectively.
In the training method provided by this embodiment of the present invention, splitting any training set whose training duration is too long further improves the rationality of training-set construction, so that the training sets can be trained more reasonably.
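The duration-based split described above can be sketched as follows. Because the patent explicitly leaves the concrete splitting manner open, halving each offending set is merely one simple assumed strategy.

```python
# Hypothetical sketch of the duration-based split: any training set whose
# recorded training duration reached the preset duration is split so that the
# resulting parts can be trained within the preset duration. Halving is one
# simple choice; the patent does not fix the splitting manner.
def split_overlong_sets(training_sets, durations, preset_duration):
    result = {}
    for name, pictures in training_sets.items():
        if durations.get(name, 0.0) >= preset_duration and len(pictures) > 1:
            mid = len(pictures) // 2              # split into two smaller sets
            result[name + "_a"] = pictures[:mid]
            result[name + "_b"] = pictures[mid:]
        else:
            result[name] = pictures               # below threshold: unchanged
    return result
```
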
Fig. 5 is a schematic structural diagram of an embodiment of the training apparatus for training sets of the present invention. As shown in Fig. 5, this embodiment of the present invention provides a training apparatus for training sets, comprising an acquiring unit 501, a first construction unit 502, a second construction unit 503, a third construction unit 504 and a training unit 505, wherein:
the acquiring unit 501 is used to obtain the training sets for training a preset model; the training sets include a first training set comprising interference sample pictures, a second training set corresponding to first target pictures whose photographed outer surface does not contain the abnormal feature, and a third training set corresponding to second target pictures whose photographed outer surface contains the abnormal feature; the abnormal feature includes a protruding feature and/or a designated-color feature; the first construction unit 502 is used to filter the interference sample pictures from all sample pictures to construct the first training set, and to retain all first remaining sample pictures other than the interference sample pictures; the second construction unit 503 is used to filter the first target pictures from all first remaining sample pictures to construct the second training set, and to retain all second remaining sample pictures other than the interference sample pictures and the first target pictures; the third construction unit 504 is used to obtain the second target pictures from all second remaining sample pictures to construct the third training set; and the training unit 505 is used to train the constructed first training set, second training set and third training set respectively.
Specifically, the acquiring unit 501 obtains the training sets for training the preset model; the training sets include the first training set comprising interference sample pictures, the second training set corresponding to first target pictures whose photographed outer surface does not contain the abnormal feature, and the third training set corresponding to second target pictures whose photographed outer surface contains the abnormal feature; the abnormal feature includes a protruding feature and/or a designated-color feature. The first construction unit 502 filters the interference sample pictures from all sample pictures to construct the first training set, and retains all first remaining sample pictures other than the interference sample pictures. The second construction unit 503 filters the first target pictures from all first remaining sample pictures to construct the second training set, and retains all second remaining sample pictures other than the interference sample pictures and the first target pictures. The third construction unit 504 obtains the second target pictures from all second remaining sample pictures to construct the third training set. The training unit 505 trains the constructed first training set, second training set and third training set respectively.
In the training apparatus provided by this embodiment of the present invention, the interference sample pictures, the first target pictures and the second target pictures are filtered out step by step, the corresponding training sets are constructed, and the training sets are then trained respectively, which improves the rationality of training-set construction so that the training sets can be trained more reasonably.
The training apparatus provided by this embodiment of the present invention can specifically be used to execute the processing flows of the above method embodiments; its functions are not described again here, and reference may be made to the detailed description of the above method embodiments.
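The stepwise filtering performed by the first to third construction units can be sketched as follows. The three predicate functions are assumptions standing in for the picture classifiers, which the patent does not specify.

```python
# Hypothetical sketch of the construction units' stepwise filtering:
# 1) interference pictures form the first training set; the rest are the
#    first remaining sample pictures;
# 2) first target pictures (no abnormal feature on the photographed surface)
#    form the second training set; the rest are the second remaining pictures;
# 3) second target pictures (abnormal feature present) form the third set.
# The predicate callables are assumed stand-ins for unspecified classifiers.
def construct_training_sets(all_pictures, is_interference,
                            is_first_target, is_second_target):
    first_set = [p for p in all_pictures if is_interference(p)]
    first_remaining = [p for p in all_pictures if not is_interference(p)]
    second_set = [p for p in first_remaining if is_first_target(p)]
    second_remaining = [p for p in first_remaining if not is_first_target(p)]
    third_set = [p for p in second_remaining if is_second_target(p)]
    return first_set, second_set, third_set
```
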
Fig. 6 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention. As shown in Fig. 6, the electronic device includes a processor (processor) 601, a memory (memory) 602 and a bus 603;
wherein the processor 601 and the memory 602 communicate with each other through the bus 603;
the processor 601 is used to call the program instructions in the memory 602 to execute the methods provided by the above method embodiments, for example: obtaining the training sets for training a preset model, the training sets including a first training set comprising interference sample pictures, a second training set corresponding to first target pictures whose photographed outer surface does not contain the abnormal feature, and a third training set corresponding to second target pictures whose photographed outer surface contains the abnormal feature, the abnormal feature including a protruding feature and/or a designated-color feature; filtering the interference sample pictures from all sample pictures to construct the first training set, and retaining all first remaining sample pictures other than the interference sample pictures; filtering the first target pictures from all first remaining sample pictures to construct the second training set, and retaining all second remaining sample pictures other than the interference sample pictures and the first target pictures; obtaining the second target pictures from all second remaining sample pictures to construct the third training set; and training the constructed first training set, second training set and third training set respectively.
This embodiment discloses a computer program product. The computer program product includes a computer program stored on a non-transitory computer-readable storage medium, and the computer program includes program instructions which, when executed by a computer, enable the computer to perform the methods provided by the above method embodiments, for example: obtaining the training sets for training a preset model, the training sets including a first training set comprising interference sample pictures, a second training set corresponding to first target pictures whose photographed outer surface does not contain the abnormal feature, and a third training set corresponding to second target pictures whose photographed outer surface contains the abnormal feature, the abnormal feature including a protruding feature and/or a designated-color feature; filtering the interference sample pictures from all sample pictures to construct the first training set, and retaining all first remaining sample pictures other than the interference sample pictures; filtering the first target pictures from all first remaining sample pictures to construct the second training set, and retaining all second remaining sample pictures other than the interference sample pictures and the first target pictures; obtaining the second target pictures from all second remaining sample pictures to construct the third training set; and training the constructed first training set, second training set and third training set respectively.
This embodiment provides a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium stores computer instructions, and the computer instructions cause the computer to execute the methods provided by the above method embodiments, for example: obtaining the training sets for training a preset model, the training sets including a first training set comprising interference sample pictures, a second training set corresponding to first target pictures whose photographed outer surface does not contain the abnormal feature, and a third training set corresponding to second target pictures whose photographed outer surface contains the abnormal feature, the abnormal feature including a protruding feature and/or a designated-color feature; filtering the interference sample pictures from all sample pictures to construct the first training set, and retaining all first remaining sample pictures other than the interference sample pictures; filtering the first target pictures from all first remaining sample pictures to construct the second training set, and retaining all second remaining sample pictures other than the interference sample pictures and the first target pictures; obtaining the second target pictures from all second remaining sample pictures to construct the third training set; and training the constructed first training set, second training set and third training set respectively.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware under the control of program instructions. The aforementioned program may be stored in a computer-readable storage medium; when the program is executed, it performs the steps of the above method embodiments. The aforementioned storage media include various media that can store program code, such as ROM, RAM, magnetic disks or optical disks.
The apparatus embodiments described above are merely exemplary. The units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without creative labour.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by means of software plus a necessary general hardware platform, or of course by hardware. Based on this understanding, the essence of the above technical solutions, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk or an optical disk, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are merely illustrative of the technical solutions of the present invention and are not limiting. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (8)
1. A training method for training sets, characterized by comprising:
obtaining the training sets for training a preset model; the training sets including a first training set comprising interference sample pictures, a second training set corresponding to first target pictures whose photographed outer surface does not contain an abnormal feature, and a third training set corresponding to second target pictures whose photographed outer surface contains the abnormal feature; the abnormal feature including a protruding feature and/or a designated-color feature;
filtering the interference sample pictures from all sample pictures to construct the first training set, and retaining all first remaining sample pictures other than the interference sample pictures;
filtering the first target pictures from all first remaining sample pictures to construct the second training set, and retaining all second remaining sample pictures other than the interference sample pictures and the first target pictures;
obtaining the second target pictures from all second remaining sample pictures to construct the third training set; and
training the constructed first training set, second training set and third training set respectively.
2. The method according to claim 1, characterized in that the first training set includes a first training subset and a second training subset; and correspondingly, filtering the interference sample pictures from all sample pictures to construct the first training set comprises:
splitting the first training set into the first training subset and the second training subset; the first training subset being the training subset corresponding to a class-one out-of-domain classification label, the class-one out-of-domain classification label being determined based on original pictures with shooting defects or pictures shot at positions unrelated to the target site to be detected; the second training subset being the training subset corresponding to a class-two out-of-domain classification label, the class-two out-of-domain classification label being determined based on original pictures of no medical judgment value, original pictures with attached coverings, and original pictures containing digested residue; and
filtering, from all sample pictures, the interference sample pictures respectively corresponding to the first training subset and the second training subset, so as to respectively construct the first training subset and the second training subset.
3. The method according to claim 2, characterized in that the second training set includes a third training subset, a fourth training subset and a fifth training subset; and correspondingly, filtering the first target pictures from all first remaining sample pictures to construct the second training set comprises:
splitting the second training set into the third training subset, the fourth training subset and the fifth training subset; the third training subset being the training subset corresponding to the classification label of first target pictures based on local structural features; the fourth training subset being the training subset corresponding to the classification label of hole-shaped-structure first target pictures; the fifth training subset being the training subset corresponding to the classification label of first target pictures based on global structural features; and
filtering, from all first remaining sample pictures, the first target pictures respectively corresponding to the third training subset, the fourth training subset and the fifth training subset, so as to respectively construct the third training subset, the fourth training subset and the fifth training subset.
4. The method according to claim 3, characterized in that training the constructed first training set, second training set and third training set respectively comprises:
training, respectively, the constructed first training subset, second training subset, third training subset, fourth training subset, fifth training subset and third training set.
5. The method according to claim 4, characterized in that the method further comprises:
obtaining the training durations corresponding to the first training subset, the second training subset, the third training subset, the fourth training subset, the fifth training subset and the third training set; and
if it is determined that at least one training duration reaches a preset duration, splitting the target training set corresponding to each target training duration that reaches the preset duration, so that the training duration corresponding to each split training set is less than the preset duration.
6. A training apparatus for training sets, characterized by comprising:
an acquiring unit for obtaining the training sets for training a preset model; the training sets including a first training set comprising interference sample pictures, a second training set corresponding to first target pictures whose photographed outer surface does not contain an abnormal feature, and a third training set corresponding to second target pictures whose photographed outer surface contains the abnormal feature; the abnormal feature including a protruding feature and/or a designated-color feature;
a first construction unit for filtering the interference sample pictures from all sample pictures to construct the first training set, and retaining all first remaining sample pictures other than the interference sample pictures;
a second construction unit for filtering the first target pictures from all first remaining sample pictures to construct the second training set, and retaining all second remaining sample pictures other than the interference sample pictures and the first target pictures;
a third construction unit for obtaining the second target pictures from all second remaining sample pictures to construct the third training set; and
a training unit for training the constructed first training set, second training set and third training set respectively.
7. An electronic device, characterized by comprising a processor, a memory and a bus, wherein:
the processor and the memory communicate with each other through the bus; and
the memory stores program instructions executable by the processor, the processor calling the program instructions to execute the method according to any one of claims 1 to 5.
8. A non-transitory computer-readable storage medium, characterized in that the non-transitory computer-readable storage medium stores computer instructions, the computer instructions causing a computer to execute the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910252738.XA CN110070113B (en) | 2019-03-29 | 2019-03-29 | Training method and device for training set |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110070113A true CN110070113A (en) | 2019-07-30 |
CN110070113B CN110070113B (en) | 2021-03-30 |
Family
ID=67366818
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910252738.XA Active CN110070113B (en) | 2019-03-29 | 2019-03-29 | Training method and device for training set |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110070113B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104809465A (en) * | 2014-01-23 | 2015-07-29 | 北京三星通信技术研究有限公司 | Classifier training method, target detection, segmentation or classification method and target detection, segmentation or classification device |
CN105488453A (en) * | 2015-11-30 | 2016-04-13 | 杭州全实鹰科技有限公司 | Detection identification method of no-seat-belt-fastening behavior of driver based on image processing |
CN106097340A (en) * | 2016-06-12 | 2016-11-09 | 山东大学 | A kind of method automatically detecting and delineating Lung neoplasm position based on convolution grader |
CN106778583A (en) * | 2016-12-07 | 2017-05-31 | 北京理工大学 | Vehicle attribute recognition methods and device based on convolutional neural networks |
CN107103187A (en) * | 2017-04-10 | 2017-08-29 | 四川省肿瘤医院 | The method and system of Lung neoplasm detection classification and management based on deep learning |
CN107833219A (en) * | 2017-11-28 | 2018-03-23 | 腾讯科技(深圳)有限公司 | Image-recognizing method and device |
CN107909572A (en) * | 2017-11-17 | 2018-04-13 | 合肥工业大学 | Pulmonary nodule detection method and system based on image enhancement |
CN107945875A (en) * | 2017-11-17 | 2018-04-20 | 合肥工业大学 | Pulmonary nodule detection method and system based on data enhancing |
CN107977963A (en) * | 2017-11-30 | 2018-05-01 | 北京青燕祥云科技有限公司 | Decision method, device and the realization device of Lung neoplasm |
CN108133476A (en) * | 2017-12-26 | 2018-06-08 | 安徽科大讯飞医疗信息技术有限公司 | A kind of Lung neoplasm automatic testing method and system |
CN108364006A (en) * | 2018-01-17 | 2018-08-03 | 超凡影像科技股份有限公司 | Medical Images Classification device and its construction method based on multi-mode deep learning |
CN108664924A (en) * | 2018-05-10 | 2018-10-16 | 东南大学 | A kind of multi-tag object identification method based on convolutional neural networks |
CN109086716A (en) * | 2018-08-01 | 2018-12-25 | 北京嘀嘀无限科技发展有限公司 | A kind of method and device of seatbelt wearing detection |
CN109508741A (en) * | 2018-11-09 | 2019-03-22 | 哈尔滨工业大学 | Method based on deep learning screening training set |
Also Published As
Publication number | Publication date |
---|---|
CN110070113B (en) | 2021-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108615236A (en) | A kind of image processing method and electronic equipment | |
CN102697446B (en) | Image processing apparatus and image processing method | |
CN110084275A (en) | A kind of choosing method and device of training sample | |
CN113129287A (en) | Automatic lesion mapping method for upper gastrointestinal endoscope image | |
CN109978015B (en) | Image processing method and device and endoscope system | |
CN112184699A (en) | Aquatic product health detection method, terminal device and storage medium | |
JP3842171B2 (en) | Tomographic image processing device | |
CN112232977A (en) | Aquatic product cultivation evaluation method, terminal device and storage medium | |
CN107625513A (en) | Enhancing shows Narrow-Band Imaging endoscopic system and its imaging method | |
CN112907544A (en) | Machine learning-based liquid dung character recognition method and system and handheld intelligent device | |
CN110110750A (en) | A kind of classification method and device of original image | |
CN111563439A (en) | Aquatic organism disease detection method, device and equipment | |
CN111493805A (en) | State detection device, method, system and readable storage medium | |
CN110070113A (en) | A kind of training method and device of training set | |
CN110097082A (en) | A kind of method for splitting and device of training set | |
CN110097080B (en) | Construction method and device of classification label | |
CN110110749A (en) | Image processing method and device in a kind of training set | |
CN109273084A (en) | The method and system of feature modeling are learned based on multi-modal ultrasound group | |
CN110083727A (en) | A kind of method and device of determining tag along sort | |
CN110874824B (en) | Image restoration method and device | |
CN110110746A (en) | A kind of method and device of determining tag along sort | |
CN110084276A (en) | A kind of method for splitting and device of training set | |
CN109993226A (en) | A kind of method for splitting and device of training set | |
CN110084280A (en) | A kind of method and device of determining tag along sort | |
CN110084278A (en) | A kind of method for splitting and device of training set |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |