CN108985302A - Dermoscopic image processing method, apparatus and device - Google Patents

Dermoscopic image processing method, apparatus and device

Info

Publication number
CN108985302A
CN108985302A (application number CN201810772239.9A)
Authority
CN
China
Prior art keywords
image
lens image
skin
processed
skin lens
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810772239.9A
Other languages
Chinese (zh)
Inventor
栾欣泽
王晓婷
何光宇
孟健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN201810772239.9A priority Critical patent/CN108985302A/en
Publication of CN108985302A publication Critical patent/CN108985302A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Abstract

The application discloses a dermoscopic image processing method, apparatus and device. The method comprises: receiving a dermoscopic image to be processed; performing image feature extraction on the dermoscopic image to be processed to obtain a feature vector of the image; taking the feature vector as the input parameter of a trained first classification model and obtaining a first classification result after processing by the first classification model; and determining, according to the first classification result, the skin lesion result for the dermoscopic image to be processed. By using the first classification model to identify skin lesions in dermoscopic images, the application obtains accurate skin lesion results with high recognition efficiency.

Description

Dermoscopic image processing method, apparatus and device
Technical field
This application relates to the field of image processing, and in particular to a dermoscopic image processing method, apparatus and device.
Background
With the rise of the family-doctor concept, more and more families wish to communicate with a doctor without leaving home in order to obtain professional diagnostic opinions. A number of home diagnostic instruments have therefore emerged, such as household blood pressure monitors, household glucometers and household ophthalmoscopes, and the dermatoscope, as a diagnostic instrument for skin diseases, is increasingly becoming indispensable equipment in many households. A dermatoscope is a skin microscope that can magnify the skin by several tens of times; it is a powerful tool for observing pigmented skin lesions, and the dermoscopic images it produces can be used to diagnose skin diseases.
At present, family doctors assess skin lesions manually from dermoscopic images. However, general family doctors still lack the training required to determine accurately and confidently whether a skin lesion needs a biopsy or a specialist referral. An effective method that can accurately identify skin lesions from dermoscopic images is therefore needed.
Summary of the invention
To solve the above problems, the present application provides a dermoscopic image processing method, apparatus and device. The specific technical solutions are as follows:
In a first aspect, the present application provides a dermoscopic image processing method, the method comprising:
receiving a dermoscopic image to be processed;
performing image feature extraction on the dermoscopic image to be processed to obtain a feature vector of the dermoscopic image to be processed;
taking the feature vector of the dermoscopic image to be processed as an input parameter of a trained first classification model, and obtaining a first classification result after processing by the first classification model;
determining, according to the first classification result, the skin lesion result for the dermoscopic image to be processed.
Optionally, before determining the skin lesion result for the dermoscopic image to be processed according to the first classification result, the method further comprises:
performing edge extraction on the dermoscopic image to be processed to obtain an edge-extracted image of the dermoscopic image to be processed;
taking the edge-extracted image of the dermoscopic image to be processed as an input parameter of a trained second classification model, and obtaining a second classification result after processing by the second classification model;
correspondingly, determining the skin lesion result for the dermoscopic image to be processed according to the first classification result specifically comprises:
combining the first classification result and the second classification result to determine the skin lesion result for the dermoscopic image to be processed.
Optionally, before performing image feature extraction on the dermoscopic image to be processed to obtain the feature vector of the dermoscopic image to be processed, the method further comprises:
performing fuzzy image enhancement on the dermoscopic image to be processed.
Optionally, before performing fuzzy image enhancement on the dermoscopic image to be processed, the method further comprises:
performing filtering and denoising on the dermoscopic image to be processed.
Optionally, before taking the feature vector of the dermoscopic image to be processed as an input parameter of the trained first classification model and obtaining the first classification result after processing by the first classification model, the method further comprises:
obtaining a first image training set, the first image training set comprising several dermoscopic images with skin lesion labels;
performing image feature extraction on each dermoscopic image in the first image training set to obtain a feature vector of each dermoscopic image;
training a pre-generated first classification model using the feature vectors of the dermoscopic images to obtain the trained first classification model.
Optionally, before taking the edge-extracted image of the dermoscopic image to be processed as an input parameter of the trained second classification model and obtaining the second classification result after processing by the second classification model, the method further comprises:
obtaining a second image training set, the second image training set comprising several dermoscopic images with skin lesion labels;
performing edge extraction on each dermoscopic image in the second image training set to obtain an edge-extracted image of each dermoscopic image;
training a pre-generated second classification model using the edge-extracted images of the dermoscopic images to obtain the trained second classification model.
Optionally, before performing image feature extraction on the preprocessed dermoscopic images or before performing edge extraction on the preprocessed dermoscopic images, the method further comprises:
preprocessing the dermoscopic images with skin lesion labels respectively, the preprocessing comprising filtering and denoising and fuzzy image enhancement.
Optionally, the preprocessing further comprises rotation by a preset angle and/or mirroring.
In a second aspect, the present application also provides a dermoscopic image processing apparatus, the apparatus comprising:
a receiving module, configured to receive a dermoscopic image to be processed;
a first extraction module, configured to perform image feature extraction on the dermoscopic image to be processed to obtain a feature vector of the dermoscopic image to be processed;
a first classification module, configured to take the feature vector of the dermoscopic image to be processed as an input parameter of a trained first classification model and obtain a first classification result after processing by the first classification model;
a determining module, configured to determine, according to the first classification result, the skin lesion result for the dermoscopic image to be processed.
Optionally, the apparatus further comprises:
a second extraction module, configured to perform edge extraction on the dermoscopic image to be processed to obtain an edge-extracted image of the dermoscopic image to be processed;
a second classification module, configured to take the edge-extracted image of the dermoscopic image to be processed as an input parameter of a trained second classification model and obtain a second classification result after processing by the second classification model;
correspondingly, the determining module is specifically configured to:
combine the first classification result and the second classification result to determine the skin lesion result for the dermoscopic image to be processed.
Optionally, the apparatus further comprises:
a first preprocessing module, configured to perform fuzzy image enhancement on the dermoscopic image to be processed.
Optionally, the apparatus further comprises:
a second preprocessing module, configured to perform filtering and denoising on the dermoscopic image to be processed.
Optionally, the apparatus further comprises:
a first obtaining module, configured to obtain a first image training set, the first image training set comprising several dermoscopic images with skin lesion labels;
a third extraction module, configured to perform image feature extraction on each dermoscopic image in the first image training set to obtain a feature vector of each dermoscopic image;
a first training module, configured to train a pre-generated first classification model using the feature vectors of the dermoscopic images to obtain the trained first classification model.
Optionally, the apparatus further comprises:
a second obtaining module, configured to obtain a second image training set, the second image training set comprising several dermoscopic images with skin lesion labels;
a fourth extraction module, configured to perform edge extraction on each dermoscopic image in the second image training set to obtain an edge-extracted image of each dermoscopic image;
a second training module, configured to train a pre-generated second classification model using the edge-extracted images of the dermoscopic images to obtain the trained second classification model.
Optionally, the apparatus further comprises:
a third preprocessing module, configured to preprocess the dermoscopic images with skin lesion labels respectively, the preprocessing comprising filtering and denoising and fuzzy image enhancement.
Optionally, the preprocessing further comprises rotation by a preset angle and/or mirroring.
In a third aspect, the present application provides a dermoscopic image processing device, the device comprising a memory and a processor,
wherein the memory is configured to store program code and to transfer the program code to the processor;
and the processor is configured to execute, according to instructions in the program code, the dermoscopic image processing method of any implementation of the first aspect.
In the dermoscopic image processing method provided by the present application, after a dermoscopic image to be processed is received, image feature extraction is performed on it to obtain its feature vector; the feature vector of the dermoscopic image to be processed is used as the input parameter of a trained first classification model, and a first classification result is obtained after processing by the first classification model; finally, the skin lesion result for the dermoscopic image to be processed is determined according to the first classification result. The application uses the first classification model to identify skin lesions in dermoscopic images. Compared with the manual assessment of skin lesions by insufficiently trained practitioners in the prior art, the application identifies dermoscopic images with a first classification model trained on a large number of samples, can obtain accurate skin lesion results, and achieves higher recognition efficiency.
In addition, the application can also use a second classification model to identify the edge-extracted image of the dermoscopic image, and finally combine the classification results of the first and second classification models to determine the skin lesion result, which further improves the accuracy of the skin lesion result.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a dermoscopic image processing method provided by an embodiment of the present application;
Fig. 2 is a flowchart of another dermoscopic image processing method provided by an embodiment of the present application;
Fig. 3 is a structural schematic diagram of a dermoscopic image processing apparatus provided by an embodiment of the present application;
Fig. 4 is a structural schematic diagram of another dermoscopic image processing apparatus provided by an embodiment of the present application;
Fig. 5 is a structural schematic diagram of a dermoscopic image processing device provided by an embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
A dermoscopic image is an image obtained with a dermatoscope. Because a dermoscopic image reflects the condition of skin lesions, a doctor can diagnose skin diseases on the basis of dermoscopic images. However, diagnosing skin diseases, and in particular judging whether a lesion requires a skin biopsy or a specialist referral, calls for specialized knowledge that a general family doctor may lack, so an accurate skin lesion result cannot always be given.
On this basis, the present application provides a dermoscopic image processing method that identifies skin lesions in dermoscopic images using a classification model trained on a large number of dermoscopic images, thereby determining whether a patient has a skin disease and, where applicable, which type of skin disease the patient has. Specifically, after receiving a dermoscopic image to be processed, the application performs image feature extraction on the image to obtain its feature vector. The feature vector is used as the input parameter of a trained first classification model, a first classification result is obtained after processing by the first classification model, and the skin lesion result for the dermoscopic image is finally determined from the first classification result. By using a classification model trained on a large number of samples to identify dermoscopic images automatically, the application obtains accurate skin lesion results with high recognition efficiency.
An embodiment of a dermoscopic image processing method provided by the present application is described in detail below with reference to Fig. 1, which is a flowchart of a dermoscopic image processing method provided by an embodiment of the present application. The method specifically comprises:
S101: receiving a dermoscopic image to be processed.
In the embodiment of the present application, the dermoscopic image to be processed is an image obtained with a dermatoscope. It reflects the condition of skin lesions, so whether the patient has a skin lesion can be determined on the basis of the dermoscopic image to be processed.
The embodiment of the present application takes the dermoscopic image to be processed as the processing object. The dermoscopic image to be processed may be a single image or multiple images; that is, the embodiment can identify skin lesion results for multiple dermoscopic images at the same time, which further improves recognition efficiency.
S102: performing image feature extraction on the dermoscopic image to be processed to obtain a feature vector of the dermoscopic image to be processed.
In the embodiment of the present application, after the dermoscopic image to be processed is received, image feature extraction is performed on it to obtain its feature vector, which characterizes the features of the dermoscopic image to be processed. In practice, the scale-invariant feature transform (SIFT) algorithm is commonly used to extract features from dermoscopic images; the specific feature extraction method is not restricted here.
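As an illustration of this step, the Python sketch below uses OpenCV's SIFT implementation and aggregates the variable number of descriptors into a single fixed-length vector by mean pooling. The pooling strategy is an assumption made for the example; the application does not specify how the descriptors are turned into the feature vector.

    import cv2
    import numpy as np

    def sift_feature_vector(gray_img, dim=128):
        """Extract SIFT descriptors and pool them into one fixed-length vector.

        Mean pooling is an illustrative simplification; a bag-of-features or
        other encoding could equally be used to obtain a fixed-length input
        for the classification model.
        """
        sift = cv2.SIFT_create()
        _, descriptors = sift.detectAndCompute(gray_img, None)
        if descriptors is None:          # no keypoints found
            return np.zeros(dim, dtype=np.float32)
        return descriptors.mean(axis=0)  # shape (128,)

    # Usage: vec = sift_feature_vector(cv2.imread("lesion.png", cv2.IMREAD_GRAYSCALE))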
In order to make the image content that reflects the skin lesion condition clearer, the embodiment of the present application first performs filtering and denoising on the dermoscopic image to be processed, before image feature extraction, so as to remove noise from the image.
In an optional embodiment, mean filtering can be used to denoise the dermoscopic image to be processed; this is a denoising method based on a linear function. Specifically, assume f(x, y) is an original image containing noise and g(x, y) is the image after filtering and denoising. During filtering, the value of g(x, y) is the average, determined by an averaging operation, of the gray values of the pixels in the image region adjacent to pixel (x, y). The averaging operation attenuates isolated outstanding pixels (i.e. noise points) in the filtered image and thereby suppresses noise interference. The denoising process can be expressed by formula (1):
g(x, y) = (1 / (m · n)) · Σ_{(s, t) ∈ S_xy} f(s, t)    (1)
where S_xy denotes the adjacent image region of size m × n centered on the point (x, y).
Alternatively, the functional operation in formula (1) can be equivalently replaced by a template computation, where the mean-filter template has size m × n. In practice the weights in the mean-filter template can be adjusted as required; for example, common 3 × 3 mean-filter templates use overall weights of 1/9, 1/10 and 1/16 respectively, and the individual weights can be adjusted to specific needs.
It is worth noting that mean filtering is only one way of filtering and denoising a dermoscopic image; the embodiment of the present application can also implement filtering and denoising in other ways, which are not introduced one by one here.
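A minimal sketch of mean-filter denoising in Python with OpenCV is shown below; the 3 x 3 uniform kernel with weight 1/9 is one of the common templates mentioned above.

    import cv2
    import numpy as np

    def mean_filter_denoise(gray_img, ksize=3):
        """Replace each pixel by the average gray value of its ksize x ksize
        neighbourhood, which suppresses isolated noise points."""
        # cv2.blur computes the normalized box filter, i.e. the m x n mean filter
        return cv2.blur(gray_img, (ksize, ksize))

    # Equivalent explicit template form: a 3 x 3 kernel with uniform weight 1/9
    kernel = np.ones((3, 3), dtype=np.float32) / 9.0
    # denoised = cv2.filter2D(gray_img, -1, kernel)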
In addition, since the dermoscopic image captured by the dermatoscope may be unclear, and in order to make the features in the dermoscopic image more distinct and thus obtain a more accurate skin lesion recognition result, the embodiment of the present application can also perform fuzzy image enhancement on the dermoscopic image to be processed before image feature extraction. Specifically, the fuzzy image enhancement can be performed after the above filtering and denoising of the dermoscopic image to be processed has been completed.
In practice, before fuzzy image enhancement is performed on the dermoscopic image to be processed, the image is first converted to grayscale, and the fuzzy image enhancement is then performed on the grayscale image. A specific fuzzy image enhancement procedure is introduced below:
Step 1): fuzzify the data of the dermoscopic image to be processed according to formula (2);
where f_ij denotes the gray value of pixel (i, j), L denotes the number of gray levels of the grayscale dermoscopic image to be processed, and μ_ij denotes the membership degree of pixel (i, j). When f_ij = 0, μ_ij takes its minimum value 0; when f_ij = L - 1, μ_ij takes its maximum value 1; that is, the value range of μ_ij is [0, 1].
Specifically, formula (2) transforms the dermoscopic image to be processed from the data space into the fuzzy space and determines the fuzzy matrix of the image, which contains the membership degree of every pixel.
Step 2): after μ_ij has been calculated, apply the fuzzy enhancement operation to the dermoscopic image to be processed using formula (3);
where μ_c denotes the crossover point; its value need not equal 0.5 and is generally set as required. I(μ_ij) denotes the membership value obtained after applying the fuzzy enhancement operation to the membership degree μ_ij.
Specifically, I(μ_ij) decreases the value of μ_ij when μ_ij > μ_c and increases it when μ_ij ≤ μ_c. When μ_ij ≤ μ_c, the nonlinear transformation increases μ_ij and therefore increases the low gray value f_ij of pixel (i, j); conversely, when μ_ij > μ_c, the nonlinear transformation decreases μ_ij and therefore decreases the high gray value f_ij of pixel (i, j). The embodiment of the present application uses this fuzzy enhancement operation to enhance the membership values of all pixels of the dermoscopic image to be processed.
Step 3): after the enhancement of the dermoscopic image to be processed in the fuzzy space has been computed with the nonlinear function, the membership degree μ_ij of each pixel must be inverse-transformed so that the dermoscopic image to be processed is converted back from the fuzzy space to the data space, giving the enhanced image. The inverse transformation is given by formula (4):
f′_ij = (L - 1) · μ_ij    (4)
where f′_ij denotes the gray value obtained after inverse-transforming μ_ij.
Specifically, the inverse transformation of formula (4) converts the membership degree of each pixel of the enhanced image back into a gray value, yielding the result of applying the fuzzy enhancement operation to the dermoscopic image to be processed.
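Formulas (2) and (3) appear only as figures in the published text, so the sketch below substitutes commonly used stand-ins: a linear membership function f / (L - 1) for the fuzzification and the standard fuzzy intensification operator around a crossover μ_c for the enhancement. Both are illustrative assumptions, not the application's own formulas; only the defuzzification step matches formula (4).

    import numpy as np

    def fuzzy_enhance(gray_img, L=256, mu_c=0.5, passes=1):
        """Fuzzify -> intensify -> defuzzify (stand-in for formulas (2)-(4))."""
        mu = gray_img.astype(np.float32) / (L - 1)   # assumed fuzzification, data space -> fuzzy space
        for _ in range(passes):                      # assumed intensification around the crossover mu_c
            mu = np.where(mu <= mu_c,
                          mu ** 2 / mu_c,
                          1.0 - (1.0 - mu) ** 2 / (1.0 - mu_c))
        enhanced = (L - 1) * mu                      # formula (4): fuzzy space -> data space
        return np.clip(enhanced, 0, L - 1).astype(np.uint8)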
Step 4): adjust the contrast of the enhanced image obtained in step 3) to obtain the final image after fuzzy image enhancement;
Specifically, the gray value of each pixel of the final contrast-adjusted image is calculated using formulas (5), (6), (7) and (8);
where f denotes the initial image, usually the dermoscopic image after filtering and denoising; g_k denotes the Gaussian function used as the convolution kernel; f_k denotes the image after the two-dimensional discrete convolution; k indexes the Gaussian functions; and K_k is a coefficient whose value satisfies formula (6). For example, three Gaussian functions can be chosen, with σ equal to 5, 20 and 100 respectively;
Specifically, formula (5) performs a two-dimensional discrete convolution of the initial image f to obtain the processed image f_k; then formula (7) computes, for each convolution, the ratio of the image f_k to the initial image f:
Finally, a linear weighted calculation is performed using formula (8) to adjust the image contrast and obtain the processing result f″;
where w_k is the weighting coefficient, usually taken as 1/2, and f″ is the adjusted gray value of the pixels of image f.
In the embodiment of the present application, the features of the dermoscopic image processed by steps 1) to 4) above are more distinct, which ultimately helps to obtain a more accurate skin lesion recognition result.
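Formulas (5) to (8) are likewise given only as figures. The sketch below shows a simple multi-scale Gaussian contrast adjustment in the spirit of the description: convolve with Gaussians of σ = 5, 20 and 100, form ratio terms between the image and its smoothed versions, and combine them with a linear weight. The exact ratio and weighting scheme is an assumption for illustration, not the application's formulas.

    import cv2
    import numpy as np

    def multiscale_contrast_adjust(gray_img, sigmas=(5, 20, 100), w=0.5):
        """Boost local contrast from ratios between the image and Gaussian-blurred
        copies of it at several scales (a stand-in for formulas (5)-(8))."""
        f = gray_img.astype(np.float32) + 1.0          # +1 avoids division by zero
        acc = np.zeros_like(f)
        for sigma in sigmas:
            f_k = cv2.GaussianBlur(f, (0, 0), sigma)   # convolution with Gaussian kernel g_k
            acc += w * (f / f_k)                       # weighted ratio term (assumed form)
        out = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX)
        return out.astype(np.uint8)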
After the dermoscopic image to be processed has successively undergone the above filtering and denoising and fuzzy image enhancement, and its feature vector has been extracted, the method continues with S103.
S103: taking the feature vector of the dermoscopic image to be processed as an input parameter of a trained first classification model, and obtaining a first classification result after processing by the first classification model.
In the embodiment of the present application, after the feature vector of the dermoscopic image to be processed is obtained in S102, the feature vector is used as the input parameter of the trained first classification model; the first classification model classifies on the basis of this feature vector and obtains the first classification result corresponding to the dermoscopic image to be processed.
In practice, before the first classification model is used to identify skin lesions in dermoscopic images, it is trained in advance; the specific training process is introduced later.
S104: determining, according to the first classification result, the skin lesion result for the dermoscopic image to be processed.
In the embodiment of the present application, after the first classification model outputs the first classification result for the dermoscopic image to be processed, the skin lesion result for the image is determined according to that classification result.
In an optional embodiment, the classification results of the first classification model are 0 and 1, where 0 represents no lesion (or absence of a particular skin lesion) and 1 represents a lesion (or presence of a particular skin lesion). In this case, the first classification result determines whether a skin lesion, or a particular skin lesion such as melanoma, is present in the dermoscopic image to be processed.
In another optional embodiment, the classification results of the first classification model are 0, 1, 2, ..., where 0 represents no lesion and 1, 2, ... each represent a skin disease type (such as melanoma). In this case, the first classification result determines both whether a skin lesion is present in the dermoscopic image to be processed and, when a lesion is present, the specific skin disease type.
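A small sketch of how such outputs might be mapped to readable results follows; the label scheme below (0 = no lesion, 1 = melanoma, 2 = basal cell carcinoma) is purely illustrative and not prescribed by the application.

    # Hypothetical mapping from model outputs to skin lesion results.
    LABELS = {0: "no lesion", 1: "melanoma", 2: "basal cell carcinoma"}

    def lesion_result(first_result):
        """Translate the first classification result into a skin lesion result."""
        return LABELS.get(int(first_result), "unknown label")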
The dermoscopic image processing method provided by the embodiment of the present application uses the trained first classification model to identify skin lesions in the dermoscopic image on the basis of image feature vectors and determines the patient's skin lesion result. Compared with the manual assessment of skin lesions by insufficiently trained practitioners in the prior art, the embodiment uses a first classification model trained on a large number of samples to identify dermoscopic images, can obtain accurate skin lesion results, and achieves higher recognition efficiency.
To determine the skin lesion result of a dermoscopic image more accurately, an embodiment of the present application also provides a dermoscopic image processing method that, on the basis of the above method embodiment, additionally classifies the dermoscopic image to be processed with a trained second classification model to obtain a second classification result, and finally combines the first and second classification results to determine the skin lesion result for the dermoscopic image to be processed. Compared with the above method embodiment, this embodiment can obtain a more accurate skin disease diagnosis result.
The embodiment of another dermoscopic image processing method provided by an embodiment of the present application is described in detail below with reference to Fig. 2, which is a flowchart of another dermoscopic image processing method provided by an embodiment of the present application. The method specifically comprises:
S201: receiving a dermoscopic image to be processed.
S202: performing image feature extraction on the dermoscopic image to be processed to obtain a feature vector of the dermoscopic image to be processed.
S203: taking the feature vector of the dermoscopic image to be processed as an input parameter of a trained first classification model, and obtaining a first classification result after processing by the first classification model.
S201-S203 in this embodiment are identical to S101-S103 in the above method embodiment and can be understood by reference to them; details are not repeated here.
S204: performing edge extraction on the dermoscopic image to be processed to obtain an edge-extracted image of the dermoscopic image to be processed.
Since the edge shape of a lesion region on the skin (for example, the irregular edge of a melanoma) also affects the diagnosis of the disease, the embodiment of the present application can additionally detect the skin lesion in the dermoscopic image on the basis of an edge-extracted image. Specifically, after the dermoscopic image to be processed has successively undergone filtering and denoising, fuzzy image enhancement and so on, edge extraction is performed on it to obtain its edge-extracted image, which characterizes the edge features of the dermoscopic image to be processed.
In practice, the Sobel operator can be used for edge extraction on the dermoscopic image. The Sobel operator comprises two direction templates, which are used to compute the horizontal and vertical edges of the dermoscopic image; the templates are each convolved with the image, and the gradient approximations of the horizontal and vertical edges are finally obtained.
As an illustration, let A denote the initial dermoscopic image and let G_x and G_y denote the gradient approximations of the horizontal and vertical edges respectively. The two direction templates are the standard 3 × 3 Sobel kernels,
[[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] and [[-1, -2, -1], [0, 0, 0], [+1, +2, +1]],
and G_x and G_y are obtained by convolving A with these templates.
In practice, after the gradient approximations of the horizontal and vertical edges have been obtained by the above calculation, the approximate overall gradient value of the dermoscopic image is computed according to the formula G = |H| + |V|, i.e. the sum of the absolute horizontal and vertical gradients. Through the above Sobel operator, the edge-extracted image of the dermoscopic image to be processed is obtained.
It is worth noting that the Sobel operator is only one way of implementing edge extraction; the embodiment of the present application does not exclude other ways.
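A short Python sketch of Sobel edge extraction, combining the two directional gradients as G = |H| + |V| as described above:

    import cv2
    import numpy as np

    def sobel_edge_image(gray_img):
        """Compute the Sobel gradient approximations in the horizontal and
        vertical directions and combine them as G = |H| + |V|."""
        gx = cv2.Sobel(gray_img, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient approximation
        gy = cv2.Sobel(gray_img, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient approximation
        g = np.abs(gx) + np.abs(gy)                           # G = |H| + |V|
        return cv2.convertScaleAbs(g)                         # back to an 8-bit edge image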
S205: taking the edge-extracted image of the dermoscopic image to be processed as an input parameter of a trained second classification model, and obtaining a second classification result after processing by the second classification model.
In the embodiment of the present application, after the edge-extracted image of the dermoscopic image to be processed is obtained in S204, the edge-extracted image is used as the input parameter of the trained second classification model; the second classification model classifies on the basis of the edge-extracted image and obtains the second classification result corresponding to the dermoscopic image to be processed.
In practice, before the second classification model is used to identify skin lesions in dermoscopic images, it is trained in advance; the specific training process is introduced later.
S206: combining the first classification result and the second classification result to determine the skin lesion result for the dermoscopic image to be processed.
After the first and second classification results are obtained, and in order to determine the skin lesion result for the dermoscopic image to be processed more accurately, the embodiment of the present application combines the first classification result and the second classification result to finally determine the skin lesion result.
In an optional embodiment, if the first classification result is no lesion and the second classification result is no lesion, the combined result, i.e. the final skin lesion result for the dermoscopic image to be processed, is no lesion; if the first classification result is a lesion and the second classification result is a lesion, the combined final skin lesion result is a lesion; if only one of the first and second classification results indicates a lesion, the combined final skin lesion result for the dermoscopic image to be processed is uncertain.
Likewise, when the classification results are specific skin disease types, the skin lesion result for the dermoscopic image to be processed can be determined to be a given skin disease type only when both the first and the second classification result identify that same skin disease type; otherwise the skin lesion result is uncertain.
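The agreement rule above can be written in a few lines; this is a minimal sketch of the combination step, with string labels chosen purely for readability.

    def combine_results(first_result, second_result):
        """Combine the two classification results: agreement gives a definite
        skin lesion result, disagreement gives 'uncertain'."""
        if first_result == second_result:
            return first_result          # e.g. "no lesion", "lesion" or "melanoma"
        return "uncertain"

    # Example: combine_results("melanoma", "melanoma") -> "melanoma"
    #          combine_results("lesion", "no lesion")  -> "uncertain"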
The embodiment of the present application uses the first classification model to identify skin lesions in the dermoscopic image on the basis of the image feature vector, uses the second classification model to identify skin lesions in the dermoscopic image on the basis of the edge-extracted image, and finally combines the two classification results to determine the skin lesion result for the dermoscopic image. Compared with the above method embodiment, this further improves the accuracy of the skin disease diagnosis.
In addition, before the first and second classification models are used to process dermoscopic images, the pre-generated first and second classification models first need to be trained. Specifically, the training method for the first classification model is as follows:
S1: obtaining a first image training set, the first image training set comprising several dermoscopic images with skin lesion labels.
S2: performing image feature extraction on each dermoscopic image in the first image training set to obtain a feature vector of each dermoscopic image;
S3: training a pre-generated first classification model using the feature vectors of the dermoscopic images to obtain the trained first classification model.
In the embodiment of the present application, several dermoscopic images with skin lesion labels are obtained as training samples to form the first image training set, which is used to train the pre-generated first classification model. In practice, the dermoscopic images with skin lesion labels may be labelled manually by professional dermatologists or obtained in other ways, which are not restricted here.
A skin lesion label may indicate whether the corresponding dermoscopic image shows a lesion, or it may indicate that the corresponding dermoscopic image shows a particular skin disease type; the form of the label is not limited here. The type of label determines the granularity of the skin lesion identification: if the labels of the training samples indicate whether there is a lesion, the final skin lesion result also indicates whether there is a lesion; if the labels of the training samples indicate particular skin disease types, the final skin lesion result likewise identifies a particular skin disease type.
In order to learn more fully from the dermoscopic images with skin lesion labels, the embodiment of the present application can rotate each dermoscopic image in the first image training set by a preset angle or mirror it to obtain angle-transformed images, and add the angle-transformed images to the first image training set as training samples. Through this preprocessing of the dermoscopic images, features at all angles can be learned more fully from the existing dermoscopic images, and the number of training samples is further expanded. It is worth noting that the angle-transformed images added to the first image training set carry the skin lesion labels of the corresponding original dermoscopic images.
In addition, before the first classification model is trained with the training samples in the first image training set, each training sample is preprocessed, including, in sequence, rotation by a preset angle and/or mirroring, filtering and denoising, and fuzzy image enhancement; the specific processing can be understood by reference to the description of the preceding method embodiment and is not repeated here.
In the embodiment of the present application, after image feature extraction is performed on the dermoscopic images with skin lesion labels in the first image training set, the feature vector of each dermoscopic image is obtained; the pre-generated first classification model is trained with these feature vectors to obtain the trained first classification model, which is used to identify skin lesions in dermoscopic images.
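As an illustration of S1-S3, the sketch below augments the labelled images by mirroring, extracts SIFT-based feature vectors (repeating the mean-pooling helper shown earlier so the example is self-contained) and fits a classifier. The application does not name a specific classifier type; the support vector machine used here is an assumption for illustration.

    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def feature_vector(gray_img, dim=128):
        """SIFT descriptors pooled by mean into a fixed-length vector (illustrative)."""
        sift = cv2.SIFT_create()
        _, desc = sift.detectAndCompute(gray_img, None)
        return np.zeros(dim, np.float32) if desc is None else desc.mean(axis=0)

    def train_first_model(image_paths, labels):
        """S1-S3: build the training set (with mirror augmentation), extract
        feature vectors and train the first classification model."""
        X, y = [], []
        for path, label in zip(image_paths, labels):   # S1: labelled dermoscopic images
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            img = cv2.blur(img, (3, 3))                # preprocessing: mean-filter denoising
            for sample in (img, cv2.flip(img, 1)):     # augmentation: original + mirrored copy
                X.append(feature_vector(sample))       # S2: feature extraction
                y.append(label)
        model = SVC()                                  # assumed classifier type
        model.fit(np.array(X), np.array(y))            # S3: train the first classification model
        return model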
The training method for the second classification model is similar to the above training method for the first classification model and can be understood by reference to it. Specifically, the training method for the second classification model is as follows:
S11: obtaining a second image training set, the second image training set comprising several dermoscopic images with skin lesion labels;
S12: performing edge extraction on each dermoscopic image in the second image training set to obtain an edge-extracted image of each dermoscopic image;
S13: training a pre-generated second classification model using the edge-extracted images of the dermoscopic images to obtain the trained second classification model.
The second image training set and the above first image training set may be the same image training set, i.e. contain the same training samples, or they may be different image training sets. The preprocessing of the dermoscopic images with skin lesion labels in the second image training set can be understood by reference to the preprocessing of the dermoscopic images with skin lesion labels in the first image training set and is not repeated here; the preprocessing includes rotation by a preset angle and/or mirroring, filtering and denoising, fuzzy image enhancement and so on.
In addition, in the process of training the second classification model, edge extraction is performed on the dermoscopic images with skin lesion labels in the second image training set to obtain the edge-extracted image of each dermoscopic image; the pre-generated second classification model is trained with these edge-extracted images to obtain the trained second classification model, which is used to identify skin lesions in dermoscopic images. The specific edge extraction process can be understood by reference to the description of the preceding method embodiment and is not repeated here.
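A corresponding sketch for S11-S13 follows; here the edge-extracted images are simply resized and flattened before being fed to a second classifier. The resizing, flattening and the classifier type are assumptions made for the example, since the application leaves the model architecture open.

    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def train_second_model(image_paths, labels, size=(128, 128)):
        """S11-S13: train the second classification model on edge-extracted images."""
        X, y = [], []
        for path, label in zip(image_paths, labels):
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)    # S12: Sobel edge extraction
            gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
            edge = cv2.convertScaleAbs(np.abs(gx) + np.abs(gy))
            X.append(cv2.resize(edge, size).flatten())         # fixed-length input (assumption)
            y.append(label)
        model = SVC()                                          # assumed classifier type
        model.fit(np.array(X), np.array(y))                    # S13: train the second classification model
        return model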
In the embodiment of the present application, after the first and second classification models have been trained with a large number of training samples, the trained first and second classification models are obtained; subsequently using both models at the same time to identify the skin lesions in a dermoscopic image and combining the two classification results makes it possible to obtain a skin disease diagnosis result more efficiently and more accurately.
Corresponding to the above method embodiments, the present application also provides a dermoscopic image processing apparatus. With reference to Fig. 3, which is a structural schematic diagram of a dermoscopic image processing apparatus provided by an embodiment of the present application, the apparatus comprises:
a receiving module 301, configured to receive a dermoscopic image to be processed;
a first extraction module 302, configured to perform image feature extraction on the dermoscopic image to be processed to obtain a feature vector of the dermoscopic image to be processed;
a first classification module 303, configured to take the feature vector of the dermoscopic image to be processed as an input parameter of a trained first classification model and obtain a first classification result after processing by the first classification model;
a determining module 304, configured to determine, according to the first classification result, the skin lesion result for the dermoscopic image to be processed.
An embodiment of the present application also provides another dermoscopic image processing apparatus. With reference to Fig. 4, which is a structural schematic diagram of another dermoscopic image processing apparatus provided by an embodiment of the present application, the apparatus not only comprises the modules shown in Fig. 3 but may also comprise:
a second extraction module 401, configured to perform edge extraction on the dermoscopic image to be processed to obtain an edge-extracted image of the dermoscopic image to be processed;
a second classification module 402, configured to take the edge-extracted image of the dermoscopic image to be processed as an input parameter of a trained second classification model and obtain a second classification result after processing by the second classification model;
correspondingly, the determining module 304 is specifically configured to:
combine the first classification result and the second classification result to determine the skin lesion result for the dermoscopic image to be processed.
The apparatus further comprises:
a first preprocessing module, configured to perform fuzzy image enhancement on the dermoscopic image to be processed.
The apparatus further comprises:
a second preprocessing module, configured to perform filtering and denoising on the dermoscopic image to be processed.
In order to train the first classification model, the apparatus further comprises:
a first obtaining module, configured to obtain a first image training set, the first image training set comprising several dermoscopic images with skin lesion labels;
a third extraction module, configured to perform image feature extraction on each dermoscopic image in the first image training set to obtain a feature vector of each dermoscopic image;
a first training module, configured to train a pre-generated first classification model using the feature vectors of the dermoscopic images to obtain the trained first classification model.
In order to train the second classification model, the apparatus further comprises:
a second obtaining module, configured to obtain a second image training set, the second image training set comprising several dermoscopic images with skin lesion labels;
a fourth extraction module, configured to perform edge extraction on each dermoscopic image in the second image training set to obtain an edge-extracted image of each dermoscopic image;
a second training module, configured to train a pre-generated second classification model using the edge-extracted images of the dermoscopic images to obtain the trained second classification model.
In order to improve the accuracy of training the first and second classification models, the apparatus further comprises:
a third preprocessing module, configured to preprocess the dermoscopic images with skin lesion labels respectively, the preprocessing comprising filtering and denoising and fuzzy image enhancement.
In order to enrich the training samples, the preprocessing further comprises rotation by a preset angle and/or mirroring.
The dermoscopic image processing apparatus provided by the embodiment of the present application uses the trained first classification model to identify skin lesions in the dermoscopic image on the basis of the image feature vector and determines the patient's skin lesion result. Compared with the prior art, the embodiment can obtain accurate skin lesion results with higher recognition efficiency.
In addition, the dermoscopic image processing apparatus provided by the embodiment of the present application can also use the second classification model to identify the skin lesions in the dermoscopic image on the basis of the edge-extracted image, and finally combine the two classification results to determine the skin lesion result for the dermoscopic image, which further improves the accuracy of the skin disease diagnosis.
Correspondingly, an embodiment of the present invention also provides a dermoscopic image processing device. As shown in Fig. 5, the device may comprise:
a processor 501, a memory 502, an input device 503 and an output device 504. The number of processors 501 in the dermoscopic image processing device may be one or more; one processor is taken as an example in Fig. 5. In some embodiments of the invention, the processor 501, memory 502, input device 503 and output device 504 may be connected by a bus or in other ways; connection by a bus is taken as an example in Fig. 5.
The memory 502 can be used to store software programs and modules, and the processor 501 executes the various functional applications and data processing of the dermoscopic image processing device by running the software programs and modules stored in the memory 502. The memory 502 may mainly comprise a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function, and the like. In addition, the memory 502 may comprise a high-speed random access memory and may also comprise a non-volatile memory, for example at least one magnetic disk storage device, flash memory device or other solid-state storage component. The input device 503 can be used to receive input numeric or character information and to generate signal inputs related to the user settings and function control of the dermoscopic image processing device.
Specifically, in this embodiment, the processor 501 loads the executable files corresponding to the processes of one or more application programs into the memory 502 according to the following instructions, and runs the application programs stored in the memory 502, thereby implementing the various functions of the above dermoscopic image processing method.
As for the apparatus embodiments, since they essentially correspond to the method embodiments, reference can be made to the descriptions of the method embodiments for the relevant parts. The apparatus embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e. they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. A person of ordinary skill in the art can understand and implement this without creative effort.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device comprising a series of elements comprises not only those elements but also other elements not explicitly listed, or also comprises elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device comprising that element.
The dermoscopic image processing method, apparatus and device provided by the embodiments of the present application have been introduced in detail above. Specific examples are used herein to illustrate the principles and implementation of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. At the same time, a person skilled in the art may, according to the idea of the present application, make changes to the specific implementation and application scope. In conclusion, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. a kind of skin lens image processing method, which is characterized in that the described method includes:
Receive skin lens image to be processed;
Image characteristics extraction is carried out to the skin lens image to be processed, obtain the feature of the skin lens image to be processed to Amount;
Using the feature vector of the skin lens image to be processed as the input parameter of trained first disaggregated model, pass through After the processing of first disaggregated model, the first classification results are obtained;
According to first classification results, the cutaneous lesions result on the skin lens image to be processed is determined.
2. skin lens image processing method according to claim 1, which is characterized in that described to be tied according to first classification Fruit, before determining the cutaneous lesions result on the skin lens image to be processed, further includes:
Edge extracting processing is carried out to the skin lens image to be processed, obtains the edge extracting of the skin lens image to be processed Image;
Using the edge extracting image of the skin lens image to be processed as the input parameter of trained second disaggregated model, After the processing of second disaggregated model, the second classification results are obtained;
Correspondingly, described according to first classification results, determine the cutaneous lesions on the skin lens image to be processed as a result, Specifically:
Comprehensive first classification results and second classification results, determine the skin disease on the skin lens image to be processed Become result.
3. skin lens image processing method according to claim 1 or 2, which is characterized in that described to the skin to be processed Skin mirror image carries out image characteristics extraction, before obtaining the feature vector of the skin lens image to be processed, further includes:
The processing of image enhanced fuzzy is carried out to the skin lens image to be processed.
4. skin lens image processing method according to claim 3, which is characterized in that described to the dermoscopy to be processed Image carries out before the processing of image enhanced fuzzy, further includes:
Denoising is filtered to the skin mirror image to be processed.
5. skin lens image processing method according to claim 1, which is characterized in that described by the dermoscopy to be processed Input parameter of the feature vector of image as trained first disaggregated model, by the processing of first disaggregated model Afterwards, before obtaining the first classification results, further includes:
The first training set of images is obtained, the first image training set includes several dermoscopy figures with cutaneous lesions label Picture;
Image characteristics extraction is carried out to each skin lens image in the first image training set respectively, obtains each dermoscopy The feature vector of image;
The first pre-generated disaggregated model is trained using the feature vector of each skin lens image, is obtained by training The first disaggregated model.
6. skin lens image processing method according to claim 2, which is characterized in that described by the dermoscopy to be processed Input parameter of the edge extracting image of image as trained second disaggregated model, by second disaggregated model After processing, before obtaining the second classification results, further includes:
The second training set of images is obtained, second training set of images includes several dermoscopy figures with cutaneous lesions label Picture;
Edge extracting processing is carried out to each skin lens image in second training set of images respectively, obtains each dermoscopy The edge extracting image of image;
The second pre-generated disaggregated model is trained using the edge extracting image of each skin lens image, obtain by The second trained disaggregated model.
7. The skin lens image processing method according to claim 5 or 6, characterized in that, before the performing image feature extraction on each preprocessed skin lens image respectively, or before the performing edge extraction processing on each preprocessed skin lens image respectively, the method further comprises:
preprocessing each of the skin lens images with cutaneous lesion labels respectively, the preprocessing comprising filtering and denoising processing and image fuzzy enhancement processing.
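For illustration only: claim 7's preprocessing simply chains the two operations sketched after claims 3 and 4; a minimal composition using those hypothetical denoise and fuzzy_enhance helpers.

import cv2

def preprocess_training_image(image_bgr):
    filtered = denoise(image_bgr)                                  # filtering and denoising first
    channels = [fuzzy_enhance(c) for c in cv2.split(filtered)]     # then fuzzy enhancement, per channel
    return cv2.merge(channels)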
8. The skin lens image processing method according to claim 7, characterized in that the preprocessing further comprises rotation by a preset angle and/or mirroring.
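For illustration only: a sketch of the rotation and mirroring named in claim 8, with a 90-degree step and a horizontal flip as example choices for the preset angle and the mirroring axis.

import cv2

def augment(image_bgr, angle=90):
    # Rotation by a preset angle plus a horizontal mirror image of the training sample.
    h, w = image_bgr.shape[:2]
    matrix = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image_bgr, matrix, (w, h))
    mirrored = cv2.flip(image_bgr, 1)
    return [rotated, mirrored]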
9. A skin lens image processing device, characterized in that the device comprises:
a receiving module, configured to receive a skin lens image to be processed;
a first extraction module, configured to perform image feature extraction on the skin lens image to be processed to obtain a feature vector of the skin lens image to be processed;
a first classification module, configured to take the feature vector of the skin lens image to be processed as an input parameter of a trained first classification model and obtain a first classification result after processing by the first classification model;
a determining module, configured to determine, according to the first classification result, a cutaneous lesion result on the skin lens image to be processed.
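For illustration only: one way to read the module decomposition of claim 9 in code; the class and method names below are illustrative, not taken from the patent, and the hypothetical extract_feature_vector helper from the sketch after claim 1 is reused.

class SkinLensImageProcessor:
    # The methods mirror the modules listed in claim 9; names are illustrative only.
    def __init__(self, first_model):
        self.first_model = first_model                       # trained first classification model

    def receive(self, image_bgr):                            # receiving module
        self.image = image_bgr
        return self.image

    def extract(self):                                       # first extraction module
        self.feature = extract_feature_vector(self.image)    # see the sketch after claim 1
        return self.feature

    def classify(self):                                      # first classification module
        self.first_result = self.first_model.predict([self.feature])[0]
        return self.first_result

    def determine(self):                                     # determining module
        return self.first_result                             # cutaneous lesion result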
10. Skin lens image processing equipment, characterized in that the equipment comprises a memory and a processor,
the memory being configured to store program code and transmit the program code to the processor;
the processor being configured to execute, according to instructions in the program code, the skin lens image processing method according to any one of claims 1 to 8.
CN201810772239.9A 2018-07-13 2018-07-13 A kind of skin lens image processing method, device and equipment Pending CN108985302A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810772239.9A CN108985302A (en) 2018-07-13 2018-07-13 A kind of skin lens image processing method, device and equipment

Publications (1)

Publication Number Publication Date
CN108985302A true CN108985302A (en) 2018-12-11

Family

ID=64537544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810772239.9A Pending CN108985302A (en) 2018-07-13 2018-07-13 A kind of skin lens image processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN108985302A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111772588A (en) * 2020-07-29 2020-10-16 天津大学 Classification method of skin mirror images based on neural network ensemble learning
CN111797923A (en) * 2020-07-03 2020-10-20 北京阅视智能技术有限责任公司 Training method of image classification model, and image classification method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201806692U (en) * 2009-12-31 2011-04-27 中国人民解放军空军总医院 Multispectral dermoscopy image automatic analytical instrument for diagnosing malignant melanocyte tumour
CN103646398A (en) * 2013-12-04 2014-03-19 山西大学 Dermoscopy lesion automatic segmentation method
CN103778441A (en) * 2014-02-26 2014-05-07 东南大学 Dezert-Smarandache Theory (DSmT) and Hidden Markov Model (HMM) aircraft sequence target recognition method
CN104680498A (en) * 2015-03-24 2015-06-03 江南大学 Medical image segmentation method based on improved gradient vector flow model
CN106682435A (en) * 2016-12-31 2017-05-17 西安百利信息科技有限公司 System and method for automatically detecting lesions in medical image through multi-model fusion
CN107203999A (en) * 2017-04-28 2017-09-26 北京航空航天大学 A kind of skin lens image automatic segmentation method based on fully convolutional neural networks
CN107464230A (en) * 2017-08-23 2017-12-12 京东方科技集团股份有限公司 Image processing method and device
CN107729948A (en) * 2017-10-31 2018-02-23 京东方科技集团股份有限公司 Image processing method and device, computer product and storage medium
US20180130203A1 (en) * 2016-11-06 2018-05-10 International Business Machines Corporation Automated skin lesion segmentation using deep side layers
CN108198620A (en) * 2018-01-12 2018-06-22 洛阳飞来石软件开发有限公司 A kind of skin disease intelligent auxiliary diagnosis system based on deep learning

Similar Documents

Publication Publication Date Title
CN110428475B (en) Medical image classification method, model training method and server
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
CN107895367B (en) Bone age identification method and system and electronic equipment
CN106056595B (en) Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules
Kumar et al. Resnet-based approach for detection and classification of plant leaf diseases
CN109241967B (en) Thyroid ultrasound image automatic identification system based on deep neural network, computer equipment and storage medium
CN112508850B (en) Deep learning-based method for detecting malignant area of thyroid cell pathological section
CN108446621A (en) Bank slip recognition method, server and computer readable storage medium
CN109389584A (en) Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN
CN109447981A (en) Image-recognizing method and Related product
CN111291825A (en) Focus classification model training method and device, computer equipment and storage medium
CN113012155A (en) Bone segmentation method in hip image, electronic device, and storage medium
CN112233777A (en) Gallstone automatic identification and segmentation system based on deep learning, computer equipment and storage medium
Zhao et al. Fine-grained diabetic wound depth and granulation tissue amount assessment using bilinear convolutional neural network
CN108985302A (en) A kind of skin lens image processing method, device and equipment
WO2024074921A1 (en) Distinguishing a disease state from a non-disease state in an image
CN114757908A (en) Image processing method, device and equipment based on CT image and storage medium
Hatano et al. Classification of osteoporosis from phalanges CR images based on DCNN
CN110930373A (en) Pneumonia recognition device based on neural network
CN113706514A (en) Focus positioning method, device and equipment based on template image and storage medium
CN103268494A (en) Parasite egg identifying method based on sparse representation
CN108876776A (en) A kind of method of generating classification model, eye fundus image classification method and device
CN114010227B (en) Right ventricle characteristic information identification method and device
CN113344911B (en) Method and device for measuring size of calculus
CN115880358A (en) Construction method of positioning model, positioning method of image mark points and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20181211)