CN109740611A - Tongue image analysis method and device - Google Patents

Tongue image analysis method and device

Info

Publication number
CN109740611A
Authority
CN
China
Prior art keywords
tongue
image
patient
region
pixel region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910075131.9A
Other languages
Chinese (zh)
Inventor
代超
何帆
刘立
周振
陈金
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHINA POWER HEALTH CLOUD TECHNOLOGY Co.,Ltd.
China Japan Friendship Hospital
Original Assignee
Zhongdian Health Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongdian Health Cloud Technology Co Ltd filed Critical Zhongdian Health Cloud Technology Co Ltd
Priority to CN201910075131.9A priority Critical patent/CN109740611A/en
Publication of CN109740611A publication Critical patent/CN109740611A/en
Pending legal-status Critical Current

Abstract

An embodiment of the present application provides a tongue image analysis method and device. A patient image collected while each patient extends the tongue is obtained, and a corresponding tongue area image is extracted from each patient image according to a pre-trained tongue segmentation model. For each extracted tongue area image, a foreground pixel region and a background pixel region in the tongue area image are determined, and a corresponding tongue image is extracted from the tongue area image according to the foreground pixel region and the background pixel region, so that the patient's tongue image can be extracted accurately. Each tongue image is then labeled according to its corresponding patient category to generate a tongue image sample set, and a deep learning algorithm classifies and identifies patients' tongue images, so that whether a patient has fatty liver can be monitored quickly. Fatty liver monitoring can therefore be carried out at any time without the patient visiting a hospital for consultation, which is convenient for the patient.

Description

Tongue image analysis method and device
Technical field
This application relates to the field of computer technology, and in particular to a tongue image analysis method and device.
Background art
Integrating traditional Chinese and Western medicine has long been an important topic in medicine, and how to bring approaches from traditional Chinese medicine to physicians trained in Western medicine and provide efficient and convenient medical services for a large population has become a question for the field. For the discovery and diagnosis of fatty liver, the current approach is still to determine whether a patient has fatty liver from certain body indexes based on diagnostic data such as blood tests. However, in the early stage of fatty liver formation the patient is unaware of the condition and has no motivation to go to a hospital for related examinations such as blood tests, so by the time fatty liver is diagnosed, the patient's condition is already very serious.
Summary of the invention
In order to overcome the above deficiencies in the prior art, the purpose of the present application is to provide a tongue image analysis method and device to solve or alleviate the above problems.
To achieve the above purpose, the technical solutions adopted by the embodiments of the present application are as follows:
In a first aspect, an embodiment of the present application provides a tongue image analysis method applied to an electronic device, the method comprising:
obtaining a patient image collected when each patient extends the tongue, and extracting a corresponding tongue area image from each patient image according to a pre-trained tongue segmentation model;
for each extracted tongue area image, determining a foreground pixel region and a background pixel region in the tongue area image, and extracting a corresponding tongue image from the tongue area image according to the foreground pixel region and the background pixel region;
labeling each tongue image according to the patient category corresponding to the tongue image to generate a tongue image sample set, wherein the patient category is a fatty liver patient category or a non-fatty liver patient category;
performing classification training on a preset classification network according to the tongue image sample set to obtain a fatty liver identification model.
In a possible embodiment, the step of extracting the corresponding tongue area image from each patient image according to the pre-trained tongue segmentation model comprises:
detecting position information of the tongue region in each patient image according to the pre-trained tongue segmentation model, the position information including the abscissa and ordinate of an edge corner of the tongue region and the occupied-area information of the tongue region;
extracting the corresponding tongue area image from each patient image according to the detected position information of the tongue region in that patient image.
In a possible embodiment, the method further comprises:
training the tongue segmentation model in advance, which specifically comprises:
labeling in advance the position information of the corresponding tongue region in the patient image collected when each patient extends the tongue, the position information including the abscissa and ordinate of an edge corner of the tongue region and the occupied-area information of the tongue region;
iteratively training a deep neural network with each patient image as input and the position information of the tongue region labeled in that patient image as output, and outputting the corresponding tongue segmentation model when a training termination condition is reached.
In a possible embodiment, the step of extracting the corresponding tongue image from the tongue area image according to the foreground pixel region and the background pixel region comprises:
taking the foreground pixel region and the background pixel region in the tongue area image as input, and performing image segmentation on the tongue area image using the GrabCut algorithm to obtain a corresponding segmented image;
subtracting the corresponding segmented image from each tongue area image to obtain a tongue segmented image, and selecting the largest connected region from the connected regions of the tongue segmented image as the tongue image.
In a possible embodiment, the step of determining the foreground pixel region and the background pixel region in the tongue area image comprises:
generating a first box at the center of the tongue area image as the foreground pixel region, and generating a corresponding second box at each edge corner of the tongue area image as a background pixel region, wherein the first box and the second boxes have the same side length.
In a possible embodiment, after the step of performing classification training on the preset classification network according to the tongue image sample set to obtain the fatty liver identification model, the method further comprises:
inputting the tongue image of a patient to be tested into the fatty liver identification model to obtain a confidence that the patient to be tested has fatty liver;
if the confidence that the patient to be tested has fatty liver is greater than a set confidence threshold, determining that the patient to be tested has fatty liver, and otherwise determining that the patient to be tested does not have fatty liver.
In a second aspect, an embodiment of the present application further provides a tongue image analysis apparatus applied to an electronic device, the apparatus comprising:
an acquisition module, configured to obtain a patient image collected when each patient extends the tongue, and extract a corresponding tongue area image from each patient image according to a pre-trained tongue segmentation model;
a determination module, configured to determine, for each extracted tongue area image, a foreground pixel region and a background pixel region in the tongue area image, and extract a corresponding tongue image from the tongue area image according to the foreground pixel region and the background pixel region;
a labeling module, configured to label each tongue image according to the patient category corresponding to the tongue image to generate a tongue image sample set, wherein the patient category is a fatty liver patient category or a non-fatty liver patient category;
a classification training module, configured to perform classification training on a preset classification network according to the tongue image sample set to obtain a fatty liver identification model.
In a third aspect, an embodiment of the present application further provides a readable storage medium on which a computer program is stored, the computer program, when executed, implementing the above tongue image analysis method.
Compared with the prior art, the present application has the following advantages:
The embodiments of the present application provide a tongue image analysis method and device. A patient image collected while each patient extends the tongue is obtained, and a corresponding tongue area image is extracted from each patient image according to a pre-trained tongue segmentation model. For each extracted tongue area image, a foreground pixel region and a background pixel region in the tongue area image are determined, and a corresponding tongue image is extracted from the tongue area image according to these regions, so that the patient's tongue image can be extracted accurately. Each tongue image is then labeled according to its corresponding patient category to generate a tongue image sample set, and a deep learning algorithm classifies and identifies patients' tongue images, so that whether a patient has fatty liver can be monitored quickly. Fatty liver monitoring can therefore be carried out at any time without the patient visiting a hospital for consultation, which is convenient for the patient.
Brief description of the drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present application and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other relevant drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of the tongue image analysis method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of the positions of the foreground pixel region and the background pixel regions in a tongue area image provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of a tongue image extracted from a tongue area image provided by an embodiment of the present application;
Fig. 4 is a first functional block diagram of the tongue image analysis apparatus provided by an embodiment of the present application;
Fig. 5 is a second functional block diagram of the tongue image analysis apparatus provided by an embodiment of the present application;
Fig. 6 is a schematic structural block diagram of the electronic device for implementing the above tongue image analysis method provided by an embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. The components of the embodiments of the present application, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of the present application provided in the drawings is not intended to limit the claimed scope of the present application, but merely represents selected embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
It should also be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings.
Referring to Fig. 1, which is a schematic flowchart of a tongue image analysis method provided by an embodiment of the present application, it should be understood that, in other embodiments, the order of some steps of the tongue image analysis method of this embodiment may be interchanged according to actual needs, or some steps may be omitted or deleted. The detailed steps of the tongue image analysis method are described as follows.
Step S210: obtain a patient image collected when each patient extends the tongue, and extract a corresponding tongue area image from each patient image according to a pre-trained tongue segmentation model.
In a possible example, when collecting a patient image, the patient may be asked to extend the tongue toward the camera, so as to ensure as far as possible that the tongue appears in the center of the patient image to be collected; each patient's image is then collected by the camera. Next, position information of the tongue region is detected in each patient image according to the pre-trained tongue segmentation model, the position information including the abscissa and ordinate of an edge corner of the tongue region and the occupied-area information of the tongue region. The corresponding tongue area image is then extracted from each patient image according to the detected position information of the tongue region in that patient image.
Before this, the present embodiment also needs to train the tongue segmentation model in advance. For example, the position information (x, y, w, h) of the corresponding tongue region may be labeled in advance in the patient image collected when each patient extends the tongue, where the position information (x, y, w, h) includes the abscissa x and ordinate y of an edge corner of the tongue region and the occupied-area information w, h of the tongue region. Optionally, x may be the abscissa of the upper-left corner of the tongue region, y the ordinate of the upper-left corner of the tongue region, w the width of the tongue region, and h the height of the tongue region.
On this basis, a deep neural network is iteratively trained with each patient image as input and the position information (x, y, w, h) of the tongue region labeled in that patient image as output, and the corresponding tongue segmentation model is output when a training termination condition is reached.
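As an illustration only, the following is a minimal sketch of the bounding-box regression training just described, written in Python with PyTorch. The ResNet-18 backbone, the smooth L1 loss, and the normalization of (x, y, w, h) to [0, 1] are assumptions of the sketch; the application only specifies a deep neural network trained with patient images as input and the labeled (x, y, w, h) as output.

```python
# Sketch of training a network to regress the tongue bounding box (x, y, w, h).
# Backbone, loss and normalization are assumptions, not details from the application.
import torch
import torch.nn as nn
import torchvision

class TongueBoxRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, 4)  # predict (x, y, w, h)
        self.backbone = backbone

    def forward(self, images):
        # images: (N, 3, H, W); outputs squashed to [0, 1] to match normalized boxes
        return torch.sigmoid(self.backbone(images))

def train_step(model, optimizer, images, boxes):
    """One iteration: patient images in, labeled boxes (normalized to [0, 1]) as targets."""
    optimizer.zero_grad()
    loss = nn.functional.smooth_l1_loss(model(images), boxes)
    loss.backward()
    optimizer.step()
    return loss.item()
```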
As a result, the trained tongue segmentation model has the ability to detect the position information of the tongue region in each patient image. In actual detection, the position information (x, y, w, h) of the tongue region is first detected in each patient image, the tongue region is located with this information, and the corresponding tongue area image is then cropped from the patient image.
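For illustration, a minimal sketch of cropping the tongue area image from a predicted box follows, assuming the optional convention above (x, y is the upper-left corner in pixels, w and h the width and height); the function name and the clamping behavior are choices of the sketch, not of the application.

```python
# Sketch: crop the tongue area image from a predicted (x, y, w, h) box.
import numpy as np

def crop_tongue_region(patient_image: np.ndarray, box) -> np.ndarray:
    """patient_image: H x W x 3 array; box: predicted (x, y, w, h) in pixels."""
    x, y, w, h = [int(round(v)) for v in box]
    H, W = patient_image.shape[:2]
    x, y = max(x, 0), max(y, 0)                      # clamp the box to the image
    return patient_image[y:min(y + h, H), x:min(x + w, W)].copy()
```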
Step S220: for each extracted tongue area image, determine the foreground pixel region and the background pixel region in the tongue area image, and extract the corresponding tongue image from the tongue area image according to the foreground pixel region and the background pixel region.
In a possible example, a first box may be generated at the center of the tongue area image as the foreground pixel region, and a corresponding second box may be generated at each edge corner of the tongue area image as a background pixel region, where the first box and the second boxes have the same side length. For example, in the tongue area image shown in Fig. 2, the white box in the middle is the first box, i.e., the foreground pixel region, and the white boxes at the four edge corners are the second boxes, i.e., the background pixel regions. Assuming the width and height of the tongue area image are w and h respectively, and the side length of the first box and the second boxes is d, the coordinates of the upper-left corner of the first box serving as the foreground pixel region are (w/2 - d/2, h/2 - d/2), and the coordinates of the upper-left corners of the four second boxes serving as the background pixel regions are (0, 0), (w-d-1, h-d-1), (0, h-d-1), and (w-d-1, 0), respectively.
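The corner coordinates above translate directly into code; the small helper below is only illustrative, and its name and the (x, y, side) return format are choices of the sketch.

```python
# Seed-box layout described above: one foreground box at the image center and four
# equally sized background boxes at the corners, each as (upper-left x, upper-left y, side).
def seed_boxes(w: int, h: int, d: int):
    foreground = (w // 2 - d // 2, h // 2 - d // 2, d)
    background = [(0, 0, d),
                  (w - d - 1, h - d - 1, d),
                  (0, h - d - 1, d),
                  (w - d - 1, 0, d)]
    return foreground, background
```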
Then, the foreground pixel region and the background pixel region in the tongue area image are taken as input, and image segmentation is performed on the tongue area image using the GrabCut algorithm to obtain a corresponding segmented image. The corresponding segmented image is then subtracted from each tongue area image to obtain a tongue segmented image, and the largest connected region is selected from the connected regions of the tongue segmented image as the tongue image.
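The following is a minimal sketch of this step using OpenCV's mask-initialized GrabCut and connected-component analysis. Seeding the mask with the boxes from the previous step, running five GrabCut iterations, and using the foreground mask in place of the "subtract the segmented image" wording are assumptions of the sketch rather than details given in the application.

```python
# Sketch: GrabCut segmentation seeded with the foreground/background boxes,
# followed by keeping only the largest connected region as the tongue image.
import cv2
import numpy as np

def extract_tongue(tongue_area: np.ndarray, fg_box, bg_boxes) -> np.ndarray:
    """tongue_area: H x W x 3 uint8 image; fg_box/bg_boxes: (x, y, side) seed boxes."""
    mask = np.full(tongue_area.shape[:2], cv2.GC_PR_BGD, np.uint8)
    x, y, d = fg_box
    mask[y:y + d, x:x + d] = cv2.GC_FGD               # sure-foreground seed
    for bx, by, bd in bg_boxes:
        mask[by:by + bd, bx:bx + bd] = cv2.GC_BGD     # sure-background seeds
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(tongue_area, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    # Keep only the largest connected region of the foreground mask.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg, 8, cv2.CV_32S)
    if n <= 1:
        return np.zeros_like(tongue_area)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    keep = (labels == largest).astype(np.uint8) * 255
    return cv2.bitwise_and(tongue_area, tongue_area, mask=keep)
```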
Step S230: label each tongue image according to the patient category corresponding to the tongue image, and generate a tongue image sample set.
In the present embodiment, the patient category is the fatty liver patient category or the non-fatty liver patient category.
Step S240: perform classification training on a preset classification network according to the tongue image sample set to obtain a fatty liver identification model.
Optionally, the preset classification network may be a VGG (Visual Geometry Group) classification network, an InceptionV3 image classification network, or other image classification networks; the present embodiment does not impose any limitation on this. The fatty liver identification model obtained by training has the ability to identify the confidence that a patient has fatty liver. In detail, the tongue image of a patient to be tested is input into the fatty liver identification model to obtain the confidence that the patient to be tested has fatty liver; if this confidence is greater than a set confidence threshold, the patient to be tested is determined to have fatty liver, and otherwise the patient to be tested is determined not to have fatty liver.
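As an illustration of the thresholding step, the sketch below assumes a two-class classifier (non-fatty liver / fatty liver) whose output is converted to a probability with softmax; the 0.5 threshold and the helper name are assumptions, since the application only states that the confidence is compared against a set value.

```python
# Sketch: binary fatty-liver decision from the identification model's output.
import torch

def has_fatty_liver(model: torch.nn.Module, tongue_image: torch.Tensor,
                    threshold: float = 0.5) -> bool:
    """tongue_image: (3, H, W) tensor preprocessed the same way as the training samples."""
    model.eval()
    with torch.no_grad():
        logits = model(tongue_image.unsqueeze(0))        # shape (1, 2)
        confidence = torch.softmax(logits, dim=1)[0, 1]  # probability of the fatty liver class
    return confidence.item() > threshold
```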
Thus, in the tongue image analysis method provided by this embodiment, a patient image collected while each patient extends the tongue is obtained, and a corresponding tongue area image is extracted from each patient image according to a pre-trained tongue segmentation model. For each extracted tongue area image, the foreground pixel region and the background pixel region in the tongue area image are determined, and the corresponding tongue image is extracted from the tongue area image according to the foreground pixel region and the background pixel region, so that the patient's tongue image can be extracted accurately. Each tongue image is then labeled according to its corresponding patient category to generate a tongue image sample set, and a deep learning algorithm classifies and identifies patients' tongue images, so that whether a patient has fatty liver can be monitored quickly. Fatty liver monitoring can therefore be carried out at any time without the patient visiting a hospital for consultation, which is convenient for the patient.
Further, referring to Fig. 4, an embodiment of the present application further provides a tongue image analysis apparatus 200; the functions implemented by the tongue image analysis apparatus 200 correspond to the steps executed by the above tongue image analysis method. As shown in Fig. 4, the tongue image analysis apparatus 200 may include the following modules, whose functions are described in detail below.
An acquisition module 210, configured to obtain a patient image collected when each patient extends the tongue, and extract a corresponding tongue area image from each patient image according to a pre-trained tongue segmentation model.
A determination module 220, configured to determine, for each extracted tongue area image, a foreground pixel region and a background pixel region in the tongue area image, and extract a corresponding tongue image from the tongue area image according to the foreground pixel region and the background pixel region.
A labeling module 230, configured to label each tongue image according to the patient category corresponding to the tongue image to generate a tongue image sample set, wherein the patient category is a fatty liver patient category or a non-fatty liver patient category.
A classification training module 240, configured to perform classification training on a preset classification network according to the tongue image sample set to obtain a fatty liver identification model.
In a possible example, the acquisition module 210 extracts the corresponding tongue area image from each patient image in the following manner:
detecting position information of the tongue region in each patient image according to the pre-trained tongue segmentation model, the position information including the abscissa and ordinate of an edge corner of the tongue region and the occupied-area information of the tongue region;
extracting the corresponding tongue area image from each patient image according to the detected position information of the tongue region in that patient image.
In a possible example, referring to Fig. 5, the tongue image analysis apparatus 200 may further include:
a training module 209, configured to train the tongue segmentation model in advance, which specifically comprises:
labeling in advance the position information of the corresponding tongue region in the patient image collected when each patient extends the tongue, the position information including the abscissa and ordinate of an edge corner of the tongue region and the occupied-area information of the tongue region;
iteratively training a deep neural network with each patient image as input and the position information of the tongue region labeled in that patient image as output, and outputting the corresponding tongue segmentation model when a training termination condition is reached.
In a possible example, the determination module 220 is configured to extract the corresponding tongue image from the tongue area image in the following manner:
taking the foreground pixel region and the background pixel region in the tongue area image as input, and performing image segmentation on the tongue area image using the GrabCut algorithm to obtain a corresponding segmented image;
subtracting the corresponding segmented image from each tongue area image to obtain a tongue segmented image, and selecting the largest connected region from the connected regions of the tongue segmented image as the tongue image.
It can be understood that, for the specific operation of each functional module in this embodiment, reference may be made to the detailed description of the corresponding steps in the above method embodiment, which is not repeated here.
Further, referring to Fig. 6, which is a schematic structural block diagram of the electronic device 100 for the above tongue image analysis method provided by an embodiment of the present application. In this embodiment, the electronic device 100 may be implemented as a general bus architecture through a bus 110. Depending on the specific application of the electronic device 100 and the overall design constraints, the bus 110 may include any number of interconnecting buses and bridges. The bus 110 connects various circuits together, including a processor 120, a storage medium 130, and a bus interface 140. Optionally, the electronic device 100 may use the bus interface 140 to connect a network adapter 150 and the like via the bus 110. The network adapter 150 may be used to implement the signal processing functions of the physical layer in the electronic device 100 and to send and receive radio-frequency signals through an antenna. A user interface 160 may connect external devices such as a keyboard, a display, a mouse, or a joystick. The bus 110 may also connect various other circuits, such as timing sources, peripherals, voltage regulators, or power management circuits; these circuits are well known in the art and are therefore not described in detail.
Alternatively, the electronic device 100 may also be configured as a general-purpose processing system, commonly referred to as a chip, which includes one or more microprocessors providing processing functions and an external memory providing at least part of the storage medium 130, all linked together with other supporting circuits through an external bus architecture.
Alternatively, the electronic device 100 may be implemented with an ASIC (application-specific integrated circuit) having the processor 120, the bus interface 140, and the user interface 160, with at least part of the storage medium 130 integrated in a single chip; or the electronic device 100 may be implemented with one or more FPGAs (field-programmable gate arrays), PLDs (programmable logic devices), controllers, state machines, gate logic, discrete hardware components, any other suitable circuits, or any combination of circuits capable of performing the various functions described throughout this application.
The processor 120 is responsible for managing the bus 110 and for general processing (including executing the software stored on the storage medium 130). The processor 120 may be implemented with one or more general-purpose processors and/or special-purpose processors. Examples of the processor 120 include microprocessors, microcontrollers, DSP processors, and other circuits capable of executing software. Software should be interpreted broadly as meaning instructions, data, or any combination thereof, regardless of whether it is called software, firmware, middleware, microcode, hardware description language, or otherwise.
The storage medium 130 is shown in Fig. 6 as separate from the processor 120; however, those skilled in the art will readily appreciate that the storage medium 130, or any part of it, may be located outside the electronic device 100. For example, the storage medium 130 may include a transmission line, a carrier waveform modulated with data, and/or a computer product separate from the wireless node, and these media may be accessed by the processor 120 through the bus interface 140. Alternatively, the storage medium 130, or any part of it, may be integrated into the processor 120, for example as a cache and/or general-purpose registers.
The processor 120 may perform the above embodiments; specifically, the storage medium 130 may store the tongue image analysis apparatus 200, and the processor 120 may be used to execute the tongue image analysis apparatus 200.
Further, an embodiment of the present application also provides a non-volatile computer storage medium storing computer-executable instructions that can execute the tongue image analysis method in any of the above method embodiments.
In summary, the embodiments of the present application provide a tongue image analysis method and device. A patient image collected while each patient extends the tongue is obtained, and a corresponding tongue area image is extracted from each patient image according to a pre-trained tongue segmentation model. For each extracted tongue area image, the foreground pixel region and the background pixel region in the tongue area image are determined, and the corresponding tongue image is extracted from the tongue area image according to the foreground pixel region and the background pixel region, so that the patient's tongue image can be extracted accurately. Each tongue image is then labeled according to its corresponding patient category to generate a tongue image sample set, and a deep learning algorithm classifies and identifies patients' tongue images, so that whether a patient has fatty liver can be monitored quickly. Fatty liver monitoring can therefore be carried out at any time without the patient visiting a hospital for consultation, which is convenient for the patient.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus and method embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the drawings show possible architectures, functions, and operations of systems, methods, and computer program products according to multiple embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form an independent part, each module may exist separately, or two or more modules may be integrated to form an independent part.
Alternatively, the above may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium that a computer can access, or a data storage device such as an electronic device, server, or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid-state drive (SSD)), and the like.
It should be noted that, in this document, the terms "comprising", "including", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
It is obvious to those skilled in the art that the present application is not limited to the details of the above exemplary embodiments, and that the present application can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application. Therefore, in every respect, the embodiments are to be regarded as exemplary and non-restrictive, and the scope of the present application is defined by the appended claims rather than by the above description; all changes that fall within the meaning and scope of equivalents of the claims are therefore intended to be included in the present application. Any reference signs in the claims should not be construed as limiting the claims involved.

Claims (10)

1. A tongue image analysis method, applied to an electronic device, the method comprising:
obtaining a patient image collected when each patient extends the tongue, and extracting a corresponding tongue area image from each patient image according to a pre-trained tongue segmentation model;
for each extracted tongue area image, determining a foreground pixel region and a background pixel region in the tongue area image, and extracting a corresponding tongue image from the tongue area image according to the foreground pixel region and the background pixel region;
labeling each tongue image according to the patient category corresponding to the tongue image to generate a tongue image sample set, wherein the patient category is a fatty liver patient category or a non-fatty liver patient category;
performing classification training on a preset classification network according to the tongue image sample set to obtain a fatty liver identification model.
2. The tongue image analysis method according to claim 1, wherein the step of extracting the corresponding tongue area image from each patient image according to the pre-trained tongue segmentation model comprises:
detecting position information of the tongue region in each patient image according to the pre-trained tongue segmentation model, the position information including the abscissa and ordinate of an edge corner of the tongue region and the occupied-area information of the tongue region;
extracting the corresponding tongue area image from each patient image according to the detected position information of the tongue region in that patient image.
3. The tongue image analysis method according to claim 1, wherein the method further comprises:
training the tongue segmentation model in advance, which specifically comprises:
labeling in advance the position information of the corresponding tongue region in the patient image collected when each patient extends the tongue, the position information including the abscissa and ordinate of an edge corner of the tongue region and the occupied-area information of the tongue region;
iteratively training a deep neural network with each patient image as input and the position information of the tongue region labeled in that patient image as output, and outputting the corresponding tongue segmentation model when a training termination condition is reached.
4. The tongue image analysis method according to claim 1, wherein the step of extracting the corresponding tongue image from the tongue area image according to the foreground pixel region and the background pixel region comprises:
taking the foreground pixel region and the background pixel region in the tongue area image as input, and performing image segmentation on the tongue area image using the GrabCut algorithm to obtain a corresponding segmented image;
subtracting the corresponding segmented image from each tongue area image to obtain a tongue segmented image, and selecting the largest connected region from the connected regions of the tongue segmented image as the tongue image.
5. The tongue image analysis method according to claim 1, wherein the step of determining the foreground pixel region and the background pixel region in the tongue area image comprises:
generating a first box at the center of the tongue area image as the foreground pixel region, and generating a corresponding second box at each edge corner of the tongue area image as a background pixel region, wherein the first box and the second boxes have the same side length.
6. The tongue image analysis method according to claim 1, wherein, after the step of performing classification training on the preset classification network according to the tongue image sample set to obtain the fatty liver identification model, the method further comprises:
inputting the tongue image of a patient to be tested into the fatty liver identification model to obtain a confidence that the patient to be tested has fatty liver;
if the confidence that the patient to be tested has fatty liver is greater than a set confidence threshold, determining that the patient to be tested has fatty liver, and otherwise determining that the patient to be tested does not have fatty liver.
7. A tongue image analysis apparatus, applied to an electronic device, the apparatus comprising:
an acquisition module, configured to obtain a patient image collected when each patient extends the tongue, and extract a corresponding tongue area image from each patient image according to a pre-trained tongue segmentation model;
a determination module, configured to determine, for each extracted tongue area image, a foreground pixel region and a background pixel region in the tongue area image, and extract a corresponding tongue image from the tongue area image according to the foreground pixel region and the background pixel region;
a labeling module, configured to label each tongue image according to the patient category corresponding to the tongue image to generate a tongue image sample set, wherein the patient category is a fatty liver patient category or a non-fatty liver patient category;
a classification training module, configured to perform classification training on a preset classification network according to the tongue image sample set to obtain a fatty liver identification model.
8. The tongue image analysis apparatus according to claim 7, wherein the acquisition module extracts the corresponding tongue area image from each patient image in the following manner:
detecting position information of the tongue region in each patient image according to the pre-trained tongue segmentation model, the position information including the abscissa and ordinate of an edge corner of the tongue region and the occupied-area information of the tongue region;
extracting the corresponding tongue area image from each patient image according to the detected position information of the tongue region in that patient image.
9. The tongue image analysis apparatus according to claim 7, wherein the apparatus further comprises:
a training module, configured to train the tongue segmentation model in advance, which specifically comprises:
labeling in advance the position information of the corresponding tongue region in the patient image collected when each patient extends the tongue, the position information including the abscissa and ordinate of an edge corner of the tongue region and the occupied-area information of the tongue region;
iteratively training a deep neural network with each patient image as input and the position information of the tongue region labeled in that patient image as output, and outputting the corresponding tongue segmentation model when a training termination condition is reached.
10. The tongue image analysis apparatus according to claim 7, wherein the determination module is configured to extract the corresponding tongue image from the tongue area image in the following manner:
taking the foreground pixel region and the background pixel region in the tongue area image as input, and performing image segmentation on the tongue area image using the GrabCut algorithm to obtain a corresponding segmented image;
subtracting the corresponding segmented image from each tongue area image to obtain a tongue segmented image, and selecting the largest connected region from the connected regions of the tongue segmented image as the tongue image.
CN201910075131.9A 2019-01-25 2019-01-25 Tongue image analysis method and device Pending CN109740611A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910075131.9A CN109740611A (en) 2019-01-25 2019-01-25 Tongue image analysis method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910075131.9A CN109740611A (en) 2019-01-25 2019-01-25 Tongue image analysis method and device

Publications (1)

Publication Number Publication Date
CN109740611A true CN109740611A (en) 2019-05-10

Family

ID=66366239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910075131.9A Pending CN109740611A (en) 2019-01-25 2019-01-25 Tongue image analysis method and device

Country Status (1)

Country Link
CN (1) CN109740611A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592268A (en) * 2012-01-06 2012-07-18 清华大学深圳研究生院 Method for segmenting foreground image
CN102663760A (en) * 2012-04-23 2012-09-12 苏州大学 Location and segmentation method for windshield area of vehicle in images
CN103984959A (en) * 2014-05-26 2014-08-13 中国科学院自动化研究所 Data-driven and task-driven image classification method
CN104537379A (en) * 2014-12-26 2015-04-22 上海大学 High-precision automatic tongue partition method
CN105160346A (en) * 2015-07-06 2015-12-16 上海大学 Tongue coating greasyness identification method based on texture and distribution characteristics
CN105930798A (en) * 2016-04-21 2016-09-07 厦门快商通科技股份有限公司 Tongue image quick detection and segmentation method based on learning and oriented to handset application
CN106295139A (en) * 2016-07-29 2017-01-04 姹ゅ钩 A kind of tongue body autodiagnosis health cloud service system based on degree of depth convolutional neural networks
CN109087313A (en) * 2018-08-03 2018-12-25 厦门大学 A kind of intelligent tongue body dividing method based on deep learning

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
CARSTEN ROTHER et al.: ""GrabCut": interactive foreground extraction using iterated graph cuts", SIGGRAPH '04: Special Interest Group on Computer Graphics and Interactive Techniques *
何以然: "Research on image segmentation algorithms based on the visual attention mechanism and their applications", China Master's Theses Full-text Database, Information Science and Technology *
刘金珠 et al.: "Classification method for fatty liver ultrasound images based on threshold segmentation", Computer Engineering and Applications *
李刚 et al.: "Rapid diagnosis of fatty liver based on spectroscopic tongue diagnosis", Spectroscopy and Spectral Analysis *
李方玲: "Study on the symptom patterns and tongue image characteristics of fatty liver patients in a health-examination population", China Doctoral Dissertations Full-text Database, Medicine and Health Sciences *
王盛花: "Study on the tongue image characteristics of fatty liver patients in a health-examination population", China Master's Theses Full-text Database, Medicine and Health Sciences *
王盛花 et al.: "Application of TCM tongue diagnosis in the health management of fatty liver patients", Chinese Journal of Health Management *
王钧铭: "Research on the GrabCut color image segmentation algorithm", Video Engineering *
解博 et al.: "Design of an apricot kernel image recognition system based on machine vision", Electronic Science and Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210483A (en) * 2019-06-13 2019-09-06 上海鹰瞳医疗科技有限公司 Medical image lesion region dividing method, model training method and equipment
CN113033488A (en) * 2021-04-22 2021-06-25 脉景(杭州)健康管理有限公司 Medical feature recognition method and device, electronic device and storage medium
CN113033488B (en) * 2021-04-22 2023-11-21 脉景(杭州)健康管理有限公司 Medical feature recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Chao Dai
Inventor after: He Fan
Inventor after: Liu Li
Inventor after: Zhou Zhen
Inventor after: Chen Jin
Inventor after: Yao Shukun
Inventor after: Duan Shaojie
Inventor after: Chen Jialiang
Inventor before: Chao Dai
Inventor before: He Fan
Inventor before: Liu Li
Inventor before: Zhou Zhen
Inventor before: Chen Jin

TA01 Transfer of patent application right

Effective date of registration: 20210304
Address after: 610200 Sichuan Chengdu Shuangliu District Dongsheng Street Chengdu core Valley Industrial Park concentrated area
Applicant after: CHINA POWER HEALTH CLOUD TECHNOLOGY Co.,Ltd.
Applicant after: China Japan Friendship Hospital (China Japan Friendship Institute of clinical medicine)
Address before: 610200 Sichuan Chengdu Shuangliu District Dongsheng Street Chengdu core Valley Industrial Park concentrated area
Applicant before: CHINA POWER HEALTH CLOUD TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20190510