CN107967688A - Method and system for segmenting objects in an image - Google Patents


Info

Publication number
CN107967688A
CN107967688A
Authority
CN
China
Prior art keywords
image
segmentation model
annotation data
category
error-correcting code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711399271.9A
Other languages
Chinese (zh)
Other versions
CN107967688B (en)
Inventor
姜譞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201711399271.9A
Publication of CN107967688A
Application granted
Publication of CN107967688B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30056 Liver; Hepatic

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a method for segmenting objects in an image, comprising: obtaining first annotation data corresponding to N classes of objects contained in a first image; determining an error-correcting code for each of the N classes, yielding N error-correcting codes; converting, according to the N error-correcting codes, the first annotation data corresponding to the N classes into second annotation data; and training M segmentation models based on the second annotation data, wherein the M segmentation models are used to segment objects contained in a second image, M ≤ N, and N and M are integers. The present disclosure also provides a system for segmenting objects in an image.

Description

Method and system for segmenting objects in an image
Technical field
The present disclosure relates to a method and system for segmenting objects in an image.
Background technology
Image segmentation is a technique for dividing an image into several specific regions with distinctive properties. In different application fields, such as medical image analysis, traffic image analysis, and military engineering, the image content to be processed has become increasingly complex, and the accuracy demanded of the segmentation methods used has risen accordingly.
With the rapid development of artificial intelligence, electronic devices can continuously improve their own performance through machine learning. For example, in medical image analysis it is often necessary to segment biological organs in CT (Computed Tomography) images; using deep-learning techniques from machine learning, a corresponding segmentation model is trained to separate multiple organs, for instance the liver, spleen, and pancreas in an abdominal CT image.
However, in the course of realizing the present disclosure, the inventor found at least the following defect in the related art: image segmentation, in particular segmentation by multiple models fused simply through a voting mechanism, has relatively low accuracy.
Summary of the invention
One aspect of the present disclosure provides a method for segmenting objects in an image, comprising: obtaining first annotation data corresponding to N classes of objects contained in a first image; determining an error-correcting code for each of the N classes, yielding N error-correcting codes; converting, according to the N error-correcting codes, the first annotation data corresponding to the N classes into second annotation data; and training M segmentation models based on the second annotation data, wherein the M segmentation models are used to segment objects contained in a second image, M ≤ N, and N and M are integers.
Optionally, each of the N error-correcting codes has M error-correction bits, and converting the first annotation data corresponding to the N classes into the second annotation data according to the N error-correcting codes comprises: generating a coding matrix of N rows and M columns based on the N error-correcting codes having M error-correction bits; and classifying, according to the M columns of the coding matrix, the N classes of objects contained in the first image, thereby converting the first annotation data corresponding to the N classes into the second annotation data.
Optionally, the method further comprises: binarizing, through each of the M segmentation models in turn, the pixels corresponding to the objects contained in the second image, obtaining binarized data for the corresponding pixels; and comparing, through a predetermined algorithm, the binarized data with the error-correcting codes of the objects contained in the second image, so as to determine the categories of the objects contained in the second image.
Optionally, the M segmentation models include at least a first-category segmentation model and a second-category segmentation model, and the N classes of objects include at least a first-category object and a second-category object; the method further comprises segmenting the first-category object through the first-category segmentation model, and/or segmenting the second-category object through the second-category segmentation model.
Optionally, training the M segmentation models based on the second annotation data comprises training the M segmentation models on the second annotation data using a machine-learning method.
Optionally, the N classes of objects comprise N classes of biological organs.
Another aspect of the present disclosure provides a system for segmenting objects in an image, comprising an acquisition module, a determination module, a conversion module, and a training module. The acquisition module obtains first annotation data corresponding to N classes of objects contained in a first image; the determination module determines an error-correcting code for each of the N classes, yielding N error-correcting codes; the conversion module converts, according to the N error-correcting codes, the first annotation data corresponding to the N classes into second annotation data; and the training module trains M segmentation models based on the second annotation data, wherein the M segmentation models are used to segment objects contained in a second image, M ≤ N, and N and M are integers.
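The four modules described above (acquisition, determination, conversion, training) can be sketched as a small pipeline. This is a minimal illustration, not the patent's implementation: the class name, method names, and the placeholder "models" are assumptions; only the codebook-to-binary-targets conversion follows the disclosure's coding-matrix scheme.

```python
# Hypothetical sketch of the system's four modules. All names here are
# assumed for illustration; the real disclosure trains neural-network
# segmentation models rather than returning placeholder tuples.

class SegmentationSystem:
    """Wires the modules: annotations -> codes -> relabeled data -> models."""

    def __init__(self, codebook):
        # codebook: class label -> error-correcting code string, e.g. "01011"
        self.codebook = codebook

    def acquire(self, image_labels):
        """Acquisition module: first annotation data (per-pixel class labels)."""
        return list(image_labels)

    def determine_codes(self):
        """Determination module: one error-correcting code per class."""
        return self.codebook

    def convert(self, labels):
        """Conversion module: second annotation data. Column j of the
        N x M coding matrix supplies model j's binary 0/1 labels."""
        m = len(next(iter(self.codebook.values())))
        return [[int(self.codebook[lab][j]) for lab in labels] for j in range(m)]

    def train(self, second_annotation):
        """Training module: one (placeholder) model per coding-matrix column."""
        return [("model_f%d" % (j + 1), targets)
                for j, targets in enumerate(second_annotation)]

codebook = {"C1": "01011", "C2": "10010", "C3": "01101", "C4": "00110"}
system = SegmentationSystem(codebook)
labels = system.acquire(["C1", "C2", "C3"])
second = system.convert(labels)
models = system.train(second)
print(len(models))  # 5 models, one per column of the 4 x 5 coding matrix
```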
Optionally, each of the N error-correcting codes has M error-correction bits, and the conversion module includes a generation unit and a classification unit. The generation unit generates a coding matrix of N rows and M columns based on the N error-correcting codes having M error-correction bits; the classification unit classifies, according to the M columns of the coding matrix, the N classes of objects contained in the first image, thereby converting the first annotation data corresponding to the N classes into the second annotation data.
Optionally, the system further includes a processing module and a comparison module. The processing module binarizes, through each of the M segmentation models in turn, the pixels corresponding to the objects contained in the second image, obtaining binarized data for the corresponding pixels; the comparison module compares, through a predetermined algorithm, the binarized data with the error-correcting codes of the objects contained in the second image, so as to determine the categories of the objects contained in the second image.
Optionally, the M segmentation models include at least a first-category segmentation model and a second-category segmentation model, and the N classes of objects include at least a first-category object and a second-category object; the system further includes a first segmentation module and/or a second segmentation module. The first segmentation module segments the first-category object through the first-category segmentation model; the second segmentation module segments the second-category object through the second-category segmentation model.
Optionally, the training module trains the M segmentation models on the second annotation data using a machine-learning method.
Another aspect of the present disclosure provides a non-volatile storage medium storing computer-executable instructions which, when executed, implement the method for segmenting objects in an image as described above.
Another aspect of the present disclosure provides a computer program comprising computer-executable instructions which, when executed, implement the method for segmenting objects in an image as described above.
Brief description of the drawings
For a fuller understanding of the present disclosure and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1A schematically shows a scan of an original image according to an embodiment of the present disclosure;
Fig. 1B schematically shows the result of segmenting the original image using the method for segmenting objects in an image according to an embodiment of the present disclosure;
Fig. 2 schematically shows a flowchart of a method for segmenting objects in a second image according to an embodiment of the present disclosure;
Fig. 3A schematically shows a flowchart of converting the first annotation data corresponding to N classes of objects into the second annotation data according to an embodiment of the present disclosure;
Fig. 3B schematically shows a flowchart of a method for segmenting objects in a second image according to another embodiment of the present disclosure;
Fig. 4 schematically shows a block diagram of a system for segmenting objects in an image according to an embodiment of the present disclosure;
Fig. 5A schematically shows a block diagram of a conversion module according to an embodiment of the present disclosure;
Fig. 5B schematically shows a block diagram of a system for segmenting objects in an image according to another embodiment of the present disclosure;
Fig. 5C schematically shows a block diagram of a system for segmenting objects in an image according to another embodiment of the present disclosure; and
Fig. 6 schematically shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed description of embodiments
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood, however, that these descriptions are merely exemplary and are not intended to limit the scope of the present disclosure. In addition, descriptions of well-known structures and techniques are omitted below to avoid unnecessarily obscuring the concepts of the present disclosure.
The terms used herein are for the purpose of describing particular embodiments only and are not intended to limit the present disclosure. Terms such as "comprising" and "including" indicate the presence of the stated features, steps, operations, and/or components, but do not exclude the presence or addition of one or more other features, steps, operations, or components.
All terms used herein (including technical and scientific terms) have the meanings commonly understood by those skilled in the art, unless otherwise defined. Terms used herein should be interpreted as having meanings consistent with the context of this specification and should not be interpreted in an idealized or overly rigid manner.
Where an expression such as "at least one of A, B, and C" is used, it should generally be interpreted in the sense commonly understood by those skilled in the art (for example, "a system having at least one of A, B, and C" includes, but is not limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B, and C). The same applies to expressions such as "at least one of A, B, or C". It should also be understood by those skilled in the art that virtually any disjunctive word and/or phrase presenting two or more alternative items, whether in the description, the claims, or the drawings, should be understood to contemplate the possibilities of including one of the items, either of the items, or both items. For example, the phrase "A or B" should be understood to include the possibilities of "A", "B", or "A and B".
Some block diagrams and/or flowcharts are shown in the drawings. It will be understood that some blocks of the block diagrams and/or flowcharts, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data-processing apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/operations illustrated in these block diagrams and/or flowcharts.
Accordingly, the techniques of the present disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of the present disclosure may take the form of a computer program product on a computer-readable medium storing instructions for use by or in connection with an instruction-execution system. In the context of the present disclosure, a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, the computer-readable medium may include, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a propagation medium. Specific examples of computer-readable media include: magnetic storage devices such as magnetic tape or hard disks (HDD); optical storage devices such as optical discs (CD-ROM); memories such as random-access memory (RAM) or flash memory; and/or wired/wireless communication links.
Embodiments of the present disclosure provide a method for segmenting objects in an image and a system therefor. The method comprises: obtaining first annotation data corresponding to N classes of objects contained in a first image; determining an error-correcting code for each of the N classes, yielding N error-correcting codes; converting, according to the N error-correcting codes, the first annotation data corresponding to the N classes into second annotation data; and training M segmentation models based on the second annotation data, wherein the M segmentation models are used to segment objects contained in a second image, M ≤ N, and N and M are integers.
Fig. 1A schematically shows a scan of an original image according to an embodiment of the present disclosure.
As shown in Fig. 1A, the original image is a CT image of a human abdomen containing multiple organs, for example the liver, spleen, gallbladder, and pancreas. Since the organs differ in size and shape, scanning and processing by an electronic device yields only blurred contour information for the differently shaped and sized organs.
To determine the names of the organs in the original image and the contour information of each, the original image is segmented by the M segmentation models trained by the method provided in the present disclosure, which can automatically separate the multiple organs contained in the original image.
Fig. 1B schematically shows the result of segmenting the original image using the method for segmenting objects in an image according to an embodiment of the present disclosure.
As shown in Fig. 1B, the region indicated by arrow A is the liver obtained after segmentation, the region indicated by arrow B is the gallbladder obtained after segmentation, and the region indicated by arrow C is the spleen obtained after segmentation.
It should be noted that the original image shown in Figs. 1A and 1B is merely an example of a scenario in which embodiments of the present disclosure can be applied, provided to help those skilled in the art understand the technical content of the present disclosure; it does not mean that embodiments of the present disclosure cannot be used in other devices, systems, environments, or scenarios.
For example, the original image of the present disclosure may also be a traffic image containing environmental information around the shooting location, for example buildings, vehicles, and the like. As another example, the original image may be an image of underground rock containing information about rocks of different types or structures.
Embodiments of the present disclosure can at least solve the following technical problems in the related art: original images are segmented manually by one or more professionals; or, when original images are segmented according to the related art and multiple models are fused, every model must be highly accurate, and when model accuracy is low, determining the name of each organ through the corresponding voting mechanism results in low organ-segmentation accuracy.
Fig. 2 schematically shows a flowchart of a method for segmenting objects in an image according to an embodiment of the present disclosure.
As shown in Fig. 2, the method includes operations S201 to S204.
In operation S201, first annotation data corresponding to N classes of objects contained in a first image is obtained.
In operation S202, an error-correcting code is determined for each of the N classes of objects, yielding N error-correcting codes.
In operation S203, the first annotation data corresponding to the N classes is converted into second annotation data according to the N error-correcting codes.
In operation S204, M segmentation models are trained based on the second annotation data, wherein the M segmentation models are used to segment objects contained in a second image, M ≤ N, and N and M are integers.
According to an embodiment of the present disclosure, the type of the first image is not limited; it may be, for example, a CT image of a human body, or a structural image of underground rock. In the first image, the N classes of objects may have corresponding distinctive properties, such as a particular shape and/or structure; the distinctive properties of different classes of objects vary with the type of the first image. Taking a CT image of an organism as the first image, for example, the N classes of objects may be different classes of biological organs, such as human organs or animal organs. The first annotation data may be labels, obtained after manual or machine recognition of the N classes of objects, used to identify those classes, where different classes of objects correspond to different annotation data. Taking N = 4 as an example, the annotation data of the four object classes may be denoted A1, A2, A3, and A4, respectively.
According to an embodiment of the present disclosure, the error-correcting code of each class may be a binary or ternary error-correcting code, and may include error-correction bits and information bits. Taking a binary error-correcting code (n, k) as an example, where n is the error-correction-bit length and k is the information-bit length (for example, the (5, 2) Hamming code), different classes of objects are distinguished by choosing information bits of a given length; once the information-bit length k is determined, the error-correction-bit length n of each class can be determined by consulting a code table or according to practical requirements.
According to an embodiment of the present disclosure, taking the segmentation of four organs in the first image as an example, the information-bit length k is chosen as 2 to represent the four organs: for example, 00 represents organ C1, 01 represents organ C2, 10 represents organ C3, and 11 represents organ C4, thereby distinguishing the different classes of organs. After the information-bit length k = 2 is determined, the error-correction-bit length n can be determined as 5 by consulting a code table. Specifically, the error-correction bits of organ C1 may be 01011, those of organ C2 may be 10010, those of organ C3 may be 01101, and those of organ C4 may be 00110. It should be noted that the larger the number of error-correction bits chosen, the higher the accuracy of segmenting the objects in the image. The error-correcting codes can be chosen according to the actual application scenario, which is not described in detail here.
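The four-organ codebook above can be sketched directly in code. A minimal illustration under the (5, 2) example's code assignments; the helper also computes the codebook's minimum pairwise Hamming distance, which is what the remark about "more error-correction bits, higher accuracy" rests on, since a larger minimum distance lets nearest-code decoding absorb more single-model mistakes.

```python
from itertools import combinations

# Error-correction bits from the (5, 2) example: one 5-bit code per organ class.
codebook = {"C1": "01011", "C2": "10010", "C3": "01101", "C4": "00110"}

def hamming(a, b):
    """Number of bit positions at which two equal-length codes differ."""
    return sum(x != y for x, y in zip(a, b))

# Longer codes permit larger minimum pairwise distances between class codes,
# which is why choosing more error-correction bits raises segmentation accuracy.
min_dist = min(hamming(a, b) for a, b in combinations(codebook.values(), 2))
print(min_dist)  # 2 for this codebook
```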
According to an embodiment of the present disclosure, taking N = 4 as an example, the four classes of objects in the first image are divided according to the four error-correcting codes, and the first annotation data A1, A2, A3, and A4 corresponding to the four classes is converted into second annotation data B1, B2, B3, and B4. M segmentation models can then be trained on the converted second annotation data B1, B2, B3, and B4. According to an embodiment of the present disclosure, four segmentation models can be trained when four object classes need to be segmented; when three object classes need to be segmented, four segmentation models, or fewer than four, can be trained.
Through embodiments of the present disclosure, the first annotation data corresponding to the N classes of objects is converted, according to the error-correcting codes of the multiple classes, into second annotation data of multiple classes, and multiple segmentation models are trained on the converted second annotation data. When the segmentation models so trained are used to segment objects in other images, the accuracy required of each individual model is low. This at least solves the technical problems in the related art that original images are segmented manually by one or more professionals, or that, after segmentation according to the related art, multi-model fusion requires every model to be highly accurate, while with low model accuracy, determining the name of each organ through the corresponding voting mechanism results in low organ-segmentation accuracy.
The method shown in Fig. 2 is further described below with reference to Figs. 3A and 3B in conjunction with specific embodiments.
Fig. 3A schematically shows a flowchart of converting the first annotation data corresponding to N classes of objects into the second annotation data according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, each of the N error-correcting codes has M error-correction bits. As shown in Fig. 3A, converting the first annotation data corresponding to the N classes into the second annotation data according to the N error-correcting codes includes operations S2031 and S2032.
In operation S2031, a coding matrix of N rows and M columns is generated based on the N error-correcting codes having M error-correction bits.
In operation S2032, the N classes of objects contained in the first image are classified according to the M columns of the coding matrix, thereby converting the first annotation data corresponding to the N classes into the second annotation data.
According to an embodiment of the present disclosure, each error-correcting code has M error-correction bits; taking a binary error-correcting code (M, k) as an example, the N error-correcting codes generate a coding matrix of N rows and M columns. Specifically, taking the binary error-correcting code (5, 2) as an example, the coding matrix of 4 rows and 5 columns shown in Table 1 is obtained.
Table 1
0 1 0 1 1
1 0 0 1 0
0 1 1 0 1
0 0 1 1 0
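The coding matrix of Table 1 is simply the four class codes stacked row by row, and each of its columns defines one binary task. A minimal sketch under the same (5, 2) example (the helper name `column` is assumed for illustration):

```python
# Stack the four 5-bit codes (rows C1..C4) into a 4 x 5 coding matrix.
codes = ["01011", "10010", "01101", "00110"]  # C1, C2, C3, C4
matrix = [[int(bit) for bit in code] for code in codes]

def column(matrix, j):
    """Column j of the coding matrix: the 0/1 task of segmentation model f(j+1)."""
    return [row[j] for row in matrix]

# Column 0 separates C2 (bit 1) from C1, C3, C4 (bit 0): the task of model f1.
print(column(matrix, 0))  # [0, 1, 0, 0]
```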
Further, combining the information bits and the error-correction bits of each code gives the information in Table 2.
Table 2

Organ    Information bits    Error-correction bits
C1       00                  01011
C2       01                  10010
C3       10                  01101
C4       11                  00110

That is, information bits 00 represent organ C1, whose error-correction bits may be 01011; information bits 01 represent organ C2 (10010); information bits 10 represent organ C3 (01101); and information bits 11 represent organ C4 (00110). Training a different segmentation model on each of the five columns of the coding matrix yields five segmentation models (f1, f2, f3, f4, f5).
The task of segmentation model f1 is to distinguish {(C1, C3, C4)} from {C2}; the task of f2 is to distinguish {(C2, C4)} from {(C1, C3)}; the task of f3 is to distinguish {(C1, C2)} from {(C3, C4)}; the task of f4 is to distinguish {C3} from {(C1, C2, C4)}; and the task of f5 is to distinguish {(C2, C4)} from {(C1, C3)}.
For the tasks of the different segmentation models, the original annotation data can be converted into new annotation data. For the labels {0, 1} of f1, 0 represents (C1, C3, C4) and 1 represents C2; for f2, 0 represents (C2, C4) and 1 represents (C1, C3); for f3, 0 represents (C1, C2) and 1 represents (C3, C4); for f4, 0 represents C3 and 1 represents (C1, C2, C4); and for f5, 0 represents (C2, C4) and 1 represents (C1, C3). When the CT data of a given pixel is input into the trained segmentation models, each model classifies the pixel, yielding the annotation data produced by the five models.
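The relabeling just described, replacing each pixel's class label with bit j of that class's code to obtain model j's binary targets, can be sketched as follows. The tiny 2x2 label map is invented purely for illustration; the bit assignments follow the Table 1 coding matrix.

```python
# Coding-matrix rows for C1..C4, as in Table 1.
codebook = {"C1": "01011", "C2": "10010", "C3": "01101", "C4": "00110"}

def relabel(label_map, codebook, j):
    """Second annotation data for model f(j+1): replace each pixel's class
    label with bit j of that class's error-correcting code."""
    return [[int(codebook[lab][j]) for lab in row] for row in label_map]

# A 2 x 2 toy annotation map (first annotation data), assumed for illustration.
first_annotation = [["C1", "C2"],
                    ["C3", "C4"]]

# Targets for f3 (column index 2): 0 for C1/C2 pixels, 1 for C3/C4 pixels.
print(relabel(first_annotation, codebook, 2))  # [[0, 0], [1, 1]]
```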
Through embodiments of the present disclosure, by converting the first annotation data corresponding to the N classes into the second annotation data, the accuracy required of each individual segmentation model can be kept low while still reaching a predetermined segmentation accuracy.
Fig. 3B schematically shows a flowchart of a method for segmenting objects in a second image according to another embodiment of the present disclosure.
As shown in Fig. 3B, the method for segmenting objects in the second image includes operations S205 and S206.
In operation S205, the pixels corresponding to the objects contained in the second image are binarized through each of the M segmentation models in turn, obtaining binarized data for the corresponding pixels.
In operation S206, the binarized data is compared, through a predetermined algorithm, with the error-correcting codes of the objects contained in the second image, so as to determine the categories of the objects contained in the second image.
According to an embodiment of the present disclosure, the M trained segmentation models process the pixels corresponding to the objects contained in the second image; that is, the pixels corresponding to the different objects are binarized.
Taking a CT image of 512×512 pixels as an example, after every pixel of the image has been processed by the multiple models, corresponding binarized data is obtained; the category of each object contained in the second image is then determined by computing the Hamming distance or Euclidean distance between the binarized data and the error-correcting codes of the objects contained in the second image. The Hamming distance is the number of positions at which one string must be changed to turn it into another; the Euclidean distance is the actual distance between two points in m-dimensional space, or the natural length of a vector.
For example, for any given pixel, the five models output five discriminant values. Suppose the binarized data of a pixel is 00101; that is, for this pixel, f1 outputs 0, f2 outputs 0, f3 outputs 1, f4 outputs 0, and f5 outputs 1. The Hamming or Euclidean distance between this 5-bit output and each of the four error-correcting codes in the coding matrix is computed, the smallest distance is chosen, and the pixel is assigned to the class whose code attains that smallest distance.
Computing the Hamming or Euclidean distance between the pixel's binarized output 00101 and the different error-correcting codes gives the results shown in Table 3.
Table 3
Since the category whose code has the smallest Hamming distance and Euclidean distance to the output 00101 is C3, the pixel is finally classified as C3.
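The minimum-distance decoding described above can be sketched as follows. The 4-codeword codebook here is hypothetical (the actual codes behind Table 3 are not reproduced in this text), but it is chosen so that the output 00101 likewise decodes to C3:

```python
# Hypothetical codebook: one 5-bit codeword per category (the 4 rows of the
# coding matrix). The real codewords of Table 3 are not given in the text.
codebook = {"C1": "11100", "C2": "01010", "C3": "00111", "C4": "10001"}

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(output_bits: str) -> str:
    """Assign a pixel to the category whose codeword is nearest."""
    return min(codebook, key=lambda c: hamming(codebook[c], output_bits))

print(classify("00101"))  # "C3" with this codebook
```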
According to the embodiments of the present disclosure, the trained segmentation models can be used to segment a new image. Since a certain error may remain after the objects are segmented by the M segmentation models, comparing the binarized data with the error-correcting codes by the predetermined algorithm to determine the categories of the objects can improve the segmentation accuracy.
According to an embodiment of the present disclosure, the M segmentation models include at least a first-category segmentation model and a second-category segmentation model, the N categories of objects include at least first-category objects and second-category objects, and the method for segmenting objects in an image further includes segmenting the first-category objects by the first-category segmentation model and/or segmenting the second-category objects by the second-category segmentation model.
According to an embodiment of the present disclosure, the N categories of objects may include a variety of objects. Taking the first image as an abdominal CT image, for example, the first-category object may be the gallbladder and the second-category object may be the liver. Since the liver and the gallbladder are connected in an organism, segmenting these two different categories of organs with the same segmentation model would reduce the accuracy. Therefore, segmenting the liver and the gallbladder with different trained segmentation models can improve the accuracy of segmenting such difficult organs.
According to the embodiments of the present disclosure, segmenting different categories of objects with different segmentation models can further improve the segmentation accuracy of the segmentation models, and solves the technical problem in the related art that multi-class segmentation of an image by a single model results in low accuracy.
According to an embodiment of the present disclosure, training the M segmentation models based on the second labeled data includes training the M segmentation models based on the second labeled data using a machine learning method. According to an embodiment of the present disclosure, deep learning techniques in machine learning can be used to train the M segmentation models based on the second labeled data; specifically, for example, a neural network performs high-level abstraction on the objects in the image to train the multiple segmentation models.
According to the embodiments of the present disclosure, training the multiple segmentation models in the above manner can effectively reduce the training time; furthermore, through the optimal combination of the multiple segmentation models, the trained segmentation models can be made more intelligent.
According to an embodiment of the present disclosure, the N categories of objects include N categories of biological organs. A biological organ may be a human organ or an animal organ; for example, a human organ may be the heart, lung, gallbladder, liver, etc. According to the embodiments of the present disclosure, using the multiple models trained in the above manner to segment biological organs improves the accuracy of organ segmentation and reduces the probability of misjudgment caused by human factors.
Fig. 4 schematically shows a block diagram of a system for segmenting objects in an image according to an embodiment of the present disclosure.
As shown in Fig. 4, the system 400 includes an acquisition module 410, a determining module 420, a conversion module 430 and a training module 440.
The acquisition module 410 is configured to obtain the first labeled data corresponding to the N categories of objects included in the first image.
The determining module 420 is configured to determine an error-correcting code for each of the N categories of objects, to obtain N error-correcting codes.
The conversion module 430 is configured to convert, according to the N error-correcting codes, the first labeled data corresponding to the N categories of objects into the second labeled data.
The training module 440 is configured to train M segmentation models based on the second labeled data, wherein the M segmentation models are used to segment the objects included in the second image, M ≤ N, and N and M are integers.
According to the embodiments of the present disclosure, the first labeled data corresponding to the N categories of objects are converted into second labeled data of multiple categories according to the error-correcting codes of the multiple categories, and the multiple segmentation models are trained according to the converted second labeled data. When the multiple segmentation models thus trained are used to segment objects in other images, the accuracy requirement on each model is low. Therefore, this at least solves the technical problems in the related art that the original image must be segmented manually by one or more professionals, that multi-model fusion after segmentation according to the related art requires high accuracy of all the models, and that, when the accuracy of the models is low, determining the name of each organ by a corresponding voting mechanism results in low organ-segmentation accuracy.
Fig. 5A schematically shows a block diagram of a conversion module according to another embodiment of the present disclosure.
As shown in Fig. 5A, each of the N error-correcting codes has M error-correction bits, and the conversion module 430 includes a generation unit 431 and a classification unit 432.
The generation unit 431 is configured to generate a coding matrix of N rows and M columns based on the N error-correcting codes each having M error-correction bits.
The classification unit 432 is configured to classify, according to the M columns of information in the coding matrix, the N categories of objects included in the first image, so as to convert the first labeled data corresponding to the N categories of objects into the second labeled data.
According to the embodiments of the present disclosure, by converting the first labeled data corresponding to the N categories of objects into the second labeled data, the accuracy requirement on each segmentation model can be kept low while the required segmentation accuracy is still achieved.
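The roles of the generation unit and the classification unit can be sketched as follows, assuming a hypothetical 4×5 coding matrix (N = 4 categories, M = 5 models; the patent does not fix concrete values): row i is the codeword of category i, and column j supplies the binary target map used to train segmentation model j:

```python
import numpy as np

# Hypothetical 4x5 coding matrix; concrete codewords are my own choice.
coding_matrix = np.array([
    [1, 1, 1, 0, 0],  # codeword of category 0
    [0, 1, 0, 1, 0],  # codeword of category 1
    [0, 0, 1, 1, 1],  # codeword of category 2
    [1, 0, 0, 0, 1],  # codeword of category 3
])

def to_second_labels(first_labels: np.ndarray) -> np.ndarray:
    """Convert an HxW map of category indices (first labeled data) into
    M binary HxW maps (second labeled data), one per model/column."""
    m = coding_matrix.shape[1]
    return np.stack([coding_matrix[first_labels, j] for j in range(m)])

first_labels = np.array([[0, 2],
                         [3, 1]])            # toy 2x2 map of category indices
print(to_second_labels(first_labels).shape)  # (5, 2, 2): one map per model
```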
Fig. 5B schematically shows a block diagram of a system for segmenting objects in an image according to another embodiment of the present disclosure.
As shown in Fig. 5B, in addition to the acquisition module 410, the determining module 420, the conversion module 430 and the training module 440, the system 400 further includes a processing module 450 and a comparison module 460.
The processing module 450 is configured to perform, successively by the M segmentation models, binarization on the pixels corresponding to the objects included in the second image, to obtain binarized data of the corresponding pixels.
The comparison module 460 is configured to compare, by a predetermined algorithm, the binarized data with the error-correcting codes of the objects included in the second image, to determine the categories of the objects included in the second image.
According to the embodiments of the present disclosure, the trained segmentation models can be used to segment a new image. Since a certain error may remain after the objects are segmented by the M segmentation models, comparing the binarized data with the error-correcting codes by the predetermined algorithm to determine the categories of the objects can improve the segmentation accuracy.
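The error-tolerance claim can be made concrete: if the codewords are spaced far enough apart, a single wrong model output still decodes to the right category. The codebook below is hypothetical (minimum pairwise Hamming distance 3; the patent does not specify codewords), which suffices to correct any one-bit error:

```python
# Hypothetical codebook with minimum pairwise Hamming distance 3.
codebook = {"C1": "00000", "C2": "11100", "C3": "00111", "C4": "11011"}

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(bits: str) -> str:
    """Minimum-Hamming-distance decoding over the codebook."""
    return min(codebook, key=lambda c: hamming(codebook[c], bits))

# Flip each single bit of C2's codeword in turn; decoding still yields C2,
# i.e. one misbehaving segmentation model does not change the category.
word = codebook["C2"]
for i in range(len(word)):
    corrupted = word[:i] + ("1" if word[i] == "0" else "0") + word[i + 1:]
    assert decode(corrupted) == "C2"
print("all single-bit errors corrected")
```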
Fig. 5C schematically shows a block diagram of a system for segmenting objects in an image according to another embodiment of the present disclosure.
Alternatively, the M segmentation models include at least a first-category segmentation model and a second-category segmentation model, the N categories of objects include at least first-category objects and second-category objects, and the system further includes a first segmentation module and/or a second segmentation module. As shown in Fig. 5C, the system 400 further includes a first segmentation module 470 and a second segmentation module 480.
The first segmentation module 470 is configured to segment the first-category objects by the first-category segmentation model.
The second segmentation module 480 is configured to segment the second-category objects by the second-category segmentation model.
Alternatively, the training module 440 trains the M segmentation models based on the second labeled data using a machine learning method.
According to the embodiments of the present disclosure, segmenting different categories of objects with different segmentation models can further improve the segmentation accuracy of the segmentation models, and solves the technical problem in the related art that multi-class segmentation of an image by a single model results in low accuracy.
It can be understood that the acquisition module 410, the determining module 420, the conversion module 430, the training module 440, the processing module 450, the comparison module 460, the first segmentation module 470 and the second segmentation module 480 may be combined and implemented in one module, or any one of them may be split into multiple modules. Alternatively, at least part of the functions of one or more of these modules may be combined with at least part of the functions of other modules and implemented in one module. According to an embodiment of the present invention, at least one of the acquisition module 410, the determining module 420, the conversion module 430, the training module 440, the processing module 450, the comparison module 460, the first segmentation module 470 and the second segmentation module 480 may be implemented at least partly as a hardware circuit, for example a field-programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on a substrate, a system in a package or an application-specific integrated circuit (ASIC), or may be implemented in hardware or firmware by any other reasonable way of integrating or packaging circuits, or by an appropriate combination of the three implementations of software, hardware and firmware. Alternatively, at least one of the acquisition module 410, the determining module 420, the conversion module 430, the training module 440, the processing module 450, the comparison module 460, the first segmentation module 470 and the second segmentation module 480 may be implemented at least partly as a computer program module which, when run by a computer, can perform the function of the corresponding module.
Fig. 6 schematically shows a block diagram of an electronic device according to an embodiment of the present disclosure.
As shown in Fig. 6, the electronic device 600 includes a processor 610 and a computer-readable storage medium 620. The electronic device 600 can perform the methods described above with reference to Fig. 2 and Figs. 3A-3B.
Specifically, the processor 610 may include, for example, a general-purpose microprocessor, an instruction-set processor and/or a related chipset, and/or a special-purpose microprocessor (for example, an application-specific integrated circuit (ASIC)), etc. The processor 610 may also include onboard memory for caching purposes. The processor 610 may be a single processing unit or multiple processing units for performing the different actions of the method flows according to the embodiments of the disclosure described with reference to Fig. 2 and Figs. 3A-3B.
The computer-readable storage medium 620 may be, for example, any medium that can contain, store, communicate, propagate or transport instructions. For example, the readable storage medium may include, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus, device or propagation medium. Specific examples of the readable storage medium include: a magnetic storage device, such as a magnetic tape or a hard disk (HDD); an optical storage device, such as a compact disc (CD-ROM); a memory, such as a random access memory (RAM) or a flash memory; and/or a wired/wireless communication link.
The computer-readable storage medium 620 may include a computer program 621, which may include code/computer-executable instructions that, when executed by the processor 610, cause the processor 610 to perform, for example, the method flows described above in conjunction with Fig. 2 and Figs. 3A-3B and any variation thereof.
The computer program 621 may be configured with, for example, computer program code including computer program modules. For example, in an exemplary embodiment, the code in the computer program 621 may include one or more program modules, for example module 621A, module 621B, and so on. It should be noted that the division and number of the modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation, and when these combinations of program modules are executed by the processor 610, the processor 610 can perform, for example, the method flows described above in conjunction with Fig. 2 and Figs. 3A-3B and any variation thereof.
According to an embodiment of the present invention, at least one of the acquisition module 410, the determining module 420, the conversion module 430, the training module 440, the processing module 450, the comparison module 460, the first segmentation module 470 and the second segmentation module 480 may be implemented as a computer program module described with reference to Fig. 6 which, when executed by the processor 610, can implement the corresponding operations described above.
Those skilled in the art will understand that the features described in the various embodiments and/or claims of the present disclosure may be combined in multiple ways, even if such combinations are not expressly recited in the present disclosure. In particular, without departing from the spirit and teaching of the present disclosure, the features described in the various embodiments and/or claims of the present disclosure may be combined in multiple ways, and all such combinations fall within the scope of the present disclosure.
Although the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, those skilled in the art should understand that various changes in form and detail may be made to the present disclosure without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. Therefore, the scope of the present disclosure should not be limited to the above-described embodiments, but should be determined not only by the appended claims but also by the equivalents thereof.

Claims (10)

1. A method for segmenting objects in an image, comprising:
obtaining first labeled data corresponding to N categories of objects included in a first image;
determining an error-correcting code for each of the N categories of objects, to obtain N error-correcting codes;
converting, according to the N error-correcting codes, the first labeled data corresponding to the N categories of objects into second labeled data; and
training M segmentation models based on the second labeled data, wherein the M segmentation models are used to segment objects included in a second image, M ≤ N, and N and M are integers.
2. The method according to claim 1, wherein each of the N error-correcting codes has M error-correction bits, and converting the first labeled data corresponding to the N categories of objects into the second labeled data according to the N error-correcting codes comprises:
generating a coding matrix of N rows and M columns based on the N error-correcting codes each having M error-correction bits; and
classifying, according to the M columns of information in the coding matrix, the N categories of objects included in the first image, so as to convert the first labeled data corresponding to the N categories of objects into the second labeled data.
3. The method according to claim 2, wherein the method further comprises:
performing, successively by the M segmentation models, binarization on the pixels corresponding to the objects included in the second image, to obtain binarized data of the corresponding pixels; and
comparing, by a predetermined algorithm, the binarized data with the error-correcting codes of the objects included in the second image, to determine the categories of the objects included in the second image.
4. The method according to claim 1, wherein the M segmentation models comprise at least a first-category segmentation model and a second-category segmentation model, the N categories of objects comprise at least first-category objects and second-category objects, and the method further comprises:
segmenting the first-category objects by the first-category segmentation model; and/or
segmenting the second-category objects by the second-category segmentation model.
5. The method according to claim 1, wherein training the M segmentation models based on the second labeled data comprises:
training the M segmentation models based on the second labeled data using a machine learning method.
6. The method according to claim 1, wherein:
the N categories of objects comprise N categories of biological organs.
7. A system for segmenting objects in an image, comprising:
an acquisition module, configured to obtain first labeled data corresponding to N categories of objects included in a first image;
a determining module, configured to determine an error-correcting code for each of the N categories of objects, to obtain N error-correcting codes;
a conversion module, configured to convert, according to the N error-correcting codes, the first labeled data corresponding to the N categories of objects into second labeled data; and
a training module, configured to train M segmentation models based on the second labeled data, wherein the M segmentation models are used to segment objects included in a second image, M ≤ N, and N and M are integers.
8. The system according to claim 7, wherein each of the N error-correcting codes has M error-correction bits, and the conversion module comprises:
a generation unit, configured to generate a coding matrix of N rows and M columns based on the N error-correcting codes each having M error-correction bits; and
a classification unit, configured to classify, according to the M columns of information in the coding matrix, the N categories of objects included in the first image, so as to convert the first labeled data corresponding to the N categories of objects into the second labeled data.
9. The system according to claim 8, wherein the system further comprises:
a processing module, configured to perform, successively by the M segmentation models, binarization on the pixels corresponding to the objects included in the second image, to obtain binarized data of the corresponding pixels; and
a comparison module, configured to compare, by a predetermined algorithm, the binarized data with the error-correcting codes of the objects included in the second image, to determine the categories of the objects included in the second image.
10. The system according to claim 7, wherein the M segmentation models comprise at least a first-category segmentation model and a second-category segmentation model, the N categories of objects comprise at least first-category objects and second-category objects, and the system further comprises:
a first segmentation module, configured to segment the first-category objects by the first-category segmentation model; and/or
a second segmentation module, configured to segment the second-category objects by the second-category segmentation model.
CN201711399271.9A 2017-12-21 2017-12-21 Method and system for segmenting object in image Active CN107967688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711399271.9A CN107967688B (en) 2017-12-21 2017-12-21 Method and system for segmenting object in image


Publications (2)

Publication Number Publication Date
CN107967688A true CN107967688A (en) 2018-04-27
CN107967688B CN107967688B (en) 2021-06-15

Family

ID=61995014

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711399271.9A Active CN107967688B (en) 2017-12-21 2017-12-21 Method and system for segmenting object in image

Country Status (1)

Country Link
CN (1) CN107967688B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1543625A (en) * 2001-05-31 2004-11-03 全感知有限公司 Personal identity verification process and system
WO2006055413A2 (en) * 2004-11-11 2006-05-26 The Trustees Of Columbia University In The City Of New York Methods and systems for identifying and localizing objects based on features of the objects that are mapped to a vector
US20100329544A1 (en) * 2009-06-30 2010-12-30 Sony Corporation Information processing apparatus, information processing method, and program
CN103426004A (en) * 2013-07-04 2013-12-04 西安理工大学 Vehicle type recognition method based on error correction output code
CN104239896A (en) * 2014-09-04 2014-12-24 四川省绵阳西南自动化研究所 Method for classifying crowd density degrees in video image
CN105182219A (en) * 2015-09-06 2015-12-23 南京航空航天大学 Power converter fault classification method based on Hamming error correcting code support vector machine
CN105931253A (en) * 2016-05-16 2016-09-07 陕西师范大学 Image segmentation method combined with semi-supervised learning
CN106709853A (en) * 2016-11-30 2017-05-24 开易(北京)科技有限公司 Image retrieval method and system


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CIOMPI, F 等: "ECOC Random Fields for Lumen Segmentation in Radial Artery IVUS Sequences", 《MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION–MICCAI 2009》 *
WANG, HONGZHI 等: "A learning-based wrapper method to correct systematic errors in automatic image segmentation: Consistently improved performance in hippocampus, cortex and brain segmentation", 《NEUROIMAGE》 *
倪心强: "SAR图像分类与自动目标识别技术研究", 《中国博士学位论文全文数据库信息科技辑》 *
李杰 等: "使用支持向量机的纹理识别方法", 《光电工程》 *
饶倩: "基于条件随机场与纠错输出码的图像自动标注方法研究", 《万方数据库》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648195A (en) * 2018-05-09 2018-10-12 联想(北京)有限公司 A kind of image processing method and device
WO2020052668A1 (en) * 2018-09-15 2020-03-19 北京市商汤科技开发有限公司 Image processing method, electronic device, and storage medium
TWI786330B (en) * 2018-09-15 2022-12-11 大陸商北京市商湯科技開發有限公司 Image processing method, electronic device, and storage medium
CN111325231A (en) * 2018-12-14 2020-06-23 财团法人工业技术研究院 Neural network model fusion method and electronic device applying same
US10963757B2 (en) 2018-12-14 2021-03-30 Industrial Technology Research Institute Neural network model fusion method and electronic device using the same
TWI727237B (en) * 2018-12-14 2021-05-11 財團法人工業技術研究院 Neural network model fusion method and electronic device using the same
CN111325231B (en) * 2018-12-14 2023-08-15 财团法人工业技术研究院 Neural network model fusion method and electronic device applying same
WO2020199477A1 (en) * 2019-04-04 2020-10-08 平安科技(深圳)有限公司 Image labeling method and apparatus based on multi-model fusion, and computer device and storage medium
CN111724371A (en) * 2020-06-19 2020-09-29 联想(北京)有限公司 Data processing method and device and electronic equipment
CN111724371B (en) * 2020-06-19 2023-05-23 联想(北京)有限公司 Data processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN107967688B (en) 2021-06-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant