CN110533638A - Method and device for measuring object size - Google Patents

Method and device for measuring object size

Info

Publication number
CN110533638A
CN110533638A (application CN201910713614.7A)
Authority
CN
China
Prior art keywords
image
dimension
target object
default
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910713614.7A
Other languages
Chinese (zh)
Inventor
石磊
郑永升
魏子昆
杨忠程
王琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yitu Medical Technology (Hangzhou) Co., Ltd.
Original Assignee
Yitu Medical Technology (Hangzhou) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yitu Medical Technology (Hangzhou) Co., Ltd.
Priority to CN201910713614.7A priority Critical patent/CN110533638A/en
Publication of CN110533638A publication Critical patent/CN110533638A/en
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide a method and device for measuring the size of an object. The method includes: obtaining a preset image and determining, from the preset image, the target image corresponding to a target object; taking the maximum distance among the distances between any two edge pixels of the target image as the major axis of the target object; cutting the target image with planes perpendicular to the major axis to obtain at least one frame image; and taking the maximum distance among the distances between any two edge pixels of the at least one frame image as the minor axis of the target object. In the embodiments of the present invention, the major and minor axes are measured automatically from the pixels of the target image, without relying on manual positioning, which improves both the efficiency and the accuracy of measurement. Moreover, because the embodiments determine the major and minor axes by enumerating edge pixels, they can measure target objects of regular as well as irregular shape, which reduces development difficulty.

Description

Method and device for measuring object size
Technical field
Embodiments of the present invention relate to the field of machine learning, and in particular to a method and device for measuring the size of an object.
Background
In the field of medical technology, it is often necessary to measure the size of an object in an image. The object in the image may be a nodule or a tumour, such as a pulmonary nodule, a gastric nodule, a lymph nodule, or a breast nodule. In general, the size of the object can indicate the patient's condition: for example, the larger the lymph nodule in the image, the more serious the patient's condition may be, and the smaller the lymph nodule in the image, the milder the condition may be. Over time, the objects in newly acquired images, or their sizes, may also change; for example, an existing object may grow or shrink, or a new object may appear. Judging the patient's condition from the size of the object in the image therefore helps in formulating a reasonable treatment plan.
At present, in order to shorten intermediate steps as much as possible and win valuable treatment time for the patient, imaging has become the preferred examination method, balancing efficiency and economy. In the prior art, the size of an object in an image is mainly judged by combining software with manual positioning: for example, a doctor observes the image and, based on experience, determines a region that may be a pulmonary nodule, then marks the major and minor axes in the software by manual positioning, so that the software can measure the major and minor axes from the doctor's annotation. However, this approach usually takes a long time, so measurement is inefficient, and manual positioning depends heavily on the doctor's visual judgment, so the measured size may be inaccurate.
In summary, a method for measuring object size is needed, to solve the technical problems of low efficiency and poor accuracy caused by measuring object size with software combined with manual positioning in the prior art.
Summary of the invention
Embodiments of the present invention provide a method and device for measuring the size of an object, to solve the technical problems of low efficiency and poor accuracy caused by measuring object size with software combined with manual positioning in the prior art.
In a first aspect, an embodiment of the present invention provides a method for measuring the size of an object, comprising:

obtaining a preset image, and determining the target image corresponding to a target object from the preset image; measuring the distances between any two edge pixels of the target image, and taking the maximum of these distances as the major axis of the target object; further, cutting the target image with planes perpendicular to the major axis to obtain at least one frame image; for any one of the at least one frame image, measuring the distances between any two edge pixels of that image, and taking the maximum of these distances as the candidate minor axis corresponding to that image; and taking the largest candidate minor axis among the candidate minor axes corresponding to the at least one frame image as the minor axis of the target object.
In the above design, after the target image corresponding to the target object is determined, the major and minor axes are measured automatically from the pixels of the target image, without relying on manual positioning, which improves both the efficiency and the accuracy of measurement. Moreover, the above design determines the major axis by enumerating the edge pixels of the target image, and the minor axis by enumerating the edge pixels of the images perpendicular to the major axis, so target objects of regular as well as irregular shape can be measured, which reduces development difficulty.
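The edge-pixel enumeration described above can be sketched in a few lines. This is an illustrative NumPy sketch under assumed inputs (a 3-D boolean mask standing in for the target image, a 6-connectivity edge test, and a brute-force pairwise search), not the claimed implementation:

```python
import numpy as np

def edge_points(mask):
    """Coordinates of mask voxels that touch a background or out-of-bounds
    neighbor (a simple 6-connectivity edge test)."""
    pts = []
    shape = np.array(mask.shape)
    for p in np.argwhere(mask):
        for axis in range(mask.ndim):
            for d in (-1, 1):
                q = p.copy()
                q[axis] += d
                if (q < 0).any() or (q >= shape).any() or not mask[tuple(q)]:
                    pts.append(p)
                    break
            else:
                continue  # no edge found along this axis, try the next
            break  # p is an edge voxel, stop checking neighbors
    return np.array(pts)

def major_axis_length(mask, spacing=1.0):
    """Maximum distance between any two edge voxels (brute-force pairwise)."""
    pts = edge_points(mask).astype(float) * spacing
    best = 0.0
    for i in range(len(pts)):
        d = np.linalg.norm(pts[i + 1:] - pts[i], axis=1)
        if len(d):
            best = max(best, d.max())
    return best
```

For a target image with many edge pixels, the quadratic pairwise search would typically be replaced by a convex-hull computation, since the two farthest points always lie on the hull; the brute-force form is kept here only because it mirrors the enumeration in the text.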
In a possible design, the method further comprises: measuring the total number of pixels contained in the target image, and determining the volume of the target object according to the total number of pixels contained in the target image and a preset ratio.

In the above design, by counting all the pixels contained in the target image and determining the volume of the target object from this count and the preset ratio, the volume of the target object can be measured quickly, making the measurement efficient.
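A minimal sketch of this pixel-count volume estimate, assuming the target image is a boolean voxel mask and that the per-voxel volume (the product of the voxel spacings) plays the role of the preset ratio:

```python
import numpy as np

def object_volume(mask, spacing):
    """Volume = (number of voxels in the target image) x (volume of one voxel).

    `spacing` holds the physical voxel spacings; their product is the
    per-voxel volume, used here as the preset ratio."""
    voxel_volume = float(np.prod(spacing))
    return int(mask.sum()) * voxel_volume
```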
In a possible design, determining the target image corresponding to the target object from the preset image comprises: for the image layer to be identified of any dimension of the preset image, inputting the image layer to be identified of that dimension into a preset convolutional neural network model to obtain the confidence distribution of the preset image in that dimension; wherein the image layer to be identified of each dimension consists of one frame or several consecutive frames obtained by cutting the preset image with the cutting planes of the corresponding dimension, and the cutting planes of different dimensions are not parallel; the confidence distribution of the preset image in a dimension comprises, for each pixel of the at least one frame image contained in the image layer to be identified of that dimension, the confidence that the pixel belongs to the target object; and further, determining the target image corresponding to the target object from the preset image according to the confidence distributions of the preset image in the dimensions.

In the above design, the target object in the preset image is identified by a preset convolutional neural network model, so no subjective human inspection of the preset image is required, which improves the efficiency of identifying the target image from the preset image; moreover, the design performs comprehensive identification using the image layers to be identified of multiple dimensions, so the target image can be identified accurately with more complete identification information.
In a possible design, determining the target image corresponding to the target object from the preset image according to the confidence distributions of the preset image in the dimensions comprises: for any pixel in the preset image, determining, from the image layers to be identified of the dimensions, the frame or frames containing that pixel; determining, according to the confidence distributions of the image layers to be identified to which those frames belong, the target confidence that the pixel belongs to the target object; and determining the region formed by the one or more pixels whose target confidence is greater than a preset threshold as the target image corresponding to the target object.

In the above design, the target confidence of a pixel is determined from the confidences of that pixel on the image layers to be identified of one or more dimensions, so the confidence information of the dimensions can be combined, avoiding the low identification accuracy caused by relying on the confidence information of a single dimension; and by cutting the preset image with a preset threshold to obtain the target image corresponding to the target object, pixels with low confidence can be excluded, which reduces false positives and improves the accuracy of identifying the target image.
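One simple way to combine the per-dimension confidences is sketched below, under two assumptions the patent does not fix: each dimension's predictions have already been assembled into a confidence volume of the same shape, and the fusion rule is a plain per-voxel average:

```python
import numpy as np

def fuse_and_threshold(conf_maps, threshold=0.5):
    """Average, per voxel, the confidence volumes predicted along the
    different dimensions, then keep voxels whose fused (target)
    confidence exceeds the preset threshold."""
    fused = np.mean(np.stack(conf_maps), axis=0)
    return fused > threshold
```

The boolean result marks the region of pixels whose target confidence exceeds the threshold, i.e. the candidate target image; a real pipeline would usually also keep only the largest connected component.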
In a possible design, obtaining the preset image comprises: obtaining an initial image and the three-dimensional coordinates of the target object, and, taking the three-dimensional coordinates of the target object as the center and a preset distance as the radius, segmenting the preset image containing the target object from the initial image.

In the above design, by segmenting the preset image containing the target object from the initial image, features irrelevant to measuring the object size can be filtered out in advance, so less data needs to be processed when measuring the major and minor axes from the preset image, which further improves the efficiency of measurement.
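A sketch of this segmentation step, assuming the initial image is a NumPy volume, the target object's coordinates are voxel indices, the preset distance is measured in voxels, and the crop is clipped at the image boundary (the clipping is an added assumption):

```python
import numpy as np

def crop_preset_image(initial, center, radius):
    """Cut out a cube of side 2*radius+1 centered on the target object's
    3-D coordinates, clipped to the bounds of the initial image."""
    slices = tuple(
        slice(max(0, c - radius), min(s, c + radius + 1))
        for c, s in zip(center, initial.shape)
    )
    return initial[slices]
```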
In a second aspect, an embodiment of the present invention provides a device for measuring the size of an object, comprising:

an obtaining module, configured to obtain a preset image;

a determining module, configured to determine the target image corresponding to a target object from the preset image; and

a measurement module, configured to measure the distances between any two edge pixels of the target image, take the maximum of these distances as the major axis of the target object, cut the target image with planes perpendicular to the major axis to obtain at least one frame image, for any one of the at least one frame image measure the distances between any two edge pixels of that image and take the maximum as the candidate minor axis corresponding to that image, and take the largest candidate minor axis among the candidate minor axes corresponding to the at least one frame image as the minor axis of the target object.

In a possible design, the measurement module is further configured to measure the total number of pixels contained in the target image and determine the volume of the target object according to that total number and a preset ratio.

In a possible design, the determining module is specifically configured to: for the image layer to be identified of any dimension of the preset image, input the image layer to be identified of that dimension into a preset convolutional neural network model to obtain the confidence distribution of the preset image in that dimension, wherein the image layer to be identified of each dimension consists of one frame or several consecutive frames obtained by cutting the preset image with the cutting planes of the corresponding dimension, the cutting planes of different dimensions are not parallel, and the confidence distribution of the preset image in a dimension comprises, for each pixel of the at least one frame image contained in the image layer to be identified of that dimension, the confidence that the pixel belongs to the target object; and further, determine the target image corresponding to the target object from the preset image according to the confidence distributions of the preset image in the dimensions.

In a possible design, the determining module is specifically configured to: for any pixel in the preset image, determine, from the image layers to be identified of the dimensions, the frame or frames containing that pixel; determine, according to the confidence distributions of the image layers to be identified to which those frames belong, the target confidence that the pixel belongs to the target object; and determine the region formed by the one or more pixels whose target confidence is greater than a preset threshold as the target image corresponding to the target object.

In a possible design, the obtaining module is specifically configured to obtain an initial image and the three-dimensional coordinates of the target object and, taking the three-dimensional coordinates of the target object as the center and a preset distance as the radius, segment the preset image containing the target object from the initial image.
In a third aspect, an embodiment of the present invention provides a computing device, comprising at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program which, when executed by the processing unit, causes the processing unit to perform the method of any design of the first aspect.

In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program executable by a computing device which, when run on the computing device, causes the computing device to perform the method of any design of the first aspect.
These and other aspects of the invention will become more apparent from the following description.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow diagram of a method for measuring the size of an object according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of an initial image according to an embodiment of the present invention;

Fig. 3 is a flow diagram of a method for determining a target image according to an embodiment of the present invention;

Fig. 4 is a flow diagram of a method for training a preset convolutional neural network model according to an embodiment of the present invention;

Fig. 5 is a structural diagram of a preset convolutional neural network model according to an embodiment of the present invention;

Fig. 6 is a structural diagram of a device for measuring the size of an object according to an embodiment of the present invention;

Fig. 7 is a structural diagram of a computing device according to an embodiment of the present invention.
Detailed description of embodiments
To make the purpose, technical solutions and beneficial effects of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.
Fig. 1 is a flow diagram of a method for measuring the size of an object according to an embodiment of the present invention; the method may be performed by a device for measuring object size. As shown in Fig. 1, the method comprises:

Step 101: obtain a preset image.
In a possible implementation, an initial image and the three-dimensional coordinates of the target object may be obtained in advance; then, taking the three-dimensional coordinates of the target object as the center and radiating outwards by a preset distance, the preset image containing the target object is segmented from the initial image. The preset image may be a cube containing a number of pixels (or it may be a sphere or another shape; this is not limited). The three-dimensional coordinates of the target object may be those of a point inside the target object, such as its center point, or those of a point on its surface. Correspondingly, if the three-dimensional coordinates are those of the center point of the target object, the preset distance may be a preset multiple of the radius of the target object, such as 1.25 times or 1.5 times the radius; if the three-dimensional coordinates are those of a non-central point inside the target object, or of a point on its surface, the preset distance may be a preset multiple of the diameter of the target object, such as 1.25 times or 1.5 times the diameter.

It should be noted that the segmentation of the preset image may be performed by a professional (such as a doctor) using medical software, or performed automatically by a preset segmentation model; the preset multiple may be set empirically by those skilled in the art, and the embodiment of the present invention does not limit this.

In the embodiment of the present invention, the initial image may be an image captured with X-rays, such as a computed tomography (CT) image, or a magnetic resonance imaging (MRI) image. Taking CT images as an example, the initial image is not limited to chest CT, leg CT and brain CT images; correspondingly, the target object is not limited to pulmonary nodules, thyroid nodules, breast nodules, and so on. Further, the initial image may be a three-dimensional image; an example is shown in Fig. 2.

In one example, after the preset image is segmented, it may also be preprocessed; preprocessing may include a scaling operation and an information-completion operation. In the scaling operation, the preset image may be scaled to a fixed size, such as 980*980*980, to avoid the inefficiency of subsequently processing a preset image with a large amount of data. Correspondingly, in the information-completion operation, since each pixel of the scaled preset image corresponds to a pixel of the original preset image, a spatial-information channel may be added for each pixel of the scaled preset image; the spatial-information channel may be the relative coordinates of the pixel, or the distance between the pixel and the three-dimensional coordinates of the target object.
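The information-completion operation can be sketched as follows, assuming the relative-coordinate variant of the spatial-information channel and a NumPy volume for the (already scaled) preset image:

```python
import numpy as np

def add_coordinate_channels(volume):
    """Attach, to each voxel of the preset image, spatial-information
    channels holding its normalized (z, y, x) relative coordinates.

    Output shape: (1 + ndim, *volume.shape) - intensities first, then
    one coordinate grid per axis, each scaled to [0, 1]."""
    grids = np.meshgrid(
        *[np.linspace(0.0, 1.0, s) for s in volume.shape], indexing="ij"
    )
    return np.stack([volume.astype(float)] + grids, axis=0)
```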
Step 102: determine the target image corresponding to the target object from the preset image.

In the embodiment of the present invention, the target image corresponding to the target object may be determined from the preset image by manual annotation, or by a preset convolutional neural network model; this is not limited.
Fig. 3 is a flow diagram of a method for determining a target image according to an embodiment of the present invention; the method may comprise:

Step 301: obtain the image layers to be identified of one or more dimensions from the preset image.

In a specific implementation, after the preset image is segmented from the initial image, it may be cut into slices. Before cutting, the preset image may first be converted into DICOM format, and then cut with a window width and window level selected according to the DICOM information of the DICOM-format image; in this way, the preset image can be cut into multiple frame images. In one example, the window width may be chosen as W=80 and the window level as L=40.
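The window-width/window-level step can be sketched as a clip-and-rescale on Hounsfield units; the [0, 1] output range is an added assumption (an actual viewer would map to display gray levels):

```python
import numpy as np

def apply_window(hu, width=80.0, level=40.0):
    """Clip HU values to the window [level - width/2, level + width/2]
    and rescale linearly to [0, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)
```

With W=80 and L=40 as in the example, the window spans 0 to 80 HU, which is the usual soft-tissue range.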
Further, after the preset image has been cut in different dimensions to obtain the multiple frame images of the different dimensions, those frame images may also be normalized. Specifically, the multiple frame images of the different dimensions may be scaled; for example, they may all be scaled to the same size, or the frames of the same dimension may be scaled to one size while the frames of different dimensions are scaled to different sizes; this is not limited. In the embodiment of the present invention, normalizing the multiple frame images of the different dimensions makes the frames of the same dimension, or of different dimensions, consistent, which improves the efficiency of subsequently determining the target object from the images.
For example, a reference coordinate system consisting of an origin o, an x-axis, a y-axis and a z-axis may be set on the preset image in advance. The preset image may then be cut with the xoy plane (the transverse plane), the yoz plane (the coronal plane) or the xoz plane (the sagittal plane) as the cutting plane, to obtain the image layers to be identified of one dimension; or any number of the xoy, yoz and xoz planes may serve as cutting planes, to obtain the image layers to be identified of multiple dimensions; this is not limited.

Taking the xoy, yoz and xoz planes as the three cutting planes as an example, the preset image may be cut with the xoy plane to obtain multiple (for example 90) first-dimension frames, with the yoz plane to obtain multiple (for example 90) second-dimension frames, and with the xoz plane to obtain multiple (for example 90) third-dimension frames. Any of the 90 first-dimension frames is parallel to the xoy plane, any of the 90 second-dimension frames is parallel to the yoz plane, and any of the 90 third-dimension frames is parallel to the xoz plane.
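Cutting the preset image with the planes of one dimension amounts to indexing along one array axis. A sketch, assuming axes 0, 1 and 2 of the volume correspond to the normals of the three cutting planes:

```python
import numpy as np

def frames_along(volume, axis):
    """All 2-D frames produced by cutting the volume with the parallel
    planes of one dimension (one frame per index along `axis`)."""
    return [np.take(volume, i, axis=axis) for i in range(volume.shape[axis])]
```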
Further, after the 90 first-dimension frames, 90 second-dimension frames and 90 third-dimension frames have been obtained, these 270 frames may also be scaled; in one example, they may be scaled to a fixed size, such as 512*512 pixels. Taking the 90 first-dimension frames as an example, to guarantee the completeness and consistency of subsequent detection, a black border may be added around the 90 first-dimension frames before scaling, so that their aspect ratio is adjusted to 1:1.
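The black-border adjustment to a 1:1 aspect ratio can be sketched as zero-padding; centering the frame inside the square is an assumption, and the actual 512*512 resampling would follow with an image library:

```python
import numpy as np

def pad_to_square(frame):
    """Add a black (zero) border around a 2-D frame so its aspect ratio
    becomes 1:1, keeping the original content centered."""
    h, w = frame.shape
    side = max(h, w)
    top, left = (side - h) // 2, (side - w) // 2
    out = np.zeros((side, side), dtype=frame.dtype)
    out[top:top + h, left:left + w] = frame
    return out
```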
In a possible implementation, multiple groups of first-dimension image layers to be identified may be determined from the 90 first-dimension frames in a sliding-window manner, where the preset sliding-window frame count may be set empirically by those skilled in the art, for example one frame or at least two frames; this is not limited. If the preset sliding-window frame count is 3, then 88 groups of first-dimension image layers to be identified can be determined from the 90 first-dimension frames: the first to third first-dimension frames form the first group, the second to fourth frames form the second group, the third to fifth frames form the third group, ..., and the 88th to 90th frames form the 88th group.

It should be noted that the above implementation is only illustrative and does not limit the scheme. In a specific implementation, the groups of first-dimension image layers to be identified may also be determined from the 90 first-dimension frames in other ways; for example, every set number of consecutive frames may form one image layer to be identified. If the set number is 3, then 30 groups of first-dimension image layers to be identified can be determined from the 90 first-dimension frames: the first to third frames form the first group, the fourth to sixth frames form the second group, the seventh to ninth frames form the third group, ..., and the 88th to 90th frames form the 30th group.
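Both grouping schemes above are instances of one sliding-window routine; in the sketch below, a stride of 1 reproduces the 88 overlapping groups and a stride equal to the window reproduces the 30 disjoint groups:

```python
def sliding_groups(frames, window=3, stride=1):
    """Group consecutive frames into image layers to be identified.

    stride=1 gives overlapping groups (90 frames -> 88 groups of 3);
    stride=window gives disjoint groups (90 frames -> 30 groups of 3)."""
    return [frames[i:i + window]
            for i in range(0, len(frames) - window + 1, stride)]
```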
With the above implementation, if each image layer to be identified contains 3 frames, then 88 groups of first-dimension image layers to be identified, 88 groups of second-dimension image layers to be identified and 88 groups of third-dimension image layers to be identified can be cut from the preset image, where each group of the first dimension contains 3 first-dimension frames, each group of the second dimension contains 3 second-dimension frames, and each group of the third dimension contains 3 third-dimension frames.

It should be noted that the embodiment of the present invention does not limit the numbers of image layers to be identified of the first, second and third dimensions; for example, these numbers may be the same or different; this is not limited.
Step 302: for the image layers to be identified in any dimension, input the image layers to be identified in that dimension into a preset convolutional neural network model, and output the confidence distribution of the image layers to be identified in that dimension.
The embodiment of the present invention may execute step 302 on the image layers to be identified in any order. For example, step 302 may first be executed on the 88 groups of third-dimension image layers to be identified, then on the 88 groups of second-dimension image layers, and finally on the 88 groups of first-dimension image layers; alternatively, step 302 may first be executed on the 50th to 88th groups of first-dimension image layers, then on the 88 groups of second-dimension image layers, then on the 88 groups of third-dimension image layers, and finally on the first to 49th groups of first-dimension image layers.
The following takes the T-th group of first-dimension image layers to be identified as an example to describe the specific process of obtaining its confidence distribution; it can be understood that the confidence distributions of the other image layers to be identified may be obtained by the same method, and details are not repeated here. If there exist 88 groups of first-dimension image layers to be identified, then T satisfies 1 ≤ T ≤ 88.
In a specific implementation, the T-th group of first-dimension image layers to be identified may be input into the preset convolutional neural network model; after processing them, the preset convolutional neural network model may output the confidence distribution of the T-th group of first-dimension image layers to be identified. This confidence distribution may include, for each pixel of every image included in the T-th group, the confidence that the pixel belongs to the target object. For example, if the T-th group of first-dimension image layers to be identified includes the first to third frames of first-dimension images, the confidence distribution may include the confidence that each pixel in the first frame of first-dimension images belongs to the target object, the confidence that each pixel in the second frame belongs to the target object, and the confidence that each pixel in the third frame belongs to the target object. Here, the value range of each pixel's confidence of belonging to the target object may be [0, 1].
In the embodiment of the present invention, the confidence distribution may exist in the form of a confidence distribution table or in the form of a confidence distribution map; no specific limitation is imposed.
The preset convolutional neural network in the embodiment of the present invention may be trained on multiple groups of historical images in which the target image containing the target object has been marked. As shown in Fig. 4, the process of training the preset convolutional neural network model may include the following steps 401 to 403:
Step 401: obtain multiple groups of historical images as training samples.
Here, the historical images may be multiple groups of historical images selected in advance, or a single group of historical images selected in advance; the embodiment of the present invention imposes no limitation.
In the embodiment of the present invention, after the multiple groups of historical images are obtained, they may be used directly as training samples, or enhancement operations may first be performed on them and the results used as training samples. The enhancement operations include but are not limited to: random up/down and left/right translation by a set number of pixels (for example 0 to 20 pixels), random rotation by a set angle (for example −20 to 20 degrees), and random scaling by a set factor (for example 0.8 to 1.2 times). By performing enhancement operations on the historical images, the amount of training data can be expanded.
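The enhancement parameters above can be sampled as in the following sketch. The parameter ranges come from the text; the function name and the dictionary layout are illustrative, and applying the sampled shift/rotation/scale to an actual image would require an image-processing library not named by the patent.

```python
import random

def sample_augmentation(rng):
    """Draw one set of enhancement parameters in the ranges quoted above."""
    return {
        "shift_px": (rng.randint(0, 20), rng.randint(0, 20)),  # random translation
        "rotate_deg": rng.uniform(-20.0, 20.0),                # random rotation
        "scale": rng.uniform(0.8, 1.2),                        # random scaling
    }

params = sample_augmentation(random.Random(0))
```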
Step 402: manually mark the region belonging to the target object in the training samples.
In a specific implementation, the region belonging to the target object in the training samples may be marked by professionals such as doctors; the marked content is not limited to the center coordinates and diameter of the target object. Specifically, the region of the target object may be annotated by several doctors, the final region and its parameters may be determined by synthesizing the annotators' votes, and the result may be saved in the form of a mask map.
It should be noted that the manual marking of the target-object region and the enhancement of the training samples may be performed in either order: the region belonging to the target object may first be marked manually and the enhancement operations then applied to the marked training samples, or the enhancement operations may be applied first and the enhanced training samples marked manually afterwards.
Step 403: input the training samples into an initial convolutional neural network model for training, obtaining the preset convolutional neural network model.
In the embodiment of the present invention, the structure of the initial convolutional neural network model may include an input layer, a feature extraction module, down-sampling convolution blocks, up-sampling convolution blocks, a target detection network, and an output layer; or it may include an input layer, down-sampling convolution blocks, up-sampling convolution blocks, a target detection network, and an output layer. No specific limitation is imposed.
In a specific implementation, the training samples may first be preprocessed and the preprocessed samples then input into the initial convolutional neural network model; the preprocessing may include normalization, or may include other processes, without limitation. Further, after the confidence distribution output by the initial convolutional neural network model is obtained, a loss function may be computed from the output confidence distribution and the pre-marked mask map of the training sample; the back-propagation algorithm and the stochastic gradient descent (SGD) optimization algorithm may then be used to iterate, continuously updating the parameters of the initial convolutional neural network model. If in some training iteration the loss function is less than or equal to a preset threshold, the preset convolutional neural network model may be determined from the model parameters of that iteration.
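The training control flow above — iterate gradient updates, stop once the loss reaches a preset threshold, and keep the parameters of that iteration — can be sketched with a toy objective. The quadratic loss, learning rate, and threshold here are illustrative stand-ins for the real segmentation loss and SGD hyperparameters, which the patent does not specify.

```python
def train(theta0, grad, lr=0.1, threshold=1e-4, max_iter=10_000):
    """Iterate gradient updates until the loss reaches `threshold`."""
    theta = theta0
    loss = float("inf")
    for _ in range(max_iter):
        loss = 0.5 * (theta - 3.0) ** 2  # toy loss with minimum at theta = 3
        if loss <= threshold:            # stop criterion from the text
            break
        theta -= lr * grad(theta)        # SGD-style parameter update
    return theta, loss

final_theta, final_loss = train(0.0, grad=lambda t: t - 3.0)
```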
Fig. 5 is a structural schematic diagram of a preset convolutional neural network model provided in an embodiment of the present invention. The preset convolutional neural network model may be a 3-dimensional (3D) convolutional neural network model, such as a fully convolutional network (FCN) model or a U-NET model. As shown in Fig. 5, the preset convolutional neural network model may include a sequentially connected feature extraction module, down-sampling blocks, and up-sampling blocks. The feature extraction module may include consecutive first and second convolution units; the first convolution unit may include a 3D convolutional layer, a batch normalization (BN) layer, and an excitation function layer, and the second convolution unit may likewise include a 3D convolutional layer, a batch normalization layer, and a ReLU excitation function layer.
It should be noted that the excitation function in the embodiment of the present invention may be any of several types of excitation functions, for example the rectified linear unit (ReLU); no specific limitation is imposed.
In the embodiment of the present invention, the number of up-sampling blocks and down-sampling blocks in the preset convolutional neural network model may be configured empirically by those skilled in the art; for example, the model may include one down-sampling block and one up-sampling block, or multiple (two or more) up-sampling blocks and multiple down-sampling blocks, without limitation. Each down-sampling block may include a 3D down-sampling layer and a convolution feature extraction block, and the size of the 3D down-sampling layer may be 2*2*2; correspondingly, each up-sampling block may include a 3D deconvolution up-sampling layer, a splicing layer, and a convolution feature extraction block, and the size of the 3D deconvolution up-sampling layer may be 2*2*2. In the embodiment of the present invention, the splicing layer of an up-sampling block may splice the output of a down-sampling block's down-sampling layer to obtain a feature map.
Taking T = 1 as an example, in a specific implementation, after the first group of first-dimension image layers to be identified is input into the preset convolutional neural network model, the model may compute, from the first to third frames of first-dimension images included in that group, a 3-channel pixel array corresponding to those frames, and input the 3-channel pixel array into the feature extraction module. Correspondingly, the feature extraction module processes the 3-channel pixel array successively through the 3D convolutional layer, BN layer, and excitation function layer of the first convolution unit and the 3D convolutional layer, BN layer, and excitation function layer of the second convolution unit, thereby extracting the first feature image corresponding to the first group of first-dimension image layers to be identified. The first feature image may be represented as a four-dimensional vector; for example, its size may be 512*512*3*32. Further, the feature extraction module may send the first feature image to the first down-sampling block, the second down-sampling block, and the third down-sampling block, respectively.
In one example, the down-sampling blocks may include a first down-sampling block, a second down-sampling block, and a third down-sampling block, and the up-sampling blocks may include a first up-sampling block, a second up-sampling block, and a third up-sampling block arranged in correspondence with the first, second, and third down-sampling blocks; the first, second, and third down-sampling blocks may be sequentially connected to the feature extraction module. The down-sampling layer of the first down-sampling block may connect to the splicing layer of the first up-sampling block, the down-sampling layer of the second down-sampling block to the splicing layer of the second up-sampling block, and the down-sampling layer of the third down-sampling block to the splicing layer of the third up-sampling block.
Further, after receiving the first feature image, the first, second, and third down-sampling blocks may each extract, through their respective 3D down-sampling layers and convolution feature extraction blocks, the second, third, and fourth feature images from the first feature image. The size of the second feature image may be 256*256*3*32, the size of the third feature image may be 128*128*3*48, and the size of the fourth feature image may be 64*64*3*64. The first, second, and third down-sampling blocks may then output the second, third, and fourth feature images through their respective 3D down-sampling layers to the splicing layers of the first, second, and third up-sampling blocks, respectively.
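The feature-map sizes quoted above follow a simple pattern that can be checked mechanically: each down-sampling step halves the in-plane resolution while the 3-frame depth is kept. The channel counts (32, 48, 64) are taken from the text as-is; this is only a consistency check, not part of the model.

```python
# Feature-map sizes quoted in the text, checked against the halving rule.
sizes = [
    (512, 512, 3, 32),  # first feature image (feature extraction module)
    (256, 256, 3, 32),  # second feature image (first down-sampling block)
    (128, 128, 3, 48),  # third feature image (second down-sampling block)
    (64, 64, 3, 64),    # fourth feature image (third down-sampling block)
]
for (h1, w1, d1, _), (h2, w2, d2, _) in zip(sizes, sizes[1:]):
    assert h2 == h1 // 2 and w2 == w1 // 2 and d2 == d1
```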
In one example, the first, second, and third up-sampling blocks may each process the second, third, and fourth feature images through their respective convolution feature extraction blocks to obtain the fifth, sixth, and seventh feature images; the size of the fifth feature image may be 64*64*3*64, the size of the sixth feature image may be 128*128*3*48, and the size of the seventh feature image may be 256*256*3*32. In this way, the first, second, and third up-sampling blocks may splice the fifth, sixth, and seventh feature images, respectively, with the feature image of corresponding size output by a down-sampling block: for example, the first up-sampling block may splice the fifth feature image with the fourth feature image, the second up-sampling block may splice the sixth feature image with the third feature image, and the third up-sampling block may splice the seventh feature image with the second feature image.
In another example, for any up-sampling block, the feature image among the three output by the first to third down-sampling blocks whose size matches the size of the feature image output by the previous up-sampling block may be merged with it, as the input of that up-sampling block. For example, since the fifth feature image output by the first up-sampling block has size 64*64*3*64, the second up-sampling block may select from the second to fourth feature images the fourth feature image of size 64*64*3*64 and merge it with the fifth feature image as the input of the second up-sampling block; since the sixth feature image output by the second up-sampling block has size 128*128*3*48, the third up-sampling block may select the third feature image of size 128*128*3*48 and merge it with the sixth feature image as the input of the third up-sampling block; likewise, if the size of the seventh feature image output by the third up-sampling block is 256*256*3*32, the second feature image of size 256*256*3*32 may be selected from the second to fourth feature images and merged with the seventh feature image, as the feature image corresponding to the first group of first-dimension image layers to be identified.
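The size-matching rule above can be sketched as a lookup: for each up-sampling block, pick from the down-sampled feature images the one whose size equals the size of the previous up-sampling block's output, then merge the two. The function name and the string labels for the feature images are illustrative.

```python
def pick_skip(down_features, up_size):
    """Return the down-sampled feature whose size equals `up_size`."""
    for size, name in down_features:
        if size == up_size:
            return name
    raise LookupError("no down-sampled feature of matching size")

down = [
    ((256, 256, 3, 32), "second"),  # from the first down-sampling block
    ((128, 128, 3, 48), "third"),   # from the second down-sampling block
    ((64, 64, 3, 64), "fourth"),    # from the third down-sampling block
]
merged_with_fifth = pick_skip(down, (64, 64, 3, 64))    # matches the fifth feature image
merged_with_sixth = pick_skip(down, (128, 128, 3, 48))  # matches the sixth feature image
```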
Further, the third up-sampling block may also use its convolution kernel to deconvolve the feature image corresponding to the first group of first-dimension image layers to be identified, thereby obtaining the confidence distribution of the first group of first-dimension image layers to be identified (which may be a confidence distribution map or a confidence distribution table, without limitation); this confidence distribution may include, for each pixel in the first to third frames of first-dimension images, the confidence that the pixel belongs to the target object.
It should be noted that the above implementation is merely illustrative and does not limit the embodiment of the present invention. In a specific implementation, the number of up-sampling blocks and/or down-sampling blocks, the structure of the up-sampling blocks and/or down-sampling blocks, and the sizes of the feature images may be configured according to actual needs; for example, only 6 up-sampling blocks or only 6 down-sampling blocks may be provided, or a pooling layer, a deconvolution up-sampling layer, a splicing layer, and a convolution feature extraction module may be provided in an up-sampling block, without limitation.
Step 303: according to the confidence distributions of the default image in one or more of the dimensions, determine the target image corresponding to the target object from the default image.
The confidence distributions of the default image in one or more of the dimensions may include the confidence distribution information of every first-dimension image included in the 88 groups of first-dimension image layers to be identified, of every second-dimension image included in the 88 groups of second-dimension image layers to be identified, and of every third-dimension image included in the 88 groups of third-dimension image layers to be identified; the confidence distribution information of each image may include the confidence that each pixel of the image belongs to the target object.
In a possible implementation, for any pixel u in the default image, the target first-dimension images, target second-dimension images, and target third-dimension images containing pixel u may be selected from the 88 groups of first-dimension, second-dimension, and third-dimension image layers to be identified, respectively. The number of target first-dimension, target second-dimension, and target third-dimension images may be one frame or multiple frames. For example, when the preset sliding-window frame count is 3: if pixel u is an edge pixel of the default image, it may correspond to 1 frame of target first-dimension image, 1 frame of target second-dimension image, and 1 frame of target third-dimension image; if pixel u is adjacent to an edge pixel of the default image, it may correspond to 2 frames of each; and if pixel u is neither an edge pixel nor adjacent to one, it may correspond to 3 frames of target first-dimension images, 3 frames of target second-dimension images, and 3 frames of target third-dimension images.
Further, taking a pixel u that is neither an edge pixel of the default image nor adjacent to an edge pixel as an example: a first, second, and third confidence that pixel u belongs to the target object may be determined from the confidence distribution information of the 3 frames of target first-dimension images; a fourth, fifth, and sixth confidence from the confidence distribution information of the 3 frames of target second-dimension images; and a seventh, eighth, and ninth confidence from the confidence distribution information of the 3 frames of target third-dimension images. The average of the first to ninth confidences may then be taken as the target confidence that pixel u belongs to the target object.
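The target-confidence computation above is a plain average of the per-frame confidences a pixel collects, nine values for an interior pixel with a 3-frame window. The nine numbers below are hypothetical, purely for illustration.

```python
def target_confidence(confidences):
    """Mean of the per-frame confidences collected for one pixel."""
    return sum(confidences) / len(confidences)

# Nine hypothetical confidences for an interior pixel:
# three frames in each of the three dimensions.
nine = [0.9, 0.8, 0.85, 0.7, 0.75, 0.8, 0.9, 0.95, 0.85]
t = target_confidence(nine)
```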
Correspondingly, the embodiment of the present invention may determine the target image corresponding to the target object from the default image by way of confidence cutting. In one example, if the target confidence that pixel u belongs to the target object is less than a preset threshold, the pixel corresponding to pixel u may be deleted from the default image; if the target confidence is greater than or equal to the preset threshold, the pixel corresponding to pixel u may be retained in the default image. After this operation has been performed on all pixels of the default image, the 3-dimensional image formed by the retained pixels may be the target image corresponding to the target object. In another example, if the target confidence of pixel u is less than the preset threshold, pixel u may be deleted from the target first-dimension, target second-dimension, and target third-dimension images, and retained in them if the target confidence is greater than or equal to the preset threshold; after this operation has been performed on all pixels, the 90 frames of first-dimension images, 90 frames of second-dimension images, and 90 frames of third-dimension images may be merged, and the 3-dimensional image formed by the pixels retained in the merged image is the target image corresponding to the target object.
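The confidence-cutting step can be sketched as a threshold test per voxel: retain a voxel when its target confidence reaches the preset threshold, drop it otherwise. The 0.5 threshold and the dictionary representation are assumptions for illustration; the patent leaves the threshold value open.

```python
def cut_by_confidence(confidence_map, threshold=0.5):
    """Return, per voxel, whether it is retained in the target image."""
    return {voxel: conf >= threshold for voxel, conf in confidence_map.items()}

mask = cut_by_confidence({(0, 0, 0): 0.9, (0, 0, 1): 0.2})
```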
Step 103: measure the major diameter of the target object.
In an existing implementation, the major diameter of the target object may be determined by fitting a special shape: for example, if the shape of the target object resembles an ellipsoid, an ellipsoid fitting algorithm may be used to fit the target image corresponding to the target object to the image of a default ellipsoid, and the major diameter of the target object may then be calculated from the major-diameter formula of the default ellipsoid. This approach can generally only measure the size of regularly shaped target objects and cannot measure the size of irregularly shaped ones; moreover, target objects of different shapes require different default fitting algorithms, which makes algorithm development laborious.
On this basis, in the embodiment of the present invention, after the target image corresponding to the target object is obtained, the pixels located on the edge of the target image may be enumerated one by one and the distance between any two edge pixels measured; the maximum distance between any two edge pixels may then be taken as the major diameter of the target object. Compared with the prior art, determining the major diameter by enumerating edge pixels can measure the size of both regularly and irregularly shaped target objects, so that the size of a target object of any shape can be measured without providing a separate fitting algorithm for each shape, reducing the development difficulty.
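The enumeration above amounts to taking the maximum pairwise distance over the edge pixels. A minimal sketch, with hypothetical edge coordinates:

```python
from itertools import combinations
from math import dist

def major_diameter(edge_points):
    """Maximum distance between any two edge pixels of the target image."""
    return max(dist(p, q) for p, q in combinations(edge_points, 2))

edges = [(0, 0, 0), (4, 0, 0), (0, 3, 0)]  # hypothetical edge pixels
d = major_diameter(edges)                  # pair (4,0,0)-(0,3,0) is farthest
```

This brute-force pairwise scan is quadratic in the number of edge pixels, which matches the enumeration the text describes.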
Step 104: measure the minor axis of the target object.
In a specific implementation, after the major diameter of the target object is determined, the target image may be cut with planes perpendicular to the major diameter, obtaining at least one frame of image. For example, the target image may first be rotated so that its major diameter coincides with a default axis, and preset planes perpendicular to the default axis may then be used to cut the target image into multiple frames of images, any one of which is perpendicular to the major diameter of the target image.
Further, for any frame (for example image A) of the at least one frame of image, the pixels located on the edge of image A may be enumerated one by one, the distance between any two edge pixels of image A measured, and the maximum such distance taken as the candidate minor axis corresponding to image A. After the candidate minor axes corresponding to the at least one frame of image have been measured, the largest candidate minor axis may be taken as the minor axis of the target object.
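The minor-axis measurement can be sketched the same way: compute each slice's maximum edge-to-edge distance as its candidate minor axis, then keep the largest candidate. The slice coordinates below are hypothetical.

```python
from itertools import combinations
from math import dist

def minor_axis(slices):
    """slices: one list of edge pixels per cutting plane perpendicular to the major diameter."""
    candidates = [
        max(dist(p, q) for p, q in combinations(edge, 2))  # candidate minor axis per slice
        for edge in slices
        if len(edge) >= 2
    ]
    return max(candidates)

slices = [[(0, 0), (0, 2)], [(0, 0), (3, 0), (1, 0)]]  # hypothetical slice edges
m = minor_axis(slices)  # candidates 2.0 and 3.0
```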
In one example, the total number of pixels included in the target image may also be counted, and the volume of the target object determined from the correspondence between each pixel on the target image and the pixels on the initial image. For example, if the target image includes 100 pixels and the ratio of the target image to the initial image is 1:10 in the x coordinate, 1:5 in the y coordinate, and 1:7 in the z coordinate, the volume of the target object may be 100*10*5*7.
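The volume example above works out as follows; the function name is illustrative.

```python
def target_volume(pixel_count, x_ratio, y_ratio, z_ratio):
    """Scale the pixel count by the per-axis target-to-initial image ratios."""
    return pixel_count * x_ratio * y_ratio * z_ratio

v = target_volume(100, 10, 5, 7)  # the example above: 100 pixels, ratios 1:10, 1:5, 1:7
```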
In the above embodiment of the present invention, after the default image is obtained, the target image corresponding to the target object is determined from the default image; the distance between any two edge pixels on the target image is measured, and the maximum distance between any two edge pixels is taken as the major diameter of the target object. Further, the target image is cut with planes perpendicular to the major diameter to obtain at least one frame of image; for any frame of the at least one frame of image, the distance between any two edge pixels of the image is measured, and the maximum such distance is taken as the candidate minor axis corresponding to the image; the largest candidate minor axis among those corresponding to the at least one frame of image is taken as the minor axis of the target object. In the embodiment of the present invention, after the target image corresponding to the target object is determined, the major diameter and minor axis are measured automatically from the pixels on the target image, without relying on manual positioning, so that the efficiency and accuracy of measurement can be improved. Moreover, because the major diameter is determined by enumerating the edge pixels of the target image and the minor axis by enumerating the edge pixels of images perpendicular to the major diameter, the size of both regularly and irregularly shaped target objects can be measured, so that the size of a target object of any shape can be measured without providing a fitting algorithm for each shape, reducing the development difficulty.
For the above method flow, an embodiment of the present invention further provides an apparatus for measuring object size; the specific content of the apparatus may be implemented with reference to the above method.
Fig. 6 shows an apparatus for measuring object size provided in an embodiment of the present invention, the apparatus including:
an obtaining module 601, configured to obtain a default image;
a determining module 602, configured to determine the target image corresponding to the target object from the default image;
a measuring module 603, configured to measure the distance between any two edge pixels on the target image, and take the maximum distance between any two edge pixels as the major diameter of the target object; and to cut the target image with planes perpendicular to the major diameter to obtain at least one frame of image, and for any frame of the at least one frame of image, measure the distance between any two edge pixels of the image, take the maximum such distance as the candidate minor axis corresponding to the image, and take the largest candidate minor axis among those corresponding to the at least one frame of image as the minor axis of the target object.
Optionally, the measuring module 603 is further configured to:
count the total number of pixels included in the target image, and determine the volume of the target object according to the total number of pixels included in the target image and a preset ratio.
Optionally, the determining module 602 is specifically configured to:
for the image layers to be identified in any dimension of the default image, input the image layers to be identified in that dimension into a preset convolutional neural network model to obtain the confidence distribution of the default image in that dimension; where the image layers to be identified in each dimension are one frame or multiple consecutive frames of images obtained by cutting the default image with the bisecting plane corresponding to that dimension, the bisecting planes of the dimensions are not parallel, and the confidence distribution of the default image in a dimension includes, for each pixel on the at least one frame of image included in the image layers to be identified in that dimension, the confidence that the pixel belongs to the target object; and
according to the confidence distributions of the default image in each dimension, determine the target image corresponding to the target object from the default image.
Optionally, the determining module 602 is specifically configured to:
for any pixel in the default image, determine from the image layers to be identified in each dimension the frame or frames of images containing the pixel;
according to the confidence distribution of each image layer to be identified to which the frame or frames containing the pixel belong, determine the target confidence that the pixel belongs to the target object; and
determine the region formed by the one or more pixels whose target confidence is greater than the preset threshold as the target image corresponding to the target object.
Optionally, the obtaining module 601 is specifically configured to:
obtain the initial image and the three-dimensional coordinates of the target object; and
with the three-dimensional coordinates of the target object as the center and a preset distance as the radius, segment from the initial image the default image containing the target object.
It can be seen from the above that, in the above embodiment of the present invention, after the default image is obtained, the target image corresponding to the target object is determined from it; the maximum distance between any two edge pixels on the target image is taken as the major diameter of the target object; the target image is then cut with planes perpendicular to the major diameter, the maximum edge-to-edge distance of each resulting frame is taken as that frame's candidate minor axis, and the largest candidate minor axis is taken as the minor axis of the target object. After the target image is determined, the major diameter and minor axis are measured automatically from its pixels without relying on manual positioning, which can improve the efficiency and accuracy of measurement; and because both are determined by enumerating edge pixels, the size of both regularly and irregularly shaped target objects can be measured, so that the size of a target object of any shape can be measured without providing a fitting algorithm for each shape, reducing the development difficulty.
Based on the same inventive concept, an embodiment of the present invention further provides a computing device comprising at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program that, when executed by the processing unit, causes the processing unit to perform the steps of the method for measuring the size of an object. Fig. 7 is a schematic diagram of the hardware architecture of the computing device according to an embodiment of the present invention; the computing device may specifically be a desktop computer, a portable computer, a smartphone, a tablet computer, or the like. Specifically, the computing device may comprise a memory 701, a processor 702, and a computer program stored in the memory; when executing the program, the processor 702 implements the steps of any of the methods for measuring the size of an object in the above embodiments. The memory 701 may comprise read-only memory (ROM) and random access memory (RAM), and provides the processor 702 with the program instructions and data stored in the memory 701.
Further, the computing device described in the embodiments of the present application may also comprise an input device 703, an output device 704, and the like. The input device 703 may comprise a keyboard, a mouse, a touch screen, etc.; the output device 704 may comprise a display device such as a liquid crystal display (Liquid Crystal Display, LCD), a cathode ray tube (Cathode Ray Tube, CRT), a touch screen, etc. The memory 701, the processor 702, the input device 703, and the output device 704 may be connected by a bus or in other ways; in Fig. 7, connection by a bus is taken as an example. The processor 702 calls the program instructions stored in the memory 701 and, according to the obtained program instructions, executes the method for measuring the size of an object provided by the above embodiments.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program executable by a computing device; when the program runs on the computing device, it causes the computing device to perform the steps of the method for measuring the size of an object.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.

Claims (12)

1. A method for measuring a size of an object, characterized in that it comprises:
obtaining a preset image;
determining, from the preset image, a target image corresponding to a target object;
measuring the distance between every two edge pixels on the target image, and taking the maximum among the distances between every two edge pixels as the long diameter of the target object; and
slicing the target image with planes perpendicular to the long diameter to obtain at least one frame of image; for any frame in the at least one frame of image, measuring the distance between every two edge pixels of the frame, and taking the maximum among the distances between every two edge pixels as the candidate short diameter of the frame; and taking the maximum candidate short diameter among the candidate short diameters corresponding to the at least one frame of image as the short diameter of the target object.
2. The method according to claim 1, characterized in that the method further comprises:
counting the total number of pixels contained in the target image, and determining the volume of the target object according to the total number of pixels contained in the target image and a preset ratio.
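The volume computation in claim 2 reduces to a single multiplication. A minimal sketch, where `preset_ratio` is assumed to be the real-world volume represented by one pixel (for CT data, typically the product of the voxel spacings along the three dimensions):

```python
import numpy as np

def target_volume(target_mask, preset_ratio):
    """target_mask: boolean 3-D array marking the pixels of the target
    image; preset_ratio: real-world volume represented by one pixel,
    e.g. spacing_z * spacing_y * spacing_x in mm^3."""
    total_pixels = int(target_mask.sum())   # total number of pixels
    return total_pixels * preset_ratio

mask = np.zeros((4, 4, 4), dtype=bool)
mask[:2, :2, :2] = True                     # 8 voxels belong to the target
print(target_volume(mask, 0.5))             # 4.0 with 0.5 mm^3 per voxel
```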
3. The method according to claim 1, characterized in that determining, from the preset image, the target image corresponding to the target object comprises:
for the image layer to be identified of any dimension of the preset image, inputting the image layer to be identified of the dimension into a preset convolutional neural network model to obtain the confidence distribution of the preset image in the dimension; wherein the image layer to be identified of each dimension is one frame or multiple consecutive frames of images obtained by slicing the preset image with the slicing plane corresponding to the dimension; the slicing planes of the dimensions are not parallel to one another; and the confidence distribution of the preset image in the dimension comprises, for each pixel on the at least one frame of image contained in the image layer to be identified of the dimension, the confidence that the pixel belongs to the target object; and
determining, from the preset image, the target image corresponding to the target object according to the confidence distributions of the preset image in the respective dimensions.
4. The method according to claim 3, characterized in that determining, from the preset image, the target image corresponding to the target object according to the confidence distributions of the preset image in the respective dimensions comprises:
for any pixel in the preset image, determining, from the image layers to be identified of the respective dimensions, the one or more frames of images containing the pixel;
determining the target confidence that the pixel belongs to the target object according to the confidence distributions of the image layers to be identified to which the one or more frames of images containing the pixel belong; and
determining the region formed by the one or more pixels whose target confidence is greater than a preset threshold as the target image corresponding to the target object.
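Claims 3 and 4 describe a multi-view scheme: each dimension's slice stack is scored by a CNN, the per-dimension confidences for each pixel are combined into a target confidence, and the combined map is thresholded. The claims do not fix the combination rule, so the sketch below assumes simple averaging over dimensions, with all confidence maps already resampled to a common grid; the function name and the default threshold are likewise assumptions.

```python
import numpy as np

def fuse_and_threshold(confidence_maps, threshold=0.5):
    """confidence_maps: per-dimension arrays (e.g. from the axial,
    coronal, and sagittal image layers to be identified), each giving,
    per pixel, the confidence that the pixel belongs to the target
    object. Returns a boolean mask of the target image: the pixels
    whose target confidence is greater than the preset threshold."""
    target_conf = np.mean(np.stack(confidence_maps, axis=0), axis=0)
    return target_conf > threshold

m1 = np.array([[0.9, 0.2], [0.6, 0.1]])
m2 = np.array([[0.7, 0.4], [0.2, 0.1]])
mask = fuse_and_threshold([m1, m2])
```

Averaging is only one plausible fusion rule; a maximum or a learned weighting over dimensions would fit the claims equally well.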
5. The method according to any one of claims 1 to 4, characterized in that obtaining the preset image comprises:
obtaining an initial image and the three-dimensional coordinates of the target object; and
taking the three-dimensional coordinates of the target object as the center and a preset distance as the radius, cropping the preset image containing the target object from the initial image.
6. A device for measuring a size of an object, characterized in that it comprises:
an obtaining module, configured to obtain a preset image;
a determining module, configured to determine, from the preset image, a target image corresponding to a target object; and
a measuring module, configured to measure the distance between every two edge pixels on the target image and take the maximum among these distances as the long diameter of the target object; and to slice the target image with planes perpendicular to the long diameter to obtain at least one frame of image, measure, for any frame in the at least one frame of image, the distance between every two edge pixels of the frame, take the maximum among these distances as the candidate short diameter of the frame, and take the maximum candidate short diameter among the candidate short diameters corresponding to the at least one frame of image as the short diameter of the target object.
7. The device according to claim 6, characterized in that the measuring module is further configured to:
count the total number of pixels contained in the target image, and determine the volume of the target object according to the total number of pixels contained in the target image and a preset ratio.
8. The device according to claim 6, characterized in that the determining module is specifically configured to:
for the image layer to be identified of any dimension of the preset image, input the image layer to be identified of the dimension into a preset convolutional neural network model to obtain the confidence distribution of the preset image in the dimension; wherein the image layer to be identified of each dimension is one frame or multiple consecutive frames of images obtained by slicing the preset image with the slicing plane corresponding to the dimension; the slicing planes of the dimensions are not parallel to one another; and the confidence distribution of the preset image in the dimension comprises, for each pixel on the at least one frame of image contained in the image layer to be identified of the dimension, the confidence that the pixel belongs to the target object; and
determine, from the preset image, the target image corresponding to the target object according to the confidence distributions of the preset image in the respective dimensions.
9. The device according to claim 8, characterized in that the determining module is specifically configured to:
for any pixel in the preset image, determine, from the image layers to be identified of the respective dimensions, the one or more frames of images containing the pixel;
determine the target confidence that the pixel belongs to the target object according to the confidence distributions of the image layers to be identified to which the one or more frames of images containing the pixel belong; and
determine the region formed by the one or more pixels whose target confidence is greater than a preset threshold as the target image corresponding to the target object.
10. The device according to any one of claims 6 to 9, characterized in that the obtaining module is specifically configured to:
obtain an initial image and the three-dimensional coordinates of the target object; and
take the three-dimensional coordinates of the target object as the center and a preset distance as the radius, and crop the preset image containing the target object from the initial image.
11. A computing device, characterized in that it comprises at least one processing unit and at least one storage unit, wherein the storage unit stores a computer program that, when executed by the processing unit, causes the processing unit to perform the steps of the method according to any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that it stores a computer program executable by a computing device, wherein, when the program runs on the computing device, it causes the computing device to perform the steps of the method according to any one of claims 1 to 5.
CN201910713614.7A 2019-08-02 2019-08-02 A kind of method and device of measurement object size Pending CN110533638A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910713614.7A CN110533638A (en) 2019-08-02 2019-08-02 A kind of method and device of measurement object size

Publications (1)

Publication Number Publication Date
CN110533638A true CN110533638A (en) 2019-12-03

Family

ID=68661253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910713614.7A Pending CN110533638A (en) 2019-08-02 2019-08-02 A kind of method and device of measurement object size

Country Status (1)

Country Link
CN (1) CN110533638A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003265475A (en) * 2002-03-19 2003-09-24 Toshiba Medical System Co Ltd Ultrasonograph, image processor and image processing program
US20120321155A1 (en) * 2008-01-16 2012-12-20 Yuanzhong Li Method, apparatus, and program for measuring sizes of tumor regions
CN104102805A (en) * 2013-04-15 2014-10-15 上海联影医疗科技有限公司 Medical image information processing method and medical image information processing device
CN105389813A (en) * 2015-10-30 2016-03-09 上海联影医疗科技有限公司 Medical image organ recognition method and segmentation method
CN109102502A (en) * 2018-08-03 2018-12-28 西北工业大学 Pulmonary nodule detection method based on Three dimensional convolution neural network
CN109712131A (en) * 2018-12-27 2019-05-03 上海联影智能医疗科技有限公司 Quantization method, device, electronic equipment and the storage medium of Lung neoplasm feature

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553903A (en) * 2020-04-29 2020-08-18 北京优视魔方科技有限公司 Self-adaptive measuring method and device for focus area image
CN111553903B (en) * 2020-04-29 2024-03-08 北京优视魔方科技有限公司 Adaptive measurement method and device for focus area image
CN116681892A (en) * 2023-06-02 2023-09-01 山东省人工智能研究院 Image precise segmentation method based on multi-center polar mask model improvement
CN116681892B (en) * 2023-06-02 2024-01-26 山东省人工智能研究院 Image precise segmentation method based on multi-center polar mask model improvement

Similar Documents

Publication Publication Date Title
US11937962B2 (en) Systems and methods for automated and interactive analysis of bone scan images for detection of metastases
EP2116973B1 (en) Method for interactively determining a bounding surface for segmenting a lesion in a medical image
CN108717700B (en) Method and device for detecting length of long diameter and short diameter of nodule
CN110599528A (en) Unsupervised three-dimensional medical image registration method and system based on neural network
US10275909B2 (en) Systems and methods for an integrated system for visualizing, simulating, modifying and 3D printing 3D objects
CN108648192A (en) A kind of method and device of detection tubercle
CN110458830A (en) Image processing method, device, server and storage medium
CN107292884A (en) The method and device of oedema and hemotoncus in a kind of identification MRI image
US11954860B2 (en) Image matching method and device, and storage medium
WO2012116746A1 (en) Image processing device for finding corresponding regions in two image data sets of an object
WO2007026598A1 (en) Medical image processor and image processing method
CN110533029A (en) Determine the method and device of target area in image
CN110533638A (en) A kind of method and device of measurement object size
US9142017B2 (en) TNM classification using image overlays
CN110992310A (en) Method and device for determining partition where mediastinal lymph node is located
CN111275617B (en) Automatic splicing method and system for ABUS breast ultrasound panorama and storage medium
WO2012107057A1 (en) Image processing device
CN108510506A (en) A kind of tubular structure image partition method
Lou et al. Object-based deformation technique for 3D CT lung nodule detection
CN115018825B (en) Coronary artery dominant type classification method, classification device and storage medium
CN116958443A (en) SMPLX-based digital human quantitative detection model reconstruction method and application
CN114187252B (en) Image processing method and device, and method and device for adjusting detection frame
CN110533637B (en) Method and device for detecting object
Zheng et al. Coordinate-guided U-Net for automated breast segmentation on MRI images
CN112712507A (en) Method and device for determining calcified area of coronary artery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191203