CN108510493A - Boundary alignment method, storage medium and the terminal of target object in medical image - Google Patents

Boundary alignment method, storage medium and the terminal of target object in medical image

Info

Publication number
CN108510493A
CN108510493A (application CN201810310092.1A)
Authority
CN
China
Prior art keywords
target object
network model
segmentation
medical image
alignment method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810310092.1A
Other languages
Chinese (zh)
Inventor
周永进
曾雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201810310092.1A priority Critical patent/CN108510493A/en
Priority to PCT/CN2018/082986 priority patent/WO2019196099A1/en
Publication of CN108510493A publication Critical patent/CN108510493A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
        • G06T7/0012 Biomedical image inspection
        • G06T7/11 Region-based segmentation
        • G06T7/12 Edge-based segmentation
        • G06T7/174 Segmentation; Edge detection involving the use of two or more images
        • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
        • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
        • G06T2207/10016 Video; Image sequence
        • G06T2207/20081 Training; Learning
        • G06T2207/20084 Artificial neural networks [ANN]
        • G06T2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a boundary alignment method for a target object in a medical image, together with a storage medium and a terminal. The method is applied to the technical field of medical image processing and specifically comprises: a terminal obtains clinically acquired video data of the target object to be processed, and inputs the video data into a segmentation network model established in advance through off-line training; the segmentation network model automatically segments the region where the target object lies in the video data and outputs the segmentation result. By inputting the medical video to be processed into a segmentation network model established in advance through learning and training, the invention can accurately locate the boundary of the target object in real time, eliminates the influence of bright specks and artefacts in the imaging on the positioning of the upper and lower boundaries of the target region, improves the accuracy of boundary judgement, and reaches the high precision required clinically.

Description

Boundary alignment method, storage medium and the terminal of target object in medical image
Technical field
The present invention relates to the technical field of medical image processing, and in particular to a boundary alignment method for a target object in a medical image, a storage medium and a terminal.
Background technology
In the prior art, boundary alignment for a specific target object in a medical image has long been an important research topic. In ophthalmology, for example, corneal thickness is a condition that must be considered before surgery; it also influences the choice of surgical method, the size of the cutting region, and the assessment of post-operative recovery. Accurate positioning of the corneal boundary is therefore the primary premise of corneal thickness measurement.
At present, the corneal boundary is mostly located with methods directly related to image grayscale: the approximate position of the corneal region is determined first, the cornea is segmented by Otsu thresholding, the corneal boundary is determined, and thickness measurement follows. Because instrument error is difficult to eliminate, such grayscale-based segmentation is easily affected by random highlighted patches and strip-shaped image artefacts near the cornea, which seriously affects the accurate measurement of corneal thickness. The boundary alignment precision of the prior art therefore struggles to reach the high precision required clinically.
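The grayscale pipeline criticized here can be sketched as follows. This is a minimal illustration for context, not the patent's own method: `otsu_threshold` is a plain-NumPy reimplementation of Otsu's between-class-variance criterion, and the toy image merely stands in for a real B-scan.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)   # sum of all pixel values
    best_t, best_var = 0, 0.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]                        # pixels in class 0 (<= t)
        if w0 == 0:
            continue
        w1 = total - w0                      # pixels in class 1 (> t)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# A toy image with two gray populations; Otsu finds a cutoff between them.
img = np.array([[20, 30, 25], [200, 210, 190], [30, 205, 22]], dtype=np.uint8)
mask = img > otsu_threshold(img)
```

As the background notes, a bright random speck near the cornea shifts this histogram-based cutoff, which is exactly the failure mode the learned model is meant to avoid.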
The prior art therefore needs improvement and development.
Invention content
The technical problem to be solved by the present invention is, in view of the above drawbacks of the prior art, to provide a boundary alignment method for a target object in a medical image, a storage medium and a terminal, so as to overcome the problem that existing boundary alignment methods are easily disturbed by random artefacts during imaging and thus produce large errors — with insufficient precision in the vertical direction in particular — and cannot reach the high precision required clinically.
The technical solution adopted by the present invention to solve the above technical problem is as follows:
A boundary alignment method for a target object in a medical image, wherein the method is applied to the field of medical image processing and specifically comprises:
Step A: a terminal obtains clinically acquired video data of the target object to be processed, and inputs the video data into a segmentation network model established in advance through off-line training;
Step B: the segmentation network model automatically segments the region where the target object lies in the video data, and outputs the segmentation result.
The boundary alignment method for a target object in a medical image as described above, further comprising, before step A:
Step S: establishing, through off-line training, the segmentation network model that automatically segments the region where the target object lies.
The boundary alignment method for a target object in a medical image as described above, further comprising, after step B:
Step C: performing polynomial fitting on the segmented target object region to obtain the upper and lower boundaries of the target object.
The boundary alignment method for a target object in a medical image as described above, wherein step S comprises:
Step S1: obtaining the acquired target object video, selecting the first frame image, and annotating the region where the target object lies in the first frame image;
Step S2: obtaining the region features of the target object, and carrying out deep learning and training;
Step S3: introducing a sinusoidal displacement through piecewise affine transformation during training to simulate variations of the target object boundary, and establishing the segmentation network model that automatically segments the target object region.
The boundary alignment method for a target object in a medical image as described above, wherein step S further comprises:
adding conventional affine transformation, projection transformation or translation during training, and using image erosion or spur removal to smooth away the sharp edges in the target object region variation between adjacent frame images.
The boundary alignment method for a target object in a medical image as described above, wherein step B comprises:
Step B1: the segmentation network model performs target object region segmentation on each frame image in the target object video data in turn;
Step B2: when segmenting the target object region of the next frame image, the segmentation result of the previous frame image is automatically input into the segmentation network model;
Step B3: the segmentation network model takes the segmentation result of the previous frame image as a reference when segmenting the target object region of the next frame image.
The boundary alignment method for a target object in a medical image as described above, wherein step B further comprises:
inputting the segmentation result into the segmentation network model, so as to update the segmentation network model.
The boundary alignment method for a target object in a medical image as described above, further comprising, after step C:
Step D: calculating the thickness of the target object according to its upper and lower boundaries.
A storage medium storing a plurality of instructions, wherein the instructions are adapted to be loaded and executed by a processor to realize the boundary alignment method for a target object in a medical image described in any of the above.
A terminal, comprising: a processor and a storage medium communicatively connected to the processor, the storage medium being adapted to store a plurality of instructions; the processor being adapted to call the instructions in the storage medium to execute the boundary alignment method for a target object in a medical image described in any of the above.
Beneficial effects of the present invention: by inputting the medical video to be processed into a segmentation network model established in advance through learning and training, the present invention can accurately locate the boundary of the target object in real time, eliminates the influence of bright specks and artefacts in the imaging on the positioning of the upper and lower boundaries of the target region, improves the accuracy of boundary judgement, and reaches the high precision required clinically.
Description of the drawings
Fig. 1 is a flow diagram of a preferred embodiment of the boundary alignment method for a target object in a medical image of the present invention.
Fig. 2 shows the first frame image A and the annotated image B obtained when establishing the cornea segmentation network model of the present invention.
Fig. 3 is a schematic diagram of corneal boundary variations simulated by piecewise affine transformation when establishing the cornea segmentation network model of the present invention.
Fig. 4 shows the effect of segmenting the cornea with the cornea segmentation network model of the present invention, after polynomial fitting.
Fig. 5 is a functional schematic diagram of the terminal of the present invention.
Specific implementation mode
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the prior art, the boundary alignment precision for a target object in a medical image has difficulty reaching the high precision required clinically; the positioning analysis of the corneal region in particular has long been difficult to capture accurately. The present invention therefore provides a boundary alignment method for a target object in a medical image, and in particular for the corneal region. As shown in Fig. 1, Fig. 1 is a flow diagram of a preferred embodiment of the boundary alignment method for a target object in a medical image of the present invention. The method comprises:
Step S100: the terminal obtains clinically acquired video data of the target object to be processed, and inputs the video data into the cornea segmentation network model established in advance through off-line training.
Specifically, to better illustrate the technical solution of the present invention, the analysis of the corneal region is taken as the embodiment. In this embodiment, the target object is the cornea, and the segmentation network model is a cornea segmentation network model. Corneal thickness is a condition that many vision correction procedures must take into account; it also influences the choice of surgical method, the size of the cutting region, and the assessment of post-operative recovery. Accurate positioning of the corneal boundary is the primary premise of corneal thickness measurement.
Existing corneal boundary positioning mostly uses methods directly related to image grayscale: the approximate position of the corneal region is determined first, the cornea is then segmented by Otsu thresholding, the corneal boundary is determined, and thickness measurement follows. Because instrument error is difficult to eliminate, grayscale-based segmentation is extremely susceptible to random highlighted patches and strip-shaped artefacts near the cornea; the threshold obtained by the Otsu method then carries a large error. Nor is the problem limited to Otsu thresholding — segmentation results from methods such as Li, Mean and Yen are likewise unsatisfactory, seriously affecting the accurate measurement of corneal thickness and falling far short of the precision required clinically.
For segmentation of the corneal region, since the human body is elastic fibrous tissue, the segmentation method must remain robust to elastic deformation and brightness changes of the target region. In the field of medical image processing, deep neural networks are commonly used to judge the presence or absence of a specified disease, or to segment lesions or vital organs of the human body. The development of deep learning has greatly improved the accuracy of early computer-aided diagnostic imaging and the consultation efficiency of doctors, and has supported the sound development of the medical industry; deep learning will have an even broader prospect in the medical field.
Therefore, this embodiment uses deep learning to establish a cornea segmentation network model that can automatically segment the corneal region in a video image. Specifically, a clinically acquired cornea video is obtained, the first frame image is selected, and the corneal region in the first frame image is annotated. As shown in Fig. 2, Fig. 2 shows the first frame image A and the annotated image B obtained when establishing the cornea segmentation network model. Preferably, the corneal region is annotated by manual tracing to ensure accuracy and thus a more precise network model. The features of the annotated corneal region are then obtained, and deep learning and training are carried out. During training, a sinusoidal displacement is introduced through piecewise affine transformation to simulate variations of the corneal boundary; Fig. 3 is a schematic diagram of corneal boundary variations simulated in this way. The present invention performs deep learning based on a convolutional neural network; the features learned by the deep network are insensitive to random specks, so the upper and lower boundaries of the cornea can be located accurately and robustly. Image features are acquired automatically by deep learning instead of by cumbersome manual feature engineering, which excludes the interference of random noise and improves the precision of feature acquisition.
Further, to simulate the segmentation result of the previous frame more accurately, conventional affine transformations, projection transformations or translations (such as vertical displacement) are also added during off-line training. It is worth noting that in this step an image erosion with a one-pixel structuring element, or a spur removal technique, must be used to smooth away the sharp edges in the corneal region variation between adjacent frame images, so as to better simulate the variation of the corneal boundary. The affine transformations, projection transformations, translations and smoothing techniques added in the above steps merely illustrate the technical solution of the present invention and do not limit it; transformation or smoothing techniques of other forms all fall within the scope of protection of the present invention.
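A minimal sketch of the erosion-based smoothing step, under the assumption of a 3x3 cross (4-neighbourhood) structuring element — the patent only says a one-pixel structuring element, so the exact element is a guess:

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 cross structuring element: a pixel survives
    only if it and its 4 neighbours are all foreground. Single-pixel spurs
    and sharp edges on the mask boundary are removed."""
    m = np.pad(mask.astype(bool), 1, constant_values=False)
    return (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
            & m[1:-1, :-2] & m[1:-1, 2:])

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 1:6] = True      # a 3x5 band, like a thin corneal slab
mask[1, 3] = True          # a one-pixel spike on the upper edge
smooth = erode(mask)       # spike gone, band core preserved
```

In practice a library routine (e.g. a morphological erosion from an image-processing package) would replace this hand-rolled version.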
This embodiment embeds the established cornea segmentation network model in the terminal, so that the terminal can perform cornea segmentation automatically. When cornea segmentation of clinically acquired video data is required, the terminal obtains the cornea video data to be processed and inputs it into the cornea segmentation network model established through off-line training as described above.
Further, step S200: the segmentation network model automatically segments the region where the target object lies in the video data, and outputs the segmentation result.
Preferably, step S200 specifically comprises:
Step S201: the segmentation network model performs target object region segmentation on each frame image in the target object video data in turn;
Step S202: when segmenting the target object region of the next frame image, the segmentation result of the previous frame image is automatically input into the segmentation network model;
Step S203: the segmentation network model takes the segmentation result of the previous frame image as a reference when segmenting the target object region of the next frame image.
In specific implementation, since the target object in this embodiment is the cornea and the segmentation network model is the cornea segmentation network model, the model segments the corneal region of each frame image in the cornea video data in turn. Conventional single-image processing does not consider the frame sequence; the cornea segmentation network model of the present invention, by contrast, automatically feeds the segmentation result of the previous frame into the model when segmenting the corneal region of the next frame. Taking the previous result as a reference, it makes a rough estimate of the corneal boundary and then segments the corneal region of the next frame, thereby further improving the precision of cornea segmentation.
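The frame-by-frame loop with the previous mask as a prior can be sketched generically. Here `toy_model` is a deliberately trivial stand-in for the trained network's forward pass (the patent does not disclose the architecture); only the loop structure reflects the described method.

```python
import numpy as np

def segment_video(frames, model, first_mask):
    """Run a segmentation model over a video, feeding each frame together
    with the previous frame's mask as a rough localisation prior.
    `model(frame, prior)` stands in for the trained network's forward pass."""
    masks, prior = [], first_mask
    for frame in frames:
        mask = model(frame, prior)   # prior constrains where to look
        masks.append(mask)
        prior = mask                 # becomes the reference for the next frame
    return masks

# Toy stand-in: "segment" by thresholding, gated by the prior.
def toy_model(frame, prior):
    return (frame > 128) & (prior | (frame > 200))

frames = [np.full((4, 4), v, dtype=np.uint8) for v in (150, 150, 100)]
init = np.ones((4, 4), dtype=bool)   # e.g. from the annotated first frame
out = segment_video(frames, toy_model, init)
```

The point of the pattern is temporal consistency: each mask both is an output and conditions the next prediction.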
Preferably, this embodiment also feeds the segmentation result back into the cornea segmentation network model to update it, so as to further improve the precision of cornea segmentation.
Specifically, this embodiment can fit polynomials to the upper and lower boundaries of the segmentation result; the final result is shown in Fig. 4, which shows the effect of segmenting the cornea with the cornea segmentation network model after polynomial fitting. As can be seen from Fig. 4, even if random bright artefacts are present, their influence on the fitting result is very small. It is worth noting that because the polynomial degree is high, over-fitting easily occurs; this technique therefore ignores a region of about 60 pixels at the left and right boundaries of the image when fitting, which greatly improves the accuracy of the fit.
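The boundary fitting with ~60-pixel side margins excluded can be sketched with `numpy.polyfit`. The polynomial degree here is an illustrative assumption — the text only says the degree is high; the margin value follows the "about 60 pixels" in the description.

```python
import numpy as np

def fit_boundary(boundary_y, margin=60, degree=4):
    """Fit a polynomial to a boundary curve (one y per column), ignoring
    `margin` columns at each side where edge artefacts concentrate.
    degree=4 is an illustrative choice, not a value from the patent."""
    x = np.arange(len(boundary_y))
    sel = slice(margin, len(boundary_y) - margin)
    coeffs = np.polyfit(x[sel], boundary_y[sel], degree)
    return np.polyval(coeffs, x)   # evaluated over the full image width

width = 400
x = np.arange(width)
true_y = 100 + 0.001 * (x - 200) ** 2   # a smooth parabola-like boundary
noisy = true_y.copy()
noisy[:5] += 30                          # artefact hugging the left border
fitted = fit_boundary(noisy)             # margin excludes the artefact
```

Because the corrupted columns fall inside the excluded margin, the fit recovers the clean curve across the centre of the image.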
Further, this embodiment can also calculate the corneal thickness from the determined upper and lower boundaries of the cornea. To further improve robustness, the thickness is calculated as the average over a region of 20 pixel columns, 10 on each side of the image centre. Since the corneal boundary measured automatically by the established cornea segmentation network model is more accurate, the corneal thickness obtained is also more accurate.
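The thickness computation over the 20 centre columns can be sketched directly; units are pixels, and converting to physical thickness (which the patent does not detail) would require the scan's pixel spacing.

```python
import numpy as np

def corneal_thickness(upper_y, lower_y, half_width=10):
    """Thickness in pixels, averaged over the 2*half_width centre columns
    (the description averages 20 columns around the image centre)."""
    c = len(upper_y) // 2
    band = slice(c - half_width, c + half_width)
    return float(np.mean(lower_y[band] - upper_y[band]))

upper = np.full(200, 40.0)   # fitted upper boundary (row per column)
lower = np.full(200, 95.0)   # fitted lower boundary
t = corneal_thickness(upper, lower)
```

Averaging over a central band rather than a single column damps residual per-column jitter in the fitted boundaries.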
Of course, although the boundary alignment method for a target object in a medical image proposed by the present invention takes corneal boundary positioning as its embodiment, the present invention is not limited to the processing of medical images and can also be used for segmentation in natural-image video.
Based on the above embodiments, the present invention also discloses a terminal, as shown in Fig. 5, comprising: a processor (processor) 10 and a storage medium (memory) 20 connected to the processor 10; wherein the processor 10 calls the program instructions in the storage medium 20 to execute the method provided by the above embodiments, for example:
Step S100: the terminal obtains clinically acquired video data of the target object to be processed, and inputs the video data into the segmentation network model established in advance through off-line training;
Step S200: the segmentation network model automatically segments the region where the target object lies in the video data, and outputs the segmentation result.
An embodiment of the present invention also provides a storage medium storing computer instructions that cause a computer to execute the methods provided by the above embodiments.
In conclusion, in the boundary alignment method, storage medium and terminal for a target object in a medical image provided by the present invention, the method is applied to the technical field of medical image processing and specifically comprises: a terminal obtains clinically acquired video data of the target object to be processed and inputs the video data into a segmentation network model established in advance through off-line training; the segmentation network model automatically segments the region where the target object lies in the video data and outputs the segmentation result. By inputting the medical video to be processed into a segmentation network model established in advance through learning and training, the present invention can accurately locate the boundary of the target object in real time, eliminates the influence of bright specks and artefacts in the imaging on the positioning of the upper and lower boundaries of the target region, improves the accuracy of boundary judgement, and reaches the high precision required clinically.
It should be understood that the application of the present invention is not limited to the above examples; those of ordinary skill in the art can make improvements or transformations based on the above description, and all such improvements and transformations shall fall within the scope of protection of the appended claims of the present invention.

Claims (10)

1. A boundary alignment method for a target object in a medical image, characterized in that the method is applied to the field of medical image processing and specifically comprises:
Step A: a terminal obtains clinically acquired video data of the target object to be processed, and inputs the video data into a segmentation network model established in advance through off-line training;
Step B: the segmentation network model automatically segments the region where the target object lies in the video data, and outputs the segmentation result.
2. The boundary alignment method for a target object in a medical image according to claim 1, characterized in that before step A the method further comprises:
Step S: establishing, through off-line training, the segmentation network model that automatically segments the region where the target object lies.
3. The boundary alignment method for a target object in a medical image according to claim 1, characterized in that after step B the method further comprises:
Step C: performing polynomial fitting on the segmented target object region to obtain the upper and lower boundaries of the target object.
4. The boundary alignment method for a target object in a medical image according to claim 2, characterized in that step S comprises:
Step S1: obtaining the acquired target object video, selecting the first frame image, and annotating the region where the target object lies in the first frame image;
Step S2: obtaining the region features of the target object, and carrying out deep learning and training;
Step S3: introducing a sinusoidal displacement through piecewise affine transformation during training to simulate variations of the target object boundary, and establishing the segmentation network model that automatically segments the target object region.
5. The boundary alignment method for a target object in a medical image according to claim 2, characterized in that step S further comprises:
adding conventional affine transformation, projection transformation or translation during training, and using image erosion or spur removal to smooth away the sharp edges in the target object region variation between adjacent frame images.
6. The boundary alignment method for a target object in a medical image according to claim 1, characterized in that step B comprises:
Step B1: the segmentation network model performs target object region segmentation on each frame image in the target object video data in turn;
Step B2: when segmenting the target object region of the next frame image, the segmentation result of the previous frame image is automatically input into the segmentation network model;
Step B3: the segmentation network model takes the segmentation result of the previous frame image as a reference when segmenting the target object region of the next frame image.
7. The boundary alignment method for a target object in a medical image according to claim 1, characterized in that step B further comprises:
inputting the segmentation result into the segmentation network model, so as to update the segmentation network model.
8. The boundary alignment method for a target object in a medical image according to claim 3, characterized in that after step C the method further comprises:
Step D: calculating the thickness of the target object according to its upper and lower boundaries.
9. A storage medium storing a plurality of instructions, characterized in that the instructions are adapted to be loaded and executed by a processor to realize the boundary alignment method for a target object in a medical image according to any one of claims 1-8.
10. A terminal, characterized by comprising: a processor and a storage medium communicatively connected to the processor, the storage medium being adapted to store a plurality of instructions; the processor being adapted to call the instructions in the storage medium to execute the boundary alignment method for a target object in a medical image according to any one of claims 1-8.
CN201810310092.1A 2018-04-09 2018-04-09 Boundary alignment method, storage medium and the terminal of target object in medical image Pending CN108510493A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810310092.1A CN108510493A (en) 2018-04-09 2018-04-09 Boundary alignment method, storage medium and the terminal of target object in medical image
PCT/CN2018/082986 WO2019196099A1 (en) 2018-04-09 2018-04-13 Method for positioning boundaries of target object in medical image, storage medium, and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810310092.1A CN108510493A (en) 2018-04-09 2018-04-09 Boundary alignment method, storage medium and the terminal of target object in medical image

Publications (1)

Publication Number Publication Date
CN108510493A true CN108510493A (en) 2018-09-07

Family

ID=63381276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810310092.1A Pending CN108510493A (en) 2018-04-09 2018-04-09 Boundary alignment method, storage medium and the terminal of target object in medical image

Country Status (2)

Country Link
CN (1) CN108510493A (en)
WO (1) WO2019196099A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969640A (en) * 2018-09-29 2020-04-07 TCL Corporation Video image segmentation method, terminal device and computer-readable storage medium
CN111627017A (en) * 2020-05-29 2020-09-04 Kunshan Rongying Medical Technology Co., Ltd. Blood vessel lumen automatic segmentation method based on deep learning
CN112396601A (en) * 2020-12-07 2021-02-23 Sun Yat-sen University Real-time neurosurgical instrument segmentation method and device based on endoscope images and storage medium
CN112819831A (en) * 2021-01-29 2021-05-18 Beijing Xiaobai Shiji Network Technology Co., Ltd. Segmentation model generation method and device based on convolutional LSTM and multi-model fusion
WO2021171255A1 (en) * 2020-02-26 2021-09-02 Bright Clinical Research Limited A radar system for dynamically monitoring and guiding ongoing clinical trials
CN113358042A (en) * 2021-06-30 2021-09-07 Yangtze Memory Technologies Co., Ltd. Method for measuring film thickness
US11734826B2 (en) 2018-11-27 2023-08-22 Tencent Technology (Shenzhen) Company Limited Image segmentation method and apparatus, computer device, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435456A (en) * 2021-02-08 2021-09-24 China Petroleum & Chemical Corporation Rock slice component identification method and device based on machine learning and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102136135A (en) * 2011-03-16 2011-07-27 Tsinghua University Method for extracting the inner contour of the cornea and the inner contour of the anterior chamber from optical coherence tomography images of the anterior segment of the eye
US20160171688A1 (en) * 2010-01-20 2016-06-16 Duke University Segmentation and identification of layered structures in images
CN106846314A (en) * 2017-02-04 2017-06-13 Soochow University Image segmentation method based on post-operative cornea OCT image data
CN107274406A (en) * 2017-08-07 2017-10-20 Beijing Shenrui Bolian Technology Co., Ltd. Method and device for detecting sensitive regions

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015199257A1 (en) * 2014-06-25 2015-12-30 Samsung Electronics Co., Ltd. Apparatus and method for supporting acquisition of area-of-interest in ultrasound image
US9607224B2 (en) * 2015-05-14 2017-03-28 Google Inc. Entity based temporal segmentation of video streams
CN107622257A (en) * 2017-10-13 2018-01-23 Shenzhen Institute of Future Media Technology Neural network training method and three-dimensional gesture pose estimation method


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ANNA K. et al.: "Learning Video Object Segmentation from Static Images", arXiv *
MARTA E.R. et al.: "Corneal deformation dynamics in normal and glaucoma patients utilizing Scheimpflug imaging", 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) *
OLAF R. et al.: "U-Net: Convolutional Networks for Biomedical Image Segmentation", arXiv *
WANG Yiding et al.: Digital Image Processing, Xi'an: Xidian University Press, 31 August 2015 *
WANG Bingxi et al.: Digital Watermarking Technology, Xi'an: Xidian University Press, 30 November 2003 *
TIAN Xiaolin et al.: Optical Coherence Tomography Image Processing and Applications, Beijing: Beijing Institute of Technology Press, 31 January 2015 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110969640A (en) * 2018-09-29 2020-04-07 TCL Corporation Video image segmentation method, terminal device and computer-readable storage medium
US11734826B2 (en) 2018-11-27 2023-08-22 Tencent Technology (Shenzhen) Company Limited Image segmentation method and apparatus, computer device, and storage medium
WO2021171255A1 (en) * 2020-02-26 2021-09-02 Bright Clinical Research Limited A radar system for dynamically monitoring and guiding ongoing clinical trials
CN111627017A (en) * 2020-05-29 2020-09-04 Kunshan Rongying Medical Technology Co., Ltd. Blood vessel lumen automatic segmentation method based on deep learning
CN111627017B (en) * 2020-05-29 2024-02-23 Suzhou Bodong Rongying Medical Technology Co., Ltd. Automatic segmentation method for vascular lumen based on deep learning
CN112396601A (en) * 2020-12-07 2021-02-23 Sun Yat-sen University Real-time neurosurgical instrument segmentation method and device based on endoscope images and storage medium
CN112396601B (en) * 2020-12-07 2022-07-29 Sun Yat-sen University Real-time neurosurgical instrument segmentation method based on endoscope images
CN112819831A (en) * 2021-01-29 2021-05-18 Beijing Xiaobai Shiji Network Technology Co., Ltd. Segmentation model generation method and device based on convolutional LSTM and multi-model fusion
CN112819831B (en) * 2021-01-29 2024-04-19 Beijing Xiaobai Shiji Network Technology Co., Ltd. Segmentation model generation method and device based on convolutional LSTM and multi-model fusion
CN113358042A (en) * 2021-06-30 2021-09-07 Yangtze Memory Technologies Co., Ltd. Method for measuring film thickness
CN113358042B (en) * 2021-06-30 2023-02-14 Yangtze Memory Technologies Co., Ltd. Method for measuring film thickness

Also Published As

Publication number Publication date
WO2019196099A1 (en) 2019-10-17

Similar Documents

Publication Publication Date Title
CN108510493A (en) Boundary alignment method, storage medium and the terminal of target object in medical image
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
CN109859203B (en) Defect tooth image identification method based on deep learning
CN107665486B (en) Automatic splicing method and device applied to X-ray images and terminal equipment
CN111161290B (en) Image segmentation model construction method, image segmentation method and image segmentation system
CN107886503A Digestive tract anatomical position recognition method and device
CN104899876A Fundus image blood vessel segmentation method based on adaptive difference of Gaussians
CN111951221B (en) Glomerular cell image recognition method based on deep neural network
CN111862044B (en) Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium
CN109166124A Retinal vascular morphology quantification method based on connected regions
CN112465772B (en) Fundus colour photographic image blood vessel evaluation method, device, computer equipment and medium
CN108198185B Segmentation method and device for fundus lesion images, storage medium and processor
CN113436070B (en) Fundus image splicing method based on deep neural network
CN108272434A Method and device for processing fundus images
CN109087310B (en) Meibomian gland texture region segmentation method and system, storage medium and intelligent terminal
CN107292835A Method and device for automatic vectorization of retinal vessels in fundus images
CN106446805B Optic cup segmentation method and system in fundus photographs
CN106548491B Image registration method, image fusion method and device
CN106846293A (en) Image processing method and device
CN109754388B (en) Carotid artery stenosis degree calculation method and device and storage medium
CN114627067A (en) Wound area measurement and auxiliary diagnosis and treatment method based on image processing
CN114419181A (en) CTA image reconstruction method and device, display method and device
US20240005494A1 (en) Methods and systems for image quality assessment
CN112950737A (en) Fundus fluorescence radiography image generation method based on deep learning
CN116312986A (en) Three-dimensional medical image labeling method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2018-09-07