CN108615235A - Method and device for processing a temporal ear image - Google Patents


Info

Publication number
CN108615235A
CN108615235A (publication number) · CN201810401351.1A (application number)
Authority
CN
China
Prior art keywords
target
ear
image
ossicle
temporal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810401351.1A
Other languages
Chinese (zh)
Other versions
CN108615235B (en)
Inventor
杨琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Zhuojian Information Technology Co ltd
Original Assignee
Beijing Pat Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Pat Intelligent Technology Co Ltd filed Critical Beijing Pat Intelligent Technology Co Ltd
Priority to CN201810401351.1A priority Critical patent/CN108615235B/en
Publication of CN108615235A publication Critical patent/CN108615235A/en
Application granted granted Critical
Publication of CN108615235B publication Critical patent/CN108615235B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a method and device for processing a temporal ear image. For a temporal ear three-dimensional image of a target object, the method obtains a target ossicle image corresponding to the ossicle structure through a pre-trained first preset model, then identifies, through a second preset model, a target area where the ossicle structure in the target ossicle image differs from the ossicle structure in a preset ossicle image, and displays the target area on a screen by adjusting the display direction of the target temporal ear three-dimensional image. This narrows the range a professional must examine in the target temporal ear three-dimensional image, speeding up the examination and improving efficiency. The method identifies the target area through pre-trained models and displays it on the screen by adjusting the display angle, which shortens the time a professional spends examining the target temporal ear three-dimensional image.

Description

Method and device for processing a temporal ear image
Technical Field
The embodiment of the invention relates to the technical field of image processing and machine learning, in particular to a method and a device for processing a temporal ear image.
Background
The ear structure is complex and fine, and examination of the ear currently relies mainly on imaging, especially High Resolution CT (HRCT). However, CT post-processing is complex, and determining whether the examined ear structure differs from a standard ear structure usually requires careful reading of the CT images, which is slow and inefficient.
In the process of implementing the embodiment of the invention, the inventor finds that due to the complex structure of the ear part, a professional needs to check for a long time to obtain the checking result, and the checking efficiency is low.
Disclosure of Invention
The technical problem addressed by the invention is that, because of the complex structure of the ear, a professional must examine the images for a long time before obtaining a result, so examination efficiency is low.
In view of the above technical problem, an embodiment of the present invention provides a method for processing a temporal ear image, including:
acquiring a three-dimensional image of a temporal ear of a target object as a three-dimensional image of the target temporal ear, and segmenting an ossicle structure from the target temporal ear image through a first preset model to obtain a target ossicle image;
identifying, through a second preset model, an area where the ossicle structure in the target ossicle image differs from the ossicle structure in a preset ossicle image, and using this area as a target area;
and adjusting the display angle of the target temporal ear three-dimensional image to enable the target area to be displayed on a preset screen.
Optionally, segmenting an ossicle structure from the target temporal ear image through a first preset model to obtain a target ossicle image, including:
identifying the region and boundary information of an ossicle structure from the target temporal ear three-dimensional image through the first preset model, and separating the ossicle structure from the target temporal ear three-dimensional image according to the region and boundary information identified from the target temporal ear three-dimensional image to obtain the target ossicle image;
or,
identifying the area and boundary information of an ossicle structure from each two-dimensional image forming the target temporal ear three-dimensional image through the first preset model, separating an ossicle two-dimensional image according to the area and boundary information from each two-dimensional image, and performing three-dimensional reconstruction on the separated ossicle two-dimensional image to obtain the target ossicle image.
Optionally, the identifying, by the second preset model, a region where the ossicle structure in the target ossicle image is different from the ossicle structure in the preset ossicle image as a target region includes:
and comparing the ossicle structure in the target ossicle image with the ossicle structure in the preset ossicle image through the second preset model, and identifying a differential area as the target area.
Optionally, the adjusting the display angle of the three-dimensional image of the target temporal ear so that the target area is displayed on a preset screen includes:
marking the target area, and rotating the target temporal ear three-dimensional image or sectioning the target temporal ear three-dimensional image to display the target area on a screen.
Optionally, the training of the first preset model comprises:
taking a temporal ear three-dimensional image of an auditory ossicle structure which is pre-marked out as a first training sample, and performing machine learning on the first training sample to obtain a first preset model;
the training of the second preset model comprises:
and acquiring a plurality of preset ossicle images with preset ossicle structures, taking the preset ossicle images as second training samples, and performing machine learning on the second training samples to obtain a second preset model.
Optionally, the acquiring a three-dimensional image of a temporal ear of the target object as a three-dimensional image of the target temporal ear includes:
acquiring a scanning image obtained by scanning the head of the target object;
segmenting each scanned image to obtain an area and boundary information corresponding to a temporal ear area;
obtaining a three-dimensional image of the temporal ear of the target object through three-dimensional reconstruction according to the obtained region and boundary information of the temporal ear region, and taking the three-dimensional image as the three-dimensional image of the target temporal ear.
Optionally, the segmenting each scanned image to obtain an area and boundary information corresponding to a temporal ear area includes:
and segmenting each scanned image through a pre-trained full convolution network to obtain the region and boundary information corresponding to the temporal ear region of the target object.
In a second aspect, the present embodiment provides an apparatus for processing a temporal ear image, comprising:
the acquisition module is used for acquiring a three-dimensional image of a temporal ear of a target object as a three-dimensional image of the target temporal ear, and segmenting an ossicle structure from the target temporal ear image through a first preset model to obtain a target ossicle image;
the identification module is used for identifying an area where the ossicle structure in the target ossicle image is different from the ossicle structure in the preset ossicle image through a second preset model to serve as a target area;
and the display module is used for adjusting the display angle of the target temporal ear three-dimensional image so that the target area is displayed on a preset screen.
In a third aspect, embodiments of the present invention provide an electronic device, comprising at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, which when called by the processor are capable of performing the methods described above.
In a fourth aspect, embodiments of the invention also provide a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the method of any of the above.
The embodiment of the invention provides a method and a device for processing a temporal ear image. For a temporal ear three-dimensional image of a target object, the method obtains a target ossicle image corresponding to the ossicle structure through a pre-trained first preset model, identifies through a second preset model a target area where the ossicle structure in the target ossicle image differs from the ossicle structure in a preset ossicle image, and displays the target area on a screen by adjusting the display direction of the target temporal ear three-dimensional image. This narrows the range a professional must examine in the target temporal ear three-dimensional image, speeding up the examination and improving efficiency. The target area is identified through pre-trained models and displayed on the screen by adjusting the display angle, which shortens the time a professional spends examining the target temporal ear three-dimensional image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a method for processing a temporal ear image according to an embodiment of the present invention;
fig. 2 is a block diagram of an apparatus for processing a temporal ear image according to another embodiment of the present invention;
fig. 3 is a block diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of a method for processing a temporal ear image according to this embodiment, and referring to fig. 1, the method includes:
101: acquiring a three-dimensional image of a temporal ear of a target object as a three-dimensional image of the target temporal ear, and segmenting an ossicle structure from the target temporal ear image through a first preset model to obtain a target ossicle image;
102: identifying, through a second preset model, an area where the ossicle structure in the target ossicle image differs from the ossicle structure in a preset ossicle image, and using this area as a target area;
103: and adjusting the display angle of the target temporal ear three-dimensional image to enable the target area to be displayed on a preset screen.
It should be noted that the method provided by this embodiment is executed by a device, such as a computer, capable of implementing the method. The first preset model and the second preset model are obtained by machine learning in advance. The method enables professionals to quickly identify the difference between the ossicle structure in the target temporal ear three-dimensional image and the ossicle structure in a given preset ossicle image, so that the temporal ear structure of the target object can be understood from that difference and used in experimental research and teaching. For example, a displayed target area can be used to explain to students how differing ossicle structures can still support normal hearing, or features of that area can be extracted across different objects to build and refine a database that provides data support for scientific research and teaching.
Based on this, the preset ossicle image may be an image of an ossicle structure with normal hearing, an image of an ossicle structure with abnormal hearing, or an image of an ossicle structure with a special structure and normal hearing, which is not particularly limited in this embodiment.
The display angle is an angle of a currently displayed part of the target three-dimensional image of the temporal ear rotated relative to a preset reference position, for example, a currently displayed picture is a picture formed by rotating the reference position by a certain angle along the central axis of the target three-dimensional image of the temporal ear. The preset screen is the screen of the device for displaying the three-dimensional image of the target temporal ear.
The target object may be a human, an animal, or a specimen, which is not particularly limited in this embodiment. The target three-dimensional image of the temporal ear may be a pre-stored three-dimensional image of the temporal ear of the target object, or may be a three-dimensional image of the temporal ear obtained by scanning the head of the target object and using a three-dimensional reconstruction technique, or a three-dimensional image of the temporal ear made by using related software, which is not limited in this embodiment.
In the method provided by this embodiment, the display angle of the three-dimensional image of the target temporal ear is adjusted so that the target area is displayed on the screen. The target area is the region that differs from the ossicle structure in the preset ossicle image; for example, when the preset ossicle image shows the most common ossicle structure with normal hearing, the identified target area may be a region associated with abnormal hearing. After the target area is displayed, the worker examines it further according to experience, for example to determine whether it is missing an ossicle, whether it is missing or has extra structure without affecting the hearing of the target object, or whether it has an unusual structure that does not affect hearing. In this way, the worker can quickly understand the ossicle structure of the target object, which facilitates subsequent experiments or teaching.
This embodiment provides a method for processing a temporal ear image. For a temporal ear three-dimensional image of a target object, the method obtains a target ossicle image corresponding to the ossicle structure through a pre-trained first preset model, identifies through a second preset model a target area where the ossicle structure in the target ossicle image differs from the ossicle structure in a preset ossicle image, and displays the target area on a screen by adjusting the display direction of the target temporal ear three-dimensional image. This narrows the range a professional must examine, speeding up the examination and improving efficiency. The target area is identified through pre-trained models and displayed on the screen by adjusting the display angle, which shortens the time a professional spends examining the target temporal ear three-dimensional image.
Further, on the basis of the above embodiment, the segmenting the ossicle structure from the target temporal ear image through the first preset model to obtain the target ossicle image includes:
identifying the region and boundary information of an ossicle structure from the target temporal ear three-dimensional image through the first preset model, and separating the ossicle structure from the target temporal ear three-dimensional image according to the region and boundary information identified from the target temporal ear three-dimensional image to obtain the target ossicle image;
or,
identifying the area and boundary information of an ossicle structure from each two-dimensional image forming the target temporal ear three-dimensional image through the first preset model, separating an ossicle two-dimensional image according to the area and boundary information from each two-dimensional image, and performing three-dimensional reconstruction on the separated ossicle two-dimensional image to obtain the target ossicle image.
It should be noted that the ossicle structure may be segmented directly from the displayed three-dimensional target temporal ear image, or the area corresponding to the ossicle structure may first be isolated in each two-dimensional image of the target temporal ear image, with the isolated two-dimensional areas then reconstructed in three dimensions to obtain the target ossicle image of the three-dimensional ossicle structure.
This embodiment thus provides a concrete way of extracting the ossicle structure from the target temporal ear three-dimensional image.
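The second alternative above (per-slice segmentation followed by three-dimensional reconstruction) can be sketched as follows. The patent's first preset model is a trained segmentation model; in this minimal sketch a fixed intensity threshold stands in for it, and the "reconstruction" is a simple stacking of per-slice masks into a binary volume. All names and values here are illustrative assumptions.

```python
import numpy as np

def segment_slice(slice_2d, threshold=0.5):
    """Stand-in for the first preset model: label each pixel as
    ossicle (1) or background (0). A real implementation would use a
    trained segmentation network; a fixed threshold plays that role."""
    return (slice_2d >= threshold).astype(np.uint8)

def reconstruct_ossicle_volume(slices):
    """Segment every 2-D slice of the temporal ear image, then stack
    the per-slice masks along the scan axis to obtain a 3-D binary
    volume of the ossicle structure (the 'target ossicle image')."""
    masks = [segment_slice(s) for s in slices]
    return np.stack(masks, axis=0)

# Toy volume: 4 slices of 8x8, with a bright 2x2 "ossicle" region.
slices = [np.full((8, 8), 0.2) for _ in range(4)]
for s in slices:
    s[3:5, 3:5] = 0.9          # bright structure present in every slice
volume = reconstruct_ossicle_volume(slices)
print(volume.shape)            # (4, 8, 8)
print(int(volume.sum()))       # 16 ossicle voxels (2x2 region x 4 slices)
```

In practice the per-slice masks would come from a learned model rather than a threshold, but the stacking step is the same.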
Further, on the basis of the foregoing embodiments, the identifying, by the second preset model, a region where an ossicular structure in the target ossicular image differs from an ossicular structure in the preset ossicular image as a target region includes:
and comparing the ossicle structure in the target ossicle image with the ossicle structure in the preset ossicle image through the second preset model, and identifying a differential area as the target area.
It should be noted that, in this embodiment, the identification of the target area is realized through the second preset model. The training process of the second preset model comprises the following steps: and taking a plurality of preset ossicle images with preset ossicle structures as training samples, and performing machine learning on the training samples to obtain the model. And after the divided target ossicle image is received, identifying whether the ossicle structure in the target ossicle image is the same as the ossicle structure in the preset ossicle image, and if not, marking a region with a difference so as to adjust the display angle to display the region.
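As a rough illustration of marking the differing region, the sketch below compares two binary ossicle volumes voxel-by-voxel and returns the bounding box of the differing area. The patent's second preset model is a trained classifier; this direct voxel comparison is only an illustrative stand-in, not the patent's method.

```python
import numpy as np

def find_target_region(target_vol, reference_vol):
    """Compare the segmented target ossicle volume with a preset
    (reference) ossicle volume voxel-by-voxel and return the bounding
    box of the differing region, or None if the structures match."""
    diff = target_vol != reference_vol
    if not diff.any():
        return None
    coords = np.argwhere(diff)
    lo = tuple(int(v) for v in coords.min(axis=0))
    hi = tuple(int(v) for v in coords.max(axis=0))
    return lo, hi                 # inclusive corners of the target area

ref = np.zeros((4, 8, 8), dtype=np.uint8)
ref[1:3, 3:5, 3:5] = 1            # reference ossicle occupies slices 1-2
tgt = ref.copy()
tgt[2, 3:5, 3:5] = 0              # target is missing part of the structure
print(find_target_region(tgt, ref))  # ((2, 3, 3), (2, 4, 4))
```

The returned bounding box is what the display step would then rotate into view.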
This embodiment specifies how to identify whether the ossicle structure in the target temporal ear three-dimensional image is the same as the ossicle structure in the preset ossicle image. A worker studying the ossicle structure therefore does not need to search for it manually or adjust the display angle by hand, which makes further research on the ossicle structure faster and more convenient and improves efficiency.
Still further, on the basis of the foregoing embodiments, the adjusting the display angle of the three-dimensional image of the target temporal ear so that the target area is displayed on a preset screen includes:
marking the target area, and rotating the target temporal ear three-dimensional image or sectioning the target temporal ear three-dimensional image to display the target area on a screen.
Further, on the basis of the above embodiments, the method further includes:
if a first instruction for rotating the target temporal ear three-dimensional image is received, rotating the target temporal ear three-dimensional image;
and if a second instruction for viewing the section image of the target temporal ear three-dimensional image intercepted by the preset section is received, displaying the section image.
After the target area is determined, in order to present it conveniently, the method provided by this embodiment rotates or sections the target temporal ear three-dimensional image according to the specific position of the target area, so that the target area is displayed at the center of the screen.
Of course, after the target area is displayed, the professional can also rotate the three-dimensional image of the target temporal ear or perform other operations of changing the display view angle according to the needs. For example, if a first instruction for rotating the three-dimensional image of the target temporal ear is received, rotating the three-dimensional image of the target temporal ear according to the direction in the first instruction;
and if a second instruction for viewing the section image of the target temporal ear three-dimensional image intercepted by the preset section is received, generating the section image according to the second instruction and displaying the section image.
It can be understood that the first instruction is realized by the triggering operation of the corresponding key on the screen performed by the staff, for example, clicking a rotation key on the screen to realize the rotation of the three-dimensional image of the target temporal ear. The second instruction is also implemented by a trigger operation performed on a corresponding key on the screen, for example, after a key for viewing the cross section on the screen is clicked, a position of the preset cross section is selected, and then a confirmation key is clicked, so that the cross section image intercepted by the preset cross section can be displayed.
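As an illustration of how a display angle could be derived from the target area's position, the following sketch computes the yaw and pitch that bring the region's centroid to face the viewer. The axis conventions and the use of two Euler angles are assumptions of this sketch; the patent does not specify how the display angle is computed.

```python
import math

def facing_angles(target_centroid, volume_center):
    """Compute the yaw (about the vertical axis) and pitch (about the
    horizontal axis) that rotate the view so the target region faces
    the screen. Angles are derived from the direction vector running
    from the volume centre to the target centroid."""
    dx = target_centroid[0] - volume_center[0]
    dy = target_centroid[1] - volume_center[1]
    dz = target_centroid[2] - volume_center[2]
    yaw = math.degrees(math.atan2(dx, dz))                     # rotate left/right
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))   # tilt up/down
    return yaw, pitch

# Target region centred to the right of, and level with, the volume centre:
yaw, pitch = facing_angles((10.0, 0.0, 0.0), (0.0, 0.0, 0.0))
print(round(yaw, 1), round(pitch, 1))   # 90.0 0.0
```

A viewer would apply these angles as the initial display orientation, after which the first and second instructions described above let the professional rotate or section the image further.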
The embodiment provides a method for processing a temporal ear image, which defines how to display a target area, so that a professional can see the target area through a screen to quickly know a structure displayed by the three-dimensional image of the target temporal ear.
Further, on the basis of the foregoing embodiments, the training of the first preset model includes:
taking a temporal ear three-dimensional image of an auditory ossicle structure which is pre-marked out as a first training sample, and performing machine learning on the first training sample to obtain a first preset model;
the training of the second preset model comprises:
and acquiring a plurality of preset ossicle images with preset ossicle structures, taking the images as second training samples, and performing machine learning on the second training samples to obtain a second preset model.
The temporal ear three-dimensional image used as the first training sample must be an image in which the ossicle structure has been segmented in advance. The second training sample must be a preset ossicle image with a preset ossicle structure; the second preset model trained on such samples can mark the area in an input ossicle image where the ossicle structure differs from the preset ossicle structure.
Specifically, a convolutional neural network is used as a machine learning method, the convolutional neural network is trained through a first training sample to obtain a first preset model, and the trained convolutional neural network can be used for separating an auditory ossicle structure in an input target temporal ear three-dimensional image to obtain an auditory ossicle image.
And performing machine learning on a Support Vector Machine (SVM) through a second training sample to obtain a second preset model. The second preset model can identify the ossicle image separated from the first preset model, and mark an area where the ossicle structure in the ossicle image is different from the preset ossicle structure.
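The patent names a Support Vector Machine as the learning method for the second preset model. As a hedged sketch of that idea, the following trains a minimal linear SVM with the Pegasos subgradient method on toy, hypothetical shape features (the patent does not specify a feature set); a practical implementation would use an established SVM library instead.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM trained with the Pegasos subgradient method.
    X: (n, d) feature matrix; y: labels in {-1, +1}. The features
    (e.g. ossicle voxel count, bounding-box extent) are hypothetical."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:      # margin violated: push towards sample
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                              # margin satisfied: shrink only
                w = (1 - eta * lam) * w
    return w, b

# Toy data: class +1 = structure matching the preset ossicle,
# class -1 = structure that differs from it.
X = np.array([[2.0, 2.0], [2.5, 1.5], [-2.0, -2.0], [-1.5, -2.5]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
preds = np.sign(X @ w + b)
print(preds.tolist())   # [1.0, 1.0, -1.0, -1.0]
```

In a real pipeline the trained classifier would score candidate regions of the segmented ossicle image rather than hand-built toy features.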
This embodiment specifies how to train the first preset model and the second preset model. Through machine learning on the first and second training samples, the resulting models can quickly find the target area to be displayed in a target temporal ear three-dimensional image given as input.
Further, on the basis of the foregoing embodiments, the acquiring a three-dimensional image of a temporal ear of the target object as a three-dimensional image of the target temporal ear includes:
acquiring a scanning image obtained by scanning the head of the target object;
segmenting each scanned image to obtain an area and boundary information corresponding to a temporal ear area;
and obtaining a three-dimensional image of the temporal ear of the target object through three-dimensional reconstruction according to the obtained region and boundary information of the temporal ear region, and taking the three-dimensional image as the three-dimensional image of the target temporal ear.
Further, on the basis of the foregoing embodiments, the segmenting each scanned image to obtain the region and boundary information corresponding to the temporal ear region includes:
and segmenting each scanned image through a pre-trained full convolution network to obtain the region and boundary information corresponding to the temporal ear region of the target object.
The process of segmenting the scanned image can also be realized by machine learning. For example, machine learning is performed on the segmentation of the temporal ear region through a full convolution neural network (a training sample is a series of scanned images of the temporal ear region which is segmented in advance), so that the segmentation of the input scanned images is realized.
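To illustrate the dense, per-pixel prediction that a fully convolutional network produces, the sketch below applies a single fixed 3x3 convolution followed by a threshold. A real FCN stacks many learned convolutional layers with downsampling and upsampling; this one-layer caricature, with an averaging kernel chosen purely for illustration, only shows the size-preserving, per-pixel nature of the output.

```python
import numpy as np

def conv2d_same(img, kernel):
    """3x3 convolution with zero padding (output size == input size)."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+3, j:j+3] * kernel)
    return out

def fcn_like_mask(img):
    """One-layer caricature of a fully convolutional network: a dense
    per-pixel prediction obtained by convolution plus a threshold."""
    smooth = conv2d_same(img, np.full((3, 3), 1.0 / 9.0))
    return (smooth > 0.5).astype(np.uint8)

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0               # bright "temporal ear" region
mask = fcn_like_mask(img)
print(mask.shape)                 # (8, 8): same size as the input
print(int(mask.sum()))            # 12: the averaging kernel erodes the corners
```

The key property shown is that the output is a mask of the same spatial size as the input, which is what lets an FCN return region and boundary information for each scanned image.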
The process of scanning an object and performing three-dimensional modeling according to a scanned image belongs to the existing modeling technology, and this embodiment does not specifically limit this.
The present embodiment provides a method of processing a temporal ear image, which defines how to create a three-dimensional image from a scanned image, by which a three-dimensional image can be quickly created from a scanned image.
As a more specific embodiment, this embodiment organically combines an image processing technique, a three-dimensional reconstruction and presentation technique, a deep learning method, and a machine learning method, and implements the study on the ear three-dimensional image by the following method, where the method includes:
(1) Image segmentation. The goal is to segment all components of the temporal ear: the image data is segmented directly with a fully convolutional network (FCN), a deep learning method, to obtain the region and boundary information of each part of the temporal ear.
(2) Three-dimensional modeling. The purpose is to acquire and reconstruct the three-dimensional information of each part of the temporal ear, including the computation and three-dimensional reconstruction of the three-dimensional information of each part of the temporal bone.
(3) Difference identification. The characteristics of abnormal areas in the training samples are learned automatically, the input target temporal ear three-dimensional image is analyzed, and areas that differ from the preset temporal ear three-dimensional image are recognized using a Support Vector Machine (SVM), a machine learning method.
(4) Adaptive display. The position, shape, viewing angle and other information of the identified target area are displayed automatically, so that professionals can conveniently understand the specific structure of the target area; the optimal display parameters are obtained from the machine learning result.
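The three-dimensional information computation in step (2) can be illustrated as follows: given a labelled voxel volume, compute each part's voxel count and centroid. The label scheme and these particular statistics are assumptions of this sketch; the patent only states that three-dimensional information of each part is computed and reconstructed.

```python
import numpy as np

def part_statistics(label_volume):
    """For every labelled part of the temporal ear volume, compute its
    voxel count (a proxy for volume) and its centroid. Label 0 is
    assumed to be background."""
    stats = {}
    for label in np.unique(label_volume):
        if label == 0:
            continue
        coords = np.argwhere(label_volume == label)
        stats[int(label)] = {
            "voxels": int(len(coords)),
            "centroid": tuple(float(c) for c in coords.mean(axis=0)),
        }
    return stats

vol = np.zeros((4, 4, 4), dtype=np.uint8)
vol[0:2, 0:2, 0:2] = 1                      # part 1: a 2x2x2 block
vol[3, 3, 3] = 2                            # part 2: a single voxel
print(part_statistics(vol))
```

Such per-part statistics are the kind of three-dimensional information that the reconstruction and display steps would consume.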
In the method provided by this embodiment, machine learning is applied to recognizing differences between the temporal ear three-dimensional image and a preset temporal ear three-dimensional image, so professionals no longer need to compare images manually to find the differing area, which can greatly improve their working efficiency. Deep learning is applied to segmenting each part of the ear structure (for example, the temporal bone), achieving higher accuracy and faster segmentation than traditional methods. Machine learning is applied to identifying differences between the temporal ear and a preset temporal ear structure, or between the auditory ossicle and a reference ossicle structure, so that a worker can proceed directly to the next experiment or teaching task based on the differing area found by the machine, improving working efficiency.
In a second aspect, fig. 2 shows an apparatus for processing a temporal ear image according to this embodiment, which includes an obtaining module 201, an identifying module 202, and a display module 203, wherein:
an obtaining module 201, configured to obtain a three-dimensional image of a temporal ear of a target object as a three-dimensional image of the target temporal ear, and segment an ossicle structure from the target temporal ear image through a first preset model to obtain a target ossicle image;
the identification module 202 is configured to identify, through a second preset model, an area in which the ossicle structure in the target ossicle image differs from the ossicle structure in a preset ossicle image, as a target area;
the display module 203 is configured to adjust a display angle of the target temporal ear three-dimensional image, so that the target area is displayed on a preset screen.
The device for processing a temporal ear image provided in this embodiment is applicable to the method for processing a temporal ear image in the foregoing embodiments, and details are not repeated here.
This embodiment provides a device for processing a temporal ear image. For a temporal ear three-dimensional image of a target object, the device obtains a target auditory ossicle image corresponding to the auditory ossicle structure through a pre-trained first preset model, identifies through a second preset model a target area where the auditory ossicle in the target auditory ossicle image differs from the auditory ossicle structure in the preset auditory ossicle image, and displays the target area on a screen by adjusting the display direction of the target temporal ear three-dimensional image. This narrows the range a professional must examine, speeds up the check, and improves efficiency. Because the device identifies the target area with a pre-trained model and brings it onto the screen by adjusting the display angle, the time a professional spends examining the target temporal ear three-dimensional image is shortened.
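The three-module pipeline described above can be sketched as plain classes wired together. The toy "models" here (a positive-value filter standing in for segmentation, an element-wise comparison standing in for difference recognition) are illustrative placeholders, not the patent's implementation.

```python
class TemporalEarDevice:
    """Sketch of the obtaining / identifying / display pipeline (modules 201-203)."""

    def __init__(self, first_model, second_model, reference):
        self.first_model = first_model      # segments the ossicle from the temporal-ear image
        self.second_model = second_model    # compares against the reference ossicle
        self.reference = reference

    def process(self, temporal_ear_image):
        ossicle = self.first_model(temporal_ear_image)              # obtaining module 201
        target_area = self.second_model(ossicle, self.reference)    # identifying module 202
        return self.adjust_display(target_area)                     # display module 203

    @staticmethod
    def adjust_display(target_area):
        # Placeholder: a real device would rotate or section the 3-D view
        # so the target area faces the preset screen.
        return {"target_area": target_area, "displayed": True}


# Toy models: "segmentation" keeps positive voxels; "comparison" lists mismatches.
segment = lambda img: [v for v in img if v > 0]
compare = lambda ossicle, ref: [i for i, (a, b) in enumerate(zip(ossicle, ref)) if a != b]

device = TemporalEarDevice(segment, compare, reference=[1, 2, 3])
result = device.process([0, 1, 2, 4])
```

The point of the structure is separation of concerns: each module can be swapped (for example, replacing the first model with a trained network) without touching the others.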
Referring to fig. 3, the electronic device includes: at least one processor (processor) 301 and at least one memory (memory) 302 communicatively coupled to the processor 301;
the processor 301 and the memory 302 communicate with each other via the bus 303;
the processor 301 is configured to call program instructions in the memory 302 to perform the method provided by the above method embodiments, including: acquiring a three-dimensional image of the temporal ear of a target object as a target temporal ear three-dimensional image, and segmenting the ossicle structure from the target temporal ear image through a first preset model to obtain a target ossicle image; identifying, through a second preset model, an area in which the ossicle structure in the target ossicle image differs from the ossicle structure in a preset ossicle image, as a target area; and adjusting the display angle of the target temporal ear three-dimensional image so that the target area is displayed on a preset screen.
In a fourth aspect, this embodiment provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method provided by the above method embodiments, for example including: acquiring a three-dimensional image of the temporal ear of a target object as a target temporal ear three-dimensional image, and segmenting the ossicle structure from the target temporal ear image through a first preset model to obtain a target ossicle image; identifying, through a second preset model, an area in which the ossicle structure in the target ossicle image differs from the ossicle structure in a preset ossicle image, as a target area; and adjusting the display angle of the target temporal ear three-dimensional image so that the target area is displayed on a preset screen.
In a fifth aspect, this embodiment discloses a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, enable the computer to perform the method provided by the above method embodiments, for example including: acquiring a three-dimensional image of the temporal ear of a target object as a target temporal ear three-dimensional image, and segmenting the ossicle structure from the target temporal ear image through a first preset model to obtain a target ossicle image; identifying, through a second preset model, an area in which the ossicle structure in the target ossicle image differs from the ossicle structure in a preset ossicle image, as a target area; and adjusting the display angle of the target temporal ear three-dimensional image so that the target area is displayed on a preset screen.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware executing program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, and magnetic or optical disks.
The above-described embodiments of the electronic device and the like are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may also be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the embodiments of the present invention, not to limit them. Although the embodiments of the present invention have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of processing a temporal ear image, comprising:
acquiring a three-dimensional image of a temporal ear of a target object as a three-dimensional image of the target temporal ear, and segmenting an ossicle structure from the target temporal ear image through a first preset model to obtain a target ossicle image;
identifying, through a second preset model, an area in which the auditory ossicle structure in the target auditory ossicle image differs from the auditory ossicle structure in the preset auditory ossicle image, as a target area;
and adjusting the display angle of the target temporal ear three-dimensional image to enable the target area to be displayed on a preset screen.
2. The method according to claim 1, wherein the segmenting the ossicle structure from the target temporal ear image through a first preset model to obtain a target ossicle image comprises:
identifying the region and boundary information of an ossicle structure from the target temporal ear three-dimensional image through the first preset model, and separating the ossicle structure from the target temporal ear three-dimensional image according to the region and boundary information identified from the target temporal ear three-dimensional image to obtain the target ossicle image;
or,
identifying the area and boundary information of an ossicle structure from each two-dimensional image forming the target temporal ear three-dimensional image through the first preset model, separating an ossicle two-dimensional image according to the area and boundary information from each two-dimensional image, and performing three-dimensional reconstruction on the separated ossicle two-dimensional image to obtain the target ossicle image.
3. The method according to claim 1, wherein the identifying a region in which the ossicular structure in the target ossicular image is different from the ossicular structure in the preset ossicular image through a second preset model as a target region comprises:
and comparing the ossicle structure in the target ossicle image with the ossicle structure in the preset ossicle image through the second preset model, and identifying a differential area as the target area.
4. The method according to claim 1, wherein the adjusting the display angle of the three-dimensional image of the target temporal ear so that the target area is displayed on a preset screen comprises:
marking the target area, and rotating the target temporal ear three-dimensional image or sectioning the target temporal ear three-dimensional image to display the target area on a screen.
5. The method of claim 1,
the training of the first preset model comprises:
taking a temporal ear three-dimensional image in which an auditory ossicle structure is pre-marked as a first training sample, and performing machine learning on the first training sample to obtain the first preset model;
the training of the second preset model comprises:
and acquiring a plurality of preset ossicle images with preset ossicle structures, taking the preset ossicle images as second training samples, and performing machine learning on the second training samples to obtain a second preset model.
6. The method of claim 1, wherein the obtaining a three-dimensional image of the temporal ear of the target subject as a three-dimensional image of the target temporal ear comprises:
acquiring a scanning image obtained by scanning the head of the target object;
segmenting each scanned image to obtain an area and boundary information corresponding to a temporal ear area;
and obtaining a three-dimensional image of the temporal ear of the target object through three-dimensional reconstruction according to the obtained region and boundary information of the temporal ear region, and taking the three-dimensional image as the three-dimensional image of the target temporal ear.
7. The method of claim 6, wherein the segmenting each scanned image to obtain the region and boundary information corresponding to the temporal ear region comprises:
and segmenting each scanned image through a pre-trained full convolution network to obtain the region and boundary information corresponding to the temporal ear region of the target object.
8. An apparatus for processing a temporal ear image, comprising:
the acquisition module is used for acquiring a three-dimensional image of a temporal ear of a target object as a three-dimensional image of the target temporal ear, and segmenting an ossicle structure from the target temporal ear image through a first preset model to obtain a target ossicle image;
the identification module is used for identifying an area where the ossicle structure in the target ossicle image is different from the ossicle structure in the preset ossicle image through a second preset model to serve as a target area;
and the display module is used for adjusting the display angle of the target temporal ear three-dimensional image so that the target area is displayed on a preset screen.
9. An electronic device, comprising:
at least one processor; and
at least one memory communicatively coupled to the processor, wherein:
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1-7.
10. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1-7.
CN201810401351.1A 2018-04-28 2018-04-28 Method and device for processing temporal ear image Active CN108615235B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810401351.1A CN108615235B (en) 2018-04-28 2018-04-28 Method and device for processing temporal ear image


Publications (2)

Publication Number Publication Date
CN108615235A true CN108615235A (en) 2018-10-02
CN108615235B CN108615235B (en) 2021-03-09

Family

ID=63661291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810401351.1A Active CN108615235B (en) 2018-04-28 2018-04-28 Method and device for processing temporal ear image

Country Status (1)

Country Link
CN (1) CN108615235B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652971A (en) * 2020-06-09 2020-09-11 上海商汤智能科技有限公司 Display control method and device
CN112950646A (en) * 2021-04-06 2021-06-11 高燕军 HRCT image ossicle automatic segmentation method based on deep learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1877637A (en) * 2006-06-20 2006-12-13 长春工业大学 Medical image template matching method based on microcomputer
CN104751178A (en) * 2015-03-31 2015-07-01 上海理工大学 Pulmonary nodule detection device and method based on shape template matching and combining classifier
WO2016057960A1 (en) * 2014-10-10 2016-04-14 Radish Medical Solutions, Inc. Apparatus, system and method for cloud based diagnostics and image archiving and retrieval
CN106803256A (en) * 2017-01-13 2017-06-06 深圳市唯特视科技有限公司 A kind of 3D shape based on projection convolutional network is split and semantic marker method
CN107274402A (en) * 2017-06-27 2017-10-20 北京深睿博联科技有限责任公司 A kind of Lung neoplasm automatic testing method and system based on chest CT image
WO2018015414A1 (en) * 2016-07-21 2018-01-25 Siemens Healthcare Gmbh Method and system for artificial intelligence based medical image segmentation





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210719

Address after: 310018 22nd floor, building 1, 199 Yuancheng Road, Xiasha street, Hangzhou Economic and Technological Development Zone, Zhejiang Province

Patentee after: Hangzhou Zhuojian Information Technology Co.,Ltd.

Address before: 100085 Haidian District, Beijing 1 High Court No. 18 building 103-86

Patentee before: BEIJING PAIYIPAI INTELLIGENT TECHNOLOGY Co.,Ltd.
