CN115311188B - Image recognition method and device, electronic equipment and storage medium - Google Patents

Image recognition method and device, electronic equipment and storage medium

Info

Publication number
CN115311188B
CN115311188B (application CN202110498712.0A)
Authority
CN
China
Prior art keywords: focus, weight, feature, real, suspicious
Prior art date
Legal status: Active
Application number
CN202110498712.0A
Other languages
Chinese (zh)
Other versions
CN115311188A (en)
Inventor
肖月庭
阳光
郑超
Current Assignee
Shukun Technology Co ltd
Original Assignee
Shukun Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shukun Technology Co ltd
Priority to CN202110498712.0A
Publication of CN115311188A
Application granted
Publication of CN115311188B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10081 - Computed x-ray tomography [CT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The application discloses an image recognition method, an image recognition device, electronic equipment and a storage medium. The image recognition method comprises the following steps: acquiring a plurality of medical images with different display parameters corresponding to a target part; according to the spatial position similarity of each focus area in the different medical images, determining focus areas whose spatial position similarity meets a threshold condition in each medical image as real focus areas, and determining focus areas whose spatial position similarity does not meet the threshold condition in at least one medical image as suspicious focus areas; acquiring the real focus features corresponding to the real focus areas and the suspicious focus features corresponding to the suspicious focus areas; acquiring the feature matching degree between the real focus features and the suspicious focus features; and correcting the suspicious focus areas corresponding to suspicious focus features whose feature matching degree meets a correction condition into real focus areas. The image recognition method provided by the application can automatically recognize real focus areas, improving the efficiency and accuracy of a doctor's diagnosis.

Description

Image recognition method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of medical image processing technologies, and in particular, to an image recognition method, an image recognition device, an electronic device, and a storage medium.
Background
In actual clinical diagnosis in the medical field, diagnosing a disease generally requires analyzing and identifying medical images obtained from a large number of medical scans to determine whether a lesion exists.
Currently, the analysis and identification of medical images is usually performed by doctors through visual inspection and manual labeling of lesion locations, i.e., manual diagnosis. However, when screening large-scale medical images, the amount of data a doctor must process and analyze is very large, which results in a heavy workload. The manual diagnosis method is therefore time-consuming and labor-intensive, and manual screening for suspected lesions is relatively subjective, easily leading to technical problems such as missed diagnosis and misdiagnosis.
Disclosure of Invention
The application provides an image recognition method, an image recognition device, electronic equipment and a storage medium, so as to alleviate the technical problems caused by the current reliance on manual diagnosis.
In order to solve the technical problems, the application provides the following technical scheme:
an image recognition method, comprising:
acquiring a plurality of medical images of different display parameters corresponding to a target part, wherein each medical image has at least one focus area;
according to the spatial position similarity of each focus area in different medical images, determining a focus area whose spatial position similarity meets a threshold condition in each medical image as a real focus area;
determining a focus area whose spatial position similarity in at least one medical image does not meet the threshold condition as a suspicious focus area;
acquiring the real focus features corresponding to the real focus area and the suspicious focus features corresponding to the suspicious focus area;
acquiring the feature matching degree of the real focus features and the suspicious focus features;
and correcting the suspicious focus area corresponding to the suspicious focus features with the feature matching degree meeting the correction condition into a real focus area.
An image recognition apparatus comprising:
the first acquisition unit is used for acquiring a plurality of medical images of different display parameters corresponding to the target part, wherein each medical image has at least one focus area;
the first determining unit is used for determining focus areas, in which the spatial position similarity of the focus areas in the medical images meets a threshold condition, as real focus areas according to the spatial position similarity of the focus areas in the different medical images;
the second determining unit is used for determining a focus area whose spatial position similarity in at least one medical image does not meet the threshold condition as a suspicious focus area;
the second acquisition unit is used for acquiring the real focus features corresponding to the real focus area and the suspicious focus features corresponding to the suspicious focus area;
the third obtaining unit is used for obtaining the feature matching degree of the real focus features and the suspicious focus features;
and the correction unit is used for correcting the suspicious focus area corresponding to the suspicious focus features whose feature matching degree meets the correction condition into a real focus area.
An electronic device, comprising:
at least one processor;
and a memory communicatively coupled to the at least one processor;
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the image recognition methods provided by the embodiments of the present application.
A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform any of the image recognition methods provided by the embodiments of the present application.
The beneficial effects are that: the application provides an image recognition method. First, a plurality of medical images with different display parameters corresponding to a target part are acquired, each medical image having at least one focus area. Then, according to the spatial position similarity of each focus area in the different medical images, the focus areas whose spatial position similarity meets a threshold condition in each medical image are determined as real focus areas, and the focus areas whose spatial position similarity does not meet the threshold condition in at least one medical image are determined as suspicious focus areas. The real focus features corresponding to the real focus areas and the suspicious focus features corresponding to the suspicious focus areas are acquired, the feature matching degree between the real focus features and the suspicious focus features is acquired, and the suspicious focus areas corresponding to suspicious focus features whose feature matching degree meets a correction condition are corrected into real focus areas. By this method, when focus areas in a large number of medical images are identified, the focus areas can first be divided into real focus areas and suspicious focus areas according to the spatial position similarity of the focus areas in the medical images, and the real focus features corresponding to the real focus areas and the suspicious focus features corresponding to the suspicious focus areas are obtained; the real focus areas among the suspicious focus areas are then automatically identified according to the feature matching degree between the real focus features and the suspicious focus features. The identification process requires no manual participation, which solves the problem that manual diagnosis is time-consuming and labor-intensive and improves the diagnosis efficiency of doctors. Meanwhile, in the process of identifying the suspicious focus areas, the suspicious focus features are matched with the real focus features, so that the data on which the diagnosis of each suspicious focus area is based are more accurate, which improves diagnosis accuracy and avoids problems such as misdiagnosis and missed diagnosis caused by manually locating focuses.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and a person skilled in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1a is a schematic view of a scenario of an image recognition method provided in an embodiment of the present application;
fig. 1b is a networking schematic diagram of an image recognition system provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a first image recognition method according to an embodiment of the present application;
FIG. 3 is a block diagram of an image recognition model provided by an embodiment of the present application;
fig. 4 is a schematic flowchart of a second image recognition method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image recognition device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The terms "first," "second," "third," and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "includes" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
Flowcharts are used in this application to describe the operations performed according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
The embodiment of the application provides an image identification method and device, electronic equipment and a storage medium. The image recognition device may be integrated in an electronic device, which may be a server or a terminal.
In this embodiment, a medical image is an image obtained by a medical imaging technology that can reflect different tissue structures and pathological states of the human body, such as a digital radiography (DR) image. A DR image is an analog gray-scale image formed by X-rays transmitted through the human body; the gray-scale image reflects the different tissue structures and pathological states of the human body through the density and variation of the image. A DR image shows the sum of the projections of X-rays through tissues of different densities and thicknesses in a certain area of the human body, and this superposition can enhance the projection of some tissue structures or lesions to obtain a better display result. Many diseases can be found in chest DR images, such as pneumonia, tuberculosis, pulmonary nodules, old lesions, aortic sclerosis, enlarged heart shadow, emphysema and pneumothorax. Unlike the detection and identification of natural images, the detection and identification of lesions in DR images requires a clear knowledge of lesion imaging; for example, the sign of pneumonia is a cloud-like, flocculent increase in density that is medium and uniform, with unclear edges.
Referring to fig. 1a, taking the image recognition device integrated in an electronic device as an example, the electronic device may acquire a plurality of medical images with different display parameters corresponding to a target part, where at least one focus area exists in each medical image; according to the spatial position similarity of each focus area in the different medical images, determine the focus areas whose spatial position similarity meets a threshold condition in each medical image as real focus areas, and determine the focus areas whose spatial position similarity does not meet the threshold condition in at least one medical image as suspicious focus areas; acquire the real focus features corresponding to the real focus areas and the suspicious focus features corresponding to the suspicious focus areas; acquire the feature matching degree between the real focus features and the suspicious focus features; and correct the suspicious focus areas corresponding to suspicious focus features whose feature matching degree meets a correction condition into real focus areas.
Referring to fig. 1b, fig. 1b is a networking schematic diagram of an image recognition system provided in an embodiment of the present application. The system may include terminals and servers, which are connected and communicate with each other through the internet formed by various gateways (not described in detail herein). The terminals may include a detection terminal 101 and a user terminal 102, and the servers may include a recognition server 103; wherein:
The detection terminal 101 mainly includes medical devices such as computed tomography (CT) equipment, and is mainly used to perform medical scans and output the medical images to be processed.
The user terminals 102 include, but are not limited to, portable terminals such as mobile phones and tablets, fixed terminals such as computers and inquiry machines, and various virtual terminals; they mainly provide an uploading function, a recognition function, and a function for displaying the recognition result of the preset lesion type corresponding to the medical image.
The servers include local servers and/or remote servers, etc. The recognition server 103 may have a recognition function and a model training function, and may be deployed on a local server, or may be partially or fully deployed on a remote server.
In the embodiment of the application, the electronic device acquires a plurality of medical images with different display parameters corresponding to the target part, each medical image having at least one focus area; according to the spatial position similarity of each focus area in the different medical images, determines the focus areas whose spatial position similarity meets a threshold condition in each medical image as real focus areas, and determines the focus areas whose spatial position similarity does not meet the threshold condition in at least one medical image as suspicious focus areas; acquires the real focus features corresponding to the real focus areas and the suspicious focus features corresponding to the suspicious focus areas; acquires the feature matching degree between the real focus features and the suspicious focus features; and corrects the suspicious focus areas corresponding to suspicious focus features whose feature matching degree meets a correction condition into real focus areas. By this method, when focus areas in a large number of medical images are identified, the focus areas can first be divided into real focus areas and suspicious focus areas according to the spatial position similarity of the focus areas in the medical images, the real focus features corresponding to the real focus areas and the suspicious focus features corresponding to the suspicious focus areas are obtained, and the real focus areas among the suspicious focus areas are then automatically identified according to the feature matching degree between the real focus features and the suspicious focus features. The identification process requires no manual participation, which solves the problem that manual diagnosis is time-consuming and labor-intensive and improves the diagnosis efficiency of doctors. Meanwhile, in the process of identifying the suspicious focus areas, the suspicious focus features are matched with the real focus features, so that the data on which the diagnosis of each suspicious focus area is based are more accurate, which improves diagnosis accuracy and avoids problems such as misdiagnosis and missed diagnosis caused by manually locating focuses.
It should be noted that the system networking diagram shown in fig. 1b is only an example; the servers and scenarios described in the embodiments of the present application are intended to describe the technical solutions of the embodiments more clearly and do not constitute a limitation on them. As a person of ordinary skill in the art can appreciate, with the evolution of the system and the appearance of new service scenarios, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
The technical solution will be explained in detail below. The order of the following descriptions of the embodiments is not intended as a limitation on the preferred order of the embodiments. The present embodiment is described from the viewpoint of an image recognition apparatus, which may be integrated in an electronic device; the electronic device may be a server, a terminal, or the like, and the terminal may include a tablet, a notebook, a personal computer (PC), a mini processing box, or another device.
Referring to fig. 2, fig. 2 is a flowchart of an image recognition method according to an embodiment of the present application, where the image recognition method includes the following steps:
201: medical images of a plurality of different display parameters corresponding to the target part are acquired, and at least one focus area exists in each medical image.
In one embodiment, the medical image may be provided to the image recognition device after the medical image acquisition device performs a medical scan of biological tissue, such as a knee, heart or liver. The medical image acquisition device may include an electronic device such as a magnetic resonance imaging (MRI) scanner, a CT scanner, a colposcope, or an endoscope.
In one embodiment, the target part may include a region of a living being, for example a region of the human body such as a knee or the heart, and a tissue region may be a region corresponding to a tissue such as fat or muscle.
In one embodiment, the target part corresponds to a plurality of medical images, each medical image is based on different display parameters, and each medical image has at least one focus area. A focus area may be a part of the image within the medical image, and the display parameters may be parameters such as the focus area volume or the ratio of the focus area volume to the region of the target part to which it belongs.
202: according to the spatial position similarity of each focus area in different medical images, determining a focus area with the spatial position similarity of the focus area meeting a threshold condition in each medical image as a real focus area;
The real focus area refers to an area in which a lesion can be determined. In one embodiment, spatial position coordinate alignment processing may be performed on the different medical images to obtain the spatial position coordinate values of each focus area in the different medical images; the spatial position similarity of each focus area in the different medical images is obtained according to the spatial position coordinate values; and the focus areas whose spatial position similarity meets the threshold condition in each medical image are determined as real focus areas. The spatial position coordinate alignment processing may be a coordinate transformation of the different medical images so that they are in the same image coordinate system, so that the spatial position coordinate values of each focus area in the different medical images can be obtained in the same coordinate system.
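The split into real and suspicious focus areas can be sketched as follows. This is a minimal illustration, assuming the focus areas are available as 3D bounding boxes after the images have been registered into a common coordinate system, and assuming an IoU-style overlap as the spatial position similarity; the helper names are illustrative and not from the patent, which only requires comparing a spatial position similarity against a threshold.

    import numpy as np

    def box_iou(box_a, box_b):
        # 3D IoU of two boxes given as (x1, y1, z1, x2, y2, z2) in aligned coordinates
        box_a = np.asarray(box_a, dtype=float)
        box_b = np.asarray(box_b, dtype=float)
        lo = np.maximum(box_a[:3], box_b[:3])
        hi = np.minimum(box_a[3:], box_b[3:])
        inter = np.prod(np.clip(hi - lo, 0, None))
        vol_a = np.prod(box_a[3:] - box_a[:3])
        vol_b = np.prod(box_b[3:] - box_b[:3])
        return inter / (vol_a + vol_b - inter + 1e-8)

    def split_real_and_suspicious(lesions_per_image, threshold=0.5):
        # lesions_per_image: one list of boxes per display parameter (per medical image).
        # A focus area is "real" if a sufficiently overlapping focus area exists in every
        # other image, otherwise it is kept as "suspicious" for the later feature matching.
        real, suspicious = [], []
        for i, lesions in enumerate(lesions_per_image):
            others = [l for j, l in enumerate(lesions_per_image) if j != i]
            for box in lesions:
                matched_everywhere = all(
                    any(box_iou(box, other) >= threshold for other in other_boxes)
                    for other_boxes in others
                )
                (real if matched_everywhere else suspicious).append((i, box))
        return real, suspicious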
In one embodiment, each focus area in the different medical images obtained by scanning the target part may correspond to the same lesion type. The disease corresponding to the lesion type may be a whole class of diseases, such as non-infectious knee joint diseases, or a specific disease, such as knee osteoarthritis, rheumatoid arthritis, traumatic arthritis or gouty arthritis.
In one embodiment, the real focus area may be determined according to the association degree between the lesion type corresponding to each focus area in the different medical images and the tissue category corresponding to that focus area, where a tissue category may refer to the name of a certain tissue region, such as fat or blood vessel. For example, a lesion type such as knee effusion may mainly concern regions corresponding to the tissue category of fat, while a lesion type such as muscle strain may mainly concern regions corresponding to the tissue category of muscle. Specifically, when the association degree between the lesion type corresponding to a certain focus area and the tissue category corresponding to that focus area is strong, the focus area may be determined as a real focus area.
203: And determining the focus area whose spatial position similarity in at least one medical image does not meet the threshold condition as the suspicious focus area.
The suspicious focus area refers to an area in which a lesion may exist but cannot be fully determined. Similarly, spatial position coordinate alignment processing can be performed on the different medical images to obtain the spatial position coordinate values of each focus area in the different medical images; the spatial position similarity of each focus area in the different medical images is then obtained according to the spatial position coordinate values, and the focus areas whose spatial position similarity does not meet the threshold condition in at least one medical image are determined as suspicious focus areas. The spatial position coordinate alignment processing may be a coordinate transformation of the different medical images so that they are in the same image coordinate system, so that the spatial position coordinate values of each focus area in the different medical images can be obtained in the same coordinate system.
In one embodiment, the suspicious focus area may be determined according to the association degree between the lesion type corresponding to each focus area in the different medical images and the tissue category corresponding to that focus area, where a tissue category may refer to the name of a certain tissue region, such as fat or blood vessel, and the association degree may be, for example, that a lesion type such as knee effusion mainly concerns regions corresponding to the tissue category of fat, while a lesion type such as muscle strain mainly concerns regions corresponding to the tissue category of muscle. Specifically, when the association degree between the lesion type corresponding to a certain focus area and the tissue category corresponding to that focus area is weak, the focus area with the weak association degree is determined as a suspicious focus area. For example, the association degree between a lesion type such as knee effusion and a tissue category such as muscle is weak, so under the lesion type of knee effusion the tissue region corresponding to muscle is not determined as a real focus area but as a suspicious focus area.
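The association-degree based determination described above (for both real and suspicious focus areas) can be sketched as a simple lookup, where a focus area is kept as real or demoted to suspicious according to how strongly its lesion type is associated with the tissue category it falls in. The table values and names used here (e.g. "knee effusion", "fat") are illustrative assumptions; the patent only distinguishes strong from weak association degrees.

    # (lesion type, tissue category) -> assumed association degree
    ASSOCIATION = {
        ("knee effusion", "fat"): 0.9,
        ("knee effusion", "muscle"): 0.2,
        ("muscle strain", "muscle"): 0.9,
    }

    def classify_by_association(lesion_type, tissue_category, strong_threshold=0.5):
        degree = ASSOCIATION.get((lesion_type, tissue_category), 0.0)
        return "real" if degree >= strong_threshold else "suspicious"

    classify_by_association("knee effusion", "muscle")   # -> "suspicious"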
204: and acquiring the real focus characteristics corresponding to the real focus area and the suspicious focus characteristics corresponding to the suspicious focus area.
In one embodiment, this step may include: acquiring a first feature map and a first weight parameter set corresponding to the real focus area, and a second feature map and a second weight parameter set corresponding to the suspicious focus area; determining the real focus features in the first feature map according to the first weight parameter set and a preset first weight threshold; and determining the suspicious focus features in the second feature map according to the second weight parameter set and a preset second weight threshold.
In one embodiment, the first feature map and the first weight parameter set corresponding to the real focus area, and the second feature map and the second weight parameter set corresponding to the suspicious focus area, may be obtained based on an image recognition model. The first feature map may be a feature map obtained by processing the medical image containing the real focus area through a convolution layer of the image recognition model, and the second feature map may be a feature map obtained by processing the medical image containing the suspicious focus area through the convolution layer of the image recognition model. The first weight parameter set may be the set of weight parameters corresponding to the feature channels in the first feature map, and the second weight parameter set may be the set of weight parameters corresponding to the feature channels in the second feature map.
In one embodiment, for example, feature extraction may be performed on the real focus area or the suspicious focus area to obtain a plurality of pieces of feature information, for example p spatial coordinates, such as 512 spatial coordinates (u, v, z). The spatial coordinates are then convolved with a convolution function, for example conv (a linear convolution function), and the p feature vectors are divided into k feature channels; for example, the 512 convolved spatial coordinates (u, v, z) are divided into 8 feature channels, each feature channel containing 32 spatial coordinates (u, v, z), so that the spatial coordinates of each feature channel are different coordinate points in a cube. The feature vectors in each feature channel are then processed with a softmax function to obtain the feature space map of each feature channel; a loss function is constructed according to the feature space map of each feature channel; and the loss function is solved to obtain a continuous feature space map, which is used as the weight parameter of each feature channel.
It should be noted that the softmax function is a normalized exponential function: it "compresses" a K-dimensional vector of arbitrary real numbers into another K-dimensional real vector in which each element lies in (0, 1) and the sum of all elements is 1, so that the feature information in each feature channel is distributed in a cube with a unit side length of 1.
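A minimal sketch of the channel splitting and per-channel weight described above is given below. The tensor shapes, the use of the coordinate norm as the softmax input, and taking the maximum of each channel's feature space map as its weight parameter are assumptions made only for illustration; the patent instead constructs and solves a loss function over the per-channel feature space maps to obtain continuous maps as the weight parameters.

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def channel_weights(coords, num_channels=8):
        # coords: (p, 3) array of convolved spatial coordinates (u, v, z),
        # with p assumed divisible by num_channels
        p = coords.shape[0]
        per_channel = coords.reshape(num_channels, p // num_channels, 3)
        # softmax over the points of each channel -> "feature space map" whose
        # entries lie in (0, 1) and sum to 1 within the channel
        space_map = softmax(np.linalg.norm(per_channel, axis=-1), axis=-1)
        weights = space_map.max(axis=-1)   # one scalar weight per channel (illustrative choice)
        return space_map, weights

    space_map, weights = channel_weights(np.random.randn(512, 3))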
In one embodiment, the step of determining the real focus features in the first feature map according to the first weight parameter set and the preset first weight threshold may include: acquiring N first feature channels in the first feature map and N first weight parameters in the first weight parameter set, where N is an integer greater than or equal to 1 and the first weight parameters correspond to the first feature channels; determining M third feature channels among the N first feature channels according to the N first weight parameters and the preset first weight threshold, where M is an integer greater than or equal to 1; and determining the focus features corresponding to the third feature channels as the real focus features.
In one embodiment, the step of determining the M third feature channels among the N first feature channels according to the N first weight parameters and the preset first weight threshold may include: acquiring the first weight parameter of each of the N first feature channels; and comparing the first weight parameter of each first feature channel with the preset first weight threshold to obtain the M third feature channels, among the N first feature channels, whose first weight parameters are greater than or equal to the preset first weight threshold.
In one embodiment, the step of determining the suspicious focus features in the second feature map according to the second weight parameter set and the preset second weight threshold may include: acquiring N second feature channels in the second feature map and N second weight parameters in the second weight parameter set, where N is an integer greater than or equal to 1 and the second weight parameters correspond to the second feature channels; determining M fourth feature channels among the N second feature channels according to the N second weight parameters and the preset second weight threshold, where M is an integer greater than or equal to 1; and determining the focus features corresponding to the fourth feature channels as the suspicious focus features.
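The channel selection in the embodiments above amounts to thresholding the weight parameters, as in the following sketch (shown for the first feature map; selecting the fourth feature channels from the second feature map is identical). The array-based representation is an assumption for illustration.

    import numpy as np

    def select_channels(channel_features, channel_weights, weight_threshold=0.5):
        # channel_features: (N, d) array, one feature vector per feature channel
        # channel_weights:  (N,) array of first weight parameters
        keep = channel_weights >= weight_threshold   # boolean mask of the M kept channels
        return channel_features[keep], channel_weights[keep]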
In one embodiment, the real focus features corresponding to the real focus area may be obtained based on the feature information of the image content and an attention mechanism. This process may be: acquiring a plurality of pieces of real focus feature information of the real focus area; dividing the plurality of pieces of real focus feature information into a plurality of real focus feature channels; obtaining a first attention weight according to the centroid of each real focus feature channel; and obtaining the real focus features corresponding to the real focus area according to the first attention weight and the plurality of pieces of real focus feature information.
In one embodiment, the suspicious focus features corresponding to the suspicious focus area may be obtained based on the feature information of the image content and the attention mechanism. This process may be: acquiring a plurality of pieces of suspicious focus feature information of the suspicious focus area; dividing the plurality of pieces of suspicious focus feature information into a plurality of suspicious focus feature channels; obtaining a second attention weight according to the centroid of each suspicious focus feature channel; and obtaining the suspicious focus features corresponding to the suspicious focus area according to the second attention weight and the plurality of pieces of suspicious focus feature information.
In one embodiment, the feature extraction may be performed on the image content based on a residual network. In this case, the step of acquiring the plurality of pieces of real focus feature information of the real focus area may include: acquiring a trained residual network; and extracting features from the image content corresponding to the real focus area by using the residual network to obtain the plurality of pieces of real focus feature information.
In one embodiment, the feature extraction may be performed on the image content based on a residual network. In this case, the step of acquiring the plurality of pieces of suspicious focus feature information of the suspicious focus area may include: acquiring a trained residual network; and extracting features from the image content corresponding to the suspicious focus area by using the residual network to obtain the plurality of pieces of suspicious focus feature information.
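A minimal sketch of extracting focus feature information with a trained residual network is shown below. Using torchvision's ResNet-18 with ImageNet weights on a 2D crop of the focus area (and a recent torchvision version that accepts the weights argument) is an assumption made only for illustration; the patent only requires a trained residual network as the feature extractor.

    import torch
    from torchvision import models, transforms

    backbone = models.resnet18(weights="IMAGENET1K_V1")   # stand-in for the trained residual network
    backbone.fc = torch.nn.Identity()                     # keep the 512-d features, drop the classifier
    backbone.eval()

    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Resize((224, 224)),
        transforms.Lambda(lambda t: t.expand(3, -1, -1)),  # grey-scale crop -> 3 channels
    ])

    def focus_feature_information(lesion_crop):
        # lesion_crop: a PIL image of the image content in a (real or suspicious) focus area
        with torch.no_grad():
            return backbone(preprocess(lesion_crop).unsqueeze(0)).squeeze(0)   # (512,)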
In one embodiment, the first attention weight or the second attention weight may be derived based on a loss function.
In one embodiment, the step of obtaining the first attention weight according to the centroid of each real focus feature channel may include: processing the real focus feature information of each real focus feature channel to obtain the real focus feature map of each real focus feature channel; acquiring the first centroid position of each real focus feature channel according to its real focus feature map; constructing a first loss function according to the first centroid position and the real focus feature map of each real focus feature channel; and obtaining a continuous real focus feature map according to the first loss function, used as the first attention weight.
In one embodiment, the step of obtaining the second attention weight according to the centroid of each suspicious focus feature channel may include: processing the suspicious focus feature information of each suspicious focus feature channel to obtain the suspicious focus feature map of each suspicious focus feature channel; acquiring the second centroid position of each suspicious focus feature channel according to its suspicious focus feature map; constructing a second loss function according to the second centroid position and the suspicious focus feature map of each suspicious focus feature channel; and obtaining a continuous suspicious focus feature map according to the second loss function, used as the second attention weight.
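The centroid-based attention weight in the two embodiments above might look like the following sketch. Using a Gaussian around each channel centroid as the continuous map is an assumption made only for illustration; the patent instead constructs a loss function from each channel's centroid position and feature map and solves it to obtain the continuous map used as the attention weight.

    import numpy as np

    def centroid_attention_weight(space_map, coords):
        # space_map: (k, m) per-channel feature map (e.g. from channel_weights above)
        # coords:    (k, m, 3) spatial coordinates grouped by feature channel
        norm = space_map.sum(axis=1, keepdims=True).clip(1e-8)
        centroid = (space_map[..., None] * coords).sum(axis=1) / norm      # (k, 3) channel centroids
        dist = np.linalg.norm(coords - centroid[:, None, :], axis=-1)      # (k, m) distances to centroid
        return np.exp(-dist ** 2)   # continuous map peaked at each channel centroid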
205: And obtaining the feature matching degree of the real focus features and the suspicious focus features.
In one embodiment, this step may include: acquiring a third weight parameter of the real focus features and a fourth weight parameter of the suspicious focus features; calculating the contribution degree of the fourth weight parameter to the third weight parameter, where the contribution degree represents the proportion of the fourth weight parameter in the third weight parameter; and obtaining the feature matching degree of the real focus features and the suspicious focus features according to the contribution degree.
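One way the contribution degree and matching degree could be computed is sketched below. Interpreting the contribution as the (capped) ratio of the fourth weight parameter to the third weight parameter, and averaging it over channels, are assumptions made only for illustration; the patent states only that the contribution degree represents the proportion of the fourth weight parameter in the third weight parameter.

    import numpy as np

    def feature_matching_degree(real_weight, suspicious_weight):
        # real_weight (third weight parameter) and suspicious_weight (fourth weight
        # parameter) may be scalars or per-channel arrays
        real_weight = np.asarray(real_weight, dtype=float)
        suspicious_weight = np.asarray(suspicious_weight, dtype=float)
        contribution = np.minimum(suspicious_weight, real_weight) / (real_weight + 1e-8)
        return float(np.mean(contribution))   # matching degree in [0, 1]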
206: and correcting the suspicious focus area corresponding to the suspicious focus features with the feature matching degree meeting the correction condition into a real focus area.
In one embodiment, the correction condition may be a set feature matching degree threshold; when the feature matching degree is higher than the set feature matching degree threshold, the suspicious focus area corresponding to the suspicious focus features may be corrected into a real focus area.
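Applying the correction condition then reduces to a threshold check against the best match among the real focus features, as in this sketch (the data structures and the match_fn callback are illustrative assumptions).

    def correct_suspicious(real_features, suspicious_regions, match_fn, match_threshold=0.8):
        # suspicious_regions: list of (region, features); match_fn scores a pair of features
        if not real_features:
            return [], [region for region, _ in suspicious_regions]
        corrected, still_suspicious = [], []
        for region, features in suspicious_regions:
            best = max(match_fn(real, features) for real in real_features)
            (corrected if best >= match_threshold else still_suspicious).append(region)
        return corrected, still_suspicious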
According to the image recognition method provided by the embodiment of the application, when focus areas in a large number of medical images are identified, the focus areas can first be divided into real focus areas and suspicious focus areas according to the spatial position similarity of the focus areas in the medical images, and the real focus features corresponding to the real focus areas and the suspicious focus features corresponding to the suspicious focus areas are obtained; the real focus areas among the suspicious focus areas are then automatically identified according to the feature matching degree between the real focus features and the suspicious focus features. The identification process requires no manual participation, which solves the problem that manual diagnosis is time-consuming and labor-intensive and improves the diagnosis efficiency of doctors. Meanwhile, in the process of identifying the suspicious focus areas, the suspicious focus features are matched with the real focus features, so that the data on which the diagnosis of each suspicious focus area is based are more accurate, which improves diagnosis accuracy and avoids problems such as misdiagnosis and missed diagnosis caused by manually locating focuses.
Fig. 4 is a second flowchart of an image recognition method provided in an embodiment of the present application, in which the image recognition method is described in detail from the perspective of the server; referring to fig. 4, the image recognition method includes the following steps:
401: the recognition server performs training of the image recognition model.
In this embodiment, as shown in fig. 3, the image recognition model 300 provided in the present application includes an input layer, a convolution layer, and an output layer.
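A minimal sketch of such a model is given below; the layer sizes, the use of 3D convolutions, and the PyTorch framework are assumptions made only for illustration, since the patent names only an input layer, a convolution layer, and an output layer.

    import torch.nn as nn

    class ImageRecognitionModel(nn.Module):
        def __init__(self, in_channels=1, num_feature_channels=8):
            super().__init__()
            self.input_layer = nn.Conv3d(in_channels, 16, kernel_size=3, padding=1)
            self.conv_layer = nn.Conv3d(16, num_feature_channels, kernel_size=3, padding=1)
            self.output_layer = nn.AdaptiveAvgPool3d(1)   # one scalar per feature channel

        def forward(self, x):                              # x: (batch, 1, D, H, W) image patch
            feature_map = self.conv_layer(self.input_layer(x).relu())
            weights = self.output_layer(feature_map).flatten(1)   # (batch, num_feature_channels)
            return feature_map, weights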
402: The user terminal sends a data request to the detection terminal.
In this embodiment, a doctor sends a data request to a detection terminal such as a CT apparatus through the user terminal, so as to request a plurality of medical images with different display parameters corresponding to a target part.
403: the detection terminal sends a data response to the user terminal.
In this embodiment, a detection terminal such as a CT apparatus performs a CT scan of the target part of the patient according to the data request, generates a plurality of medical images with different display parameters corresponding to the target part, and then transmits the medical images to the doctor's user terminal in a data response.
404: the user terminal transmits an image recognition request to the recognition server.
In this embodiment, after receiving the data response, the user terminal analyzes the data response to obtain a plurality of medical images of different display parameters corresponding to the target part; the plurality of medical images are then added to an image recognition request, which is sent to a recognition server requesting the recognition server to process the medical images.
405: the identification server identifies a plurality of medical images to obtain an image identification result.
In this embodiment, after receiving the image recognition request, the recognition server parses the image recognition request to obtain a plurality of medical images to be recognized.
In this embodiment, the recognition server performs spatial position coordinate alignment processing on the different medical images to obtain the spatial position coordinate values of each focus area in the different medical images; obtains the spatial position similarity of each focus area in the different medical images according to the spatial position coordinate values; determines the focus areas whose spatial position similarity meets a threshold condition in each medical image as real focus areas, and determines the focus areas whose spatial position similarity does not meet the threshold condition in at least one medical image as suspicious focus areas. The spatial position coordinate alignment processing may be a coordinate transformation of the different medical images so that they are in the same image coordinate system, so that the spatial position coordinate values of each focus area in the different medical images can be obtained in the same coordinate system.
In this embodiment, the recognition server uses a partial image corresponding to the real lesion area as a first input of the trained image recognition model, and uses a partial image corresponding to the suspicious lesion area as a second input of the trained image recognition model.
The recognition server may use the trained image recognition model to obtain the first feature map and the first weight parameter set corresponding to the real focus area from the first input, and the second feature map and the second weight parameter set corresponding to the suspicious focus area from the second input. The first feature map may be a feature map obtained by processing the medical image containing the real focus area through a convolution layer of the image recognition model, and the second feature map may be a feature map obtained by processing the medical image containing the suspicious focus area through the convolution layer of the image recognition model; the first weight parameter set may be the set of weight parameters corresponding to the feature channels in the first feature map, and the second weight parameter set may be the set of weight parameters corresponding to the feature channels in the second feature map.
The recognition server may perform feature extraction, through the convolution layer of the trained image recognition model, on the partial image corresponding to the real focus area or the partial image corresponding to the suspicious focus area to obtain a plurality of pieces of corresponding feature information, for example p spatial coordinates, such as 512 spatial coordinates (u, v, z). The spatial coordinates are then convolved with a convolution function, such as conv (a linear convolution function), and the p feature vectors are divided into k feature channels by an activation function such as ReLU; for example, the 512 convolved spatial coordinates (u, v, z) are divided into 8 feature channels, each feature channel containing 32 spatial coordinates (u, v, z), so that the spatial coordinates of each feature channel are different coordinate points in a cube. The feature vectors in each feature channel are then processed with a softmax function to obtain the feature space map of each feature channel; a loss function is constructed according to the feature space map of each feature channel; and the loss function is solved to obtain a continuous feature space map, which is used as the weight parameter of each feature channel.
The recognition server may acquire the N first feature channels in the first feature map and the N first weight parameters in the first weight parameter set, where N is an integer greater than or equal to 1 and the first weight parameters correspond to the first feature channels; determine M third feature channels among the N first feature channels according to the N first weight parameters and a preset first weight threshold, where M is an integer greater than or equal to 1; and determine the focus features corresponding to the third feature channels as the real focus features.
The recognition server may acquire the first weight parameter of each of the N first feature channels, and compare the first weight parameter of each first feature channel with the preset first weight threshold to obtain the M third feature channels, among the N first feature channels, whose first weight parameters are greater than or equal to the preset first weight threshold.
The recognition server may acquire the N second feature channels in the second feature map and the N second weight parameters in the second weight parameter set, where N is an integer greater than or equal to 1 and the second weight parameters correspond to the second feature channels; determine M fourth feature channels among the N second feature channels according to the N second weight parameters and a preset second weight threshold, where M is an integer greater than or equal to 1; and determine the focus features corresponding to the fourth feature channels as the suspicious focus features.
The recognition server may acquire the third weight parameter of the real focus features and the fourth weight parameter of the suspicious focus features, and calculate the contribution degree of the fourth weight parameter to the third weight parameter, where the contribution degree represents the proportion of the fourth weight parameter in the third weight parameter; the feature matching degree of the real focus features and the suspicious focus features is then obtained according to the contribution degree.
The above process is repeated until the feature matching degree between each suspicious focus feature and the real focus features has been calculated, and the suspicious focus areas corresponding to suspicious focus features whose feature matching degree meets the correction condition are corrected into real focus areas, giving the image recognition result of each medical image.
In this embodiment, the correction condition may be a set feature matching degree threshold; when the feature matching degree is higher than the set feature matching degree threshold, the suspicious focus area corresponding to the suspicious focus features may be corrected into a real focus area.
406: the recognition server transmits an image recognition response to the user terminal.
In this embodiment, the recognition server adds the image recognition results of the plurality of medical images to the image recognition response and transmits to the user terminal.
407: the user terminal displays image recognition results of the plurality of medical images.
In this embodiment, the user terminal displays the image recognition results of the plurality of medical images, including which lesion areas are recognized, what the recognition results are (i.e., which areas are true lesion areas), so that the doctor can further perform disease diagnosis on the basis of the recognition results.
Taking recognition by the server as an example, this embodiment details how the method automatically identifies the real focus areas among the suspicious focus areas when identifying focus areas in a large number of medical images. The identification process requires no manual participation, which solves the problems that manual diagnosis is time-consuming and labor-intensive and improves the diagnosis efficiency of doctors. Meanwhile, when identifying the suspicious focus areas, the suspicious focus features are matched with the real focus features, so that the data on which the diagnosis of each suspicious focus area is based are more accurate, which improves diagnosis accuracy and avoids problems such as misdiagnosis and missed diagnosis caused by manually locating focuses.
Accordingly, fig. 5 is a schematic structural diagram of an image recognition device provided in an embodiment of the present application, please refer to fig. 5, the image recognition device includes the following modules:
a first acquisition unit 501, configured to acquire a plurality of medical images with different display parameters corresponding to a target part, where each medical image has at least one focus area;
a first determining unit 502, configured to determine, according to the spatial position similarity of each focus area in the different medical images, the focus areas whose spatial position similarity meets a threshold condition in each medical image as real focus areas;
a second determining unit 503, configured to determine a focus area whose spatial position similarity in at least one medical image does not meet the threshold condition as a suspicious focus area;
a second acquisition unit 504, configured to acquire the real focus features corresponding to the real focus area and the suspicious focus features corresponding to the suspicious focus area;
a third obtaining unit 505, configured to obtain the feature matching degree of the real focus features and the suspicious focus features;
and a correction unit 506, configured to correct the suspicious focus area corresponding to the suspicious focus features whose feature matching degree meets the correction condition into a real focus area.
In one embodiment, the first determining unit 502 may be configured to: perform spatial position coordinate alignment processing on the different medical images to obtain the spatial position coordinate values of each focus area in the different medical images; obtain the spatial position similarity of each focus area in the different medical images according to the spatial position coordinate values; and determine the focus areas whose spatial position similarity meets the threshold condition in each medical image as real focus areas.
In one embodiment, the second acquisition unit 504 may be configured to: acquire a first feature map and a first weight parameter set corresponding to the real focus area, and a second feature map and a second weight parameter set corresponding to the suspicious focus area; determine the real focus features in the first feature map according to the first weight parameter set and a preset first weight threshold; and determine the suspicious focus features in the second feature map according to the second weight parameter set and a preset second weight threshold.
In one embodiment, the second acquisition unit 504 may be configured to: acquire the N first feature channels in the first feature map and the N first weight parameters in the first weight parameter set, where N is an integer greater than or equal to 1 and the first weight parameters correspond to the first feature channels; determine M third feature channels among the N first feature channels according to the N first weight parameters and a preset first weight threshold, where M is an integer greater than or equal to 1; and determine the focus features corresponding to the third feature channels as the real focus features.
In one embodiment, the second acquisition unit 504 may be configured to: acquire the first weight parameter of each of the N first feature channels; and compare the first weight parameter of each first feature channel with the preset first weight threshold to obtain the M third feature channels, among the N first feature channels, whose first weight parameters are greater than or equal to the preset first weight threshold.
In one embodiment, the second acquisition unit 504 may also be configured to: acquire the N second feature channels in the second feature map and the N second weight parameters in the second weight parameter set, where N is an integer greater than or equal to 1 and the second weight parameters correspond to the second feature channels; determine M fourth feature channels among the N second feature channels according to the N second weight parameters and a preset second weight threshold, where M is an integer greater than or equal to 1; and determine the focus features corresponding to the fourth feature channels as the suspicious focus features.
In one embodiment, the third obtaining unit 505 may be configured to: acquire a third weight parameter of the real focus features and a fourth weight parameter of the suspicious focus features; calculate the contribution degree of the fourth weight parameter to the third weight parameter, where the contribution degree represents the proportion of the fourth weight parameter in the third weight parameter; and obtain the feature matching degree of the real focus features and the suspicious focus features according to the contribution degree.
In the implementation, each unit may be implemented as an independent entity, or may be implemented as the same entity or several entities in any combination, and the implementation of each unit may be referred to the foregoing method embodiment, which is not described herein again.
In addition, the embodiment of the application further provides an electronic device, referring to fig. 6, fig. 6 shows a schematic structural diagram of the electronic device according to the embodiment of the application.
As shown in fig. 6, the electronic device may include a processor 601 with one or more processing cores, a memory 602 with one or more computer-readable storage media, a power supply 603, and an input unit 604, among other components.
It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 6 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
In one embodiment, processor 601 is the control center of the electronic device, connects the various parts of the overall electronic device using various interfaces and lines, and performs various functions and processes of the electronic device by running or executing software programs and/or modules stored in memory 602, and invoking data stored in memory 602.
In one embodiment, processor 601 may include one or more processing cores.
In one embodiment, processor 601 may integrate an application processor that primarily handles operating systems, user interfaces, applications, and the like, with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 601.
In one embodiment, the memory 602 may be used to store software programs and modules, and the processor 601 may execute various functional applications and data processing by executing the software programs and modules stored in the memory 602.
In one embodiment, the memory 602 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the electronic device, and the like. In addition, the memory 602 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 602 may also include a memory controller to provide the processor 601 with access to the memory 602.
In one embodiment, the power supply 603 may be logically connected to the processor 601 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
In one embodiment, the power supply 603 may also include one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other components.
In one embodiment, the electronic device may further comprise an input unit 604, which may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit or the like, which is not described herein.
In one embodiment, the processor 601 in the electronic device loads executable files corresponding to the processes of one or more application programs into the memory 602 according to the following instructions, and the processor 601 runs the application programs stored in the memory 602 so as to implement the image recognition function:
acquiring a plurality of medical images with different display parameters corresponding to a target part, wherein each medical image has at least one focus area;
determining, according to the spatial position similarity of each focus area in the different medical images, a focus area whose spatial position similarity meets a threshold condition in each medical image as a real focus area;
determining a focus area whose spatial position similarity does not meet the threshold condition in at least one medical image as a suspicious focus area;
acquiring real focus features corresponding to the real focus area and suspicious focus features corresponding to the suspicious focus area;
acquiring the feature matching degree of the real focus features and the suspicious focus features;
and correcting the suspicious focus area corresponding to suspicious focus features whose feature matching degree meets a correction condition into a real focus area.
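Read together, the six operations form a single pipeline. The sketch below ties them together under stated assumptions: each focus area is represented by an aligned centre coordinate and a per-area feature vector produced by an upstream segmentation step, a distance-based score stands in for the spatial position similarity, and cosine similarity stands in for the feature matching degree; all names and thresholds are illustrative rather than the patented implementation.

```python
import numpy as np

def spatial_similarity(coord_a, coord_b, scale=50.0):
    """Distance-based stand-in for spatial position similarity after
    coordinate alignment (assumed definition, values in (0, 1])."""
    d = np.linalg.norm(np.asarray(coord_a, float) - np.asarray(coord_b, float))
    return float(np.exp(-d / scale))

def recognise_focus_areas(images, sim_threshold=0.8, match_threshold=0.8):
    """images: one dict per display parameter, mapping focus-area id ->
    {'coord': (x, y, z), 'feature': np.ndarray}.
    Returns the ids of focus areas in the first image judged to be real."""
    reference, others = images[0], images[1:]
    real, suspicious = [], []
    for area_id, area in reference.items():
        # real if every other medical image has a focus area at a similar position
        consistent = all(
            any(spatial_similarity(area['coord'], o['coord']) >= sim_threshold
                for o in img.values())
            for img in others)
        (real if consistent else suspicious).append((area_id, area))
    confirmed = list(real)  # snapshot of real focus areas used for matching
    for area_id, area in suspicious:
        # cosine similarity as a stand-in for the feature matching degree
        scores = [
            float(np.dot(area['feature'], r['feature'])
                  / (np.linalg.norm(area['feature']) * np.linalg.norm(r['feature']) + 1e-8))
            for _, r in confirmed]
        if scores and max(scores) >= match_threshold:
            real.append((area_id, area))  # correct suspicious area to real
    return [area_id for area_id, _ in real]
```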
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, and details are not described herein again.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by instructions, or by instructions controlling associated hardware, and the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application also provide a computer readable storage medium having stored therein a plurality of instructions capable of being loaded by a processor to perform the steps of any of the image recognition methods provided by the embodiments of the present application.
For example, the instructions may perform the steps of:
acquiring a plurality of medical images with different display parameters corresponding to a target part, wherein each medical image has at least one focus area;
determining, according to the spatial position similarity of each focus area in the different medical images, a focus area whose spatial position similarity meets a threshold condition in each medical image as a real focus area;
determining a focus area whose spatial position similarity does not meet the threshold condition in at least one medical image as a suspicious focus area;
acquiring real focus features corresponding to the real focus area and suspicious focus features corresponding to the suspicious focus area;
acquiring the feature matching degree of the real focus features and the suspicious focus features;
and correcting the suspicious focus area corresponding to suspicious focus features whose feature matching degree meets a correction condition into a real focus area.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, and details are not described herein again.
Wherein the computer-readable storage medium may comprise: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
Because the instructions stored in the computer-readable storage medium can execute the steps of any image recognition method provided in the embodiments of the present application, they can achieve the beneficial effects achievable by any image recognition method provided in the embodiments of the present application; for details, see the previous embodiments, which are not described herein again.
The image recognition method, apparatus, electronic device, and storage medium provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and its core ideas. Meanwhile, those skilled in the art may make changes to the specific embodiments and application scope in light of the ideas of the present application. In view of the above, the contents of this specification should not be construed as limiting the technical solutions provided by the present application.

Claims (8)

1. An image recognition method, comprising:
acquiring a plurality of medical images with different display parameters corresponding to a target part, wherein each medical image has at least one focus area, and the display parameters comprise at least one of a focus area volume parameter and a parameter of the proportion of the focus area volume to the target part;
determining, according to the spatial position similarity of each focus area in the medical images with different display parameters, a focus area whose spatial position similarity meets a threshold condition in the medical images with different display parameters as a real focus area;
determining a focus area whose spatial position similarity does not meet the threshold condition in at least one medical image as a suspicious focus area, which specifically comprises: performing spatial position coordinate alignment on the medical images with different display parameters to obtain spatial position coordinate values of the focus areas in the medical images with different display parameters; obtaining the spatial position similarity of each focus area in the medical images with different display parameters according to the spatial position coordinate values; and determining focus areas whose spatial position similarity meets the threshold condition in each medical image as real focus areas;
acquiring real focus features corresponding to the real focus area and suspicious focus features corresponding to the suspicious focus area, which specifically comprises: acquiring a first feature map and a first weight parameter set corresponding to the real focus area, and a second feature map and a second weight parameter set corresponding to the suspicious focus area; determining the real focus features in the first feature map according to the first weight parameter set and a preset first weight threshold; and determining the suspicious focus features in the second feature map according to the second weight parameter set and a preset second weight threshold;
acquiring the feature matching degree of the real focus features and the suspicious focus features;
and correcting the suspicious focus area corresponding to suspicious focus features whose feature matching degree meets a correction condition into a real focus area.
2. The image recognition method according to claim 1, wherein the step of determining the real focus features in the first feature map according to the first weight parameter set and a preset first weight threshold comprises:
acquiring N first feature channels in the first feature map and N first weight parameters in the first weight parameter set, wherein N is an integer greater than or equal to 1, and each first weight parameter corresponds to a first feature channel;
determining M third feature channels among the N first feature channels according to the N first weight parameters and the preset first weight threshold, wherein M is an integer greater than or equal to 1;
and determining the focus features corresponding to the third feature channels as the real focus features.
3. The image recognition method according to claim 2, wherein the step of determining M third feature channels among the N first feature channels according to the N first weight parameters and the preset first weight threshold comprises:
acquiring the first weight parameter of each of the N first feature channels;
and comparing each first weight parameter with the preset first weight threshold to obtain, among the N first feature channels, M third feature channels whose first weight parameters are greater than or equal to the preset first weight threshold.
4. The image recognition method according to claim 1, wherein the step of determining the suspicious focus features in the second feature map according to the second weight parameter set and a preset second weight threshold comprises:
acquiring N second feature channels in the second feature map and N second weight parameters in the second weight parameter set, wherein N is an integer greater than or equal to 1, and each second weight parameter corresponds to a second feature channel;
determining M fourth feature channels among the N second feature channels according to the N second weight parameters and the preset second weight threshold, wherein M is an integer greater than or equal to 1;
and determining the focus features corresponding to the fourth feature channels as the suspicious focus features.
5. The image recognition method according to claim 1, wherein the step of acquiring the feature matching degree of the real focus features and the suspicious focus features comprises:
acquiring a third weight parameter of the real focus features and a fourth weight parameter of the suspicious focus features;
calculating the contribution degree of the fourth weight parameter to the third weight parameter, wherein the contribution degree represents the proportion of the fourth weight parameter in the third weight parameter;
and obtaining the feature matching degree of the real focus features and the suspicious focus features according to the contribution degree.
6. An image recognition apparatus, comprising:
a first acquisition unit, configured to acquire a plurality of medical images with different display parameters corresponding to a target part, wherein each medical image has at least one focus area, and the display parameters comprise at least one of a focus area volume parameter and a parameter of the proportion of the focus area volume to the target part;
a first determining unit, configured to determine, according to the spatial position similarity of each focus area in the medical images with different display parameters, a focus area whose spatial position similarity meets a threshold condition in the medical images with different display parameters as a real focus area;
a second determining unit, configured to determine a focus area whose spatial position similarity does not meet the threshold condition in at least one medical image as a suspicious focus area, which specifically comprises: performing spatial position coordinate alignment on the medical images with different display parameters to obtain spatial position coordinate values of the focus areas in the medical images with different display parameters; obtaining the spatial position similarity of each focus area in the medical images with different display parameters according to the spatial position coordinate values; and determining focus areas whose spatial position similarity meets the threshold condition in each medical image as real focus areas;
a second acquisition unit, configured to acquire real focus features corresponding to the real focus area and suspicious focus features corresponding to the suspicious focus area, which specifically comprises: acquiring a first feature map and a first weight parameter set corresponding to the real focus area, and a second feature map and a second weight parameter set corresponding to the suspicious focus area; determining the real focus features in the first feature map according to the first weight parameter set and a preset first weight threshold; and determining the suspicious focus features in the second feature map according to the second weight parameter set and a preset second weight threshold;
a third acquisition unit, configured to acquire the feature matching degree of the real focus features and the suspicious focus features;
and a correction unit, configured to correct the suspicious focus area corresponding to suspicious focus features whose feature matching degree meets a correction condition into a real focus area.
7. An electronic device, comprising:
at least one processor;
and a memory communicatively coupled to the at least one processor;
wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 5.
8. A computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the method of any one of claims 1 to 5.
CN202110498712.0A 2021-05-08 2021-05-08 Image recognition method and device, electronic equipment and storage medium Active CN115311188B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110498712.0A CN115311188B (en) 2021-05-08 2021-05-08 Image recognition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110498712.0A CN115311188B (en) 2021-05-08 2021-05-08 Image recognition method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115311188A CN115311188A (en) 2022-11-08
CN115311188B true CN115311188B (en) 2023-12-22

Family

ID=83853440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110498712.0A Active CN115311188B (en) 2021-05-08 2021-05-08 Image recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115311188B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116467481B (en) * 2022-12-14 2023-12-01 要务(深圳)科技有限公司 Information processing method and system based on cloud computing

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644419A (en) * 2017-09-30 2018-01-30 百度在线网络技术(北京)有限公司 Method and apparatus for analyzing medical image
WO2020093563A1 (en) * 2018-11-07 2020-05-14 哈尔滨工业大学(深圳) Medical image processing method, system, device, and storage medium
CN109754387A (en) * 2018-11-23 2019-05-14 北京永新医疗设备有限公司 Medical image lesion detects localization method, device, electronic equipment and storage medium
CN110348543A (en) * 2019-06-10 2019-10-18 腾讯医疗健康(深圳)有限公司 Eye fundus image recognition methods, device, computer equipment and storage medium
CN110600122A (en) * 2019-08-23 2019-12-20 腾讯医疗健康(深圳)有限公司 Digestive tract image processing method and device and medical system
CN110674885A (en) * 2019-09-30 2020-01-10 杭州依图医疗技术有限公司 Focus matching method and device
CN110766682A (en) * 2019-10-29 2020-02-07 慧影医疗科技(北京)有限公司 Pulmonary tuberculosis positioning screening device and computer equipment
CN111340756A (en) * 2020-02-13 2020-06-26 北京深睿博联科技有限责任公司 Medical image lesion detection and combination method, system, terminal and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A new method for detecting lung cancer lesions based on CT images; Jia Tong; Wei Ying; Zhao Dazhe; Acta Electronica Sinica (No. 11); full text *
Research on segmentation methods for suspicious masses in a breast computer-aided diagnosis system; Shen Xiao; Lan Yihua; Lu Yuling; Shang Naili; Ma Xiaopu; Computer Science (No. S2); full text *
Object detection in medical images based on a visual attention model; Liao; Sun Jifeng; Computer Simulation (No. 09); full text *

Also Published As

Publication number Publication date
CN115311188A (en) 2022-11-08

Similar Documents

Publication Publication Date Title
US10810735B2 (en) Method and apparatus for analyzing medical image
CN109035234B (en) Nodule detection method, device and storage medium
JP5814504B2 (en) Medical image automatic segmentation system, apparatus and processor using statistical model
JP3838954B2 (en) Medical video processing system and processing method
US8218849B2 (en) Method and system for automatic landmark detection using discriminative joint context
CN107464231B (en) System and method for determining optimal operating parameters for medical imaging
US11996198B2 (en) Determination of a growth rate of an object in 3D data sets using deep learning
CN112435341A (en) Training method and device for three-dimensional reconstruction network, and three-dimensional reconstruction method and device
CN113469981B (en) Image processing method, device and storage medium
CN109919912A (en) A kind of quality evaluating method and device of medical image
CN115311188B (en) Image recognition method and device, electronic equipment and storage medium
CN116869555A (en) Scanning protocol adjusting method, device and storage medium
EP4235566A1 (en) Method and system for determining a change of an anatomical abnormality depicted in medical image data
CN110992312A (en) Medical image processing method, device, storage medium and computer equipment
US11837352B2 (en) Body representations
CN115969414A (en) Method and system for using analytical aids during ultrasound imaging
JP2019118694A (en) Medical image generation apparatus
CN113962957A (en) Medical image processing method, bone image processing method, device and equipment
CN114360695A (en) Mammary gland ultrasonic scanning analysis auxiliary system, medium and equipment
CN112530580A (en) Medical image picture processing method and computer readable storage medium
CN114463323B (en) Focal region identification method and device, electronic equipment and storage medium
CN117475344A (en) Ultrasonic image interception method and device, terminal equipment and storage medium
US20240005503A1 (en) Method for processing medical images
CN116152143A (en) Dose deformation evaluation method, system and equipment
Wu et al. B-ultrasound guided venipuncture vascular recognition system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100120 rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Applicant after: Shukun Technology Co.,Ltd.

Address before: 100120 rooms 303, 304, 305, 321 and 322, building 3, No. 11, Chuangxin Road, science and Technology Park, Changping District, Beijing

Applicant before: Shukun (Beijing) Network Technology Co.,Ltd.

GR01 Patent grant