CN117576375A - Method, device and equipment for identifying hip joint lesions based on deep learning algorithm - Google Patents

Method, device and equipment for identifying hip joint lesions based on deep learning algorithm

Info

Publication number
CN117576375A
Authority
CN
China
Prior art keywords
hip joint
deep learning
learning algorithm
image
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311595240.6A
Other languages
Chinese (zh)
Inventor
张逸凌
刘星宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Longwood Valley Medtech Co Ltd
Original Assignee
Longwood Valley Medtech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Longwood Valley Medtech Co Ltd
Priority to CN202311595240.6A
Publication of CN117576375A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements using classification, e.g. of video objects
    • G06V10/766 Arrangements using regression, e.g. by projecting features on hyperplanes
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Arrangements using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a method, a device, equipment and a computer-readable storage medium for identifying hip joint lesions based on a deep learning algorithm. The method comprises: acquiring a hip joint X-ray image; and inputting the hip joint X-ray image into a preset hip joint lesion recognition model, which outputs a detection frame position, a lesion recognition result and a thermodynamic diagram. The hip joint lesion recognition model comprises a network structure that focuses on both global and local information when processing image tasks. According to the embodiments of the application, the efficiency of hip joint lesion recognition can be improved.

Description

Method, device and equipment for identifying hip joint lesions based on deep learning algorithm
Technical Field
The application belongs to the technical field of deep learning intelligent recognition, and particularly relates to a method, a device, equipment and a computer readable storage medium for recognizing hip joint lesions based on a deep learning algorithm.
Background
Currently, hip joint lesion recognition in the related art has the limitation that one model can only identify one specific lesion type, resulting in inefficient lesion identification.
Therefore, how to improve the recognition efficiency of hip joint lesions is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a computer readable storage medium for identifying hip joint lesions based on a deep learning algorithm, which can improve the identification efficiency of the hip joint lesions.
In a first aspect, an embodiment of the present application provides a method for identifying a hip joint disorder based on a deep learning algorithm, including:
acquiring a hip joint X-ray image;
inputting the hip joint X-ray image into a preset hip joint lesion recognition model, and outputting a detection frame position, a lesion recognition result and a thermodynamic diagram;
wherein the hip joint pathology recognition model comprises a network structure for focusing on both global and local information when processing the image task.
Optionally, the network structure for focusing on global and local information simultaneously when processing image tasks is an Hourglass network structure, comprising:
an input layer for inputting a hip X-ray image;
the first convolution layer is used for extracting a characteristic image of the hip joint X-ray image;
a downsampling layer for reducing the size of the feature map;
the residual block comprises residual connection and is used for network training and information transmission;
an upsampling layer for increasing the size of the feature map;
the second convolution layer is used for extracting the feature map after upsampling;
and the output layer is used for outputting the result.
Optionally, outputting the detection frame position includes:
predicting the center point of each target in the hip joint X-ray image in a regression mode;
for each spatial location, the head outputs a score indicating whether the location contains a target center;
predicting size information of the target by the head, including a width and a height of the target frame; wherein these predictions are relative offsets with respect to the center point.
Optionally, outputting the lesion recognition result includes:
predicting target categories which may exist at each center point position through head output;
obtaining a probability distribution through calculation of a softmax function;
and outputting a lesion recognition result based on the probability distribution.
Optionally, outputting the thermodynamic diagram includes:
intercepting a feature map through a detection frame;
and generating a visual thermodynamic diagram through a sigmoid activation function.
Optionally, the network structure for focusing on global and local information simultaneously when processing the image task is a UNet network structure, including:
a downsampling (encoder) path and an upsampling (decoder) path, which enable the network to capture global and local features of the image while retaining high-resolution information.
Optionally, the method further comprises:
respectively determining the position of a detection frame, a lesion recognition result, a loss function of a thermodynamic diagram and corresponding weights;
based on the three loss functions and their corresponding weights, an overall loss function is calculated.
In a second aspect, embodiments of the present application provide a hip joint lesion recognition device based on a deep learning algorithm, the device comprising:
the image acquisition module is used for acquiring hip joint X-ray images;
the lesion recognition module is used for inputting the hip joint X-ray image into a preset hip joint lesion recognition model and outputting the position of the detection frame, the lesion recognition result and the thermodynamic diagram;
wherein the hip joint pathology recognition model comprises a network structure for focusing on both global and local information when processing the image task.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements a method of recognition of hip joint lesions based on a deep learning algorithm as shown in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method of hip joint pathology identification based on a deep learning algorithm as shown in the first aspect.
According to the method, the device, the equipment and the computer readable storage medium for identifying the hip joint lesions based on the deep learning algorithm, which are disclosed by the embodiment of the application, the identification efficiency of the hip joint lesions can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are merely some embodiments of the present application, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a method for identifying a hip joint disorder based on a deep learning algorithm according to one embodiment of the present application;
FIG. 2 is a flow chart of a method for identifying a hip joint disorder based on a deep learning algorithm according to one embodiment of the present application;
FIG. 3 is a schematic diagram of an Hourglass network structure according to one embodiment of the present application;
fig. 4 is a schematic diagram of a UNet network structure according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a deep learning algorithm-based hip arthropathy recognition device according to one embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application are described in detail below to make the objects, technical solutions and advantages of the present application more apparent, and to further describe the present application in conjunction with the accompanying drawings and the detailed embodiments. It should be understood that the specific embodiments described herein are intended to be illustrative of the application and are not intended to be limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present application by showing examples of the present application.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
To solve the problems in the prior art, embodiments of the present application provide a method, an apparatus, a device, and a computer-readable storage medium for identifying hip joint lesions based on a deep learning algorithm. The following first describes a method for identifying a hip joint disorder based on a deep learning algorithm according to an embodiment of the present application.
Fig. 1 shows a flow chart of a method for identifying hip joint lesions based on a deep learning algorithm according to an embodiment of the present application. As shown in fig. 1, the method for identifying the hip joint lesion based on the deep learning algorithm comprises the following steps:
s101, acquiring a hip joint X-ray image;
s102, inputting a hip joint X-ray image into a preset hip joint lesion recognition model, and outputting a detection frame position, a lesion recognition result and a thermodynamic diagram;
wherein the hip joint pathology recognition model comprises a network structure for focusing on both global and local information when processing the image task.
Specifically, as shown in fig. 2, the algorithm first extracts the feature map of the X-ray image through a plurality of Hourglass blocks, and then feeds it to three output heads. The first head, following the head output pattern of the CenterNet model, outputs the center point and the width and height of the hip joint position to be detected (Bounding Box Regression Head). The second branch is a multi-label classification head (Classification Head), which performs multi-label lesion classification and outputs 9 labels in total:
incomplete femoral head, femoral head collapse, joint space change, cystic change, femoral head dislocation, osteophyte, cartilage sclerosis, sacroiliac joint fusion, and moth-eaten change.
The description of the 9 labels is as follows:
Incomplete femoral head: the contour of the femoral head is incomplete or missing.
Femoral head collapse: the femoral head surface subsides or collapses.
Joint space change: the joint space in the hip joint changes, possibly narrowing or widening.
Cystic change: fluid-filled cysts form around the joint.
Femoral head dislocation: the femoral head is not in its normal position and is separated from the acetabulum.
Osteophyte: additional bone tissue grows at the bone surface.
Cartilage sclerosis: the articular cartilage becomes hardened and loses elasticity.
Sacroiliac joint fusion: the joint between the sacrum and ilium fuses, losing normal joint motion.
Moth-eaten change: the bone tissue exhibits a moth-eaten appearance, commonly referring to small bone defects.
The third branch is the thermodynamic diagram head of the hip joint (Segmentation Head). The feature map extracted by the Hourglass already has thermodynamic-diagram-like properties, so this property can be reused in the model: after the hip joint feature map is extracted, a thermodynamic diagram of the hip joint is generated through a sigmoid operation, which makes the visual display more informative.
In one embodiment, the network structure for simultaneously focusing on global and local information in processing image tasks is a Hourglass network structure comprising:
an input layer for inputting a hip X-ray image;
the first convolution layer is used for extracting a characteristic image of the hip joint X-ray image;
a downsampling layer for reducing the size of the feature map;
the residual block comprises residual connection and is used for network training and information transmission;
an upsampling layer for increasing the size of the feature map;
the second convolution layer is used for extracting the feature map after upsampling;
and the output layer is used for outputting the result.
In particular, as shown in FIG. 3, "Hourglass" generally describes a class of network structures with a symmetrical, layer-by-layer decrease and then increase in resolution, commonly used for image processing tasks such as human body pose estimation. The network diagram of this structure looks like an hourglass, hence the name.
Description:
input: an image or a feature map is input.
Convolutional Layers: a series of convolution layers for extracting features of the image.
Down sampling: downsampling, which is achieved by a pooling or convolution operation, reduces the size of the feature map.
Residual Blocks: and a residual block containing residual connection, which is helpful for network training and information transfer.
Upsampling: the feature map is increased in size by an upsampling operation (e.g., nearest neighbor interpolation).
Convolutional Layers: another set of convolution layers is used to further process the upsampled features.
Output: the output of the network, which may be predicted pose key points, segmentation results, etc., depends on the task of the network.
A key feature of the Hourglass network architecture is that it preserves high-level semantic information while reducing the feature map size, and then gradually restores detail through upsampling. This architecture enables the network to focus on both global and local information when processing image tasks, and thus achieves good performance in many computer vision tasks, particularly in fields such as human body pose estimation.
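The symmetric shrink-and-restore idea can be illustrated with a minimal NumPy sketch (an illustrative toy, not the patent's actual network): each level downsamples, recurses, upsamples, and merges the result back with the higher-resolution skip branch.

```python
import numpy as np

def downsample(x):
    """Halve the spatial size by 2x2 average pooling (assumes even H and W)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Double the spatial size by nearest-neighbour interpolation."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def hourglass(x, depth=2):
    """Symmetric down/up path with skip connections: shrink, process,
    restore, and add the higher-resolution feature back at each level."""
    if depth == 0:
        return x
    skip = x                        # high-resolution branch
    low = hourglass(downsample(x), depth - 1)
    return skip + upsample(low)     # restore size, merge information

feat = np.arange(16.0).reshape(4, 4)
out = hourglass(feat)
assert out.shape == feat.shape      # output keeps the input resolution
```

In a real network, the pooling and upsampling steps would be interleaved with convolutions and residual blocks, as the layer list above describes.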
In one embodiment, outputting the detection frame position includes:
predicting the center point of each target in the hip joint X-ray image in a regression mode;
for each spatial location, the head outputs a score indicating whether the location contains a target center;
predicting size information of the target by the head, including a width and a height of the target frame; wherein these predictions are relative offsets with respect to the center point.
Specifically, center point prediction (Center Heatmap):
the present patent predicts the center point of each object in the image using regression. For each spatial location, the head outputs a score indicating whether the location contains the target center.
Target Size Prediction (Size Prediction):
the size information of the object, typically the width and height of the object frame, is predicted by the header. These predictions are relative offsets with respect to the center point.
In one embodiment, outputting a lesion recognition result includes:
predicting target categories which may exist at each center point position through head output;
a probability distribution calculated by a softmax function;
and outputting a lesion recognition result based on the probability distribution.
Specifically, category Prediction (Class Prediction):
the patent also predicts the target class that may exist at each center point location through the head output. The probability distribution was calculated by the softmax function.
In one embodiment, outputting a thermodynamic diagram comprises:
intercepting a feature map through a detection frame;
by sigmoid activation of the function, a visual thermodynamic diagram is generated.
Specifically, thermodynamic diagram Mask Generation (Mask Generation):
the head of this patent outputs a thermodynamic diagram of a center point, where the value at each location indicates whether the location contains the target center. This thermodynamic diagram is typically processed through a sigmoid activation function to have a value between O and 1.
Specific way of thermodynamic diagram generation:
in the CNN network, a sigmoid activation function is added after the feature map, so that the sigmoid activation function can be converted into a numerical value in the range of [0,1], and a thermodynamic diagram is generated. This technique is typically used to visualize the degree of interest of a network in an input, i.e., the area of interest of the network when processing an input image.
head output mode:
the Head in this patent has three in total, one for segmentation of the target object (Segmentation Head), one for classification of the target object (Classification Head), and one for coordinate positioning of the target object (Bounding Box Regression Head).
The three heads are specifically as follows:
each head has its specific outputs which can be compared to corresponding targets by appropriate design of the loss function to perform joint training. This multi-headed architecture allows the network to handle segmentation, regression and classification tasks simultaneously, improving the versatility of the model.
In one embodiment, the network structure for simultaneously focusing on global and local information in processing image tasks is a UNet network structure, comprising:
one downsampling and one upsampling enable the network to capture global and local features of the image while retaining high resolution information.
Specifically, as shown in fig. 4, UNet (U-shaped Network) is a Convolutional Neural Network (CNN) structure for image segmentation, originally used for biomedical image segmentation. The UNet structure is unique in that it employs a U-shaped architecture consisting of a downsampling (encoder) part and an upsampling (decoder) part, enabling the network to capture global and local features of the image while retaining high-resolution information.
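The encoder-decoder skip connection can be sketched in terms of array shapes (illustrative sizes; a real UNet interleaves convolutions at every level):

```python
import numpy as np

def upsample(x):
    """Nearest-neighbour 2x upsampling along the two spatial axes."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

# Encoder feature at full resolution and a deeper bottleneck feature at
# half resolution; the decoder upsamples the deep feature and concatenates
# it with the skip connection along the channel axis, so fine detail
# survives the bottleneck.
skip = np.zeros((8, 8, 16))      # high-resolution encoder output (H, W, C)
deep = np.zeros((4, 4, 32))      # bottleneck output
merged = np.concatenate([skip, upsample(deep)], axis=-1)
assert merged.shape == (8, 8, 48)
```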
In one embodiment, further comprising:
respectively determining the position of a detection frame, a lesion recognition result, a loss function of a thermodynamic diagram and corresponding weights;
based on the three loss functions and their corresponding weights, an overall loss function is calculated.
Specifically, the following is the general form of the overall loss function of CenterNet:
L_total = λ_center · L_center + λ_size · L_size + λ_class · L_class + λ_reg · L_reg
wherein:
L_center is the center point prediction loss;
L_size is the target frame size loss;
L_class is the category loss;
L_reg is the mask prediction loss;
λ_center, λ_size, λ_class and λ_reg are weight hyperparameters of the corresponding loss terms, used to balance the contribution of each loss.
Here N is the total number of predicted frames, y_ij is the binary label of whether the actual center point contains the target, ŷ_ij is the model's prediction score for that center point, b_ij is the coordinates of the actual target frame, b̂_ij is the model's prediction of the target frame, and C is the number of categories.
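The weighted combination can be sketched directly (the individual loss values and weights below are invented placeholders, not the patent's settings):

```python
def total_loss(l_center, l_size, l_class, l_reg,
               w_center=1.0, w_size=0.1, w_class=1.0, w_reg=1.0):
    """Weighted sum of the per-head losses:
    L_total = w_center*L_center + w_size*L_size + w_class*L_class + w_reg*L_reg.
    The default weights are illustrative, not the patent's values."""
    return (w_center * l_center + w_size * l_size
            + w_class * l_class + w_reg * l_reg)

# e.g. center loss 1.0, size loss 2.0, class loss 0.5, mask loss 0.25
loss = total_loss(1.0, 2.0, 0.5, 0.25)
assert abs(loss - 1.95) < 1e-9
```

Tuning these weights shifts how much the shared backbone is optimized for detection, classification, or mask quality during joint training.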
In deep learning, the term "end-to-end" training refers to optimizing the entire system or model as a single, end-to-end-trainable entity, rather than splitting the system into multiple phases, each of which is trained separately. This means that the model can generate the final output directly from the input without requiring manual design or manual extraction of features.
The method realizes end-to-end training and inference for hip joint lesions in X-ray images through a single model, which reduces model complexity while also reducing the model's inference time.
Fig. 5 is a schematic structural diagram of a deep learning algorithm-based hip joint pathology recognition device according to an embodiment of the present application, where the device includes:
an image acquisition module 501 for acquiring a hip X-ray image;
the lesion recognition module 502 is configured to input a hip joint X-ray image into a preset hip joint lesion recognition model, and output a detection frame position, a lesion recognition result and a thermodynamic diagram;
wherein the hip joint pathology recognition model comprises a network structure for focusing on both global and local information when processing the image task.
Fig. 6 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device may include a processor 601 and a memory 602 storing computer program instructions.
In particular, the processor 601 may include a Central Processing Unit (CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or may be configured to implement one or more integrated circuits of embodiments of the present application.
Memory 602 may include mass storage for data or instructions. By way of example, and not limitation, memory 602 may include a Hard Disk Drive (HDD), floppy Disk Drive, flash memory, optical Disk, magneto-optical Disk, magnetic tape, or universal serial bus (Universal Serial Bus, USB) Drive, or a combination of two or more of the above. The memory 602 may include removable or non-removable (or fixed) media, where appropriate. The memory 602 may be internal or external to the electronic device, where appropriate. In particular embodiments, memory 602 may be a non-volatile solid state memory.
In one embodiment, memory 602 may be Read Only Memory (ROM). In one embodiment, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically Erasable PROM (EEPROM), electrically rewritable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 601 implements any of the methods of the deep learning algorithm-based hip joint lesion recognition methods of the above embodiments by reading and executing computer program instructions stored in the memory 602.
In one example, the electronic device may also include a communication interface 603 and a bus 610. As shown in fig. 6, the processor 601, the memory 602, and the communication interface 603 are connected to each other through a bus 610 and perform communication with each other.
The communication interface 603 is mainly configured to implement communication between each module, apparatus, unit and/or device in the embodiments of the present application.
Bus 610 includes hardware, software, or both, coupling components of the electronic device to one another. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of the above. Bus 610 may include one or more buses, where appropriate. Although embodiments of the present application describe and illustrate a particular bus, the present application contemplates any suitable bus or interconnect.
In addition, in combination with the method for identifying hip joint lesions based on the deep learning algorithm in the above embodiment, the embodiments of the present application may be implemented by providing a computer readable storage medium. The computer readable storage medium has stored thereon computer program instructions; the computer program instructions, when executed by a processor, implement any of the methods of hip joint pathology recognition methods based on a deep learning algorithm of the above embodiments.
It should be clear that the present application is not limited to the particular arrangements and processes described above and illustrated in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present application are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications, and additions, or change the order between steps, after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the present application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, may be different from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, which are intended to be included in the scope of the present application.

Claims (10)

1. A method for identifying hip joint lesions based on a deep learning algorithm, comprising:
acquiring a hip joint X-ray image;
inputting the hip joint X-ray image into a preset hip joint lesion recognition model, and outputting a detection frame position, a lesion recognition result, and a heat map;
wherein the hip joint lesion recognition model comprises a network structure that attends to both global and local information when processing image tasks.
2. The deep learning algorithm-based hip joint lesion recognition method according to claim 1, wherein the network structure that attends to both global and local information when processing image tasks is an hourglass network structure, comprising:
an input layer for inputting a hip X-ray image;
the first convolution layer is used for extracting a characteristic image of the hip joint X-ray image;
a downsampling layer for reducing the size of the feature map;
the residual block comprises residual connection and is used for network training and information transmission;
an upsampling layer for increasing the size of the feature map;
the second convolution layer is used for extracting the feature map after upsampling;
and the output layer is used for outputting the result.
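The layer sequence in claim 2 can be illustrated, purely as a sketch, by tracing how the spatial size of a square feature map changes through the hourglass stages; the factor-of-2 sampling and the size-preserving convolutions are assumptions for illustration, not details taken from the patent:

```python
def hourglass_spatial_trace(size):
    """Trace the side length of a square feature map through the hourglass
    stages of claim 2: first conv -> downsample -> residual -> upsample ->
    second conv. Convolutions and the residual block are assumed to preserve
    spatial size; the down/upsampling layers are assumed to use a factor of 2."""
    trace = {"input": size}
    trace["first_conv"] = size            # feature extraction, size preserved
    trace["downsample"] = size // 2       # downsampling layer halves the map
    trace["residual"] = trace["downsample"]   # residual block keeps the size
    trace["upsample"] = trace["residual"] * 2  # upsampling restores the size
    trace["second_conv"] = trace["upsample"]   # conv on the upsampled map
    return trace

# e.g. a hypothetical 512x512 feature map derived from a hip X-ray image
trace = hourglass_spatial_trace(512)
```

The symmetric shrink-then-restore shape is what lets the bottleneck stages see global context while the restored resolution keeps local detail.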
3. The deep learning algorithm-based hip joint pathology recognition method according to claim 1, wherein outputting the detection frame position comprises:
predicting the center point of each target in the hip joint X-ray image in a regression mode;
for each spatial location, the head outputs a score indicating whether the location contains a target center;
predicting, via the head, size information of the target, including the width and height of the target frame; wherein these predictions are offsets relative to the center point.
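A minimal sketch of the center-point detection scheme described in claim 3; the function names and the 0.5 score threshold are assumptions, and this is the generic center-point decoding idea rather than the patented implementation:

```python
def keep_center(score, threshold=0.5):
    """Per-location score from the head: a spatial location is treated as
    containing a target center when its score exceeds the (assumed) threshold."""
    return score > threshold

def decode_box(cx, cy, w, h):
    """Decode a detection frame from a regressed center point (cx, cy) and the
    head's size predictions (w, h), which per claim 3 are offsets relative to
    the center. Returns (x_min, y_min, x_max, y_max)."""
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)

# Hypothetical example: a 40x20 box centered at (100, 80).
box = decode_box(100.0, 80.0, 40.0, 20.0)
```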
4. The method for identifying a hip joint lesion based on a deep learning algorithm according to claim 3, wherein outputting the lesion identification result comprises:
predicting, via the head output, the target categories that may exist at each center point position;
computing a probability distribution with a softmax function;
and outputting a lesion recognition result based on the probability distribution.
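The probability distribution of claim 4 can be sketched as a plain softmax over the head's per-category logits; the category names in the example are purely illustrative and do not come from the patent:

```python
import math

def softmax(logits):
    """Turn per-category logits at a center point into the probability
    distribution of claim 4 (max-subtracted for numerical stability)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def lesion_result(logits, categories):
    """Output a lesion recognition result as the most probable category."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return categories[best], probs[best]

# Hypothetical three-way example with illustrative category names.
probs = softmax([2.0, 0.5, 0.1])
label, p = lesion_result([2.0, 0.5, 0.1], ["normal", "osteoarthritis", "necrosis"])
```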
5. The deep learning algorithm-based hip joint lesion recognition method according to claim 4, wherein outputting the heat map comprises:
cropping the feature map with the detection frame;
and generating a visual heat map by applying a sigmoid activation function.
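The heat map step of claim 5 amounts to squashing the cropped feature activations into [0, 1] so they can be rendered as an overlay; a minimal sketch, with the input values chosen purely for illustration:

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid, mapping any activation into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def heat_map(cropped_features):
    """Generate the visual heat map of claim 5: the feature map is assumed to
    already be cropped by the detection frame; a sigmoid then maps every
    activation into (0, 1) for rendering as a color overlay."""
    return [[sigmoid(v) for v in row] for row in cropped_features]

# Hypothetical 2x2 crop of a feature map.
overlay = heat_map([[0.0, 2.0], [-2.0, 4.0]])
```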
6. The deep learning algorithm-based hip joint pathology recognition method according to claim 1, wherein the network structure for simultaneously focusing on global and local information when processing an image task is a UNet network structure, comprising:
a downsampling path followed by an upsampling path enables the network to capture both global and local features of the image while retaining high-resolution information.
7. The deep learning algorithm-based hip joint pathology recognition method according to claim 1, further comprising:
determining loss functions and corresponding weights for the detection frame position, the lesion recognition result, and the heat map, respectively;
and calculating an overall loss function based on the three loss functions and their corresponding weights.
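The overall loss of claim 7 is a weighted combination of the three per-task losses; the sketch below assumes a simple weighted sum, and the equal default weights and parameter names are illustrative assumptions:

```python
def overall_loss(l_box, l_cls, l_heat, w_box=1.0, w_cls=1.0, w_heat=1.0):
    """Combine the three per-task losses of claim 7 (detection frame position,
    lesion recognition result, heat map) into one overall training loss as a
    weighted sum. Default equal weights are an assumption, not patent detail."""
    return w_box * l_box + w_cls * l_cls + w_heat * l_heat

# Hypothetical per-task loss values and weights.
total = overall_loss(0.4, 0.2, 0.1, w_box=1.0, w_cls=0.5, w_heat=0.25)
```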
8. A deep learning algorithm-based hip joint pathology recognition device, the device comprising:
the image acquisition module is used for acquiring hip joint X-ray images;
the lesion recognition module is used for inputting the hip joint X-ray image into a preset hip joint lesion recognition model and outputting a detection frame position, a lesion recognition result, and a heat map;
wherein the hip joint lesion recognition model comprises a network structure that attends to both global and local information when processing image tasks.
9. An electronic device, the electronic device comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the deep learning algorithm-based hip joint lesion recognition method according to any one of claims 1-7.
10. A computer readable storage medium, characterized in that it has stored thereon computer program instructions which, when executed by a processor, implement the deep learning algorithm based hip joint lesion recognition method according to any of claims 1-7.
CN202311595240.6A 2023-11-27 2023-11-27 Method, device and equipment for identifying hip joint lesions based on deep learning algorithm Pending CN117576375A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311595240.6A CN117576375A (en) 2023-11-27 2023-11-27 Method, device and equipment for identifying hip joint lesions based on deep learning algorithm


Publications (1)

Publication Number Publication Date
CN117576375A true CN117576375A (en) 2024-02-20

Family

ID=89889707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311595240.6A Pending CN117576375A (en) 2023-11-27 2023-11-27 Method, device and equipment for identifying hip joint lesions based on deep learning algorithm

Country Status (1)

Country Link
CN (1) CN117576375A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882531A (en) * 2020-07-15 2020-11-03 中国科学技术大学 Automatic analysis method for hip joint ultrasonic image
CN116758341A (en) * 2023-05-31 2023-09-15 北京长木谷医疗科技股份有限公司 GPT-based hip joint lesion intelligent diagnosis method, device and equipment
CN116894844A (en) * 2023-07-06 2023-10-17 北京长木谷医疗科技股份有限公司 Hip joint image segmentation and key point linkage identification method and device
CN116894973A (en) * 2023-07-06 2023-10-17 北京长木谷医疗科技股份有限公司 Integrated learning-based intelligent self-labeling method and device for hip joint lesions


Similar Documents

Publication Publication Date Title
CN111401201B (en) Aerial image multi-scale target detection method based on spatial pyramid attention drive
CN106548127B (en) Image recognition method
Bahnsen et al. Rain removal in traffic surveillance: Does it matter?
CN108416266B (en) Method for rapidly identifying video behaviors by extracting moving object through optical flow
CN111428875A (en) Image recognition method and device and corresponding model training method and device
CN102682428B (en) Fingerprint image computer automatic mending method based on direction fields
CN113936256A (en) Image target detection method, device, equipment and storage medium
CN111709416A (en) License plate positioning method, device and system and storage medium
KR101901487B1 (en) Real-Time Object Tracking System and Method for in Lower Performance Video Devices
CN113076884B (en) Cross-mode eye state identification method from near infrared light to visible light
CN111582126A (en) Pedestrian re-identification method based on multi-scale pedestrian contour segmentation fusion
CN113160265A (en) Construction method of prediction image for brain corpus callosum segmentation for corpus callosum state evaluation
CN113077419A (en) Information processing method and device for hip joint CT image recognition
CN113780110A (en) Method and device for detecting weak and small targets in image sequence in real time
CN114119586A (en) Intelligent detection method for aircraft skin defects based on machine vision
CN111507337A (en) License plate recognition method based on hybrid neural network
JP2019125203A (en) Target recognition device, target recognition method, program and convolution neural network
CN111062347A (en) Traffic element segmentation method in automatic driving, electronic device and storage medium
CN116844143B (en) Embryo development stage prediction and quality assessment system based on edge enhancement
CN110570450B (en) Target tracking method based on cascade context-aware framework
CN113223614A (en) Chromosome karyotype analysis method, system, terminal device and storage medium
CN117576375A (en) Method, device and equipment for identifying hip joint lesions based on deep learning algorithm
Poomani et al. RETRACTED ARTICLE: Wiener filter based deep convolutional network approach for classification of satellite images
JP4818430B2 (en) Moving object recognition method and apparatus
Shi et al. Perceptual loss for superpixel-level multispectral and panchromatic image classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination