CN118096550A - Spinal image fusion method based on multi-scale residual pyramid attention network - Google Patents

Spinal image fusion method based on multi-scale residual pyramid attention network

Info

Publication number
CN118096550A
Authority
CN
China
Prior art keywords
image
spine
scale
attention network
scale residual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410153166.0A
Other languages
Chinese (zh)
Inventor
张逸凌
刘星宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Longwood Valley Medtech Co Ltd
Original Assignee
Longwood Valley Medtech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Longwood Valley Medtech Co Ltd filed Critical Longwood Valley Medtech Co Ltd
Priority to CN202410153166.0A priority Critical patent/CN118096550A/en
Publication of CN118096550A publication Critical patent/CN118096550A/en
Pending legal-status Critical Current


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application provides a spine image fusion method, device and equipment based on a multi-scale residual pyramid attention network, and a computer readable storage medium. The spine image fusion method based on the multi-scale residual pyramid attention network comprises the following steps: respectively acquiring a spine CT image and a spine MRI image; inputting the spine CT image and the spine MRI image into a preset multi-scale residual pyramid attention network model, and outputting an image fusion result. The multi-scale residual pyramid attention network model consists of a feature extractor, a fuser and a reconstructor; the feature extractor consists of three multi-scale residual pyramid attention networks and is used to extract multi-scale features; the reconstructor consists of three convolution layers and is used to reconstruct the fused features. According to the embodiments of the application, the accuracy of image fusion can be improved.

Description

Spinal image fusion method based on multi-scale residual pyramid attention network
Technical Field
The application belongs to the field of multi-modal image fusion, and in particular relates to a spine image fusion method, device and equipment based on a multi-scale residual pyramid attention network, and a computer readable storage medium.
Background
Because of the great application value of image fusion, numerous fusion algorithms have been proposed. Multi-scale transformation is a classical image fusion approach: the original image is first decomposed into multiple scale layers; the layers are then fused using different rules; finally, the fused image is obtained through the inverse multi-scale transform. This approach makes good use of multi-scale information and features, and different fusion rules can be applied flexibly to obtain the fused image.
However, these algorithms are all built on the multi-scale transformation principle: many decomposition operations are required, and the number of decomposition layers is difficult to determine. Moreover, the better fusion results come at the cost of increased computation and complexity, and a large number of parameters must be tuned.
Therefore, how to improve the accuracy of image fusion is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the application provides a spine image fusion method, device and equipment based on a multi-scale residual pyramid attention network and a computer readable storage medium, which can improve the accuracy of image fusion.
In a first aspect, an embodiment of the present application provides a spine image fusion method based on a multi-scale residual pyramid attention network, including:
respectively acquiring a spine CT image and a spine MRI image;
Inputting the spine CT image and the spine MRI image into a preset multi-scale residual pyramid attention network model, and outputting an image fusion result;
the multi-scale residual pyramid attention network model consists of a feature extractor, a fuser and a reconstructor;
the feature extractor consists of three multi-scale residual pyramid attention networks and is used to extract multi-scale features; the reconstructor consists of three convolution layers and is used to reconstruct the fused features.
Optionally, the multi-scale residual pyramid attention network includes:
a multi-scale pyramid network for obtaining multi-scale features using three convolution scales;
a residual attention network, which keeps the gradients from changing abruptly as the feature expression capacity increases.
Optionally, obtaining the multi-scale features using three convolution scales includes:
when an original image is input, applying a 1×1 convolution filter to obtain a 64-dimensional feature map;
obtaining, through pooling, three feature maps whose sizes are 1/2, 1/4 and 1/8 times that of the original image;
convolving each layer with a 3×3, 5×5 and 7×7 filter, respectively, to yield the multi-scale features.
Optionally, the fuser is used to fuse the images based on a feature energy ratio strategy.
Optionally, performing image fusion based on the feature energy ratio strategy includes:
respectively acquiring the feature maps of the spine CT image and the spine MRI image;
respectively determining the fusion coefficient of each of the two feature maps;
and performing image fusion based on the two feature maps and their corresponding fusion coefficients.
Optionally, the reconstructor is configured to obtain the fused image from the fused features by:
performing a convolution with a 3×3 filter from 64 input channels to 64 output channels;
performing a convolution with a 3×3 filter from 64 input channels to 32 output channels;
and performing a convolution with a 1×1 filter from 32 input channels to 1 output channel, yielding the single-channel fused image as output.
Optionally, the method further comprises:
in the training process, the mean square error is used as a loss function to reduce the error between the fused image and the original image.
In a second aspect, an embodiment of the present application provides a spine image fusion apparatus based on a multi-scale residual pyramid attention network, including:
The image acquisition module is used for respectively acquiring a spine CT image and a spine MRI image;
The image fusion module is used for inputting the spine CT image and the spine MRI image into a preset multi-scale residual error pyramid attention network model and outputting an image fusion result;
the multi-scale residual pyramid attention network model consists of a feature extractor, a fuser and a reconstructor;
the feature extractor consists of three multi-scale residual pyramid attention networks and is used to extract multi-scale features; the reconstructor consists of three convolution layers and is used to reconstruct the fused features.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory storing computer program instructions;
The processor executes the computer program instructions to implement the spine image fusion method based on a multi-scale residual pyramid attention network.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon computer program instructions that, when executed by a processor, implement a spinal image fusion method based on a multi-scale residual pyramid attention network.
The spine image fusion method, device and equipment based on the multi-scale residual pyramid attention network and the computer readable storage medium can improve the accuracy of image fusion.
The spine image fusion method based on the multi-scale residual pyramid attention network comprises the following steps: respectively acquiring a spine CT image and a spine MRI image; inputting the spine CT image and the spine MRI image into a preset multi-scale residual pyramid attention network model, and outputting an image fusion result. The multi-scale residual pyramid attention network model consists of a feature extractor, a fuser and a reconstructor; the feature extractor consists of three multi-scale residual pyramid attention networks and is used to extract multi-scale features; the reconstructor consists of three convolution layers and is used to reconstruct the fused features.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present application, and that a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flow chart of a spinal image fusion method based on a multi-scale residual pyramid attention network provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a multi-scale residual pyramid attention network model according to one embodiment of the present application;
FIG. 3 is a schematic diagram of a feature extractor provided in one embodiment of the present application;
FIG. 4 is a schematic diagram of a multi-scale residual pyramid attention network provided by one embodiment of the present application;
FIG. 5 is a schematic diagram of the fuser provided in one embodiment of the present application;
FIG. 6 is a schematic diagram of a spinal image fusion device based on a multi-scale residual pyramid attention network according to one embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Features and exemplary embodiments of various aspects of the present application will be described in detail below, and in order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings and the detailed embodiments. It should be understood that the particular embodiments described herein are meant to be illustrative of the application only and not limiting. It will be apparent to one skilled in the art that the present application may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the application by showing examples of the application.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In order to solve the problems in the prior art, the embodiment of the application provides a spine image fusion method, a device, equipment and a computer readable storage medium based on a multi-scale residual pyramid attention network. The spinal image fusion method based on the multi-scale residual pyramid attention network provided by the embodiment of the application is first described below.
Fig. 1 shows a flow chart of a spinal image fusion method based on a multi-scale residual pyramid attention network according to an embodiment of the present application. As shown in fig. 1, the spine image fusion method based on the multi-scale residual pyramid attention network comprises the following steps:
S101, respectively acquiring a spine CT image and a spine MRI image;
S102, inputting a spine CT image and a spine MRI image into a preset multi-scale residual pyramid attention network model, and outputting an image fusion result;
As shown in FIG. 2 and FIG. 3, the multi-scale residual pyramid attention network model consists of a feature extractor, a fuser and a reconstructor; the feature extractor consists of three multi-scale residual pyramid attention networks and is used to extract multi-scale features; the reconstructor consists of three convolution layers and is used to reconstruct the fused features.
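For illustration only, a minimal wiring sketch of this three-part model is given below in PyTorch. The framework choice, the class name MSRPAFusionModel, and the use of a single shared extractor for both modalities are assumptions of this sketch and are not specified in the present application; FIG. 3 could equally be read as three parallel extraction branches.

```python
import torch
import torch.nn as nn

class MSRPAFusionModel(nn.Module):
    """Assumed wiring: extract features from CT and MRI, fuse them, reconstruct."""
    def __init__(self, extractor: nn.Module, fuser, reconstructor: nn.Module):
        super().__init__()
        self.extractor = extractor          # multi-scale residual pyramid attention network(s)
        self.fuser = fuser                  # feature-energy-ratio fusion (a callable)
        self.reconstructor = reconstructor  # three convolution layers

    def forward(self, ct: torch.Tensor, mri: torch.Tensor) -> torch.Tensor:
        feat_ct = self.extractor(ct)        # multi-scale features of the spine CT image
        feat_mri = self.extractor(mri)      # multi-scale features of the spine MRI image
        fused = self.fuser(feat_ct, feat_mri)
        return self.reconstructor(fused)    # single-channel fused image
```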
As shown in FIG. 4, in one embodiment, a multi-scale residual pyramid attention network includes:
a multi-scale pyramid network for obtaining multi-scale features using three convolution scales;
a residual attention network, which keeps the gradients from changing abruptly as the feature expression capacity increases (a minimal sketch of one such block follows this list).
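The present application does not spell out the internals of the residual attention network; the block below is a common channel-attention-plus-identity pattern, offered only as a hedged illustration of how the residual path keeps gradient flow stable. The class name, the squeeze-and-excitation-style attention, and the reduction ratio are all assumptions.

```python
import torch
import torch.nn as nn

class ResidualAttentionBlock(nn.Module):
    """Illustrative residual attention block (assumed structure): channel attention
    re-weights the features, and the identity shortcut keeps a direct gradient path."""
    def __init__(self, channels: int = 64, reduction: int = 4):
        super().__init__()
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # global average per channel
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                   # attention weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + x * self.attention(x)                    # residual (identity) connection
```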
In one embodiment, three convolution scales are used to obtain a multi-scale feature, including:
when an original image is input, a 1×1 convolution filter is applied to obtain a 64-dimensional feature map;
three feature maps whose sizes are 1/2, 1/4 and 1/8 times that of the original image are obtained through pooling;
each layer is convolved with a 3×3, 5×5 and 7×7 filter, respectively, to yield the multi-scale features, as sketched below.
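A sketch of these three steps is given below, again in PyTorch. Only the 1×1 lifting convolution to 64 channels, the 1/2, 1/4 and 1/8 pooled sizes, and the 3×3, 5×5 and 7×7 kernels are taken from the text; the use of average pooling, the upsample-and-sum aggregation of the three scales, and all names are assumptions of this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScalePyramid(nn.Module):
    """Sketch of the three-scale pyramid: 1x1 conv to 64 channels, pooling to
    1/2, 1/4 and 1/8 resolution, then 3x3, 5x5 and 7x7 convolutions respectively."""
    def __init__(self, in_channels: int = 1, channels: int = 64):
        super().__init__()
        self.lift = nn.Conv2d(in_channels, channels, kernel_size=1)   # 64-dim feature map
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.conv7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.lift(x)
        h, w = f.shape[-2:]
        levels = [F.avg_pool2d(f, k) for k in (2, 4, 8)]              # 1/2, 1/4, 1/8 sizes
        convs = (self.conv3, self.conv5, self.conv7)
        out = 0
        for level, conv in zip(levels, convs):
            y = conv(level)                                           # scale-specific filtering
            out = out + F.interpolate(y, size=(h, w), mode="bilinear",
                                      align_corners=False)           # assumed aggregation
        return out
```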
As shown in FIG. 5, in one embodiment, the fuser is configured to fuse the images based on a feature energy ratio strategy.
In one embodiment, image fusion based on the feature energy ratio strategy comprises the following steps (a sketch follows this list):
respectively acquiring the feature maps of the spine CT image and the spine MRI image;
respectively determining the fusion coefficient of each of the two feature maps;
and performing image fusion based on the two feature maps and their corresponding fusion coefficients.
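A hedged sketch of such an energy-ratio rule is shown below: the "energy" of each feature map is taken here as the sum of its squared activations, and the fusion coefficients are the normalized energies. This particular definition of energy and the use of global per-image coefficients are assumptions; the application only states that the coefficients are determined from the two feature maps.

```python
import torch

def energy_ratio_fusion(feat_ct: torch.Tensor, feat_mri: torch.Tensor) -> torch.Tensor:
    """Assumed feature-energy-ratio fusion of CT and MRI feature maps."""
    e_ct = (feat_ct ** 2).sum(dim=(1, 2, 3), keepdim=True)     # energy of CT features
    e_mri = (feat_mri ** 2).sum(dim=(1, 2, 3), keepdim=True)   # energy of MRI features
    total = e_ct + e_mri + 1e-12                               # guard against division by zero
    w_ct, w_mri = e_ct / total, e_mri / total                  # fusion coefficients
    return w_ct * feat_ct + w_mri * feat_mri                   # coefficient-weighted fusion
```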
In one embodiment, the reconstructor is configured to obtain the fused image from the fused features by the following steps (a sketch follows this list):
performing a convolution with a 3×3 filter from 64 input channels to 64 output channels;
performing a convolution with a 3×3 filter from 64 input channels to 32 output channels;
and performing a convolution with a 1×1 filter from 32 input channels to 1 output channel, yielding the single-channel fused image as output.
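These three layers translate directly into the sketch below. Activation functions between the convolutions are omitted because the application does not mention them; only the kernel sizes and channel dimensions above are taken from the text.

```python
import torch.nn as nn

class Reconstructor(nn.Module):
    """Three convolution layers mapping fused 64-channel features to a fused image."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, padding=1),   # 3x3, 64 -> 64 channels
            nn.Conv2d(64, 32, kernel_size=3, padding=1),   # 3x3, 64 -> 32 channels
            nn.Conv2d(32, 1, kernel_size=1),               # 1x1, 32 -> 1 channel (fused image)
        )

    def forward(self, fused_features):
        return self.layers(fused_features)
```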
In one embodiment, the method further comprises:
in the training process, using the mean square error as the loss function to reduce the error between the fused image and the original images, as illustrated below.
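As an illustration of this training objective, the snippet below computes a mean-squared-error loss between the fused output and each source image and averages the two terms. The equal weighting of the CT and MRI terms is an assumption; the application only names the mean square error itself.

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()

def fusion_loss(fused: torch.Tensor, ct: torch.Tensor, mri: torch.Tensor) -> torch.Tensor:
    # Penalize deviation of the fused image from both original images (assumed equal weights).
    return 0.5 * (mse(fused, ct) + mse(fused, mri))
```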
Fig. 6 is a schematic structural diagram of a spinal image fusion apparatus based on a multi-scale residual pyramid attention network according to an embodiment of the present application, including:
An image acquisition module 601, configured to acquire a spine CT image and a spine MRI image respectively;
The image fusion module 602 is configured to input the spine CT image and the spine MRI image into a preset multi-scale residual pyramid attention network model, and output an image fusion result;
the multi-scale residual pyramid attention network model consists of a feature extractor, a fuser and a reconstructor;
the feature extractor consists of three multi-scale residual pyramid attention networks and is used to extract multi-scale features; the reconstructor consists of three convolution layers and is used to reconstruct the fused features.
Fig. 7 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device may include a processor 701 and a memory 702 storing computer program instructions.
In particular, the processor 701 may comprise a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present application.
Memory 702 may include mass storage for data or instructions. By way of example, and not limitation, memory 702 may include a hard disk drive (HDD), floppy disk drive, flash memory, optical disk, magneto-optical disk, magnetic tape, or Universal Serial Bus (USB) drive, or a combination of two or more of the foregoing. The memory 702 may include removable or non-removable (or fixed) media, where appropriate. The memory 702 may be internal or external to the electronic device, where appropriate. In a particular embodiment, the memory 702 may be a non-volatile solid state memory.
In one embodiment, memory 702 may be Read-Only Memory (ROM). In one embodiment, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 701 reads and executes the computer program instructions stored in the memory 702 to implement any of the spinal image fusion methods based on the multi-scale residual pyramid attention network in the above embodiments.
In one example, the electronic device may also include a communication interface 703 and a bus 710. As shown in fig. 7, the processor 701, the memory 702, and the communication interface 703 are connected by a bus 710 and perform communication with each other.
The communication interface 703 is mainly used for implementing communication between each module, device, unit and/or apparatus in the embodiment of the present application.
Bus 710 includes hardware, software, or both that couple components of the electronic device to one another. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of the above. Bus 710 may include one or more buses, where appropriate. Although embodiments of the application have been described and illustrated with respect to a particular bus, the application contemplates any suitable bus or interconnect.
In addition, in combination with the spine image fusion method based on the multi-scale residual pyramid attention network in the above embodiment, the embodiment of the application can be implemented by providing a computer readable storage medium. The computer readable storage medium has stored thereon computer program instructions; the computer program instructions, when executed by the processor, implement any of the spinal image fusion methods of the above embodiments based on a multi-scale residual pyramid attention network.
It should be understood that the application is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. The method processes of the present application are not limited to the specific steps described and shown, but various changes, modifications and additions, or the order between steps may be made by those skilled in the art after appreciating the spirit of the present application.
The functional blocks shown in the above-described structural block diagrams may be implemented in hardware, software, firmware, or a combination thereof. When implemented in hardware, it may be, for example, an electronic circuit, an Application-Specific Integrated Circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the application are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine-readable medium or transmitted over transmission media or communication links by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of machine-readable media include electronic circuitry, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and the like. The code segments may be downloaded via computer networks such as the Internet, intranets, etc.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. The present application is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to being, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware which performs the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In the foregoing, only the specific embodiments of the present application are described, and it will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules and units described above may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein. It should be understood that the scope of the present application is not limited thereto, and any equivalent modifications or substitutions can be easily made by those skilled in the art within the technical scope of the present application, and they should be included in the scope of the present application.

Claims (10)

1. A spinal image fusion method based on a multi-scale residual pyramid attention network, comprising:
respectively acquiring a spine CT image and a spine MRI image;
Inputting the spine CT image and the spine MRI image into a preset multi-scale residual pyramid attention network model, and outputting an image fusion result;
the multi-scale residual pyramid attention network model consists of a feature extractor, a fuser and a reconstructor;
the feature extractor consists of three multi-scale residual pyramid attention networks and is used to extract multi-scale features; the reconstructor consists of three convolution layers and is used to reconstruct the fused features.
2. The spine image fusion method based on a multi-scale residual pyramid attention network of claim 1, wherein the multi-scale residual pyramid attention network comprises:
a multi-scale pyramid network for obtaining multi-scale features using three convolution scales;
a residual attention network, which keeps the gradients from changing abruptly as the feature expression capacity increases.
3. The multi-scale residual pyramid attention network based spine image fusion method of claim 2, wherein three convolution scales are used to obtain multi-scale features, comprising:
when an original image is input, a 1×1 convolution filter is applied to obtain a 64-dimensional feature map;
three feature maps whose sizes are 1/2, 1/4 and 1/8 times that of the original image are obtained through pooling;
each layer is convolved with a 3×3, 5×5 and 7×7 filter, respectively, to yield the multi-scale features.
4. The spine image fusion method based on the multi-scale residual pyramid attention network of claim 1, wherein the fuser is used for image fusion based on a feature energy ratio strategy.
5. The spine image fusion method based on the multi-scale residual pyramid attention network of claim 4, wherein the image fusion based on the feature energy ratio strategy comprises the following steps:
respectively acquiring the feature maps of the spine CT image and the spine MRI image;
respectively determining the fusion coefficient of each of the two feature maps;
and performing image fusion based on the two feature maps and their corresponding fusion coefficients.
6. The multi-scale residual pyramid attention network based spine image fusion method of claim 5, wherein the reconstructor is configured to obtain the fused image from the fused features by:
performing a convolution with a 3×3 filter from 64 input channels to 64 output channels;
performing a convolution with a 3×3 filter from 64 input channels to 32 output channels;
and performing a convolution with a 1×1 filter from 32 input channels to 1 output channel, yielding the single-channel fused image as output.
7. The multi-scale residual pyramid attention network based spine image fusion method of claim 6, further comprising:
in the training process, using the mean square error as the loss function to reduce the error between the fused image and the original images.
8. A spinal image fusion device based on a multi-scale residual pyramid attention network, the device comprising:
The image acquisition module is used for respectively acquiring a spine CT image and a spine MRI image;
The image fusion module is used for inputting the spine CT image and the spine MRI image into a preset multi-scale residual error pyramid attention network model and outputting an image fusion result;
the multi-scale residual pyramid attention network model consists of a feature extractor, a fuser and a reconstructor;
the feature extractor consists of three multi-scale residual pyramid attention networks and is used to extract multi-scale features; the reconstructor consists of three convolution layers and is used to reconstruct the fused features.
9. An electronic device, the electronic device comprising: a processor and a memory storing computer program instructions;
The processor, when executing the computer program instructions, implements a spinal image fusion method based on a multi-scale residual pyramid attention network as claimed in any one of claims 1-7.
10. A computer readable storage medium, wherein computer program instructions are stored on the computer readable storage medium, which when executed by a processor, implement the multi-scale residual pyramid attention network based spine image fusion method according to any of claims 1-7.
CN202410153166.0A 2024-02-02 2024-02-02 Spinal image fusion method based on multi-scale residual pyramid attention network Pending CN118096550A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410153166.0A CN118096550A (en) 2024-02-02 2024-02-02 Spinal image fusion method based on multi-scale residual pyramid attention network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410153166.0A CN118096550A (en) 2024-02-02 2024-02-02 Spinal image fusion method based on multi-scale residual pyramid attention network

Publications (1)

Publication Number Publication Date
CN118096550A true CN118096550A (en) 2024-05-28

Family

ID=91143570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410153166.0A Pending CN118096550A (en) 2024-02-02 2024-02-02 Spinal image fusion method based on multi-scale residual pyramid attention network

Country Status (1)

Country Link
CN (1) CN118096550A (en)

Similar Documents

Publication Publication Date Title
CN109522874B (en) Human body action recognition method and device, terminal equipment and storage medium
CN108172213B (en) Surge audio identification method, surge audio identification device, surge audio identification equipment and computer readable medium
CN116543221B (en) Intelligent detection method, device and equipment for joint pathology and readable storage medium
CN113780492A (en) Two-dimensional code binarization method, device and equipment and readable storage medium
CN118096550A (en) Spinal image fusion method based on multi-scale residual pyramid attention network
CN116904569A (en) Signal processing method, device, electronic equipment, medium and product
CN116650110B (en) Automatic knee joint prosthesis placement method and device based on deep reinforcement learning
CN116959307A (en) Hip arthroscope operation auxiliary teaching system based on virtual reality
CN115393868B (en) Text detection method, device, electronic equipment and storage medium
CN116543222A (en) Knee joint lesion detection method, device, equipment and computer readable storage medium
CN111860003A (en) Image rain removing method and system based on dense connection depth residual error network
CN116363150A (en) Hip joint segmentation method, device, electronic equipment and computer readable storage medium
CN112444820A (en) Robot pose determining method and device, readable storage medium and robot
CN115689947A (en) Image sharpening method, system, electronic device and storage medium
CN114547380A (en) Data traversal query method and device, electronic equipment and readable storage medium
CN116523841B (en) Deep learning spine segmentation method and device based on multi-scale information fusion
CN113139617A (en) Power transmission line autonomous positioning method and device and terminal equipment
CN114202494A (en) Method, device and equipment for classifying cells based on cell classification model
CN118096676A (en) Image fusion method, device and equipment based on multi-scale mixed attention network
CN117197345B (en) Intelligent bone joint three-dimensional reconstruction method, device and equipment based on polynomial fitting
CN116883326A (en) Knee joint anatomical site recognition method, device, equipment and readable storage medium
CN110135247B (en) Data enhancement method, device, equipment and medium in pavement segmentation
CN117351232A (en) Knee joint key point detection method, device, equipment and readable storage medium
CN115984661B (en) Multi-scale feature map fusion method, device, equipment and medium in target detection
CN113205013B (en) Object identification method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination