CN115457598A - Acupuncture point identification method and acupuncture point identification network - Google Patents

Acupuncture point identification method and acupuncture point identification network

Info

Publication number
CN115457598A
Authority
CN
China
Prior art keywords
feature
module
extraction
network
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211123794.1A
Other languages
Chinese (zh)
Inventor
陶世文
陈兆芃
黎田
周佳
王在进
刘菲
周天航
李锋
别东洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Siling Robot Technology Co ltd
Original Assignee
Beijing Siling Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Siling Robot Technology Co ltd
Priority to CN202211123794.1A
Publication of CN115457598A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion of extracted features
    • G06V 10/82 Arrangements for image or video recognition or understanding using neural networks
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an acupuncture point (acupoint) identification method and an acupoint identification network, relating to the technical field of image recognition and comprising the following steps: inputting an image to be recognized into a pre-trained feature extraction network to obtain a plurality of feature maps output by the feature extraction network, the feature extraction network comprising a plurality of feature map extraction models connected in series; fusing the feature maps by addition to obtain a fused feature map; and inputting the fused feature map into a pre-trained prediction network for acupoint prediction, obtaining the coordinates and category of each acupoint in the image to be recognized. The invention can automatically identify the coordinates of each acupoint in the image to be recognized and the corresponding acupoint type, improving both the speed and the accuracy of acupoint identification.

Description

Acupuncture point identification method and acupuncture point identification network
Technical Field
The invention relates to the technical field of image recognition, and in particular to an acupoint identification method and an acupoint identification network.
Background
Acupoints occupy an important position in the theory of traditional Chinese medicine and play an important role in acupuncture, massage, and acupressure. Their locations vary with body type, so accurately locating them usually requires a trained professional; people without professional training find it difficult to locate acupoint positions and their corresponding names. How to help non-professionals identify acupoint positions and the corresponding acupoint types quickly and accurately is therefore a problem in urgent need of a solution.
Disclosure of Invention
In view of the above, the present invention provides an acupoint identification method and an acupoint identification network that can automatically identify the coordinates and corresponding type of each acupoint in an image to be recognized, improving both the speed and the accuracy of acupoint identification.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
In a first aspect, an embodiment of the present invention provides an acupoint identification method, including: inputting an image to be recognized into a pre-trained feature extraction network to obtain a plurality of feature maps output by the feature extraction network, wherein the feature extraction network comprises a plurality of feature map extraction models connected in series; fusing the feature maps by addition to obtain a fused feature map; and inputting the fused feature map into a pre-trained prediction network for acupoint prediction, obtaining the coordinates and category of each acupoint in the image to be recognized.
Further, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the step of inputting the image to be recognized into the pre-trained feature extraction network to obtain the plurality of feature maps output by the feature extraction network includes: inputting the image to be recognized into the first feature map extraction model, so that the first feature map extraction model outputs a first feature map and a second feature map; and inputting the first feature map output by each preceding feature map extraction model into the next feature map extraction model, so that the next feature map extraction model likewise outputs a first feature map and a second feature map, until the second feature maps output by all the feature map extraction models are obtained, and taking these second feature maps as the plurality of feature maps output by the feature extraction network.
Further, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the feature map extraction model includes a plurality of extraction modules, a residual module, a plurality of residual sampling modules, and a plurality of up-sampling modules; the extraction modules are connected in series in sequence, the output end of each extraction module is connected to the input end of the corresponding residual module, the output end of each residual module is connected to the corresponding residual sampling module, the residual sampling modules are connected in series in sequence, and the output end of the residual sampling modules is used for outputting the first feature map; the input ends of the up-sampling modules are connected between the residual sampling modules, and the output ends of the up-sampling modules are used for outputting the second feature map.
Further, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the plurality of extraction modules include first to fourth extraction modules, and the plurality of residual sampling modules include first to fourth residual sampling modules; the input end of the first extraction module is used for inputting the image to be recognized or the first feature map, and is also connected to the output end of the fourth residual sampling module; the output end of the first extraction module is connected to the input end of the second extraction module and to the output end of the third residual sampling module; the output end of the second extraction module is connected to the input end of the third extraction module and to the output end of the second residual sampling module; and the output end of the third extraction module is connected to the input end of the fourth extraction module and to the output end of the first residual sampling module.
Further, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the plurality of up-sampling modules include first to third up-sampling modules; the first and second up-sampling modules are configured to up-sample the feature map output by the second residual sampling module twice in succession, and the third up-sampling module is configured to up-sample the feature map output by the third residual sampling module once; the second feature map is obtained by adding the feature map output by the second up-sampling module, the feature map output by the third up-sampling module, and the feature map output by the fourth residual sampling module.
Further, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the extraction module includes a residual module, a pooling layer, and another residual module, connected in sequence.
Further, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the residual sampling module includes a residual module and an up-sampling module connected in sequence.
Further, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, where the residual module includes a ReLU layer, a BatchNorm (batch normalization) layer, and a convolution layer, connected in sequence.
Further, an embodiment of the present invention provides an eighth possible implementation manner of the first aspect, where the prediction network includes a convolution layer, a BatchNorm layer, a ReLU layer, and a linear layer.
In a second aspect, an embodiment of the present invention further provides an acupoint recognition network, including: a feature extraction network, a feature map fusion network, and a prediction network connected in series in sequence; the feature extraction network comprises a plurality of feature map extraction models connected in series, and each feature map extraction model comprises a plurality of extraction modules, a residual module, a plurality of residual sampling modules, and a plurality of up-sampling modules. The feature extraction network is used for extracting a plurality of feature maps from the image to be recognized and outputting them to the feature map fusion network; the feature map fusion network is used for fusing the feature maps by addition to obtain a fused feature map; and the prediction network is used for performing acupoint prediction based on the fused feature map to obtain the coordinates and category of each acupoint.
The embodiments of the invention provide an acupoint identification method and an acupoint identification network, including: inputting an image to be recognized into a pre-trained feature extraction network to obtain a plurality of feature maps output by the feature extraction network, the feature extraction network comprising a plurality of feature map extraction models connected in series; fusing the feature maps by addition to obtain a fused feature map; and inputting the fused feature map into a pre-trained prediction network for acupoint prediction, obtaining the coordinates and category of each acupoint in the image to be recognized. By inputting the image to be recognized into a feature extraction network comprising a plurality of feature map extraction models, a plurality of feature maps output by the feature extraction network can be obtained, and fusing these feature maps by addition yields a fused feature map, so that the fused feature map fed into the prediction network retains high-level and low-level features at the same time. The coordinates of each acupoint and its corresponding acupoint type in the image to be recognized can thus be identified automatically by the pre-trained networks, improving both the speed and the accuracy of acupoint identification.
Additional features and advantages of embodiments of the invention will be set forth in the description that follows, or may in part be learned by practice of the embodiments of the invention.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart illustrating an acupoint identification method according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating the structure of an acupoint recognition network according to an embodiment of the present invention;
fig. 3 shows a schematic structural diagram of a feature map extraction model provided in an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described below with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention.
This embodiment provides an acupoint identification method applied to an electronic device such as a computer, in which an acupoint recognition network is deployed. Referring to the flowchart of the acupoint identification method shown in fig. 1, the method mainly comprises the following steps:
and S102, inputting the image to be recognized into a feature extraction network obtained by pre-training to obtain a plurality of feature maps output by the feature extraction network.
The feature extraction network comprises a plurality of feature map extraction models connected in series. A human-body image is acquired as the image to be recognized and input into the pre-trained feature extraction network, which extracts feature maps from it: since the network contains multiple feature map extraction models and each model performs its own feature map extraction, collecting the map output by each model yields the plurality of feature maps output by the network. In a specific embodiment, the feature extraction network may include 8 feature map extraction models connected in series, as in the sketch below.
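By way of illustration only, such a serial stack can be sketched in PyTorch. This is a minimal sketch, not the patented implementation: the names (FeatureExtractionNetwork, make_model, num_models) are our own, make_model stands in for one Hourglass+ feature map extraction model (sketched later in this description), and 8 models follows the specific embodiment above.

```python
import torch.nn as nn

class FeatureExtractionNetwork(nn.Module):
    """Serial stack of feature map extraction models (cf. fig. 2).

    Each model returns two maps (F1, F2): F1 chains into the next model,
    and every F2 is collected as one of the network's output feature maps.
    """
    def __init__(self, make_model, num_models=8):
        super().__init__()
        self.models = nn.ModuleList(make_model() for _ in range(num_models))

    def forward(self, x):
        second_maps = []
        for model in self.models:
            f1, f2 = model(x)
            second_maps.append(f2)  # second feature map F2, kept for fusion
            x = f1                  # first feature map F1 feeds the next model
        return second_maps          # the last model's F1 is simply unused
```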
The acupoint recognition network comprises the feature extraction network, a feature map fusion network, and a prediction network. A number of previously collected human-body images are used as acupoint image samples; each sample is annotated with its acupoint coordinates and acupoint categories, and the annotated samples are fed into the acupoint recognition network for network training, yielding the trained feature extraction network and the trained prediction network, as in the training sketch below.
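A minimal training sketch under stated assumptions follows: the text specifies only labeled samples and end-to-end pre-training, so the optimizer, learning rate, epoch count, and heatmap-style targets here are illustrative choices, and wing_loss refers to the loss sketched with the prediction network further below.

```python
import torch

def train(network, loader, epochs=10, lr=1e-3):
    """Illustrative training loop: `loader` yields images together with
    target heatmaps rendered from the labeled acupoint coordinates and
    categories; `network` is the full acupoint recognition network."""
    opt = torch.optim.Adam(network.parameters(), lr=lr)
    for _ in range(epochs):
        for images, target_heatmaps in loader:
            pred = network(images)                   # predicted heatmaps
            loss = wing_loss(pred, target_heatmaps)  # see sketch below
            opt.zero_grad()
            loss.backward()
            opt.step()
```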
Step S104: fuse the feature maps by addition to obtain a fused feature map.
The feature maps output by the feature extraction network are fused, for example by adding them element-wise with an add operation; the summed map is recorded as the fused feature map, as in the sketch below.
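In tensor terms the fusion is a plain element-wise sum, assuming (as the architecture guarantees) that all second feature maps share one shape:

```python
import torch

def fuse(feature_maps):
    """Feature map fusion: element-wise addition of the N second feature
    maps, each of shape (B, C, H, W), giving the fused feature map F."""
    return torch.stack(feature_maps, dim=0).sum(dim=0)
```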
Step S106: input the fused feature map into a pre-trained prediction network for acupoint prediction, obtaining the coordinates and category of each acupoint in the image to be recognized.
The fused feature map obtained by additive fusion is input into the trained prediction network for acupoint prediction, so that the prediction network outputs the coordinates and the corresponding category of every acupoint in the image to be recognized.
In the acupoint identification method provided by this embodiment, inputting the image to be recognized into a feature extraction network comprising a plurality of feature map extraction models yields a plurality of feature maps; fusing these maps by addition produces a fused feature map, so that the map fed into the prediction network retains high-level and low-level features at the same time. The coordinates of each acupoint and its corresponding acupoint type in the image to be recognized are thus identified automatically by the trained networks, improving both the identification speed and the identification accuracy.
Referring to the schematic diagram of the acupoint recognition network shown in fig. 2, the acupoint recognition network includes a feature extraction network 21, a feature map fusion network 22, and a prediction network 23; the feature extraction network includes a plurality of feature map extraction models 210 to 21N connected in series in sequence.
In an embodiment, in order to retain the high-level and low-level features of the image as fully as possible, this embodiment provides an implementation of inputting the image to be recognized into the pre-trained feature extraction network and obtaining the plurality of feature maps it outputs, which may be executed with reference to the following steps (1) and (2):
step (1): and inputting the image to be recognized into the first feature map extraction model, so that the first feature map extraction model outputs a first feature map and a second feature map.
The feature extraction network comprises a plurality of feature map extraction models connected in sequence. Each feature map extraction model has two output ends, recorded as the first output end and the second output end, and the first output end of each preceding feature map extraction model is connected to the input end of the next feature map extraction model.
As shown in fig. 2, the image to be recognized is input into the first feature map extraction model 210. Each feature map extraction model can output two feature maps, marked as the first feature map F1 and the second feature map F2 respectively: the model outputs F1 through its first output end and F2 through its second output end. The last feature map extraction model may output only the second feature map F2.
Step (2): input the first feature map output by each preceding feature map extraction model into the next feature map extraction model, so that the next model likewise outputs a first feature map and a second feature map, until the second feature maps output by all the feature map extraction models are obtained; take these second feature maps as the plurality of feature maps output by the feature extraction network.
As shown in fig. 2, as the serially connected feature map extraction models extract feature maps, each model outputs two maps: the preceding model feeds its first feature map F1 through its first output end into the following model, while the second feature maps F2 output by all the models are input into the feature map fusion network 22, where they are fused by addition into the fused feature map F, which is then input into the prediction network 23 for acupoint prediction.
In one embodiment, the prediction network may be a deep learning network, such as a convolutional neural network. In a specific embodiment, the prediction network may include a convolution layer, a BatchNorm layer (BN layer, i.e., a batch normalization layer), a ReLU layer (i.e., an activation function layer), and a linear layer (a fully connected layer) connected in sequence. The prediction network may, for example, use a heatmap method to predict the acupoint coordinates, and the loss function of the prediction network may use the wing loss, as in the sketch below.
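A sketch of such a prediction head, of heatmap decoding, and of the wing loss follows. The layer order matches the text; the 1x1 kernel, the per-pixel linear layer, one heatmap channel per acupoint category, and the wing loss constants w and eps are all illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class PredictionHead(nn.Module):
    """Conv -> BatchNorm -> ReLU -> Linear, emitting one heatmap per
    acupoint category over an S x S fused feature map."""
    def __init__(self, in_channels, num_categories, map_size):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, num_categories, kernel_size=1),
            nn.BatchNorm2d(num_categories),
            nn.ReLU(inplace=True),
        )
        self.linear = nn.Linear(map_size * map_size, map_size * map_size)

    def forward(self, fused):
        h = self.features(fused)            # (B, K, S, S)
        b, k, s, _ = h.shape
        return self.linear(h.flatten(2)).view(b, k, s, s)

def decode(heatmaps):
    """Heatmap decoding: the peak of channel k gives the coordinate of
    the acupoint of category k."""
    _, _, _, w = heatmaps.shape
    idx = heatmaps.flatten(2).argmax(dim=2)
    return torch.stack((idx % w, idx // w), dim=-1)  # (B, K, 2) as (x, y)

def wing_loss(pred, target, w=10.0, eps=2.0):
    """Wing loss: logarithmic near zero, L1-like for large errors."""
    c = w - w * math.log(1.0 + w / eps)
    diff = (pred - target).abs()
    return torch.where(diff < w,
                       w * torch.log(1.0 + diff / eps),
                       diff - c).mean()
```

The wing loss is commonly applied to regressed coordinates; applying it to the heatmaps, as above, is one reasonable reading of the text rather than a detail the text fixes.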
In an embodiment, the feature map extraction model may be based on an hourglass network; to further improve the accuracy of the acupoint prediction results, the feature map extraction model in this embodiment is an improved hourglass network, referred to as Hourglass+.
The feature map extraction model provided in this embodiment includes a plurality of extraction modules (also called RPR modules), a residual module (also called a Res module), a plurality of residual sampling modules (also called RU modules), and a plurality of up-sampling modules (also called UP modules). The extraction modules are connected in series in sequence; the output end of each extraction module is connected to the input end of the corresponding residual module, and the output end of each residual module is connected to the corresponding residual sampling module. The residual sampling modules are connected in series in sequence, and their output end is used to output the first feature map; the input ends of the up-sampling modules are connected between the residual sampling modules, and their output ends are used to produce the second feature map.
The up-sampling modules up-sample the feature maps taken after intermediate residual sampling modules; the map obtained by adding the up-sampled maps is marked as the second feature map, while the feature map output by the last residual sampling module is marked as the first feature map.
Referring to the schematic structural diagram of the feature map extraction model shown in fig. 3, the feature map extraction model provided in this embodiment includes first to fourth extraction modules RPR1 to RPR4, a residual module Res, first to fourth residual sampling modules RU1 to RU4, and first to third upsampling modules UP1 to UP3.
The input end of the first extraction module RPR1 receives the image to be recognized or a first feature map, and is further connected to the output end of the fourth residual sampling module RU4. In the first feature map extraction model of the feature extraction network, the input end of the first extraction module receives the image to be recognized; in the other feature map extraction models it receives a first feature map F1. The structure of the feature map extraction models 211 to 21(N-1) in the feature extraction network is shown in fig. 3.
As shown in fig. 3, the output end of the first extraction module RPR1 is connected to the input end of the second extraction module RPR2 and to the output end of the third residual sampling module RU3; the output end of the second extraction module RPR2 is connected to the input end of the third extraction module RPR3 and to the output end of the second residual sampling module RU2; and the output end of the third extraction module RPR3 is connected to the input end of the fourth extraction module RPR4 and to the output end of the first residual sampling module RU1.
The first up-sampling module UP1 and the second up-sampling module UP2 up-sample the feature map output by the second residual sampling module RU2 twice in succession, and the third up-sampling module UP3 up-samples the feature map output by the third residual sampling module RU3 once; the second feature map is obtained by adding, at the same resolution, the feature map output by the second up-sampling module UP2, the feature map output by the third up-sampling module UP3, and the feature map output by the fourth residual sampling module RU4.
Compared with the existing hourglass network, the feature map extraction model in this embodiment outputs one more feature map: the additional second feature map is the sum of 3 feature maps, namely the feature map after the second residual sampling module RU2 up-sampled twice, the feature map after the third residual sampling module RU3 up-sampled once, and the feature map output by the fourth residual sampling module RU4.
In one embodiment, the extraction module is mainly used for feature map extraction, for example using an image feature extraction algorithm. In a specific implementation, as shown in fig. 3, the extraction module RPR provided in this embodiment may include a residual module Res, a pooling layer (Pool), and another residual module Res connected in sequence, where the residual module Res may follow a ResNet-style residual design; the residual sampling module RU includes a residual module Res and an up-sampling module UP connected in sequence; the residual module Res includes a ReLU layer (activation function layer), a BatchNorm layer (BN layer), and a convolution layer (Conv layer) connected in this order; and the up-sampling module UP may sample the feature map with a sampling algorithm such as bilinear interpolation. Putting these pieces together, the whole feature map extraction model can be sketched as below.
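This is an illustrative sketch under stated assumptions, not the patented implementation. The wiring follows the fig. 3 description; the identity skip inside Res, the max-pooling choice, and the fixed channel width are assumptions (the text also mentions per-skip residual modules, which would slot into each skip path before the addition).

```python
import torch.nn as nn
import torch.nn.functional as F

class Res(nn.Module):
    """Residual module: ReLU -> BatchNorm -> Conv in sequence. The identity
    skip is an assumption implied by the name; the text lists only the
    three layers."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(channels),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class RPR(nn.Module):
    """Extraction module: Res -> pooling -> Res; halves the resolution.
    Max pooling with stride 2 is an assumption."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(Res(channels), nn.MaxPool2d(2), Res(channels))

    def forward(self, x):
        return self.block(x)

def up(x):
    """UP module: 2x bilinear up-sampling."""
    return F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)

class RU(nn.Module):
    """Residual sampling module: Res -> UP; doubles the resolution."""
    def __init__(self, channels):
        super().__init__()
        self.res = Res(channels)

    def forward(self, x):
        return up(self.res(x))

class HourglassPlus(nn.Module):
    """One feature map extraction model wired as in fig. 3: RPR1-RPR4 down,
    Res bottleneck, RU1-RU4 up, with the skip additions described above.
    F1 is the full-resolution output; F2 sums the UP1/UP2, UP3, and RU4
    branches."""
    def __init__(self, channels):
        super().__init__()
        self.down = nn.ModuleList(RPR(channels) for _ in range(4))
        self.mid = Res(channels)
        self.up_path = nn.ModuleList(RU(channels) for _ in range(4))

    def forward(self, x):
        skips = [x]                   # the input skip feeds the RU4 output
        h = x
        for rpr in self.down:
            h = rpr(h)
            skips.append(h)           # RPR1..RPR4 outputs
        h = self.mid(h)
        taps = []
        for i, ru in enumerate(self.up_path):
            h = ru(h) + skips[3 - i]  # RU1+RPR3, RU2+RPR2, RU3+RPR1, RU4+input
            taps.append(h)
        f1 = taps[3]
        f2 = up(up(taps[1])) + up(taps[2]) + taps[3]  # UP1,UP2 / UP3 / RU4
        return f1, f2
```

With these definitions, the serial stack sketched earlier can be built as, for example, FeatureExtractionNetwork(lambda: HourglassPlus(64)); the sketch assumes the input already has the working channel width (a stem convolution in front of the first model is omitted).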
The feature extraction network provided by this embodiment is a nested residual structure: low-level features are retained to a certain extent as they are passed backward, while the serial connection of multiple hourglass modules allows high-level features to be passed backward as well. This realizes the idea of multi-scale fusion and can effectively improve the final prediction result.
When existing hourglass networks are used for keypoint detection, the hourglass modules are connected in series, but the low-level and high-level features preserved inside one hourglass remain inside that single hourglass, and the capacity to carry features across hourglass modules is very limited. In the Hourglass+ of the feature extraction network provided by this embodiment, a structural change lets each module output its internally fused features in residual form, and the residual features of all Hourglass+ modules are added together. This is a residual structure at the hourglass level, forming fusion across the hourglass modules and effectively guaranteeing that the high-level and low-level features preserved by every hourglass are passed on to the final prediction network. The feature fusion brings no extra computational cost, and it can accelerate the convergence of the network during training.
To verify the acupoint recognition network, the network was trained on images annotated with acupoints, acupoint recognition experiments were run on images to be recognized using the trained network, and the recognition results were evaluated. The results show that the acupoint recognition network reaches 84% and 82% on the AUC0.08 evaluation index, and that the acupoint prediction of the acupoint recognition network provided by this embodiment is 5% faster than that of the hourglass network. This shows that the acupoint recognition network provided by this embodiment has a higher recognition speed and a high recognition accuracy.
In the acupoint identification method provided by this embodiment, the hourglass network is improved and the feature maps produced by up-sampling inside the residual sampling path are fused by addition, ensuring that the fused feature map fed into the prediction network retains high-level and low-level features at the same time and improving both the acupoint identification speed and the acupoint identification accuracy.
Corresponding to the acupoint identification method provided in the above embodiment, an embodiment of the present invention provides an acupoint recognition network. As shown in fig. 2, the acupoint recognition network includes: a feature extraction network 21, a feature map fusion network 22, and a prediction network 23 connected in series in sequence; the feature extraction network comprises a plurality of feature map extraction models 210 to 21N connected in series; and each feature map extraction model comprises a plurality of extraction modules, a residual module, a plurality of residual sampling modules, and a plurality of up-sampling modules.
The feature extraction network 21 is configured to extract a plurality of feature maps from the image to be recognized and output the feature maps to the feature map fusion network.
The feature map fusion network 22 is configured to fuse the feature maps by addition to obtain a fused feature map.
The prediction network 23 is used for predicting the acupuncture points based on the fused feature map to obtain the coordinates and the categories of the acupuncture points.
In the acupoint recognition network provided by this embodiment, inputting the image to be recognized into the feature extraction network comprising a plurality of feature map extraction models yields a plurality of feature maps; fusing these maps by addition produces a fused feature map that retains high-level and low-level features at the same time, so that the coordinates and corresponding type of each acupoint in the image to be recognized are identified automatically by the trained networks, improving both the identification speed and the identification accuracy.
In one embodiment, each feature map extraction model in the feature extraction network may output a first feature map and a second feature map; the first feature map output by each preceding feature map extraction model is input into the next feature map extraction model, so that the next model likewise outputs a first and a second feature map, until the second feature maps output by all the models are obtained; these second feature maps serve as the plurality of feature maps output by the feature extraction network.
In one embodiment, the feature map extraction model includes a plurality of extraction modules, a residual module, a plurality of residual sampling modules, and a plurality of up-sampling modules; the extraction modules are connected in series in sequence, the output end of each extraction module is connected to the input end of the corresponding residual module, the output end of each residual module is connected to the corresponding residual sampling module, the residual sampling modules are connected in series in sequence, and their output end is used to output the first feature map; the input ends of the up-sampling modules are connected between the residual sampling modules, and their output ends are used to produce the second feature map.
In one embodiment, the extraction modules include first to fourth extraction modules, and the residual sampling modules include first to fourth residual sampling modules; the input end of the first extraction module receives the image to be recognized or the first feature map and is also connected to the output end of the fourth residual sampling module; the output end of the first extraction module is connected to the input end of the second extraction module and to the output end of the third residual sampling module; the output end of the second extraction module is connected to the input end of the third extraction module and to the output end of the second residual sampling module; and the output end of the third extraction module is connected to the input end of the fourth extraction module and to the output end of the first residual sampling module.
In an embodiment, the up-sampling modules include first to third up-sampling modules; the first and second up-sampling modules up-sample the feature map output by the second residual sampling module twice in succession, and the third up-sampling module up-samples the feature map output by the third residual sampling module once; the second feature map is obtained by adding the feature map output by the second up-sampling module, the feature map output by the third up-sampling module, and the feature map output by the fourth residual sampling module.
In one embodiment, the extraction module includes a residual module, a pooling layer, and another residual module connected in sequence.
In one embodiment, the residual sampling module includes a residual module and an upsampling module.
In one embodiment, the residual module includes a ReLU layer, a BatchNorm layer, and a convolution layer connected in sequence.
In one embodiment, the prediction network includes a convolution layer, a BatchNorm layer, a ReLU layer, and a linear layer.
In the acupoint recognition network provided by this embodiment, the hourglass network is improved and the feature maps produced by up-sampling inside the residual sampling path are fused by addition, ensuring that the fused feature map fed into the prediction network retains high-level and low-level features at the same time and improving both the acupoint identification speed and the acupoint identification accuracy.
The implementation principle and technical effects of the acupoint recognition network provided by this embodiment are the same as those of the foregoing embodiments; for brevity, where this network embodiment is silent, reference may be made to the corresponding content of the foregoing method embodiments.
An embodiment of the present invention provides an electronic device, where the electronic device includes a processor and a memory, where the memory stores a computer program that can be executed on the processor, and the processor executes the computer program to implement the steps of the method provided in the foregoing embodiment.
Embodiments of the present invention provide a computer-readable medium, wherein the computer-readable medium stores computer-executable instructions, which, when invoked and executed by a processor, cause the processor to implement the method of the above-described embodiments.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing embodiment, and details are not described herein again.
The computer program product of the acupoint identification method and acupoint recognition network provided by the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the method described in the foregoing method embodiments. For specific implementations, reference may be made to the method embodiments, which are not repeated here.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted" and "connected" are to be construed broadly: for example, as a fixed connection, a detachable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediate medium; or as internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, and various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplification of description, but do not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An acupuncture point identification method is characterized by comprising the following steps:
inputting an image to be recognized into a feature extraction network obtained by pre-training to obtain a plurality of feature maps output by the feature extraction network; wherein the feature extraction network comprises a plurality of feature map extraction models connected in series;
fusing the feature maps by addition to obtain a fused feature map;
and inputting the fused feature map into a pre-trained prediction network for acupoint prediction to obtain the coordinates and category of each acupoint in the image to be recognized.
2. The method according to claim 1, wherein the step of inputting the image to be recognized into a pre-trained feature extraction network to obtain a plurality of feature maps output by the feature extraction network comprises:
inputting the image to be recognized into a first feature map extraction model so that the first feature map extraction model outputs a first feature map and a second feature map;
and inputting the first feature map output by the preceding feature map extraction model into the next feature map extraction model, so that the next feature map extraction model outputs a first feature map and a second feature map, until the second feature maps output by all the feature map extraction models are obtained, and taking the second feature maps as the plurality of feature maps output by the feature extraction network.
3. The method of claim 2, wherein the feature map extraction model comprises a plurality of extraction modules, a residual module, a plurality of residual sampling modules, and a plurality of upsampling modules;
the plurality of extraction modules are connected in series in sequence, the output end of each extraction module is connected to the input end of the corresponding residual module, the output end of each residual module is connected to the corresponding residual sampling module, the plurality of residual sampling modules are connected in series in sequence, and the output end of the residual sampling module is used for outputting the first feature map;
the input end of the up-sampling module is connected between the residual sampling modules, and the output end of the up-sampling module is used for outputting the second feature map.
4. The method of claim 3, wherein the plurality of extraction modules comprises first through fourth extraction modules, and the plurality of residual sampling modules comprises first through fourth residual sampling modules;
the input end of the first extraction module is used for inputting the image to be recognized or the first feature map, and the input end of the first extraction module is also connected to the output end of the fourth residual sampling module;
the output end of the first extraction module is connected to the input end of the second extraction module and to the output end of the third residual sampling module;
the output end of the second extraction module is connected to the input end of the third extraction module and to the output end of the second residual sampling module;
and the output end of the third extraction module is connected to the input end of the fourth extraction module and to the output end of the first residual sampling module.
5. The method according to claim 4, wherein the plurality of up-sampling modules include first to third up-sampling modules, the first and second up-sampling modules are configured to up-sample the feature map output by the second residual sampling module twice in succession, and the third up-sampling module is configured to up-sample the feature map output by the third residual sampling module;
the second feature map is obtained by adding the feature map output by the second up-sampling module, the feature map output by the third up-sampling module, and the feature map output by the fourth residual sampling module.
6. The method of claim 3, wherein the extraction module comprises a residual module, a pooling layer, and another residual module connected in sequence.
7. The method of claim 3, wherein the residual sampling module comprises a residual module and an upsampling module.
8. The method of any one of claims 3, 6 and 7, wherein the residual module comprises a ReLU layer, a BatchNorm layer, and a convolution layer connected in sequence.
9. The method of any one of claims 1-7, wherein the prediction network comprises a convolution layer, a BatchNorm layer, a ReLU layer, and a linear layer.
10. An acupuncture point recognition network, characterized by comprising: a feature extraction network, a feature map fusion network, and a prediction network connected in series in sequence; the feature extraction network comprises a plurality of feature map extraction models connected in series; and the feature map extraction model comprises a plurality of extraction modules, a residual module, a plurality of residual sampling modules, and a plurality of up-sampling modules;
the feature extraction network is used for extracting a plurality of feature maps from the image to be recognized and outputting the feature maps to the feature map fusion network;
the feature map fusion network is used for fusing the feature maps by addition to obtain a fused feature map;
and the prediction network is used for performing acupoint prediction based on the fused feature map to obtain the coordinates and category of each acupoint.
CN202211123794.1A 2022-09-15 2022-09-15 Acupuncture point identification method and acupuncture point identification network Pending CN115457598A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211123794.1A CN115457598A (en) 2022-09-15 2022-09-15 Acupuncture point identification method and acupuncture point identification network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211123794.1A CN115457598A (en) 2022-09-15 2022-09-15 Acupuncture point identification method and acupuncture point identification network

Publications (1)

Publication Number Publication Date
CN115457598A 2022-12-09

Family

ID=84305263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211123794.1A Pending CN115457598A (en) 2022-09-15 2022-09-15 Acupuncture point identification method and acupuncture point identification network

Country Status (1)

Country Link
CN (1) CN115457598A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117636446A (en) * 2024-01-25 2024-03-01 江汉大学 Face acupoint positioning method, acupuncture robot and storage medium
CN117636446B (en) * 2024-01-25 2024-05-07 江汉大学 Face acupoint positioning method, acupuncture robot and storage medium
CN118628840A (en) * 2024-08-12 2024-09-10 杭州医尔睿信息技术有限公司 Human body meridian point position visualization method and device based on AI image recognition


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination