CN114897763A - Human back acupuncture point identification method, system, device and storage medium - Google Patents
Human back acupuncture point identification method, system, device and storage medium
- Publication number
- CN114897763A (application CN202210191825.0A)
- Authority
- CN
- China
- Prior art keywords
- human body
- image
- points
- semantic segmentation
- original
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a method, a system, a device and a storage medium for identifying acupuncture points on the back of a human body, which are used for automatically identifying the acupuncture points on the back of the human body and improving the identification accuracy of the acupuncture points on the back of the human body. The method comprises the following steps: acquiring an original human body image; inputting the original human body image into a semantic segmentation model to obtain a semantic segmentation image, wherein the semantic segmentation model is used for performing semantic segmentation on a background and a foreground in the original human body image; determining a human body contour boundary line according to the semantic segmentation image; inputting the original human body image into a key point detection model to obtain human body key point data, wherein the key point detection model is used for extracting human body key points in the original human body image; determining human shoulder boundary points according to the shoulder key points in the human body key point data and the human body contour boundary line; and establishing a first coordinate system by taking the human shoulder boundary points as reference points, and calculating the position coordinates of the human back acupuncture points in the first coordinate system.
Description
Technical Field
The present application relates to the field of data processing, and in particular, to a method, a system, a device, and a storage medium for identifying acupuncture points on the back of a human body.
Background
As society progresses, people's health awareness has grown and the theory of traditional Chinese medicine receives increasing attention. In traditional Chinese medicine, the human body is traversed by channels and collaterals on which acupuncture points lie; the positions of these acupuncture points are fixed relative to specific parts of the human body, and stimulating them at precise locations can produce a therapeutic effect.
In the prior art, acupuncture points can be located by applying a voltage to the human body, but this approach relies on special instruments and manual operation, and acupuncture point identification techniques based on VR and AR are generally designed around average population data. Consequently, existing acupuncture point identification techniques cannot accurately identify the acupuncture points of every individual, unattended operation is difficult to achieve, and identification precision is low.
Disclosure of Invention
The application provides a method, a system, a device and a storage medium for identifying acupuncture points on the back of a human body, which are used for automatically identifying the acupuncture points on the back of the human body and improving the identification accuracy of the acupuncture points on the back of the human body.
The application provides a method for identifying acupuncture points on the back of a human body in a first aspect, which comprises the following steps:
acquiring an original human body image;
inputting the original human body image into a semantic segmentation model to obtain a semantic segmentation image, wherein the semantic segmentation model is used for performing semantic segmentation on a background and a foreground in the original human body image;
determining a human body contour boundary line according to the semantic segmentation image;
inputting the original human body image into a key point detection model to obtain human body key point data, wherein the key point detection model is used for extracting human body key points in the original human body image;
determining human shoulder boundary points according to the shoulder key points in the human body key point data and the human body contour boundary line;
and establishing a first coordinate system by taking the human shoulder boundary points as reference points, and calculating the position coordinates of the human back acupuncture points in the first coordinate system.
Optionally, in the semantic segmentation image, an RGB value (0,0,0) represents a background, and an RGB value (1,1,1) represents a foreground, and the determining a human body contour boundary line according to the semantic segmentation image includes:
extracting target pixel points where the RGB values change abruptly in the semantic segmentation image;
and connecting the target pixel points to determine the boundary line of the human body contour.
Optionally, the determining the human shoulder boundary point according to the shoulder key point in the human body key point data and the human body contour boundary line includes:
determining shoulder key points in the original human body image according to the human body key point data;
scanning left and right horizontally from the shoulder key points;
and determining a coincidence point of the scanning path and the boundary line of the human body contour as a human body shoulder boundary point.
Optionally, the acquiring the original human body image includes:
acquiring an original human body image through a camera;
after the establishing a first coordinate system by taking the human shoulder boundary points as reference points and calculating the position coordinates of the human back acupuncture points in the first coordinate system, the method further comprises the following steps:
acquiring a second coordinate system established by taking the camera as a reference point;
and converting the position coordinates of the acupuncture points on the back of the human body in the first coordinate system into the position coordinates in the second coordinate system.
Optionally, before the inputting the original human body image into the semantic segmentation model, the method further includes:
acquiring a preset image specification;
converting the original human body image based on the preset image specification;
the inputting the original human body image into a semantic segmentation model comprises:
and inputting the processed original human body image into a semantic segmentation model.
Optionally, the semantic segmentation model includes a coding network and a decoding network, and the inputting the original human body image into the semantic segmentation model to obtain the semantic segmentation image includes:
inputting the original human body image into the coding network for depthwise separable convolution to obtain an abstract feature map;
and inputting the abstract feature map into the decoding network for transposed convolution and standard 2D convolution to obtain a semantic segmentation image.
Optionally, the semantic segmentation model is obtained by training according to the following method:
collecting a sample human body image to obtain characteristic data;
performing label classification on pixel points in the sample human body image to obtain label data, wherein the label comprises a foreground label and a background label;
inputting the characteristic data into an initialized network model to obtain prediction data;
calculating a characteristic loss value between the prediction data and the label data through a preset loss function;
and back-propagating the characteristic loss value to update the initialized network model until the characteristic loss value reaches a preset loss value, at which point training of the initialized network model is complete and the semantic segmentation model is obtained.
The present application provides in a second aspect a system for identifying acupuncture points on the back of a human body, comprising:
the acquisition unit is used for acquiring an original human body image;
the first input unit is used for inputting the original human body image into a semantic segmentation model to obtain a semantic segmentation image, and the semantic segmentation model is used for performing semantic segmentation on a background and a foreground in the original human body image;
the first determining unit is used for determining a human body contour boundary line according to the semantic segmentation image;
the second input unit is used for inputting the original human body image into a key point detection model to obtain human body key point data, and the key point detection model is used for extracting human body key points in the original human body image;
the second determining unit is used for determining human shoulder boundary points according to the shoulder key points in the human key point data and the human contour boundary line;
and the calculating unit is used for establishing a first coordinate system by taking the human shoulder boundary points as reference points and calculating the position coordinates of the acupuncture points on the back of the human body in the first coordinate system.
Optionally, in the semantically segmented image, the RGB values (0,0,0) represent a background, and the RGB values (1,1,1) represent a foreground, the first determining unit includes:
the extraction module is used for extracting target pixel points with sudden change of RGB values in the semantic segmentation image;
and the first determining module is used for connecting the target pixel points to determine a boundary line of the human body contour.
Optionally, the second determining unit includes:
the second determining module is used for determining shoulder key points in the original human body image according to the human body key point data;
the scanning module is used for scanning left and right horizontally from the shoulder key points;
and the third determining module is used for determining that a coincident point of the scanning path and the boundary line of the human body contour is a human body shoulder boundary point.
Optionally, the obtaining unit is specifically configured to:
acquiring an original human body image through a camera;
the system further comprises:
and the conversion unit is used for acquiring a second coordinate system established by taking the camera as a reference point and converting the position coordinates of the acupuncture points on the back of the human body in the first coordinate system into the position coordinates in the second coordinate system.
Optionally, the system further includes:
the processing unit is used for acquiring a preset image specification and converting the original human body image based on the preset image specification;
the first input unit is specifically configured to:
and inputting the processed original human body image into a semantic segmentation model.
Optionally, the semantic segmentation model includes an encoding network and a decoding network, and the first input unit includes:
the first input module is used for inputting the original human body image into the coding network for depthwise separable convolution to obtain an abstract feature map;
and the second input module is used for inputting the abstract feature map into the decoding network for transposed convolution and standard 2D convolution to obtain a semantic segmentation image.
Optionally, the semantic segmentation model is obtained by training through the following method:
collecting a sample human body image to obtain characteristic data;
performing label classification on pixel points in the sample human body image to obtain label data, wherein the label comprises a foreground label and a background label;
inputting the characteristic data into an initialized network model to obtain predicted data;
calculating a characteristic loss value between the prediction data and the label data through a preset loss function;
and back-propagating the characteristic loss value to update the initialized network model until the characteristic loss value reaches a preset loss value, at which point training of the initialized network model is complete and the semantic segmentation model is obtained.
A third aspect of the present application provides a human back acupoint recognition device, the device comprising:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the memory stores a program, and the processor calls the program to execute the human back acupuncture point identification method according to any one of the first aspect and the optional implementations of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having a program stored thereon, where the program, when executed on a computer, performs the method for identifying acupuncture points on the back of a human body according to any one of the first aspect and the optional implementations of the first aspect.
According to the technical scheme, the method has the following advantages:
because the acupuncture points on the back of the human body are fixed relative to the human body, the invention inputs the original human body image into the trained semantic segmentation model and the trained key point detection model to obtain the human body contour boundary and the human shoulder key points respectively, combines the shoulder key points with the contour boundary to assist in locating the human shoulder boundary points, and calculates the back acupuncture points from the shoulder boundary points, so that the positioning result of the acupuncture points on the back of the human body is more accurate.
The invention can automatically identify the acupuncture points on the back of the human body, is suitable for people of different body types and postures, has a high identification speed, wide applicability and portability, does not require much computing power, and is therefore suitable for small embedded devices.
Drawings
In order to more clearly illustrate the technical solutions in the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a schematic flow chart of an embodiment of a method for identifying acupuncture points on the back of a human body provided by the present application;
Fig. 2-1 and Fig. 2-2 are schematic flow charts of another embodiment of the method for identifying acupuncture points on the back of a human body provided by the application;
fig. 3 is a schematic structural diagram of an embodiment of a human back acupuncture point recognition system provided by the present application;
fig. 4 is a schematic structural diagram of another embodiment of the human back acupuncture point recognition system provided by the present application;
fig. 5 is a schematic structural view of an embodiment of the human back acupuncture point recognition device provided by the present application.
Detailed Description
The application provides a method, a system, a device and a storage medium for identifying acupuncture points on the back of a human body, which are used for automatically identifying the acupuncture points on the back of the human body and improving the identification accuracy of the acupuncture points on the back of the human body.
The method for identifying the acupuncture points on the back of the human body can be applied to various electronic devices, such as a smart phone, a computer, a tablet computer, a smart wearable device, a portable computer terminal, or a robot. For convenience of explanation, the terminal is taken as the execution subject in the present application.
Referring to fig. 1, fig. 1 is a diagram illustrating an embodiment of a method for identifying acupuncture points on the back of a human body according to the present application, the method including:
101. acquiring an original human body image;
Since the method identifies the acupuncture points on the back of the human body in an image, the terminal first needs to acquire an original human body image, i.e. an image of the human back captured in a real scene, before the acupuncture points can be identified. Specifically, the terminal may read the picture or video stream to be predicted through OpenCV; if the terminal acquires a video stream, it first needs to determine the human body image in the video stream information.
Further, after the original human body image is obtained and before it is input into the model, the image needs to be resized to the model's input size; after this processing, the image is input into the model and forward-propagated to obtain the output result.
102. Inputting an original human body image into a semantic segmentation model to obtain a semantic segmentation image, wherein the semantic segmentation model is used for performing semantic segmentation on a background and a foreground in the original human body image;
and the terminal inputs the acquired original human body image into the semantic segmentation model and then acquires the semantic segmentation image generated by the semantic segmentation model. The semantic segmentation model can perform semantic segmentation on the foreground and the background in the image input into the model through training of a large number of samples. In the semantic segmentation image generated by the semantic segmentation model, the foreground information and the background information are represented by different RGB values, so that the terminal can distinguish the foreground information and the background information in the original human body image through the semantic segmentation image.
103. Determining a boundary line of the human body contour according to the semantic segmentation image;
The original human body image is an image of the back of the human body, and the semantic segmentation image generated by the semantic segmentation model represents its foreground and background information with different RGB values. The terminal can therefore obtain a segmentation of the whole human body contour against the background from the semantic segmentation image and determine the human body contour boundary line.
104. Inputting the original human body image into a key point detection model to obtain human body key point data, wherein the key point detection model is used for extracting human body key points in the original human body image;
The terminal inputs the obtained original human body image into a key point detection model, which detects a plurality of key points of the human body in the original human body image, including at least key points of the left shoulder, right shoulder, left elbow, right wrist, left hip, right hip, left knee, right knee, left ankle, and the like.
The key point detection model works as follows: an image is input, and its features are extracted by a convolutional network to obtain a set of feature maps; the feature maps are then processed along two branches, with one CNN branch extracting Part Confidence Maps and the other extracting Part Affinity Fields (PAFs). Given these two kinds of information, bipartite matching from graph theory is used to solve the part association problem and connect the joint points belonging to the same person; because the PAFs are vector fields, the resulting pairwise matches can finally be assembled into the overall skeleton of a person, thereby realizing the detection of the human body key points.
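The key point detection model itself is not disclosed in code form. The sketch below only shows how its output might be consumed, assuming a hypothetical detector object whose predict() method returns a name → (x, y, confidence) mapping in image pixel coordinates; both the object and the landmark names are assumptions, not part of the disclosure.

```python
def get_shoulder_keypoints(keypoint_model, image, min_conf=0.3):
    """Pick the left/right shoulder key points out of an OpenPose-style detector's output.

    `keypoint_model` is a placeholder for any detector whose predict() returns a
    dictionary such as {"left_shoulder": (x, y, confidence), ...}."""
    keypoints = keypoint_model.predict(image)
    shoulders = {}
    for name in ("left_shoulder", "right_shoulder"):
        x, y, conf = keypoints[name]
        if conf < min_conf:
            raise ValueError(f"{name} was not detected reliably (confidence {conf:.2f})")
        shoulders[name] = (int(round(x)), int(round(y)))
    return shoulders
```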
105. Determining human shoulder boundary points according to shoulder key points in the human body key point data and human body contour boundary lines;
Human body key points generally correspond to joints with a certain degree of freedom on the human body. The key points extracted by the key point detection model are only nodes of the human skeleton inferred by the algorithm and can only approximately represent the positions of the corresponding body parts. Therefore, to make the identification of the back acupuncture points more accurate, the terminal also needs to combine them with the predetermined human body contour boundary line for auxiliary positioning.
Specifically, the terminal determines left and right shoulder key points of the human body from a plurality of identified key points of the human body, and determines shoulder boundary points of the human body in a semantic segmentation map according to the shoulder key points and a human body contour boundary line.
106. And establishing a first coordinate system by taking the boundary points of the shoulders of the human body as reference points, and calculating the position coordinates of the acupuncture points on the back of the human body in the first coordinate system.
Because the back acupuncture points are fixed relative to the human body, the human shoulder boundary points are used as reference points to establish a first coordinate system. The terminal can then, by combining preset human back acupuncture point data, compute the position coordinates of each back acupuncture point of the original human body image in the first coordinate system and mark them, completing the identification of the back acupuncture points of the human body.
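A sketch of this step under stated assumptions: the preset back acupuncture point data is modeled here as offsets expressed as fractions of the shoulder width, measured in a coordinate system whose origin is the midpoint between the two shoulder boundary points. The offset values and point names below are purely illustrative; the disclosure does not list the actual preset data.

```python
import numpy as np

# Illustrative preset data only: (offset along the shoulder axis, offset along the back),
# both expressed as fractions of the shoulder width.
PRESET_ACUPOINTS = {
    "example_point_left": (-0.25, 0.80),
    "example_point_right": (0.25, 0.80),
}

def back_acupoints(left_shoulder_pt, right_shoulder_pt, preset=PRESET_ACUPOINTS):
    """Build the first coordinate system from the shoulder boundary points and return
    the acupuncture point positions in image pixel coordinates."""
    left = np.asarray(left_shoulder_pt, dtype=float)
    right = np.asarray(right_shoulder_pt, dtype=float)
    origin = (left + right) / 2.0                  # reference point between the shoulders
    x_axis = right - left
    shoulder_width = float(np.linalg.norm(x_axis))
    x_axis /= shoulder_width
    y_axis = np.array([-x_axis[1], x_axis[0]])     # axis perpendicular to the shoulder line
    return {
        name: tuple(origin + fx * shoulder_width * x_axis + fy * shoulder_width * y_axis)
        for name, (fx, fy) in preset.items()
    }
```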
In this embodiment, the original human body image is input into the trained semantic segmentation model and the trained key point detection model to obtain the human body contour boundary and the human shoulder key points respectively; the shoulder key points are combined with the contour boundary to assist in locating the human shoulder boundary points, and the back acupuncture points are calculated from the shoulder boundary points, so that the positioning result of the acupuncture points on the back of the human body is more accurate.
The invention can automatically identify the acupuncture points on the back of the human body, is suitable for people of different body types and postures, has a high identification speed, wide applicability and portability, does not require much computing power, and is therefore suitable for small embedded devices.
The human back acupoint identification method provided by the present application is described above, and the human back acupoint identification method and the training process of the semantic segmentation model used in the method are described in detail below.
Referring to fig. 2-1 and 2-2, fig. 2-1 and 2-2 illustrate another embodiment of the method for identifying acupuncture points on the back of a human body according to the present application, the method including:
201. collecting a sample human body image to obtain characteristic data;
Step 201 is the preparation of the data set. The core technique applied by the semantic segmentation model is human contour separation based on semantic segmentation, so a number of sample human body images need to be collected as feature data. The sample human body images are photographs of the back of a human body in real scenes; abundant, high-quality image data benefits continuous optimization of the model.
202. Carrying out label classification on pixel points in the sample human body image to obtain label data, wherein the label comprises a foreground label and a background label;
After image acquisition is completed, all pixel points in each sample human body image need to be classified and labeled, with the foreground and the background represented by different three-channel RGB values: the background channels are all 0, i.e. (0,0,0), and the foreground channels are all 1, i.e. (1,1,1). The processed images serve as the label data.
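A minimal sketch of this labeling convention, assuming the manual annotation is already available as a boolean foreground mask; how that mask is produced (e.g. with an annotation tool) is outside the scope of the sketch.

```python
import numpy as np

def make_label_image(foreground_mask):
    """Turn a boolean foreground mask of shape (H, W) into the three-channel label image:
    background pixels become (0, 0, 0) and foreground pixels become (1, 1, 1)."""
    mask = foreground_mask.astype(np.uint8)              # 0 = background, 1 = foreground
    return np.repeat(mask[:, :, np.newaxis], 3, axis=2)  # shape (H, W, 3)
```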
203. Inputting the characteristic data into an initialized network model to obtain predicted data;
The initialized network model is a model that processes the input samples and outputs a segmented image. Its network structure is divided into an encoding part and a decoding part. The encoding part applies multiple depthwise separable convolutions using a MobileNet network, from which the f1–f5 layers can be extracted for processing; the resulting feature maps are suitable abstract features. The decoding network performs three transposed convolutions to enlarge the feature map back to the original image size, adds a layer of standard 2D convolution, and finally passes the result through a softmax activation to the loss function.
The intermediate layers of the initialized network model use the ReLU activation function, and batch normalization (BN) layers are added to accelerate iteration. The Adam optimizer is selected to perform mini-batch gradient descent and update the weight parameters.
Specifically, the initialized network model processes a sample image by inputting it into the encoding network (a MobileNet network) for multiple depthwise separable convolutions to obtain an abstract feature map. The abstract feature map is then input into the decoding network, where three transposed convolutions enlarge it to the original image size and a layer of standard 2D convolution produces the segmented image.
In the model training process, the acquired data set is continuously input into the initialized network model, and the segmented image output by the model is used as prediction data.
In some embodiments, the input picture specification for the model network is 416 pixels by 416 pixels.
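As a rough sketch of the encoder-decoder structure described above, the Keras model below stacks depthwise separable convolutions in a MobileNet-like encoder, follows them with three transposed convolutions, one ordinary 2D convolution and a softmax, and takes 416 × 416 inputs. The exact layer counts and channel widths are assumptions; the disclosure's model extracts the f1–f5 layers of an actual MobileNet rather than using this simplified stack.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def separable_block(x, filters, stride):
    """One MobileNet-style unit: depthwise separable convolution + batch norm + ReLU."""
    x = layers.SeparableConv2D(filters, 3, strides=stride, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_segmentation_model(input_shape=(416, 416, 3), num_classes=2):
    inputs = layers.Input(shape=input_shape)

    # Encoder: depthwise separable convolutions, downsampling to 1/8 resolution.
    x = layers.Conv2D(32, 3, strides=2, padding="same", use_bias=False)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = separable_block(x, 64, stride=1)
    x = separable_block(x, 128, stride=2)
    x = separable_block(x, 128, stride=1)
    x = separable_block(x, 256, stride=2)
    x = separable_block(x, 256, stride=1)

    # Decoder: three transposed convolutions back to the input resolution,
    # one ordinary 2D convolution, then softmax over background/foreground.
    for filters in (128, 64, 32):
        x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(num_classes, 3, padding="same")(x)
    outputs = layers.Softmax(axis=-1)(x)
    return models.Model(inputs, outputs, name="back_segmentation")
```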
204. Calculating a characteristic loss value between the predicted data and the label data through a preset loss function;
The deep learning framework used to implement the initialized network model is TensorFlow, and the preset loss function adopted during model training is the binary cross-entropy loss binary_crossentropy(); the cross entropy takes the standard binary form:
Loss = -(1/N) Σ_i [ G_i·log(P_i) + (1 − G_i)·log(1 − P_i) ]
wherein G represents the true value, i.e. the label data; P represents the predicted value, i.e. the prediction data; and the sum runs over all N pixels.
205. Back-propagating the characteristic loss value to update the initialized network model until the characteristic loss value reaches a preset loss value, at which point training of the initialized network model is complete and the semantic segmentation model is obtained;
When the characteristic loss value between the predicted data output by the network and the label data reaches the preset loss value, training of the initialized network model is finished, and the model parameters are saved to obtain the semantic segmentation model. Preferably, the preset loss value may be set to 0.068.
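A training sketch consistent with the description above: TensorFlow/Keras, binary cross-entropy, the Adam optimizer, and a custom callback that stops training once the loss reaches the preset value of 0.068. The batch size, learning rate and epoch count are assumptions, and `train_images` / `train_labels` stand for the prepared feature and label data.

```python
import tensorflow as tf

class LossThresholdStop(tf.keras.callbacks.Callback):
    """Stop training once the training loss has dropped to the preset loss value."""
    def __init__(self, threshold=0.068):          # the "preferably 0.068" value above
        super().__init__()
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        if logs and logs.get("loss", float("inf")) <= self.threshold:
            self.model.stop_training = True

model = build_segmentation_model()                # from the sketch above
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy")

# `train_images` holds the feature data and `train_labels` the per-pixel label masks
# (one-hot over the two classes to match the model's softmax output):
# model.fit(train_images, train_labels, batch_size=8, epochs=200,
#           callbacks=[LossThresholdStop()])
```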
206. Acquiring an original human body image through a camera;
the human back acupuncture point identification method can be applied to some small embedded devices, and original human body images are obtained in real time through the camera.
It should be noted that, for the accuracy of subsequent identification, the camera needs to be calibrated before shooting.
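The disclosure does not describe the calibration procedure; a common approach is chessboard calibration with OpenCV, sketched below under that assumption. The board size and square size are placeholders.

```python
import glob
import cv2
import numpy as np

def calibrate_camera(image_glob, board_size=(9, 6), square_size_mm=25.0):
    """Estimate the camera intrinsic matrix and distortion coefficients from chessboard
    photographs; board_size counts the inner corners per row and column."""
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size_mm

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob(image_glob):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)
            image_size = gray.shape[::-1]

    _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    return camera_matrix, dist_coeffs
```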
207. Inputting an original human body image into a semantic segmentation model to obtain a semantic segmentation image, wherein the semantic segmentation model is used for performing semantic segmentation on a background and a foreground in the original human body image;
in this embodiment, step 207 is similar to step 102 of the previous embodiment, and is not repeated here.
It should be noted that, before the obtained original human body image is input into the semantic segmentation model, a preset image specification needs to be obtained first. The preset image specification is the input image specification of the semantic segmentation model, and the terminal needs to convert the image to the preset image specification before inputting it into the semantic segmentation model.
208. Extracting target pixel points where the RGB values change abruptly in the semantic segmentation image;
In the semantic segmentation image generated by the semantic segmentation model, the RGB value (0,0,0) represents the background and the RGB value (1,1,1) represents the foreground, so the terminal can identify the human body contour boundary line by extracting the points where the RGB value changes abruptly in the semantic segmentation image.
209. Connecting the target pixel points to determine a boundary line of the human body contour;
The terminal connects all target pixel points where the RGB values change abruptly in the semantic segmentation image and takes the connection result as the human body contour boundary line.
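A sketch of steps 208–209, assuming the segmentation image is an array whose three identical channels hold 0 for background and 1 for foreground; the boundary is taken to be every pixel whose value differs from a horizontal or vertical neighbour.

```python
import numpy as np

def contour_boundary_points(seg_image):
    """Return the (row, col) coordinates of pixels where the segmentation value jumps
    between background (0) and foreground (1), i.e. the human body contour boundary."""
    mask = seg_image[..., 0] if seg_image.ndim == 3 else seg_image
    boundary = np.zeros(mask.shape, dtype=bool)
    boundary[:, :-1] |= mask[:, :-1] != mask[:, 1:]   # horizontal transitions
    boundary[:-1, :] |= mask[:-1, :] != mask[1:, :]   # vertical transitions
    return np.argwhere(boundary)
```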
210. Inputting the original human body image into a key point detection model to obtain human body key point data, wherein the key point detection model is used for extracting human body key points in the original human body image;
in this embodiment, step 210 is similar to step 104 of the previous embodiment, and is not described herein again.
211. Determining shoulder key points in the original human body image according to the human body key point data, and scanning left and right horizontally from the shoulder key points;
After the terminal identifies the human body key points, it determines the left and right shoulder key points among them and performs a horizontal scan of the semantic segmentation map from the shoulder key points.
212. Determining a coincident point of the scanning path and the boundary line of the human body contour as a human body shoulder boundary point;
When the scan reaches the human body contour boundary line, that point is the boundary point of the shoulder position on the human body contour; that is, the terminal determines the point where the scanning path coincides with the human body contour boundary line to be the human shoulder boundary point.
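A sketch of steps 211–212, assuming a single-channel 0/1 segmentation mask and the shoulder key point given in pixel coordinates; the scan walks one row of the mask until the foreground/background value flips.

```python
def shoulder_boundary_point(mask, shoulder_xy, direction):
    """Scan horizontally from a shoulder key point until the mask value flips; that pixel
    is taken as the human shoulder boundary point.

    direction is -1 to scan toward the left image border and +1 toward the right."""
    x, y = shoulder_xy                       # pixel coordinates of the shoulder key point
    width = mask.shape[1]
    start_value = mask[y, x]
    cx = x
    while 0 <= cx + direction < width:
        cx += direction
        if mask[y, cx] != start_value:       # crossed the human body contour boundary line
            return cx, y
    return None                              # no boundary found on this row
```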
213. Calculating the coordinates of acupuncture points on the back of the human body by taking the boundary points of the shoulders of the human body as reference points;
in this embodiment, step 213 is similar to step 106 of the previous embodiment, and is not described herein again.
214. And acquiring a second coordinate system established by taking the camera as a reference point, and converting the position coordinates of the acupuncture points on the back of the human body in the first coordinate system into the position coordinates in the second coordinate system.
Specifically, the terminal acquires a second coordinate system established with the camera as the reference point and converts the position coordinates of the back acupuncture points in the first coordinate system into position coordinates in the second coordinate system, i.e. it converts the pixel coordinates of the acupuncture points in the image into world coordinates relative to the camera. The position coordinate information in the second coordinate system is then transmitted to other mechanisms on the device, for example to a robot control module, so that the robot control module can locate the back acupuncture points according to the coordinate information and drive the robot to perform subsequent operations such as acupoint pressing, acupuncture and moxibustion.
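A sketch of the coordinate conversion using the pinhole camera model and the intrinsic matrix obtained from calibration. The disclosure does not state how the depth of each acupuncture point is obtained; a depth camera or a known working distance is assumed here.

```python
import numpy as np

def pixel_to_camera_coords(u, v, depth, camera_matrix):
    """Back-project an image pixel (u, v) with known depth (in the desired output unit,
    e.g. millimetres) into the camera coordinate system."""
    fx, fy = camera_matrix[0, 0], camera_matrix[1, 1]
    cx, cy = camera_matrix[0, 2], camera_matrix[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```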
In this embodiment, thanks to the training of the initialized network model, the semantic segmentation map generated by the semantic segmentation model has higher accuracy, and so does the human body contour boundary that the terminal further determines from it. The human shoulder boundary points are determined by combining the human shoulder key points, and the acupuncture point coordinates calculated from the shoulder boundary points make the identified back acupuncture points more accurate.
The method has a high recognition speed, wide applicability and portability, does not require much computing power, and is therefore suitable for small embedded devices; for example, it can further be combined with a robot to automatically position and locate acupuncture points on the back of a human body.
Referring to fig. 3, fig. 3 is a diagram illustrating an embodiment of a system for identifying acupuncture points on the back of a human body according to the present application, the system including:
an acquisition unit 301 for acquiring an original human body image;
a first input unit 302, configured to input an original human body image into a semantic segmentation model to obtain a semantic segmentation image, where the semantic segmentation model is used to perform semantic segmentation on a background and a foreground in the original human body image;
a first determining unit 303, configured to determine a boundary line of the human body contour according to the semantic segmentation image;
a second input unit 304, configured to input the original human body image into a key point detection model to obtain human body key point data, where the key point detection model is used to extract human body key points in the original human body image;
a second determining unit 305, configured to determine a human shoulder boundary point according to a shoulder key point in the human key point data and a human contour boundary line;
the calculating unit 306 is configured to establish a first coordinate system by using the boundary points of the shoulders of the human body as reference points, and calculate position coordinates of the acupuncture points on the back of the human body in the first coordinate system.
In this embodiment, the original human body image is input into the trained semantic segmentation model and the trained key point detection model through the first input unit 302 and the second input unit 304 respectively to obtain the human body contour boundary and the human shoulder key points; the second determining unit 305 combines the shoulder key points with the contour boundary to assist in locating the human shoulder boundary points, and the calculating unit 306 calculates the back acupuncture points from the shoulder boundary points, so that the positioning result of the acupuncture points on the back of the human body is more accurate.
The invention can automatically identify the acupuncture points on the back of the human body, is suitable for people of different body types and postures, has a high identification speed, wide applicability and portability, does not require much computing power, and is therefore suitable for small embedded devices.
Referring to fig. 4, the system for identifying acupuncture points on the back of a human body provided by the present application will be described in detail below, and fig. 4 is another embodiment of the system for identifying acupuncture points on the back of a human body provided by the present application, and the system includes:
an acquiring unit 401, configured to acquire an original human body image;
a first input unit 402, configured to input an original human body image into a semantic segmentation model to obtain a semantic segmentation image, where the semantic segmentation model is used to perform semantic segmentation on a background and a foreground in the original human body image;
a first determining unit 403, configured to determine a boundary line of the human body contour according to the semantic segmentation image;
a second input unit 404, configured to input the original human body image into a key point detection model, so as to obtain human body key point data, where the key point detection model is used to extract human body key points in the original human body image;
a second determining unit 405, configured to determine a human shoulder boundary point according to a shoulder key point in the human body key point data and a human body contour boundary line;
and the calculating unit 406 is used for establishing a first coordinate system by taking the boundary points of the shoulders of the human body as reference points, and calculating the position coordinates of the acupuncture points on the back of the human body in the first coordinate system.
In this embodiment, in the semantic segmentation image, the background is represented by RGB values (0,0,0), and the foreground is represented by RGB values (1,1,1), and the first determination unit 403 further includes:
the extraction module 4031 is used for extracting target pixel points with sudden changes of RGB values in the semantic segmentation image;
a first determining module 4032, configured to connect the target pixel points to determine a boundary line of the human body contour.
In this embodiment, the second determining unit 405 further includes:
a second determining module 4051, configured to determine a shoulder key point in the original human body image according to the human body key point data;
the scanning module 4052 is used for scanning left and right horizontally from the shoulder key points;
the third determining module 4053 is configured to determine that a coincidence point of the scanning path and the boundary line of the human body contour is a human body shoulder boundary point.
In this embodiment, the obtaining unit 401 is specifically configured to:
acquiring an original human body image through a camera;
the system still further includes:
and the conversion unit 407 is configured to acquire a second coordinate system established by taking the camera as a reference point, and convert the position coordinates of the acupuncture points on the back of the human body in the first coordinate system into position coordinates in the second coordinate system.
In this embodiment, the system further includes:
the processing unit 408 is configured to obtain a preset image specification, and perform conversion processing on an original human body image based on the preset image specification;
the first input unit 402 is specifically configured to:
and inputting the processed original human body image into a semantic segmentation model.
In this embodiment, the semantic segmentation model includes an encoding network and a decoding network, and the first input unit 402 further includes:
the first input module 4021 is configured to input the original human body image into the coding network for depthwise separable convolution to obtain an abstract feature map;
the second input module 4022 is configured to input the abstract feature map into the decoding network for transposed convolution and standard 2D convolution to obtain a semantic segmentation image.
In this embodiment, the semantic segmentation model is obtained by training through the following method:
collecting a sample human body image to obtain characteristic data;
carrying out label classification on pixel points in the sample human body image to obtain label data, wherein the label comprises a foreground label and a background label;
inputting the characteristic data into an initialized network model to obtain predicted data;
calculating a characteristic loss value between the predicted data and the label data through a preset loss function;
and back-propagating the characteristic loss value to update the initialized network model until the characteristic loss value reaches a preset loss value, at which point training of the initialized network model is complete and the semantic segmentation model is obtained.
In the system of this embodiment, the functions of each unit correspond to the steps in the method embodiment shown in fig. 2, and are not described herein again.
The present application also provides a device for identifying acupuncture points on the back of a human body, please refer to fig. 5, and fig. 5 shows an embodiment of the device for identifying acupuncture points on the back of a human body provided by the present application, which includes:
a processor 501, a memory 502, an input/output unit 503, and a bus 504;
the processor 501 is connected with the memory 502, the input/output unit 503 and the bus 504;
the memory 502 stores a program, and the processor 501 calls the program to execute any of the above methods for identifying acupuncture points on the back of a human body.
The present application also relates to a computer-readable storage medium having a program stored thereon, wherein the program, when executed on a computer, causes the computer to perform any of the above methods for identifying acupuncture points on the back of a human body.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, or in whole or in part, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
Claims (10)
1. A human back acupuncture point identification method is characterized by comprising the following steps:
acquiring an original human body image;
inputting the original human body image into a semantic segmentation model to obtain a semantic segmentation image, wherein the semantic segmentation model is used for performing semantic segmentation on a background and a foreground in the original human body image;
determining a human body contour boundary line according to the semantic segmentation image;
inputting the original human body image into a key point detection model to obtain human body key point data, wherein the key point detection model is used for extracting human body key points in the original human body image;
determining human shoulder boundary points according to the shoulder key points in the human body key point data and the human body contour boundary line;
and establishing a first coordinate system by taking the human shoulder boundary points as reference points, and calculating the position coordinates of the human back acupuncture points in the first coordinate system.
2. The method according to claim 1, wherein the semantically segmented image represents a background with RGB values (0,0,0) and a foreground with RGB values (1,1,1), and the determining the human body contour boundary line according to the semantically segmented image comprises:
extracting target pixel points where the RGB values change abruptly in the semantic segmentation image;
and connecting the target pixel points to determine the boundary line of the human body contour.
3. The method of claim 1, wherein the determining human shoulder boundary points from the shoulder keypoints in the human keypoint data and the human contour boundary line comprises:
determining shoulder key points in the original human body image according to the human body key point data;
scanning left and right horizontally from the shoulder key points;
and determining a coincidence point of the scanning path and the boundary line of the human body contour as a human body shoulder boundary point.
4. The method of claim 1, wherein the acquiring of the raw human body image comprises:
acquiring an original human body image through a camera;
after the establishing a first coordinate system by taking the human shoulder boundary points as reference points and calculating the position coordinates of the human back acupuncture points in the first coordinate system, the method further comprises the following steps:
acquiring a second coordinate system established by taking the camera as a reference point;
and converting the position coordinates of the acupuncture points on the back of the human body in the first coordinate system into the position coordinates in the second coordinate system.
5. The method according to claim 1, wherein prior to said inputting said original human body image into a semantic segmentation model, said method further comprises:
acquiring a preset image specification;
converting the original human body image based on the preset image specification;
the inputting the original human body image into a semantic segmentation model comprises:
and inputting the processed original human body image into a semantic segmentation model.
6. The method according to claim 1, wherein the semantic segmentation model comprises an encoding network and a decoding network, and the inputting the original human body image into the semantic segmentation model to obtain the semantic segmentation image comprises:
inputting the original human body image into the coding network for depthwise separable convolution to obtain an abstract feature map;
and inputting the abstract feature map into the decoding network for transposed convolution and standard 2D convolution to obtain a semantic segmentation image.
7. The method according to any one of claims 1 to 6, wherein the semantic segmentation model is trained by:
collecting a sample human body image to obtain characteristic data;
performing label classification on pixel points in the sample human body image to obtain label data, wherein the label comprises a foreground label and a background label;
inputting the characteristic data into an initialized network model to obtain prediction data;
calculating a characteristic loss value between the prediction data and the label data through a preset loss function;
and back-propagating the characteristic loss value to update the initialized network model until the characteristic loss value reaches a preset loss value, at which point training of the initialized network model is complete and the semantic segmentation model is obtained.
8. A system for identifying acupuncture points on the back of a human body, the system comprising:
the acquisition unit is used for acquiring an original human body image;
the first input unit is used for inputting the original human body image into a semantic segmentation model to obtain a semantic segmentation image, and the semantic segmentation model is used for performing semantic segmentation on a background and a foreground in the original human body image;
the first determining unit is used for determining a human body contour boundary line according to the semantic segmentation image;
the second input unit is used for inputting the original human body image into a key point detection model to obtain human body key point data, and the key point detection model is used for extracting human body key points in the original human body image;
the second determining unit is used for determining human shoulder boundary points according to the shoulder key points in the human body key point data and the human body contour boundary line;
and the calculating unit is used for establishing a first coordinate system by taking the human shoulder boundary points as reference points and calculating the position coordinates of the acupuncture points on the back of the human body in the first coordinate system.
9. A human back acupuncture point recognition device, characterized in that, the device includes:
the device comprises a processor, a memory, an input and output unit and a bus;
the processor is connected with the memory, the input and output unit and the bus;
the memory holds a program that the processor calls to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium having a program stored thereon, the program, when executed on a computer, performing the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210191825.0A CN114897763A (en) | 2022-02-28 | 2022-02-28 | Human back acupuncture point identification method, system, device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210191825.0A CN114897763A (en) | 2022-02-28 | 2022-02-28 | Human back acupuncture point identification method, system, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114897763A true CN114897763A (en) | 2022-08-12 |
Family
ID=82715964
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210191825.0A Pending CN114897763A (en) | 2022-02-28 | 2022-02-28 | Human back acupuncture point identification method, system, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114897763A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115429684A (en) * | 2022-09-29 | 2022-12-06 | 北京洛必德科技有限公司 | Robot massage method and device based on vision and electronic equipment |
CN115409840A (en) * | 2022-11-01 | 2022-11-29 | 北京石油化工学院 | Intelligent positioning system and method for acupoints on back of human body |
CN115409840B (en) * | 2022-11-01 | 2023-10-10 | 北京石油化工学院 | Intelligent positioning system and method for acupoints on back of human body |
CN116721150A (en) * | 2023-08-10 | 2023-09-08 | 深圳力驰传感技术有限公司 | Human body acupoint prediction method based on key point derivation and massage robot |
CN118628840A (en) * | 2024-08-12 | 2024-09-10 | 杭州医尔睿信息技术有限公司 | Human body meridian point position visualization method and device based on AI image recognition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110135375B (en) | Multi-person attitude estimation method based on global information integration | |
CN114897763A (en) | Human back acupuncture point identification method, system, device and storage medium | |
WO2020108362A1 (en) | Body posture detection method, apparatus and device, and storage medium | |
CN111126272B (en) | Posture acquisition method, and training method and device of key point coordinate positioning model | |
CN111275518A (en) | Video virtual fitting method and device based on mixed optical flow | |
CN111325846B (en) | Expression base determination method, avatar driving method, device and medium | |
WO2020078119A1 (en) | Method, device and system for simulating user wearing clothing and accessories | |
WO2021052375A1 (en) | Target image generation method, apparatus, server and storage medium | |
CN108470178B (en) | Depth map significance detection method combined with depth credibility evaluation factor | |
CN111680550B (en) | Emotion information identification method and device, storage medium and computer equipment | |
CN113570684A (en) | Image processing method, image processing device, computer equipment and storage medium | |
CN111080670A (en) | Image extraction method, device, equipment and storage medium | |
CN113255514B (en) | Behavior identification method based on local scene perception graph convolutional network | |
CN114998934A (en) | Clothes-changing pedestrian re-identification and retrieval method based on multi-mode intelligent perception and fusion | |
CN111108508A (en) | Facial emotion recognition method, intelligent device and computer-readable storage medium | |
CN111985332A (en) | Gait recognition method for improving loss function based on deep learning | |
CN112906520A (en) | Gesture coding-based action recognition method and device | |
CN118212028A (en) | Virtual fitting method, virtual fitting device, electronic equipment and readable storage medium | |
Li et al. | Global co-occurrence feature learning and active coordinate system conversion for skeleton-based action recognition | |
CN113362455B (en) | Video conference background virtualization processing method and device | |
CN117437690A (en) | Gesture recognition method, system and medium combining environment adaptation and estimation classification | |
CN116012875A (en) | Human body posture estimation method and related device | |
CN115530814A (en) | Child motion rehabilitation training method based on visual posture detection and computer deep learning | |
CN113327267A (en) | Action evaluation method based on monocular RGB video | |
CN115936796A (en) | Virtual makeup changing method, system, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |