CN113537057A - Facial acupuncture point automatic positioning detection system and method based on improved cycleGAN - Google Patents


Publication number
CN113537057A
Authority
CN
China
Prior art keywords
image
acupuncture point
calibration
point calibration
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110803402.5A
Other languages
Chinese (zh)
Other versions
CN113537057B (en)
Inventor
杨婕
闫敬来
上官宏
高阳
张�雄
贺文彬
田雅娟
李钦青
Current Assignee
Shanxi University of Chinese Medicine
Taiyuan University of Science and Technology
Original Assignee
Shanxi University of Chinese Medicine
Taiyuan University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Shanxi University of Chinese Medicine and Taiyuan University of Science and Technology
Priority to CN202110803402.5A
Publication of CN113537057A
Application granted
Publication of CN113537057B
Active legal status
Anticipated expiration legal status

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions


Abstract

The invention discloses a facial acupoint automatic positioning detection system and method based on an improved cycleGAN. The method comprises the following steps: generating a training data set, which comprises a data set of images without acupoint calibration formed by collecting human face images, and a data set of images with acupoint calibration formed by labeling acupoints on the human face images; constructing a cycle generation adversarial network model and training it to convergence on the input training data set; and automatic acupoint calibration and output, in which input face image information is passed through the trained cycle generation adversarial network model to convert a face image without acupoint calibration into a face image with acupoint calibration, which is then output, so that acupoints are calibrated automatically for different face images. With the system and method, acupoints can be calibrated automatically, the difficulty of acupoint calibration is reduced, and the accuracy of acupoint calibration is improved.

Description

Facial acupuncture point automatic positioning detection system and method based on improved cycleGAN
Technical Field
The invention relates to the technical field of facial acupoint positioning, and in particular to a facial acupoint automatic positioning detection system and method based on an improved cycleGAN.
Background
Acupuncture medicine has marked therapeutic effects on many facial diseases such as juvenile myopia, dry eye, allergic rhinitis and peripheral facial paralysis. Its effectiveness, safety and scientific basis have been fully affirmed by the World Health Organization, the U.S. National Institutes of Health and other bodies, and it has become an important component of world medicine.
Acupuncture and moxibustion medicine emphasizes the organic coordination of "principle", "method", "prescription", "acupoint" and "manipulation". Among these, accurate positioning of the acupoints has been valued by physicians through the generations; the Taiping Shenghui Fang warns that if the acupoints are located inaccurately, the treatment will not succeed. Correct positioning of the acupoints is the basis of acupuncture and moxibustion treatment and ensures its safety and effectiveness. Traditional Chinese medicine acupoint positioning can be traced back to around 2900 BCE, and through the efforts of generations of practitioners, three acupoint positioning methods are used clinically today: positioning by body-surface anatomical landmarks, proportional bone-length (skeletal) measurement, and finger-cun measurement. The three methods are combined in use: body-surface anatomical landmarks serve as the primary reference, the distances between body parts are measured proportionally, and finger widths provide supplementary measurement, thereby determining the positions of the acupoints. However, traditional acupoint location is a subjective skill that requires long study and practice to master correctly. The facial acupoints are numerous and densely distributed, placing very high technical demands on the practitioner; beginners dare not attempt them lightly, which hinders the learning, inheritance, popularization and application of facial acupuncture skills.
Therefore, automatic facial acupoint location technologies have been developed: (1) automatic acupoint location using the positions of facial features as reference coordinates: the scholar Zhao Yang partitioned the facial features according to the "three courts and five eyes" rule of basic face-structure theory in Chinese portraiture, detected corners of the facial features with the Minimum Eigenvalue operator and their edges with the LoG operator, combined the corner and edge information to locate the facial features, and finally completed facial acupoint positioning with the facial-feature positions as reference coordinates; the scholar Chang Mulong realized automatic positioning of the facial acupoints based on a feature-point localization algorithm and the proportional body-measurement method. (2) Acquisition of facial acupoints by information-fusion algorithms: scholars such as Yang Xuming and others collected facial-acupoint positioning data from multiple experts and applied a "point-by-point variable precision" information-fusion algorithm to obtain optimal "precision" values for the facial acupoints, thereby realizing automatic positioning of prescription acupoints for related diseases in new treatment subjects. (3) Automatic acupoint selection realized by electrophysiology: scholars such as Chen Zhengliang and Lin Dong explored the electrical characteristics of the acupoints through electrophysiological techniques, and located the acupoints using the differing electrical characteristics of different tissues.
Although scholars at home and abroad have explored automatic positioning of facial acupoints in traditional Chinese medicine with various information technologies and contributed to automatic acupoint selection, the above technologies still have shortcomings, and their results face certain limitations and difficulties in application and popularization: (1) when the target face image is non-standard, techniques that use facial-feature positions as reference coordinates locate some acupoints with low accuracy; (2) techniques that acquire facial acupoints by information-fusion algorithms must match the sampled face shape against the face shapes in a database; if matching fails, accurate acupoints cannot be obtained, and the number of recognizable facial acupoints is small; (3) although electrophysiological acupoint selection is safe, fast and accurate in its measurement process, it places high demands on equipment and personnel and is difficult to popularize.
On this basis, the prior art needs improvement: finding an automatic acupoint selection technology suitable for clinical research and daily health care remains an unsolved problem, and the present scheme arises accordingly.
Disclosure of Invention
The invention aims to provide a facial acupoint automatic positioning detection system and method based on an improved cycleGAN which overcome the above defects, automatically calibrate acupoints according to facial images, reduce the difficulty of acupoint calibration and improve its accuracy.
In order to achieve the above purpose, the solution of the invention is:
an automatic facial acupoint positioning and detecting system based on improved cycleGAN comprises: the system comprises a face information input module, an acupuncture point calibration module and an output module;
the face information input module is used for inputting face image information;
the acupoint calibration module comprises a cycle generation countermeasure network, and the cycle generation countermeasure network comprises: the device comprises a generator G and a generator F, wherein the generator G is used for converting an image without acupuncture point calibration into an image with acupuncture point calibration, the generator F is used for converting the image with acupuncture point calibration into an image without academic point calibration, the generator G and the generator F both comprise a dual-channel serial attention network, the dual-channel serial attention network sequentially comprises a Shearlet image decomposition subnet for Shearlet conversion, a high-frequency channel or a low-frequency channel from an input end to an output end, and the high-frequency channel and the low-frequency channel sequentially comprise an encoder, a CA attention module, an SA attention module and a decoder from the input end to the output end; the cyclic generation confrontation network also comprises a discriminator D for judging that no acupuncture point calibration image existsAAnd a discriminator D for judging the marked image of the acupuncture pointsBSaid discriminator DAAnd discriminator DBComprises a Shuffle function for obtaining the point calibration images with different resolutions and the calibration images without degree, extracting the characteristics of the point calibration images with different resolutions and the calibration images without degree, fusing and reducing the dimensionsThe result is subjected to an output Sigmoid activation function, so that the confrontation network is circularly generated, and no acupoint calibration image is converted into an acupoint calibration image after training convergence;
the information output module outputs the face image information with acupoint calibration.
A facial acupoint automatic positioning detection method based on an improved cycleGAN comprises the following steps:
S1, generating a training data set: collecting human face images to form a data set of images without acupoint calibration, and labeling acupoints on the human face images to form a data set of images with acupoint calibration;
S2, constructing a cycle generation adversarial network model, and training the model to convergence on the input training data set;
S3, automatic acupoint calibration and output: inputting images without acupoint calibration from outside the data set of step S1, converting them into corresponding images with acupoint calibration through the trained cycle generation adversarial network model, and then outputting the images with acupoint calibration, so that corresponding acupoint-calibrated images are generated automatically for different input images without acupoint calibration.
Further, the cycle generation adversarial network model comprises a generator G, a generator F, a discriminator D_A and a discriminator D_B, and the training and convergence process in step S2 comprises the following steps:
first, the generator G converts an image without acupoint calibration into a generated image with acupoint calibration, and the generator F converts that generated image into a cyclic image without acupoint calibration; meanwhile, the generator F converts an image with acupoint calibration into a generated image without acupoint calibration, and the generator G converts that generated image into a cyclic image with acupoint calibration;
then, the discriminator D_A judges whether the generated image without acupoint calibration and the cyclic image without acupoint calibration are genuine images without acupoint calibration, and the discriminator D_B judges whether the generated image with acupoint calibration and the cyclic image with acupoint calibration are genuine images with acupoint calibration;
and finally, carrying out iterative optimization by using a gradient optimization algorithm.
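The two-generator cycle in the steps above can be sketched as follows. `G`, `F` and `cycle_step` are placeholders standing in for the trained networks, not the patent's actual models; a sketch of the data flow, not a definitive implementation.

```python
def cycle_step(real_A, real_B, G, F):
    """One forward pass of the training cycle.

    real_A: face image without acupoint calibration (domain X)
    real_B: face image with acupoint calibration (domain Y)
    G: X -> Y generator; F: Y -> X generator (placeholder callables)
    """
    fake_B = G(real_A)   # generated image with acupoint calibration
    cyc_A = F(fake_B)    # cyclic image: should reconstruct real_A
    fake_A = F(real_B)   # generated image without acupoint calibration
    cyc_B = G(fake_A)    # cyclic image: should reconstruct real_B
    # D_A would judge fake_A / cyc_A, D_B would judge fake_B / cyc_B,
    # after which a gradient optimizer updates G, F, D_A and D_B.
    return fake_A, fake_B, cyc_A, cyc_B

# With identity generators the cycle reconstructions are exact:
identity = lambda x: x
fake_A, fake_B, cyc_A, cyc_B = cycle_step(1.0, 2.0, identity, identity)
```

The cycle-consistency requirement is exactly that `cyc_A` stays close to `real_A` and `cyc_B` close to `real_B`, which is what the loss function below penalizes.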
Further, the generator G and the generator F each comprise a dual-channel serial attention network comprising a Shearlet image-decomposition subnet, encoders, CA attention modules, SA spatial attention modules and decoders. The network first decomposes the input facial image information into a low-frequency part and a high-frequency part through the Shearlet image-decomposition subnet; each part then has features extracted by an encoder through convolution, passes through a CA attention module and an SA spatial attention module, and is finally input to a decoder to form the generated image.
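A minimal sketch of the decomposition step, using a simple box blur as a stand-in for the Shearlet low-pass band (an assumption for illustration; a real Shearlet transform uses directional multi-scale filters). The point is only that the input splits into a low-frequency part plus a high-frequency residual that sum back to the image.

```python
import numpy as np

def box_blur(img, k=5):
    """Box blur as a stand-in low-pass filter (not a real Shearlet band)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def decompose(img):
    """Split an image into a low-frequency part (overall appearance) and a
    high-frequency part (detail); by construction low + high == img."""
    low = box_blur(img)
    high = img - low
    return low, high

img = np.random.rand(32, 32)
low, high = decompose(img)  # low feeds the low-frequency channel, high the high-frequency one
```

Each part then goes through its own encoder-attention-decoder path, as described above.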
Further, the CA attention module applies maximum pooling and average pooling to the features and sums the results, as shown in the following formula:
CA(M)=sigmoid(AvgPool(M)+MaxPool(M));
where M denotes the feature map, AvgPool denotes average pooling, and MaxPool denotes maximum pooling.
Further, the SA spatial attention module performs a convolution operation on the output of the CA attention module, as shown in the following formula:
SA(M)=sigmoid(f(CA(M)));
where f denotes the convolution operation.
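The two formulas can be sketched together in NumPy. The pooling in CA(M) is taken here over the spatial dimensions (one scalar weight per channel), and f in SA(M) is modeled as a 1x1 convolution with given weights; both shape choices are assumptions for illustration, since the patent does not fix them.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """CA(M) = sigmoid(AvgPool(M) + MaxPool(M)), applied as channel weights.

    feat: (C, H, W) feature map; pooling collapses the spatial dims H, W.
    """
    avg = feat.mean(axis=(1, 2))    # (C,) average pooling
    mx = feat.max(axis=(1, 2))      # (C,) maximum pooling
    weights = sigmoid(avg + mx)     # (C,) one weight per channel
    return feat * weights[:, None, None]

def spatial_attention(feat, kernel):
    """SA(M) = sigmoid(f(CA(M))), applied as per-pixel weights.

    kernel: (C,) weights of an assumed 1x1 convolution standing in for f.
    """
    ca = channel_attention(feat)
    conv = np.tensordot(kernel, ca, axes=(0, 0))   # (H, W)
    return ca * sigmoid(conv)[None, :, :]
```

In the serial arrangement described above, the SA module consumes the CA module's output, which is why `spatial_attention` calls `channel_attention` internally.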
Further, the discriminator D_A and the discriminator D_B first adjust, through a shuffle operation, the corresponding resolutions of the inputs (the image without acupoint calibration, the generated image without acupoint calibration, the cyclic image without acupoint calibration, the image with acupoint calibration, the generated image with acupoint calibration and the cyclic image with acupoint calibration) to obtain input images at different resolutions; feature extraction is then performed on the multi-resolution inputs, the fused features undergo channel dimensionality reduction through cascaded convolution layers, and finally the output result is obtained through a sigmoid activation function.
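The patent does not spell out the shuffle operation; a common way to obtain several lower-resolution views without discarding pixels is space-to-depth (pixel unshuffle), sketched here as an assumption:

```python
import numpy as np

def pixel_unshuffle(img, r=2):
    """Space-to-depth: rearrange an (H, W) image into (r*r, H//r, W//r).

    Each output channel is a sub-sampled, lower-resolution view of the input,
    so every pixel is kept while the spatial resolution drops by a factor r."""
    h, w = img.shape
    assert h % r == 0 and w % r == 0
    return (img.reshape(h // r, r, w // r, r)
               .transpose(1, 3, 0, 2)
               .reshape(r * r, h // r, w // r))

img = np.arange(16.0).reshape(4, 4)
views = pixel_unshuffle(img, 2)   # four 2x2 views of the 4x4 input
```

Feature extraction could then run on each view, with the multi-resolution features fused and channel-reduced before the final sigmoid, as the text describes.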
Further, the loss function associated with the discriminator D_A is:
L_GAN(F, D_A, Y, X) = E_{x~p_data(x)}[log D_A(x)] + E_{y~p_data(y)}[log(1 - D_A(F(y)))];
the loss function associated with the discriminator D_B is:
L_GAN(G, D_B, X, Y) = E_{y~p_data(y)}[log D_B(y)] + E_{x~p_data(x)}[log(1 - D_B(G(x)))];
the sum of the loss functions of the generator G and the generator F (the cycle-consistency loss) is expressed as:
L_cyc(G, F) = E_{x~p_data(x)}[||F(G(x)) - x||_1] + E_{y~p_data(y)}[||G(F(y)) - y||_1];
and the final overall loss function of the cycle generation adversarial network is expressed as:
L(G, F, D_A, D_B) = L_GAN(G, D_B, X, Y) + L_GAN(F, D_A, Y, X) + λ·L_cyc(G, F);
wherein F denotes the generator F, G denotes the generator G, D_A denotes the discriminator D_A, D_B denotes the discriminator D_B, y denotes an image with acupoint calibration, x denotes an image without acupoint calibration, and λ is a settable parameter.
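As a numeric sanity check, the objective above can be evaluated directly. Discriminator outputs are modeled as probabilities in (0, 1), and λ = 10 is a common default for cycle-consistent GANs (an assumption, since the patent only calls λ a settable parameter).

```python
import numpy as np

def gan_loss(d_real, d_fake):
    """L_GAN = E[log D(real)] + E[log(1 - D(fake))], D outputs in (0, 1)."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def cyc_loss(real_A, cyc_A, real_B, cyc_B):
    """L_cyc(G, F) = E||F(G(x)) - x||_1 + E||G(F(y)) - y||_1."""
    return np.mean(np.abs(cyc_A - real_A)) + np.mean(np.abs(cyc_B - real_B))

def total_loss(dB_real, dB_fake, dA_real, dA_fake,
               real_A, cyc_A, real_B, cyc_B, lam=10.0):
    """L(G,F,D_A,D_B) = L_GAN(G,D_B,X,Y) + L_GAN(F,D_A,Y,X) + lam * L_cyc(G,F)."""
    return (gan_loss(dB_real, dB_fake) + gan_loss(dA_real, dA_fake)
            + lam * cyc_loss(real_A, cyc_A, real_B, cyc_B))

# Perfect cycle reconstructions contribute nothing to L_cyc:
x = np.zeros(4); y = np.ones(4)
assert cyc_loss(x, x, y, y) == 0.0
```

Training alternates gradient steps so that the discriminators push their L_GAN terms up while the generators push the total loss down.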
Further, in step S1, a digital camera or a mobile-phone front camera is used to collect the human face images, and acupoints are manually labeled on the human face images to form the images with acupoint calibration.
After adopting the above scheme, the invention has the following beneficial effects:
(1) face image information is input and, through the trained cycle generation adversarial network model, a face image without acupoint calibration is converted into a face image with acupoint calibration, which is then output; by automatically marking the facial acupoints from the patient's facial image information through the cycle generation adversarial network, the difficulty of acupoint calibration is reduced, the accuracy of acupoint calibration is improved, and the efficiency of the acupoint calibration process is increased;
(2) the generator G and the generator F each comprise a dual-channel serial attention network: the image is decomposed into a low-frequency part and a high-frequency part by the Shearlet image-decomposition subnet for the Shearlet transform, where the low-frequency part reflects the overall appearance of the image and the detail information is concentrated in the high-frequency part; the low-frequency part enters the low-frequency channel and the high-frequency part enters the high-frequency channel, and a serial structure of a CA attention module and an SA spatial attention module is introduced into each channel. This overcomes the limited representation capability of the conventional cycle generation adversarial network, improves the sensitivity of the network to different semantic information, and improves the positioning accuracy of the facial acupoints.
(3) the discriminators D_A and D_B comprise a Shuffle function for obtaining images at different resolutions and a Sigmoid activation function that outputs the result after feature extraction and fusion dimensionality reduction; this narrows the range of images the discriminator must accept, saves time and hardware resources compared with the discriminator of a conventional cycle generation adversarial network, and improves the discriminator's discrimination capability.
Drawings
FIG. 1 is a schematic structural diagram of the facial acupoint automatic positioning detection system based on an improved cycleGAN of the present invention;
FIG. 2 is a schematic flow diagram of the cycle generation adversarial network of the present invention;
FIG. 3 is a schematic structural diagram of the generator G and the generator F;
FIG. 4 is a schematic structural diagram of the discriminator D_A and the discriminator D_B.
Detailed Description
The invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The invention provides a facial acupoint automatic positioning detection system based on an improved cycleGAN which, as shown in FIG. 1, comprises: a face information input module, an acupoint calibration module and an information output module;
the face information input module is used for inputting face image information, the face image information comprising images with acupoint calibration and images without acupoint calibration;
as shown in FIGS. 2, 3 and 4, the acupoint calibration module comprises a cycle generation adversarial network, which comprises a generator G for converting images without acupoint calibration into images with acupoint calibration, and a generator F for converting images with acupoint calibration into images without acupoint calibration. An image without acupoint calibration is a human face image obtained by a digital camera, a video camera or the like; an image with acupoint calibration is the corresponding image on which the positions of the facial acupoints have been marked according to the facial features of the individual. The generator G and the generator F each comprise a dual-channel serial attention network which, from input to output, consists of a Shearlet image-decomposition subnet for the Shearlet transform followed by a high-frequency channel in parallel with a low-frequency channel. The Shearlet image-decomposition subnet decomposes the input image, with or without acupoint calibration, into a low-frequency part and a high-frequency part: the low-frequency part reflects the overall appearance of the image and enters the low-frequency channel, while the high-frequency part reflects the detail information of the image and enters the high-frequency channel. Each channel, from input to output, consists of an encoder, a CA attention module, an SA attention module and a decoder: the encoder extracts features from the high-frequency or low-frequency part by convolution; the CA attention module applies maximum pooling and average pooling to the features and sums the results, retaining more of the image's feature textures and background information; the SA attention module then performs a convolution operation on the features processed by the CA attention module; and the decoder generates the image. The cycle generation adversarial network further comprises a discriminator D_A for judging images without acupoint calibration and a discriminator D_B for judging images with acupoint calibration. The discriminators D_A and D_B each comprise a Shuffle function for obtaining images with and without acupoint calibration at different resolutions, and a Sigmoid activation function that outputs the result after feature extraction and fusion dimensionality reduction. The Shuffle function adjusts the images input to the discriminator D_A or D_B (the images without acupoint calibration and the generated images with acupoint calibration, among others) to obtain input images at different resolutions; feature extraction on these resolutions forms a top-down fusion, the fused features undergo channel dimensionality reduction through cascaded convolution layers, and the result is finally fed into a sigmoid activation function to obtain the output. This narrows the range of images the discriminator receives and improves its discrimination effect, and after training to convergence the cycle generation adversarial network converts images without acupoint calibration into images with acupoint calibration;
the information output module outputs face image information with acupuncture point calibration.
The invention also provides a facial acupoint automatic positioning detection method based on the improved cycleGAN, which comprises the following steps:
S1, generating a training data set: collecting human face images to form a data set of images without acupoint calibration, and labeling acupoints on the human face images to form a data set of images with acupoint calibration. In this embodiment, a digital camera or a mobile-phone front camera is used to collect the human face images, and acupoints are manually labeled on them to form the images with acupoint calibration; the human face images may be collected in any manner that captures face information and forms a digital image, without specific limitation;
S2, constructing a cycle generation adversarial network model and training it to convergence on the input training data set. Specifically, as shown in FIG. 2, the cycle generation adversarial network model comprises a generator G, a generator F, a discriminator D_A and a discriminator D_B.
The loss function associated with the discriminator D_A is:
L_GAN(F, D_A, Y, X) = E_{x~p_data(x)}[log D_A(x)] + E_{y~p_data(y)}[log(1 - D_A(F(y)))];
the loss function associated with the discriminator D_B is:
L_GAN(G, D_B, X, Y) = E_{y~p_data(y)}[log D_B(y)] + E_{x~p_data(x)}[log(1 - D_B(G(x)))];
the sum of the loss functions of the generator G and the generator F (the cycle-consistency loss) is expressed as:
L_cyc(G, F) = E_{x~p_data(x)}[||F(G(x)) - x||_1] + E_{y~p_data(y)}[||G(F(y)) - y||_1];
and the final overall loss function of the cycle generation adversarial network is expressed as:
L(G, F, D_A, D_B) = L_GAN(G, D_B, X, Y) + L_GAN(F, D_A, Y, X) + λ·L_cyc(G, F);
wherein F denotes the generator F, G denotes the generator G, D_A denotes the discriminator D_A, D_B denotes the discriminator D_B, y denotes an image with acupoint calibration, x denotes an image without acupoint calibration, and λ is a settable parameter;
with continued reference to FIG. 2, the training and convergence process comprises the following steps: the generator G converts the image without acupoint calibration (real_A) into a generated image with acupoint calibration (fake_B), and the generator F converts the generated image (fake_B) into a cyclic image without acupoint calibration (cyc_A); meanwhile, the generator F converts the image with acupoint calibration (real_B) into a generated image without acupoint calibration (fake_A), and the generator G converts the generated image (fake_A) into a cyclic image with acupoint calibration (cyc_B). The discriminator D_A judges whether the generated image without acupoint calibration (fake_A) and the cyclic image without acupoint calibration (cyc_A) are genuine images without acupoint calibration (real_A); the discriminator D_B judges whether the generated image with acupoint calibration (fake_B) and the cyclic image with acupoint calibration (cyc_B) are genuine images with acupoint calibration (real_B). Iterative optimization is then carried out with a gradient optimization algorithm until the loss function of the whole cycle generation adversarial network is minimized;
S3, automatic acupoint calibration and output: inputting images without acupoint calibration from outside the data set of step S1, converting them into corresponding images with acupoint calibration through the trained cycle generation adversarial network model, and then outputting the images with acupoint calibration, so that corresponding acupoint-calibrated images are generated automatically for different input images without acupoint calibration.
In a further embodiment, as shown in FIG. 3, the generator G and the generator F in S2 each comprise a dual-channel serial attention network comprising a Shearlet image-decomposition subnet, encoders, CA attention modules, SA spatial attention modules and decoders. The network first decomposes the facial image information input to the generator G or F into a low-frequency part and a high-frequency part through the Shearlet image-decomposition subnet; the low-frequency part reflects the overall appearance of the image and the high-frequency part its detail information. Features are then extracted by the encoders through convolution, pass through the CA attention module and the SA spatial attention module, and are finally input to the decoder to form the generated image. The CA attention module applies maximum pooling and average pooling to the features and sums the results, retaining more of the image's feature textures and background information, as shown in the following formula:
CA(M)=sigmoid(AvgPool(M)+MaxPool(M));
wherein M denotes the feature map, AvgPool denotes average pooling, and MaxPool denotes maximum pooling;
the SA spatial attention module, as a complement to the CA attention module, further performs a convolution operation on the output of the CA attention module, as shown in the following formula:
SA(M)=sigmoid(f(CA(M)));
where f denotes the convolution operation.
As shown in FIG. 4, the discriminator D_A and the discriminator D_B first adjust, through a shuffle operation, the corresponding resolutions of the input images (the image without acupoint calibration, the generated image without acupoint calibration, the cyclic image without acupoint calibration, the image with acupoint calibration, the generated image with acupoint calibration and the cyclic image with acupoint calibration) from bottom to top to obtain input images at different resolutions. Feature extraction is then performed on the multi-resolution inputs to form a top-down fusion; the fused features undergo channel dimensionality reduction through cascaded convolution layers, and the output result is finally obtained through a sigmoid activation function. This avoids the low resolution of generated images that would result from the discriminator receiving whole images over too large a range.
The above description is only a preferred embodiment of the present invention and is not intended to limit its design; all equivalent changes made according to the key design points of the present invention fall within the protection scope of the present invention.

Claims (9)

1. A facial acupuncture point automatic positioning detection system based on improved cycleGAN, characterized by comprising: a face information input module, an acupuncture point calibration module and an information output module;
the face information input module is used for inputting face image information;
the acupoint calibration module comprises a cycle generation countermeasure network, and the cycle generation countermeasure network comprises: a generator G and a generator F, wherein the generator G is used for converting an image without acupuncture point calibration into an image with acupuncture point calibration, and the generator F is used for converting an image with acupuncture point calibration into an image without acupuncture point calibration; the generator G and the generator F each comprise a dual-channel serial attention network, which sequentially comprises, from input end to output end, a Shearlet image decomposition subnet for Shearlet transformation and a high-frequency channel and a low-frequency channel, wherein the high-frequency channel and the low-frequency channel each sequentially comprise, from input end to output end, an encoder, a CA attention module, an SA attention module and a decoder; the cycle generation countermeasure network further comprises a discriminator D_A for judging images without acupuncture point calibration and a discriminator D_B for judging images with acupuncture point calibration, wherein the discriminator D_A and the discriminator D_B each comprise a Shuffle function for obtaining images with and without acupuncture point calibration at different resolutions, and a Sigmoid activation function for outputting a result after feature extraction and fusion dimensionality reduction are performed on the images with and without acupuncture point calibration at the different resolutions, whereby the cycle generation countermeasure network converts images without acupuncture point calibration into images with acupuncture point calibration after training convergence;
the information output module outputs the information of the acupuncture point calibration image.
2. A facial acupuncture point automatic positioning detection method based on improved cycleGAN is characterized by comprising the following steps:
s1, generating a training data set, which comprises collecting human face images to form a data set of images without acupuncture point calibration, and marking acupuncture points on the human face images to form a data set of images with acupuncture point calibration;
s2, constructing a cycle generation countermeasure network model, and training the cycle generation countermeasure network model to convergence by inputting the training data set;
s3, automatically calibrating and outputting acupuncture points, which comprises inputting an image without acupuncture point calibration from outside the data set of images without acupuncture point calibration, converting it into the corresponding image with acupuncture point calibration through the trained cycle generation countermeasure network model, and then outputting the image with acupuncture point calibration, so that corresponding acupuncture point images are automatically generated for different input images without acupuncture point calibration.
3. The facial acupuncture point automatic positioning detection method based on the improved CycleGAN as claimed in claim 2, characterized in that: the cycle generation countermeasure network model comprises a generator G, a generator F, a discriminator D_A and a discriminator D_B, and the training and convergence process in step S2 comprises the following steps:
the generator G converts the image without acupuncture point calibration into a generated image with acupuncture point calibration, and the generator F converts the generated image with acupuncture point calibration into a cyclic image without acupuncture point calibration; meanwhile, the generator F converts the image with acupuncture point calibration into a generated image without acupuncture point calibration, and the generator G converts the generated image without acupuncture point calibration into a cyclic image with acupuncture point calibration;
then, the discriminator D_A judges whether the cyclic image without acupuncture point calibration is an image without acupuncture point calibration, and the discriminator D_B judges whether the generated image with acupuncture point calibration and the cyclic image with acupuncture point calibration are images with acupuncture point calibration;
and finally, carrying out iterative optimization by using a gradient optimization algorithm.
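The two conversion cycles and the discriminator checks described in the steps above can be sketched as one forward pass. The generators and discriminators here are toy stand-ins (simple lambdas) chosen only to make the data flow of claim 3 executable; the real networks are the dual-channel attention generators and multi-resolution discriminators of the invention.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-ins for the real networks (assumptions for illustration only).
G = lambda x: x + 0.5               # no calibration -> with calibration
F = lambda y: y - 0.5               # with calibration -> no calibration
D_A = lambda x: sigmoid(x.mean())   # judges "no calibration" realism
D_B = lambda y: sigmoid(y.mean())   # judges "with calibration" realism

def training_step(x, y):
    """One forward pass of the two cycles described in claim 3."""
    # Cycle 1: x -> G(x) (generated, with calibration) -> F(G(x)) (cyclic, no calibration)
    fake_y = G(x)
    cyc_x = F(fake_y)
    # Cycle 2: y -> F(y) (generated, no calibration) -> G(F(y)) (cyclic, with calibration)
    fake_x = F(y)
    cyc_y = G(fake_x)
    # Discriminator judgments on the cyclic / generated samples
    scores = {"D_A(cyc_x)": D_A(cyc_x),
              "D_B(fake_y)": D_B(fake_y),
              "D_B(cyc_y)": D_B(cyc_y)}
    # Cycle-consistency residual that the gradient optimizer drives to zero
    cyc_loss = np.abs(cyc_x - x).mean() + np.abs(cyc_y - y).mean()
    return scores, cyc_loss

scores, cyc_loss = training_step(np.zeros((4, 4)), np.ones((4, 4)))
print(cyc_loss)   # 0.0: these toy generators are exact inverses
```

Because the toy G and F are exact inverses, the cycle residual vanishes; during real training the gradient optimization of the final step pushes the learned generators toward this same condition.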
4. The facial acupuncture point automatic positioning detection method based on the improved CycleGAN as claimed in claim 3, characterized in that: the generator G and the generator F each comprise a dual-channel serial attention network, which includes a shearlet image decomposition subnet, an encoder, a CA attention module, an SA spatial attention module and a decoder; the dual-channel serial attention network first decomposes the facial image information input to the generator G or the generator F into a low-frequency part and a high-frequency part through the shearlet image decomposition subnet, then extracts features through the encoder and convolves them, and finally inputs them to the decoder through the CA attention module and the SA spatial attention module to form a generated image.
5. The facial acupuncture point automatic positioning detection method based on the improved CycleGAN as claimed in claim 4, characterized in that: the CA attention module applies maximum pooling and average pooling to the features and sums the results, as shown in the following formula:
CA(M)=sigmoid(AvgPool(M)+MaxPool(M));
where M represents the feature map, AvgPool represents average pooling, and MaxPool represents maximum pooling.
6. The facial acupuncture point automatic positioning detection method based on the improved CycleGAN as claimed in claim 4, characterized in that: the SA space attention module performs a convolution operation on the CA attention module as shown in the following formula:
SA(M)=sigmoid(f(CA(M)));
where f denotes the convolution operation.
7. The facial acupuncture point automatic positioning detection method based on the improved CycleGAN as claimed in claim 3, characterized in that: the discriminator D_A and the discriminator D_B first adjust, through a shuffle operation, the resolution of their respective input images (namely the image without acupuncture point calibration, the generated image without acupuncture point calibration, the cyclic image without acupuncture point calibration, the image with acupuncture point calibration, the generated image with acupuncture point calibration, and the cyclic image with acupuncture point calibration) to obtain input images at different resolutions; feature extraction is then performed on the input images at the different resolutions, channel dimensionality reduction is performed on the fused features of the different resolutions through cascaded convolution layers, and the output result is finally obtained through a sigmoid activation function.
8. The facial acupuncture point automatic positioning detection method based on the improved CycleGAN as claimed in claim 3, characterized in that:
the loss function of the discriminator D_A is:
L_GAN(F, D_A, Y, X) = E_{x~p_data(x)}[log D_A(x)] + E_{y~p_data(y)}[log(1 - D_A(F(y)))];
the loss function of the discriminator D_B is:
L_GAN(G, D_B, X, Y) = E_{y~p_data(y)}[log D_B(y)] + E_{x~p_data(x)}[log(1 - D_B(G(x)))];
the sum of the loss functions of the generator G and the generator F, namely the cycle consistency loss, is expressed as:
L_cyc(G, F) = E_{x~p_data(x)}[||F(G(x)) - x||_1] + E_{y~p_data(y)}[||G(F(y)) - y||_1];
the final overall loss function of the cycle generation countermeasure network is expressed as:
L(G, F, D_A, D_B) = L_GAN(G, D_B, X, Y) + L_GAN(F, D_A, Y, X) + λ·L_cyc(G, F);
wherein F represents the generator F, G represents the generator G, D_A represents the discriminator D_A, D_B represents the discriminator D_B, Y represents the domain of images with acupuncture point calibration, X represents the domain of images without acupuncture point calibration, and λ is a settable weighting parameter.
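The loss terms of claim 8, in their standard CycleGAN form, can be evaluated numerically as follows. This is a sketch: the log-likelihood form of the adversarial terms matches the usual cycle-consistent GAN formulation, and λ = 10 is a conventional default rather than a value fixed by the claim, which only states that λ is settable.

```python
import numpy as np

def gan_loss(d_real, d_fake):
    """L_GAN = E[log D(real)] + E[log(1 - D(fake))], evaluated over
    arrays of discriminator outputs lying in (0, 1)."""
    return np.log(d_real).mean() + np.log(1.0 - d_fake).mean()

def cycle_loss(x, cyc_x, y, cyc_y):
    """L_cyc(G, F) = E[||F(G(x)) - x||_1] + E[||G(F(y)) - y||_1]."""
    return np.abs(cyc_x - x).mean() + np.abs(cyc_y - y).mean()

def total_loss(db_real, db_fake, da_real, da_fake,
               x, cyc_x, y, cyc_y, lam=10.0):
    """L(G, F, D_A, D_B) = L_GAN(G, D_B, X, Y) + L_GAN(F, D_A, Y, X)
    + λ·L_cyc(G, F); lam=10.0 is an assumed default."""
    return (gan_loss(db_real, db_fake)
            + gan_loss(da_real, da_fake)
            + lam * cycle_loss(x, cyc_x, y, cyc_y))

# Example: confident discriminators and perfectly reconstructed cycles.
x = np.zeros((2, 4, 4)); cyc_x = x.copy()
y = np.ones((2, 4, 4)); cyc_y = y.copy()
db_real = np.full(2, 0.9); db_fake = np.full(2, 0.1)
loss = total_loss(db_real, db_fake, db_real, db_fake, x, cyc_x, y, cyc_y)
print(loss)
```

With perfect cycles the λ-weighted term vanishes and only the adversarial terms remain, which is the regime the training in step S2 converges toward.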
9. The facial acupuncture point automatic positioning detection method based on the improved CycleGAN as claimed in claim 2, characterized in that: in step S1, a digital camera or a mobile phone front camera is used to collect a human face image, and acupuncture points are manually marked on the human face image to form an acupuncture point calibration image.
CN202110803402.5A 2021-07-14 2021-07-14 Facial acupuncture point automatic positioning detection system and method based on improved cycleGAN Active CN113537057B (en)


Publications (2)

Publication Number Publication Date
CN113537057A true CN113537057A (en) 2021-10-22
CN113537057B CN113537057B (en) 2022-11-01

Family

ID=78128196





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant