CN111666890A - Spine deformation crowd identification method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN111666890A
CN111666890A
Authority
CN
China
Prior art keywords
image
spine
identified
feature vector
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010513066.6A
Other languages
Chinese (zh)
Other versions
CN111666890B (en)
Inventor
唐子豪 (Tang Zihao)
刘莉红 (Liu Lihong)
刘玉宇 (Liu Yuyu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010513066.6A priority Critical patent/CN111666890B/en
Priority to PCT/CN2020/099253 priority patent/WO2021114623A1/en
Publication of CN111666890A publication Critical patent/CN111666890A/en
Application granted granted Critical
Publication of CN111666890B publication Critical patent/CN111666890B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for calculating health indices; for individual health risk assessment
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems


Abstract

The invention relates to the field of artificial intelligence, and discloses a method and device for identifying people with spinal deformation, together with computer equipment and a storage medium. The method comprises: acquiring image data and non-image data associated with a target to be identified; cropping the back region of the image data to obtain a back region image to be identified; applying image enhancement to the back region image to obtain an enhanced image of the region to be identified; extracting spine features from the enhanced image through a spine identification model to obtain a first feature vector diagram, and applying normalization and edge-weight processing to the non-image data through a data standardization model to obtain a second feature vector diagram; edge-filling the second feature vector diagram onto the first to obtain a third feature vector diagram; and extracting spine frequency-domain features through a spine graph convolutional network model, according to the spectral-domain method of the GCN, to obtain an identification result. The method and device thereby automatically identify the spinal-deformation category of the target to be identified.

Description

Spine deformation crowd identification method and device, computer equipment and storage medium
Technical Field
The invention relates to the field of artificial-intelligence image classification, and in particular to a method and device for identifying people with spinal deformation, computer equipment, and a storage medium.
Background
The spine is the central axis of the human body. Severe spinal deformation causes abnormal posture and motor dysfunction, and the accompanying thoracic deformity can lead to cardiopulmonary dysfunction, reducing quality of life and seriously affecting the physical and mental development of adolescents. If not prevented or discovered early, it affects not only the patient's stature and appearance but can also cause cardiopulmonary dysfunction, premature degeneration of the spine, pain, trunk imbalance, and even death. Spinal deformity refers to deformation of the spine in the horizontal and vertical directions, commonly called lateral bending (scoliosis) and kyphosis.
Existing spine deformation detection methods mainly include moiré image measurement, X-ray measurement, and the Adams forward-bending test. These prior-art schemes require manual physical measurement, involve complex detection steps, and suffer from low detection efficiency and high cost; X-ray measurement in particular exposes adolescents to radiation. Moreover, most existing schemes can only detect patients whose spines have already deformed, and cannot warn or protect people at risk.
Disclosure of Invention
The invention provides a spine deformation crowd identification method and device, computer equipment, and a storage medium that automatically identify the spinal-deformation category of a target through the spectral-domain method of a graph convolutional network (GCN). Because potential cases are also flagged, the method can alert people at risk and play a preventive role; it thus improves the accuracy and reliability of spine deformation identification while greatly reducing the identification cost.
A spine deformation crowd identification method comprises the following steps:
receiving a target identification instruction, and acquiring image data and non-image data associated with a unique code corresponding to a target to be identified; the image data is an image related to the back; the non-image data is information related to a target to be identified;
inputting the image data into a back region identification model, identifying the back region of the image data through the back region identification model, and acquiring the back region image to be identified, cropped by the back region identification model; the back region identification model is a deep convolutional neural network model built on a YOLO model framework;
carrying out image enhancement processing on the back area image to be identified to obtain an enhanced image of the area to be identified;
inputting the region-to-be-identified enhanced image into a spine identification model, extracting spine features in the region-to-be-identified enhanced image through the spine identification model, obtaining a first feature vector diagram output by the spine identification model according to the spine features, inputting the non-image data into a data standardization model, and performing normalization and edge weight processing on the non-image data through the data standardization model to obtain a second feature vector diagram;
performing edge filling on the second feature vector diagram to the first feature vector diagram to obtain a third feature vector diagram;
inputting the third feature vector diagram into the trained spine graph convolutional network model;
according to the spectral-domain method of the GCN, extracting the spine frequency-domain features in the third feature vector diagram through the spine graph convolutional network model, and acquiring the identification result output by the model according to those features; the identification result represents the spinal-deformation category of the target to be identified, the categories comprising a spinal lateral bending (scoliosis) population, a spinal kyphosis population, a potential spinal lateral bending population, a potential spinal kyphosis population, and a non-spinal-deformation population.
A spinal deformation crowd identification device, comprising:
the receiving module is used for receiving a target identification instruction and acquiring image data and non-image data associated with a unique code corresponding to a target to be identified; the image data is an image related to the back; the non-image data is information related to a target to be identified;
the identification module is used for inputting the image data into a back region identification model, identifying the back region of the image data through the back region identification model, and acquiring the back region image to be identified, cropped by the back region identification model; the back region identification model is a deep convolutional neural network model built on a YOLO model framework;
the enhancement module is used for carrying out image enhancement processing on the back area image to be identified to obtain an enhanced image of the area to be identified;
the acquisition module is used for inputting the enhanced image of the region to be identified into a spine identification model, extracting spine features in the enhanced image of the region to be identified through the spine identification model, acquiring a first feature vector diagram output by the spine identification model according to the spine features, inputting the non-image data into a data standardization model, and performing normalization and edge weight processing on the non-image data through the data standardization model to obtain a second feature vector diagram;
a filling module, configured to perform edge filling on the second feature vector diagram to the first feature vector diagram to obtain a third feature vector diagram;
the input module is used for inputting the third feature vector diagram into the trained spine graph convolutional network model;
the output module is used for extracting the spine frequency-domain features in the third feature vector diagram through the spine graph convolutional network model according to the spectral-domain method of the GCN, and for acquiring the identification result output by the model according to those features; the identification result represents the spinal-deformation category of the target to be identified, the categories comprising a spinal lateral bending (scoliosis) population, a spinal kyphosis population, a potential spinal lateral bending population, a potential spinal kyphosis population, and a non-spinal-deformation population.
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above spine deformation crowd identification method when executing the computer program.
A computer-readable storage medium storing a computer program which, when executed by a processor, carries out the steps of the above spine deformation crowd identification method.
According to the method, the device, the computer equipment and the storage medium for identifying the spinal deformation crowd, the image data and the non-image data associated with the unique code corresponding to the target to be identified are obtained by receiving the target identification instruction; inputting the image data into a back region identification model, and acquiring a back region image to be identified, which is captured by the back region identification model; carrying out image enhancement processing on the back area image to be identified to obtain an enhanced image of the area to be identified; inputting the region-to-be-identified enhanced image into a spine identification model, extracting spine features in the region-to-be-identified enhanced image through the spine identification model, obtaining a first feature vector diagram output by the spine identification model according to the spine features, inputting the non-image data into a data standardization model, and performing normalization and edge weight processing on the non-image data through the data standardization model to obtain a second feature vector diagram; performing edge filling on the second feature vector diagram to the first feature vector diagram to obtain a third feature vector diagram; and according to a frequency spectrum domain method in the GCN, extracting the spine frequency domain characteristics in the third characteristic vector diagram through the spine diagram convolution network model, and acquiring an identification result output by the spine diagram convolution network model according to the spine frequency domain characteristics.
The invention thus realizes target identification by acquiring image data and non-image data associated with the target to be identified; cropping a back region image from the image data; applying image enhancement to obtain an enhanced image of the region to be identified; extracting spine features through the spine identification model to obtain a first feature vector diagram while normalizing and edge-weighting the non-image data through the data standardization model to obtain a second feature vector diagram; edge-filling the second feature vector diagram onto the first to obtain a third feature vector diagram; and, according to the spectral-domain method of the GCN, extracting spine frequency-domain features from the third feature vector diagram through the spine graph convolutional network model to obtain the identification result. In this way, the spinal-deformation category of the target (spinal lateral bending population, spinal kyphosis population, potential spinal lateral bending population, potential spinal kyphosis population, or non-spinal-deformation population) is identified automatically from an image of the back and the related non-image information. The category can be identified quickly and accurately and people at risk can be alerted for prevention, so that the accuracy and reliability of spine deformation crowd identification are improved while the identification cost is greatly reduced.
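For intuition, a single propagation layer of a spectral GCN in the widely used normalized form H' = ReLU(D^(-1/2)(A + I)D^(-1/2) H W) can be sketched as below. This is an illustrative sketch only: the patent does not disclose its spine graph construction or trained weights, so the adjacency matrix, node features, and weight matrix here are toy assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One spectral-GCN propagation step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # adjacency with self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

A = np.array([[0.0, 1.0], [1.0, 0.0]])        # toy graph: two connected nodes
H = np.eye(2)                                 # toy node features
W = np.eye(2)                                 # toy (untrained) weights
out = gcn_layer(A, H, W)                      # each node averages with its neighbour
```

With these toy inputs every entry of `out` is 0.5: each node's feature mass is spread evenly over itself and its neighbour, which is exactly the smoothing behavior the spectral filter performs before the trained weights re-mix the features.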
Drawings
In order to describe the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application environment of a group identification method for spinal deformity according to an embodiment of the present invention;
FIG. 2 is a flow chart of a population identification method for spinal deformity in accordance with an embodiment of the present invention;
FIG. 3 is a flowchart illustrating the step S20 of the method for identifying people with spinal deformities according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating the step S30 of the method for identifying people with spinal deformities according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating the step S40 of the method for identifying people with spinal deformities according to an embodiment of the present invention;
FIG. 6 is a flowchart of step S40 of a population identification method for spinal deformity according to another embodiment of the present invention;
FIG. 7 is a flowchart illustrating the step S60 of the population group identification method for spinal deformity according to an embodiment of the present invention;
FIG. 8 is a schematic block diagram of a spinal deformity people recognition device in accordance with an embodiment of the present invention;
FIG. 9 is a schematic diagram of a computer device in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The spinal deformation crowd identification method provided by the invention can be applied in the application environment shown in Fig. 1, where a client (computer device) communicates with a server through a network. The client includes, but is not limited to, personal computers, notebook computers, smartphones, tablet computers, cameras, and portable wearable devices. The server may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In an embodiment, as shown in fig. 2, a method for identifying people with spinal deformity is provided, which mainly includes the following steps S10-S70:
s10, receiving a target identification instruction, and acquiring image data and non-image data associated with a unique code corresponding to a target to be identified; the image data is an image related to the back; the non-image data is information related to an object to be recognized.
Understandably, the target identification instruction is triggered when a target needs to be identified; the target to be identified is a person whose spine is to be checked for deformation; the unique code is the target's unique identification code; the image data is a photograph or image of the target's back; and the non-image data is information related to the target, such as gender, age, and occupation.
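As an illustration of how such non-image data might be turned into numeric features, the sketch below min-max scales age and one-hot encodes gender. This is a hypothetical stand-in: the patent does not publish its data standardization model, and the field names, value ranges, and encoding choices here are assumptions.

```python
def standardize_non_image_data(record):
    """Turn simple demographic fields into a numeric feature vector.

    Hypothetical encoding: min-max scale age into [0, 1] assuming a
    0-100 year range, then one-hot encode gender as two entries.
    """
    age = min(max(record["age"], 0), 100) / 100.0
    gender = [1.0, 0.0] if record["gender"] == "male" else [0.0, 1.0]
    return [age] + gender

features = standardize_non_image_data({"age": 15, "gender": "male"})
```

A real system would extend this to occupation and any other recorded attributes, and would also assign the edge weights the patent mentions; this sketch covers only the normalization half of that step.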
S20, inputting the image data into a back region identification model, identifying the back region of the image data through the back region identification model, and acquiring a to-be-identified back region image intercepted by the back region identification model; the back region identification model is a deep convolutional neural network model based on a YOLO model building frame.
Understandably, the back region identification model is a trained deep convolutional neural network built on a YOLO model framework; it identifies and locates the back region of the target to be identified in the image data. YOLO (You Only Look Once) is an algorithm that uses a single CNN (convolutional neural network) pass to directly predict the categories and regions of different targets. The network structure of the YOLO model can be chosen as required, e.g. YOLOv1, YOLOv2, YOLOv3, or YOLOv4. The back region image to be identified is obtained by cropping the region the model identifies; in this way only the effective region of the image data is extracted and interfering image information is removed.
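The detect-then-crop step can be pictured with the minimal sketch below. The patent's trained YOLO back-region model is not available, so `detect_back_region` is a hypothetical placeholder that returns a single (x, y, w, h) box; a real system would run YOLO inference at that point.

```python
import numpy as np

def detect_back_region(image):
    """Hypothetical stand-in for YOLO inference: returns one (x, y, w, h) box.

    Placeholder logic only: pretend the back occupies the central half
    of the frame. A trained detector would predict this box instead.
    """
    h, w = image.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)

def crop_back_region(image):
    """Crop the detected box out of the frame (the 'interception' step)."""
    x, y, w, h = detect_back_region(image)
    return image[y:y + h, x:x + w]

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # dummy camera frame
back = crop_back_region(frame)                    # central 240 x 320 crop
```

The important point is that downstream stages only ever see the cropped region, which is how the interfering background information is discarded.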
In an embodiment, as shown in fig. 3, in step S20, that is, performing back area recognition on the image data by the back area recognition model, and acquiring a back area image to be recognized intercepted by the back area recognition model includes:
s201, inputting a target back image into a back area recognition model in the back area recognition models, and simultaneously inputting a target back side image into a back side area recognition model in the back area recognition models; the image data includes the target back backside image and the target back side image.
Understandably, the back region identification model comprises a back (rear-view) region identification model and a back-side (side-view) region identification model, and the image data comprises the target back image and the target back side image. Both sub-models are trained deep convolutional neural networks built on a YOLO framework: one identifies and locates the rear-view back region, the other the side-view region. The target back image is a photograph of the back of the target taken while the target is wearing underwear or the back is bare; the target back side image is the corresponding side-view photograph.
S202, according to a YOLO algorithm, the back area recognition model is used for recognition, the back area image to be recognized, which only contains the back of the target to be recognized, is intercepted, and meanwhile, the back side area recognition model is used for recognition, and the back side area image to be recognized, which only contains the back side of the target to be recognized, is intercepted.
Understandably, according to the YOLO algorithm, back key points are identified in the target back image through the back region identification model. The back key points comprise the left shoulder point, the right shoulder point, the upper, middle, and lower points of the spine, and the left and right waist points. The position and region of the back are located from these key points and cropped to obtain the back region image to be identified, which contains only the back of the target and excludes the neck and arms. Likewise, side key points are identified in the target back side image through the back-side region identification model: the upper, middle, and lower arm points, the neck point, the upper, middle, and lower side-spine points, and the chest point. The side position and region are located from these key points and cropped to obtain the back-side region image to be identified, which contains only the back side of the target.
S203, determining the back area image to be recognized and the back side area image to be recognized as the back area image to be recognized.
Understandably, the back region image and the back-side region image are together recorded as the back region image to be identified.
By cropping both the rear-view and side-view images of the target's back, the invention captures the back in two dimensions, the rear view corresponding to the horizontal dimension and the side view to the vertical dimension. This provides effective images for improving the accuracy and reliability of identification and improves identification efficiency.
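Once the key points are predicted, cropping reduces to taking their bounding box with a small margin, as in the following sketch. The key-point names and coordinates are illustrative stand-ins; only the bounding-box arithmetic reflects the locate-and-crop step described above.

```python
def bbox_from_keypoints(points, margin=10):
    """points: dict name -> (x, y). Returns (x0, y0, x1, y1) padded by margin."""
    xs = [p[0] for p in points.values()]
    ys = [p[1] for p in points.values()]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

# Hypothetical predicted back key points (pixel coordinates).
kps = {"left_shoulder": (100, 50), "right_shoulder": (200, 50),
       "left_waist": (110, 220), "right_waist": (190, 220)}
box = bbox_from_keypoints(kps)   # (90, 40, 210, 230)
```

The same helper would serve the side view with the side key points (arm, neck, side-spine, and chest points) substituted in.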
And S30, performing image enhancement processing on the back area image to be recognized to obtain an enhanced image of the area to be recognized.
Understandably, the image enhancement processing refers to graying, denoising, and edge enhancement of the back image to be identified. Graying takes the brightness of the three color components (red, green, blue) of a color image as the gray value of a grayscale image. The denoising algorithm can be chosen as required, e.g. spatial-domain filtering, transform-domain filtering, partial differential equation methods, variational methods, or morphological noise filtering; preferably, spatial-domain filtering is used. Edge enhancement smooths the image, detects edge points, locates edges, and sharpens them. The enhanced image of the region to be identified is the image obtained after this processing. Image enhancement thus strengthens and optimizes the features related to spinal deformation in the back region image, which facilitates identification of spinal deformation and improves its accuracy.
In an embodiment, as shown in fig. 4, in step S30, that is, performing image enhancement processing on the image of the back region to be recognized to obtain an enhanced image of the region to be recognized, the method includes:
s301, performing graying processing on the to-be-identified back and back area image in the to-be-identified back area image to obtain a back and back gray image, and performing graying processing on the to-be-identified back and side area image in the to-be-identified back area image to obtain a back and side gray image.
Understandably, the back region image to be identified comprises three channels (red, green, blue); channel separation yields a red channel image, a green channel image, and a blue channel image, i.e. each pixel has a red component value, a green component value, and a blue component value. Graying computes, by a weighted-average method, a gray component value from the red (R), green (G), and blue (B) component values of each pixel. The formula of the weighted-average method can be set as required; for example, it may be set as Y = 0.299R + 0.587G + 0.114B, where Y is the gray component value of a pixel and R, G, and B are its red, green, and blue component values. This yields the back gray image of the back region image to be identified; similarly, the back-side region image to be identified is grayed to obtain the back-side gray image.
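The weighted-average graying formula above can be applied to a whole image in one vectorized step, as in this small NumPy sketch. Rounding before the cast to uint8 is a choice made here for numerical safety; the patent does not specify it.

```python
import numpy as np

def to_gray(rgb):
    """Weighted-average graying: Y = 0.299 R + 0.587 G + 0.114 B.

    rgb: H x W x 3 uint8 array. Returns an H x W uint8 gray image.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).round().astype(np.uint8)

red_pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)
gray = to_gray(red_pixel)   # 255 * 0.299 rounds to 76
```

A pure white pixel maps to 255 because the three weights sum to 1, which is what makes this a weighted average rather than an arbitrary mix.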
S302, performing image denoising and edge enhancement processing on the back grayscale image to obtain a back enhanced image, and performing image denoising and edge enhancement processing on the back-side grayscale image to obtain a back-side enhanced image.
Understandably, denoising (also called noise reduction) is performed on the back grayscale image in the spatial domain through a spatial-domain filtering algorithm, which can be selected as required, for example the neighborhood average method, median filtering, or low-pass filtering. Preferably, the neighborhood average method is selected: the value of each pixel in the back grayscale image is averaged with the values of the pixels adjacent to it, and the average is taken as the denoised value of that pixel, yielding the denoised back grayscale image. Edge enhancement processing is then applied to the denoised back grayscale image to obtain the back enhanced image; edge enhancement smooths the image, detects edge points, locates the edges, and sharpens them. Likewise, image denoising and edge enhancement are performed on the back-side grayscale image to obtain the back-side enhanced image.
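A minimal sketch of the preferred neighborhood average method, assuming a 3 × 3 neighborhood and edge replication at the borders (both choices are illustrative; the patent does not fix them):

```python
import numpy as np

def neighborhood_average(img, k=3):
    """Replace each pixel with the mean of its k x k neighborhood;
    borders are handled by replicating the edge pixels."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# A single noisy spike at the center is averaged away.
noisy = np.array([[10., 10., 10.],
                  [10., 100., 10.],
                  [10., 10., 10.]])
smooth = neighborhood_average(noisy)
```

The center value drops from 100 to the neighborhood mean of 20, which is the smoothing effect the denoising step relies on.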
S303, determining the back enhanced image and the back side enhanced image as the enhanced image of the area to be identified.
Understandably, the back enhanced image and the back-side enhanced image are marked as the to-be-identified region enhanced image.
According to the method, performing graying, denoising, and edge enhancement on the back image and the back-side image yields an optimized to-be-identified region enhanced image, which strengthens the features related to spinal deformation in the to-be-identified back region image, facilitates the identification of spinal deformation, and improves identification accuracy.
S40, inputting the region-to-be-identified enhanced image into a spine identification model, extracting spine features in the region-to-be-identified enhanced image through the spine identification model, obtaining a first feature vector diagram output by the spine identification model according to the spine features, inputting the non-image data into a data standardization model, and performing normalization and edge weight processing on the non-image data through the data standardization model to obtain a second feature vector diagram.
Understandably, the spine recognition model is a trained deep convolutional neural network model used for extracting the spine features of the to-be-identified region enhanced image and recognizing and outputting the first feature vector diagram according to those spine features. The network structure of the spine recognition model can be set as required; for example, it can be an Inception V4 network structure, a VGG16 network structure, or the like. The spine features are feature vectors related to the shape, curvature, and so on of the spine, and the first feature vector diagram is an array matrix, output according to the spine features extracted from the to-be-identified region enhanced image, that contains the feature vectors corresponding to the spine features. The data standardization model is a model that determines, from collected historical non-image data, the correlation of each dimension in the non-image data with the degree of spinal deformation and the existence of potential risk; it performs normalization and edge weight processing on the non-image data. Normalization standardizes the data of each dimension numerically according to a rule matched with that dimension, producing data of a unified standard; edge weight processing weights each normalized value according to the edge weight parameter matched with its dimension. The second feature vector diagram is the array matrix obtained after the non-image data is normalized and edge-weighted.
In one embodiment, as shown in FIG. 5, the spinal features include a lateral bending feature and a kyphosis feature; in S40, that is, the extracting, by the spine recognition model, the spine feature in the enhanced image of the region to be recognized, and obtaining a first feature vector diagram output by the spine recognition model according to the spine feature includes:
S401, performing lateral bending feature extraction on the back enhanced image through a lateral bending recognition model, and simultaneously performing kyphosis feature extraction on the back-side enhanced image through a kyphosis recognition model; the spine recognition model comprises the lateral bending recognition model and the kyphosis recognition model.
Understandably, the spine recognition model comprises the lateral bending recognition model and the kyphosis recognition model. The lateral bending recognition model is a neural network model trained to extract the lateral bending features from a large number of rear-view images containing backs, and its network structure can be set as required; the kyphosis recognition model is a neural network model trained to extract the kyphosis features from a large number of side-view images containing backs, and its network structure can likewise be set as required. The lateral bending features are features such as asymmetric trunk sides, a curved trunk, and uneven shoulders; the kyphosis features are features such as a back bulge and an arc-shaped back profile. The back enhanced image is input into the lateral bending recognition model, which extracts the lateral bending features in it; simultaneously, the back-side enhanced image is input into the kyphosis recognition model, which extracts the kyphosis features in it.
S402, obtaining a lateral bending feature vector diagram output by the lateral bending recognition model according to the lateral bending features, and simultaneously obtaining a kyphosis feature vector diagram output by the kyphosis recognition model according to the kyphosis features.
Understandably, the lateral bending features are extracted after the lateral bending recognition model processes the back enhanced image through convolution layers, pooling layers, and a fully connected layer; the extracted features are arranged and output as a feature vector diagram, namely the lateral bending feature vector diagram, which is a matrix diagram containing a plurality of feature vectors, for example a 100 × 100 matrix. Meanwhile, the kyphosis features are extracted after the kyphosis recognition model processes the back-side enhanced image through convolution layers, pooling layers, and a fully connected layer; the extracted kyphosis features are output as a feature vector diagram, namely the kyphosis feature vector diagram, which is likewise a matrix diagram containing a plurality of feature vectors, for example a 100 × 100 matrix.
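The convolution-and-pooling chain each branch applies can be illustrated with a minimal numpy sketch; the 6 × 6 input, the 2 × 2 kernel, and the pooling stride are hypothetical stand-ins, not the patent's actual networks:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    return np.array([[np.sum(img[i:i + kh, j:j + kw] * kernel)
                      for j in range(w)] for i in range(h)])

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return np.array([[x[i * s:(i + 1) * s, j * s:(j + 1) * s].max()
                      for j in range(w)] for i in range(h)])

img = np.arange(36, dtype=float).reshape(6, 6)  # stand-in enhanced image
kernel = np.array([[1., 0.], [0., -1.]])        # toy diagonal-difference kernel
feature_map = max_pool(conv2d(img, kernel))     # 6x6 -> 5x5 -> 2x2
```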
S403, splicing the lateral bending feature vector diagram and the kyphosis feature vector diagram to obtain the first feature vector diagram.
Understandably, the lateral bending feature vector diagram and the kyphosis feature vector diagram are connected vertically, one matrix above the other, to obtain the first feature vector diagram; for example, if the lateral bending feature vector diagram is a 100 × 100 matrix and the kyphosis feature vector diagram is a 100 × 100 matrix, the first feature vector diagram is a 200 × 100 matrix.
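The splicing in S403 amounts to vertical concatenation; a sketch with zero/one placeholders standing in for real branch outputs (stacking doubles the row count, so two 100 × 100 maps give a 200 × 100 matrix):

```python
import numpy as np

scoliosis_map = np.zeros((100, 100))  # placeholder lateral-bending branch output
kyphosis_map = np.ones((100, 100))    # placeholder kyphosis branch output

# Up-down (vertical) concatenation of the two per-branch maps.
first_map = np.vstack([scoliosis_map, kyphosis_map])
```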
According to the invention, the lateral bending recognition model in the spine recognition model performs lateral bending feature extraction on the back enhanced image and recognizes the lateral bending feature vector diagram, while the kyphosis recognition model in the spine recognition model performs kyphosis feature extraction on the back-side enhanced image and recognizes the kyphosis feature vector diagram, so that scoliosis and kyphosis in spinal deformation can be recognized in a more targeted manner, improving the accuracy and reliability of the identification.
In an embodiment, as shown in fig. 6, in the step S40, the normalizing and edge weighting processing on the non-image data by the data normalization model to obtain a second feature vector diagram includes:
S404, obtaining each dimension in the non-image data and the dimension data corresponding to each dimension.
Understandably, the non-image data is information related to the target to be identified and includes multiple dimensions, which can be set as required. The dimension data corresponding to each dimension in the non-image data is obtained; the dimension data is the content entered by the target to be identified for that dimension.
In an embodiment, the dimensions in the non-image data include a target gender, a target age, a target occupation, and target information.
Understandably, the target gender is the gender of the target to be identified, the target age is the age of the target to be identified, the target occupation is the occupation engaged by the target to be identified, the target information is family information related to the target to be identified or a family map, and the target information can be set according to requirements.
S405, acquiring a normalization rule and an edge weight parameter matched with each dimension.
Understandably, the normalization rule is a rule for standardizing the dimension data matched with a dimension, that is, applying a unified conversion to the dimension data of the same dimension, for example: converting the dimension data "male" of the target gender dimension into 1 and converting the dimension data "female" into 0. The edge weight parameter is a weighting parameter preset according to each dimension and matched with it; it is obtained through analysis of historical statistics and indicates a measure of the potential degree of association between the dimension and the spine features.
S406, performing normalization processing on all the dimension data according to the normalization rule matched with each dimension to obtain a dimension standard value corresponding to each dimension.
Understandably, the normalization processing is to convert dimension data into a unified format according to a normalization rule, and perform unified conversion on all the dimension data according to the normalization rule matched with each dimension to obtain a dimension standard value corresponding to each dimension.
And S407, performing edge weighting processing on all the dimension standard values according to the edge weight parameters matched with the dimensions to obtain weighted values corresponding to the dimensions.
Understandably, each dimension standard value is multiplied by the edge weight parameter matched with its dimension to obtain the corresponding weighted value; that is, edge weighting scales the dimension standard value according to the edge weight parameter, widening the gaps between dimensions according to their potential degree of association with the spine features, so that identification can be performed objectively and scientifically.
S408, expanding all the weighted values to obtain the second feature vector diagram.
Understandably, the expansion copies and fills all the weighted values up to a preset matrix size: all the weighted values are copied along a row up to the horizontal length of the preset matrix size, the remaining positions that cannot be filled by copying are filled with zeros, and the completed row is then copied down to the vertical length of the preset matrix size, thereby obtaining the second feature vector diagram.
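Steps S404 to S408 can be sketched end to end as follows; the dimensions, normalization rules, edge weight parameters, and the 8 × 8 preset matrix size are all hypothetical examples, not values from the patent:

```python
import numpy as np

record = {"gender": "male", "age": 14, "occupation": "student"}  # sample non-image data

rules = {  # S405/S406: one normalization rule per dimension (illustrative)
    "gender": lambda v: 1.0 if v == "male" else 0.0,
    "age": lambda v: v / 100.0,
    "occupation": lambda v: 1.0 if v == "student" else 0.0,
}
edge_weights = {"gender": 0.5, "age": 2.0, "occupation": 1.5}  # S407 (illustrative)

# S406 + S407: normalize each dimension, then scale by its edge weight.
weighted = np.array([rules[d](record[d]) * edge_weights[d] for d in rules])

def expand(values, size):
    """S408: repeat the weighted values along a row up to `size`,
    zero-fill the remainder, then repeat the row `size` times."""
    reps = size // len(values)
    row = np.concatenate([np.tile(values, reps),
                          np.zeros(size - reps * len(values))])
    return np.tile(row, (size, 1))

second_map = expand(weighted, 8)  # the second feature vector diagram
```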
According to the method, normalization and edge weight processing of the non-image data generate the second feature vector diagram related to the features of spinal deformation, so that the third feature vector diagram can better establish the relationships among the feature vectors and the reliability of identification is improved.
And S50, performing edge filling on the second feature vector diagram to the first feature vector diagram to obtain a third feature vector diagram.
Understandably, the edge filling refers to filling on the array edges on the basis of the first feature vector diagram, that is, filling the second feature vector diagram to a preset size above and below the matrix of the first feature vector diagram to obtain the third feature vector diagram.
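A sketch of the edge filling in S50, under the assumption that the second feature vector diagram has been brought to the same width as the first and is stacked above and below it (the patent leaves the exact preset size open):

```python
import numpy as np

first_map = np.zeros((200, 100))  # placeholder first feature vector diagram
second_map = np.ones((8, 100))    # placeholder second map, width-matched

# Fill the second map onto the top and bottom edges of the first map.
third_map = np.vstack([second_map, first_map, second_map])
```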
And S60, inputting the third feature vector diagram into the trained spine diagram convolution network model.
Understandably, the spine graph convolution network model is a trained neural network model based on a graph convolution network structure; it can extract spine frequency-domain features according to the feature vectors of the elements in the third feature vector diagram and the association relationships among the elements, classify and recognize the extracted spine frequency-domain features, and finally output the category of the corresponding spine deformation population.
In an embodiment, as shown in fig. 7, before step S60, that is, before inputting the third feature vector diagram into the trained spine graph convolution network model, the method includes:
S601, acquiring a sample data set; the sample data set comprises sample data and sample labels in one-to-one correspondence with the sample data; the sample data are historical third feature vector diagrams; the sample labels include a lateral bending population, a kyphosis population, a potential lateral bending population, a potential kyphosis population, and a non-spinal-deformation population.
Understandably, a historical third feature vector diagram is obtained by processing sample image data and the associated sample non-image data through steps S20 to S50. The sample data set includes a plurality of sample data, each associated with one sample label; the sample labels include a lateral bending population, a kyphosis population, a potential lateral bending population, a potential kyphosis population, and a non-spinal-deformation population.
S602, inputting the sample data into a spine graph convolutional neural network model containing initial parameters.
Understandably, the initial parameters of the spine graph convolutional neural network model may be set as required; for example, they may be obtained by transferring, through transfer learning, all the parameters of another graph convolution model related to back identification, or they may all be set to preset values.
S603, according to a spectral-domain method in the GCN, extracting the spine frequency-domain features in the sample data through the spine graph convolutional neural network model, and obtaining a sample result output by the model according to the spine frequency-domain features.
Understandably, the GCN is a Graph Convolutional Network, which identifies existing or potential categories by extracting relevant spatial features from a topological graph whose vertices and edges establish corresponding relationships. The spectral-domain method studies the properties of the topological graph, and obtains the GCN classification result, by Fourier-transforming the eigenvalues and eigenvectors of the Laplacian matrix corresponding to the topological graph. The spine frequency-domain features are features related to spinal deformation obtained by applying the spectral-domain method, and the spine graph convolutional neural network model identifies the sample result of the sample data according to the extracted spine frequency-domain features; the sample results include a lateral bending population, a kyphosis population, a potential lateral bending population, a potential kyphosis population, and a non-spinal-deformation population.
S604, determining a loss value according to the sample result and the sample label corresponding to the sample data.
Understandably, the sample result and the sample label are input into a loss function in the spine graph convolutional neural network model, and the loss value corresponding to the sample data is obtained through calculation. The loss function can be set as required; it is the logarithm of the difference between the sample result and the sample label and indicates the gap between the two.
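One common log-based loss consistent with the description above is the cross-entropy, the negative logarithm of the probability the model assigns to the true label; whether the patent's loss is exactly this form is not specified, so the following is only a plausible sketch:

```python
import math

def cross_entropy(pred_probs, true_idx):
    """Negative log-probability of the true class."""
    return -math.log(pred_probs[true_idx])

# Hypothetical softmax output over the five spine-deformation categories,
# with the true label being category index 1.
probs = [0.05, 0.7, 0.1, 0.1, 0.05]
loss = cross_entropy(probs, 1)  # small when the model is confident and correct
```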
S605, when the loss value reaches a preset convergence condition, recording the converged spine graph convolutional neural network model as the trained spine graph convolution network model.
Understandably, the convergence condition may be that the loss value is smaller than a set threshold; that is, when the loss value is smaller than the set threshold, the converged spine graph convolutional neural network model is recorded as the trained spine graph convolution network model.
In an embodiment, after the step S604, that is, after determining the loss value according to the sample result and the sample label corresponding to the sample data, the method further includes:
S606, when the loss value does not reach the preset convergence condition, iteratively updating the initial parameters of the spine graph convolutional neural network model until the loss value reaches the preset convergence condition, and recording the converged model as the trained spine graph convolution network model.
Understandably, the convergence condition may also be that the loss value becomes small and no longer decreases after 10000 calculations; that is, when the loss value is small and no longer decreases after 10000 calculations, training is stopped and the converged spine graph convolutional neural network model is recorded as the trained spine graph convolution network model.
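The two convergence conditions (loss below a set threshold, or loss no longer decreasing after many calculations) can be combined in a training loop like the following; the threshold, patience, and dummy loss sequence are illustrative only:

```python
def train(step, max_iter=10000, threshold=1e-3, patience=100):
    """Run `step()` (one update returning the current loss) until the loss
    falls below `threshold` or fails to decrease for `patience` steps."""
    best, stall = float("inf"), 0
    for i in range(max_iter):
        loss = step()
        if loss < threshold:
            return i, loss          # condition 1: below the set threshold
        if loss < best - 1e-9:
            best, stall = loss, 0
        else:
            stall += 1
            if stall >= patience:
                return i, loss      # condition 2: loss has stopped decreasing
    return max_iter - 1, best

# Dummy loss sequence standing in for real forward/backward passes.
losses = iter([1.0, 0.5, 0.1, 0.0005])
step_count, final_loss = train(lambda: next(losses))
```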
Therefore, when the loss value does not reach the preset convergence condition, the initial parameters of the spine graph convolutional neural network model are continuously updated and iterated, so that they keep moving toward an accurate recognition result and the accuracy of the recognition result becomes higher and higher.
S70, extracting the spine frequency domain feature in the third feature vector diagram through the spine diagram convolution network model according to a frequency spectrum domain method in the GCN, and acquiring an identification result output by the spine diagram convolution network model according to the spine frequency domain feature; the identification result represents the category of the spinal deformation population of the target to be identified, and the category of the spinal deformation population comprises a lateral bending population, a spinal kyphosis population, a potential spinal lateral bending population, a potential spinal kyphosis population and a non-spinal deformation population.
Understandably, the GCN is a Graph Convolutional Network, which identifies existing or potential categories by extracting relevant spatial features from a topological graph whose vertices and edges establish corresponding relationships. The spectral-domain method studies the properties of the topological graph, and obtains the GCN classification result, by Fourier-transforming the eigenvalues and eigenvectors of the Laplacian matrix corresponding to the topological graph. The spine graph convolution network model is used to identify the back photographs of the target to be identified, determine the category of the spine deformation population to which the target belongs, and prompt potential populations; after the category of the spine deformation population is determined, a corresponding reminder or prevention prompt is output.
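The spectral-domain idea, Fourier-transforming graph signals via the eigendecomposition of the graph Laplacian, can be illustrated on a toy four-node path graph (the adjacency matrix and node signal are hypothetical):

```python
import numpy as np

# Adjacency matrix of a 4-node path graph: 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A  # combinatorial graph Laplacian

# The Laplacian's eigenvectors form the graph Fourier basis; its
# eigenvalues play the role of frequencies.
eigvals, eigvecs = np.linalg.eigh(L)

signal = np.array([1.0, 2.0, 3.0, 4.0])  # a feature vector on the nodes
spectrum = eigvecs.T @ signal            # graph Fourier transform
recovered = eigvecs @ spectrum           # inverse transform round-trips
```

Spectral GCN layers filter `spectrum` before transforming back; the round-trip here only shows that the eigenvector basis is orthonormal.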
The method comprises the steps of receiving a target identification instruction, and acquiring image data and non-image data associated with a unique code corresponding to a target to be identified; inputting the image data into a back region identification model, and acquiring a back region image to be identified, which is captured by the back region identification model; carrying out image enhancement processing on the back area image to be identified to obtain an enhanced image of the area to be identified; inputting the region-to-be-identified enhanced image into a spine identification model, extracting spine features in the region-to-be-identified enhanced image through the spine identification model, obtaining a first feature vector diagram output by the spine identification model according to the spine features, inputting the non-image data into a data standardization model, and performing normalization and edge weight processing on the non-image data through the data standardization model to obtain a second feature vector diagram; performing edge filling on the second feature vector diagram to the first feature vector diagram to obtain a third feature vector diagram; and according to a frequency spectrum domain method in the GCN, extracting the spine frequency domain characteristics in the third characteristic vector diagram through the spine diagram convolution network model, and acquiring an identification result output by the spine diagram convolution network model according to the spine frequency domain characteristics.
The method and the device realize the identification of the target by acquiring the image data and the non-image data associated with the target to be identified; intercepting a to-be-identified back region image of the back region in the image data; performing image enhancement processing on the to-be-identified back region image to obtain the to-be-identified region enhanced image; extracting spine features in the to-be-identified region enhanced image through a spine recognition model to obtain a first feature vector diagram, while performing normalization and edge weight processing on the non-image data through a data standardization model to obtain a second feature vector diagram; performing edge filling of the second feature vector diagram onto the first feature vector diagram to obtain a third feature vector diagram; and, according to the spectral-domain method in the GCN, extracting the spine frequency-domain features in the third feature vector diagram through the spine graph convolution network model and acquiring the identification result the model outputs according to those features. Thus, the invention automatically identifies, according to an image of the back and the relevant non-image information of the target to be identified, the category of the spine deformation population to which the target belongs (including a lateral bending population, a kyphosis population, a potential lateral bending population, a potential kyphosis population, and a non-spinal-deformation population), identifies that category quickly and accurately, and reminds potential populations so as to play a preventive role. Accordingly, the accuracy and reliability of identifying the spine deformation population are improved, the identification cost is greatly reduced, and a reminding effect is achieved for potential populations.
In an embodiment, a spinal deformation population identification device is provided, and the spinal deformation population identification device corresponds to the spinal deformation population identification method in the embodiment one to one. As shown in fig. 8, the spinal deformity people identification apparatus includes a receiving module 11, an identifying module 12, an enhancing module 13, an obtaining module 14, a filling module 15, an input module 16, and an output module 17. The functional modules are explained in detail as follows:
the receiving module 11 is configured to receive a target identification instruction, and acquire image data and non-image data associated with a unique code corresponding to a target to be identified; the image data is an image related to the back; the non-image data is information related to a target to be identified;
the recognition module 12 is configured to input the image data into a back region recognition model, perform back region recognition on the image data through the back region recognition model, and acquire a to-be-recognized back region image captured by the back region recognition model; the back region identification model is a deep convolutional neural network model based on a YOLO model building frame;
the enhancement module 13 is configured to perform image enhancement processing on the to-be-identified back region image to obtain an to-be-identified region enhanced image;
the obtaining module 14 is configured to input the to-be-identified region enhanced image into a spine identification model, extract spine features in the to-be-identified region enhanced image through the spine identification model, obtain a first feature vector diagram output by the spine identification model according to the spine features, input the non-image data into a data normalization model, and perform normalization and edge weight processing on the non-image data through the data normalization model to obtain a second feature vector diagram;
a filling module 15, configured to perform edge filling on the second feature vector diagram to the first feature vector diagram to obtain a third feature vector diagram;
an input module 16, configured to input the third feature vector diagram into the trained spine map convolutional network model;
the output module 17 is configured to extract, according to a frequency spectrum domain method in the GCN, the spine frequency domain feature in the third feature vector diagram through the spine diagram convolution network model, and obtain an identification result output by the spine diagram convolution network model according to the spine frequency domain feature; the identification result represents the category of the spinal deformation population of the target to be identified, and the category of the spinal deformation population comprises a lateral bending population, a spinal kyphosis population, a potential spinal lateral bending population, a potential spinal kyphosis population and a non-spinal deformation population.
For the specific definition of the spinal deformation population identification device, reference may be made to the above definition of the spinal deformation population identification method, and details are not repeated here. The modules in the spinal deformation population identification device can be wholly or partially implemented by software, hardware, or a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or can be stored in software form in a memory of the computer device, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a spinal deformation population identification method.
In one embodiment, a computer device is provided, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the spine deformation population identification method in the above embodiments is implemented.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, implements the spinal deformation population identification method in the above-described embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by instructing relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A spine deformation crowd identification method, characterized by comprising the following steps:
receiving a target identification instruction, and acquiring image data and non-image data associated with a unique code corresponding to a target to be identified; the image data is an image related to the back; the non-image data is information related to the target to be identified;
inputting the image data into a back region identification model, identifying the back region of the image data through the back region identification model, and acquiring the back region image to be identified cropped out by the back region identification model; the back region identification model is a deep convolutional neural network model built on the YOLO framework;
performing image enhancement processing on the back region image to be identified to obtain an enhanced image of the region to be identified;
inputting the enhanced image of the region to be identified into a spine identification model, extracting spine features from the enhanced image through the spine identification model, and obtaining a first feature vector diagram output by the spine identification model according to the spine features; inputting the non-image data into a data standardization model, and performing normalization and edge-weighting processing on the non-image data through the data standardization model to obtain a second feature vector diagram;
performing edge filling of the second feature vector diagram onto the first feature vector diagram to obtain a third feature vector diagram;
inputting the third feature vector diagram into the trained spine graph convolution network model;
extracting, according to the spectral-domain method of the graph convolutional network (GCN), the spine frequency-domain features in the third feature vector diagram through the spine graph convolution network model, and obtaining the identification result output by the spine graph convolution network model according to the spine frequency-domain features; the identification result represents the spinal deformation population category of the target to be identified, the categories comprising a spinal lateral bending population, a spinal kyphosis population, a potential spinal lateral bending population, a potential spinal kyphosis population, and a non-spinal-deformation population.
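The final step of claim 1 classifies the filled feature diagram with a spectral-domain graph convolution. The patent does not disclose the layer itself; as a minimal sketch, the commonly used first-order spectral approximation can be written as below. The graph, feature, and weight shapes are illustrative assumptions, not the patent's actual model.

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One first-order spectral graph-convolution step:
    H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adjacency + np.eye(adjacency.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))  # D^-1/2
    support = d_inv_sqrt @ a_hat @ d_inv_sqrt               # symmetric normalization
    return np.maximum(0.0, support @ features @ weights)    # linear map + ReLU

# toy graph: 3 nodes in a chain, 2 input features, 4 output channels
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 2))
w = rng.normal(size=(2, 4))
out = gcn_layer(adj, h, w)
print(out.shape)  # (3, 4)
```

Stacking such layers and pooling the node outputs into a softmax over the five population categories would complete the classifier; those details are not specified in the claims.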
2. The spine deformation crowd identification method according to claim 1, wherein the identifying the back region of the image data through the back region identification model to acquire the back region image to be identified cropped out by the back region identification model comprises:
inputting the target dorsal back image into a dorsal region recognition model among the back region identification models, and simultaneously inputting the target lateral back image into a lateral region recognition model among the back region identification models; the image data comprises the target dorsal back image and the target lateral back image;
according to the YOLO algorithm, identifying through the dorsal region recognition model and cropping out a dorsal region image to be identified that contains only the back of the target to be identified, and identifying through the lateral region recognition model and cropping out a lateral region image to be identified that contains only the back side of the target to be identified;
determining the dorsal region image to be identified and the lateral region image to be identified as the back region image to be identified.
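The cropping step of claim 2 (extracting an image containing only the back) reduces, for a YOLO-style detector, to slicing the detected bounding box out of the image array. A minimal sketch, assuming a normalized (cx, cy, w, h) box encoding — the claim does not specify the box format:

```python
import numpy as np

def crop_detection(image, box):
    """Crop a YOLO-style detection (cx, cy, w, h, all normalized to [0, 1])
    out of an H x W image array."""
    h, w = image.shape[:2]
    cx, cy, bw, bh = box
    x1 = max(int((cx - bw / 2) * w), 0)
    y1 = max(int((cy - bh / 2) * h), 0)
    x2 = min(int((cx + bw / 2) * w), w)
    y2 = min(int((cy + bh / 2) * h), h)
    return image[y1:y2, x1:x2]

img = np.zeros((100, 200))                       # 100 rows x 200 columns
patch = crop_detection(img, (0.5, 0.5, 0.2, 0.4))
print(patch.shape)  # (40, 40)
```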
3. The spine deformation crowd identification method according to claim 1, wherein the performing image enhancement processing on the back region image to be identified to obtain the enhanced image of the region to be identified comprises:
performing graying processing on the dorsal region image among the back region images to be identified to obtain a dorsal gray image, and simultaneously performing graying processing on the lateral region image among the back region images to be identified to obtain a lateral gray image;
performing image denoising and edge enhancement processing on the dorsal gray image to obtain a dorsal enhanced image, and performing image denoising and edge enhancement processing on the lateral gray image to obtain a lateral enhanced image;
determining the dorsal enhanced image and the lateral enhanced image as the enhanced image of the region to be identified.
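Claim 3 names three operations — graying, denoising, and edge enhancement — without fixing the filters. A sketch of one plausible pipeline, using luminance-weighted graying, a 3x3 box blur as the denoiser, and unsharp masking for edge enhancement; all three filter choices are illustrative stand-ins, not the patent's disclosed implementation:

```python
import numpy as np

def enhance(rgb):
    """Grayscale -> 3x3 mean denoise -> unsharp-mask edge enhancement."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # luminance graying
    padded = np.pad(gray, 1, mode="edge")
    # 3x3 box blur as a crude denoiser
    blur = sum(padded[i:i + gray.shape[0], j:j + gray.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    # unsharp masking: boost the detail that the blur removed
    return np.clip(gray + 1.5 * (gray - blur), 0, 255)

img = np.random.default_rng(1).uniform(0, 255, size=(8, 8, 3))
out = enhance(img)
print(out.shape)  # (8, 8)
```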
4. The spine deformation crowd identification method according to claim 1, wherein the spine features include lateral bending features and kyphosis features;
the extracting, through the spine identification model, the spine features in the enhanced image of the region to be identified to obtain the first feature vector diagram output by the spine identification model according to the spine features comprises:
performing lateral bending feature extraction on the dorsal enhanced image through a lateral bending recognition model, and simultaneously performing kyphosis feature extraction on the lateral enhanced image through a kyphosis recognition model; the spine identification model comprises the lateral bending recognition model and the kyphosis recognition model;
acquiring a lateral bending feature vector diagram output by the lateral bending recognition model according to the lateral bending features, and acquiring a kyphosis feature vector diagram output by the kyphosis recognition model according to the kyphosis features;
splicing the lateral bending feature vector diagram and the kyphosis feature vector diagram to obtain the first feature vector diagram.
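The splicing of claim 4, together with the edge filling that claim 1 later applies to join the non-image features, are plain array operations. A sketch under assumed shapes — the real feature-diagram dimensions are not given in the patent:

```python
import numpy as np

# assumed shapes; the patent does not disclose real feature-diagram dimensions
lateral_map = np.ones((4, 3))         # lateral bending feature vector diagram
kyphosis_map = np.full((4, 3), 2.0)   # kyphosis feature vector diagram

# claim 4: splice the two diagrams into the first feature vector diagram
first = np.hstack([lateral_map, kyphosis_map])   # (4, 6)

# claim 1: edge-fill the second diagram onto the first to get the third
second = np.array([[0.75, 0.8, 0.6]])            # standardized non-image features
padded = np.pad(second, ((0, 0), (0, first.shape[1] - second.shape[1])))
third = np.vstack([first, padded])
print(third.shape)  # (5, 6)
```

Zero-padding the narrower non-image row before stacking keeps the combined diagram rectangular so it can be fed to the graph convolution network as a single input.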
5. The spine deformation crowd identification method according to claim 1, wherein the performing normalization and edge-weighting processing on the non-image data through the data standardization model to obtain the second feature vector diagram comprises:
obtaining each dimension in the non-image data and the dimension data corresponding to each dimension;
acquiring a normalization rule and an edge weight parameter matched with each dimension;
performing normalization processing on all the dimension data according to the normalization rule matched with each dimension to obtain a dimension standard value corresponding to each dimension;
performing edge-weighting processing on all the dimension standard values according to the edge weight parameter matched with each dimension to obtain a weighted value corresponding to each dimension;
expanding all the weighted values to obtain the second feature vector diagram.
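The per-dimension pipeline of claim 5 — normalize each dimension by its matched rule, scale by its edge weight parameter, then expand into a feature row — can be sketched as follows. The dimension names, min-max ranges, and weights are hypothetical; the patent specifies neither the normalization rules nor the weight values:

```python
import numpy as np

# hypothetical per-dimension rules: (min, max) for min-max normalization,
# plus a hypothetical edge weight parameter per dimension
rules = {"age": (0.0, 100.0), "gender": (0.0, 1.0), "height_cm": (100.0, 220.0)}
edge_weights = {"age": 1.5, "gender": 0.8, "height_cm": 1.2}

def second_feature_diagram(record):
    values = []
    for dim, raw in record.items():
        lo, hi = rules[dim]
        std = (raw - lo) / (hi - lo)            # dimension standard value
        values.append(std * edge_weights[dim])  # edge-weighted value
    return np.array(values).reshape(1, -1)      # expand into a 1 x N row

vec = second_feature_diagram({"age": 50.0, "gender": 1.0, "height_cm": 160.0})
print(vec.tolist())  # [[0.75, 0.8, 0.6]]
```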
6. The method of claim 5, wherein the dimensions in the non-image data include target gender, target age, target occupation, and target information.
7. The spine deformation crowd identification method according to claim 1, wherein before the inputting the third feature vector diagram into the trained spine graph convolution network model, the method comprises:
acquiring a sample data set; the sample data set comprises sample data and sample labels in one-to-one correspondence with the sample data; the sample data are historical third feature vector diagrams; the sample labels include the spinal lateral bending population, the spinal kyphosis population, the potential spinal lateral bending population, the potential spinal kyphosis population, and the non-spinal-deformation population;
inputting the sample data into a spine graph convolution neural network model containing initial parameters;
extracting, according to the spectral-domain method of the GCN, the spine frequency-domain features in the sample data through the spine graph convolution neural network model, and obtaining a sample result output by the spine graph convolution neural network model according to the spine frequency-domain features;
determining a loss value according to the sample result and the sample label corresponding to the sample data;
when the loss value reaches a preset convergence condition, recording the converged spine graph convolution neural network model as the trained spine graph convolution network model.
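The training loop of claim 7 — forward pass, loss against the sample labels, parameter update, stop when a preset convergence condition is reached — follows the standard supervised pattern. A minimal runnable sketch with logistic regression standing in for the spine graph convolution model (the real model, loss, and convergence threshold are not disclosed):

```python
import numpy as np

def train(samples, labels, lr=0.1, tol=1e-3, max_epochs=500):
    """Forward pass -> loss vs. labels -> gradient step, repeated until the
    loss change falls below tol (the 'preset convergence condition')."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=samples.shape[1])       # initial parameters
    prev = np.inf
    for _ in range(max_epochs):
        p = 1.0 / (1.0 + np.exp(-(samples @ w)))                # sample result
        loss = -np.mean(labels * np.log(p + 1e-9)
                        + (1 - labels) * np.log(1 - p + 1e-9))  # loss value
        if abs(prev - loss) < tol:              # convergence reached: stop
            break
        prev = loss
        w -= lr * samples.T @ (p - labels) / len(labels)        # update step
    return w, loss

x = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])
w, final_loss = train(x, y)
print(final_loss < 0.7)  # True — loss fell from its random-init value
```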
8. A spinal deformation crowd identification device, comprising:
the receiving module is used for receiving a target identification instruction and acquiring image data and non-image data associated with a unique code corresponding to a target to be identified; the image data is an image related to the back; the non-image data is information related to a target to be identified;
the identification module is used for inputting the image data into a back region identification model, identifying the back region of the image data through the back region identification model, and acquiring the back region image to be identified cropped out by the back region identification model; the back region identification model is a deep convolutional neural network model built on the YOLO framework;
the enhancement module is used for carrying out image enhancement processing on the back area image to be identified to obtain an enhanced image of the area to be identified;
the acquisition module is used for inputting the enhanced image of the region to be identified into a spine identification model, extracting spine features in the enhanced image of the region to be identified through the spine identification model, acquiring a first feature vector diagram output by the spine identification model according to the spine features, inputting the non-image data into a data standardization model, and performing normalization and edge weight processing on the non-image data through the data standardization model to obtain a second feature vector diagram;
a filling module, configured to perform edge filling on the second feature vector diagram to the first feature vector diagram to obtain a third feature vector diagram;
the input module is used for inputting the third feature vector diagram into the trained spine diagram convolution network model;
the output module is used for extracting, according to the spectral-domain method of the GCN, the spine frequency-domain features in the third feature vector diagram through the spine graph convolution network model, and acquiring the identification result output by the spine graph convolution network model according to the spine frequency-domain features; the identification result represents the spinal deformation population category of the target to be identified, the categories comprising a spinal lateral bending population, a spinal kyphosis population, a potential spinal lateral bending population, a potential spinal kyphosis population, and a non-spinal-deformation population.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor when executing the computer program implements the spinal deformity population identification method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the spinal deformity population identification method according to any one of claims 1 to 7.
CN202010513066.6A 2020-06-08 2020-06-08 Spine deformation crowd identification method and device, computer equipment and storage medium Active CN111666890B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010513066.6A CN111666890B (en) 2020-06-08 2020-06-08 Spine deformation crowd identification method and device, computer equipment and storage medium
PCT/CN2020/099253 WO2021114623A1 (en) 2020-06-08 2020-06-30 Method, apparatus, computer device, and storage medium for identifying persons having deformed spinal columns

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010513066.6A CN111666890B (en) 2020-06-08 2020-06-08 Spine deformation crowd identification method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111666890A true CN111666890A (en) 2020-09-15
CN111666890B CN111666890B (en) 2023-06-30

Family

ID=72385746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010513066.6A Active CN111666890B (en) 2020-06-08 2020-06-08 Spine deformation crowd identification method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN111666890B (en)
WO (1) WO2021114623A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113139962A (en) * 2021-05-26 2021-07-20 北京欧应信息技术有限公司 System and method for scoliosis probability assessment
CN114287915A (en) * 2021-12-28 2022-04-08 深圳零动医疗科技有限公司 Noninvasive scoliosis screening method and system based on back color image

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN113505751B (en) * 2021-07-29 2022-10-25 同济大学 Human skeleton action recognition method based on difference map convolutional neural network
CN113610808B (en) * 2021-08-09 2023-11-03 中国科学院自动化研究所 Group brain map individuation method, system and equipment based on individual brain connection diagram

Citations (6)

Publication number Priority date Publication date Assignee Title
CN108647588A (en) * 2018-04-24 2018-10-12 广州绿怡信息科技有限公司 Goods categories recognition methods, device, computer equipment and storage medium
CN109508638A (en) * 2018-10-11 2019-03-22 平安科技(深圳)有限公司 Face Emotion identification method, apparatus, computer equipment and storage medium
CN109657582A (en) * 2018-12-10 2019-04-19 平安科技(深圳)有限公司 Recognition methods, device, computer equipment and the storage medium of face mood
CN110781836A (en) * 2019-10-28 2020-02-11 深圳市赛为智能股份有限公司 Human body recognition method and device, computer equipment and storage medium
CN111144285A (en) * 2019-12-25 2020-05-12 中国平安人寿保险股份有限公司 Fat and thin degree identification method, device, equipment and medium
CN111191568A (en) * 2019-12-26 2020-05-22 中国平安人寿保险股份有限公司 Method, device, equipment and medium for identifying copied image

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JP2019520954A (en) * 2016-04-25 2019-07-25 中慧医学成像有限公司 Method and device for measuring the angle of curvature of the spine
CN109493334B (en) * 2018-11-12 2020-12-29 深圳码隆科技有限公司 Method and apparatus for measuring spinal curvature
CN109431511B (en) * 2018-11-14 2021-09-24 南京航空航天大学 Human back scoliosis spine contour characteristic curve fitting method based on digital image processing
CN110415291A (en) * 2019-08-07 2019-11-05 清华大学 Image processing method and relevant device
CN110458831B (en) * 2019-08-12 2023-02-03 深圳市智影医疗科技有限公司 Scoliosis image processing method based on deep learning

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN108647588A (en) * 2018-04-24 2018-10-12 广州绿怡信息科技有限公司 Goods categories recognition methods, device, computer equipment and storage medium
CN109508638A (en) * 2018-10-11 2019-03-22 平安科技(深圳)有限公司 Face Emotion identification method, apparatus, computer equipment and storage medium
CN109657582A (en) * 2018-12-10 2019-04-19 平安科技(深圳)有限公司 Recognition methods, device, computer equipment and the storage medium of face mood
CN110781836A (en) * 2019-10-28 2020-02-11 深圳市赛为智能股份有限公司 Human body recognition method and device, computer equipment and storage medium
CN111144285A (en) * 2019-12-25 2020-05-12 中国平安人寿保险股份有限公司 Fat and thin degree identification method, device, equipment and medium
CN111191568A (en) * 2019-12-26 2020-05-22 中国平安人寿保险股份有限公司 Method, device, equipment and medium for identifying copied image

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN113139962A (en) * 2021-05-26 2021-07-20 北京欧应信息技术有限公司 System and method for scoliosis probability assessment
CN114287915A (en) * 2021-12-28 2022-04-08 深圳零动医疗科技有限公司 Noninvasive scoliosis screening method and system based on back color image
CN114287915B (en) * 2021-12-28 2024-03-05 深圳零动医疗科技有限公司 Noninvasive scoliosis screening method and system based on back color images

Also Published As

Publication number Publication date
CN111666890B (en) 2023-06-30
WO2021114623A1 (en) 2021-06-17

Similar Documents

Publication Publication Date Title
CN111666890B (en) Spine deformation crowd identification method and device, computer equipment and storage medium
CN110120040B (en) Slice image processing method, slice image processing device, computer equipment and storage medium
Yuan et al. Factorization-based texture segmentation
CN107679507B (en) Facial pore detection system and method
Manap et al. Non-distortion-specific no-reference image quality assessment: A survey
CN109584209B (en) Vascular wall plaque recognition apparatus, system, method, and storage medium
CN107679466B (en) Information output method and device
CN111524137A (en) Cell identification counting method and device based on image identification and computer equipment
CN111028923B (en) Digital pathological image staining normalization method, electronic device and storage medium
CN110827335B (en) Mammary gland image registration method and device
CN111105421A (en) Method, device, equipment and storage medium for segmenting high signal of white matter
CN113179421B (en) Video cover selection method and device, computer equipment and storage medium
CN111178187A (en) Face recognition method and device based on convolutional neural network
WO2020133072A1 (en) Systems and methods for target region evaluation and feature point evaluation
CN111652300A (en) Spine curvature classification method, computer device and storage medium
CN111080658A (en) Cervical MRI image segmentation method based on deformable registration and DCNN
CN113781488A (en) Tongue picture image segmentation method, apparatus and medium
Jenifa et al. Classification of cotton leaf disease using multi-support vector machine
CN111369598B (en) Deep learning model training method and device, and application method and device
CN116563647B (en) Age-related maculopathy image classification method and device
CN112927235A (en) Brain tumor image segmentation method based on multi-scale superpixel and nuclear low-rank representation
CN112330671A (en) Method and device for analyzing cell distribution state, computer equipment and storage medium
CN107832695A (en) The optic disk recognition methods based on textural characteristics and device in retinal images
US9443128B2 (en) Segmenting biological structures from microscopy images
CN116051421A (en) Multi-dimensional-based endoscope image quality evaluation method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant