CN117474829A - Computer readable storage medium, spine posture determining method and apparatus - Google Patents

Computer readable storage medium, spine posture determining method and apparatus

Info

Publication number
CN117474829A
CN117474829A (application CN202210867625.2A)
Authority
CN
China
Prior art keywords
curve
target
pixel point
dynamic endpoint
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210867625.2A
Other languages
Chinese (zh)
Inventor
付春萌
张旭
方伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Original Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan United Imaging Zhirong Medical Technology Co Ltd filed Critical Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority to CN202210867625.2A priority Critical patent/CN117474829A/en
Publication of CN117474829A publication Critical patent/CN117474829A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30008 Bone
    • G06T 2207/30012 Spine; Backbone

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to a computer-readable storage medium, a spine posture determination method and apparatus, a computer device, and a computer program product. The computer program stored on the computer-readable storage medium performs the following steps: acquiring a target image obtained by imaging a target part of a target object, and extracting a contour curve corresponding to the target part from the target image; determining a first pixel point in the contour curve that satisfies a preset height condition, and second pixel points other than the first pixel point; intercepting, from the contour curve, a target curve that satisfies a non-discrete condition and includes the first pixel point, based on the degree of dispersion between the first pixel point and each second pixel point; dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve; and determining the posture of the target spine according to the first curve and the second curve. In this way, the efficiency of determining the spinal posture is improved.

Description

Computer readable storage medium, spine posture determining method and apparatus
Technical Field
The present application relates to the field of spine detection technology, and in particular, to a computer readable storage medium, a spine posture determining method, a spine posture determining apparatus, a computer device, and a computer program product.
Background
With the development of posture detection technology, the posture of the spine is commonly detected by image measurement, in which medical imaging equipment is used to examine the spine to be detected in the region to be detected.
However, when posture detection is performed by image measurement, the medical imaging apparatus requires a long detection time, which makes it difficult to quickly determine the spinal posture for a large number of objects to be detected. The determination of spinal posture is therefore inefficient.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a computer-readable storage medium, a spinal posture determination method, an apparatus, a computer device, and a computer program product that can improve the efficiency of spinal posture determination.
In a first aspect, the present application provides a computer-readable storage medium. A computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, performs the following steps:
acquiring a target image obtained by acquiring an image of a target part of a target object, and extracting a contour curve corresponding to the target part in the target image; the target part comprises a target spine;
determining a first pixel point in the contour curve that satisfies a preset height condition, and second pixel points other than the first pixel point;
intercepting, from the contour curve, a target curve that satisfies a non-discrete condition and includes the first pixel point, based on the degree of dispersion between the first pixel point and each second pixel point;
dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve;
and determining the posture of the target spine according to the first curve and the second curve.
In a second aspect, the present application provides a spinal posture determination method. The method comprises the following steps:
acquiring a target image obtained by acquiring an image of a target part of a target object, and extracting a contour curve corresponding to the target part in the target image; the target part comprises a target spine;
determining a first pixel point in the contour curve that satisfies a preset height condition, and second pixel points other than the first pixel point;
intercepting, from the contour curve, a target curve that satisfies a non-discrete condition and includes the first pixel point, based on the degree of dispersion between the first pixel point and each second pixel point;
dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve;
and determining the posture of the target spine according to the first curve and the second curve.
In a third aspect, the present application also provides a spinal posture determination device. The device comprises:
the extraction module is used for acquiring a target image obtained by image acquisition of a target part of a target object and extracting a contour curve corresponding to the target part in the target image; the target part comprises a target spine;
the determining module is used for determining a first pixel point in the contour curve that satisfies a preset height condition, and second pixel points other than the first pixel point;
the intercepting module is used for intercepting, from the contour curve, a target curve that satisfies a non-discrete condition and includes the first pixel point, based on the degree of dispersion between the first pixel point and each second pixel point;
the dividing module is used for dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve;
the determining module is further configured to determine a posture of the target spine according to the first curve and the second curve.
In a fourth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a target image obtained by acquiring an image of a target part of a target object, and extracting a contour curve corresponding to the target part in the target image; the target part comprises a target spine;
determining a first pixel point in the contour curve that satisfies a preset height condition, and second pixel points other than the first pixel point;
intercepting, from the contour curve, a target curve that satisfies a non-discrete condition and includes the first pixel point, based on the degree of dispersion between the first pixel point and each second pixel point;
dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve;
and determining the posture of the target spine according to the first curve and the second curve.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring a target image obtained by acquiring an image of a target part of a target object, and extracting a contour curve corresponding to the target part in the target image; the target part comprises a target spine;
determining a first pixel point in the contour curve that satisfies a preset height condition, and second pixel points other than the first pixel point;
intercepting, from the contour curve, a target curve that satisfies a non-discrete condition and includes the first pixel point, based on the degree of dispersion between the first pixel point and each second pixel point;
dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve;
and determining the posture of the target spine according to the first curve and the second curve.
With the above computer-readable storage medium, spine posture determination method, spine posture determination apparatus, computer device, and computer program product, the contour curve corresponding to the target part can be acquired quickly through image acquisition, which simplifies the collection of information about the target part and avoids the harm caused by radiological examination of the target part. Using the preset height condition, the pixel points on the contour curve are divided into a first pixel point that satisfies the condition and second pixel points other than the first pixel point, so that the degree of dispersion between the first pixel point and each second pixel point can be determined. In this way, how concentrated the pixel points of the contour curve are can be accurately reflected by the degree of dispersion, and a target curve that satisfies the non-discrete condition and includes the first pixel point is intercepted from the contour curve, ensuring that the pixel points of the target curve are concentrated. Based on the position of the target spine in the target curve, a first curve and a second curve located on either side of the target spine can be determined, which avoids interference from the protruding structure of the target spine in posture determination and further ensures that the posture determined from the first curve and the second curve accurately simulates the posture measured with a scoliosis ruler. Therefore, the multiple measurement steps of actual manual operation are avoided, the posture of the target spine can be evaluated faithfully, and the efficiency of determining the spinal posture is greatly improved.
Drawings
FIG. 1 is a diagram of an application environment for a computer-readable storage medium in one embodiment;
FIG. 2 is a diagram of an application environment for a computer-readable storage medium in another embodiment;
FIG. 3 is an application environment diagram of a computer-readable storage medium in another embodiment;
FIG. 4 is a flowchart illustrating steps performed by the computer-readable storage medium in one embodiment;
FIG. 5 is a schematic diagram of a target image in one embodiment;
FIG. 6 is a diagram of extraction results in one embodiment;
FIG. 7 is a schematic diagram of a profile curve in one embodiment;
FIG. 8 is a schematic diagram of a target curve in one embodiment;
FIG. 9 is a schematic diagram of a first curve and a second curve distribution in one embodiment;
FIG. 10 is a flowchart illustrating steps performed by a computer readable storage medium in another embodiment;
FIG. 11 is a schematic view of a common tangent angle in one embodiment;
FIG. 12 is a schematic diagram of two linear comparisons for determining a target spinal pose in one embodiment;
FIG. 13 is a flowchart illustrating steps performed by a computer readable storage medium in another embodiment;
FIG. 14 is a block diagram of the spinal posture determination device in one embodiment;
FIG. 15 is an internal structural view of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The computer-readable storage medium provided in the embodiments of the present application may be applied to the application environment shown in FIG. 1. The terminal 102 communicates with the server 104 through a network, and a data storage system may store the data that the server 104 needs to process. The data storage system may be integrated on the server 104, or it may be located on a cloud or another network server. A computer-readable storage medium, on which a computer program can be stored, is deployed on each of the terminal 102 and the server 104. The terminal 102 on which a computer-readable storage medium is deployed can execute the computer program on that storage medium on its own. Alternatively, the server 104 on which a computer-readable storage medium is deployed can execute the computer program on that storage medium on its own. Alternatively, the terminal 102 and the server 104 cooperate; for example, the terminal 102 captures an image of the target part of the target object to obtain a target image, and the server 104, on which a computer-readable storage medium is deployed, acquires the target image and executes the computer program. When executed by the processor, the computer program performs the following steps: acquiring a target image obtained by imaging a target part of a target object, and extracting a contour curve corresponding to the target part from the target image, the target part including a target spine; determining a first pixel point in the contour curve that satisfies a preset height condition, and second pixel points other than the first pixel point; intercepting, from the contour curve, a target curve that satisfies a non-discrete condition and includes the first pixel point, based on the degree of dispersion between the first pixel point and each second pixel point; dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve; and determining the posture of the target spine according to the first curve and the second curve. The terminal 102 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, an Internet of Things device, or a portable wearable device; the Internet of Things device may be a smart speaker, a smart television, a smart air conditioner, a smart in-vehicle device, and the like, and the portable wearable device may be a smart watch, a smart bracelet, a headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
If the computer-readable storage medium is deployed in the terminal 102, as shown in FIG. 2, the terminal may be, for example, a mobile phone on which an APP (application program) is installed, and the posture of the target spine is determined by the computer program stored in the APP. If the computer-readable storage medium is deployed on the server 104, as shown in FIG. 3, a target image is obtained at the client by photographing the target part of the target object with a mobile phone, digital camera, or other photographing device; the server 104, on which the computer-readable storage medium storing the computer program is deployed, acquires the target image sent by the client and executes the computer program to determine the posture of the target spine.
In one embodiment, as shown in fig. 4, a computer-readable storage medium having a computer program stored thereon is provided. The storage medium is deployed on a computer device (which may specifically be the terminal 102 or the server 104), and the computer program, when executed by a processor, performs the following steps:
step S402, acquiring a target image obtained by acquiring an image of a target part of a target object, and extracting a contour curve corresponding to the target part in the target image; the target site includes a target spine therein.
The target portion may be the entire back of the target object, or a certain area of the back, which is not particularly limited.
Specifically, when the computer program stored on the computer-readable storage medium deployed on the computer device is executed, it acquires a target image obtained by imaging the target part of the target object in a target forward-flexion posture, and determines a target extraction model from a plurality of preset extraction models based on the extraction requirement. The target part in the target image is extracted by the target extraction model to obtain an extraction result, and the contour curve corresponding to the extraction result is determined.
The target forward-flexion posture may be a standing forward-flexion posture, a seated forward-flexion posture, or the like, and is not particularly limited. The extraction model may be an RGB color model (red-green-blue color model), a YCrCb elliptical skin-color model, YCrCb color space + Otsu threshold segmentation, or an HSV color space H-range screening method. Here, Y represents luminance, Cr reflects the difference between the red portion of the RGB input signal and the luminance value of the RGB signal, and Cb reflects the difference between the blue portion of the RGB input signal and the luminance value of the RGB signal. The YCrCb elliptical skin-color model judges whether the coordinates (Cr, Cb) of a pixel fall within an ellipse. YCrCb color space + Otsu threshold segmentation refers to applying Otsu's method (automatic threshold selection) to the Cr channel in the YCrCb color space. HSV (hue, saturation, value) color space H-range screening screens pixels whose hue falls within a given range of the HSV color space.
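As an illustration of the "YCrCb color space + Otsu threshold segmentation" option listed above, the following is a minimal sketch, assuming OpenCV (cv2) and NumPy, a BGR input image, and a Gaussian pre-blur of this sketch's own choosing; it is not the application's reference implementation.

```python
import cv2
import numpy as np

def skin_mask_ycrcb_otsu(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask (255 = skin candidate) via Otsu thresholding on the Cr channel."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1]                      # Cr channel (OpenCV orders channels Y, Cr, Cb)
    cr = cv2.GaussianBlur(cr, (5, 5), 0)     # smooth so Otsu finds a stable threshold
    _, mask = cv2.threshold(cr, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```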
For example, a target image (as shown in fig. 5) obtained by photographing the back of the target object in the standing forward-flexion posture is acquired, and the back in the target image is extracted by the YCrCb elliptical skin-color model to obtain an extraction result. Specifically, the target image is mapped from the RGB space to the YCrCb space to obtain, for each pixel, coordinates in the CrCb plane; whether the coordinates of each pixel fall within the ellipse is judged; pixels within the ellipse are taken as skin pixel points, and pixels outside the ellipse as non-skin pixel points. The extraction result, shown in fig. 6, is obtained by setting the gray value of skin pixels to 1 and the gray value of non-skin pixels to 0. Edge extraction is then performed on the extraction result by an edge extraction algorithm to obtain a processing result, the uppermost edge point in the processing result is set as a starting point, and filtering is performed with a region growing algorithm to obtain the contour curve. The edge extraction algorithm may be the Sobel algorithm.
The process of mapping the target image from RGB space to YCrCb space may be implemented based on the following formula:
Y=0.2990R+0.5870G+0.1140B
Cb=-0.1687R-0.3313G+0.5000B+128
Cr=0.5000R-0.4187G-0.0813B+128
Here, R, G, and B represent the red, green, and blue components of the target image, respectively. Y represents luminance, Cr reflects the difference between the red portion of the RGB input signal and the luminance value of the RGB signal, and Cb reflects the difference between the blue portion of the RGB input signal and the luminance value of the RGB signal.
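As a worked transcription only, the mapping above can be written directly in NumPy; the function name and the (Y, Cr, Cb) output order are choices of this sketch.

```python
import numpy as np

def rgb_to_ycrcb(rgb: np.ndarray) -> np.ndarray:
    """Apply the Y/Cb/Cr formulas above to an RGB image with channels in [0, 255]."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    y  =  0.2990 * r + 0.5870 * g + 0.1140 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5000 * b + 128.0
    cr =  0.5000 * r - 0.4187 * g - 0.0813 * b + 128.0
    return np.stack([y, cr, cb], axis=-1)    # returned in (Y, Cr, Cb) order
```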
It should be noted that, after the processing result is obtained by the edge extraction algorithm, the processing result may contain interference edges; for example, the black line in the white area shown in fig. 7 may produce an interference edge. The interference edges can therefore be removed by the region growing algorithm, ensuring the validity of the contour curve.
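A compact sketch of this extraction step is given below, assuming OpenCV and NumPy. The ellipse centre, axes, and rotation are illustrative constants rather than values taken from the application, and the region-growing removal of interference edges is omitted.

```python
import cv2
import numpy as np

# 256x256 lookup over the (Cr, Cb) plane: 1 inside the skin ellipse, 0 outside.
# LUT rows are Cr and columns are Cb; the ellipse centre (x = Cb ~ 113, y = Cr ~ 156),
# axes, and rotation below are illustrative, not specified by the application.
_ELLIPSE_LUT = np.zeros((256, 256), dtype=np.uint8)
cv2.ellipse(_ELLIPSE_LUT, (113, 156), (23, 15), 43, 0, 360, 1, -1)

def extract_back_edges(bgr_image: np.ndarray) -> np.ndarray:
    """Sobel edge map of the elliptical skin-colour mask (contour-curve candidate)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]
    skin = _ELLIPSE_LUT[cr, cb] * 255          # skin pixels -> 255, non-skin -> 0
    gx = cv2.Sobel(skin, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(skin, cv2.CV_32F, 0, 1, ksize=3)
    edges = (cv2.magnitude(gx, gy) > 0).astype(np.uint8) * 255
    # Filtering of interference edges (region growing from the topmost edge point)
    # would follow here to yield the final contour curve.
    return edges
```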
Step S404, determining a first pixel point satisfying a preset height condition and a second pixel point except the first pixel point in the contour curve.
The preset height condition is that the height reaches a preset height. The preset height may be the maximum height, or it may be a first height that is lower than the maximum height by a preset difference; this is not particularly limited.
Specifically, a computer program stored on a computer readable storage medium disposed on a computer device, when executed, determines height information for each pixel point in a contour curve and determines a highest height based on each height information. The preset height condition is determined based on the highest height. And judging whether each pixel point meets the preset height condition. If one pixel point meeting the preset height condition exists, the pixel point meeting the preset height condition is directly taken as a first pixel point, and the pixel points except the first pixel point are taken as second pixel points. If at least two pixel points meeting the preset height condition exist, the pixel points meeting the preset height condition are taken as pixel points to be processed, one pixel point to be processed is arbitrarily selected from a plurality of pixel points to be processed as a first pixel point, or the pixel point positioned in the middle of the plurality of pixel points to be processed is determined to be taken as the first pixel point, and the pixel points except the first pixel point are taken as second pixel points.
For example, coordinate information of each pixel point is acquired, the coordinate information including height information (which can be regarded as information on the Y axis) and displacement information (which can be regarded as information on the X axis). And determining the height information of each pixel point in the contour curve, and determining the highest height based on each height information. If the preset height condition is that the height reaches the highest height, judging whether only one pixel point with the highest height exists, if so (namely, only one pixel point with the highest height exists), taking the pixel point with the highest height as the first pixel point directly. If not (i.e. there are at least two pixel points with the highest height), the pixel point with the highest height is taken as the pixel point to be processed, and one pixel point to be processed is arbitrarily selected from a plurality of pixel points to be processed as the first pixel point. And taking the pixel points except the first pixel point as second pixel points.
Or under the condition that the preset height condition is that the height reaches the first height, taking the pixel point reaching the first height as a pixel point to be processed, and arbitrarily selecting one pixel point to be processed from a plurality of pixel points to be processed as the first pixel point. And taking the pixel points except the first pixel point as second pixel points.
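A small sketch of this selection follows, assuming the contour curve is given as parallel arrays of x and y pixel coordinates with y measured upward; taking the middle candidate when several points share the maximum height is one of the options described above.

```python
import numpy as np

def split_first_second(xs: np.ndarray, ys: np.ndarray):
    """Index of the first pixel point (maximum height) and indices of the second pixel points."""
    candidates = np.flatnonzero(ys == ys.max())   # pixels meeting the preset height condition
    first = candidates[len(candidates) // 2]      # middle candidate if there is a tie
    second = np.setdiff1d(np.arange(len(xs)), [first])
    return first, second
```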
In step S406, a target curve that satisfies the non-discrete condition and includes the first pixel point is cut out from the contour curve based on the degree of dispersion between the first pixel point and each second pixel point.
The degree of dispersion characterizes whether the pixel points are concentrated and reflects the differences between them. The non-discrete condition judges whether the degree of dispersion among the pixel points is within a preset level; if it is, the dispersion of the pixel points is low, the pixel points are concentrated, and the curve they form bends only slightly.
Specifically, when the computer program stored on the computer readable storage medium deployed on the computer device is executed, the degree of dispersion between the first pixel point and each second pixel point is determined through at least one of variance calculation, difference calculation and standard deviation calculation based on the position information of the first pixel point and the second pixel point, and each second pixel point is screened based on the degree of dispersion, so that a target curve which meets non-discrete conditions and comprises the first pixel point is obtained.
Or, when executed, the computer program stored on the computer readable storage medium disposed on the computer device uses a plurality of second pixels located on the left side of the first pixel as left side pixels and uses a plurality of second pixels located on the right side of the first pixel as right side pixels based on the position information of the first pixel. And screening out the left pixel points meeting the non-discrete condition through at least one of variance calculation, difference calculation and standard deviation calculation based on the positions of the left pixel points and the positions of the first pixel points. And screening out the right pixel points meeting the non-discrete condition through at least one of variance calculation, difference calculation and standard deviation calculation based on the positions of the right pixel points and the positions of the first pixel points. The target curve is determined based on the left pixel point satisfying the non-discrete condition, the right pixel point satisfying the non-discrete condition, and the first pixel point.
For example, when the computer program stored on the computer-readable storage medium deployed on the computer device is executed, the current second pixel points corresponding to the current iteration are determined, and at least one of variance calculation, difference calculation, and standard deviation calculation is performed based on the positions of the current second pixel points and the first pixel point to obtain a dispersion result. The dispersion result characterizes the degree of dispersion between each current second pixel point and the first pixel point. If the dispersion result does not satisfy the non-discrete condition, the two current second pixel points that are farthest apart are deleted from the current second pixel points to obtain updated second pixel points. The next iteration is then entered: each updated second pixel point is taken as a current second pixel point of the next iteration, the procedure returns to performing the calculation based on the positions of the current second pixel points and the first pixel point, and execution continues until the non-discrete condition is satisfied. The target curve is determined based on the current second pixel points and the first pixel point that satisfy the non-discrete condition. Here, the current second pixel points are the second pixel points remaining at the current iteration, and the two current second pixel points that are farthest apart can be regarded as the two pixel points with the largest difference between their abscissas. A sketch of this variant is given below.
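The sketch below illustrates this delete-the-farthest-pair variant; the variance threshold, the use of height variance as the dispersion measure, and the names are assumptions of the sketch rather than values from the application.

```python
import numpy as np

def shrink_by_farthest_pair(xs: np.ndarray, ys: np.ndarray,
                            first_idx: int, var_threshold: float = 4.0) -> list:
    """Indices of a target curve obtained by repeatedly dropping the farthest point pair."""
    keep = list(range(len(xs)))                       # indices still on the curve
    while np.var(ys[keep]) > var_threshold and len(keep) > 3:
        others = [i for i in keep if i != first_idx]  # never drop the first pixel point
        lo = min(others, key=lambda i: xs[i])         # the two remaining points with the
        hi = max(others, key=lambda i: xs[i])         # largest abscissa difference
        keep.remove(lo)
        keep.remove(hi)
    return keep
```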
Alternatively, when the computer program stored on the computer-readable storage medium deployed on the computer device is executed, it determines a left dynamic endpoint corresponding to the current first iteration and performs at least one of variance calculation, difference calculation, and standard deviation calculation based on the positions of the left dynamic endpoint and the first pixel point to obtain a left dispersion result, where the left dispersion result characterizes the degree of dispersion between the first pixel point and each second pixel point located on its left. If the left dispersion result does not satisfy the non-discrete condition, the point that is a preset step away from the left dynamic endpoint, or a preset number of second pixel points away from it, is taken as the updated left dynamic endpoint. The next round of the first iteration is then entered: the updated left dynamic endpoint is taken as the left dynamic endpoint of the next round of the first iteration, the procedure returns to performing at least one of variance calculation, difference calculation, and standard deviation calculation based on the positions of the left dynamic endpoint and the first pixel point, and execution continues until the non-discrete condition is satisfied.
Likewise, when the computer program is executed, it determines a right dynamic endpoint corresponding to the current second iteration and performs at least one of variance calculation, difference calculation, and standard deviation calculation based on the positions of the right dynamic endpoint and the first pixel point to obtain a right dispersion result, where the right dispersion result characterizes the degree of dispersion between the first pixel point and each second pixel point located on its right. If the right dispersion result does not satisfy the non-discrete condition, the point that is a preset step away from the right dynamic endpoint, or a preset number of second pixel points away from it, is taken as the updated right dynamic endpoint. The next round of the second iteration is then entered: the updated right dynamic endpoint is taken as the right dynamic endpoint of the next round of the second iteration, the procedure returns to performing at least one of variance calculation, difference calculation, and standard deviation calculation based on the positions of the right dynamic endpoint and the first pixel point, and execution continues until the non-discrete condition is satisfied.
The target curve is determined based on the left dynamic endpoint, the right dynamic endpoint, and the first pixel point that satisfy the non-discrete condition.
Step S408, dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve.
Specifically, when the computer program stored on the computer-readable storage medium deployed on the computer device is executed, it determines the line segment of the target spine in the target curve from the position of the target spine in the target curve. The line segment is cut out of the target curve, dividing the target curve into a first curve and a second curve.
It should be noted that, the length of the line segment is indicative of the width of the target spine, and based on the position of the target spine at the back, the line segment may be considered to be located at the middle of the target curve, and the length of the line segment is within a preset length. Wherein the preset length is determined by the product of the length of the target curve and a preset ratio, which may range from 0.23 to 0.28. Optionally, the line segment is located in the middle of the target curve, and the length of the line segment is 1/4 of the length of the target curve, the target curve is shown in fig. 8, and the first curve and the second curve obtained after division are shown in fig. 9.
It should be noted that, the target curve is understood to be a top contour curve of the target area, and the length of the target curve is determined by the left dynamic endpoint and the right dynamic endpoint, for example, the difference between the abscissa of the left dynamic endpoint and the abscissa of the right dynamic endpoint may be taken as the length of the target curve.
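A sketch of this division step is given below, assuming the target curve is an (N, 2) array of (x, y) points ordered from the left endpoint to the right endpoint; the default ratio of 1/4 follows the optional value above, and the function name is an assumption.

```python
import numpy as np

def divide_curve(points: np.ndarray, spine_ratio: float = 0.25):
    """Split the target curve into (first curve, second curve) around the spine segment."""
    length = points[-1, 0] - points[0, 0]            # curve length from the endpoint abscissas
    centre = 0.5 * (points[-1, 0] + points[0, 0])    # the spine segment sits in the middle
    half = 0.5 * spine_ratio * length
    first_curve = points[points[:, 0] < centre - half]
    second_curve = points[points[:, 0] > centre + half]
    return first_curve, second_curve
```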
Step S410, determining the posture of the target spine according to the first curve and the second curve.
Wherein the posture of the target spine may be a normal state, such as a balanced state. The posture of the target spine may be an abnormal state, such as a lateral curvature state.
Specifically, when the computer program stored on the computer-readable storage medium deployed on the computer device is executed, it determines a common tangent of the first curve and the second curve based on the position of each pixel point in the first curve and the position of each pixel point in the second curve, and determines the posture of the target spine from the common tangent. When the posture is normal, no further processing is performed on the target spine. When the posture is abnormal, an alarm signal is issued to remind an operator to examine the type of abnormality of the target spine.
With the above computer-readable storage medium, the contour curve corresponding to the target part can be acquired quickly through image acquisition, which simplifies the collection of information about the target part and avoids the harm caused by radiological examination of the target part. Using the preset height condition, the pixel points on the contour curve are divided into a first pixel point that satisfies the condition and second pixel points other than the first pixel point, so that the degree of dispersion between the first pixel point and each second pixel point can be determined. In this way, how concentrated the pixel points of the contour curve are can be accurately reflected by the degree of dispersion, and a target curve that satisfies the non-discrete condition and includes the first pixel point is intercepted from the contour curve, ensuring that the pixel points of the target curve are concentrated. Based on the position of the target spine in the target curve, a first curve and a second curve located on either side of the target spine can be determined, which avoids interference from the protruding structure of the target spine in posture determination and further ensures that the posture determined from the first curve and the second curve accurately simulates the posture measured with a scoliosis ruler. Therefore, the multiple measurement steps of actual manual operation are avoided, the posture of the target spine can be evaluated faithfully, and the efficiency of determining the spinal posture is greatly improved.
In one embodiment, as shown in fig. 10, the processor when executing the computer program further performs the steps of:
step S1002, a left dynamic endpoint corresponding to the current first iteration is obtained from the second pixel point, and a first variance value is determined based on the positions of the left dynamic endpoint and the first pixel point.
The left dynamic endpoint of the first round of the first iteration and the right dynamic endpoint of the first round of the second iteration are the two points at which the edge of the target part intersects the contour curve.
Specifically, an updated left dynamic endpoint corresponding to the first iteration of the previous round is determined from the second pixel points, and the updated left dynamic endpoint corresponding to the first iteration of the previous round is used as the left dynamic endpoint corresponding to the current first iteration. And determining a first set based on the positions of the left dynamic endpoint and the first pixel point, and performing variance calculation based on the positions of at least two pixel points in the first set to obtain a first variance value.
The first set may consist of the pixel points between the left dynamic endpoint and the first pixel point; or of those pixel points together with the left dynamic endpoint and the first pixel point; or of the left dynamic endpoint and the first pixel point only. This is not particularly limited.
It should be noted that, the first variance value can represent a degree of dispersion between the pixels in the first set, that is, a degree reflecting a concentrated distribution of the pixels in the first set.
In step S1004, if the first variance value does not satisfy the non-discrete condition, the updated left dynamic endpoint is determined from the second pixel point based on a preset step size.
Wherein the non-discrete condition may be that the variance value is less than or equal to a variance threshold.
Specifically, in the case where the first variance value is less than or equal to the variance threshold, it is determined that the first variance value satisfies the non-discrete condition. In the event that the first variance value is greater than the variance threshold, it is determined that the first variance value does not satisfy the non-discrete condition. And under the condition that the first variance value does not meet the non-discrete condition, updating the second pixel point which is the preset step length from the left dynamic endpoint to an updated left dynamic endpoint.
Wherein the preset step size is determined based on the distance between the initial left dynamic endpoint and the initial right dynamic endpoint. The initial left dynamic endpoint is the left dynamic endpoint corresponding to the first iteration of the first round, namely the left boundary point of the target area. The initial right dynamic endpoint is the right dynamic endpoint of the second iteration of the first round, i.e., the right boundary point of the target region. For example, the preset step size is the product of the distance and a preset ratio, which may be 1/30.
The first iteration is an iterative process between the first pixel point and a second pixel point located at the left side of the first pixel point, and the second iteration is an iterative process between the first pixel point and a second pixel point located at the right side of the first pixel point.
Step S1006, entering the next round of first iteration, taking the updated left dynamic endpoint as the left dynamic endpoint corresponding to the next round of first iteration, returning to the step of determining the first variance value based on the positions of the left dynamic endpoint and the first pixel point, and continuing to execute until the first variance value meets the non-discrete condition, and stopping the first iteration.
Specifically, the next round of first iteration is entered, and the updated left dynamic endpoint is used as the left dynamic endpoint corresponding to the next round of first iteration. And returning to the step of determining the first variance value based on the positions of the left dynamic endpoint and the first pixel point, and continuing to execute until the first variance value meets the non-discrete condition, stopping the first iteration, and determining the left dynamic endpoint meeting the non-discrete condition.
When the first iteration is stopped, it is reflected that each pixel point in the first set when the first iteration is stopped is intensively distributed.
Step S1008, obtaining a right dynamic endpoint corresponding to the current second iteration from the second pixel point, and determining a second variance value based on the positions of the right dynamic endpoint and the first pixel point.
The left dynamic endpoint of the first round of the first iteration and the right dynamic endpoint of the first round of the second iteration are the two points at which the edge of the target part intersects the contour curve.
Specifically, an updated right dynamic endpoint corresponding to the previous round of second iteration is determined from the second pixel points, and the updated right dynamic endpoint corresponding to the previous round of second iteration is used as the right dynamic endpoint corresponding to the current second iteration. And determining a second set based on the positions of the right dynamic endpoint and the first pixel point, and performing variance calculation based on the positions of at least two pixel points in the second set to obtain a second variance value.
The second set may consist of the pixel points between the right dynamic endpoint and the first pixel point; or of those pixel points together with the right dynamic endpoint and the first pixel point; or of the right dynamic endpoint and the first pixel point only. This is not particularly limited.
It should be noted that, the second variance value can represent the degree of dispersion between the pixels in the second set, that is, reflect the degree of centralized distribution of the pixels in the second set.
In step S1010, if the second variance value does not satisfy the non-discrete condition, an updated right dynamic endpoint is determined from the second pixel point based on the preset step size.
Wherein the non-discrete condition may be that the variance value is less than or equal to a variance threshold.
Specifically, in the case where the second variance value is less than or equal to the variance threshold, it is determined that the second variance value satisfies the non-discrete condition. In the event that the second variance value is greater than the variance threshold, it is determined that the second variance value does not satisfy the non-discrete condition. And under the condition that the second variance value does not meet the non-discrete condition, updating the second pixel point which is the preset step length from the right dynamic endpoint to an updated right dynamic endpoint.
Wherein the preset step size is determined based on the distance between the initial left dynamic endpoint and the initial right dynamic endpoint. The initial left dynamic endpoint is the first iteration left dynamic endpoint of the first round, namely the target area left boundary point. The initial right dynamic endpoint is the right dynamic endpoint of the second iteration of the first round, i.e., the right boundary point of the target region. For example, the preset step size is the product of the distance and a preset ratio, which may be 1/30.
The first iteration is an iterative process between the first pixel point and a second pixel point located at the left side of the first pixel point, and the second iteration is an iterative process between the first pixel point and a second pixel point located at the right side of the first pixel point.
Step S1012, entering a next round of second iteration, taking the updated right dynamic endpoint as a right dynamic endpoint corresponding to the next round of second iteration, returning to the step of determining a second variance value based on the positions of the right dynamic endpoint and the first pixel point, and continuing to execute until the second variance value meets the non-discrete condition, and stopping the second iteration.
Specifically, the next round of the second iteration is entered, and the updated right dynamic endpoint is taken as the right dynamic endpoint corresponding to the next round of the second iteration. The procedure returns to the step of determining the second variance value based on the positions of the right dynamic endpoint and the first pixel point and continues until the second variance value satisfies the non-discrete condition, at which point the second iteration is stopped and the right dynamic endpoint satisfying the non-discrete condition is determined.
When the second iteration is stopped, it is reflected that each pixel point in the second set when the second iteration is stopped is intensively distributed.
Step S1014, intercepting the profile curve based on the left dynamic endpoint corresponding to the first iteration stop and the right dynamic endpoint corresponding to the second iteration stop, to obtain a target curve.
Specifically, the contour curve is intercepted through a left dynamic endpoint corresponding to the first iteration stop and a right dynamic endpoint corresponding to the second iteration stop, and a target curve comprising a first pixel point is obtained.
It should be noted that, the iteration process of the first iteration and the iteration process of the second iteration may be performed simultaneously, or may be performed sequentially according to a certain iteration sequence, which is not limited in particular.
In this embodiment, the first iteration and the second iteration respectively perform dynamic non-discrete verification on two endpoints of the contour curve, so that the degree of dispersion of the first set determined by the left dynamic endpoint corresponding to each round of first iteration can be reflected in real time, and the degree of dispersion of the second set determined by the right dynamic endpoint corresponding to the second iteration can be reflected in real time. Accordingly, a highly effective and highly reliable target curve can be obtained based on the left dynamic end point and the right dynamic end point satisfying the non-discrete condition.
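A hedged sketch of the first and second iterations (steps S1002 to S1014) is given below: both dynamic endpoints start at the boundary points of the contour and move toward the first pixel point by a preset step until the variance of the enclosed heights satisfies the non-discrete condition. The step of one thirtieth of the endpoint distance follows the example above, while the variance threshold and the function name are assumptions of this sketch.

```python
import numpy as np

def truncate_target_curve(ys: np.ndarray, first_idx: int, var_threshold: float = 4.0):
    """ys: heights of the contour pixels ordered by abscissa. Returns (left, right) indices."""
    left, right = 0, len(ys) - 1                      # initial left / right dynamic endpoints
    step = max(1, (right - left) // 30)               # preset step: endpoint distance x 1/30

    # First iteration: move the left dynamic endpoint toward the first pixel point.
    while np.var(ys[left:first_idx + 1]) > var_threshold and left + step < first_idx:
        left += step

    # Second iteration: move the right dynamic endpoint toward the first pixel point.
    while np.var(ys[first_idx:right + 1]) > var_threshold and right - step > first_idx:
        right -= step

    return left, right                                # the target curve is ys[left:right + 1]
```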
In one embodiment, the processor when executing the computer program further performs the steps of: from the contour curve, a preset number of left middle pixel points between the left dynamic end point and the first pixel point are screened out. And calculating a first variance value based on the positions of the screened preset number of left middle pixel points, the left dynamic endpoint and the first pixel points.
The preset number is smaller than or equal to the number of pixel points between the left dynamic endpoint and the first pixel point.
Specifically, the number of pixel points between the left dynamic endpoint and the first pixel point is determined, and the number is taken as a preset number. And taking the pixel point between the left dynamic endpoint and the first pixel point as a left middle pixel point. And obtaining a first variance value through variance calculation based on the positions of the left middle pixel point, the left dynamic endpoint and the first pixel point of the preset number.
In this embodiment, the integrity of the data is ensured by taking the left dynamic endpoint, the first pixel point and the left middle pixel point as a whole data set. In this way, the variance calculation is used to process the data set, so that the degree of dispersion among pixels in the data set can be intuitively and accurately reflected.
In one embodiment, the processor when executing the computer program further performs the steps of: from the contour curve, a preset number of right middle pixel points between the right dynamic endpoint and the first pixel point are screened out. And calculating a second variance value based on the positions of the screened preset number of right middle pixel points, the right dynamic endpoint and the first pixel points.
The preset number is smaller than or equal to the number of pixel points between the right dynamic endpoint and the first pixel point.
Specifically, the number of pixel points between the right dynamic endpoint and the first pixel point is determined, and the number is taken as a preset number. And taking the pixel point between the right dynamic endpoint and the first pixel point as a right middle pixel point. And obtaining a second variance value through variance calculation based on the positions of the preset number of right middle pixel points, the right dynamic end points and the first pixel points.
In this embodiment, the data integrity is ensured by taking the right dynamic endpoint, the first pixel point and the right middle pixel point as a whole data set. In this way, the variance calculation is used to process the data set, so that the degree of dispersion among pixels in the data set can be intuitively and accurately reflected.
In one embodiment, the processor when executing the computer program further performs the steps of: and under the condition that the first variance value does not meet the non-discrete condition, taking the pixel point which is positioned on the right side of the left dynamic endpoint and is separated from the left dynamic endpoint by a preset step length as the updated left dynamic endpoint.
Wherein the preset step size is determined based on the distance between the initial left dynamic endpoint and the initial right dynamic endpoint. The initial left dynamic endpoint is the first iteration left dynamic endpoint of the first round, namely the target area left boundary point. The initial right dynamic endpoint is the right dynamic endpoint of the second iteration of the first round, i.e., the right boundary point of the target region. For example, the preset step size is the product of the distance and a preset ratio, which may be 1/30.
It should be noted that, when the first variance value does not satisfy the non-discrete condition, the left dynamic endpoint needs to be moved by a preset step length toward the direction of the first pixel point, so as to obtain the updated left dynamic endpoint.
In this embodiment, when the first variance value does not satisfy the non-discrete condition, the left dynamic endpoint is dynamically adjusted to obtain an updated left dynamic endpoint, so that it can be ensured that the subsequent verification of the discreteness is performed based on the updated left dynamic endpoint. Therefore, the left dynamic endpoint for the next round of first iteration can be adjusted in real time and effectively based on the left dynamic endpoint which does not meet the non-discrete condition, the effectiveness in the first iteration process is ensured, and the credibility of the first iteration is improved.
In one embodiment, the processor when executing the computer program further performs the steps of: and under the condition that the second variance value does not meet the non-discrete condition, taking the pixel point which is positioned on the left side of the right dynamic endpoint and is separated from the right dynamic endpoint by a preset step length as the updated right dynamic endpoint.
Wherein the preset step size is determined based on the distance between the initial left dynamic endpoint and the initial right dynamic endpoint. The initial left dynamic endpoint is the first iteration left dynamic endpoint of the first round, namely the target area left boundary point. The initial right dynamic endpoint is the right dynamic endpoint of the second iteration of the first round, i.e., the right boundary point of the target region. For example, the preset step size is the product of the distance and a preset ratio, which may be 1/30.
It should be noted that, when the second variance value does not satisfy the non-discrete condition, the right dynamic endpoint needs to be moved by a preset step length toward the direction of the first pixel point, so as to obtain the updated right dynamic endpoint.
In this embodiment, when the second variance value does not satisfy the non-discrete condition, the right dynamic endpoint is dynamically adjusted to obtain an updated right dynamic endpoint, so that subsequent verification of the discreteness based on the updated right dynamic endpoint can be ensured. Therefore, the right dynamic endpoint for the next round of second iteration can be adjusted in real time and effectively based on the right dynamic endpoint which does not meet the non-discrete condition, the effectiveness in the second iteration process is ensured, and the credibility of the second iteration is improved.
In one embodiment, the processor when executing the computer program further performs the steps of: a regional extent of the target spine in the target curve is determined. And cutting out a curve which is positioned at the left side of the area range in the target curve to obtain a first curve. And cutting out a curve which is positioned on the right side of the area range in the target curve to obtain a second curve.
Wherein the area range is typically 1/4 of the target curve length. The distribution of the first curve and the second curve obtained by clipping is shown in fig. 9.
It should be noted that, since the spinous process may have a bulge at the top of the back area, it may interfere with the determination of the posture of the spine. Therefore, the area range is regarded as an invalid area, and errors of posture estimation caused by protrusion of the spinous process are avoided. Meanwhile, the area range is regarded as an ineffective area, so that the measuring principle of the actual scoliosis measuring ruler can be truly simulated.
In this embodiment, the first curve and the second curve are obtained by cutting the region occupied by the target spine out of the target curve, which avoids mis-predicting the posture of the target spine because of the protrusion of the spinous process and ensures the accuracy of posture determination.
In one embodiment, the processor when executing the computer program further performs the steps of: a common tangent to the first curve and the second curve is determined. And determining the posture of the target spine according to the slope of the common tangent line.
Specifically, a common tangent line of the first curve and the second curve is determined according to the positions of the pixel points in the first curve and the positions of the pixel points in the second curve, and the slope of the common tangent line is determined. An angle of the common tangent is determined based on the slope. And if the angle is within the threshold range, determining that the posture of the target spine is a normal posture. If the angle is not within the threshold range, determining that the posture of the target spine is an abnormal posture, namely, the target spine is in a lateral bending state.
Wherein the threshold range may be between 0 ° and 3 °. For example, as shown in fig. 11, if the angle of the common tangent is 7.078 °, it is determined that the posture of the target spine is abnormal, that is, the target spine is in a lateral curvature state.
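As an illustration only, the sketch below finds an upper common tangent by brute force over point pairs and converts its slope to an angle; it assumes each curve is an (N, 2) array of (x, y) points with y measured upward and non-overlapping x ranges, and the function name is invented here. The 3-degree threshold follows the range given above.

```python
import numpy as np

def spine_posture(first_curve: np.ndarray, second_curve: np.ndarray,
                  angle_threshold_deg: float = 3.0) -> str:
    """Classify the posture from the angle of the common tangent of the two curves."""
    all_pts = np.vstack([first_curve, second_curve])
    best_slope = None
    for p in first_curve:                 # brute force: O(N * M) candidate lines
        for q in second_curve:
            slope = (q[1] - p[1]) / (q[0] - p[0])
            # Upper common tangent: every pixel of both curves lies on or below the line.
            if np.all(all_pts[:, 1] <= slope * (all_pts[:, 0] - p[0]) + p[1] + 1e-6):
                best_slope = slope
    if best_slope is None:
        raise ValueError("no common tangent found; check the curve x ranges")
    angle = abs(np.degrees(np.arctan(best_slope)))
    return "normal" if angle <= angle_threshold_deg else "lateral curvature suspected"
```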
In the prior art, the straight line obtained from the left peak of the left back contour and the right peak of the right back contour is directly embedded into the back region (curve 1 in case a of fig. 12), which causes errors in scoliosis measurement; in addition, variations in back morphology change the magnitude of this error. In this embodiment, the first curve and the second curve with a low degree of dispersion are determined by the variance, and by determining the common tangent of the first curve and the second curve (curve 2 in case b of fig. 12), it is ensured that the common tangent is not embedded into the back region and that the measurement principle of an actual scoliosis measuring ruler is faithfully simulated, which greatly ensures the accuracy of posture determination.
In this embodiment, determining the common tangent of the first curve and the second curve faithfully reproduces the measurement made with an actual scoliosis ruler, so that a true and effective posture evaluation result is obtained and the accuracy of determining the posture of the target spine is greatly improved.
To make the technical solution of the present application clearer, a more detailed embodiment is described below with reference to fig. 13. A photographing device is prepared, a picture of the target part of the target object in the forward-bending position is collected by the photographing device and sent to a server provided with the computer readable storage medium, and the following steps are executed by the computer program stored in the computer readable storage medium:
Step 1: based on the extraction requirement, a target extraction model is determined from a plurality of preset extraction models, the target part in the target image is extracted by the target extraction model to obtain an extraction result, and the contour curve corresponding to the extraction result is determined. When the preset height condition is that the first height is reached, the pixel points that reach the first height are taken as pixel points to be processed, one of these pixel points to be processed is selected arbitrarily as the first pixel point, and the pixel points other than the first pixel point are taken as second pixel points.
Step 2: and determining an updated left dynamic endpoint corresponding to the previous round of first iteration from the second pixel point, and taking the updated left dynamic endpoint corresponding to the previous round of first iteration as the left dynamic endpoint corresponding to the current first iteration. From the contour curve, a preset number of left middle pixel points between the left dynamic end point and the first pixel point are screened out. And calculating a first variance value based on the positions of the screened preset number of left middle pixel points, the left dynamic endpoint and the first pixel points. And under the condition that the first variance value does not meet the non-discrete condition, taking the pixel point which is positioned on the right side of the left dynamic endpoint and is separated from the left dynamic endpoint by a preset step length as the updated left dynamic endpoint. And entering a next round of first iteration, taking the updated left dynamic endpoint as a left dynamic endpoint corresponding to the next round of first iteration, returning to the step of determining a first variance value based on the positions of the left dynamic endpoint and the first pixel point, and continuing to execute until the first variance value meets the non-discrete condition, and stopping the first iteration.
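Purely as an illustration, the pixel partition of step 1 and the left-side iteration of step 2 can be sketched as follows. The sketch assumes the contour is a list of (x, height) points sorted by ascending x, reads "reaching the first height" as reaching the maximum height on the contour, takes the variance over pixel heights, and treats the non-discrete condition as a fixed variance threshold; the function names, sample count, step size and threshold are placeholder assumptions rather than values from the application.

import random
from statistics import pvariance

def partition_pixels(contour, tol=0.0):
    """Split the contour into one first pixel point and the second pixel points."""
    first_height = max(h for _, h in contour)
    candidates = [p for p in contour if p[1] >= first_height - tol]  # pixels to be processed
    first_pixel = random.choice(candidates)       # any candidate may be chosen
    second_pixels = [p for p in contour if p != first_pixel]
    return first_pixel, second_pixels

def advance_left_endpoint(contour, first_idx, left_idx,
                          num_samples=10, step=5, var_threshold=4.0):
    """Move the left dynamic endpoint rightwards until the non-discrete condition holds.

    first_idx is the index of the first pixel point (the highest point) and
    left_idx is the starting index of the left dynamic endpoint.
    """
    idx = left_idx
    while True:
        span = list(range(idx, first_idx + 1))
        stride = max(1, len(span) // (num_samples + 1))
        middle = [contour[i][1] for i in span[1:-1:stride]][:num_samples]
        heights = [contour[idx][1]] + middle + [contour[first_idx][1]]
        if pvariance(heights) <= var_threshold:   # non-discrete condition met
            return idx
        if idx + step >= first_idx:               # guard: cannot move further right
            return idx
        idx += step                               # updated left dynamic endpoint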
Step 3: and determining an updated right dynamic endpoint corresponding to the previous round of second iteration from the second pixel points, and taking the updated right dynamic endpoint corresponding to the previous round of second iteration as the right dynamic endpoint corresponding to the current second iteration. From the contour curve, a preset number of right middle pixel points between the right dynamic endpoint and the first pixel point are screened out. And calculating a second variance value based on the positions of the screened preset number of right middle pixel points, the right dynamic endpoint and the first pixel points. And under the condition that the second variance value does not meet the non-discrete condition, taking the pixel point which is positioned on the left side of the right dynamic endpoint and is separated from the right dynamic endpoint by a preset step length as the updated right dynamic endpoint. And entering a next round of second iteration, taking the updated right dynamic endpoint as a right dynamic endpoint corresponding to the next round of second iteration, returning to the step of determining a second variance value based on the positions of the right dynamic endpoint and the first pixel point, and continuing to execute until the second variance value meets the non-discrete condition, and stopping the second iteration. And intercepting the profile curve based on the left dynamic endpoint corresponding to the first iteration stop and the right dynamic endpoint corresponding to the second iteration stop to obtain a target curve (corresponding to the step of extracting the profile curve at the top end of the target part in fig. 13).
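Under the same assumptions, the right-side iteration of step 3 mirrors the left-side sketch, with the endpoint stepping left each round, after which the target curve is intercepted between the two converged endpoints; all names and defaults remain illustrative.

from statistics import pvariance

def advance_right_endpoint(contour, first_idx, right_idx,
                           num_samples=10, step=5, var_threshold=4.0):
    """Mirror of advance_left_endpoint: the right dynamic endpoint steps left."""
    idx = right_idx
    while True:
        span = list(range(first_idx, idx + 1))
        stride = max(1, len(span) // (num_samples + 1))
        middle = [contour[i][1] for i in span[1:-1:stride]][:num_samples]
        heights = [contour[first_idx][1]] + middle + [contour[idx][1]]
        if pvariance(heights) <= var_threshold:   # second variance value is acceptable
            return idx
        if idx - step <= first_idx:               # guard: cannot move further left
            return idx
        idx -= step                               # updated right dynamic endpoint

def intercept_target_curve(contour, left_idx, right_idx):
    """Intercept the contour between the two converged dynamic endpoints."""
    return contour[left_idx:right_idx + 1]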
Step 4: a regional extent of the target spine in the target curve is determined. And cutting out a curve which is positioned at the left side of the area range in the target curve to obtain a first curve. And cutting out a curve which is positioned on the right side of the area range in the target curve to obtain a second curve. A common tangent to the first curve and the second curve is determined. And determining the posture of the target spine according to the slope of the common tangent line, and displaying the posture (corresponding to the step of extracting the top profile curve of the target part in fig. 13).
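Tying the sketches above together, one possible end-to-end flow for steps 1 to 4 is shown below; target_image stands for the picture from the photographing device, extract_back_contour is a hypothetical placeholder for the model-based contour extraction, the spine position is crudely estimated from the highest point, and every threshold and step size is an assumption rather than a value from the application.

contour = extract_back_contour(target_image)     # hypothetical extractor
first_pixel, _ = partition_pixels(contour)
first_idx = contour.index(first_pixel)

left_idx = advance_left_endpoint(contour, first_idx, 0)
right_idx = advance_right_endpoint(contour, first_idx, len(contour) - 1)
target_curve = intercept_target_curve(contour, left_idx, right_idx)

spine_x = contour[first_idx][0]                  # crude spine-position estimate
first_curve, second_curve = split_target_curve(target_curve, spine_x)
angle = common_tangent_angle(first_curve, second_curve)
print("normal posture" if is_normal_posture(angle) else "possible lateral curvature")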
In this embodiment, the contour curve corresponding to the target part is acquired rapidly through image acquisition, which simplifies the collection of information about the target part and avoids the harm that radiographic examination of the target part would cause. Through the preset height condition, the pixel points on the contour curve are divided into first pixel points that satisfy the preset height condition and the remaining second pixel points, so that the degree of dispersion between the first pixel point and each second pixel point can be determined. The degree of dispersion accurately reflects how concentrated the pixel points of the contour curve are, and a target curve that satisfies the non-discrete condition and contains the first pixel point is intercepted from the contour curve, which ensures that the pixel points of the target curve are concentrated. Based on the position of the target spine in the target curve, the first curve and the second curve located on either side of the target spine can then be determined, which prevents the protruding structure of the target spine from interfering with posture determination and further ensures that the posture determined from the first curve and the second curve accurately simulates the posture measured with a scoliosis ruler. Multi-step manual measurement is thus avoided, the posture of the target spine is evaluated realistically, and the efficiency of determining the spinal posture is greatly improved.
In one embodiment, a spinal posture determination method is provided, which is executable by a computer device, and specifically comprises the steps of: acquiring a target image obtained by acquiring an image of a target part of a target object, and extracting a contour curve corresponding to the target part in the target image; the target site includes a target spine therein. And determining a first pixel point meeting a preset height condition in the profile curve and a second pixel point except the first pixel point. Based on the discrete degree between the first pixel point and each second pixel point, a target curve which meets the non-discrete condition and comprises the first pixel point is intercepted from the contour curve. The target curve is divided into a first curve and a second curve according to the position of the target spine in the target curve. And determining the posture of the target spine according to the first curve and the second curve.
In one embodiment, the capturing, from the contour curve, a target curve that satisfies a non-discrete condition and includes the first pixel based on the degree of dispersion between the first pixel and each second pixel includes: and acquiring a left dynamic endpoint corresponding to the current first iteration from the second pixel point, and determining a first variance value based on the positions of the left dynamic endpoint and the first pixel point. And under the condition that the first variance value does not meet the non-discrete condition, determining an updated left dynamic endpoint from the second pixel point based on a preset step length. And entering a next round of first iteration, taking the updated left dynamic endpoint as a left dynamic endpoint corresponding to the next round of first iteration, returning to the step of determining a first variance value based on the positions of the left dynamic endpoint and the first pixel point, and continuing to execute until the first variance value meets the non-discrete condition, and stopping the first iteration. And acquiring a right dynamic endpoint corresponding to the current second iteration from the second pixel point, and determining a second variance value based on the positions of the right dynamic endpoint and the first pixel point. And under the condition that the second variance value does not meet the non-discrete condition, determining an updated right dynamic endpoint from the second pixel point based on the preset step length. And entering a next round of second iteration, taking the updated right dynamic endpoint as a right dynamic endpoint corresponding to the next round of second iteration, returning to the step of determining a second variance value based on the positions of the right dynamic endpoint and the first pixel point, and continuing to execute until the second variance value meets the non-discrete condition, and stopping the second iteration. And intercepting the profile curve based on the left dynamic endpoint corresponding to the first iteration stop and the right dynamic endpoint corresponding to the second iteration stop to obtain a target curve.
In one embodiment, the determining the first variance value based on the position of the left dynamic endpoint and the first pixel point includes: from the contour curve, a preset number of left middle pixel points between the left dynamic end point and the first pixel point are screened out. And calculating a first variance value based on the positions of the screened preset number of left middle pixel points, the left dynamic endpoint and the first pixel points.
In one embodiment, the determining the second variance value based on the right dynamic endpoint and the position of the first pixel point includes: from the contour curve, a preset number of right middle pixel points between the right dynamic endpoint and the first pixel point are screened out. And calculating a second variance value based on the positions of the screened preset number of right middle pixel points, the right dynamic endpoint and the first pixel points.
In one embodiment, the determining, based on a preset step size, the updated left dynamic endpoint from the second pixel point if the first variance value does not satisfy the non-discrete condition includes: and under the condition that the first variance value does not meet the non-discrete condition, taking the pixel point which is positioned on the right side of the left dynamic endpoint and is separated from the left dynamic endpoint by a preset step length as the updated left dynamic endpoint.
In one embodiment, the determining, based on the preset step size, the updated right dynamic endpoint from the second pixel point if the second variance value does not satisfy the non-discrete condition includes: and under the condition that the second variance value does not meet the non-discrete condition, taking the pixel point which is positioned on the left side of the right dynamic endpoint and is separated from the right dynamic endpoint by a preset step length as the updated right dynamic endpoint.
In one embodiment, the dividing the target curve into a first curve and a second curve according to the location of the target spine in the target curve includes: a regional extent of the target spine in the target curve is determined. And cutting out a curve which is positioned at the left side of the area range in the target curve to obtain a first curve. And cutting out a curve which is positioned on the right side of the area range in the target curve to obtain a second curve.
In one embodiment, the determining the pose of the target spine from the first curve and the second curve comprises: a common tangent to the first curve and the second curve is determined. And determining the posture of the target spine according to the slope of the common tangent line.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in these flowcharts may comprise several sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with at least part of the other steps, sub-steps or stages.
Based on the same inventive concept, the embodiments of the present application also provide a spinal posture determination device for implementing the above-mentioned spinal posture determination method. The implementation of the solution provided by the device is similar to that described in the above method, so the specific limitations in one or more embodiments of the spinal posture determination device provided below may be found in the limitations of the spinal posture determination method described above, and are not repeated here.
In one embodiment, as shown in FIG. 14, there is provided a spinal posture determining apparatus comprising an extraction module 1402, a determining module 1404, an intercepting module 1406 and a dividing module 1408, wherein:
The extraction module 1402 is configured to obtain a target image obtained by performing image acquisition on a target portion of a target object, and to extract a contour curve corresponding to the target portion in the target image; the target portion includes a target spine.
The determining module 1404 is configured to determine a first pixel point in the contour curve that meets a preset height condition, and a second pixel point other than the first pixel point.
The intercepting module 1406 is configured to intercept, from the contour curve, a target curve that meets a non-discrete condition and includes the first pixel point based on a degree of dispersion between the first pixel point and each second pixel point.
The dividing module 1408 is configured to divide the target curve into a first curve and a second curve according to a position of the target spine in the target curve.
The determining module 1404 is further configured to determine a pose of the target spine based on the first curve and the second curve.
In one embodiment, the intercepting module 1406 is configured to obtain a left dynamic endpoint corresponding to the current first iteration from the second pixel point, and determine a first variance value based on the positions of the left dynamic endpoint and the first pixel point. And under the condition that the first variance value does not meet the non-discrete condition, determining an updated left dynamic endpoint from the second pixel point based on a preset step length. And entering a next round of first iteration, taking the updated left dynamic endpoint as a left dynamic endpoint corresponding to the next round of first iteration, returning to the step of determining a first variance value based on the positions of the left dynamic endpoint and the first pixel point, and continuing to execute until the first variance value meets the non-discrete condition, and stopping the first iteration. And acquiring a right dynamic endpoint corresponding to the current second iteration from the second pixel point, and determining a second variance value based on the positions of the right dynamic endpoint and the first pixel point. And under the condition that the second variance value does not meet the non-discrete condition, determining an updated right dynamic endpoint from the second pixel point based on the preset step length. And entering a next round of second iteration, taking the updated right dynamic endpoint as a right dynamic endpoint corresponding to the next round of second iteration, returning to the step of determining a second variance value based on the positions of the right dynamic endpoint and the first pixel point, and continuing to execute until the second variance value meets the non-discrete condition, and stopping the second iteration. And intercepting the profile curve based on the left dynamic endpoint corresponding to the first iteration stop and the right dynamic endpoint corresponding to the second iteration stop to obtain a target curve.
In one embodiment, the intercepting module 1406 is configured to screen out, from the contour curve, a preset number of left middle pixel points between the left dynamic endpoint and the first pixel point, and to calculate a first variance value based on the positions of the screened preset number of left middle pixel points, the left dynamic endpoint and the first pixel point.
In one embodiment, the intercepting module 1406 is configured to screen out, from the contour curve, a preset number of right middle pixel points between the right dynamic endpoint and the first pixel point, and to calculate a second variance value based on the positions of the screened preset number of right middle pixel points, the right dynamic endpoint and the first pixel point.
In one embodiment, the intercepting module 1406 is configured to, if the first variance value does not satisfy the non-discrete condition, use a pixel located on the right side of the left dynamic endpoint and separated from the left dynamic endpoint by a predetermined step as the updated left dynamic endpoint.
In one embodiment, the intercepting module 1406 is configured to, if the second variance value does not satisfy the non-discrete condition, use a pixel point located on the left side of the right dynamic endpoint and separated from the right dynamic endpoint by a preset step size as the updated right dynamic endpoint.
The dividing module 1408 is configured to determine the area range of the target spine in the target curve, to clip the portion of the target curve to the left of the area range to obtain a first curve, and to clip the portion of the target curve to the right of the area range to obtain a second curve.
The determining module 1404 is further configured to determine a common tangent of the first curve and the second curve, and to determine the posture of the target spine according to the slope of the common tangent.
The modules in the spinal posture determination device described above may be implemented in whole or in part by software, hardware or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in the form of hardware, or stored in a memory of the computer device in the form of software, so that the processor can invoke and execute the operations corresponding to the modules.
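As a purely illustrative example of implementing the modules in software, the four modules could be wired together as plain callables on one device object; the class name, attribute names and call signatures below are hypothetical.

class SpinePostureDevice:
    """Illustrative composition of the four modules described above."""

    def __init__(self, extraction, interception, division, determination):
        self.extraction = extraction        # target image -> contour curve
        self.interception = interception    # contour curve -> target curve
        self.division = division            # target curve -> (first curve, second curve)
        self.determination = determination  # (first curve, second curve) -> posture

    def determine_posture(self, target_image):
        contour = self.extraction(target_image)
        target_curve = self.interception(contour)
        first_curve, second_curve = self.division(target_curve)
        return self.determination(first_curve, second_curve)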
In one embodiment, a computer device is provided, which may be a terminal or a server, and whose internal structure may be as shown in fig. 15. The computer device includes a processor, a memory, an input/output (I/O) interface and a communication interface. The processor, the memory and the input/output interface are connected by a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store spinal posture determination data. The input/output interface of the computer device is used to exchange information between the processor and external devices. The communication interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a spinal posture determination method.
It will be appreciated by those skilled in the art that the structure shown in fig. 15 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application is applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer readable storage medium, and the computer program, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM) or an external cache, and the like. By way of illustration and not limitation, RAM is available in a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, and quantum-computing-based data processing logic devices.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they are not to be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (12)

1. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program when executed by a processor realizes the steps of:
acquiring a target image obtained by acquiring an image of a target part of a target object, and extracting a contour curve corresponding to the target part in the target image; the target part comprises a target spine;
determining a first pixel point meeting a preset height condition in the contour curve and a second pixel point except the first pixel point;
based on the discrete degree between the first pixel point and each second pixel point, a target curve which meets the non-discrete condition and comprises the first pixel point is intercepted from the contour curve;
dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve;
and determining the posture of the target spine according to the first curve and the second curve.
2. The computer readable storage medium of claim 1, wherein the processor, when executing the computer program, further implements the steps of:
acquiring a left dynamic endpoint corresponding to the current first iteration from the second pixel point, and determining a first variance value based on the positions of the left dynamic endpoint and the first pixel point;
under the condition that the first variance value does not meet the non-discrete condition, determining an updated left dynamic endpoint from the second pixel point based on a preset step length;
entering a next round of first iteration, taking the updated left dynamic endpoint as a left dynamic endpoint corresponding to the next round of first iteration, returning to the step of determining a first variance value based on the positions of the left dynamic endpoint and the first pixel point, and continuing to execute until the first variance value meets the non-discrete condition, and stopping the first iteration;
acquiring a right dynamic endpoint corresponding to the current second iteration from the second pixel point, and determining a second variance value based on the positions of the right dynamic endpoint and the first pixel point;
under the condition that the second variance value does not meet the non-discrete condition, determining an updated right dynamic endpoint from the second pixel point based on the preset step length;
entering a next round of second iteration, taking the updated right dynamic endpoint as a right dynamic endpoint corresponding to the next round of second iteration, returning to the step of determining a second variance value based on the positions of the right dynamic endpoint and the first pixel point, and continuing to execute until the second variance value meets the non-discrete condition, and stopping the second iteration;
and intercepting the contour curve based on the left dynamic endpoint corresponding to the first iteration stop and the right dynamic endpoint corresponding to the second iteration stop to obtain a target curve.
3. The computer readable storage medium of claim 2, wherein the processor, when executing the computer program, further implements the steps of:
screening out a preset number of left middle pixel points between the left dynamic endpoint and the first pixel point from the contour curve;
and calculating a first variance value based on the positions of the screened preset number of left middle pixel points, the left dynamic endpoint and the first pixel point.
4. The computer readable storage medium of claim 2, wherein the processor, when executing the computer program, further implements the steps of:
screening out a preset number of right middle pixel points between the right dynamic endpoint and the first pixel point from the contour curve;
and calculating a second variance value based on the positions of the screened preset number of right middle pixel points, the right dynamic endpoint and the first pixel point.
5. The computer readable storage medium of claim 2, wherein the processor, when executing the computer program, further implements the steps of:
and under the condition that the first variance value does not meet the non-discrete condition, taking the pixel point which is positioned on the right side of the left dynamic endpoint and is separated from the left dynamic endpoint by a preset step length as the updated left dynamic endpoint.
6. The computer readable storage medium of claim 2, wherein the processor, when executing the computer program, further implements the steps of:
and under the condition that the second variance value does not meet the non-discrete condition, taking the pixel point which is positioned on the left side of the right dynamic endpoint and is separated from the right dynamic endpoint by a preset step length as the updated right dynamic endpoint.
7. The computer readable storage medium of any one of claims 1 to 6, wherein the processor, when executing the computer program, further implements the steps of:
determining the region range of the target spine in the target curve;
cutting out a curve which is positioned at the left side of the area range in the target curve to obtain a first curve;
and cutting out a curve which is positioned on the right side of the area range in the target curve to obtain a second curve.
8. The computer readable storage medium of any one of claims 1 to 6, wherein the processor, when executing the computer program, further implements the steps of:
determining a common tangent of the first curve and the second curve;
and determining the posture of the target spine according to the slope of the common tangent line.
9. A method of spinal posture determination, the method comprising:
acquiring a target image obtained by acquiring an image of a target part of a target object, and extracting a contour curve corresponding to the target part in the target image; the target part comprises a target spine;
determining a first pixel point meeting a preset height condition in the contour curve and a second pixel point except the first pixel point;
based on the discrete degree between the first pixel point and each second pixel point, a target curve which meets the non-discrete condition and comprises the first pixel point is intercepted from the contour curve;
dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve;
and determining the posture of the target spine according to the first curve and the second curve.
10. A spinal posture determination device, the device comprising:
the extraction module is used for acquiring a target image obtained by image acquisition of a target part of a target object and extracting a contour curve corresponding to the target part in the target image; the target part comprises a target spine;
the determining module is used for determining a first pixel point which meets the preset height condition in the profile curve and a second pixel point except the first pixel point;
the intercepting module is used for intercepting a target curve which meets non-discrete conditions and comprises the first pixel point from the contour curve based on the discrete degree between the first pixel point and each second pixel point;
the dividing module is used for dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve;
the determining module is further configured to determine a posture of the target spine according to the first curve and the second curve.
11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, performs the steps of:
acquiring a target image obtained by acquiring an image of a target part of a target object, and extracting a contour curve corresponding to the target part in the target image; the target part comprises a target spine;
determining a first pixel point meeting a preset height condition in the contour curve and a second pixel point except the first pixel point;
based on the discrete degree between the first pixel point and each second pixel point, a target curve which meets the non-discrete condition and comprises the first pixel point is intercepted from the contour curve;
dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve;
and determining the posture of the target spine according to the first curve and the second curve.
12. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, performs the steps of:
acquiring a target image obtained by acquiring an image of a target part of a target object, and extracting a contour curve corresponding to the target part in the target image; the target part comprises a target spine;
determining a first pixel point meeting a preset height condition in the contour curve and a second pixel point except the first pixel point;
based on the discrete degree between the first pixel point and each second pixel point, a target curve which meets the non-discrete condition and comprises the first pixel point is intercepted from the contour curve;
dividing the target curve into a first curve and a second curve according to the position of the target spine in the target curve;
and determining the posture of the target spine according to the first curve and the second curve.
CN202210867625.2A 2022-07-22 2022-07-22 Computer readable storage medium, spine posture determining method and apparatus Pending CN117474829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210867625.2A CN117474829A (en) 2022-07-22 2022-07-22 Computer readable storage medium, spine posture determining method and apparatus

Publications (1)

Publication Number Publication Date
CN117474829A true CN117474829A (en) 2024-01-30

Family

ID=89624363

Country Status (1)

Country Link
CN (1) CN117474829A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination