CN111259788A - Method and device for detecting head and neck inflection point and computer equipment - Google Patents


Info

Publication number
CN111259788A
Authority
CN
China
Prior art keywords
dimensional
data
data point
head
point set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010040825.1A
Other languages
Chinese (zh)
Inventor
王荣军
张晶
刘利康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weijing medical equipment (Tianjin) Co.,Ltd.
Original Assignee
Hoz Minimally Invasive Medical Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hoz Minimally Invasive Medical Technology Beijing Co ltd filed Critical Hoz Minimally Invasive Medical Technology Beijing Co ltd
Priority to CN202010040825.1A
Publication of CN111259788A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/40 Extraction of image or video features
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing


Abstract

The application provides a method and a device for detecting a head and neck inflection point, and computer equipment. It relates to the technical field of medical data processing and addresses the technical problem of low identification precision of the head and neck inflection point. The method comprises the following steps: determining a first data point set of an initial central side shadow line, wherein the initial central side shadow line is the side shadow line of the corrected point cloud object of the human head and face in three-dimensional Euclidean space; projecting all data points in the first data point set onto a two-dimensional plane to obtain a two-dimensional second data point set, the two-dimensional plane being perpendicular to the X-axis of the three-dimensional Euclidean space; performing smoothing on the second data point set with a sliding window algorithm to obtain a three-dimensional third data point set; and calculating the slope and variance of each data point in the third data point set on the two-dimensional plane and determining the head and neck inflection point from the third data point set according to the slope and variance.

Description

Method and device for detecting head and neck inflection point and computer equipment
Technical Field
The application relates to the technical field of medical data processing, in particular to a method and a device for detecting a head and neck inflection point and computer equipment.
Background
Currently, the application of robot technology to the field of medical surgery has received much attention and is one of the leading hot spots in robotics research. Robot technology not only brings major technical changes in accurate surgical positioning, minimally invasive surgery, nondestructive diagnosis and treatment and the like, but also changes many concepts of conventional surgery, so the research and development of robotic surgical medical equipment is of great significance to clinical medicine and rehabilitation engineering.
Medical surgical robot systems are currently in widespread use in many medical fields, for example ultrasound-based teleoperated surgical systems, teleoperated robotic systems for heart valve repair, minimally invasive robotic systems for fiber surgery, and voice-activated surgical systems for abdominal surgery. In any surgical robot system, a mapping between the computer image space and the surgical space needs to be established, which requires extracting anatomical feature points from the medical image and the feature points at the corresponding positions of the real human body for spatial registration. Since both the coarse and the fine registration algorithm need to fully exploit the spatial consistency and positional convergence of the corresponding feature points of the registering and registered objects, the effect of a registration algorithm is closely tied to the quality of feature point extraction.
In the research of frameless stereotactic brain surgery robots, with increasing requirements on system automation and intelligence, the technologies of automatic acquisition of human head and face spatial data, model reconstruction and key feature recognition have become more and more important. How to effectively extract highly discriminative features that carry unique information, such as the head and neck inflection point, from a three-dimensional facial surface has become a hot spot of current three-dimensional face recognition research.
However, because the head and neck shapes of different populations differ greatly, identification accuracy is low if a common template matching or shape-based curve identification algorithm is adopted.
Disclosure of Invention
The invention aims to provide a method and a device for detecting a head and neck inflection point and computer equipment, which aim to solve the technical problem of low identification precision of the head and neck inflection point.
In a first aspect, an embodiment of the present application provides a method for detecting a head and neck inflection point, where the method includes:
determining a first data point set of an initial central side shadow line, wherein the initial central side shadow line is a side shadow line of a point cloud object of a human head and face part after correction in a three-dimensional Euclidean space;
projecting all data points in the first data point set onto a two-dimensional plane to obtain a two-dimensional second data point set, wherein the two-dimensional plane is perpendicular to the X-axis of the three-dimensional Euclidean space;
performing smoothing processing on the basis of the second data point set through a sliding window algorithm to obtain a three-dimensional third data point set;
calculating a slope and a variance of each data point in the third set of data points on the two-dimensional plane, and determining a head and neck inflection point from the third set of data points according to the slope and the variance.
In a possible implementation, before the step of projecting all data points in the first data point set to a two-dimensional plane to obtain a two-dimensional second data point set, the method further includes:
and establishing a two-dimensional rectangular coordinate system perpendicular to the X-axis in the three-dimensional Euclidean space to obtain a two-dimensional plane perpendicular to the X-axis.
In one possible implementation, the step of projecting all data points in the first data point set to a two-dimensional plane to obtain a two-dimensional second data point set includes:
and mapping all data points in the first data point set into data points on the two-dimensional plane through dimensionality reduction to obtain a two-dimensional second data point set on the two-dimensional plane.
In one possible implementation, the step of performing smoothing processing based on the second data point set by using a sliding window algorithm to obtain a three-dimensional third data point set includes:
sorting the data points in the second data point set according to the magnitude sequence of the X-axis coordinate values in the two-dimensional plane;
generating a histogram of the two-dimensional plane based on the sorted second set of data points;
and taking the group distance of the histogram as the window width of a sliding window, and smoothing the data of the histogram through a sliding window algorithm to obtain a three-dimensional third data point set.
In one possible implementation, the step of determining a head-neck inflection point from the third set of data points based on the slope and the variance comprises:
and determining the target data point corresponding to the position where the slope is greater than zero and the variance is greater than a preset value as a head and neck inflection point for all data points in the third data point set.
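As a sketch of this criterion, assuming the data points are already sorted by x and taking the window variance as the statistical feature (the window radius and variance threshold below are illustrative values, not fixed by the application):

```python
import numpy as np

def find_head_neck_inflection(points_2d, radius=3, var_threshold=0.5):
    # Scan a 2-D point column (sorted by x). Return the index of the first
    # point whose locally fitted slope is positive and whose window variance
    # exceeds the threshold; `radius` and `var_threshold` are hypothetical
    # tuning parameters, not values fixed by the application.
    pts = np.asarray(points_2d, dtype=float)
    xs, ys = pts[:, 0], pts[:, 1]
    for i in range(radius, len(pts) - radius):
        wx = xs[i - radius:i + radius + 1]
        wy = ys[i - radius:i + radius + 1]
        slope = np.polyfit(wx, wy, 1)[0]   # slope of a linear fit over the window
        if slope > 0 and np.var(wy) > var_threshold:
            return i
    return None
```

Any point whose fitted local slope is positive while its window variance exceeds the threshold is reported as the candidate head and neck inflection point.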
In one possible implementation, the method further comprises:
and determining the coordinate value of the target data point as the coordinate value of the head and neck inflection point, and outputting the coordinate value of the head and neck inflection point.
In a second aspect, there is provided a device for detecting a head and neck inflection point, including:
a determining module, configured to determine a first data point set of an initial central side shadow line, wherein the initial central side shadow line is the side shadow line of the corrected point cloud object of the human head and face in three-dimensional Euclidean space;
a projection module, configured to project all data points in the first data point set onto a two-dimensional plane to obtain a two-dimensional second data point set, wherein the two-dimensional plane is perpendicular to the X-axis of the three-dimensional Euclidean space;
the processing module is used for carrying out smoothing processing on the basis of the second data point set through a sliding window algorithm to obtain a three-dimensional third data point set;
and the calculation module is used for calculating the slope and the variance of each data point in the third data point set on the two-dimensional plane and determining a head and neck inflection point from the third data point set according to the slope and the variance.
In one possible implementation, the processing module is specifically configured to:
sorting the data points in the second data point set according to the magnitude sequence of the X-axis coordinate values in the two-dimensional plane;
generating a histogram of the two-dimensional plane based on the sorted second set of data points;
and taking the group distance of the histogram as the window width of a sliding window, and smoothing the data of the histogram through a sliding window algorithm to obtain a three-dimensional third data point set.
In a third aspect, an embodiment of the present application further provides a computer device, including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor implements the method of the first aspect when executing the computer program.
In a fourth aspect, this embodiment of the present application further provides a computer-readable storage medium storing machine executable instructions, which, when invoked and executed by a processor, cause the processor to perform the method of the first aspect.
The embodiment of the application brings the following beneficial effects:
the method, the device and the computer equipment for detecting the head and neck inflection point provided by the embodiment of the application can determine a first data point set of an initial central side shadow line, wherein the initial central side shadow line is a side shadow line of a human head and face point cloud object after being corrected in a three-dimensional Euclidean space, all data points in the first data point set are projected to a two-dimensional plane vertical to an X axis in the three-dimensional Euclidean space, so that a two-dimensional second data point set is obtained, smoothing processing is performed through a sliding window algorithm based on the second data point set so that a three-dimensional third data point set is obtained, finally, the slope and the variance of each data point in the third data point set on the two-dimensional plane are calculated, and the head and neck inflection point is determined from the third data point set according to the slope and the variance. All data points in the first data point set are projected to a two-dimensional plane perpendicular to an X axis in a three-dimensional Euclidean space, smoothing processing is carried out through a sliding window algorithm based on the second data point set so as to obtain a three-dimensional third data point set, and then a head and neck inflection point is determined from the third data point set according to a slope and a variance, so that the actual head and neck inflection point can be detected more accurately.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the detailed description of the present application or the technical solutions in the prior art, the drawings needed to be used in the detailed description of the present application or the prior art description will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic flow chart of a method for detecting a head and neck inflection point according to an embodiment of the present disclosure;
fig. 2 is another flowchart of a method for detecting a head and neck inflection point according to an embodiment of the present disclosure;
fig. 3 is a histogram on a plane S3 provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an intersection point set of a Y-axis cross section and a point cloud provided in an embodiment of the present disclosure;
FIGS. 5A and B are graphs illustrating the effect of horizontal correction provided by embodiments of the present application;
FIG. 6 is a schematic diagram of an intersection point set strip of an X-axis cross section and a point cloud provided in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a device for detecting a head and neck inflection point according to an embodiment of the present application;
fig. 8 is a schematic structural diagram illustrating a computer device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the present application will be described in detail and completely with reference to the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "comprising" and "having," and any variations thereof, as referred to in the embodiments of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
At present, in the technical fields of medical image processing and surgical navigation, the junction of the human head and neck must be located in the skull surface data acquired by a three-dimensional scanner. A human face feature point identification algorithm comprises the following steps: face preprocessing (down-sampling, noise filtering, data format conversion, etc.), pre-alignment, region-of-interest detection, and feature extraction. Defining the identification starting point in region-of-interest detection is a very important computational step.
Since the orientation, direction, range and point cloud density of three-dimensional face data are unknown before recognition, locating, in the absence of guidance information, the specific region of the face data that is most easily defined, that is, starting point detection, is one of the key technologies for limiting the data range used in recognition.
The process of extracting the feature points comprises the following steps: the detected points are used as recognition starting points, the approximate positions of the characteristic points to be recognized are positioned through the inherent spatial anatomical position relationship of the human body (namely, a recognition datum line is established), and then all the characteristic points are gradually extracted from the starting points. From the extraction process, the good selection of the starting point can greatly improve the quality and efficiency of feature point extraction.
The method adopts the inflection point connecting the skull and the neck on the frontal side of the head and face as the identification starting point. Because the head and neck shapes of different populations differ greatly, a common template matching or shape-based curve recognition algorithm yields poor recognition precision and adaptability.
Based on this, the embodiment of the application provides a method and a device for detecting a head and neck inflection point, and a computer device, by which the technical problem of low identification accuracy of the head and neck inflection point can be solved.
Embodiments of the present invention are further described below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a method for detecting a head and neck inflection point according to an embodiment of the present disclosure.
As shown in fig. 1, the method includes:
in step S110, a first set of data points of the initial center-side hatching is determined.
And the initial central side shadow line is the side shadow line of the point cloud object of the head and the face of the human body after correction in the three-dimensional Euclidean space. The human head and face point cloud object is obtained by scanning three-dimensional surface drawing data, namely triangular patch data, of the human head and face through a three-dimensional scanner.
Step S120, projecting all data points in the first data point set to a two-dimensional plane to obtain a two-dimensional second data point set.
It should be noted that the two-dimensional plane is perpendicular to the X-axis in the three-dimensional euclidean space. In this step, all data points in the initial center silhouette region may be projected onto a plane perpendicular to the X-axis.
And step S130, smoothing is carried out on the basis of the second data point set through a sliding window algorithm, and a three-dimensional third data point set is obtained.
The method for detecting the head and neck inflection point provided by the embodiment of the application utilizes a sliding window calculation method.
A sliding window is an interval of fixed length that can move freely along a coordinate axis, like a slider of specified length sliding along a scale; each time the slider moves by one unit, the data inside it can be dynamically acquired.
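A minimal sketch of such a slider computing a simple moving average (the smoothing model named later in this application):

```python
def simple_moving_average(seq, width=3):
    # Slide a window of the given width one element at a time and
    # return the mean of the data inside the window at each position.
    return [sum(seq[i:i + width]) / width
            for i in range(len(seq) - width + 1)]
```

For example, `simple_moving_average([1, 2, 3, 4, 5])` yields `[2.0, 3.0, 4.0]`.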
And step S140, calculating the slope and the variance of each data point in the third data point set on the two-dimensional plane, and determining a head and neck inflection point from the third data point set according to the slope and the variance.
In this step, the slope and variance of the data points in the third data point set may also be calculated by using the sliding window method described above.
In the embodiment of the application, aiming at the inherent curvature change from the neck curve to the head curve of the human body, the traditional template-based inflection point detection algorithm is improved: a simple moving average model (such as the SMA model) and a cumulative sum model (such as the CUSUM model) are introduced into the identification process, and an inflection point calculation method based on the sliding window method is provided. This improves the efficiency and accuracy of head and neck inflection point identification to a certain extent and fills some technical gaps in feature point extraction from three-dimensional human head and face data.
By providing a head and neck inflection point detection algorithm based on the sliding window method, the curve to be measured is rasterized, smoothed, regressed and fitted using the subspace projection method from vector algebra and the autoregressive moving average model from time series analysis, yielding a data point set of the curve that contains statistical characteristics; the head and neck inflection point is finally obtained from the curve through sequential analysis. Compared with traditional inflection point detection algorithms, the method is simple in design, efficient in operation, accurate in the extracted location and insensitive to noise, and it overcomes, to a certain extent, the high complexity, low efficiency and sensitivity to missing data of current algorithms. It also shows good applicability and robustness in three-dimensional human head and neck inflection point identification.
The above steps are described in detail below.
In some embodiments, prior to step S110, as shown in fig. 2, an initial set of center-side hatched data points may be input:
Mx = {(m_i1, m_i2, m_i3) | i = 1, 2, ..., n}    (1)

where Mx represents the initial central side shadow line data point set.
In some embodiments, before step S120, the method may further include the steps of:
and establishing a two-dimensional rectangular coordinate system perpendicular to the X axis in the three-dimensional European space to obtain a two-dimensional plane perpendicular to the X axis.
For example, the initial central side shadow line data point set Mx described above may be treated as samples of a random vector (M1, M2, M3). As shown in fig. 2, the mean value E(M1) = Inf1 of M1 is calculated; then the plane S3: x = E(M1) is established, and a two-dimensional rectangular coordinate system (xoy)' is established on S3, taking the negative Y direction of the three-dimensional space as the positive x direction of the two-dimensional space and the positive Z direction of the three-dimensional space as the positive y direction of the two-dimensional space.
In some embodiments, the step S120 may include the following steps:
and mapping all data points in the first data point set into data points on a two-dimensional plane through dimensionality reduction to obtain a two-dimensional second data point set on the two-dimensional plane.
In practical application, the data point set Mx in three-dimensional Euclidean space can be regarded as a stationary time sequence with two-dimensional characteristics, so the operation is effectively a dimensionality reduction of the data. The specific process of mapping the points of Mx into (xoy)' to generate the two-dimensional point set Mx' follows from formula (1): on the plane S3: x = E(M1), a linear mapping f from the three-dimensional Euclidean space R^3 to R^2 is defined which, under the coordinate convention above, sends a point (m1, m2, m3) to (-m2, m3). Through this dimensionality reduction, the point set Mx in R^3 is mapped to the point set Mx' on S3, i.e. Mx' = f(Mx).
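A sketch of this projection, assuming the mapping sends (m1, m2, m3) to (-m2, m3) in line with the stated axis convention (three-dimensional negative Y to two-dimensional positive x, three-dimensional positive Z to two-dimensional positive y); the exact form of f is not fully legible in the source:

```python
import numpy as np

def project_to_s3(points_3d):
    # Dimensionality reduction onto the plane S3 (x = E(M1)): drop the
    # x-coordinate and map (m1, m2, m3) -> (-m2, m3) per the assumed
    # axis convention described in the text.
    pts = np.asarray(points_3d, dtype=float)
    return np.column_stack((-pts[:, 1], pts[:, 2]))
```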
In some embodiments, the step S130 may include the following steps:
sorting the data points in the second data point set according to the magnitude sequence of the coordinate values of the X axis in the two-dimensional plane;
generating a histogram of the two-dimensional plane based on the sorted second data point set;
and taking the group distance of the histogram as the window width of a sliding window, and smoothing the data of the histogram through a sliding window algorithm to obtain a three-dimensional third data point set.
In practical applications, the original central-side hatching may be re-segmented, interpolated, and corrected at S3 to generate a histogram model at S3, the interval of the histogram is set to the window width of the sliding window, and the histogram data is smoothed by the sliding window method, that is, the sequence of point sets is smoothed by the simple sliding average model, so as to calculate the to-be-processed data point set Mx ".
In the embodiment of the application, Mx' can be sorted from small to large by x coordinate, smoothed, and reconstructed into a three-dimensional coordinate point column Mx″ using the sliding window method. The specific process may be as follows.
Firstly, Mx' is sorted on S3 by x value from small to large. After sorting, the bin width of the histogram on the x-axis is set and denoted l1, the projection width W1 of the whole point set Mx' on the x-axis is calculated, and finally the number of bins of the histogram is calculated as g1 = ⌈W1 / l1⌉.

Using l1 and g1, a histogram H data structure is established on S3, where the histogram data structure is shown in fig. 3.

Let the partition of H be {x0, x1, ..., x_g1}, where Δi = |x_i - x_(i-1)| = l1, i = 1, ..., g1, and let the mean y-value of each interval be ȳ_i = (1/n_i) Σ_j y_ij, where n_i is the number of data points counted in interval Δi and y_ij is the y-coordinate of the j-th data point in interval Δi (j = 1, 2, ..., n_i).
For any data-deficient interval in the histogram H, interpolation compensation is carried out using the data of the left and right adjacent intervals to ensure data continuity.
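The binning and interpolation compensation can be sketched as follows (bin_width plays the role of l1; the function name and the linear interpolation from non-empty neighbours are illustrative choices):

```python
import numpy as np

def bin_profile(points_2d, bin_width):
    # Sort a 2-D point column by x, bin it into intervals of width l1,
    # take the mean y per bin, and fill empty bins by linear interpolation
    # from the non-empty neighbours (interpolation compensation).
    pts = np.asarray(points_2d, dtype=float)
    pts = pts[np.argsort(pts[:, 0])]
    x0, x1 = pts[0, 0], pts[-1, 0]
    n_bins = int(np.ceil((x1 - x0) / bin_width)) or 1   # g1 = ceil(W1 / l1)
    idx = np.minimum(((pts[:, 0] - x0) // bin_width).astype(int), n_bins - 1)
    means = np.full(n_bins, np.nan)
    for b in range(n_bins):
        in_bin = idx == b
        if in_bin.any():
            means[b] = pts[in_bin, 1].mean()
    empty = np.isnan(means)
    if empty.any():   # fill data-deficient bins from their neighbours
        means[empty] = np.interp(np.flatnonzero(empty),
                                 np.flatnonzero(~empty), means[~empty])
    return means
```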
Next, the histogram data in this dimension is further gridded, equally divided and smoothed using the sliding window method, finally generating a data point column Mx″ that is easy to calculate and analyze.

Let the radius of the sliding window be 1. For any interval Δi, the sample mean over the adjacent intervals is calculated as a_i = (ȳ_(i-1) + ȳ_i + ȳ_(i+1)) / 3.

The corresponding point m″_i = (m″_i1, m″_i2, m″_i3) in three-dimensional Euclidean space derived from a_i is then obtained by lifting the interval back through the projection: its first coordinate is Inf1 (the position of the plane S3), and its remaining coordinates are recovered from the interval position on the x-axis and the smoothed value a_i under the coordinate convention above.

By sequentially sliding over all adjacent panes, the point column Mx″ = {m″_i | i = 1, ..., g1} is obtained.
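A sketch of the radius-1 smoothing and the lift back to three dimensions; the exact lift formula is not legible in the source, so the reconstruction below assumes each smoothed bin is rebuilt as (Inf1, -x_center, a_i), inverting the earlier projection convention:

```python
import numpy as np

def lift_smoothed(bin_means, bin_centers, inf1):
    # Radius-1 sliding mean over adjacent bins: a_i is the average of
    # three consecutive bin means. Each smoothed bin is then lifted to a
    # 3-D point (inf1, -x_center, a_i); this lift is an assumption based
    # on the projection convention described earlier in the text.
    y = np.asarray(bin_means, dtype=float)
    a = np.convolve(y, np.ones(3) / 3.0, mode='valid')
    centers = np.asarray(bin_centers, dtype=float)[1:-1]  # bins with both neighbours
    return np.column_stack((np.full(len(a), inf1), -centers, a))
```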
In some embodiments, the step S140 may include the following steps:
and determining the target data point corresponding to the position with the slope larger than zero and the variance larger than the preset value as the head and neck inflection point aiming at all the data points in the third data point set.
For a specific search method for performing the head and neck inflection point search in Mx ", the method may search Mx" in S3 according to a priori knowledge of the morphology of the human head and neck in a given orientation until a first point satisfying a given characteristic is found, which is the found head and neck inflection point. Specifically, the statistical characteristic (variance) and the geometric characteristic (slope) of each data point at S3 may be calculated, and then the inflection point determination may be performed using the variance and the slope as the determination conditions.
First, let the sliding window radius be d (d > 1). For any m″_i (i = d+1, d+2, ..., g1-d), the d panes adjacent on the left and right are taken to perform curve fitting and variance estimation, solving for the slope and variance of m″_i on S3. Then the curve Mx″ formed by the point set is traversed in a preset direction; when the slope at the current position is greater than 0 and the variance is greater than a certain threshold, the data point at that position is the head and neck inflection point.
Based on this, the method may further comprise the steps of:
and determining the coordinate value of the target data point as the coordinate value of the head and neck inflection point, and outputting the coordinate value of the head and neck inflection point.
In the embodiment of the present application, the input is an initial center-side hatched data point set, i.e., the above equation (1); the output is the coordinates of the head and neck inflection points:
ptInflection = (Inf1, Inf2, Inf3) (10);
by performing the head and neck inflection point search in Mx″, the head and neck inflection point coordinates ptInflection can be output.
Before the method for detecting the head and neck inflection point provided by the embodiment of the present application is executed, horizontal correction and vertical correction may be performed on the three-dimensional surface rendering data (triangular patch data) of the human head and face scanned by a three-dimensional scanner (step 1). The method provided by the embodiment of the present application is then executed, that is, the initial central side shadow line of the face and the head and neck inflection point are extracted based on the sliding window method (step 2). Finally, the effective region for feature points is segmented using the head and neck inflection point (step 3).
In the overall flow from acquiring the head and face data to successfully extracting the head and neck inflection point, step 1 is the data acquisition and preprocessing part, whose purpose is to provide effective data support for the detection method; step 2 is the main part of the method, namely the extraction of the head and neck inflection point; step 3 is the application part, in which the obtained head and neck inflection point is used to cut away the body part below the neck and the background part behind the neck from the head and face data, so that the effective range of the algorithm is located more accurately and the execution efficiency of the subsequent feature point identification and registration algorithms is improved.
For the data preprocessing process in step 1, the specific implementation manner may be as follows.
The data preprocessing comprises horizontal correction and vertical correction. The purpose of the pose correction algorithm is to adjust the pose of the human head and face point cloud object (denoted C) in the three-dimensional Euclidean space so that the forward direction of the human body is orthogonal to the three-dimensional coordinate frame, which yields the best recognition effect.
The horizontal correction projects the point cloud onto the one-dimensional subspace of the Y axis, grids the data points in the Y dimension to build a Y-dimensional histogram, and finally obtains the section S1 at the position on the Y axis where the point cloud object has the largest number of projected points. Its plane equation may be denoted y = a, and it is easy to see that S1 ⊥ Y. S1 is then used to cut the point cloud data, yielding the intersection point set My of S1 and C. As shown in fig. 4, the line segment is the section S1; if computational precision errors are ignored, the My acquired in fig. 4 lies essentially on the plane S1 perpendicular to the Y axis.
Then, linear fitting is performed on My, the deflection angle relative to the XOY plane is calculated, and this angle is used to correct the horizontal pose of the point cloud data C.
As shown in fig. 5, the point cloud data in diagram A shows the state before the horizontal correction, and diagram B shows the effect after the horizontal correction. The curve in diagram A is the intersection line of the cross section with the point cloud; diagram B shows that, after horizontal correction using this intersection line, the horizontal direction of the point cloud is closer to the forward direction than in diagram A.
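A minimal sketch of the horizontal correction is given below, under the assumption that the deflection angle is removed by a rotation about the Y axis; the patent only states that the angle is measured against the XOY plane, so the bin count and rotation axis are assumptions.

```python
import numpy as np

def horizontal_correction(cloud, n_bins=100):
    """Level a head/face point cloud C: histogram the y coordinates,
    take the densest y-slice as the cutting section S1 (plane y = a),
    least-squares fit a line to the intersection set M_y in the x-z
    plane, and rotate the cloud about the y axis by the fitted angle."""
    counts, edges = np.histogram(cloud[:, 1], bins=n_bins)
    k = np.argmax(counts)                         # densest y-slice
    in_slice = (cloud[:, 1] >= edges[k]) & (cloud[:, 1] < edges[k + 1])
    slice_pts = cloud[in_slice]
    # fit z = m*x + c to the intersection curve M_y
    m, _ = np.polyfit(slice_pts[:, 0], slice_pts[:, 2], 1)
    theta = np.arctan(m)                          # deflection angle
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])                # rotation about Y
    return cloud @ rot.T
```

After the rotation, the fitted intersection line is parallel to the X axis, i.e. the cloud is horizontally aligned in the sense of the description.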
Similarly, the point cloud object can be corrected vertically: on the basis of the horizontal alignment, the point cloud object is projected onto the X-dimensional subspace to establish an X-dimensional histogram, and the intersection line point set corresponding to each histogram interval is calculated.
Note that the cross section defined by the vertical correction (denoted S2) is not a plane as in the horizontal correction but a plane segment, denoted a1 < x ≤ a2, where a1 and a2 are the two end points of the corresponding histogram bin. The intersection point set cut out by this cross section is therefore actually a strip of intersection points; as shown in fig. 6, the points between the two line segments form the strip cut out by a1 < x ≤ a2, the line segments being the upper limit x = a2 and the lower limit x = a1 of S2. The difference from My in fig. 3 is that Mx has a significant width.
Here, the intersection point strip containing the intersection point closest to the origin (denoted Mx) is selected for the vertical correction, and Mx also serves as the location area of the initial central side shadow line.
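The strip selection for the vertical correction might look like the following; the bin count and the use of Euclidean distance to the origin are assumptions, and np.histogram's bin conventions stand in for the a1 < x ≤ a2 intervals of the description.

```python
import numpy as np

def select_profile_strip(cloud, n_bins=100):
    """Pick the x-histogram strip whose intersection points come closest
    to the origin; per the description it then serves both for the
    vertical correction and as the location area of the initial central
    side shadow line M_x."""
    counts, edges = np.histogram(cloud[:, 0], bins=n_bins)
    # assign each point to its x-bin (np.histogram conventions)
    idx = np.digitize(cloud[:, 0], edges[1:-1])
    best_strip, best_dist = None, np.inf
    for k in range(n_bins):
        strip = cloud[idx == k]
        if len(strip) == 0:
            continue
        d = np.linalg.norm(strip, axis=1).min()   # closest point to origin
        if d < best_dist:
            best_dist, best_strip = d, strip
    return best_strip
```

With the scanner origin in front of the face, the nearest strip runs roughly down the facial midline, which is why it doubles as the search region for the initial central side shadow line.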
In the method provided by the present application, in order to improve efficiency, some customization is performed according to the requirements of head and face feature point extraction, and certain requirements are placed on the pose of the point cloud object; the method therefore needs to be used together with human body data acquired by a 3D scanner in a specific scene.
The process of head and neck inflection point region segmentation, which belongs to the post-processing of the head and neck inflection point, may include the following steps.
After the position of the head and neck inflection point is obtained, since the relative position between the head and face and the inflection point is fixed, the head and neck inflection point can be used to locate the region of interest (ROI) for facial contour feature point extraction and point cloud registration; all other regions are irrelevant and can be eliminated, such as the body part below the head and neck inflection point, the back-of-skull part behind it, and the background part completely unrelated to the human body.
The inflection point region segmentation algorithm identifies the ROI while traversing the data point set and cuts away the regions that do not belong to it, discarding as much of the content irrelevant to subsequent services as possible while retaining the relevant part to the maximum degree. This greatly reduces the number of data points to be visited in subsequent work and improves the running speed of the feature point identification and point cloud registration algorithms.
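A minimal sketch of the ROI cut, assuming y points up and z points from the back of the head toward the face; the patent does not fix these axis conventions, and the optional margin is likewise a hypothetical parameter.

```python
import numpy as np

def crop_roi(cloud, inflection, margin=0.0):
    """Keep only the head/face region: drop points lower than the
    head-neck inflection point (body) and points behind it (back of
    skull and background). Axis conventions are assumptions."""
    keep = (cloud[:, 1] >= inflection[1] - margin) & \
           (cloud[:, 2] >= inflection[2] - margin)
    return cloud[keep]
```

Because the cut is two axis-aligned half-space tests, it touches each point once, which matches the stated goal of shrinking the data set before feature point identification and registration.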
Fig. 7 provides a schematic structural diagram of a device for detecting a head and neck inflection point. As shown in fig. 7, the apparatus 700 for detecting a head and neck inflection point includes:
a determining module 701, configured to determine a first data point set of an initial central side shadow line, where the initial central side shadow line is a side shadow line of a point cloud object of a human head and face corrected in a three-dimensional Euclidean space;
a projection module 702, configured to project all data points in the first data point set onto a two-dimensional plane to obtain a two-dimensional second data point set, the two-dimensional plane being perpendicular to the X axis in the three-dimensional Euclidean space;
a processing module 703, configured to perform smoothing processing based on the second data point set by using a sliding window algorithm to obtain a three-dimensional third data point set;
and a calculating module 704, configured to calculate a slope and a variance of each data point in the third data point set on the two-dimensional plane, and determine a head-neck inflection point from the third data point set according to the slope and the variance.
In some embodiments, the apparatus further comprises:
the establishing module is used for establishing a two-dimensional rectangular coordinate system perpendicular to an X axis in a three-dimensional Euclidean space to obtain a two-dimensional plane perpendicular to the X axis.
In some embodiments, projection module 702 is specifically configured to:
and mapping all data points in the first data point set into data points on a two-dimensional plane through dimensionality reduction to obtain a two-dimensional second data point set on the two-dimensional plane.
In some embodiments, the processing module 703 is specifically configured to:
sorting the data points in the second data point set according to the magnitude sequence of the coordinate values of the X axis in the two-dimensional plane;
generating a histogram of the two-dimensional plane based on the sorted second data point set;
and taking the bin width (group distance) of the histogram as the window width of a sliding window, and smoothing the data of the histogram through a sliding window algorithm to obtain a three-dimensional third data point set.
In some embodiments, the calculation module 704 is further configured to:
and for all data points in the third data point set, determining the target data point at the position where the slope is greater than zero and the variance is greater than the preset value as the head and neck inflection point.
In some embodiments, the apparatus further comprises:
and the output module is used for determining the coordinate value of the target data point as the coordinate value of the head and neck inflection point and outputting the coordinate value of the head and neck inflection point.
The device for detecting a head and neck inflection point provided by the embodiment of the application has the same technical characteristics as the method for detecting a head and neck inflection point provided by the embodiment, so that the same technical problems can be solved, and the same technical effect can be achieved.
As shown in fig. 8, an embodiment of the present application provides a computer device 800, including a processor 801, a memory 802, and a bus. The memory 802 stores machine-readable instructions executable by the processor 801; when the computer device runs, the processor 801 communicates with the memory 802 through the bus, and the processor 801 executes the machine-readable instructions to perform the steps of the method for detecting the head and neck inflection point.
Specifically, the memory 802 and the processor 801 may be a general-purpose memory and processor, which are not specifically limited here; when the processor 801 executes the computer program stored in the memory 802, the method for detecting the head and neck inflection point can be performed.
The processor 801 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 801. The processor 801 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory 802; the processor 801 reads the information in the memory 802 and completes the steps of the method in combination with its hardware.
Corresponding to the method for detecting the head and neck inflection point, an embodiment of the present application further provides a computer-readable storage medium storing machine-executable instructions which, when called and executed by a processor, cause the processor to execute the steps of the method for detecting the head and neck inflection point.
The device for detecting the head and neck inflection point provided by the embodiment of the present application may be specific hardware on a device, or software or firmware installed on a device. The device provided by the embodiment of the present application has the same implementation principle and technical effect as the foregoing method embodiments; for brevity, where the device embodiments do not mention a detail, reference may be made to the corresponding content in the foregoing method embodiments. Those skilled in the art will clearly appreciate that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not described here again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method for detecting a head and neck inflection point according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the technical field can still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes, or make equivalent substitutions of some technical features within the technical scope disclosed in the present application; such modifications, changes, or substitutions do not depart from the scope of the embodiments of the present application and are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for detecting a head and neck inflection point, the method comprising:
determining a first data point set of an initial central side shadow line, wherein the initial central side shadow line is a side shadow line of a point cloud object of a human head and face part after correction in a three-dimensional Euclidean space;
projecting all data points in the first data point set to a two-dimensional plane to obtain a two-dimensional second data point set; the two-dimensional plane is perpendicular to an X axis in the three-dimensional Euclidean space;
performing smoothing processing on the basis of the second data point set through a sliding window algorithm to obtain a three-dimensional third data point set;
calculating a slope and a variance of each data point in the third set of data points on the two-dimensional plane, and determining a head and neck inflection point from the third set of data points according to the slope and the variance.
2. The method of claim 1, wherein the step of projecting all data points in the first set of data points onto a two-dimensional plane to obtain a second set of two-dimensional data points is preceded by the step of:
and establishing a two-dimensional rectangular coordinate system perpendicular to the X axis in the three-dimensional Euclidean space to obtain a two-dimensional plane perpendicular to the X axis.
3. The method of claim 1, wherein the step of projecting all data points in the first set of data points onto a two-dimensional plane to obtain a second set of two-dimensional data points comprises:
and mapping all data points in the first data point set into data points on the two-dimensional plane through dimensionality reduction to obtain a two-dimensional second data point set on the two-dimensional plane.
4. The method of claim 1, wherein smoothing based on the second set of data points by a sliding window algorithm to obtain a third set of data points in three dimensions comprises:
sorting the data points in the second data point set according to the magnitude sequence of the X-axis coordinate values in the two-dimensional plane;
generating a histogram of the two-dimensional plane based on the sorted second set of data points;
and taking the bin width (group distance) of the histogram as the window width of a sliding window, and smoothing the data of the histogram through a sliding window algorithm to obtain a three-dimensional third data point set.
5. The method of claim 1, wherein the step of determining a head and neck inflection point from the third set of data points based on the slope and the variance comprises:
and determining the target data point corresponding to the position where the slope is greater than zero and the variance is greater than a preset value as a head and neck inflection point for all data points in the third data point set.
6. The method of claim 5, further comprising:
and determining the coordinate value of the target data point as the coordinate value of the head and neck inflection point, and outputting the coordinate value of the head and neck inflection point.
7. A detection device for a head and neck inflection point is characterized by comprising:
the system comprises a determining module, a calculating module and a correcting module, wherein the determining module is used for determining a first data point set of an initial central side shadow line, and the initial central side shadow line is a side shadow line of a point cloud object of a human head and face part after being corrected in a three-dimensional Euclidean space;
the projection module is used for projecting all data points in the first data point set to a two-dimensional plane to obtain a two-dimensional second data point set; the two-dimensional plane is perpendicular to an X axis in the three-dimensional Euclidean space;
the processing module is used for carrying out smoothing processing on the basis of the second data point set through a sliding window algorithm to obtain a three-dimensional third data point set;
and the calculation module is used for calculating the slope and the variance of each data point in the third data point set on the two-dimensional plane and determining a head and neck inflection point from the third data point set according to the slope and the variance.
8. The apparatus of claim 7, wherein the processing module is specifically configured to:
sorting the data points in the second data point set according to the magnitude sequence of the X-axis coordinate values in the two-dimensional plane;
generating a histogram of the two-dimensional plane based on the sorted second set of data points;
and taking the bin width (group distance) of the histogram as the window width of a sliding window, and smoothing the data of the histogram through a sliding window algorithm to obtain a three-dimensional third data point set.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
10. A computer readable storage medium having stored thereon machine executable instructions which, when invoked and executed by a processor, cause the processor to execute the method of any of claims 1 to 6.
CN202010040825.1A 2020-01-14 2020-01-14 Method and device for detecting head and neck inflection point and computer equipment Pending CN111259788A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010040825.1A CN111259788A (en) 2020-01-14 2020-01-14 Method and device for detecting head and neck inflection point and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010040825.1A CN111259788A (en) 2020-01-14 2020-01-14 Method and device for detecting head and neck inflection point and computer equipment

Publications (1)

Publication Number Publication Date
CN111259788A true CN111259788A (en) 2020-06-09

Family

ID=70951165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010040825.1A Pending CN111259788A (en) 2020-01-14 2020-01-14 Method and device for detecting head and neck inflection point and computer equipment

Country Status (1)

Country Link
CN (1) CN111259788A (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800123A (en) * 2010-11-05 2012-11-28 浙江华震数字化工程有限公司 Portable image holography three-dimensional reconstruction system
EP1869745B1 (en) * 2005-04-12 2013-07-24 Cellpack Gmbh Two- or multiple-piece insulating body system for producing medium high voltage cable fittings
WO2015018523A1 (en) * 2013-08-06 2015-02-12 Oncoethix Sa A novel bet-brd inhibitor for treatment of solid tumors
CN207152750U (en) * 2016-12-28 2018-03-30 苏州市立医院 Guardrail framework is used in head neck operation operation
CN109145866A (en) * 2018-09-07 2019-01-04 北京相貌空间科技有限公司 Determine the method and device of side face tilt angle
CN109214339A (en) * 2018-09-07 2019-01-15 北京相貌空间科技有限公司 Face shape of face, the calculation method of face's plastic operation and computing device
CN109241911A (en) * 2018-09-07 2019-01-18 北京相貌空间科技有限公司 Human face similarity degree calculation method and device
CN109255327A (en) * 2018-09-07 2019-01-22 北京相貌空间科技有限公司 Acquisition methods, face's plastic operation evaluation method and the device of face characteristic information
CN110060336A (en) * 2019-04-24 2019-07-26 北京华捷艾米科技有限公司 Three-dimensional facial reconstruction method, device, medium and equipment
CN110176066A (en) * 2019-05-28 2019-08-27 中山大学附属第三医院 Method for reconstructing, device and the electronic equipment of skull defeci structure
CN110443840A (en) * 2019-08-07 2019-11-12 山东理工大学 The optimization method of sampling point set initial registration in surface in kind


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yan Yi et al.: "Research on 3D Human Head Segmentation Technology Based on Point Cloud Data" *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113240730A (en) * 2021-05-20 2021-08-10 推想医疗科技股份有限公司 Method and device for extracting centrum midline
CN113240730B (en) * 2021-05-20 2022-02-08 推想医疗科技股份有限公司 Method and device for extracting centrum midline
CN113909720A (en) * 2021-09-24 2022-01-11 深圳前海瑞集科技有限公司 Welding device and welding method for deep wave steep slope corrugated plate container
CN113909720B (en) * 2021-09-24 2024-01-26 深圳前海瑞集科技有限公司 Welding device and welding method for deep wave steep slope corrugated plate container
CN117633554A (en) * 2023-12-14 2024-03-01 艾信智慧医疗科技发展(苏州)有限公司 Medical box type logistics transmission monitoring and early warning system
CN117633554B (en) * 2023-12-14 2024-05-14 艾信智慧医疗科技发展(苏州)有限公司 Medical box type logistics transmission monitoring and early warning system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210519

Address after: 301700 west side of the third floor of No.3 workshop, No.6 Xinchuang Road, Wuqing Development Zone, Wuqing District, Tianjin

Applicant after: Weijing medical equipment (Tianjin) Co.,Ltd.

Address before: Room 110, courtyard 5, No.4, East Binhe Road, Qinghe, Haidian District, Beijing

Applicant before: HOZ MINIMALLY INVASIVE MEDICAL TECHNOLOGY (BEIJING) Co.,Ltd.

AD01 Patent right deemed abandoned

Effective date of abandoning: 20240531