CN117372322A - Face orientation determining method and device and face image reconstructing method and device - Google Patents


Info

Publication number
CN117372322A
Authority
CN
China
Prior art keywords
plane
image
brain region
determining
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210770738.0A
Other languages
Chinese (zh)
Inventor
张旭
方伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Original Assignee
Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan United Imaging Zhirong Medical Technology Co Ltd
Priority to CN202210770738.0A
Priority to PCT/CN2023/104458 (WO2024002321A1)
Publication of CN117372322A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0012 Biomedical image inspection
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T7/155 Segmentation; edge detection involving morphological operators
    • G06T7/32 Determination of transform parameters for the alignment of images (image registration) using correlation-based methods
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T11/00 2D [Two-Dimensional] image generation
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/30016 Brain
    • G16H HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/20 ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS


Abstract

The invention discloses a face orientation determining method and device and a face image reconstructing method and device. The face orientation determining method comprises the following steps: performing principal component analysis processing on an image of a brain region according to a first position, and determining a first direction vector and a second direction vector; determining, for the image of the brain region, a front portion and a rear portion of the brain region according to the distances from voxels located on both sides of a first plane to the first plane, and an upper portion and a lower portion of the brain region according to the distances from voxels located on both sides of a second plane to the second plane; and determining a face orientation from at least one of the front portion and the rear portion of the brain region, at least one of the upper portion and the lower portion of the brain region, and the first position. The invention determines the face orientation from the inherent characteristics of the human head structure and therefore achieves high accuracy.

Description

Face orientation determining method and device and face image reconstructing method and device
Technical Field
The present invention relates to the field of image processing, and in particular, to a method and apparatus for determining a face orientation, a method and apparatus for reconstructing a face image, an electronic device, and a storage medium.
Background
In neurosurgical stereotactic surgical robots, the medical image data of the preoperative scan must be spatially aligned with the intraoperative patient, a process known as spatial registration. A rapid spatial registration scheme based on facial features benefits from the abundant structural information of the face, and existing 3D (three-dimensional) cameras can reconstruct the face quickly, so rapid and high-precision spatial registration can be achieved.
In a spatial registration scheme based on 3D face reconstruction, three core steps determine the precision and stability: first, reconstructing the face with a 3D camera to obtain a face image; second, reconstructing the face from DICOM (Digital Imaging and Communications in Medicine) images to obtain a face image; and third, registering the camera-reconstructed face image with the DICOM-reconstructed face image and calculating the spatial transformation matrix.
In the prior art for three-dimensional face reconstruction based on DICOM images, the skin surface of the whole human body can be reconstructed, but the reconstructed result includes every skin-surface region within the scanning field of view of the medical equipment, of which the face is only a small part. The large number of background regions interferes with the subsequent face structured-light registration and makes the subsequent point-cloud registration unstable.
In the prior art for three-dimensional face reconstruction based on DICOM images, the orientation of the patient's head can also be determined from the patient orientation information in the tag information of the DICOM data, so that the front and the rear of the head are distinguished and the front of the head is kept as the face reconstruction result. However, the patient orientation information in the tag information only roughly reflects the orientation of the person and is very inaccurate when the patient's head is tilted. Moreover, the tag information is entered by a doctor at the scanning equipment and may be filled in incorrectly, which can make the registration process infeasible. In addition, tag-based segmentation can only divide the body into a front half and a rear half; it cannot accurately segment and reconstruct the face region, and a large amount of background interference remains when the scanning range of the medical equipment extends below the neck.
Disclosure of Invention
The invention aims to overcome the defects of the prior art for three-dimensional face reconstruction based on DICOM images, and provides a face orientation determining method and device, a face image reconstructing method and device, an electronic device and a storage medium.
The invention solves the technical problems by the following technical scheme:
the first aspect of the present invention provides a face orientation determining method, including the following steps:
performing principal component analysis processing on the image of the brain region according to the first position, and determining a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the direction of the brain from front to back or from back to front, and the second direction vector is used for indicating the direction of the brain from top to bottom or from bottom to top;
determining, for the image of the brain region, the front portion and the rear portion of the brain region according to the distances from voxels located on both sides of the first plane to the first plane; and determining the upper portion and the lower portion of the brain region according to the distances from voxels located on both sides of the second plane to the second plane; wherein the first plane is a plane perpendicular to the first direction vector and passing through the first position, and the second plane is a plane perpendicular to the second direction vector and passing through the first position;
a face orientation is determined from at least one of a front portion of the brain region and a rear portion of the brain region, at least one of an upper portion of the brain region and a lower portion of the brain region, and the first location.
The second aspect of the present invention provides a face orientation determining method, including the steps of:
performing principal component analysis processing on the image of the brain region according to the first position, and determining a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region;
determining a first coordinate axis according to the first position and the distance between voxels positioned at two sides of a first plane and the first plane respectively aiming at the image of the brain region; determining a second coordinate axis according to the first position and the distances between the voxels positioned at two sides of the second plane and the second plane respectively; the first plane is a plane perpendicular to the first direction vector and passing through the first position, the second plane is a plane perpendicular to the second direction vector and passing through the first position, and the first coordinate axis and the second coordinate axis intersect at the first position;
and determining the face orientation according to the first coordinate axis and the second coordinate axis.
A third aspect of the present invention provides a method for determining a face orientation, including the steps of:
performing principal component analysis processing on the image of the brain region according to the first position, and determining a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region;
determining distances from voxels located on two sides of a first plane to the first plane and distances from voxels located on two sides of a second plane to the second plane respectively for the image of the brain region; wherein the first plane is a plane perpendicular to the first direction vector and passing through the first position, and the second plane is a plane perpendicular to the second direction vector and passing through the first position;
and determining the face orientation based on the first position, the distance between the voxels positioned at the two sides of the first plane and the first plane respectively, and the distance between the voxels positioned at the two sides of the second plane and the second plane respectively.
A fourth aspect of the present invention provides a method for reconstructing a face image, including the steps of:
determining a face orientation by using the face orientation determining method described in the first aspect, the second aspect or the third aspect;
performing head segmentation processing on the medical image to be processed to obtain an image of a head region; wherein the image of the brain region is obtained according to the medical image to be processed;
reconstructing a face image from the image of the head region and the face orientation.
A fifth aspect of the present invention provides a face orientation determining apparatus, including:
the direction vector determining module is used for carrying out principal component analysis processing on the image of the brain region according to the first position and determining a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region;
a brain direction determining module for determining the front part of the brain area and the rear part of the brain area according to the distance between the voxels positioned at two sides of the first plane and the first plane respectively aiming at the image of the brain area; and determining an upper portion of the brain region and a lower portion of the brain region according to distances from voxels located on both sides of the second plane to the second plane, respectively; wherein the first plane is a plane perpendicular to the first direction vector and passing through the first position, and the second plane is a plane perpendicular to the second direction vector and passing through the first position;
A face orientation determining module for determining a face orientation from at least one of a front portion of the brain region and a rear portion of the brain region, at least one of an upper portion of the brain region and a lower portion of the brain region, and the first location.
A sixth aspect of the present invention provides a face orientation determining apparatus, including:
the direction vector determining module is used for carrying out principal component analysis processing on the image of the brain region according to the first position and determining a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region;
the coordinate axis determining module is used for determining a first coordinate axis according to the first position and the distances between voxels positioned on two sides of a first plane and the first plane aiming at the image of the brain region; determining a second coordinate axis according to the first position and the distances between the voxels positioned at two sides of the second plane and the second plane respectively; the first plane is a plane perpendicular to the first direction vector and passing through the first position, the second plane is a plane perpendicular to the second direction vector and passing through the first position, and the first coordinate axis and the second coordinate axis intersect at the first position;
And the face orientation determining module is used for determining the face orientation according to the first coordinate axis and the second coordinate axis.
A seventh aspect of the present invention provides a face orientation determining apparatus, including:
the direction vector determining module is used for carrying out principal component analysis processing on the image of the brain region according to the first position and determining a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region;
a distance determining module, configured to determine, for the image of the brain region, distances from voxels located on both sides of a first plane to the first plane, and distances from voxels located on both sides of a second plane to the second plane; wherein the first plane is a plane perpendicular to the first direction vector and passing through the first position, and the second plane is a plane perpendicular to the second direction vector and passing through the first position;
and the face orientation determining module is used for determining the face orientation based on the first position, the distance between the voxels positioned at the two sides of the first plane and the first plane respectively, and the distance between the voxels positioned at the two sides of the second plane and the second plane respectively.
An eighth aspect of the present invention provides a face image reconstruction apparatus, comprising:
a face orientation determining module, configured to determine a face orientation by using the face orientation determining method in the first aspect, the second aspect, or the third aspect;
the head segmentation processing module is used for carrying out head segmentation processing on the medical image to be processed to obtain an image of a head region; wherein the image of the brain region is obtained from the medical image to be processed;
and the face image reconstruction module is used for reconstructing a face image according to the image of the head area and the face orientation.
A ninth aspect of the present invention provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of determining a face orientation as described in the first, second or third aspects or the method of reconstructing a face image as described in the fourth aspect when the computer program is executed.
A tenth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the face orientation determination method according to the first, second or third aspects or the face image reconstruction method according to the fourth aspect.
On the basis of common knowledge in the field, the above optional features can be combined arbitrarily to obtain the preferred embodiments of the invention.
The invention has the following positive effects: first, the characteristic features of the human brain structure are used to determine the front, rear, upper and lower portions of the brain region from the image of the brain region; the inherent relationship between the face orientation and the brain directions is then used to determine the face orientation. That is, the face orientation is determined from the inherent features of the human head structure, without relying on auxiliary information such as the tag information of DICOM data. The method therefore has higher accuracy, stronger robustness and better generalization, and is applicable to medical images obtained by different equipment, at different scanning resolutions, over different scanning ranges of the human body that include the face.
Further, an accurate face image can be reconstructed using the determined face orientation and the head region image; applying this face image in a face structured-light registration scheme can effectively improve the accuracy and stability of the registration.
Drawings
Fig. 1 is a flow chart of a face orientation determining method provided in embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of a principal component direction vector according to embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of an image of a brain region according to embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of a brain coordinate system according to embodiment 1 of the present invention.
Fig. 5 is a flow chart of a brain segmentation method according to embodiment 1 of the present invention.
Fig. 6 is a block diagram of a face orientation determining apparatus according to embodiment 1 of the present invention.
Fig. 7 is a flowchart illustrating a method for determining a face orientation according to embodiment 2 of the present invention.
Fig. 8 is a block diagram of a face orientation determining apparatus according to embodiment 2 of the present invention.
Fig. 9 is a flowchart illustrating a method for determining a face orientation according to embodiment 3 of the present invention.
Fig. 10 is a block diagram of a face orientation determining apparatus according to embodiment 3 of the present invention.
Fig. 11 is a flowchart of a face image reconstruction method provided in embodiment 4 of the present invention.
Fig. 12 is a flowchart of step S43 provided in embodiment 4 of the present invention.
Fig. 13 is a schematic diagram of a method for determining an image of a face contour region according to embodiment 4 of the present invention.
Fig. 14 is a block diagram of a face image reconstruction device according to embodiment 4 of the present invention.
Fig. 15 is a schematic structural diagram of an electronic device according to embodiment 5 of the present invention.
Detailed Description
The invention is further illustrated by means of the following examples, which are not intended to limit the scope of the invention.
Example 1
Fig. 1 is a flowchart of the face orientation determining method provided in this embodiment. The method may be performed by a face orientation determining device, which may be implemented in software and/or hardware and may be part or all of an electronic device. The electronic device in this embodiment may be a personal computer (PC) such as a desktop, an all-in-one, a notebook or a tablet, or a terminal device such as a mobile phone, a wearable device or a personal digital assistant (PDA). The method is described below with the electronic device as the execution subject.
As shown in fig. 1, the method for determining the face orientation provided in this embodiment may include the following steps S11 to S13:
and S11, performing principal component analysis processing on the image of the brain region according to the first position, and determining a first direction vector and a second direction vector.
Wherein the first location is used to characterize a central location of a brain region. In a specific implementation, a three-dimensional rectangular coordinate system may be established in a three-dimensional space where the image of the brain region is located, where the coordinate system includes an x-axis, a y-axis, and a z-axis, and the x-coordinates, the y-coordinates, and the z-coordinates of all voxels in the image of the brain region are averaged, and the three averages are respectively used as the x-coordinates, the y-coordinates, and the z-coordinates of the first position. In some examples, the first location may also be referred to as a geometric center of the brain region.
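As a hypothetical illustration of this averaging step (the mask, shapes and variable names are assumptions for the sketch, not part of the patent), the first position can be computed in NumPy as the per-axis mean of the brain-region voxel coordinates:

```python
import numpy as np

# Assumed binary mask: nonzero voxels belong to the brain region.
brain_mask = np.zeros((5, 5, 5), dtype=bool)
brain_mask[1:4, 1:4, 2:5] = True

# (x, y, z) coordinates of every voxel in the brain region.
voxel_coords = np.argwhere(brain_mask).astype(float)

# First position: the mean of the x, y and z coordinates of all brain voxels.
first_position = voxel_coords.mean(axis=0)
```

The same three averages serve as the origin for the principal component analysis in step S11.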
In a specific implementation, the brain region image may be obtained by performing brain segmentation processing on the medical image to be processed. The medical image to be processed may be called a DICOM image to be processed, specifically, a CT (Computed Tomography) image, an MRI (Magnetic Resonance Imaging) image, or the like, specifically, the medical image to be processed may be obtained by scanning a head (including a face) of a patient with a CT scanner, a magnetic resonance imager, or the like, or may be obtained by downloading from a server.
Principal component analysis (PCA) compresses the original data matrix by reducing the dimensionality of the feature vectors so that they reflect the principal features of the data. In step S11, a first direction vector and a second direction vector are determined from the result of the principal component analysis processing of the image of the brain region. The first direction vector is used for indicating the front-to-back or back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom or bottom-to-top direction of the brain region.
The human brain structure generally has the following features: the brain region is largest in the front-back direction, next largest in the up-down direction, and smallest in the left-right direction, and it is asymmetric in both the front-back and up-down directions. In an alternative embodiment of step S11, the first direction vector and the second direction vector are determined based on this rule; specifically, the embodiment may include the following steps S111 and S112:
and step S111, performing principal component analysis processing on the image of the brain region by taking the first position as an origin to obtain three orthogonal principal component direction vectors and characteristic values corresponding to the principal component direction vectors.
In a specific implementation, voxels belonging to the brain region in the image of the brain region may be converted into a point cloud, and principal component analysis processing may be performed on the point cloud.
In step S111, principal component analysis processing is performed on the image of the brain region using the first position as the origin, obtaining three orthogonal principal component direction vectors P1, P2 and P3 emitted from the origin as shown in fig. 2, together with an eigenvalue Value1 corresponding to P1, an eigenvalue Value2 corresponding to P2, and an eigenvalue Value3 corresponding to P3. The three principal component direction vectors reflect different directions of the brain, and the corresponding three eigenvalues reflect the sizes of the brain in those directions.
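A minimal NumPy sketch of step S111 (the synthetic point cloud and its axis scales are assumptions chosen so the front-back extent is largest; this is not the patent's implementation): the three orthogonal principal component direction vectors and their eigenvalues fall out of an eigendecomposition of the covariance matrix of the centered point cloud.

```python
import numpy as np

# Synthetic point cloud standing in for the brain-region voxels (assumption):
# longest along x (front-back), next along y (up-down), narrowest along z (left-right).
rng = np.random.default_rng(0)
points = rng.normal(size=(5000, 3)) * np.array([9.0, 7.0, 6.0])

# Use the first position (the centre of the region) as the origin.
first_position = points.mean(axis=0)
centered = points - first_position

# PCA: the eigenvectors of the 3x3 covariance matrix are the three orthogonal
# principal component direction vectors; the eigenvalues measure the spread
# (variance) along each of them. eigh returns eigenvalues in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(centered.T))
```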
Step S112, determining the principal component direction vector corresponding to the largest eigenvalue as a first direction vector, and determining the principal component direction vector corresponding to the next largest eigenvalue as a second direction vector.
Specifically, the three eigenvalues Value1, Value2 and Value3 are sorted, and the first direction vector and the second direction vector are determined from the principal component direction vectors P1, P2 and P3 according to the sorting result.
The largest eigenvalue represents the largest dimension in the corresponding direction, the next largest eigenvalue represents the next largest dimension in the corresponding direction, and the smallest eigenvalue represents the smallest dimension in the corresponding direction. Since the size of the front-rear direction of the brain is largest, the size of the up-down direction of the brain is next largest, and the size of the left-right direction of the brain is smallest, the principal component direction vector corresponding to the largest eigenvalue is determined as a first direction vector for indicating the front-to-rear direction or the rear-to-front direction of the brain region, the principal component direction vector corresponding to the next largest eigenvalue is determined as a second direction vector for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region, and the principal component direction vector corresponding to the smallest eigenvalue is determined as a third direction vector for indicating the left-to-right direction or the right-to-left direction of the brain region.
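Step S112 can be sketched as follows (the identity vectors and the eigenvalues here are placeholder assumptions standing in for the PCA output):

```python
import numpy as np

# Placeholder PCA output (assumption): columns are the orthogonal principal
# component direction vectors P1, P2, P3; eigenvalues are the sizes along them.
principal_vectors = np.eye(3)                  # P1 = x, P2 = y, P3 = z
eigenvalues = np.array([81.0, 49.0, 36.0])

# Sort by eigenvalue, descending: largest -> first direction vector (front-back),
# next largest -> second (up-down), smallest -> third (left-right).
order = np.argsort(eigenvalues)[::-1]
first_direction = principal_vectors[:, order[0]]
second_direction = principal_vectors[:, order[1]]
third_direction = principal_vectors[:, order[2]]
```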
Step S12, determining the front part of the brain region and the rear part of the brain region according to the distances between voxels positioned at two sides of a first plane and the first plane respectively aiming at the image of the brain region; and determining an upper portion of the brain region and a lower portion of the brain region based on distances from the voxels located on both sides of the second plane, respectively, to the second plane.
The first plane is a plane perpendicular to the first direction vector and passing through the first position, and the second plane is a plane perpendicular to the second direction vector and passing through the first position.
In a specific implementation, for any voxel located on both sides of the first plane, the distance from the voxel to the first plane refers to the vertical distance from the voxel to the first plane, i.e. the distance between the point where the voxel is projected onto the first direction vector and the first position. For any voxel located on either side of the second plane, the distance of the voxel to said second plane refers to the perpendicular distance of the voxel to the second plane, i.e. the distance between the point of the voxel projected onto said second direction vector and the first position.
Fig. 3 is a schematic diagram for showing an image of a brain region. As can be seen from fig. 3, the distance span D1 of voxels in front of the brain region to the first plane L1 is greater than the distance span D2 of voxels in back of the brain region to the first plane L1, and the distance span D3 of voxels in top of the brain region to the second plane L2 is greater than the distance span D4 of voxels in bottom of the brain region to the second plane L2. The above features may be used in the implementation of step S12 to determine the anterior, posterior, superior and inferior portions of the brain region.
In an alternative embodiment of step S12, if the average distance from the voxels located on the first side of the first plane to the first plane is greater than the average distance from the voxels located on the second side of the first plane to the first plane, then the first side of the first plane is determined to be the front of the brain region and the second side of the first plane is determined to be the back of the brain region.
In this embodiment, the front part of the brain region and the rear part of the brain region are determined using the feature that the average distance from the voxels in the front part of the brain region to the first plane is larger than the average distance from the voxels in the rear part of the brain region to the first plane.
In an alternative embodiment of step S12, if the maximum distance from the voxels located on the first side of the first plane to the first plane is greater than the maximum distance from the voxels located on the second side of the first plane to the first plane, the first side of the first plane is determined to be the front of the brain region and the second side of the first plane is determined to be the rear of the brain region.
In this embodiment, the front portion of the brain region and the rear portion of the brain region are determined using the feature that the maximum distance from the voxels in the front portion of the brain region to the first plane is larger than the maximum distance from the voxels in the rear portion of the brain region to the first plane.
In an alternative embodiment of step S12, if the average distance from the voxels located on the first side of the second plane to the second plane is greater than the average distance from the voxels located on the second side of the second plane to the second plane, the first side of the second plane is determined to be the upper portion of the brain region and the second side of the second plane is determined to be the lower portion of the brain region.
In this embodiment, the upper part of the brain region and the lower part of the brain region are determined using the feature that the average distance from the voxels in the upper part of the brain region to the second plane is larger than the average distance from the voxels in the lower part of the brain region to the second plane.
In an alternative embodiment of step S12, if the maximum distance from the voxels located on the first side of the second plane to the second plane is greater than the maximum distance from the voxels located on the second side of the second plane to the second plane, the first side of the second plane is determined to be the upper portion of the brain region and the second side of the second plane is determined to be the lower portion of the brain region.
In this embodiment, the upper part of the brain region and the lower part of the brain region are determined using the feature that the maximum distance from the voxels in the upper part of the brain region to the second plane is larger than the maximum distance from the voxels in the lower part of the brain region to the second plane.
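The average-distance comparisons in the alternative embodiments of step S12 can be sketched as follows, assuming the brain region is given as an N×3 voxel-coordinate array with voxels on both sides of the plane (all names are hypothetical). The same function serves both the first and the second plane, by passing the corresponding direction vector:

```python
import numpy as np

def split_by_plane(points, center, direction):
    """Split brain-region voxels by the plane through `center` that is
    perpendicular to `direction`, then compare the average distance of
    each side to that plane (the average-distance variant of step S12).

    Returns (far_side, near_side): for the first plane the far side is
    the front of the brain region, for the second plane it is the top.
    """
    unit = direction / np.linalg.norm(direction)
    signed = (points - center) @ unit   # signed perpendicular distances
    side_pos, side_neg = points[signed > 0], points[signed < 0]
    # Compare mean unsigned distances of the two sides to the plane.
    if signed[signed > 0].mean() > -signed[signed < 0].mean():
        return side_pos, side_neg
    return side_neg, side_pos
```

The maximum-distance variants differ only in replacing the mean of the unsigned distances with their maximum.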
Step S13 of determining a face orientation from at least one of a front portion of the brain region and a rear portion of the brain region, at least one of an upper portion of the brain region and a lower portion of the brain region, and the first position.
Since there is a fixed relationship between the face orientation and the brain direction, the face orientation may be determined from the front of the brain region and/or the rear of the brain region, the upper of the brain region and/or the lower of the brain region. In a specific implementation, step S13 may include the following steps S131 to S133:
step S131, using the first position as an origin of coordinates, determining a first coordinate axis according to at least one of a front portion of the brain region and a rear portion of the brain region, and determining a second coordinate axis according to at least one of an upper portion of the brain region and a lower portion of the brain region.
Specifically, the direction from the rear portion of the brain region to the front portion of the brain region may be a first coordinate axis, the direction from the front portion of the brain region to the rear portion of the brain region may be a first coordinate axis, the direction in which the first position points to the front portion of the brain region may be a first coordinate axis, and the direction in which the first position points to the rear portion of the brain region may be a first coordinate axis. The direction from the lower part of the brain region to the upper part of the brain region may be the second coordinate axis, the direction from the upper part of the brain region to the lower part of the brain region may be the second coordinate axis, the direction from the first position to the upper part of the brain region may be the second coordinate axis, and the direction from the first position to the lower part of the brain region may be the second coordinate axis.
Step S132, determining a face orientation vector according to the first vector on the first coordinate axis and the second vector on the second coordinate axis. The direction of the first vector is the same as the positive direction of the first coordinate axis, and the direction of the second vector is the same as the positive direction of the second coordinate axis.
In this embodiment, a brain coordinate system is established according to the first position, the front part of the brain region and/or the rear part of the brain region, the upper part of the brain region and/or the lower part of the brain region, and the face orientation vector is determined by the brain coordinate system. In the example shown in fig. 4, the brain coordinate system is established by: the first position O is a coordinate origin, a y-axis is a first coordinate axis in a direction from the rear of the brain region to the front of the brain region, and a z-axis is a second coordinate axis in a direction from the lower of the brain region to the upper of the brain region.
Specifically, the face orientation vector V_face may be determined according to the following formula:

V_face = α·V_y + β·V_z

where V_y is the first vector on the y-axis, i.e., the first coordinate axis, V_z is the second vector on the z-axis, i.e., the second coordinate axis, α is the first coefficient, and β is the second coefficient. It should be noted that the vector V_y on the first coordinate axis and the vector V_z on the second coordinate axis may be unit vectors, i.e., vectors whose modulus is 1. The first coefficient α and the second coefficient β may be set according to the inherent relationship between the face orientation and the brain direction. Generally, the absolute value of the first coefficient α is greater than that of the second coefficient β, the ratio between them being between 1 and 5, preferably 2.
Assume that the direction from the rear part of the brain region to the front part of the brain region is the first coordinate axis, i.e., the y-axis, and the direction from the lower part of the brain region to the upper part of the brain region is the second coordinate axis, i.e., the z-axis. In the example shown in fig. 4, V_y is a vector in the positive y-axis direction, V_z is a vector in the positive z-axis direction, the first coefficient α is a positive number, and the second coefficient β is a negative number. In another example, V_y is a vector in the negative y-axis direction, V_z is a vector in the negative z-axis direction, the first coefficient α is a negative number, and the second coefficient β is a positive number. In another example, V_y is a vector in the positive y-axis direction, V_z is a vector in the negative z-axis direction, and both the first coefficient α and the second coefficient β are positive numbers. In another example, V_y is a vector in the negative y-axis direction, V_z is a vector in the positive z-axis direction, and both the first coefficient α and the second coefficient β are negative numbers.
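As a minimal numerical sketch of the fig. 4 convention (V_y pointing from rear to front, V_z from bottom to top, α positive and β negative), with illustrative coefficient values chosen at the preferred ratio |α|/|β| = 2:

```python
import numpy as np

# Brain coordinate system of fig. 4: y-axis from rear to front,
# z-axis from bottom to top; both taken as unit vectors.
v_y = np.array([0.0, 1.0, 0.0])
v_z = np.array([0.0, 0.0, 1.0])

# Illustrative coefficients at the preferred ratio |alpha| / |beta| = 2;
# alpha positive and beta negative per the fig. 4 convention.
alpha, beta = 2.0, -1.0

v_face = alpha * v_y + beta * v_z          # points forward and slightly down
v_face = v_face / np.linalg.norm(v_face)   # keep only the direction
```

The resulting unit vector points forward and slightly downward, consistent with the fixed relationship between the face orientation and the brain direction.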
And step S133, determining the direction of the face orientation vector as the face orientation.
In this embodiment, the front, rear, upper and lower parts of the brain region are determined from the image of the brain region by using specific features of the human brain structure, and the face orientation is then determined by using the inherent relationship between the face orientation and the brain direction. In other words, the face orientation is determined from the inherent features of the human head structure, without depending on auxiliary information such as the tag information of DICOM data. The method therefore has high accuracy, strong robustness and good generalization, and is applicable to medical images obtained by scanning with different devices.
In a specific implementation, a deep learning network may be used to perform brain segmentation processing on the medical image to be processed to obtain the image of the brain region. Specifically, the deep learning network is trained with sample data and gold-standard brain segmentation data, and the trained deep learning network is then used to perform brain segmentation processing on the medical image to be processed to obtain an image of the predicted brain region.
In a specific implementation, an image of the brain region may also be obtained based on the following features of the human brain structure: the brain is completely enclosed within the skull, and the skull is the only nearly completely enclosed bone cavity in the human body, or at least in the upper body. It should be noted that, because the skull may have some small holes through which blood vessels and nerves pass, such as the foramen ovale, the skull is a nearly fully enclosed bone cavity rather than a fully enclosed one. The present embodiment provides a brain segmentation method for acquiring an image of a brain region, as shown in fig. 5, including the following steps S10a to S10e:
And step S10a, performing bone tissue segmentation processing on the acquired medical image to be processed to obtain an image of a bone tissue region.
In a specific implementation, in order to improve the efficiency of image processing, the acquired medical image to be processed may first be downsampled and then subjected to bone tissue segmentation processing. The bone tissue may be segmented by a threshold segmentation method: specifically, pixel points whose gray values are greater than a first preset threshold in the medical image to be processed are determined to belong to the bone tissue region, and pixel points whose gray values are less than or equal to the first preset threshold are determined to belong to the non-bone tissue region. The first preset threshold may be set according to the actual situation, for example, to 400 HU.
Step S10b, performing morphological processing on the image of the bone tissue region to obtain a first image.
Specifically, the image of the bone tissue region may be subjected to a closing operation, i.e., dilation followed by erosion, to fill small breaks and holes in the bone tissue so that the skull becomes a fully enclosed bone cavity. In one specific example, the closing operation is performed on the image of the bone tissue region using a spherical morphological structuring element with a radius of 3 mm.
Step S10c, performing hole filling processing on the first image to obtain a second image. After the bone tissue segmentation processing, some holes are inevitably present in the first image; by performing hole filling processing on them, all closed cavities can be filled.
And step S10d, obtaining an image of the bone cavity area according to the second image and the first image. In a specific implementation, the difference set between the second image and the first image may be obtained, so as to extract all closed bone cavities, so as to obtain an image of the bone cavity region.
And step S10e, determining the largest connected domain in the image of the bone cavity area as the image of the brain area. Specifically, with the feature that the skull wraps the brain and the skull is a bone cavity that is nearly completely closed, the largest connected domain in the image of the bone cavity region is determined as the image of the brain region.
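The pipeline of steps S10a to S10e can be sketched as follows with NumPy and SciPy, assuming an isotropic CT volume in Hounsfield units so that the structuring-element radius is given in voxels; the function name and default values are illustrative, not prescribed by the method:

```python
import numpy as np
from scipy import ndimage

def segment_brain_cavity(volume_hu, threshold=400, radius=3):
    """Sketch of steps S10a-S10e: extract the largest enclosed bone cavity.

    volume_hu : 3-D array of CT intensities in HU (isotropic voxels
    assumed, so `radius` is in voxels rather than millimetres).
    """
    # S10a: threshold segmentation of bone tissue.
    bone = volume_hu > threshold
    # S10b: morphological closing with a spherical structuring element
    # to seal small breaks and holes in the skull.
    r = radius
    zz, yy, xx = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = (xx**2 + yy**2 + zz**2) <= r**2
    first = ndimage.binary_closing(bone, structure=ball)
    # S10c: fill all closed cavities.
    second = ndimage.binary_fill_holes(first)
    # S10d: the difference set between the second and first images is
    # exactly the enclosed cavities.
    cavities = second & ~first
    # S10e: keep the largest connected component as the brain region.
    labels, n = ndimage.label(cavities)
    if n == 0:
        return np.zeros_like(cavities)
    sizes = ndimage.sum(cavities, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

Downsampling before thresholding, as suggested in step S10a, would simply be applied to `volume_hu` before calling this function.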
The brain segmentation method provided by the embodiment depends on the specific characteristics of the brain, the image of the brain region can be obtained through simple morphological processing, the processing process is rapid, and the processing result is stable.
As shown in fig. 6, the present embodiment further provides a face orientation determining apparatus 60, which includes a direction vector determining module 61, a brain direction determining module 62, and a face orientation determining module 63.
The direction vector determining module 61 is configured to perform principal component analysis processing on the image of the brain region according to the first position, and determine a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region. The brain direction determining module 62 is configured to determine, for an image of the brain region, a front portion of the brain region and a rear portion of the brain region according to distances from voxels located on both sides of a first plane to the first plane, respectively; and determining an upper portion of the brain region and a lower portion of the brain region according to distances from voxels located on both sides of the second plane to the second plane, respectively; the first plane is a plane perpendicular to the first direction vector and passing through the first position, the second plane is a plane perpendicular to the second direction vector and passing through the first position, and the first position is an intersection point of the first direction vector and the second direction vector. The face orientation determining module 63 is configured to determine the face orientation from at least one of a front portion of the brain region and a rear portion of the brain region, at least one of an upper portion of the brain region and a lower portion of the brain region, and the first position.
In an optional implementation manner, the direction vector determining module is specifically configured to perform principal component analysis processing on an image of a brain region with the first position as an origin, so as to obtain three orthogonal principal component direction vectors and a feature value corresponding to each principal component direction vector; the principal component direction vector corresponding to the largest eigenvalue is determined as a first direction vector, and the principal component direction vector corresponding to the next largest eigenvalue is determined as a second direction vector.
In an alternative embodiment, the brain direction determining module is specifically configured to determine that the first side of the first plane is a front part of the brain region and the second side of the first plane is a rear part of the brain region, in a case that an average distance from a voxel located on the first side of the first plane to the first plane is greater than an average distance from a voxel located on the second side of the first plane to the first plane.
In an alternative embodiment, the brain direction determining module is specifically configured to determine that the first side of the second plane is an upper portion of the brain region and the second side of the second plane is a lower portion of the brain region, in a case where an average distance from a voxel located on the first side of the second plane to the second plane is greater than an average distance from a voxel located on the second side of the second plane to the second plane.
In an optional implementation manner, the face orientation determining module includes a coordinate axis determining unit, a vector determining unit and an orientation determining unit. The coordinate axis determining unit is configured to determine a first coordinate axis based on at least one of a front portion of the brain region and a rear portion of the brain region and a second coordinate axis based on at least one of an upper portion of the brain region and a lower portion of the brain region with the first position as an origin of coordinates. The vector determining unit is used for determining a face orientation vector according to a first vector on the first coordinate axis and a second vector on the second coordinate axis; the direction of the first vector is the same as the positive direction of the first coordinate axis, and the direction of the second vector is the same as the positive direction of the second coordinate axis. The orientation determining unit is used for determining the direction of the face orientation vector as the face orientation.
In an optional embodiment, the face orientation determining device further includes a bone tissue segmentation processing unit, a morphology processing unit, a hole filling processing unit, a bone cavity extraction unit, and a brain region determining unit. The bone tissue segmentation processing unit is used for performing bone tissue segmentation processing on the acquired medical image to be processed to obtain an image of a bone tissue region. The morphological processing unit is used for performing morphological processing on the image of the bone tissue region to obtain a first image. The hole filling processing unit is used for performing hole filling processing on the first image to obtain a second image. The bone cavity extraction unit is used for obtaining an image of a bone cavity area according to the second image and the first image. The brain region determining unit is used for extracting the maximum connected domain in the image of the bone cavity region to obtain the image of the brain region.
It should be noted that, the device for determining the face orientation in this embodiment may be a separate chip, a chip module or an electronic device, or may be a chip or a chip module integrated in the electronic device.
Each module/unit included in the face orientation determining device described in this embodiment may be a software module/unit or a hardware module/unit, or some of the modules/units may be software modules/units while others are hardware modules/units.
Example 2
Fig. 7 is a flowchart of the face orientation determining method provided in this embodiment. The method may be performed by a face orientation determining device, which may be implemented by software and/or hardware and may be part or all of an electronic device. The electronic device in this embodiment may be a personal computer, for example a desktop computer, an all-in-one computer, a notebook computer or a tablet computer, or a terminal device such as a mobile phone, a wearable device or a handheld computer. The method for determining the face orientation provided in this embodiment is described below with an electronic device as the execution subject.
As shown in fig. 7, the method for determining the face orientation provided in the present embodiment may include the following steps S21 to S23:
and S21, performing principal component analysis processing on the image of the brain region according to the first position, and determining a first direction vector and a second direction vector.
Wherein the first location is used to characterize a central location of a brain region. In a specific implementation, a three-dimensional rectangular coordinate system may be established in a three-dimensional space where the image of the brain region is located, where the coordinate system includes an x-axis, a y-axis, and a z-axis, and the x-coordinates, the y-coordinates, and the z-coordinates of all voxels in the image of the brain region are averaged, and the three averages are respectively used as the x-coordinates, the y-coordinates, and the z-coordinates of the first position. In some examples, the first location may also be referred to as a geometric center of the brain region.
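Under the assumption that the brain region is given as a boolean voxel mask, the coordinate averaging described above can be sketched as follows (the function name is hypothetical):

```python
import numpy as np

def first_position(brain_mask):
    """Center of the brain region: the averages of the x-, y- and
    z-coordinates of all voxels in the boolean brain-region mask."""
    coords = np.argwhere(brain_mask)  # N x 3 array of voxel indices
    return coords.mean(axis=0)
```

This is the geometric center (centroid) of the brain region mentioned in the text.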
In a specific implementation, the brain region image may be obtained by performing brain segmentation processing on the medical image to be processed. The medical image to be processed may be a DICOM image, specifically a CT image, an MRI image, or the like; it may be obtained by scanning the head (including the face) of a patient with a CT scanner, a magnetic resonance imaging device, or the like, or may be downloaded from a server.
Principal component analysis (PCA) aims to compress the scale of the original data matrix and reduce the dimensionality of the feature vectors so that they reflect the principal features of the data. In step S21, the first direction vector and the second direction vector are determined from the result of the principal component analysis processing of the image of the brain region. The first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region.
The brain structure of the human body generally has the following features: the size of the brain region in the front-back direction is the largest, the size in the up-down direction is the next largest, and the size in the left-right direction is the smallest; moreover, the brain region is asymmetric in both the front-back direction and the up-down direction. In an alternative embodiment of step S21, the first direction vector and the second direction vector are determined based on the above regularities, and specifically, the following steps S211 and S212 may be included:
step S211, performing principal component analysis processing on the image of the brain region by taking the first position as an origin to obtain three orthogonal principal component direction vectors and characteristic values corresponding to the principal component direction vectors.
In a specific implementation, voxels belonging to the brain region in the image of the brain region may be converted into a point cloud, and principal component analysis processing may be performed on the point cloud.
In step S211, principal component analysis processing is performed on the image of the brain region with the first position as the origin, yielding three orthogonal principal component direction vectors P1, P2 and P3 emitted from the origin as shown in fig. 2, together with an eigenvalue Value1 corresponding to P1, an eigenvalue Value2 corresponding to P2, and an eigenvalue Value3 corresponding to P3. The three principal component direction vectors reflect different directions of the brain, and the corresponding three eigenvalues reflect the sizes of the brain in those directions.
Step S212, determining the principal component direction vector corresponding to the largest eigenvalue as a first direction vector, and determining the principal component direction vector corresponding to the next largest eigenvalue as a second direction vector.
Specifically, the three eigenvalues Value1, Value2 and Value3 are sorted, and the first direction vector and the second direction vector are determined from the principal component direction vectors P1, P2 and P3 according to the sorting result.
The largest eigenvalue represents the largest dimension in the corresponding direction, the next largest eigenvalue represents the next largest dimension in the corresponding direction, and the smallest eigenvalue represents the smallest dimension in the corresponding direction. Since the size of the front-rear direction of the brain is largest, the size of the up-down direction of the brain is next largest, and the size of the left-right direction of the brain is smallest, the principal component direction vector corresponding to the largest eigenvalue is determined as a first direction vector for indicating the front-to-rear direction or the rear-to-front direction of the brain region, the principal component direction vector corresponding to the next largest eigenvalue is determined as a second direction vector for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region, and the principal component direction vector corresponding to the smallest eigenvalue is determined as a third direction vector for indicating the left-to-right direction or the right-to-left direction of the brain region.
Step S22, determining a first coordinate axis according to the first position and the distances between voxels positioned at two sides of a first plane and the first plane respectively aiming at the image of the brain region; and determining a second coordinate axis according to the distances between the voxels at the two sides of the second plane and the first position and the second plane respectively.
The first plane is a plane perpendicular to the first direction vector and passing through the first position, the second plane is a plane perpendicular to the second direction vector and passing through the first position, and the first coordinate axis and the second coordinate axis intersect at the first position.
In a specific implementation, for any voxel located on both sides of the first plane, the distance from the voxel to the first plane refers to the vertical distance from the voxel to the first plane, i.e. the distance between the point where the voxel is projected onto the first direction vector and the first position. For any voxel located on either side of the second plane, the distance of the voxel to said second plane refers to the perpendicular distance of the voxel to the second plane, i.e. the distance between the point of the voxel projected onto said second direction vector and the first position.
In an alternative embodiment of step S22, the first coordinate axis is determined by taking the first position as the origin of coordinates according to the magnitude relation between the average distance from the voxels located at the first side of the first plane to the first plane and the average distance from the voxels located at the second side of the first plane to the first plane. In a specific example, if the average distance from the voxel located on the first side of the first plane to the first plane is greater than the average distance from the voxel located on the second side of the first plane to the first plane, the first position is taken as the origin of coordinates, and the direction from the first side of the first plane to the second side of the first plane is taken as the first coordinate axis.
In an alternative embodiment of step S22, the first coordinate axis is determined by taking the first position as the origin of coordinates according to the magnitude relation between the maximum distance of the voxels located at the first side of the first plane to the first plane and the maximum distance of the voxels located at the second side of the first plane to the first plane. In a specific example, if the maximum distance from the voxel located on the first side of the first plane to the first plane is greater than the maximum distance from the voxel located on the second side of the first plane to the first plane, the first position is taken as the origin of coordinates, and the direction from the second side of the first plane to the first side of the first plane is taken as the first coordinate axis.
In an alternative embodiment of step S22, the second coordinate axis is determined by taking the first position as the origin of coordinates according to the magnitude relation between the average distance from the voxels located at the first side of the second plane to the second plane and the average distance from the voxels located at the second side of the second plane to the second plane. In a specific example, if the average distance from the voxel located on the first side of the second plane to the second plane is greater than the average distance from the voxel located on the second side of the second plane to the second plane, the first position is taken as the origin of coordinates, and a direction from the first side of the second plane to the second side of the second plane is taken as the second coordinate axis.
In an alternative embodiment of step S22, the second coordinate axis is determined by taking the first position as the origin of coordinates according to the magnitude relation between the maximum distance of the voxels located at the first side of the second plane to the second plane and the maximum distance of the voxels located at the second side of the second plane to the second plane.
In a specific example, if the maximum distance from the voxel located on the first side of the second plane to the second plane is greater than the maximum distance from the voxel located on the second side of the second plane to the second plane, the first position is taken as the origin of coordinates, and a direction from the first side of the second plane to the second side of the second plane is taken as the second coordinate axis.
And S23, determining the face orientation according to the first coordinate axis and the second coordinate axis.
In an alternative embodiment of step S23, the following steps S231 and S232 are included:
step S231, determining a face orientation vector according to the first vector on the first coordinate axis and the second vector on the second coordinate axis; the direction of the first vector is the same as the positive direction of the first coordinate axis, and the direction of the second vector is the same as the positive direction of the second coordinate axis.
In a specific implementation, the first vector and the second vector may be unit vectors, that is, each has a modulus of 1. Specifically, the face orientation vector may be obtained as the sum of the product of the first vector and a first coefficient and the product of the second vector and a second coefficient. Typically, the ratio of the absolute value of the first coefficient to the absolute value of the second coefficient is between 1 and 5, preferably 2.
And step S232, determining the direction of the face orientation vector as the face orientation.
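As an illustrative sketch of steps S231 and S232, the weighted combination of the two unit vectors can be written as follows (Python; the function name face_orientation is chosen for illustration only, and the coefficient ratio of 2 follows the preferred value above):

```python
import numpy as np

def face_orientation(first_axis, second_axis, k1=2.0, k2=1.0):
    """Combine the unit vectors on the two coordinate axes into a face
    orientation vector: k1 * first_vector + k2 * second_vector, with
    |k1| > |k2| (here the preferred ratio of 2 is used)."""
    v1 = np.asarray(first_axis, float)
    v2 = np.asarray(second_axis, float)
    v1 = v1 / np.linalg.norm(v1)        # first vector, modulus 1
    v2 = v2 / np.linalg.norm(v2)        # second vector, modulus 1
    v = k1 * v1 + k2 * v2
    return v / np.linalg.norm(v)        # direction of the face orientation vector

# Toy example: a forward-pointing first axis plus a downward-pointing second axis
v = face_orientation([0.0, 1.0, 0.0], [0.0, 0.0, -1.0])
print(v)
```

Because |k1| > |k2|, the resulting direction leans mostly along the first (front-back) coordinate axis, matching the intuition that the face points forward and only slightly up or down.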
The method provided by this embodiment does not rely on tag information in DICOM data or other auxiliary information when determining the face orientation. It offers high accuracy as well as strong robustness and generalization, and is applicable to medical images acquired by different devices, at different scanning resolutions, over different scan ranges of the human body that include the face.
In a specific implementation, a deep learning network may be used to perform brain segmentation on the medical image to be processed to obtain an image of the brain region. Specifically, the network is trained with sample data and gold-standard brain segmentations, and the trained network is then used to segment the medical image to be processed, yielding an image of the predicted brain region.
In a specific implementation, an image of the brain region may also be obtained based on the following features of human brain anatomy: the brain is completely enclosed within the skull, and the skull is the only nearly fully enclosed bone cavity in the human body, or at least in the upper body. It should be noted that, because the skull has some small openings through which blood vessels and nerves pass, such as the foramen ovale, it is a nearly fully enclosed bone cavity rather than a fully enclosed one. This embodiment provides a brain segmentation method for acquiring an image of the brain region, comprising the following steps S20a to S20e:
and step S20a, performing bone tissue segmentation processing on the acquired medical image to be processed to obtain an image of a bone tissue region.
In a specific implementation, to improve the efficiency of image processing, the acquired medical image to be processed may first be downsampled and then subjected to bone tissue segmentation. A threshold segmentation method may be used: pixel points in the medical image to be processed whose gray values are greater than a first preset threshold are determined to belong to the bone tissue region, and pixel points whose gray values are less than or equal to the first preset threshold are determined to belong to the non-bone tissue region. The first preset threshold may be set according to the actual situation, for example to 400 HU.
And step S20b, performing morphological processing on the image of the bone tissue region to obtain a first image.
Specifically, a closing operation, i.e., dilation followed by erosion, may be applied to the image of the bone tissue region to fill small breaks and holes in the bone tissue, so that the skull becomes a fully enclosed bone cavity. In one specific example, the closing operation uses a spherical morphological structuring element with a radius of 3 mm.
Step S20c, performing hole filling processing on the first image to obtain a second image. After the bone tissue segmentation, some holes are inevitably present in the first image; performing hole filling on the first image fills all the enclosed cavities.
And step S20d, obtaining an image of the bone cavity area according to the second image and the first image. In a specific implementation, the difference set between the second image and the first image may be obtained, so as to extract all closed bone cavities, so as to obtain an image of the bone cavity region.
Step S20e, determining the largest connected domain in the image of the bone cavity region as the image of the brain region. Specifically, with the feature that the skull wraps the brain and the skull is a bone cavity that is nearly completely closed, the largest connected domain in the image of the bone cavity region is determined as the image of the brain region.
The brain segmentation method provided by this embodiment relies on the distinctive anatomical characteristics of the brain: the image of the brain region can be obtained through simple morphological processing, the processing is fast, and the result is stable.
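The steps S20a to S20e above can be sketched with standard morphological operations (an illustrative Python sketch using scipy.ndimage; the function name segment_brain is hypothetical, the 400 HU threshold and 3 mm radius follow the values suggested above, and a voxel spacing of 1 mm is assumed so the radius is expressed in voxels):

```python
import numpy as np
from scipy import ndimage

def segment_brain(volume_hu, bone_threshold=400, closing_radius=3):
    """Extract the brain region as the largest enclosed bone cavity (S20a-S20e)."""
    # S20a: threshold segmentation of bone tissue
    bone = volume_hu > bone_threshold
    # S20b: closing (dilation then erosion) with a spherical structuring element
    r = closing_radius
    zz, yy, xx = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = zz ** 2 + yy ** 2 + xx ** 2 <= r ** 2
    first_image = ndimage.binary_closing(bone, structure=ball)
    # S20c: fill all enclosed holes
    second_image = ndimage.binary_fill_holes(first_image)
    # S20d: difference of the two images = all closed bone cavities
    cavities = second_image & ~first_image
    # S20e: keep the largest connected component as the brain region
    labels, n = ndimage.label(cavities)
    if n == 0:
        return np.zeros_like(cavities)
    sizes = ndimage.sum(cavities, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# Synthetic "skull": a hollow spherical bone shell in a 40^3 volume
vol = np.zeros((40, 40, 40))
zz, yy, xx = np.ogrid[:40, :40, :40]
d2 = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2
vol[(d2 >= 100) & (d2 <= 169)] = 1000.0
brain = segment_brain(vol)
print(brain[20, 20, 20])  # -> True (the cavity interior is labeled as brain)
```

On the synthetic shell, the closing seals the bone surface, hole filling recovers the solid head, and the image difference isolates the interior cavity, whose largest connected component plays the role of the brain region.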
As shown in fig. 8, the present embodiment further provides a face orientation determining apparatus 70, which includes a direction vector determining module 71, a coordinate axis determining module 72, and a face orientation determining module 73.
The direction vector determining module 71 is configured to perform principal component analysis processing on the image of the brain region according to the first position, and determine a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region. The coordinate axis determining module 72 is configured to determine, for the image of the brain region, a first coordinate axis according to the first position and distances between voxels located at two sides of a first plane and the first plane, respectively; determining a second coordinate axis according to the first position and the distances between the voxels positioned at two sides of the second plane and the second plane respectively; the first plane is a plane perpendicular to the first direction vector and passing through the first position, the second plane is a plane perpendicular to the second direction vector and passing through the first position, and the first coordinate axis and the second coordinate axis intersect at the first position. The face orientation determining module 73 is configured to determine a face orientation according to the first coordinate axis and the second coordinate axis.
In an optional implementation manner, the direction vector determining module is specifically configured to perform principal component analysis processing on the image of the brain region with the first position as the origin, so as to obtain three orthogonal principal component direction vectors and an eigenvalue corresponding to each principal component direction vector; the principal component direction vector corresponding to the largest eigenvalue is determined as the first direction vector, and the principal component direction vector corresponding to the next largest eigenvalue is determined as the second direction vector.
In an optional implementation manner, the coordinate axis determining module is specifically configured to determine a first coordinate axis with the first location as an origin of coordinates according to a magnitude relation between an average distance from a voxel located on a first side of a first plane to the first plane and an average distance from a voxel located on a second side of the first plane to the first plane.
In an optional implementation manner, the coordinate axis determining module is specifically configured to determine the second coordinate axis with the first location as the origin of coordinates according to a magnitude relation between an average distance from the voxel located on the first side of the second plane to the second plane and an average distance from the voxel located on the second side of the second plane to the second plane.
In an alternative embodiment, the face orientation determining module includes a vector determining unit and an orientation determining unit. The vector determining unit is used for determining a face orientation vector according to a first vector on the first coordinate axis and a second vector on the second coordinate axis; the direction of the first vector is the same as the positive direction of the first coordinate axis, and the direction of the second vector is the same as the positive direction of the second coordinate axis. The orientation determining unit is used for determining the direction of the face orientation vector as the face orientation.
In an optional embodiment, the face orientation determining device further includes a bone tissue segmentation processing unit, a morphology processing unit, a hole filling processing unit, a bone cavity extraction unit, and a brain region determining unit. The bone tissue segmentation processing unit is used for performing bone tissue segmentation processing on the acquired medical image to be processed to obtain an image of a bone tissue region. The morphological processing unit is used for performing morphological processing on the image of the bone tissue region to obtain a first image. The hole filling processing unit is used for performing hole filling processing on the first image to obtain a second image. The bone cavity extraction unit is used for obtaining an image of a bone cavity area according to the second image and the first image. The brain region determining unit is used for extracting the maximum connected domain in the image of the bone cavity region to obtain the image of the brain region.
Example 3
Fig. 9 is a flowchart of a face orientation determining method provided in this embodiment. The face orientation determining method may be performed by a face orientation determining device, which may be implemented in software and/or hardware and may be part or all of an electronic device. The electronic device in this embodiment may be a personal computer, for example a desktop computer, an all-in-one computer, a notebook computer, or a tablet computer, or a terminal device such as a mobile phone, a wearable device, or a handheld computer. The face orientation determining method provided in this embodiment is described below with an electronic device as the execution subject.
As shown in fig. 9, the method for determining the face orientation provided in the present embodiment may include the following steps S31 to S33:
step S31, performing principal component analysis processing on the image of the brain region according to the first position, and determining a first direction vector and a second direction vector.
Wherein the first location is used to characterize a central location of a brain region. In a specific implementation, a three-dimensional rectangular coordinate system may be established in a three-dimensional space where the image of the brain region is located, where the coordinate system includes an x-axis, a y-axis, and a z-axis, and the x-coordinates, the y-coordinates, and the z-coordinates of all voxels in the image of the brain region are averaged, and the three averages are respectively used as the x-coordinates, the y-coordinates, and the z-coordinates of the first position. In some examples, the first location may also be referred to as a geometric center of the brain region.
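The first position described above can be computed as a simple coordinate average (an illustrative Python sketch; the function name brain_center is hypothetical and unit voxel spacing is assumed):

```python
import numpy as np

def brain_center(brain_mask: np.ndarray) -> np.ndarray:
    """Return the first position (geometric center) of a binary brain mask.

    brain_mask: 3-D boolean/0-1 array whose non-zero voxels belong to the
    brain region. The center is the mean of the x, y and z indices of all
    brain voxels, as described in the text."""
    coords = np.argwhere(brain_mask)   # (N, 3) array of voxel indices
    return coords.mean(axis=0)         # average x, y, z -> first position

# Toy 3x3x3 mask with a single voxel at the center
mask = np.zeros((3, 3, 3), dtype=bool)
mask[1, 1, 1] = True
print(brain_center(mask))  # -> [1. 1. 1.]
```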
In a specific implementation, the image of the brain region may be obtained by performing brain segmentation on the medical image to be processed. The medical image to be processed may be a DICOM image, specifically a CT image, an MRI image, or the like; it may be obtained by scanning the head (including the face) of a patient with a CT scanner, a magnetic resonance imager, or the like, or by downloading from a server.
Principal component analysis (PCA) aims to compress the scale of the original data matrix and reduce the dimensionality of the feature vectors so that they reflect the principal features of the data. In step S31, the first direction vector and the second direction vector are determined from the result of the principal component analysis of the image of the brain region. The first direction vector is used for indicating the front-to-back or back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom or bottom-to-top direction of the brain region.
The human brain structure generally has the following features: the brain region is largest in the front-back direction, next largest in the up-down direction, and smallest in the left-right direction, and it is asymmetric in both the front-back and up-down directions. In an alternative embodiment of step S31, the first direction vector and the second direction vector are determined based on these features; specifically, the following steps S311 and S312 may be included:
Step S311, performing principal component analysis processing on the image of the brain region with the first position as the origin, to obtain three orthogonal principal component direction vectors and an eigenvalue corresponding to each principal component direction vector.
In a specific implementation, voxels belonging to the brain region in the image of the brain region may be converted into a point cloud, and principal component analysis processing may be performed on the point cloud.
In step S311, principal component analysis is performed on the image of the brain region with the first position as the origin, yielding three orthogonal principal component direction vectors P1, P2 and P3 emitted from the origin as shown in fig. 2, together with an eigenvalue Value1 corresponding to P1, an eigenvalue Value2 corresponding to P2, and an eigenvalue Value3 corresponding to P3. The three principal component direction vectors reflect different directions of the brain, and the corresponding three eigenvalues reflect the size of the brain in those directions.
Step S312, a principal component direction vector corresponding to the largest eigenvalue is determined as a first direction vector, and a principal component direction vector corresponding to the next largest eigenvalue is determined as a second direction vector.
Specifically, the three eigenvalues Value1, Value2 and Value3 are sorted, and the first direction vector and the second direction vector are determined from the principal component direction vectors P1, P2 and P3 according to the sorting result.
The largest eigenvalue represents the largest dimension in the corresponding direction, the next largest eigenvalue represents the next largest dimension in the corresponding direction, and the smallest eigenvalue represents the smallest dimension in the corresponding direction. Since the size of the front-rear direction of the brain is largest, the size of the up-down direction of the brain is next largest, and the size of the left-right direction of the brain is smallest, the principal component direction vector corresponding to the largest eigenvalue is determined as a first direction vector for indicating the front-to-rear direction or the rear-to-front direction of the brain region, the principal component direction vector corresponding to the next largest eigenvalue is determined as a second direction vector for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region, and the principal component direction vector corresponding to the smallest eigenvalue is determined as a third direction vector for indicating the left-to-right direction or the right-to-left direction of the brain region.
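Steps S311 and S312 can be sketched as a PCA of the brain-voxel point cloud (illustrative Python; the helper principal_directions is hypothetical, and the toy mask is an axis-aligned ellipsoid whose extents mimic the front-back > up-down > left-right rule):

```python
import numpy as np

def principal_directions(brain_mask):
    """PCA of the brain-voxel point cloud about the first position.

    Returns (vectors, values): three orthonormal principal direction vectors
    (columns of `vectors`) sorted by descending eigenvalue. Per the text, the
    first column indicates the front-back axis, the second the up-down axis,
    and the third the left-right axis."""
    pts = np.argwhere(brain_mask).astype(float)
    pts -= pts.mean(axis=0)                  # shift origin to the first position
    cov = pts.T @ pts / len(pts)             # 3x3 covariance of the point cloud
    values, vectors = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
    order = np.argsort(values)[::-1]         # reorder to descending
    return vectors[:, order], values[order]

# Toy ellipsoid: longest along axis 0, next along axis 1, shortest along axis 2
z, y, x = np.ogrid[-10:11, -6:7, -3:4]
mask = (z / 10.0) ** 2 + (y / 6.0) ** 2 + (x / 3.0) ** 2 <= 1.0
vecs, vals = principal_directions(mask)
print(np.abs(vecs[:, 0]).round(2))  # -> [1. 0. 0.] (first direction = longest axis)
```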
Step S32, determining, for the image of the brain region, the distances from the voxels located on both sides of a first plane to the first plane, and the distances from the voxels located on both sides of a second plane to the second plane.
The first plane is a plane perpendicular to the first direction vector and passing through the first position, and the second plane is a plane perpendicular to the second direction vector and passing through the first position.
In a specific implementation, for any voxel located on both sides of the first plane, the distance from the voxel to the first plane refers to the vertical distance from the voxel to the first plane, i.e. the distance between the point where the voxel is projected onto the first direction vector and the first position. For any voxel located on either side of the second plane, the distance of the voxel to said second plane refers to the perpendicular distance of the voxel to the second plane, i.e. the distance between the point of the voxel projected onto said second direction vector and the first position.
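The perpendicular distance described above reduces to projecting each voxel onto the corresponding direction vector (illustrative Python; signed_distances is a hypothetical helper whose sign indicates on which side of the plane a voxel lies):

```python
import numpy as np

def signed_distances(points, center, normal):
    """Perpendicular distances from points to the plane through `center`
    with normal `normal` (the first or second direction vector).

    The absolute value is the voxel-to-plane distance used in steps S32 and
    S33; the sign distinguishes the two sides of the plane."""
    normal = np.asarray(normal, float)
    normal = normal / np.linalg.norm(normal)          # ensure unit normal
    return (np.asarray(points, float) - center) @ normal

center = np.array([0.0, 0.0, 0.0])
normal = np.array([1.0, 0.0, 0.0])                    # e.g. first direction vector
pts = np.array([[3.0, 1.0, 2.0], [-1.5, 4.0, 0.0]])
print(signed_distances(pts, center, normal))          # -> [ 3.  -1.5]
```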
And step S33, determining the face orientation based on the first position, the distance between the voxels at two sides of the first plane and the first plane respectively, and the distance between the voxels at two sides of the second plane and the second plane respectively.
In a specific implementation, the step S33 may include the following steps S331 to S332:
step S331, determining a first coordinate axis according to the first position and the distances between voxels positioned at two sides of a first plane and the first plane respectively; determining a second coordinate axis according to the first position and the distances between the voxels positioned at two sides of the second plane and the second plane respectively; the first coordinate axis and the second coordinate axis intersect at the first position.
In an alternative embodiment of step S331, the first coordinate axis is determined by taking the first position as the origin of coordinates according to the magnitude relation between the average distance from the voxels located at the first side of the first plane to the first plane and the average distance from the voxels located at the second side of the first plane to the first plane. In a specific example, if the average distance from the voxel located on the first side of the first plane to the first plane is greater than the average distance from the voxel located on the second side of the first plane to the first plane, the first position is taken as the origin of coordinates, and the direction from the first side of the first plane to the second side of the first plane is taken as the first coordinate axis.
In an alternative embodiment of step S331, the first coordinate axis is determined by taking the first position as the origin of coordinates according to the magnitude relation between the maximum distance from the voxel located on the first side of the first plane to the first plane and the maximum distance from the voxel located on the second side of the first plane to the first plane. In a specific example, if the maximum distance from the voxel located on the first side of the first plane to the first plane is greater than the maximum distance from the voxel located on the second side of the first plane to the first plane, the first position is taken as the origin of coordinates, and the direction from the second side of the first plane to the first side of the first plane is taken as the first coordinate axis.
In an alternative embodiment of step S331, the second coordinate axis is determined by taking the first position as the origin of coordinates according to the magnitude relation between the average distance from the voxels located at the first side of the second plane to the second plane and the average distance from the voxels located at the second side of the second plane to the second plane. In a specific example, if the average distance from the voxel located on the first side of the second plane to the second plane is greater than the average distance from the voxel located on the second side of the second plane to the second plane, the first position is taken as the origin of coordinates, and a direction from the first side of the second plane to the second side of the second plane is taken as the second coordinate axis.
In an alternative embodiment of step S331, the second coordinate axis is determined by taking the first position as the origin of coordinates according to the magnitude relation between the maximum distance from the voxel located on the first side of the second plane to the second plane and the maximum distance from the voxel located on the second side of the second plane to the second plane. In a specific example, if the maximum distance from the voxel located on the first side of the second plane to the second plane is greater than the maximum distance from the voxel located on the second side of the second plane to the second plane, the first position is taken as the origin of coordinates, and a direction from the first side of the second plane to the second side of the second plane is taken as the second coordinate axis.
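One possible reading of the average-distance comparison in the embodiments above is sketched below (illustrative Python; which anatomical side plays the role of the "first side" depends on the sign convention of the direction vector, so this is a sketch under that assumption rather than a definitive rule):

```python
import numpy as np

def oriented_axis(signed_dists, direction):
    """Choose the positive direction of a coordinate axis (steps S22/S331).

    signed_dists: signed perpendicular distances of brain voxels to the plane
    (positive = first side, negative = second side), e.g. obtained by
    projecting voxel coordinates onto `direction`.
    Returns `direction` oriented from the side with the larger average
    distance toward the side with the smaller average distance."""
    d = np.asarray(signed_dists, float)
    first = d[d > 0]                   # distances of voxels on the first side
    second = -d[d < 0]                 # distances of voxels on the second side
    direction = np.asarray(direction, float)
    if first.mean() > second.mean():
        return -direction              # axis runs from the first side to the second
    return direction                   # axis runs from the second side to the first

# Toy case: voxels extend further on the first (positive) side on average
dists = np.array([4.0, 3.0, 2.5, -1.0, -0.5])
print(oriented_axis(dists, [1.0, 0.0, 0.0]))
```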
And step S332, determining the face orientation according to the first coordinate axis and the second coordinate axis.
In an alternative embodiment of step S332, the following steps S332a and S332b are included:
step S332a, determining a face orientation vector according to the first vector on the first coordinate axis and the second vector on the second coordinate axis; the direction of the first vector is the same as the positive direction of the first coordinate axis, and the direction of the second vector is the same as the positive direction of the second coordinate axis.
In a specific implementation, the first vector and the second vector may be unit vectors, that is, each has a modulus of 1. Specifically, the face orientation vector may be obtained as the sum of the product of the first vector and a first coefficient and the product of the second vector and a second coefficient. Typically, the ratio of the absolute value of the first coefficient to the absolute value of the second coefficient is between 1 and 5, preferably 2.
Step S332b, determining the direction of the face orientation vector as the face orientation.
The method provided by this embodiment does not rely on tag information in DICOM data or other auxiliary information when determining the face orientation. It offers high accuracy as well as strong robustness and generalization, and is applicable to medical images acquired by different devices, at different scanning resolutions, over different scan ranges of the human body that include the face.
In a specific implementation, a deep learning network may be used to perform brain segmentation on the medical image to be processed to obtain an image of the brain region. Specifically, the network is trained with sample data and gold-standard brain segmentations, and the trained network is then used to segment the medical image to be processed, yielding an image of the predicted brain region.
In a specific implementation, an image of the brain region may also be obtained based on the following features of human brain anatomy: the brain is completely enclosed within the skull, and the skull is the only nearly fully enclosed bone cavity in the human body, or at least in the upper body. It should be noted that, because the skull has some small openings through which blood vessels and nerves pass, such as the foramen ovale, it is a nearly fully enclosed bone cavity rather than a fully enclosed one. This embodiment provides a brain segmentation method for acquiring an image of the brain region, comprising the following steps S30a to S30e:
and step S30a, performing bone tissue segmentation processing on the acquired medical image to be processed to obtain an image of a bone tissue region.
In a specific implementation, to improve the efficiency of image processing, the acquired medical image to be processed may first be downsampled and then subjected to bone tissue segmentation. A threshold segmentation method may be used: pixel points in the medical image to be processed whose gray values are greater than a first preset threshold are determined to belong to the bone tissue region, and pixel points whose gray values are less than or equal to the first preset threshold are determined to belong to the non-bone tissue region. The first preset threshold may be set according to the actual situation, for example to 400 HU.
Step S30b, performing morphological processing on the image of the bone tissue region to obtain a first image.
Specifically, a closing operation, i.e., dilation followed by erosion, may be applied to the image of the bone tissue region to fill small breaks and holes in the bone tissue, so that the skull becomes a fully enclosed bone cavity. In one specific example, the closing operation uses a spherical morphological structuring element with a radius of 3 mm.
Step S30c, performing hole filling processing on the first image to obtain a second image. After the bone tissue segmentation, some holes are inevitably present in the first image; performing hole filling on the first image fills all the enclosed cavities.
And step S30d, obtaining an image of the bone cavity area according to the second image and the first image. In a specific implementation, the difference set between the second image and the first image may be obtained, so as to extract all closed bone cavities, so as to obtain an image of the bone cavity region.
Step S30e, determining the largest connected domain in the image of the bone cavity region as the image of the brain region. Specifically, with the feature that the skull wraps the brain and the skull is a bone cavity that is nearly completely closed, the largest connected domain in the image of the bone cavity region is determined as the image of the brain region.
The brain segmentation method provided by this embodiment relies on the distinctive anatomical characteristics of the brain: the image of the brain region can be obtained through simple morphological processing, the processing is fast, and the result is stable.
As shown in fig. 10, the present embodiment further provides a face orientation determining apparatus 80, which includes a direction vector determining module 81, a distance determining module 82, and a face orientation determining module 83.
The direction vector determining module 81 is configured to perform principal component analysis processing on the image of the brain region according to the first position, and determine a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region. The distance determining module 82 is configured to determine, for the image of the brain region, the distances from the voxels located on both sides of the first plane to the first plane and the distances from the voxels located on both sides of the second plane to the second plane; the first plane is a plane perpendicular to the first direction vector and passing through the first position, and the second plane is a plane perpendicular to the second direction vector and passing through the first position. The face orientation determining module 83 is configured to determine the face orientation based on the first position, the distances from the voxels located on both sides of the first plane to the first plane, and the distances from the voxels located on both sides of the second plane to the second plane.
In an optional implementation manner, the direction vector determining module is specifically configured to perform principal component analysis processing on the image of the brain region with the first position as the origin, so as to obtain three orthogonal principal component direction vectors and an eigenvalue corresponding to each principal component direction vector; the principal component direction vector corresponding to the largest eigenvalue is determined as the first direction vector, and the principal component direction vector corresponding to the next largest eigenvalue is determined as the second direction vector.
In an optional embodiment, the face orientation determining device further includes a bone tissue segmentation processing unit, a morphology processing unit, a hole filling processing unit, a bone cavity extraction unit, and a brain region determining unit. The bone tissue segmentation processing unit is used for performing bone tissue segmentation processing on the acquired medical image to be processed to obtain an image of a bone tissue region. The morphological processing unit is used for performing morphological processing on the image of the bone tissue region to obtain a first image. The hole filling processing unit is used for performing hole filling processing on the first image to obtain a second image. The bone cavity extraction unit is used for obtaining an image of a bone cavity area according to the second image and the first image. The brain region determining unit is used for extracting the maximum connected domain in the image of the bone cavity region to obtain the image of the brain region.
Example 4
Fig. 11 is a flowchart of a face image reconstruction method provided in this embodiment. The face image reconstruction method may be performed by a face image reconstruction device, which may be implemented in software and/or hardware and may be part or all of an electronic device. The electronic device in this embodiment may be a personal computer, for example a desktop computer, an all-in-one computer, a notebook computer, or a tablet computer, or a terminal device such as a mobile phone, a wearable device, or a handheld computer. The face image reconstruction method provided in this embodiment is described below with an electronic device as the execution subject.
As shown in fig. 11, the method for reconstructing a face image provided in this embodiment may include the following steps S41 to S43:
step S41, face orientation is determined by using a face orientation determining method. Specifically, the face orientation may be determined by using the face orientation determination method provided in embodiment 1, embodiment 2, or embodiment 3.
And step S42, performing head segmentation processing on the medical image to be processed to obtain an image of a head region. Wherein the image of the brain region is derived from the medical image to be processed.
In a specific implementation, the medical image to be processed is the same as the medical image to be processed in embodiment 1, embodiment 2 or embodiment 3.
And S43, reconstructing a face image according to the image of the head area and the face orientation.
In this embodiment, the face orientation determination method provided in embodiment 1, embodiment 2 or embodiment 3 yields an accurate face orientation with strong robustness and generalization, and an accurate face image can be reconstructed from the face orientation together with the image of the head region.
In an alternative embodiment, the step S42 includes the following steps S421 to S422:
step S421, performing human body segmentation processing on the medical image to be processed to obtain an image of a human body region.
In a specific implementation, a threshold segmentation method may be used for the human body segmentation: a pixel in the medical image to be processed whose gray value is greater than a second preset threshold is determined to belong to the human body region, and a pixel whose gray value is less than or equal to the second preset threshold is determined to belong to the non-human-body region. The second preset threshold may be set according to the actual situation, for example, to -300 HU. The complete image of the human body region is then obtained through morphological operations, connected-domain analysis, hole filling and other processing.
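The thresholding-plus-cleanup procedure just described can be sketched as follows. The function name and the choice of morphological operations are illustrative assumptions; only the -300 HU example threshold comes from the text.

```python
import numpy as np
from scipy import ndimage

def segment_body(volume_hu, threshold=-300):
    """Threshold-based human body segmentation (example threshold: -300 HU)."""
    body = volume_hu > threshold              # gray value above threshold -> body region
    body = ndimage.binary_opening(body)       # morphological cleanup of small speckles
    labels, n = ndimage.label(body)           # connected-domain analysis
    if n > 1:
        sizes = ndimage.sum(body, labels, index=range(1, n + 1))
        body = labels == (np.argmax(sizes) + 1)   # keep the largest component
    return ndimage.binary_fill_holes(body)    # fill internal holes (e.g. lungs, airways)
```

Applied to a CT-like volume with a dense block inside air, the mask keeps the block and discards the background.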
In a specific implementation, other methods may also be used to obtain an image of the human body region, such as a neural network-based segmentation method, a cluster-based segmentation method, and so on.
Step S422, taking the first position as the center, screening the image of the human body region according to the equivalent radius of the brain region to obtain the image of the head region.
Specifically, the volume of the brain region can be determined from the image of the brain region, and the radius of a sphere having the same volume as the brain region is taken as the equivalent radius R_brain of the brain region. Since the brain sits at a specific location on the head and there is a roughly fixed ratio between the size of the brain and the size of the head, in practice the image of the human body region may be screened with the first position (the central position of the brain region) as the center and K1·R_brain as the screening radius, retaining the pixels within that radius to obtain the image of the head region. K1 is greater than 1; its specific value can be set according to the actual situation and generally lies between 2 and 3.
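The equivalent-radius screening step can be sketched as follows, under stated assumptions: the function name is hypothetical, voxels are treated as unit cubes (so the brain volume equals the voxel count), and K1 = 2.5 is an assumed value within the 2–3 range given above.

```python
import numpy as np

def head_region_mask(body_mask, brain_mask, k1=2.5):
    """Keep body voxels within K1 * R_brain of the brain centre (k1 = 2.5 assumed)."""
    coords = np.argwhere(brain_mask)
    center = coords.mean(axis=0)                        # first position: brain centre
    volume = coords.shape[0]                            # voxel count ~ brain volume
    r_brain = (3.0 * volume / (4.0 * np.pi)) ** (1 / 3) # radius of equal-volume sphere
    grid = np.indices(body_mask.shape).reshape(3, -1).T
    dist = np.linalg.norm(grid - center, axis=1).reshape(body_mask.shape)
    return body_mask & (dist <= k1 * r_brain)
```

With a small spherical brain mask, only the body voxels near the brain centre survive the screening.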
In an alternative embodiment, as shown in fig. 12, the step S43 includes the following steps S431 to S433:
Step S431, performing head contour extraction processing on the image of the head region to obtain an image of the head contour region. In a specific implementation, methods such as binary-image contour extraction algorithms, morphological operations, or gradient computation can be used to obtain the image of the head contour region, which contains the outer contour of the entire head region.
Step S432, performing face contour extraction processing on the image of the head contour region according to the face orientation to obtain the image of the face contour region.
In an alternative embodiment, the step S432 specifically includes the following steps S432a to S432b:
Step S432a, starting from the second position, determining a target point along the face orientation; wherein the second position is used to characterize the central position of the head region.
A three-dimensional rectangular coordinate system, with an x-axis, a y-axis and a z-axis, can be established in the three-dimensional space where the image of the brain region is located; the x-, y- and z-coordinates of all voxels in the image of the head region are averaged, and the three averages are taken as the x-, y- and z-coordinates of the second position, respectively. In some examples, the second position may also be referred to as the geometric center of the head.
In a specific implementation, the target point can be determined starting from the second position, along the face orientation, according to the equivalent radius R_brain of the brain region. In a specific example, the distance between the second position and the target point is K2·R_brain, where the value of K2 can be set according to the actual situation; K2 may be roughly one quarter of K1, and generally lies between 0.5 and 1.
Step S432b, extracting, from the image of the head contour region, the region located on the face-orientation side of a third plane, to obtain the image of the face contour region; the third plane is the plane perpendicular to the face orientation and passing through the target point.
In the example shown in Fig. 13, starting from the second position O_head, the target point P_cut is determined along the face orientation V_face. The plane perpendicular to V_face and passing through P_cut is determined as the third plane, shown as the point cloud cut plane L_cut in Fig. 13. The part of the image of the head contour region located on the V_face side of the cut plane L_cut is the image of the face contour region.
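Treating the head contour as a point cloud, steps S432a and S432b reduce to a signed-distance test against the cut plane. The sketch below is illustrative: the function name is hypothetical, and K2 = 0.7 is an assumed value within the 0.5–1 range given above.

```python
import numpy as np

def face_contour_points(head_contour_pts, o_head, v_face, r_brain, k2=0.7):
    """Keep the head-contour points on the face side of the cut plane L_cut."""
    v = np.asarray(v_face, dtype=float)
    v /= np.linalg.norm(v)                                      # unit vector V_face
    p_cut = np.asarray(o_head, dtype=float) + k2 * r_brain * v  # target point P_cut
    # A point lies on the face side if its signed distance to the plane is positive.
    signed = (np.asarray(head_contour_pts, dtype=float) - p_cut) @ v
    return np.asarray(head_contour_pts)[signed > 0]
```

For points spread along the face-orientation axis, only those beyond the target point are kept.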
And S433, reconstructing a face image according to the image of the face contour area.
In order to make the reconstructed face image more complete, in a specific implementation the image of the face contour region may be subjected to dilation processing to obtain a face edge image S_faceedge containing the face edge; isosurface reconstruction is then performed, via the Marching Cubes algorithm, on the CT values within the region limited by S_faceedge, to obtain the face image. The reconstruction threshold may be set to the CT value of the face surface. Further post-processing may be applied to the reconstructed face image, for example extracting the maximum connected domain, to obtain the final face image.
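The dilation step that produces S_faceedge can be sketched as follows; the function name and the iteration count are assumptions, and the Marching Cubes step is indicated only in a comment since the reconstruction level depends on the scanner's CT calibration.

```python
import numpy as np
from scipy import ndimage

def face_edge_mask(face_contour_mask, iterations=2):
    """Dilate the face-contour mask so the reconstruction region covers the full
    face edge (S_faceedge); the iteration count is an assumption."""
    edge = ndimage.binary_dilation(face_contour_mask, iterations=iterations)
    # Isosurface reconstruction would then run a Marching Cubes extraction
    # (e.g. skimage.measure.marching_cubes) on the CT values restricted to
    # `edge`, with the level set to the CT value of the face surface.
    return edge
```

With the default 6-connected structuring element, two iterations grow the mask by a Manhattan distance of 2.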
In this embodiment, the image of the head contour region is determined according to the image of the head region, the image of the face contour region is determined according to the image of the head contour region and the face orientation, and the face image is reconstructed according to the image of the face contour region, so that automatic reconstruction of the face image can be realized.
As shown in fig. 14, the present embodiment further provides a face image reconstruction device 90, which includes a face orientation determining module 91, a head segmentation processing module 92, and a face image reconstruction module 93. The face orientation determining module 91 is configured to determine the face orientation by using the face orientation determining method described in embodiment 1, embodiment 2 or embodiment 3. The head segmentation processing module 92 is configured to perform head segmentation processing on a medical image to be processed, so as to obtain an image of a head region; wherein the image of the brain region is obtained from the medical image to be processed. The face image reconstruction module 93 is configured to reconstruct a face image according to the image of the head region and the face orientation.
In an optional implementation manner, the head segmentation processing module is specifically configured to perform a human body segmentation process on the medical image to be processed to obtain an image of a human body region; and taking the first position as a center, and screening the image of the human body area according to the equivalent radius of the brain area to obtain the image of the head area.
In an alternative embodiment, the face image reconstruction module includes a head contour extraction unit, a face contour extraction unit, and a face image reconstruction unit. The head contour extraction unit is used for performing head contour extraction processing on the image of the head region to obtain the image of the head contour region. The face contour extraction unit is used for carrying out face contour extraction processing on the image of the head contour region according to the face orientation to obtain the image of the face contour region. The face image reconstruction unit is used for reconstructing a face image according to the image of the face contour area.
In an alternative embodiment, the face contour extraction unit includes a target point determination subunit and a face contour extraction subunit. The target point determining subunit is configured to determine a target point along the face direction with the second position as a starting point; wherein the second position is used to characterize the central position of the head region. The face contour extraction subunit is used for extracting a region positioned at one side of the third plane facing the face from the image of the head contour region to obtain an image of the face contour region; the third plane is a plane perpendicular to the face direction and passing through the target point.
In an alternative embodiment, the target point determination subunit is specifically configured to determine the target point along the face direction based on the equivalent radius of the brain region, starting from the second location.
It should be noted that, the reconstruction device of the face image in this embodiment may be a separate chip, a chip module or an electronic device, or may be a chip or a chip module integrated in the electronic device.
Each module/unit included in the face image reconstruction device described in this embodiment may be a software module/unit or a hardware module/unit, or may be partly a software module/unit and partly a hardware module/unit.
Example 5
Fig. 15 is a schematic structural diagram of an electronic device according to the present embodiment. The electronic device includes at least one processor and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor, to enable the at least one processor to perform the face orientation determination method of embodiment 1, embodiment 2 or embodiment 3, or the face image reconstruction method of embodiment 4. The electronic device provided in this embodiment may be a personal computer, for example, a desktop computer, an all-in-one computer, a notebook computer, or a tablet computer, or a terminal device such as a mobile phone, a wearable device, or a handheld computer. The electronic device 3 shown in fig. 15 is only an example and should not impose any limitation on the functionality and scope of use of the embodiments of the present invention.
The components of the electronic device 3 may include, but are not limited to: the at least one processor 4, the at least one memory 5, and a bus 6 connecting the different system components (including the memory 5 and the processor 4).
The bus 6 includes a data bus, an address bus, and a control bus.
The memory 5 may include volatile memory such as Random Access Memory (RAM) 51 and/or cache memory 52, and may further include Read Only Memory (ROM) 53.
The memory 5 may also include a program/utility 55 having a set (at least one) of program modules 54, such program modules 54 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The processor 4 executes a computer program stored in the memory 5 to thereby execute various functional applications and data processing, such as the face orientation determination method of the above-described embodiment 1, embodiment 2, or embodiment 3, or the face image reconstruction method of embodiment 4.
The electronic device 3 may also communicate with one or more external devices 7, such as a keyboard or a pointing device. Such communication may take place through an input/output (I/O) interface 8. The electronic device 3 may also communicate with one or more networks, such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet, via the network adapter 9. As shown in fig. 15, the network adapter 9 communicates with the other modules of the electronic device 3 via the bus 6. It should be appreciated that, although not shown in fig. 15, other hardware and/or software modules may be used in connection with the electronic device 3, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more units/modules described above may be embodied in a single unit/module in accordance with embodiments of the present invention. Conversely, the features and functions of one unit/module described above may be further divided so as to be embodied by a plurality of units/modules.
Example 6
The present embodiment provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the face orientation determination method of embodiment 1, embodiment 2, or embodiment 3, or the face image reconstruction method of embodiment 4.
More specifically, the readable storage medium may include, but is not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible embodiment, the invention may also be implemented in the form of a program product comprising program code for causing an electronic device to carry out the face orientation determination method of embodiment 1, embodiment 2 or embodiment 3 or the reconstruction method of the face image of embodiment 4 when the program product is run on the electronic device.
The program code for carrying out the invention may be written in any combination of one or more programming languages, and may execute entirely on the electronic device, partly on the electronic device as a stand-alone software package, partly on the electronic device and partly on a remote device, or entirely on the remote device.
While specific embodiments of the invention have been described above, it will be appreciated by those skilled in the art that this is by way of example only, and the scope of the invention is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the principles and spirit of the invention, but such changes and modifications fall within the scope of the invention.

Claims (24)

1. The face orientation determining method is characterized by comprising the following steps of:
performing principal component analysis processing on the image of the brain region according to the first position, and determining a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region;
Determining the front part of the brain region and the rear part of the brain region according to the distance between the voxels positioned at two sides of the first plane and the first plane respectively aiming at the image of the brain region; and determining an upper portion of the brain region and a lower portion of the brain region based on distances between voxels located on both sides of the second plane to the second plane, respectively; wherein the first plane is a plane perpendicular to the first direction vector and passing through the first position, and the second plane is a plane perpendicular to the second direction vector and passing through the first position;
a face orientation is determined from at least one of a front portion of the brain region and a rear portion of the brain region, at least one of an upper portion of the brain region and a lower portion of the brain region, and the first location.
2. The method for determining a face orientation according to claim 1, wherein the step of performing principal component analysis processing on the image of the brain region according to the first position to determine the first direction vector and the second direction vector specifically includes:
performing principal component analysis processing on the image of the brain region by taking the first position as an origin to obtain three orthogonal principal component direction vectors and characteristic values corresponding to each principal component direction vector;
The principal component direction vector corresponding to the largest eigenvalue is determined as a first direction vector, and the principal component direction vector corresponding to the next largest eigenvalue is determined as a second direction vector.
3. The face orientation determining method according to claim 1, wherein the step of determining the front portion of the brain region and the rear portion of the brain region according to the distances from the voxels located at both sides of the first plane, respectively, to the first plane specifically comprises:
if the average distance from the voxels on the first side of the first plane to the first plane is greater than the average distance from the voxels on the second side of the first plane to the first plane, the first side of the first plane is determined to be the front of the brain region and the second side of the first plane is determined to be the rear of the brain region.
4. The face orientation determining method according to claim 1, wherein the step of determining the upper part of the brain region and the lower part of the brain region based on the distances from the voxels located at both sides of the second plane, respectively, to the second plane specifically comprises:
if the average distance from the voxels on the first side of the second plane to the second plane is greater than the average distance from the voxels on the second side of the second plane to the second plane, determining that the first side of the second plane is the upper portion of the brain region and the second side of the second plane is the lower portion of the brain region.
5. The face orientation determining method according to claim 1, wherein the step of determining the face orientation from at least one of a front portion of the brain region and a rear portion of the brain region, at least one of an upper portion of the brain region and a lower portion of the brain region, and the first position specifically includes:
determining a first coordinate axis from at least one of a front portion of the brain region and a rear portion of the brain region and a second coordinate axis from at least one of an upper portion of the brain region and a lower portion of the brain region with the first location as an origin of coordinates;
determining a face orientation vector according to the first vector on the first coordinate axis and the second vector on the second coordinate axis; the direction of the first vector is the same as the positive direction of the first coordinate axis, and the direction of the second vector is the same as the positive direction of the second coordinate axis;
and determining the direction of the face orientation vector as the face orientation.
6. A face orientation determination method according to any one of claims 1 to 5, further comprising:
performing bone tissue segmentation treatment on the acquired medical image to be treated to obtain an image of a bone tissue region;
Performing morphological processing on the image of the bone tissue region to obtain a first image;
performing hole filling processing on the first image to obtain a second image;
obtaining an image of a bone cavity region from the second image and the first image;
and extracting the maximum connected domain in the image of the bone cavity region to obtain the image of the brain region.
7. The face orientation determining method is characterized by comprising the following steps of:
performing principal component analysis processing on the image of the brain region according to the first position, and determining a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region;
determining a first coordinate axis according to the first position and the distance between voxels positioned at two sides of a first plane and the first plane respectively aiming at the image of the brain region; determining a second coordinate axis according to the first position and the distances between the voxels positioned at two sides of the second plane and the second plane respectively; the first plane is a plane perpendicular to the first direction vector and passing through the first position, the second plane is a plane perpendicular to the second direction vector and passing through the first position, and the first coordinate axis and the second coordinate axis intersect at the first position;
And determining the face orientation according to the first coordinate axis and the second coordinate axis.
8. The method for determining a face orientation according to claim 7, wherein the step of performing principal component analysis processing on the image of the brain region according to the first position to determine the first direction vector and the second direction vector specifically includes:
performing principal component analysis processing on the image of the brain region by taking the first position as an origin to obtain three orthogonal principal component direction vectors and characteristic values corresponding to each principal component direction vector;
the principal component direction vector corresponding to the largest eigenvalue is determined as a first direction vector, and the principal component direction vector corresponding to the next largest eigenvalue is determined as a second direction vector.
9. The face orientation determining method of claim 7, wherein the step of determining the first coordinate axis according to the first position and the distances from the voxels located on both sides of the first plane to the first plane, respectively, specifically includes:
and determining a first coordinate axis by taking the first position as a coordinate origin according to the magnitude relation of the average distance from the voxels positioned on the first side of the first plane to the first plane and the average distance from the voxels positioned on the second side of the first plane to the first plane.
10. The face orientation determining method of claim 7, wherein the step of determining the second coordinate axis according to the first position and the distances from the voxels located on both sides of the second plane to the second plane, respectively, specifically includes:
and determining a second coordinate axis by taking the first position as a coordinate origin according to the magnitude relation between the average distance from the voxels positioned on the first side of the second plane to the second plane and the average distance from the voxels positioned on the second side of the second plane to the second plane.
11. The face orientation determining method according to claim 7, wherein the step of determining the face orientation according to the first coordinate axis and the second coordinate axis specifically includes:
determining a face orientation vector according to the first vector on the first coordinate axis and the second vector on the second coordinate axis; the direction of the first vector is the same as the positive direction of the first coordinate axis, and the direction of the second vector is the same as the positive direction of the second coordinate axis;
and determining the direction of the face orientation vector as the face orientation.
12. A face orientation determination method according to any one of claims 7-11, wherein the face orientation determination method further comprises:
Performing bone tissue segmentation treatment on the acquired medical image to be treated to obtain an image of a bone tissue region;
performing morphological processing on the image of the bone tissue region to obtain a first image;
performing hole filling processing on the first image to obtain a second image;
obtaining an image of a bone cavity region from the second image and the first image;
and extracting the maximum connected domain in the image of the bone cavity region to obtain the image of the brain region.
13. The face orientation determining method is characterized by comprising the following steps of:
performing principal component analysis processing on the image of the brain region according to the first position, and determining a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region;
determining distances from voxels located on two sides of a first plane to the first plane and distances from voxels located on two sides of a second plane to the second plane respectively for the image of the brain region; wherein the first plane is a plane perpendicular to the first direction vector and passing through the first position, and the second plane is a plane perpendicular to the second direction vector and passing through the first position;
And determining the face orientation based on the first position, the distance between the voxels positioned at the two sides of the first plane and the first plane respectively, and the distance between the voxels positioned at the two sides of the second plane and the second plane respectively.
14. The reconstruction method of the face image is characterized by comprising the following steps of:
determining a face orientation using the face orientation determination method of any one of claims 1-13;
performing head segmentation processing on the medical image to be processed to obtain an image of a head region; wherein the image of the brain region is obtained from the medical image to be processed;
reconstructing a face image from the image of the head region and the face orientation.
15. The method for reconstructing a face image according to claim 14, wherein the step of performing a head segmentation process on the medical image to be processed to obtain an image of a head region specifically comprises:
performing human body segmentation processing on the medical image to be processed to obtain an image of a human body region;
and taking the first position as the center, and screening the image of the human body area according to the equivalent radius of the brain area to obtain the image of the head area.
16. A method of reconstructing a face image as claimed in claim 14, wherein said step of reconstructing a face image from said image of said head region and said face orientation comprises:
performing head contour extraction processing on the image of the head region to obtain an image of the head contour region;
carrying out face contour extraction processing on the image of the head contour region according to the face orientation to obtain the image of the face contour region;
reconstructing a face image according to the image of the face contour area.
17. The method for reconstructing a face image according to claim 16, wherein said step of extracting a face contour from said image of said head contour region according to said face orientation comprises:
determining a target point along the face direction by taking the second position as a starting point; wherein the second position is used to characterize a central position of the head region;
extracting a region positioned on one side of a third plane facing the face from the image of the head outline region to obtain an image of the face outline region; the third plane is a plane perpendicular to the face direction and passing through the target point.
18. A method of reconstructing a face image as claimed in claim 17, wherein said step of determining a target point along said face orientation starting from said second position comprises:
and determining a target point along the face direction according to the equivalent radius of the brain region by taking the second position as a starting point.
19. A face orientation determining apparatus, comprising:
the direction vector determining module is used for carrying out principal component analysis processing on the image of the brain region according to the first position and determining a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region;
a brain direction determining module, configured to determine, for an image of the brain region, a front portion of the brain region and a rear portion of the brain region according to distances from voxels located on both sides of a first plane to the first plane, respectively; and determining an upper portion of the brain region and a lower portion of the brain region according to distances from voxels located on both sides of the second plane to the second plane, respectively; the first plane is a plane perpendicular to the first direction vector and passing through the first position, the second plane is a plane perpendicular to the second direction vector and passing through the first position, and the first position is an intersection point of the first direction vector and the second direction vector;
A face orientation determining module for determining a face orientation from at least one of a front portion of the brain region and a rear portion of the brain region, at least one of an upper portion of the brain region and a lower portion of the brain region, and the first location.
20. A face orientation determining apparatus, comprising:
the direction vector determining module is used for carrying out principal component analysis processing on the image of the brain region according to the first position and determining a first direction vector and a second direction vector; wherein the first position is used for representing the central position of the brain region, the first direction vector is used for indicating the front-to-back direction or the back-to-front direction of the brain region, and the second direction vector is used for indicating the top-to-bottom direction or the bottom-to-top direction of the brain region;
the coordinate axis determining module is used for determining a first coordinate axis according to the first position and the distances between voxels positioned on two sides of a first plane and the first plane aiming at the image of the brain region; determining a second coordinate axis according to the first position and the distances between the voxels positioned at two sides of the second plane and the second plane respectively; the first plane is a plane perpendicular to the first direction vector and passing through the first position, the second plane is a plane perpendicular to the second direction vector and passing through the first position, and the first coordinate axis and the second coordinate axis intersect at the first position;
a face orientation determining module, configured to determine the face orientation according to the first coordinate axis and the second coordinate axis.
21. A face orientation determining apparatus, comprising:
a direction vector determining module, configured to perform principal component analysis on the image of the brain region according to a first position to determine a first direction vector and a second direction vector; wherein the first position represents the center position of the brain region, the first direction vector indicates the front-to-back or back-to-front direction of the brain region, and the second direction vector indicates the top-to-bottom or bottom-to-top direction of the brain region;
a distance determining module, configured to determine, for the image of the brain region, distances from voxels located on both sides of a first plane to the first plane, respectively, and distances from voxels located on both sides of a second plane to the second plane, respectively; wherein the first plane is a plane perpendicular to the first direction vector and passing through the first position, and the second plane is a plane perpendicular to the second direction vector and passing through the first position; and
a face orientation determining module, configured to determine the face orientation based on the first position, the distances from the voxels located on both sides of the first plane to the first plane, and the distances from the voxels located on both sides of the second plane to the second plane.
22. A face image reconstruction apparatus, comprising:
a face orientation determining module configured to determine a face orientation using the face orientation determining method of any one of claims 1 to 13;
a head segmentation processing module, configured to perform head segmentation on a medical image to be processed to obtain an image of a head region; wherein the image of the brain region is obtained from the medical image to be processed; and
a face image reconstruction module, configured to reconstruct a face image according to the image of the head region and the face orientation.
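The claims leave the reconstruction details to the description, but one plausible use of the determined face orientation is to build an orthonormal viewing basis from the oriented front-back and top-bottom vectors, so the head volume can be resampled with the face pointing toward the viewer. The sketch below is an assumption-laden illustration: the function name and the Gram-Schmidt orthogonalization step are not taken from the source.

```python
import numpy as np

def face_view_basis(front, up):
    """Orthonormal basis whose third row is the face-orientation direction.

    The head volume can be resampled in this basis so that the
    reconstructed face looks along the new z axis.
    """
    front = np.asarray(front, dtype=float)
    up = np.asarray(up, dtype=float)
    front /= np.linalg.norm(front)
    up = up - (up @ front) * front       # Gram-Schmidt: make `up` orthogonal to `front`
    up /= np.linalg.norm(up)
    left = np.cross(up, front)           # completes the right-handed frame
    return np.stack([left, up, front])   # rows are the new x, y, z axes
```

Multiplying a head-frame direction by this matrix maps it into viewer coordinates; for example, the front vector itself maps to the new z axis.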
23. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the face orientation determining method according to any one of claims 1-13 or the face image reconstruction method according to any one of claims 14-18.
24. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the face orientation determining method according to any one of claims 1-13 or the face image reconstruction method according to any one of claims 14-18.
CN202210770738.0A 2022-06-30 2022-06-30 Face orientation determining method and device and face image reconstructing method and device Pending CN117372322A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210770738.0A CN117372322A (en) 2022-06-30 2022-06-30 Face orientation determining method and device and face image reconstructing method and device
PCT/CN2023/104458 WO2024002321A1 (en) 2022-06-30 2023-06-30 Face orientation determination method and apparatus, and face reconstruction method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210770738.0A CN117372322A (en) 2022-06-30 2022-06-30 Face orientation determining method and device and face image reconstructing method and device

Publications (1)

Publication Number Publication Date
CN117372322A (en) 2024-01-09

Family

ID=89383341

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210770738.0A Pending CN117372322A (en) 2022-06-30 2022-06-30 Face orientation determining method and device and face image reconstructing method and device

Country Status (2)

Country Link
CN (1) CN117372322A (en)
WO (1) WO2024002321A1 (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8068655B2 (en) * 2007-10-02 2011-11-29 Siemens Aktiengesellschaft Method and system for vessel enhancement and artifact reduction in TOF MR angiography of brain
CN101727531A (en) * 2008-10-16 2010-06-09 国际商业机器公司 Method and system used for interaction in virtual environment
EP2639674B1 (en) * 2012-03-12 2016-06-01 Alcatel Lucent Method for control of a video interface, face orientation detector, and video conferencing server
CN105719295B (en) * 2016-01-21 2019-07-16 浙江大学 A kind of intracranial hemorrhage region segmentation method and system based on three-dimensional super voxel
CN105930775B (en) * 2016-04-14 2019-07-19 中南大学 Facial orientation recognition methods based on sensitivity parameter
US20220092791A1 (en) * 2018-04-12 2022-03-24 Veran Medical Technologies, Inc. Methods for the Segmentation of Lungs, Lung Vasculature and Lung Lobes from CT Data and Clinical Applications
CN110555507B (en) * 2019-10-22 2021-03-23 深圳追一科技有限公司 Interaction method and device for virtual robot, electronic equipment and storage medium
CN110837300B (en) * 2019-11-12 2020-11-27 北京达佳互联信息技术有限公司 Virtual interaction method and device, electronic equipment and storage medium
US11798161B2 (en) * 2019-11-26 2023-10-24 Koh Young Technology Inc. Method and apparatus for determining mid-sagittal plane in magnetic resonance images
CN112802193B (en) * 2021-03-11 2023-02-28 重庆邮电大学 CT image three-dimensional reconstruction method based on MC-T algorithm

Also Published As

Publication number Publication date
WO2024002321A1 (en) 2024-01-04

Similar Documents

Publication Publication Date Title
US8218905B2 (en) Method, system and software product for providing efficient registration of 3D image data
Lamecker et al. Segmentation of the liver using a 3D statistical shape model
JP5384779B2 (en) Image registration system and method with cross-entropy optimization
US8861891B2 (en) Hierarchical atlas-based segmentation
US11798161B2 (en) Method and apparatus for determining mid-sagittal plane in magnetic resonance images
EP4365838A1 (en) Registration method and system
US11935246B2 (en) Systems and methods for image segmentation
Sinha et al. The deformable most-likely-point paradigm
CN111402277B (en) Object outline segmentation method and device for medical image
CN111445575A (en) Image reconstruction method and device of Wirisi ring, electronic device and storage medium
CN113888566B (en) Target contour curve determination method and device, electronic equipment and storage medium
CN113129418B (en) Target surface reconstruction method, device, equipment and medium based on three-dimensional image
CN116824209A (en) Bone window prediction method and system
CN117372322A (en) Face orientation determining method and device and face image reconstructing method and device
CN111583240B (en) Method and device for determining anterior and posterior axis of femur end and computer equipment
Lötjönen et al. Four-chamber 3-D statistical shape model from cardiac short-axis and long-axis MR images
CN110009666B (en) Method and device for establishing matching model in robot space registration
WO2021081850A1 (en) Vrds 4d medical image-based spine disease recognition method, and related devices
CN112085698A (en) Method and device for automatically analyzing left and right breast ultrasonic images
US20240221190A1 (en) Methods and systems for registration
Soza Registration and simulation for the analysis of intraoperative brain shift
CN118279583A (en) Brain tumor image segmentation method and device
CN118279584A (en) Brain tumor image segmentation method and device
JP2023109739A (en) Image registration system and image registration method
CN114270408A (en) Method for controlling a display, computer program and mixed reality display device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination