CN117456111A - Label display method and device based on near infrared brain function imaging data - Google Patents


Publication number
CN117456111A
Authority
CN
China
Prior art keywords: target mark point, mark point, annotation, label, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311786251.2A
Other languages
Chinese (zh)
Other versions
CN117456111B (en)
Inventor
邓皓
汪待发
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huichuang Keyi Beijing Technology Co ltd
Original Assignee
Huichuang Keyi Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huichuang Keyi Beijing Technology Co ltd filed Critical Huichuang Keyi Beijing Technology Co ltd
Priority to CN202311786251.2A priority Critical patent/CN117456111B/en
Publication of CN117456111A publication Critical patent/CN117456111A/en
Application granted granted Critical
Publication of CN117456111B publication Critical patent/CN117456111B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/004 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
    • A61B5/0042 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part for the brain
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/40 Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4058 Detecting, measuring or recording for evaluating the nervous system for evaluating the central nervous system
    • A61B5/4064 Evaluating the brain
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/742 Details of notification to user or communication with user or patient; user input means using visual displays
    • A61B5/743 Displaying an image simultaneously with additional graphical information, e.g. symbols, charts, function plots
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Neurology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Primary Health Care (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Epidemiology (AREA)
  • Psychology (AREA)
  • Neurosurgery (AREA)
  • Physiology (AREA)
  • General Physics & Mathematics (AREA)

Abstract

The application provides a label display method and device based on near-infrared brain-function imaging data, and a computer-readable storage medium. The label display method comprises the following steps: acquiring target mark points on a three-dimensional brain model, wherein the target mark points comprise at least one category among transmitting-probe mark points, receiving-probe mark points, and detection-channel mark points; determining a ray emitted from each target mark point in the viewing direction; determining, based on the ray, whether the target mark point is in an occlusion state or a presentation state relative to the three-dimensional brain model; hiding the annotation label matched with the target mark point when it is in the occlusion state; and displaying the annotation label matched with the target mark point when it is in the presentation state. The method reduces the number of annotation labels presented on the display interface, avoids mutual occlusion among too many annotation labels, and allows an observer to quickly read the content of each annotation label.

Description

Label display method and device based on near infrared brain function imaging data
Technical Field
The present disclosure relates to the field of display technologies for near-infrared brain-function imaging images, and in particular to a label display method and device based on near-infrared brain-function imaging data, and a computer-readable storage medium.
Background
In today's neuroscience and medical research, near-infrared brain-function imaging (fNIRS) has become one of the key tools for the non-invasive study of brain function. The technique monitors the activity state of the brain in real time by measuring changes in the optical characteristics of oxygenated and deoxygenated hemoglobin in brain tissue, providing a unique perspective on activity inside the brain. With the development of fNIRS, great progress has been made in neuroimaging, but many data-analysis challenges remain. One of them is how, after the observation position of the brain model has been adjusted, to accurately locate through its visual representation the target mark point corresponding to the three-dimensional brain model.
The current technical route is to first obtain the MNI coordinates of a channel at display time and map them onto the corresponding position on the head model; the coordinates of the mapped position on the head model are then the coordinates of the channel's annotation label. The MNI brain (Montreal Neurological Institute) is a standard brain derived from MRI scans of a large number of normal subjects, and the current MNI standard template is ICBM152 (International Consortium for Brain Mapping). ICBM152 is the average of 152 normal MRI images registered to the MNI template by a nine-parameter affine transformation.
However, in the conventional display mode the annotation label matched with a target mark point on the brain model is arranged on the outer contour of the head model. After the observation position of the three-dimensional brain model is adjusted, some target mark points become occluded while their annotation labels are still displayed, so an observer cannot accurately find the corresponding target mark point through the annotation label. Moreover, because the gap between the outer contour of a standard head model and the brain model inside it is relatively large, adjacent annotation labels displayed on the outer contour of the head model easily occlude one another, making the content of the annotation labels hard to observe and inconvenient for the observer.
Disclosure of Invention
In view of the above technical problems in the prior art, the application provides a label display method and device based on near-infrared brain-function imaging data, and a computer-readable storage medium, which can solve the problems that an observer cannot accurately find, through an annotation label, the target mark point corresponding to that label, and that annotation labels displayed on the outer contour of the head model occlude one another and are therefore difficult to observe.
In a first aspect, an embodiment of the present application provides a label display method based on near-infrared brain-function imaging data, including: acquiring target mark points on a three-dimensional brain model, wherein the target mark points comprise at least one category among transmitting-probe mark points, receiving-probe mark points, and detection-channel mark points, and each target mark point is matched with an annotation label; determining a ray emitted from the target mark point in the viewing direction; determining, based on the ray, that the target mark point is in an occlusion state or a presentation state relative to the three-dimensional brain model; hiding the annotation label matched with the target mark point when it is in the occlusion state; and displaying the annotation label matched with the target mark point when it is in the presentation state.
In a second aspect, an embodiment of the present application provides a label display device based on near-infrared brain-function imaging data, including an acquisition module and a determination module. The acquisition module is configured to acquire target mark points on the three-dimensional brain model, wherein the target mark points comprise at least one category among transmitting-probe mark points, receiving-probe mark points, and detection-channel mark points, and each target mark point is matched with an annotation label. The determination module is configured to determine a ray emitted from the target mark point in the viewing direction; determine, based on the ray, that the target mark point is in an occlusion state or a presentation state relative to the three-dimensional brain model; hide the annotation label matched with the target mark point when it is in the occlusion state; and display the annotation label matched with the target mark point when it is in the presentation state.
In a third aspect, embodiments of the present application provide a computer readable storage medium storing a computer program, where the computer program when executed by a processor implements the steps of the above-described tag display method based on near infrared brain function imaging data.
Compared with the prior art, the beneficial effects of the embodiments of the application are as follows. By acquiring the target mark points on the three-dimensional brain model and the ray emitted from each target mark point in the observation direction, it can be determined whether a target mark point is occluded by the three-dimensional brain model, and the annotation labels corresponding to occluded target mark points are hidden. This reduces the number of annotation labels presented on the display interface, and after the observation position of the three-dimensional brain model is adjusted, the labels of newly occluded mark points are hidden as well, ensuring that an observer can accurately find, through each displayed annotation label, the target mark point it corresponds to.
Drawings
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. The accompanying drawings illustrate various embodiments by way of example in general and not by way of limitation, and together with the description and claims serve to explain the disclosed embodiments. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. Such embodiments are illustrative and not intended to be exhaustive or exclusive of the present apparatus or method.
FIG. 1 is a first flowchart of a label display method based on near infrared brain function imaging data according to an embodiment of the present application;
fig. 2 is a schematic display diagram of a label display method based on near infrared brain function imaging data according to an embodiment of the present application;
FIG. 3 is a second flowchart of a label display method based on near infrared brain function imaging data according to an embodiment of the present application;
FIG. 4 is a third flowchart of a label display method based on near infrared brain function imaging data according to an embodiment of the present application;
FIG. 5 is a fourth flowchart of a label display method based on near infrared brain function imaging data according to an embodiment of the present application;
fig. 6 is a block diagram of a label display device based on near infrared brain function imaging data according to an embodiment of the present application.
Detailed Description
It should be understood that various modifications may be made to the embodiments of the invention herein. Therefore, the above description should not be taken as limiting, but merely as exemplification of the embodiments. Other modifications within the scope and spirit of this application will occur to those skilled in the art.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and, together with a general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.
These and other characteristics of the present application will become apparent from the following description of a preferred form of embodiment, given as a non-limiting example, with reference to the accompanying drawings.
It is also to be understood that, although the present application has been described with reference to some specific examples, those skilled in the art can certainly realize many other equivalent forms of the present application.
The foregoing and other aspects, features, and advantages of the present application will become more apparent in light of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present application will be described hereinafter with reference to the accompanying drawings; however, it is to be understood that the embodiments of the invention are merely examples of the application, which may be practiced in various ways. Well-known and/or repeated functions and constructions are not described in detail to avoid obscuring the application with unnecessary or excessive detail. Therefore, specific structural and functional details disclosed herein are not intended to be limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present application in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," each of which may refer to one or more of the same or different embodiments in accordance with the application.
The embodiment of the application provides a label display method based on near-infrared brain-function imaging data, which can be applied in a label display device based on such data. The label display device may be used in conjunction with a near-infrared brain-function imaging device, or may be a part of one; the application is not particularly limited in this respect.
The near-infrared brain-function imaging device may include a head cap to be worn on the head of the subject, on which a plurality of transmitting probes (S) for emitting near-infrared light and receiving probes (D) for receiving near-infrared light may be arranged. Of course, the transmitting and receiving probes may instead be mounted via head-worn devices other than the head cap; the application is not particularly limited, provided the probes are mounted stably with respect to the head.
The transmitting probes and receiving probes may be arranged in pairs, a detection channel may be formed between each pair, and the near-infrared data of a detection channel can be collected to characterize the brain-function state of the subject. One transmitting probe may correspond to several receiving probes, so that multiple detection channels are formed between that transmitting probe and the receiving probes; or, vice versa, one receiving probe may correspond to several transmitting probes, forming multiple detection channels between the receiving probe and those transmitting probes. The pairing relation between transmitting and receiving probes can be determined according to specific requirements such as the arrangement position of each probe and the brain-function area to be detected; the application is not particularly limited.
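As a toy illustration of the pairing logic just described (the data, function name, and field names below are hypothetical, not taken from the patent), each emitter–detector pairing yields exactly one detection channel, so one probe shared by several partners yields several channels:

```python
def build_channels(pairings):
    """Each (transmitting probe, receiving probe) pair defines one
    detection channel; the channel count equals the number of pairs,
    not the number of probes."""
    return [{"channel": i + 1, "emitter": s, "detector": d}
            for i, (s, d) in enumerate(pairings)]

# Transmitting probe S1 paired with two receiving probes yields two channels.
channels = build_channels([("S1", "D1"), ("S1", "D2"), ("S2", "D2")])
```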
As shown in fig. 1, the tag display method based on near infrared brain function imaging data includes steps S101 to S103.
Step S101: and obtaining target mark points on the three-dimensional brain model, wherein the target mark points comprise at least one category of a transmitting probe mark point, a receiving probe mark point and a detection channel mark point, and each target mark point is respectively matched with an annotation label.
Optionally, the target mark points on the three-dimensional brain model may be obtained by combining the near-infrared data acquired by a near-infrared brain-function imaging device with the spatial information of the three-dimensional brain model, where the near-infrared data may characterize one or more of the following physiological parameters of the brain-function state: oxyhemoglobin concentration (HbO), deoxyhemoglobin concentration (HbR), and total hemoglobin concentration (HbT).
Alternatively, the three-dimensional brain model may be a standard brain model determined from the brain models of multiple human bodies, or a non-standard brain model built for the specific subject; the application is not particularly limited in this respect. Preferably, the three-dimensional brain model is a standard brain model, to reduce modeling time and the computational demands of modeling.
Alternatively, target mark points of different categories may be displayed in the same display manner or in different display manners. Illustratively, as shown in fig. 2, each target mark point is displayed protruding from the three-dimensional brain model in a dome shape; of course, the target mark points may protrude in other shapes, such as a rectangle, a triangle, or a sphere, which is not particularly limited herein — the dome shown in fig. 2 is only an example. In addition, target mark points of different categories may be highlighted in different shapes and/or colors, for example red domes for transmitting-probe mark points and blue rectangles for detection-channel mark points.
Optionally, the annotation labels of target mark points in the same category share a common part and differ in the rest: for example, transmitting-probe mark points are marked with labels prefixed by S, receiving-probe mark points with labels prefixed by D, and detection-channel mark points with labels prefixed by CH. In connection with fig. 2, S1 may be understood as the annotation label of the transmitting-probe mark point numbered 1, D3 as that of the receiving-probe mark point numbered 3, and CH13 as that of the detection-channel mark point numbered 13. The placement of an annotation label is not specifically limited herein. Preferably, the annotation label is set near its target mark point, either directly at the mark point on the three-dimensional brain model or at a preset distance from the corresponding mark point, where the preset distance may be adjusted according to actual requirements without excessive limitation here.
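The S/D/CH prefix convention above can be captured in a small helper; the function name and category keys below are illustrative assumptions, not identifiers from the patent:

```python
def make_label(category, number):
    """Build an annotation label from a mark-point category and its number,
    following the S/D/CH prefix convention described in the text."""
    prefix = {"transmitting": "S", "receiving": "D", "channel": "CH"}[category]
    return f"{prefix}{number}"

# Reproduces the labels named in fig. 2: S1, D3, CH13.
labels = [make_label("transmitting", 1),
          make_label("receiving", 3),
          make_label("channel", 13)]
```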
Step S102: rays emitted from the target marker point in the viewing direction are determined.
Alternatively, the viewing direction may be understood as the direction from the three-dimensional brain model toward the observer, i.e., the direction opposite to the observer's line of sight when viewing the model — for example, the direction perpendicular to the display interface presenting the three-dimensional brain model and pointing from the model toward that interface.
Alternatively, the rays emitted from the target mark points in the viewing direction are in one-to-one correspondence with the target mark points, i.e. one ray corresponds to one target mark point.
Step S103: determining that the target mark point is in an occlusion state or a presentation state compared with the three-dimensional brain model based on the ray, and hiding the annotation tag matched with the target mark point under the condition that the target mark point is in the occlusion state; and displaying the annotation tag matched with the target mark point under the condition that the target mark point is in the appearance state.
Alternatively, a target mark point being in the occlusion state may be understood as the mark point being blocked by the three-dimensional brain model when the model is rotated to a first display angle; at that moment the mark point cannot be presented on the model. A target mark point being in the presentation state may be understood as the mark point not being blocked by the model when the model is rotated to a second display angle; at that moment the mark point can be presented on the model. The target mark points shown in fig. 2 are in the presentation state.
Rotating the three-dimensional brain model changes its display angle, and the state of each target mark point may change accordingly.
Alternatively, the state of a target mark point may be determined from the intersection of its ray with the three-dimensional brain model. For example, when a ray emitted in the observation direction from a target mark point on the model passes through the outer contour of the three-dimensional brain model, the target mark point can be judged to be in the occlusion state; otherwise, it is in the presentation state. When the display angle of the three-dimensional brain model changes, the states of all target mark points must be re-determined, and the annotation labels matched with the target mark points must accordingly be hidden or displayed anew.
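As a simplified illustration of this ray test — an assumption of this edit, not code from the patent — the sketch below stands in for the brain mesh with a sphere: a ray leaving a surface marker in the viewing direction re-enters the sphere exactly when the outward normal at the marker faces away from the viewer, in which case the marker is occluded. A production implementation would intersect the ray with the triangle mesh instead.

```python
def is_occluded(point, center, view_dir):
    """Sphere stand-in for the ray/mesh occlusion test.

    `point` lies on a sphere centered at `center`; the ray
    point + t * view_dir (t > 0) hits the sphere surface again iff the
    outward normal at `point` faces away from `view_dir`.
    """
    normal = [p - c for p, c in zip(point, center)]
    return sum(n * v for n, v in zip(normal, view_dir)) < 0.0

# Viewer looks along +z: the front pole is visible, the back pole occluded.
front_hidden = is_occluded((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
back_hidden = is_occluded((0.0, 0.0, -1.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```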
It should be noted that in the prior-art solution, where annotation labels are presented on the outer contour of the head model while the target mark points sit on the outer contour of the three-dimensional brain model inside it, an annotation label is far from its corresponding target mark point. When the three-dimensional brain model rotates, labels adjacent to the edge of the model are especially prone to occlusion: for example, because of the rotation angle, when two target mark points appear close together in the presented picture, their annotation labels easily block each other, the observer easily mismatches annotation labels and target mark points due to visual deviation, and misaligned display of mark points and labels is readily observed.
According to the present application, by acquiring the target mark points on the three-dimensional brain model and the rays emitted from them in the observation direction, it can be determined whether each target mark point is occluded by the three-dimensional brain model, and the annotation labels corresponding to occluded mark points are hidden. This reduces the number of annotation labels presented on the display interface; after the observation position of the three-dimensional brain model is adjusted, the labels of newly occluded mark points are hidden as well, so an observer is guaranteed to accurately find, through each displayed annotation label, its corresponding target mark point.
In some embodiments, as shown in fig. 3, the method further comprises steps S201 to S202.
Step S201: acquiring the mark point coordinates of the target mark point;
step S202: determining label coordinates of an annotation label corresponding to the target mark point based on the ray matched with the target mark point and the mark point coordinates; the tag coordinates lie on the ray.
In this way, the label coordinates on the ray can be determined from the mark-point coordinates and the corresponding ray, so that each annotation label is displayed along the ray together with its target mark point. This avoids misaligned display of target mark points and annotation labels, and an observer can quickly and accurately identify the annotation label corresponding to each target mark point.
Optionally, an MNI coordinate system (Montreal Neurological Institute coordinate system) may be built on the three-dimensional brain model, in which the mark-point coordinates of each target mark point and the label coordinates of each annotation label are MNI coordinates, i.e., each has coordinate values on the x, y, and z axes.
Optionally, the label coordinates of the annotation label lying on the ray can be understood as the annotation label being located on the ray emitted from its target mark point in the observation direction; that is, the label and the mark point always lie on the same ray along the observation direction. Thus, however the observer rotates the three-dimensional brain model, the annotation label corresponding to a target mark point always lies on the observer's line of sight toward that mark point, producing the visual effect that the label is attached to the mark point. This improves the display matching between label and mark point, makes the presentation of annotation labels more observer-friendly and easier to read, and helps the observer accurately find the matched target mark point through the annotation label.
Optionally, the tag coordinate of the annotation tag may be determined after the state of the target mark point is determined, or the state of the target mark point may be determined after the tag coordinate of the annotation tag is determined.
Preferably, the state of each target mark point is determined first, and then the label coordinates of the annotation labels of the mark points in the presentation state are determined, so that those labels can be presented quickly; the label coordinates of the labels of mark points in the occlusion state may be determined afterwards, or not determined at all. This improves display speed, and since the annotation label of an occluded target mark point is not displayed, the computation of its label coordinates can be skipped, reducing the amount of data processed.
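A minimal sketch of this two-pass preference — classify visibility first, compute coordinates only for visible points. The occlusion test is abstracted as a caller-supplied callable, and all names and the toy predicate are illustrative assumptions:

```python
def visible_label_positions(markers, view_dir, is_occluded, offset=5.0):
    """Return {marker_id: label_position} for visible markers only,
    skipping the label-coordinate computation for occluded ones."""
    norm = sum(v * v for v in view_dir) ** 0.5
    u = [v / norm for v in view_dir]
    out = {}
    for mid, pos in markers.items():
        if is_occluded(pos, view_dir):  # occluded: label hidden, no math done
            continue
        out[mid] = tuple(p + ui * offset for p, ui in zip(pos, u))
    return out

markers = {"S1": (0.0, 0.0, 1.0), "D1": (0.0, 0.0, -1.0)}
# Toy predicate: markers with z < 0 count as occluded for a viewer along +z.
labels = visible_label_positions(markers, (0.0, 0.0, 1.0),
                                 lambda p, v: p[2] < 0)
```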
In some embodiments, determining the label coordinates of the annotation label corresponding to the target mark point in step S202 specifically includes: determining, as the label coordinates of the annotation label, the point on the side of the ray extending outside the three-dimensional brain model at a preset distance from the target mark point.
In this way, each annotation label is displayed at the preset distance from its target mark point, achieving a clear display of the annotation labels.
Alternatively, the preset distance may range from 4 mm to 6 mm, preferably 5 mm. The distance unit of the MNI three-dimensional head model is mm, and in a conventional MNI head model the distance between the three-dimensional brain model and the outer contour of the head model is no less than 18 mm. With the above range, the distance between an annotation label and its matched target mark point is far smaller than that gap, so misalignment between label and mark point and overlapping display of labels are unlikely to occur. Compared with the conventional display mode, in which the annotation labels matched with target mark points on the three-dimensional brain model are arranged on the outer contour of the head model, this display method makes it easier for an observer to locate the corresponding target mark points through the annotation labels, and is convenient and quick to browse.
In some embodiments, as shown in fig. 4, determining the tag coordinates specifically includes steps S301 to S303.
Step S301: a unit vector of the target mark point in the observation direction is determined.
Step S302: and determining a first group of vectors consisting of the target mark point and the display point of the annotation tag based on the unit vector and a preset distance.
Step S303: and taking the difference value between the mark point coordinates of the target mark point and the first group of vectors as the label coordinates of the annotation label.
In this way, the label coordinates of each annotation label can be accurately calculated from the mark point coordinates of the target mark point and the first group of vectors.
Alternatively, the first group of vectors AD, formed by the target mark point (which may be denoted point A) and the display point of the annotation label (which may be denoted point D), may be calculated using the following formula:
AD = u × d;
wherein u is a unit vector of the target mark point A in the observation direction; d is a preset distance.
Alternatively, the tag coordinates of the annotation tag, i.e., the D-point coordinates, may be calculated using the following formula:
D= A + AD;
wherein A is the mark point coordinate of the target mark point, namely the A point coordinate.
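The computation of steps S301 to S303 can be sketched as follows, assuming that the unit vector points from the mark point toward the observer so that the offset lands on the side of the ray outside the model; the function and parameter names here are illustrative, not from the original text:

```python
import numpy as np

def label_coordinates(mark_point, camera_position, preset_distance=5.0):
    """Offset an annotation label from its target mark point toward the
    observer by a preset distance (5 mm default, per the preferred range).

    `mark_point` and `camera_position` are 3D coordinates in model units
    (mm for an MNI head model); both names are assumptions for this sketch.
    """
    a = np.asarray(mark_point, dtype=float)
    cam = np.asarray(camera_position, dtype=float)
    # Step S301: unit vector of the mark point in the observation direction,
    # taken here as pointing from the mark point toward the observer.
    u = (cam - a) / np.linalg.norm(cam - a)
    # Step S302: first group of vectors AD = u * d.
    ad = u * preset_distance
    # Step S303: label point D, offset from mark point A along the ray.
    return a + ad
```

With the opposite sign convention (the unit vector pointing from the observer toward the model), the offset would be subtracted instead; either way the label lands on the observer's side of the mark point.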
Optionally, after the label coordinates of the annotation label are determined, they may be written into the handle of the corresponding annotation label, so that the presentation position of the annotation label on the display interface can be changed by modifying that handle. Note that the display content of the annotation label may also be adjusted by modifying other settings in the handle, for example the size and color of the annotation label's display font.
In some embodiments, determining, based on the ray, that the target mark point in step S103 is in an occlusion state or a display state compared with the three-dimensional brain model specifically includes:
determining that the target mark point is in the occlusion state when the ray corresponding to the target mark point passes through the three-dimensional brain model; and
determining that the target mark point is in the display state when the ray corresponding to the target mark point does not pass through the three-dimensional brain model.
In this way, the state of the target mark point can be accurately determined from whether the ray passes through the three-dimensional brain model. This determination is simple, which helps speed up data processing.
Alternatively, the three-dimensional brain model may have a three-dimensional contour line, and whether the ray passes through the three-dimensional brain model may be determined from whether the ray intersects that contour line.
Specifically, if the ray intersects the three-dimensional contour line, it can be determined that the ray passes through the three-dimensional brain model, and the target mark point is in the occlusion state; if the ray does not intersect the three-dimensional contour line, it can be determined that the ray does not pass through the three-dimensional brain model, and the target mark point is in the display state.
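A minimal sketch of this occlusion test, using a sphere as a stand-in for the brain surface (a real implementation would intersect the ray with the brain surface mesh; all names and the sphere assumption are illustrative):

```python
import numpy as np

def marker_state(mark_point, camera_position, center, radius, eps=1e-6):
    """Classify a target mark point as occluded or displayed by testing
    whether the ray from the point toward the observer passes through the
    model, here approximated by a sphere of the given center and radius."""
    p = np.asarray(mark_point, dtype=float)
    cam = np.asarray(camera_position, dtype=float)
    c = np.asarray(center, dtype=float)
    d = (cam - p) / np.linalg.norm(cam - p)   # ray direction toward observer
    # Standard ray-sphere intersection: solve |p + t*d - c|^2 = r^2.
    oc = p - c
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    if disc >= 0.0:
        root = np.sqrt(disc)
        # Any intersection strictly ahead of the mark point means the ray
        # passes through the model on its way to the observer.
        if (-b - root) > eps or (-b + root) > eps:
            return "occluded"
    return "displayed"
```

For a point on the near side of the sphere the only intersection is at the point itself (t = 0), so it is classified as displayed; a point on the far side yields a crossing ahead of it and is classified as occluded.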
Optionally, when the display angle of the three-dimensional brain model changes, whether the ray corresponding to each target mark point passes through the three-dimensional brain model is determined again, so that the state of each target mark point is updated.
In some embodiments, displaying the annotation label matched with the target mark point when the target mark point is in the display state in step S103 specifically includes: presenting the annotation label in correspondence with the target mark point in a two-dimensional display mode.
In this way, the annotation label always faces the observer in the two-dimensional display mode, making its content easy to read. In a three-dimensional display mode, the content of an annotation label deforms as the model rotates and its state is hard to make out clearly, so the two-dimensional display mode is easier for the observer to view.
Alternatively, the two-dimensional display mode may be understood as the annotation label being displayed in a plane: it does not deform with rotation of the three-dimensional brain model, but is always presented in a fixed display state. As shown in fig. 2, the annotation labels there are presented in the two-dimensional display mode.
Optionally, the annotation tag is displayed corresponding to the target mark point, so as to quickly determine the annotation tag and the target mark point corresponding to the annotation tag. Illustratively, as shown in fig. 2, "D5" shown in fig. 2 is an annotation tag, and the circular pattern in which "D5" is located is a target mark point corresponding to the annotation tag.
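One common way to realize such a two-dimensional display mode is to project only the label's 3D anchor point into screen space and then draw the text flat at that pixel position, so the text never rotates or deforms with the model. A minimal projection sketch follows; the 4x4 `view` and `proj` matrices and all names are assumptions of this sketch, not details from the original:

```python
import numpy as np

def project_anchor(point3d, view, proj, width, height):
    """Project a 3D label anchor to 2D pixel coordinates; the label text is
    then drawn in screen space, so it stays flat regardless of rotation."""
    p = np.array([*point3d, 1.0], dtype=float)   # homogeneous coordinates
    clip = proj @ (view @ p)
    ndc = clip[:3] / clip[3]                     # perspective divide
    x = (ndc[0] * 0.5 + 0.5) * width             # NDC [-1, 1] -> pixels
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height    # flip y for screen coords
    return x, y
```

Redrawing the text at the projected pixel position every frame keeps the label facing the observer while the model rotates underneath it.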
In some embodiments, as shown in fig. 5, the method further comprises steps S401 to S402.
Step S401: and receiving an adjustment operation for adjusting the display angle of the three-dimensional brain model.
Step S402: updating at least one target marker point and marker point coordinates on the three-dimensional brain model based on the adjustment operation to redetermine a presentation mode of the annotation tag corresponding to the target marker point.
Updating at least one target mark point and its mark point coordinates on the three-dimensional brain model based on the adjustment operation may specifically include updating the hidden or displayed state of each target mark point, and updating the mark point coordinates of each target mark point to change its presentation position. The presentation mode of an annotation label on the three-dimensional brain model may include its presentation position and its hidden or displayed state. In this way, the target mark points and annotation labels on the three-dimensional brain model are updated in real time in response to the observer adjusting the display angle, so the content on the display interface can follow the observer's needs at different display angles of the three-dimensional brain model.
Alternatively, the display angle may be selected from a plurality of preset angles, or may be an arbitrary angle. When the display angle is selected from preset angles, each preset angle provides a suitable viewing angle, so that the observer can observe the target mark points and annotation labels corresponding to each brain region.
Alternatively, the three-dimensional brain model and the head model may be simultaneously presented on the display interface, and when the observer adjusts the display angle of the head model, the display angle of the three-dimensional brain model may be synchronously adjusted to generate the adjustment operation described above.
Alternatively, the presentation mode of the annotation label corresponding to a target mark point may follow the state of that target mark point. Specifically, the presentation modes of the annotation label may include a hidden mode and a shown mode: when the target mark point is in the occlusion state, the annotation label is in the hidden mode and is not displayed; when the target mark point is in the display state, the annotation label is in the shown mode and is displayed.
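Steps S401 to S402 amount to re-running the ray test after each angle change and setting every label's presentation mode from its marker's state. A sketch under assumed data structures (the `markers` layout and the `is_occluded` callback are illustrative, not from the original):

```python
def refresh_labels(markers, is_occluded):
    """After a display-angle adjustment, refresh each annotation label's
    presentation mode: occluded mark point -> hidden mode, displayed mark
    point -> shown mode.

    `markers` is a list of dicts with 'position' and 'label' keys;
    `is_occluded` performs the ray test for the new viewing direction.
    """
    for m in markers:
        occluded = is_occluded(m["position"])
        m["label"]["mode"] = "hidden" if occluded else "shown"
    return markers
```

In a real viewer, `is_occluded` would cast the ray from the mark point toward the new camera position and test it against the brain surface, as described above.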
In some embodiments, the method further comprises: in the case that the target mark points have a plurality of categories, the target mark points of different categories are determined to be presented in different display modes.
In this way, the observer can quickly determine the category of a target mark point from its display mode, which improves the convenience and accuracy of observing the target mark points.
Alternatively, the different display modes may be distinguished by one or more of the following: color, shape, and size.
Optionally, the target mark points of the same category are all presented in the same display mode. For example, the transmitting probe mark points are all presented in pink, the receiving probe mark points are all presented in blue, and the detection channel mark points are all presented in gray.
Optionally, the display mode of the annotation tag is associated with the display mode of the category to which the target mark point belongs, so as to facilitate distinguishing the annotation tags respectively corresponding to the target mark points of different categories. For example, annotation tags corresponding to transmitting probe mark points are all presented in black, annotation tags corresponding to receiving probe mark points are all presented in yellow, and annotation tags corresponding to detection channel mark points are all presented in green.
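The per-category styling above can be kept in a simple lookup table that links each marker color to its associated label color, using the example colors from this section (the table structure and key names are illustrative):

```python
# Marker and linked annotation-label colors per category, matching the
# examples in the text above.
MARKER_STYLES = {
    "transmitting_probe": {"marker": "pink", "label": "black"},
    "receiving_probe":    {"marker": "blue", "label": "yellow"},
    "detection_channel":  {"marker": "gray", "label": "green"},
}

def style_for(category):
    """Return the display style for a marker category; unknown categories
    raise KeyError so mislabeled markers are caught early."""
    return MARKER_STYLES[category]
```

Keeping marker color and label color in one entry per category guarantees the association described above: every label's appearance follows the category of its target mark point.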
In some embodiments, there are a plurality of the target mark points; and/or the target mark points include a plurality of transmitting probe mark points, a plurality of receiving probe mark points, and a plurality of detection channel mark points.
In this way, the target marker points may be more fully presented on the three-dimensional brain model for easy observation by a viewer.
Alternatively, when there are a plurality of target mark points, they may all belong to the same category or to different categories; this application does not specifically limit this.
The embodiment of the application also provides a label display device 100 based on near infrared brain function imaging data. As shown in fig. 6, the tag display device 100 based on near infrared brain function imaging data includes an acquisition module 101 and a determination module 102. The acquisition module 101 is configured to acquire target marker points on the three-dimensional brain model, the target marker points including at least one category of a transmitting probe marker point, a receiving probe marker point and a detection channel marker point, and each of the target marker points is respectively matched with an annotation tag. The determination module 102 is configured to determine rays emitted from the target marker point towards the viewing direction; and determining that the target mark point is in an occlusion state or a presentation state compared with the three-dimensional brain model based on the ray, and hiding the annotation tag matched with the target mark point under the condition that the target mark point is in the occlusion state; and displaying the annotation tag matched with the target mark point under the condition that the target mark point is in the appearance state.
According to the method and the device, by acquiring the target mark points on the three-dimensional brain model and the rays emitted from the target mark points toward the observation direction, whether each target mark point is occluded on the three-dimensional brain model can be determined, and the annotation labels corresponding to occluded target mark points can be hidden. This reduces the number of annotation labels presented on the display interface; after the observation position of the three-dimensional brain model is adjusted, the annotation labels corresponding to newly occluded target mark points are likewise hidden, so the observer can reliably find the target mark point corresponding to each displayed annotation label.
In some embodiments, the acquisition module 101 is further configured to acquire the marker point coordinates of the target marker point. The determining module 102 is further configured to determine tag coordinates of an annotation tag corresponding to the target marker point based on the ray and marker point coordinates that match the target marker point; the tag coordinates lie on the ray.
In some embodiments, the determining module 102 is further configured to determine, as the label coordinates of the annotation label, coordinates located on a side of the ray that extends outside the three-dimensional brain model and spaced a preset distance from the target marker point.
In some embodiments, the determining module 102 is further configured to determine a unit vector of the target marker point into the viewing direction; determining a first group of vectors consisting of the target mark point and the display point of the annotation tag based on the unit vector and a preset distance; and taking the difference value between the mark point coordinates of the target mark point and the first group of vectors as the label coordinates of the annotation label.
In some embodiments, the determining module 102 is further configured to determine that the target marker point is in an occluded state if a ray corresponding to the target marker point passes through the three-dimensional brain model; and determining that the target mark point is in a display state under the condition that the ray corresponding to the target mark point does not pass through the three-dimensional brain model.
In some embodiments, the determination module 102 is further configured to present the annotation tag in association with the target marker point in a two-dimensional display.
In some embodiments, the tag display device 100 based on near infrared brain function imaging data further includes an adjustment module configured to receive an adjustment operation to adjust a display angle of the three-dimensional brain model; updating at least one target marker point and marker point coordinates on the three-dimensional brain model based on the adjustment operation to redetermine a presentation mode of the annotation tag corresponding to the target marker point.
In some embodiments, the determining module 102 is further configured to determine that the target marker points of different categories are presented in different display manners if the target marker points have multiple categories.
In some embodiments, there are a plurality of the target mark points; and/or the target mark points include a plurality of transmitting probe mark points, a plurality of receiving probe mark points, and a plurality of detection channel mark points.
The embodiment of the application also provides a computer readable storage medium, which stores a computer program, wherein the computer program realizes the steps of the label display method based on the near infrared brain function imaging data when being executed by a processor.
Note that the various units in the embodiments of the present application may be implemented as computer-executable instructions stored in a memory which, when executed by a processor, implement the corresponding steps; they may also be implemented as hardware having the corresponding logic computing capability, or as a combination of software and hardware (firmware). In some embodiments, the processor may be implemented as an FPGA, an ASIC, a DSP chip, an SOC (system on a chip), an MPU (for example, without limitation, a Cortex MPU), or the like. The processor may be communicatively coupled to the memory and configured to execute the computer-executable instructions stored therein. The memory may include read-only memory (ROM), flash memory, random access memory (RAM), dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM, or static memory (e.g., flash memory, static random access memory), on which computer-executable instructions are stored in any format. The computer-executable instructions may be accessed by the processor, read from the ROM or any other suitable memory location, and loaded into the RAM for execution by the processor, so as to implement the methods according to various embodiments of the present application.
It should be noted that, among the components of the system of the present application, the components thereof are logically divided according to functions to be implemented, but the present application is not limited thereto, and the components may be re-divided or combined as needed, for example, some components may be combined into a single component, or some components may be further decomposed into more sub-components.
Various component embodiments of the present application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in a system according to embodiments of the present application may in practice be implemented using a microprocessor or a digital signal processor (DSP). The present application may also be embodied as an apparatus or device program (e.g., a computer program or computer program product) for performing a part or all of the methods described herein. Such a program embodying the present application may be stored on a computer-readable medium, or may take the form of one or more signals; such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form. In addition, the application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
Furthermore, although exemplary embodiments have been described herein, the scope thereof includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of the various embodiments across), adaptations or alterations as pertains to the present application. Elements in the claims are to be construed broadly based on the language employed in the claims and are not limited to examples described in the present specification or during the practice of the present application, which examples are to be construed as non-exclusive.
The above description is intended to be illustrative, not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other, and other embodiments will occur to those of ordinary skill in the art upon reading the above description. In addition, in the detailed description above, various features may be grouped together to streamline the application; this should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with one another in various combinations or permutations. The scope of the application should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are only exemplary embodiments of the present application and are not intended to limit the present application, the scope of which is defined by the claims. Various modifications and equivalent arrangements may be made to the present application by those skilled in the art, which modifications and equivalents are also considered to be within the scope of the present application.

Claims (11)

1. A label display method based on near infrared brain function imaging data, comprising:
acquiring target mark points on a three-dimensional brain model, wherein the target mark points comprise at least one category of a transmitting probe mark point, a receiving probe mark point and a detection channel mark point, and each target mark point is respectively matched with an annotation label;
determining rays emitted from the target marker point in a viewing direction;
determining that the target mark point is in an occlusion state or a presentation state compared with the three-dimensional brain model based on the ray, and hiding the annotation tag matched with the target mark point under the condition that the target mark point is in the occlusion state; and displaying the annotation tag matched with the target mark point under the condition that the target mark point is in the appearance state.
2. The method of claim 1, further comprising:
acquiring the mark point coordinates of the target mark point;
determining label coordinates of an annotation label corresponding to the target mark point based on the ray matched with the target mark point and the mark point coordinates; the tag coordinates lie on the ray.
3. The method for displaying a label based on near infrared brain function imaging data according to claim 2, wherein determining label coordinates of an annotation label corresponding to the target mark point specifically comprises:
and determining the coordinate which is positioned on one side of the ray extending to the outside of the three-dimensional brain model and has a preset distance with the target mark point as the label coordinate of the annotation label.
4. A tag display method based on near infrared brain function imaging data according to claim 3, wherein determining the tag coordinates specifically includes:
determining a unit vector of the target mark point in the observation direction;
determining a first group of vectors consisting of the target mark point and the display point of the annotation tag based on the unit vector and a preset distance;
and taking the difference value between the mark point coordinates of the target mark point and the first group of vectors as the label coordinates of the annotation label.
5. The method for displaying a label based on near infrared brain function imaging data according to any one of claims 1 to 4, wherein said determining that said target marker point is in an occlusion state or a visualization state compared to said three-dimensional brain model based on said rays, specifically comprises:
determining that the target mark point is in a shielding state under the condition that a ray corresponding to the target mark point passes through the three-dimensional brain model;
and determining that the target mark point is in a display state under the condition that the ray corresponding to the target mark point does not pass through the three-dimensional brain model.
6. The method for displaying a label based on near infrared brain function imaging data according to any one of claims 1 to 4, wherein displaying the annotation tag matched with the target mark point in the case that the target mark point is in the appearance state specifically comprises:
the annotation tag is presented in association with the target mark point in a two-dimensional display.
7. The method of claim 2, further comprising:
receiving an adjustment operation for adjusting the display angle of the three-dimensional brain model;
updating at least one target marker point and marker point coordinates on the three-dimensional brain model based on the adjustment operation to redetermine a presentation mode of the annotation tag corresponding to the target marker point.
8. The method of displaying a label based on near infrared brain function imaging data according to any one of claims 1 to 4, further comprising:
in the case that the target mark points have a plurality of categories, the target mark points of different categories are determined to be presented in different display modes.
9. The method for displaying a label based on near infrared brain function imaging data according to any one of claims 1 to 4, wherein there are a plurality of the target mark points; and/or the target mark points comprise a plurality of the transmitting probe mark points, a plurality of the receiving probe mark points and a plurality of the detection channel mark points.
10. A tag display device based on near infrared brain function imaging data, comprising:
the system comprises an acquisition module, a detection module and a detection module, wherein the acquisition module is configured to acquire target mark points on a three-dimensional brain model, the target mark points comprise at least one category of a transmitting probe mark point, a receiving probe mark point and a detection channel mark point, and each target mark point is respectively matched with an annotation label;
a determination module configured to determine rays emitted from the target marker point toward a viewing direction; and,
determining that the target mark point is in an occlusion state or a presentation state compared with the three-dimensional brain model based on the ray, and hiding the annotation tag matched with the target mark point under the condition that the target mark point is in the occlusion state; and displaying the annotation tag matched with the target mark point under the condition that the target mark point is in the appearance state.
11. A computer-readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the steps of the near infrared brain function imaging data-based tag display method according to any one of claims 1 to 9.
CN202311786251.2A 2023-12-25 2023-12-25 Label display method and device based on near infrared brain function imaging data Active CN117456111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311786251.2A CN117456111B (en) 2023-12-25 2023-12-25 Label display method and device based on near infrared brain function imaging data

Publications (2)

Publication Number Publication Date
CN117456111A true CN117456111A (en) 2024-01-26
CN117456111B CN117456111B (en) 2024-04-05

Family

ID=89585987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311786251.2A Active CN117456111B (en) 2023-12-25 2023-12-25 Label display method and device based on near infrared brain function imaging data

Country Status (1)

Country Link
CN (1) CN117456111B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117726674A (en) * 2024-02-07 2024-03-19 慧创科仪(北京)科技有限公司 Positioning method of near-infrared brain function imaging device based on personalized brain model
CN117726674B (en) * 2024-02-07 2024-05-14 慧创科仪(北京)科技有限公司 Positioning method of near-infrared brain function imaging device based on personalized brain model

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6031565A (en) * 1993-06-18 2000-02-29 Gte Internetworking Incorporated Stereo radiography
US20130321397A1 (en) * 2012-06-05 2013-12-05 Billy P. Chen Methods and Apparatus for Rendering Labels Based on Occlusion Testing for Label Visibility
US20190188909A1 (en) * 2017-12-20 2019-06-20 Biosense Webster (Israel) Ltd. Marking a computerized model of a cardiac surface
JP2019170412A (en) * 2018-03-26 2019-10-10 コニカミノルタメディカルソリューションズ株式会社 Surgery simulation apparatus and surgery simulation program
CN111009033A (en) * 2019-12-06 2020-04-14 北京国润健康医学投资有限公司 OpenGL-based focal region visualization method and system
US20210271931A1 (en) * 2020-02-27 2021-09-02 GE Precision Healthcare LLC Systems and methods for detecting laterality of a medical image
CN114246557A (en) * 2022-03-01 2022-03-29 慧创科仪(北京)科技有限公司 Positioning method, device and storage medium for near-infrared brain function imaging device
CN114569076A (en) * 2022-03-01 2022-06-03 丹阳慧创医疗设备有限公司 Positioning method, device and storage medium for near-infrared brain function imaging device
CN114693893A (en) * 2022-03-28 2022-07-01 北京城市网邻信息技术有限公司 Data processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN117456111B (en) 2024-04-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant