CN113842227B - Medical auxiliary three-dimensional model positioning and matching method, system, equipment and medium - Google Patents

Medical auxiliary three-dimensional model positioning and matching method, system, equipment and medium

Info

Publication number
CN113842227B
CN113842227B (application CN202111033540.6A)
Authority: CN (China)
Prior art keywords: dimensional model, target, medical auxiliary, real, image
Prior art date
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202111033540.6A
Other languages: Chinese (zh)
Other versions: CN113842227A
Inventor
盘细平
黄秋
王元昊
郭铭浩
汪一
翟金宝
苑之仪
宁宇
蔡勇亮
Current Assignee: Dongfang Yukang (Tai'an) Health Management Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Shanghai Laiqiu Medical Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Laiqiu Medical Technology Co., Ltd.
Priority to CN202111033540.6A
Publication of CN113842227A
Application granted
Publication of CN113842227B
Legal status: Active (current)
Anticipated expiration


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36: Image-producing devices or illumination devices not otherwise provided for
    • A61B90/361: Image-producing devices, e.g. surgical cameras
    • A61B90/37: Surgical systems with images on a monitor during operation
    • A61B2090/364: Correlation of different images or relation of image positions in respect to the body
    • A61B2090/367: Correlation of different images or relation of image positions in respect to the body creating a 3D dataset from 2D images using position information
    • A61B2090/374: NMR or MRI
    • A61B2090/376: Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
    • A61B2090/3762: Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy using computed tomography systems [CT]

Landscapes

  • Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Gynecology & Obstetrics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention provides a medical auxiliary three-dimensional model positioning and matching method, system, equipment, and medium. The patient's medical auxiliary three-dimensional model is fused with the real target object, providing doctors with intuitive three-dimensional information and accurate guidance during surgery.

Description

Medical auxiliary three-dimensional model positioning and matching method, system, equipment and medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, a system, an apparatus, and a medium for positioning and matching a medical auxiliary three-dimensional model.
Background
Virtual reality (VR) is a three-dimensional virtual world generated by computer technology that simulates the user's senses of sight, hearing, touch, and so on; the user interacts with the virtual world through dedicated input/output devices.
Augmented reality (AR) is a technology that computes the position and angle of the camera image in real time and adds the corresponding virtual imagery; the virtual world and the real world are superimposed in the lens display through holographic projection, and the operator can interact with them through the device.
Mixed reality (MR) combines the real world and the virtual world to create a new environment for visualizing the three-dimensional world, in which physical entities and digital objects coexist and interact in real time; it is a further development of virtual reality technology.
At present, mixed reality technology is widely used in many fields, including medicine. However, precisely matching a virtual model with a physical object has long been a difficult problem. In the related art, the virtual model is usually moved manually to complete the matching with the physical model; this process is complicated, impractical, and time-consuming.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention aims to provide a medical auxiliary three-dimensional model positioning and matching method, system, equipment, and medium, to solve the technical problem that accurate matching of a virtual model and a physical object depends on manual processing, which makes the workflow complicated, impractical, and time-consuming.
In view of the above problems, the present invention provides a medical auxiliary three-dimensional model positioning and matching method, which includes:
acquiring a medical auxiliary three-dimensional model and virtual position information of a marker object in the medical auxiliary three-dimensional model, wherein the medical auxiliary three-dimensional model is generated from a medical image of a target object, and the medical image includes the marker object;
acquiring a target real image of the target object, and determining real position information according to the marker position of the marker object in the target real image;
generating a spatial transformation relationship according to the virtual position information and the real position information, wherein the spatial transformation relationship includes a mapping relationship between the virtual position and the real position;
and positioning and matching the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship.
Optionally, positioning and matching the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship includes any one of the following:
determining a marker real image of the marker object from the target real image and the real size of the marker object, determining a marker virtual image of the marker object from the medical auxiliary three-dimensional model and the virtual size of the marker object, determining a size adjustment ratio from the real size and the virtual size, and adjusting the medical auxiliary three-dimensional model to achieve fused display of the medical auxiliary three-dimensional model and the target object;
the marker object includes at least three different marker sub-objects; determining the real distance between any two marker sub-objects from the target real image, determining the virtual distance between any two marker sub-objects from the medical auxiliary three-dimensional model, determining a distance adjustment ratio from the real distances and the virtual distances, and adjusting the medical auxiliary three-dimensional model to achieve fused display of the medical auxiliary three-dimensional model and the target object.
Optionally, if the target object has moved in position, the method further includes:
acquiring a current target image and a historical target image of the target object, wherein the historical target image is an image captured at a moment before the current target image;
determining current position information and historical position information of the target object according to the current target image and the historical target image;
acquiring the position similarity between the current position information and the historical position information, and determining position change information if the position similarity is larger than a preset similarity, wherein the position change information is determined according to the current position information and the real position information;
and adjusting the medical auxiliary three-dimensional model according to the position change information, so that it is positioned, matched, and displayed at the corresponding position of the target object after the movement.
Optionally, the method further comprises:
acquiring a real scene image, and inputting the real scene image into a preset target detection model to obtain a target detection result;
and if the target detection result comprises a preset indication object, the medical auxiliary three-dimensional model displayed at the corresponding position of the target object is displayed in a transparent mode.
Optionally, the method further comprises:
if the target detection result includes a preset ignored object, preferentially displaying the target sub-model in the target area of the medical auxiliary three-dimensional model displayed at the corresponding position of the target object;
the target area is the area of the medical auxiliary three-dimensional model corresponding to the position of the preset ignored object;
the preferential display includes moving the model containing the target sub-model above the preset ignored object.
Optionally, before positioning and matching the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship, the method further includes:
acquiring the current object position and the sight direction of the observed object, and if the current object position is within a preset position range and the sight direction is within a preset sight direction, displaying the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship.
Optionally, if the observed object moves in position, the method further includes:
acquiring the moving object position of the observed object, wherein the moving object position is the position of the observed object after it has moved;
and determining position change data according to the current object position and the moving object position, and adjusting the medical auxiliary three-dimensional model so that it remains positioned, matched, and displayed at the corresponding position of the target object after the movement.
The invention also provides a medical auxiliary three-dimensional model positioning and matching system, which comprises:
the virtual position information acquisition module, configured to acquire a medical auxiliary three-dimensional model and virtual position information of a marker object in the medical auxiliary three-dimensional model, wherein the medical auxiliary three-dimensional model is generated from a medical image of the target object, and the medical image includes the marker object;
the real position information acquisition module is used for acquiring a target real image of the target object and determining the real position information according to the mark position of the mark object in the target real image;
the generation module, configured to generate a spatial transformation relationship according to the virtual position information and the real position information, wherein the spatial transformation relationship includes a mapping relationship between the virtual position and the real position;
and the positioning matching module is used for positioning and matching the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relation.
The invention also provides an electronic device, which comprises a processor, a memory and a communication bus;
the communication bus is used for connecting the processor and the memory;
the processor is configured to execute a computer program stored in the memory to implement the method according to any one of the embodiments described above.
The invention also provides a computer readable storage medium having stored thereon a computer program for causing the computer to perform the method according to any of the embodiments described above.
As described above, the medical auxiliary three-dimensional model positioning and matching method, system, equipment, and medium provided by the invention have the following beneficial effects:
a medical auxiliary three-dimensional model and the virtual position information of a marker object in the model are acquired; a target real image of the target object is acquired, and real position information is determined from the marker position of the marker object in the target real image; a spatial transformation relationship is generated from the virtual position information and the real position information; and the medical auxiliary three-dimensional model is positioned and matched at the corresponding position of the target object according to the spatial transformation relationship. This medical auxiliary three-dimensional model positioning and matching method improves the matching of the virtual model (the medical auxiliary three-dimensional model) with the physical object (the target object), enabling intelligent application across different types of scenarios. The patient's image reconstruction data (the medical auxiliary three-dimensional model) is fused with the physical affected part (the target object), providing doctors with intuitive three-dimensional information and accurate guidance during surgery.
Drawings
Fig. 1 is a schematic flow chart of a medical auxiliary three-dimensional model positioning and matching method according to an embodiment of the invention.
Fig. 2 is a schematic flow chart of a medical auxiliary three-dimensional model positioning and matching method according to a specific embodiment of the invention.
Fig. 3 is a schematic structural diagram of a medical auxiliary three-dimensional model positioning and matching system according to a second embodiment of the invention.
Fig. 4 is a schematic structural diagram of a terminal according to an embodiment.
Detailed Description
The following describes embodiments of the invention through specific examples, from which those skilled in the art can readily understand other advantages and effects of the invention. The invention may also be practiced or applied through other, different embodiments, and the details in this specification may be modified or changed in various ways without departing from the spirit and scope of the invention. It should be noted that, in the absence of conflict, the following embodiments and the features within them may be combined with one another.
It should also be noted that the drawings provided with the following embodiments merely illustrate the basic concept of the invention schematically; they show only the components relevant to the invention rather than the number, shape, and size of components in an actual implementation. In practice, the form, quantity, and proportion of the components may change arbitrarily, and the component layout may be more complex.
Example 1
Referring to fig. 1, the medical auxiliary three-dimensional model positioning and matching method provided by the embodiment of the invention includes:
s101: and obtaining the virtual position information of the marking object in the medical auxiliary three-dimensional model.
Wherein the medical assistance three-dimensional model is generated from a medical image of the target object, the medical image comprising the marker object.
The medical auxiliary three-dimensional model can be generated as follows: several marker objects are placed on the target object in advance, and a two-dimensional image of the target object including the marker objects is then acquired. The two-dimensional image may be acquired by medical imaging equipment such as a CT (computed tomography) device, an MR (magnetic resonance) device, or a PET-CT (positron emission tomography-computed tomography) device, and includes but is not limited to CT images, MR images, and PET-CT images. Conversion of the two-dimensional image into the medical auxiliary three-dimensional model may be performed in any manner known to those skilled in the art and is not limited here. Because the marker object can be recognized by the medical imaging equipment, it appears in the two-dimensional image, and the resulting medical auxiliary three-dimensional model therefore also includes the marker object.
Optionally, the target object includes, but is not limited to, a person, a part or tissue of a person's body (e.g., a leg or a hand), an item, and so on. The marker object is placed on the surface of, or inside, the target object. If there is only one marker object, it is a three-dimensional object whose surfaces differ in shape or size, so that three marker points not on the same line can be determined on it and the object can be located in three-dimensional space. If there are three or more marker objects, at least three of them must not lie on a straight line. Optionally, if there are multiple marker objects, their colors may be the same or different.
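To make the non-collinearity condition concrete, the following minimal sketch (an illustration added to this description, not code from the patent; the point values are hypothetical) tests three candidate marker points with a cross product:

```python
import numpy as np

def are_non_collinear(p1, p2, p3, tol=1e-8):
    """Return True if three 3-D marker points do not lie on one line.

    The cross product of the two edge vectors is (near) zero exactly
    when the points are collinear.
    """
    v1 = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    v2 = np.asarray(p3, dtype=float) - np.asarray(p1, dtype=float)
    return np.linalg.norm(np.cross(v1, v2)) > tol

# Hypothetical marker points picked on a marker object
print(are_non_collinear([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # True
print(are_non_collinear([0, 0, 0], [1, 0, 0], [2, 0, 0]))  # False: collinear
```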
When the marker object is on the surface of the target object, it may be a user-specific texture feature such as a mole, an attached label, or a painted mark. Specifically, taking the case where the target object is a body part or organ tissue of a patient (such as the head, a hand, or a leg), special textures on the body surface can serve as markers because, on the one hand, such feature information lies on the body surface, does not easily shift position, and does not easily fall off; on the other hand, it remains easy to identify, and indeed conspicuous, against the body surface even after photographing or imaging.
The medical auxiliary three-dimensional model includes, but is not limited to, bones, blood vessels, internal organs, specific structures of internal organs, and the like. For example, if the target object is a patient's leg, three non-collinear patches may be attached to the leg, a two-dimensional image obtained by CT or similar imaging, and a three-dimensional model of the blood vessels, nerves, and bones in the leg then built from that image; the model includes an image of the marker object. To facilitate subsequent observation, the marker object may be displayed transparently or hidden in the medical auxiliary three-dimensional model.
Optionally, the virtual position information may be obtained by creating a virtual coordinate system in the virtual space where the medical auxiliary three-dimensional model is located, and then reading off the marker object's coordinates in that virtual coordinate system.
Optionally, the display of the medical auxiliary three-dimensional model may be realized by an auxiliary observation device (e.g., an MR or AR device). Because the model's formats include, but are not limited to, STL, 3DS, 3DP, 3MF, OBJ, and PLY, a format converter for the medical auxiliary three-dimensional model is used to preprocess the holographic projection. The preprocessing mainly performs format conversion on the formed holographic projection, ensuring that it at least meets the hardware support requirements of the auxiliary observation device (e.g., MR and AR devices) so that the holographic projection displays normally in the real-virtual environment the device creates. In addition, format conversion ensures that the holographic projection and the medical auxiliary three-dimensional model formed from the target object at least share the same format, which greatly improves the accuracy of fusing the holographic projection with the target object (physical model).
Optionally, the virtual position information includes position information of the marker object under a virtual coordinate system where the medical auxiliary three-dimensional model is located, and the virtual position information corresponding to each marker object includes marker object identification information so as to be corresponding to the marker object in the real scene subsequently.
S102: and acquiring a target real image of the target object, and determining real position information according to the mark position of the mark object in the target real image.
Optionally, the target real image may be captured by a camera placed at a known position in the medical room; in that case the relative positional relationship between the camera and the eyes of the observed object must be obtained in advance, and the position of the marker object in the target real image relative to the observed object's glasses is taken as the real position information. Alternatively, the current image of the target object may be acquired through an auxiliary observation device (including but not limited to HoloLens glasses) worn by the observed object (e.g., a doctor, a patient's family member, or a student), so that the auxiliary observation device can, within the error tolerance accepted by those skilled in the art, be treated as approximately equivalent to the observer's eyes.
The target real image includes an image of the marker object. Optionally, if there are multiple marker objects, the target real image may contain only some of them because of the pose of the target object, the shooting angle, and so on. In that case, images must be captured according to a preset shooting rule, set in advance by those skilled in the art, until the number of marker objects in the acquired image satisfies the rule. Depending on the specific attributes, shape, and other properties of the marker objects, the preset shooting rule requires either one three-dimensional marker object or at least three marker objects that are not on a straight line.
Optionally, one way to determine the marker position is as follows: construct the camera coordinate system and the real-scene coordinate system in which the target real image is located; select a fixed object other than the target object in the target real image; from the fixed object's real coordinates in the real-scene coordinate system and its camera coordinates in the camera coordinate system, obtain the transformation between the real-scene coordinate system and the camera coordinate system; then obtain the marker object's camera coordinates in the camera coordinate system, and apply the obtained transformation to get the marker object's real coordinates in the real-scene coordinate system as the marker position.
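The coordinate hand-off just described can be sketched as follows. This is a minimal illustration assuming the camera-to-real-scene transform has already been estimated (e.g., from the fixed object's coordinate pairs) and is given as a 4x4 homogeneous matrix; the names and values are hypothetical.

```python
import numpy as np

def to_real_scene(marker_cam_xyz, T_cam_to_real):
    """Map a marker's camera coordinates into the real-scene frame.

    marker_cam_xyz : (3,) point in the camera coordinate system.
    T_cam_to_real  : (4, 4) homogeneous transform from camera to
                     real-scene coordinates, estimated beforehand from
                     known fixed-object correspondences.
    """
    p = np.append(np.asarray(marker_cam_xyz, dtype=float), 1.0)  # homogeneous
    return (T_cam_to_real @ p)[:3]

# Hypothetical transform: rotate 90 degrees about z, translate by (0.5, 0, 1.2)
T = np.array([[0, -1, 0, 0.5],
              [1,  0, 0, 0.0],
              [0,  0, 1, 1.2],
              [0,  0, 0, 1.0]])
print(to_real_scene([0.1, 0.2, 1.0], T))  # marker position in the real scene
```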
Alternatively, the marker position can be determined by locating the marker object with Bluetooth positioning, Wi-Fi positioning, or the like. It can also be determined by building a visual map of the real-scene space in advance and then performing visual positioning on an image captured at the marker position, yielding the pose of the marker object as the marker position.
The location of the mark may also be determined in other ways known to those skilled in the art, and is not limited herein.
The marker positions of several marker objects can be taken as the real position information. Each piece of real position information includes the identification information of the marker object, so that the marker object in the real scene can subsequently be matched to the marker object in the medical auxiliary three-dimensional model.
S103: and generating a space conversion relation according to the virtual position information and the real position information.
If the virtual and real position information of multiple marker objects was acquired in steps S101 and S102, a preset number of them may be selected to generate the spatial transformation relationship. In theory, the virtual and real position information of only three marker points is needed to determine the spatial transformation relationship in three-dimensional space. To ensure its accuracy, the virtual and real position information of one or more additional marker points can be selected to verify the spatial transformation relationship. Note that a single marker object may carry one or more marker points.
Optionally, the spatial transformation relationship may be generated from the virtual position information and the real position information as follows:
obtain the position information of the auxiliary observation device (e.g., an MR device) in the virtual reality coordinate system; with the MR device at different positions, acquire at least two successively captured frames containing the same marker points together with the corresponding inertial data of the MR device, and compute the pose change data after the MR device's position changes from the inertial data; compute the position matrix corresponding to the two frames from the pose change data, and select the same marker point in the two successive frames to form a marker pair; then perform pose calculation on the position matrix of each group of marker pairs to obtain the position information of the marker points on the target object.
Optionally, at least two frames of the same marker point under a reflective light source can be captured successively by the camera in the MR device, while the inertial measurement unit in the MR device collects the corresponding inertial data. The motion trajectory of the MR device is determined from the inertial data, the pose change data after the position change is determined from the trajectory, and the corresponding displacement and rotation matrices are obtained from the pose change data, yielding the position matrix (fundamental matrix) for the two successive frames, i.e., the spatial transformation relationship.
For example, the same marker point in two successively captured frames forms a marker pair, and pose calculation on the position matrix of each group of marker pairs yields the position information of the marker point on the target object; the position information may be computed with a random sample consensus (RANSAC) algorithm. As another example, the position coordinates of the first and second feature points in each marker pair are determined from the displacement and rotation matrices corresponding to the fundamental matrix, the matching degree between the two feature points is determined from those coordinates and the fundamental matrix, and the position coordinates for each marker pair are obtained from the marker pairs whose matching degree falls within a preset threshold interval.
Obtaining the position of each marker by visual positioning from the fundamental matrix formed by the marker pairs is more automated and efficient, and this approach also yields more accurate marker-point positions.
The spatial transformation relationship includes a mapping relationship between the virtual position and the real position; the mapping may be determined in any manner known to those skilled in the art.
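As a hedged sketch of the marker-pair screening described above (assuming the OpenCV library is available; the variable names and values are illustrative, not from the patent), the fundamental matrix between two successive frames can be estimated with random sample consensus, and the returned mask keeps only the marker pairs consistent with a single rigid camera motion:

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)
# Pixel coordinates of the same marker points in two successively captured
# frames (hypothetical values; shape (N, 2), with N >= 8 for the solver).
pts_prev = rng.uniform(0, 640, size=(20, 2)).astype(np.float32)
pts_curr = pts_prev + np.float32([3.0, 1.5]) \
    + rng.normal(0, 0.3, size=(20, 2)).astype(np.float32)  # simulated motion

# Estimate the fundamental ("basic") matrix with RANSAC; the mask flags the
# inlier marker pairs whose matching degree passes the threshold.
F, mask = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC,
                                 ransacReprojThreshold=1.0, confidence=0.99)
if F is not None:
    kept = int(mask.ravel().sum())
    print(f"kept {kept} of {len(pts_prev)} marker pairs as inliers")
```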
S104: and positioning and matching the medical auxiliary three-dimensional model at the corresponding position of the target object according to the space transformation relation.
The medical auxiliary three-dimensional model can be projected and displayed at the corresponding position of the target object in the real scene through technologies such as holographic projection after the positioning and matching are successful through the space transformation relation, so that the virtual model (the medical auxiliary three-dimensional model) and the entity object (the target object) can be accurately matched.
The target object (entity model) is in the real scene, on one hand, the target object has a corresponding entity model in a real coordinate system, has a corresponding three-dimensional relationship, and can reflect the length, width and height of the entity model and the specific structural relationship; on the other hand, in the medical auxiliary three-dimensional model under the virtual reality coordinate system, since the coordinate system itself is changed, a matrix transformation relationship (space transformation relationship) exists between the two coordinate systems, and coordinate transformation of the marking points on the marking objects in the target object and the medical auxiliary three-dimensional model is realized by calculating the matrix transformation relationship (space transformation relationship) between the two coordinate systems.
In some embodiments, positioning and matching the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship includes:
determining a marker real image of the marker object from the target real image and the real size of the marker object, determining a marker virtual image of the marker object from the medical auxiliary three-dimensional model and the virtual size of the marker object, determining a size adjustment ratio from the real size and the virtual size, and adjusting the medical auxiliary three-dimensional model to achieve fused display of the medical auxiliary three-dimensional model and the target object.
Optionally, the real size and the virtual size may be the area, side length, height, or the like of the marker object.
In some embodiments, displaying the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship includes:
the marker object includes at least three different marker sub-objects; determining the real distance between any two marker sub-objects from the target real image, determining the virtual distance between any two marker sub-objects from the medical auxiliary three-dimensional model, determining a distance adjustment ratio from the real distances and the virtual distances, and adjusting the medical auxiliary three-dimensional model to achieve fused display of the medical auxiliary three-dimensional model and the target object.
The real distance and the virtual distance may be the distances between corresponding marker points on the marker objects; the marker points may be chosen by those skilled in the art, for example one or more vertices of a marker object, and are not limited here.
Because of file storage constraints, system error, and similar issues, the medical auxiliary three-dimensional model may not have been built at the same scale as the target object. In that case the approach above achieves accurate matching of the virtual model (the medical auxiliary three-dimensional model) with the physical object (the target object), restoring the virtual model to the physical object at equal scale in the virtual reality scene.
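A minimal sketch of the scale-adjustment idea follows (the names are illustrative; the patent does not prescribe an implementation): the ratio of real to virtual inter-marker distances gives the factor by which the model must be scaled.

```python
import numpy as np

def scale_ratio(real_pts, virtual_pts):
    """Average ratio of real to virtual pairwise marker distances.

    real_pts, virtual_pts : (N, 3) arrays of the same N >= 3 marker
    sub-objects, measured in the real scene and in the model.
    """
    real_pts = np.asarray(real_pts, dtype=float)
    virtual_pts = np.asarray(virtual_pts, dtype=float)
    ratios = []
    n = len(real_pts)
    for i in range(n):
        for j in range(i + 1, n):
            d_real = np.linalg.norm(real_pts[i] - real_pts[j])
            d_virtual = np.linalg.norm(virtual_pts[i] - virtual_pts[j])
            ratios.append(d_real / d_virtual)
    return float(np.mean(ratios))

# Hypothetical example: the model was exported at half the real size
real = [[0, 0, 0], [2, 0, 0], [0, 2, 0]]
virtual = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
print(scale_ratio(real, virtual))  # 2.0 -> scale the model up by 2x
```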
In some embodiments, if the target object has moved in position, the method further comprises:
acquiring a current target image and a historical target image of the target object, wherein the historical target image is an image captured at a moment before the current target image;
determining current position information and historical position information of a target object according to the current target image and the historical target image;
acquiring the position similarity between the current position information and the historical position information, and determining position change information if the position similarity is greater than the preset similarity, wherein the position change information is determined according to the current position information and the real position information;
and adjusting the medical auxiliary three-dimensional model according to the position change information, so that it is positioned, matched, and displayed at the corresponding position of the target object after the movement.
Once the target object moves, the previously fitted medical auxiliary three-dimensional model may no longer correspond to it and must be adjusted synchronously. However, since the three-dimensional model usually displays blood vessels, bones, and the like in different colors, adjusting it synchronously with every movement of the target object would make the picture shake frequently and hurt the user experience.
Optionally, whether the target object has moved may be determined by attaching a position sensor to the target object or another preset location and deriving the position change information from the sensor's readings. The position sensor may be a laser sensor, an ultrasonic sensor, or the like.
Whether the target object has moved can also be determined by capturing images of the target object at successive moments and comparing them.
Alternatively, the current target image and the historical target image may be image frames of two moments of video shot on the target object, or may be images shot at two moments before and after. The current target image and the history target image may be two images of different moments photographed by the same photographing apparatus at the same location, or may be two images of different moments photographed by the same photographing apparatus at different locations. For example, the current target image and the historical target image may be acquired by an auxiliary observation device worn by the observation object, and the current target image and the historical target image acquired by the observation object may have different image acquisition positions due to actions such as walking, turning, and the like, and at this time, the current position information and the historical position information may be determined by performing coordinate system conversion by means of the coordinates of a certain fixed reference object.
Optionally, the position similarity may be determined from the Euclidean distance between the current position and the historical position, with a smaller distance indicating higher similarity. The preset similarity may be set in advance by those skilled in the art. If the position similarity is greater than the preset similarity, the target object has moved very little or is stationary; otherwise, it is still moving, and new current and historical target images are acquired repeatedly until the position similarity exceeds the preset similarity.
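The stillness check can be sketched as below. This is a hedged illustration; the distance-to-similarity mapping and the threshold value are assumptions, since the patent leaves both to the practitioner.

```python
import numpy as np

def position_similarity(current_xyz, historical_xyz):
    """Map the Euclidean distance between two positions to (0, 1].

    1.0 means identical positions; the similarity falls toward 0 as the
    target object moves farther between the two images.
    """
    d = np.linalg.norm(np.asarray(current_xyz, float) -
                       np.asarray(historical_xyz, float))
    return 1.0 / (1.0 + d)

PRESET_SIMILARITY = 0.95  # hypothetical threshold chosen by the practitioner

current, historical = [0.50, 1.20, 0.80], [0.51, 1.21, 0.80]
if position_similarity(current, historical) > PRESET_SIMILARITY:
    print("target (nearly) still -> compute position change, adjust model")
else:
    print("target still moving -> keep sampling image pairs")
```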
The current position information and the real position information comprise coordinate information of the target object under a real scene coordinate system, and position change information is determined according to the coordinate information.
In this way, when the target object moves, the medical auxiliary three-dimensional model follows it promptly, while the model is not adjusted during the target object's motion itself; this keeps the displayed model from shaking and degrading the user experience. Compared with re-deriving a new spatial transformation relationship, this approach is simpler, requires less computation, and saves resources.
In some embodiments, before the medical auxiliary three-dimensional model is positioned and matched at the corresponding position of the target object according to the spatial transformation relationship, the method further includes: preprocessing the medical auxiliary three-dimensional model to obtain a virtual three-dimensional model with the same parameters as the physical model constructed from the target real image.
Three-dimensional model formats include, but are not limited to, STL, 3DS, 3DP, 3MF, OBJ, and PLY. A three-dimensional model format converter is used to preprocess the medical auxiliary three-dimensional model; the preprocessing mainly performs format conversion so that the resulting model at least meets the hardware support requirements of the virtual reality device (e.g., MR and AR devices) and can display normally in the real-virtual environment the device creates. In addition, format conversion ensures that the medical auxiliary three-dimensional model and the three-dimensional model formed from the physical model at least share the same format, which greatly improves the accuracy of their fusion. In other words, it ensures the display accuracy when the medical auxiliary three-dimensional model is shown at the corresponding position of the target object according to the spatial transformation relationship.
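As a hedged sketch of this preprocessing step (assuming the open-source trimesh library; the file names are hypothetical), a reconstructed STL model can be re-exported in a format the viewing device supports:

```python
import trimesh

# Load the reconstructed medical auxiliary three-dimensional model (STL is a
# common CT-reconstruction export) and re-export it in a format supported by
# the auxiliary viewing device, e.g. OBJ.
mesh = trimesh.load("leg_reconstruction.stl")   # hypothetical file name
mesh.export("leg_reconstruction.obj")
print(f"converted mesh with {len(mesh.vertices)} vertices to OBJ")
```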
Specifically, n marker points are acquired at corresponding positions on the virtual model (the medical auxiliary three-dimensional model) and on the actual object, where n is a natural number greater than or equal to 3, determined by the degrees of freedom when solving for R and t. Since a rigid transformation can be decomposed into a rotation and a translation, registering the medical auxiliary three-dimensional model with the target object in the real scene reduces to solving for the rotation matrix R and translation matrix t that transform the source point set into the target point set, i.e., the following relationship is satisfied:
P_target = R * P_calibration + t
The R, t transformation between marker point sets A and B (A and B are 3 x n arrays, each column a coordinate point) is solved by computing the mean centers of A and B, translating A and B to the rotation center, solving for R, and then substituting R into the rigid transformation formula to compute t.
The virtual and actual coordinates of the marker object are obtained as follows: the marker point coordinates are read in HoloLens by placing virtual points, the three-dimensional model carrying the marker points (the medical auxiliary three-dimensional model) is obtained by CT image reconstruction, and the two sets of feature points are matched to achieve the model-matching effect.
In some embodiments, the method further comprises:
acquiring a real scene image, and inputting it into a preset target detection model to obtain a target detection result;
and if the target detection result comprises the preset indication object, the medical auxiliary three-dimensional model displayed at the corresponding position of the target object is displayed in a transparent mode.
The real scene image may be the image currently visible to the observed object. Specifically, it can be acquired by a camera arranged on the observed object's head, in which case the relative position between the camera and the observed object's eyes must be calibrated in advance. It can also be acquired by having the observed object wear glasses or similar equipment with an image-capture function, or in any other manner known to those skilled in the art.
The preset target detection model may be obtained by training the basic neural network model through a plurality of images including a preset neglected object or a preset indication object, and a specific training manner may be implemented in a manner known to those skilled in the art, which is not limited herein.
Preset indication objects include, but are not limited to, pointer sticks, scalpels, fingers, fingertips, and other things that may be used for pointing. Preset ignored objects include, but are not limited to, a third person's arm and the like (objects that are neither the target object nor a preset indication object).
When a preset indication object is recognized, the observed object presumably needs the medical auxiliary three-dimensional model for explaining a surgical plan or a similar activity and must see the currently indicated position more clearly. The display of the medical auxiliary three-dimensional model can therefore be set to transparent, so that the preset indication object in the real scene is clearer and easier to observe. Several preset transparency levels may be provided for the user to choose from as needed.
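A minimal sketch of the multi-level transparency idea follows (the level values and the detection-result shape are assumptions for illustration):

```python
# Hypothetical transparency presets the user can cycle through.
TRANSPARENCY_LEVELS = {"off": 1.0, "light": 0.6, "medium": 0.35, "strong": 0.15}

def model_alpha(detections, selected_level="medium"):
    """Return the model's alpha: transparent while an indication object
    (pointer stick, scalpel, fingertip, ...) is detected, opaque otherwise."""
    indication_labels = {"pointer_stick", "scalpel", "finger", "fingertip"}
    if any(d["label"] in indication_labels for d in detections):
        return TRANSPARENCY_LEVELS[selected_level]
    return TRANSPARENCY_LEVELS["off"]

# Example detection result from the preset target detection model
detections = [{"label": "scalpel", "score": 0.91}]
print(model_alpha(detections))  # 0.35 -> render the model semi-transparent
```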
In some embodiments, the method further comprises:
if the target detection result includes a preset ignored object, preferentially displaying the target sub-model in the target area of the medical auxiliary three-dimensional model displayed at the corresponding position of the target object;
the target area is the area of the medical auxiliary three-dimensional model corresponding to the position of the preset ignored object;
the preferential display includes moving the model containing the target sub-model above the preset ignored object.
When a part of the medical auxiliary three-dimensional model (hereinafter, the model) needs to be indicated, an elbow or another foreign object is often introduced and occludes the model. The preset target detection model can recognize this, and the sub-model of the occluded area can then be displayed preferentially, for example by moving that part of the model toward the human eye along the line connecting the model and the eye until it sits above the preset ignored object (the direction toward the eye being "above"). This prevents the preset ignored object from blocking the model, improves the model's visibility, and improves the user experience.
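The "move toward the eye" behavior can be sketched as below; the parameterization is an assumption (the patent fixes neither distances nor an API). The occluded sub-model is offset along the unit vector from the model toward the eye until it clears the ignored object.

```python
import numpy as np

def lift_submodel(submodel_pos, eye_pos, ignored_depth, margin=0.02):
    """Offset an occluded sub-model along the model-to-eye line.

    submodel_pos  : (3,) current sub-model position.
    eye_pos       : (3,) observer eye position.
    ignored_depth : distance from the eye to the occluding (ignored)
                    object along the same line; the sub-model is placed
                    'margin' metres nearer the eye than that object.
    """
    submodel_pos = np.asarray(submodel_pos, float)
    eye_pos = np.asarray(eye_pos, float)
    direction = eye_pos - submodel_pos
    direction /= np.linalg.norm(direction)      # unit vector toward the eye
    current_depth = np.linalg.norm(eye_pos - submodel_pos)
    shift = current_depth - (ignored_depth - margin)
    return submodel_pos + max(shift, 0.0) * direction

# Hypothetical scene: an elbow 0.5 m from the eye occludes a sub-model 0.8 m away
print(lift_submodel([0.0, 0.0, 0.8], [0.0, 0.0, 0.0], ignored_depth=0.5))
```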
Optionally, when the preset ignored object is no longer detected, the target sub-model is restored to its original display position.
In some embodiments, before displaying the medical assistance three-dimensional model at the corresponding location of the target object according to the spatial transformation relationship, the method further comprises:
and acquiring the current object position and the sight line direction of the observed object, and if the current object position belongs to a preset position range and the sight line direction belongs to a preset sight line direction, displaying the medical auxiliary three-dimensional model at the corresponding position of the target object according to the space transformation relation.
In some cases the observed object's line of sight is not on the target object, and the model need not be displayed. The model is displayed when the current object position is within the preset position range and the sight direction is within the preset sight direction; otherwise it is not displayed.
The current object position may be determined in any manner known to those skilled in the art, for example by Bluetooth, Wi-Fi, or visual positioning. The sight direction can be determined, with the observed object's permission, by capturing an image of the observed object's face.
In some embodiments, if the viewing object undergoes a positional shift, the method further comprises:
acquiring the moving object position of the observed object, wherein the moving object position is the position of the observed object after it has moved;
and determining position change data according to the current object position and the moving object position, and adjusting the medical auxiliary three-dimensional model so that it remains positioned, matched, and displayed at the corresponding position of the target object after the movement.
That is, because the observed object has moved, its viewing angle changes; adjusting the model with the position change data ensures that the model is displayed more realistically and fits the scene better.
In some embodiments, the medical assisted three-dimensional model location matching method further comprises:
acquiring gesture information, wherein the gesture information includes an adjustment mode and the region of the target object to be adjusted;
and adjusting the positioning, matching, and display state of the medical auxiliary three-dimensional model according to the gesture information.
Specifically, the gesture information includes a position on the target object designated by a preset finger, the area around which is taken as the region to be adjusted. The current adjustment mode is then determined from gestures such as an OK sign, a clap, or bending one or more fingers. For example, an OK gesture may serve as a zoom-in gesture, with the magnification determined by how long the gesture is held (which can be measured by the number of image frames in the video); a clap may serve as a zoom-out instruction, with the reduction factor determined by the number of claps (counted from the occurrences of a characteristic image frame in the video, or by a sound sensor). The number of bent fingers may determine which virtual objects in the medical auxiliary three-dimensional model are displayed or hidden; for example, if the model contains blood vessels, bones, and muscles, one bent finger displays only bones, two display only muscles, and so on. The correspondence between gesture information and display state can be set by those skilled in the art as needed.
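The gesture-to-adjustment mapping sketched below is illustrative only; the labels, scaling factors, and layer order are assumptions, since the patent leaves the correspondence to the practitioner.

```python
def apply_gesture(model, gesture, magnitude=1):
    """Adjust the model's display state from a recognized gesture.

    model     : dict with 'scale' and 'visible_layers' (illustrative shape).
    gesture   : label produced by the gesture recognizer.
    magnitude : OK-gesture hold duration in frames, or clap count,
                or number of bent fingers.
    """
    layers = ["bones", "muscles", "blood_vessels"]  # order is an assumption
    if gesture == "ok":                  # zoom in; factor grows with duration
        model["scale"] *= 1.0 + 0.1 * magnitude
    elif gesture == "clap":              # zoom out; factor grows with claps
        model["scale"] /= 1.0 + 0.1 * magnitude
    elif gesture == "bent_fingers":      # show only the n-th layer
        model["visible_layers"] = [layers[min(magnitude, len(layers)) - 1]]
    return model

model = {"scale": 1.0, "visible_layers": ["bones", "muscles", "blood_vessels"]}
print(apply_gesture(model, "ok", magnitude=3))            # scale becomes 1.3
print(apply_gesture(model, "bent_fingers", magnitude=1))  # only bones shown
```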
The above-mentioned medical assisted three-dimensional model location matching method is exemplarily described below by way of a specific embodiment, and referring to fig. 2, the specific method includes:
s201: a target human body part is obtained.
S202: a plurality of markers are attached to the target human body part.
The markers may be a specific set of developing (radiopaque) marker points applied to the body surface; they can be recognized by medical imaging equipment (such as CT, MR, PET-CT, or other equipment that goes from two-dimensional images to three dimensions) and also by an ordinary camera.
S203: and acquiring a two-dimensional image through medical imaging equipment.
S204: and establishing a medical auxiliary three-dimensional model according to the two-dimensional image.
The medical auxiliary three-dimensional model includes a marker.
S205: virtual position information of the marker in the medical auxiliary three-dimensional model is acquired.
After three-dimensional modeling, the virtual position information of the markers in the medical auxiliary three-dimensional model can be obtained through the program's virtual camera.
S206: and acquiring the real position information of the target human body part marked in the real scene.
The position of the marker of the target human body part relative to the real camera can be identified by using the sensing camera of the wearable device.
S207: and generating a space conversion relation according to the virtual position information and the real position information.
This matches the virtual marker point positions with the real marker point positions, and thereby matches the target human body part in the real scene with the medical auxiliary three-dimensional model.
S208: and positioning and matching the medical auxiliary three-dimensional model according to the space transformation relation, and displaying the object position of the target human body part in the real scene.
Subsequent virtual-real overlay can then be achieved through the positioning of the head-mounted device. For example, when a user uses the spatial mapping function of a head-mounted device, such as Microsoft HoloLens glasses, to collect the internal structure of a real scene:
the user operates the HoloLens glasses and sets a single-scan range threshold; the user wears the HoloLens glasses and moves through the current space (an operating room, a consulting room, etc.), scanning its interior to obtain and store a set of internal structure meshes and coordinates for the current space; discrete sub-spaces are collected with the HoloLens glasses in the same way, and multiple sets of internal structure information are gathered and later stitched into a complete interior 3D model of the current space, reconstructing the three-dimensional scene. For example, an information panel is built with modeling software and imported into Unity3D; within Unity3D, a HoloLens plug-in is used to bind scripts to the information panel and add a spatial anchor positioning function; the prefab calls the environment-aware camera and the depth-aware camera to scan the surroundings periodically and update the environment information data; finally, the three-dimensional scene is reconstructed, and objects with the spatial anchor positioning function are mapped from virtual-scene space coordinates to the reconstructed scene's space coordinates.
For example, when the object the user scans (i.e., the target object) is the patient's leg, the leg is the physical model in the virtual reality scene and carries marker points, so through HoloLens the user can clearly and completely observe the patient's leg and the markers on its surface in the virtual reality scene. Beforehand, the user scans the patient's leg by CT or nuclear magnetic resonance to obtain examination data and the three-dimensional structure of the leg, and builds a holographic projection containing the marker points from the examination data and the markers. The holographic projection is displayed in distinct colors across several dimensions, such as skeleton, muscles, blood vessels, lesions, and markers. By matching the holographic image with the marker points corresponding to the leg, the 3D-reconstructed medical auxiliary three-dimensional model is accurately superimposed on the target human body part, letting the doctor observe the tissue structure more intuitively and realistically, improving surgical accuracy, facilitating communication with the patient, and increasing the participation rate of young doctors in surgery.
By this method, the holographic projection of the virtual medical auxiliary three-dimensional model is superimposed on the target human body part of the patient, such as the leg, and displayed as a colored, transparent three-dimensional structure. This not only assists clinicians with the various complicated and difficult cases they currently encounter, but also supports surgical simulation, risk assessment, accurate measurement, and intraoperative navigation, improving the surgical success rate.
This embodiment provides a medical auxiliary three-dimensional model positioning and matching method, which comprises: obtaining a medical auxiliary three-dimensional model and the virtual position information of a marker object in the model; obtaining a target real image of the target object and determining real position information from the marker position of the marker object in that image; generating a spatial transformation relationship from the virtual position information and the real position information; and positioning and matching the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship. This display method improves the matching between the virtual model (the medical auxiliary three-dimensional model) and the physical object (the target object), enabling intelligent application across different types of scenarios. The patient's reconstructed image data (the medical auxiliary three-dimensional model) is fused with the physical affected part (the target object) in reality, providing doctors with intuitive three-dimensional information and accurate guidance during surgery.
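The size and distance adjustment described in the claims, where an adjustment ratio is derived from real versus virtual marker measurements before fused display, can likewise be sketched briefly. This is a hypothetical helper under the assumption of at least three marker sub-objects; the names are illustrative.

```python
import numpy as np

def distance_scale_ratio(real_pts, virtual_pts):
    """Estimate a single scale factor as the mean pairwise marker
    distance in the real scene divided by the mean pairwise marker
    distance in the model; requires at least three marker
    sub-objects. Inputs are (N, 3) arrays."""
    def pairwise(pts):
        n = len(pts)
        return [np.linalg.norm(pts[i] - pts[j])
                for i in range(n) for j in range(i + 1, n)]
    return np.mean(pairwise(real_pts)) / np.mean(pairwise(virtual_pts))

# Sketch assumption: the model is scaled by this ratio about its
# marker centroid before the rigid transform of S207 is applied.
```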
Embodiment Two
Referring to fig. 3, the present embodiment provides a medical auxiliary three-dimensional model positioning and matching system 300, which comprises:
the virtual position information acquisition module 301, configured to acquire a medical auxiliary three-dimensional model and the virtual position information of a marker object in the medical auxiliary three-dimensional model, where the medical auxiliary three-dimensional model is generated according to a medical image of the target object, and the medical image includes the marker object;
the real position information acquisition module 302, configured to acquire a target real image of the target object and determine real position information according to the marker position of the marker object in the target real image;
the generation module 303, configured to generate a spatial transformation relationship according to the virtual position information and the real position information, where the spatial transformation relationship includes a mapping relationship between the virtual position and the real position;
the positioning and matching module 304, configured to position and match the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship.
In this embodiment, the system is essentially provided with a plurality of modules for executing the method of any of the above embodiments; for their specific functions and technical effects, reference may be made to the above embodiments, and they are not described again here. A minimal sketch of how these modules might compose is given below.
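In this sketch every name is an assumption for illustration, not the claimed system; the subsystem calls are injected as stand-ins, and the `register` callable could be the `estimate_rigid_transform` sketch given earlier.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LocationMatchingSystem:
    """Illustrative wiring of modules 301-304."""
    load_model: Callable      # module 301: () -> (model, virtual marker points)
    capture_image: Callable   # feeds module 302: () -> target real image
    detect_markers: Callable  # module 302: image -> real marker points
    register: Callable        # module 303: (virtual_pts, real_pts) -> (R, t)

    def match(self):
        # Module 301: model and virtual marker positions.
        model, virtual_pts = self.load_model()
        # Module 302: real image and real marker positions.
        real_pts = self.detect_markers(self.capture_image())
        # Module 303: spatial transformation relationship.
        R, t = self.register(virtual_pts, real_pts)
        # Module 304: the caller renders `model` posed by (R, t)
        # at the target object's position in the real scene.
        return model, R, t
```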
Referring to fig. 4, an embodiment of the present invention further provides an electronic device 1300, comprising a processor 1301, a memory 1302, and a communication bus 1303;
the communication bus 1303 is used for connecting the processor 1301 and the memory 1302;
the processor 1301 is configured to execute a computer program stored in the memory 1302 to implement the method described in one or more of the above embodiments.
An embodiment of the invention further provides a computer-readable storage medium having a computer program stored thereon, the computer program being for causing a computer to execute the method according to any one of the above embodiments.
An embodiment of the present application further provides a non-volatile readable storage medium storing one or more modules (programs) that, when applied to a device, cause the device to execute the instructions of the steps included in Embodiment One of the present application.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium other than a computer readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical fiber cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The above embodiments merely illustrate the principles of the present invention and its effectiveness, and are not intended to limit the invention. Those skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and variations accomplished by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (9)

1. A medical auxiliary three-dimensional model positioning and matching method, the method comprising:
acquiring a medical auxiliary three-dimensional model and virtual position information of a marker object in the medical auxiliary three-dimensional model, wherein the medical auxiliary three-dimensional model is generated according to a medical image of the target object, and the medical image comprises the marker object;
acquiring a target real image of the target object, and determining real position information according to the marker position of the marker object in the target real image;
generating a spatial transformation relationship according to the virtual position information and the real position information, wherein the spatial transformation relationship comprises a mapping relationship between the virtual position and the real position;
positioning and matching the medical auxiliary three-dimensional model at a corresponding position of the target object according to the spatial transformation relationship;
before the medical auxiliary three-dimensional model is displayed at the corresponding position of the target object according to the spatial transformation relationship, acquiring the current object position and the sight-line direction of the observed object; and, if the current object position falls within a preset position range and the sight-line direction falls within a preset sight-line direction, making the marker object transparent or hidden in the medical auxiliary three-dimensional model, and displaying the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship;
wherein positioning and matching the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship further comprises any one of the following:
determining a real image of the marker object from the target real image and determining the real size of the marker object; determining a virtual image of the marker object from the medical auxiliary three-dimensional model and determining the virtual size of the marker object; determining a size adjustment ratio according to the real size and the virtual size, and adjusting the medical auxiliary three-dimensional model to realize fused display of the medical auxiliary three-dimensional model and the target object;
the marker object comprises at least three different marker sub-objects; the real distance between any two marker sub-objects is determined from the target real image, the virtual distance between any two marker sub-objects is determined from the medical auxiliary three-dimensional model, a distance adjustment ratio is determined according to the real distances and the virtual distances, and the medical auxiliary three-dimensional model is adjusted to realize fused display of the medical auxiliary three-dimensional model and the target object.
2. The medical auxiliary three-dimensional model positioning and matching method of claim 1, wherein positioning and matching the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship comprises any one of the following:
determining a real image of the marker object from the target real image and determining the real size of the marker object; determining a virtual image of the marker object from the medical auxiliary three-dimensional model and determining the virtual size of the marker object; determining a size adjustment ratio according to the real size and the virtual size, and adjusting the medical auxiliary three-dimensional model to realize fused display of the medical auxiliary three-dimensional model and the target object;
the marker object comprises at least three different marker sub-objects; the real distance between any two marker sub-objects is determined from the target real image, the virtual distance between any two marker sub-objects is determined from the medical auxiliary three-dimensional model, a distance adjustment ratio is determined according to the real distances and the virtual distances, and the medical auxiliary three-dimensional model is adjusted to realize fused display of the medical auxiliary three-dimensional model and the target object.
3. The medical auxiliary three-dimensional model positioning and matching method of claim 1, wherein, if the target object undergoes a position shift, the method further comprises:
acquiring a current target image and historical target images of the target object, wherein the historical target images are images captured at a plurality of times before the current target image;
determining current position information and historical position information of the target object according to the current target image and the historical target images;
acquiring the position similarity between the current position information and the historical position information, and determining position change information if the position similarity is larger than a preset similarity, wherein the position change information is determined according to the current position information and the real position information;
and adjusting the medical auxiliary three-dimensional model according to the position change information so as to realize positioning and matching of the medical auxiliary three-dimensional model, displaying it at the corresponding position of the target object after the position shift.
4. The medical auxiliary three-dimensional model positioning and matching method of claim 1, further comprising:
acquiring a real scene image, and inputting the real scene image into a preset target detection model to obtain a target detection result;
and, if the target detection result comprises a preset indication object, displaying in a transparent mode the medical auxiliary three-dimensional model displayed at the corresponding position of the target object.
5. The medical auxiliary three-dimensional model positioning and matching method of claim 4, further comprising:
if the target detection result comprises a preset ignored object, preferentially displaying a target sub-model in a target area of the medical auxiliary three-dimensional model displayed at the corresponding position of the target object;
wherein the target area comprises the area of the medical auxiliary three-dimensional model corresponding to the position of the preset ignored object;
and the preferential display comprises moving the model in which the target sub-model is located to be displayed above the preset ignored object.
6. The medical auxiliary three-dimensional model positioning and matching method of claim 5, wherein, if the observed object undergoes a position shift, the method further comprises:
acquiring the moved object position of the observed object, wherein the moved object position is the position of the observed object after it has moved;
and determining position change data according to the current object position and the moving object position, and adjusting the medical auxiliary three-dimensional model to realize positioning and matching of the medical auxiliary three-dimensional model at the corresponding position of the target object after the position movement.
7. A medical auxiliary three-dimensional model positioning and matching system, the system comprising:
the virtual position information acquisition module, configured to acquire a medical auxiliary three-dimensional model and virtual position information of a marker object in the medical auxiliary three-dimensional model, wherein the medical auxiliary three-dimensional model is generated according to a medical image of the target object, and the medical image comprises the marker object;
the real position information acquisition module, configured to acquire a target real image of the target object and determine real position information according to the marker position of the marker object in the target real image;
the generation module, configured to generate a spatial transformation relationship according to the virtual position information and the real position information, wherein the spatial transformation relationship comprises a mapping relationship between the virtual position and the real position;
the positioning and matching module, configured to position and match the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship; to acquire, before the medical auxiliary three-dimensional model is displayed at the corresponding position of the target object according to the spatial transformation relationship, the current object position and the sight-line direction of the observed object; and, if the current object position falls within a preset position range and the sight-line direction falls within a preset sight-line direction, to display the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship;
wherein positioning and matching the medical auxiliary three-dimensional model at the corresponding position of the target object according to the spatial transformation relationship further comprises any one of the following:
determining a real image of the marker object from the target real image and determining the real size of the marker object; determining a virtual image of the marker object from the medical auxiliary three-dimensional model and determining the virtual size of the marker object; determining a size adjustment ratio according to the real size and the virtual size, and adjusting the medical auxiliary three-dimensional model to realize fused display of the medical auxiliary three-dimensional model and the target object;
the marker object comprises at least three different marker sub-objects; the real distance between any two marker sub-objects is determined from the target real image, the virtual distance between any two marker sub-objects is determined from the medical auxiliary three-dimensional model, a distance adjustment ratio is determined according to the real distances and the virtual distances, and the medical auxiliary three-dimensional model is adjusted to realize fused display of the medical auxiliary three-dimensional model and the target object.
8. An electronic device comprising a processor, a memory, and a communication bus;
the communication bus is used for connecting the processor and the memory;
the processor is configured to execute a computer program stored in the memory to implement the method of any one of claims 1-6.
9. A computer-readable storage medium having a computer program stored thereon, the computer program being for causing a computer to perform the method of any one of claims 1-6.
CN202111033540.6A 2021-09-03 2021-09-03 Medical auxiliary three-dimensional model positioning and matching method, system, equipment and medium Active CN113842227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111033540.6A CN113842227B (en) 2021-09-03 2021-09-03 Medical auxiliary three-dimensional model positioning and matching method, system, equipment and medium

Publications (2)

Publication Number Publication Date
CN113842227A CN113842227A (en) 2021-12-28
CN113842227B true CN113842227B (en) 2024-04-05

Family

ID=78973260

Country Status (1)

Country Link
CN (1) CN113842227B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100380B (en) * 2022-06-17 2024-03-26 上海新眼光医疗器械股份有限公司 Automatic medical image identification method based on eye body surface feature points

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102591449A (en) * 2010-10-27 2012-07-18 微软公司 Low-latency fusing of virtual and real content
CN102999160A (en) * 2011-10-14 2013-03-27 微软公司 User controlled real object disappearance in a mixed reality display
CN105264478A (en) * 2013-05-23 2016-01-20 微软技术许可有限责任公司 Hologram anchoring and dynamic positioning
CN106997617A (en) * 2017-03-10 2017-08-01 深圳市云宙多媒体技术有限公司 The virtual rendering method of mixed reality and device
CN107847289A (en) * 2015-03-01 2018-03-27 阿里斯医疗诊断公司 The morphology operation of reality enhancing
CN109512514A (en) * 2018-12-07 2019-03-26 陈玩君 A kind of mixed reality orthopaedics minimally invasive operation navigating system and application method
CN109646089A (en) * 2019-01-15 2019-04-19 浙江大学 A kind of spine and spinal cord body puncture based on multi-mode medical blending image enters waypoint intelligent positioning system and method
CN110914873A (en) * 2019-10-17 2020-03-24 深圳盈天下视觉科技有限公司 Augmented reality method, device, mixed reality glasses and storage medium
CN111031954A (en) * 2016-08-16 2020-04-17 视觉医疗系统公司 Sensory enhancement system and method for use in medical procedures
CN111163837A (en) * 2017-07-28 2020-05-15 医达科技公司 Method and system for surgical planning in a mixed reality environment
KR20200081540A (en) * 2018-12-27 2020-07-08 주식회사 홀로웍스 System for estimating orthopedics surgery based on simulator of virtual reality
CN111491584A (en) * 2017-12-15 2020-08-04 美敦力公司 Augmented reality solution for optimizing targeted access and therapy delivery for interventional cardiology tools
CN112826615A (en) * 2021-03-24 2021-05-25 北京大学口腔医院 Display method of fluoroscopy area based on mixed reality technology in orthodontic treatment

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8472120B2 (en) * 2010-02-28 2013-06-25 Osterhout Group, Inc. See-through near-eye display glasses with a small scale image source
AU2014231341B2 (en) * 2013-03-15 2019-06-06 Synaptive Medical Inc. System and method for dynamic validation, correction of registration for surgical navigation
CA2973479C (en) * 2015-07-21 2019-02-26 Synaptive Medical (Barbados) Inc. System and method for mapping navigation space to patient space in a medical procedure
US10102640B2 (en) * 2016-11-29 2018-10-16 Optinav Sp. Z O.O. Registering three-dimensional image data of an imaged object with a set of two-dimensional projection images of the object
US10499997B2 (en) * 2017-01-03 2019-12-10 Mako Surgical Corp. Systems and methods for surgical navigation
CN108446011A (en) * 2017-02-14 2018-08-24 深圳梦境视觉智能科技有限公司 A kind of medical householder method and equipment based on augmented reality
US10304252B2 (en) * 2017-09-15 2019-05-28 Trimble Inc. Collaboration methods to improve use of 3D models in mixed reality environments
US10869727B2 (en) * 2018-05-07 2020-12-22 The Cleveland Clinic Foundation Live 3D holographic guidance and navigation for performing interventional procedures
US11574446B2 (en) * 2019-08-30 2023-02-07 National Central University Digital image reality aligning kit and method applied to mixed reality system for surgical navigation
TWI741359B (en) * 2019-08-30 2021-10-01 國立中央大學 Mixed reality system integrated with surgical navigation system
CN111297485A (en) * 2019-12-06 2020-06-19 中南大学湘雅医院 Novel MR (magnetic resonance) method for automatically tracking and really displaying plant products in interior

Also Published As

Publication number Publication date
CN113842227A (en) 2021-12-28

Similar Documents

Publication Publication Date Title
US11730545B2 (en) System and method for multi-client deployment of augmented reality instrument tracking
Wang et al. Video see‐through augmented reality for oral and maxillofacial surgery
CN101243475B (en) Method and apparatus featuring simple click style interactions according to a clinical task workflow
Canessa et al. Calibrated depth and color cameras for accurate 3D interaction in a stereoscopic augmented reality environment
EP2452649A1 (en) Visualization of anatomical data by augmented reality
EP2765776A1 (en) Graphical system with enhanced stereopsis
CN112155727A (en) Surgical navigation systems, methods, devices, and media based on three-dimensional models
Jiang et al. Registration technology of augmented reality in oral medicine: A review
Ebert et al. Invisible touch—Control of a DICOM viewer with finger gestures using the Kinect depth camera
CN112346572A (en) Method, system and electronic device for realizing virtual-real fusion
CN104274247A (en) Medical surgical navigation method
Whitaker et al. Object calibration for augmented reality
Chen et al. A naked eye 3D display and interaction system for medical education and training
CN111973273A (en) Operation navigation system, method, device and medium based on AR technology
CN113034700A (en) Anterior cruciate ligament reconstruction surgery navigation method and system based on mobile terminal
CN113842227B (en) Medical auxiliary three-dimensional model positioning and matching method, system, equipment and medium
Müller et al. The virtual reality arthroscopy training simulator
Scherfgen et al. Estimating the pose of a medical manikin for haptic augmentation of a virtual patient in mixed reality training
Borges et al. A system for the generation of in-car human body pose datasets
Valentini Natural interface in augmented reality interactive simulations: This paper demonstrates that the use of a depth sensing camera that helps generate a three-dimensional scene and track user's motion could enhance the realism of the interactions between virtual and physical objects
CN114886558A (en) Endoscope projection method and system based on augmented reality
Liu et al. A portable projection mapping device for medical augmented reality in single-stage cranioplasty
Tuladhar et al. A recent review and a taxonomy for hard and soft tissue visualization‐based mixed reality
CN113689577B (en) Method, system, equipment and medium for matching virtual three-dimensional model with entity model
Isham et al. A framework of ultrasounds image slice positioning and orientation in 3D augmented reality environment using hybrid tracking method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240514

Address after: Room 808, Floor 8, Building 1, Maker Information Technology Service Platform, west of Mingtang Road, north of Shanggao Street, Mount Taishan District, Tai'an City, Shandong Province, 271000

Patentee after: Dongfang Yukang (Tai'an) Health Management Co.,Ltd.

Country or region after: China

Address before: 201204 floor 3, building 1, No. 400, Fangchun Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee before: Shanghai Laiqiu Medical Technology Co.,Ltd.

Country or region before: China