CN113610986B - Digital physical coupling method and system based on biological invariant feature - Google Patents


Info

Publication number
CN113610986B
CN113610986B (application CN202110760876.6A)
Authority
CN
China
Prior art keywords
digital
digital model
physical coupling
camera
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110760876.6A
Other languages
Chinese (zh)
Other versions
CN113610986A (en)
Inventor
戴凌磊
郝泳涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202110760876.6A priority Critical patent/CN113610986B/en
Publication of CN113610986A publication Critical patent/CN113610986A/en
Application granted granted Critical
Publication of CN113610986B publication Critical patent/CN113610986B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a digital physical coupling method and system based on biological invariant features, comprising the following steps: preprocessing a digital model of a human body part; marking first digital feature points on the preprocessed digital model; marking second digital feature points on the real human body part through a camera of a head-mounted device, wherein the first digital feature points correspond one-to-one with the second digital feature points and lie in the same virtual world coordinate system, namely the virtual world coordinate system of the camera; and transforming the position of the digital model according to the one-to-one corresponding first and second digital feature points, thereby realizing digital physical coupling between the digital model and the human body part. Compared with the prior art, the invention requires only a small number of markers, can be used directly in an operating room without preparation, is applicable to multiple parts of the human body, and has good universality.

Description

Digital physical coupling method and system based on biological invariant feature
Technical Field
The invention relates to the field of digital physical coupling methods in augmented reality environments, in particular to a digital physical coupling method and system based on biological invariant features.
Background
AR (Augmented Reality), namely the augmented reality display technology, integrates computer graphics, computer simulation, sensor and display technologies. It creates a virtual information environment in a multidimensional information space, gives the user an immersive sense of presence, offers rich interaction with the environment, and helps inspire conception.
As augmented reality (AR) and 3D modeling technologies develop, attempts to combine AR technology with medical applications are increasing. The immersive medical AR system is one of the important research directions; its main function is to couple a digital model with the corresponding part of a patient so as to improve the spatial perception capability of doctors. In order to realize digital physical coupling, research at home and abroad has proposed various solutions for different surgical scenes.
Augmented reality (AR) technology is currently mostly integrated into head-mounted devices, such as the head-mounted AR device disclosed in the invention with publication number CN106526860A.
The first solution is manual coupling. Manual coupling directly uses the interaction functionality provided by the head-mounted device to move the digital model until the naked eye judges that the model has been coupled to the human body. The operation is simple, but the limited resolution of the naked eye introduces errors; for example, when the model lies between the eyes and the human body, the naked eye cannot judge the model's exact position. To alleviate this problem, rough contour matching is performed before manual coupling, which reduces the error to a certain extent.
The second solution couples by means of the recognition technology of Vuforia. Vuforia can identify a particular image or regular object and present a digital model at a location near it. This places certain demands on the placement of the image or regular object, resulting in a very narrow range of application.
A third solution requires the aid of an optical tracker and additional optical markers. The optical tracker serves as a fixed real-world coordinate system; together with the corresponding optical markers, it can accurately establish the connection between the virtual and real worlds, realize coupling with small error, and track the patient in real time. However, this scheme requires fixing optical markers on the patient and the corresponding equipment, places high demands on the surgical environment, and is difficult to popularize.
A fourth solution is the marker-less coupling method. One approach uses the spatial mapping and gaze interaction functions of the head-mounted device, registers two head-mounted devices into the same world coordinate system by coupling surfaces with the iterative closest point (ICP) algorithm, and realizes digital physical coupling by exploiting the known cone-beam CT coordinates; this is a coupling method for a specific scene. Another approach uses the depth camera data of the head-mounted device and neural-network-based facial feature detection, realizing digital physical coupling through registration of the facial point cloud with the model point cloud; however, this method can only couple the head and cannot be applied to other parts.
Disclosure of Invention
The invention aims to overcome the defect of narrow application range in the prior art and provide a digital physical coupling method and system based on biological invariant features.
The aim of the invention can be achieved by the following technical scheme:
A digital physical coupling method based on biological invariant features, comprising the steps of:
preprocessing a digital model of a human body part;
marking a first digital feature point on the preprocessed digital model;
Marking a second digital feature point on the human body part in reality through a camera of the head-mounted device, wherein the first digital feature point corresponds to the second digital feature point one by one and is positioned in the same virtual world coordinate system, and the virtual world coordinate system is a virtual world coordinate system of the camera;
and transforming the position of the digital model according to the first digital characteristic points and the second digital characteristic points which are in one-to-one correspondence, so as to realize digital physical coupling of the digital model and the human body part.
Further, the marking of the first digital feature point specifically includes:
A focal point is formed with the digital model surface using far-ray or gaze interaction of the head-mounted device, and a first digital feature point on the digital model is generated from coordinates of the focal point.
Further, the process of forming the focus on the digital model by the far-rays is specifically:
And storing the focus position formed by the far rays and the digital model in a far pointer object by using an AR software development tool, and acquiring the current coordinate of the far pointer by triggering a pointer event to serve as the focus coordinate.
Further, the marking of the second digital feature point specifically includes:
An identifiable marker is set on the real human body part, the marker is recognized by computer vision, the coordinate position of the marker relative to the camera is calculated from the recognition result, and this coordinate position is taken as the second digital feature point.
Further, the transforming the position of the digital model is specifically:
According to the coordinate point sets of at least two groups of one-to-one corresponding first digital feature points and second digital feature points in the virtual world coordinate system, a rotation matrix is calculated by a singular value decomposition method, a translation is calculated from the calculated rotation matrix and the center points of the at least two groups of coordinate point sets, the rotation and translation are combined to obtain a 4×4 transformation matrix, and the transformation matrix is used to change the position of the digital model.
Further, the preprocessing of the digital model of the human body part specifically includes:
Importing the digital model into the Unity 3D software on the head-mounted device, and adjusting the digital model's units to be consistent with Unity's units;
The center of the digital model is aligned with the virtual world coordinate system center of the camera of the headset.
Further, the preprocessing of the digital model of the human body part further comprises:
a MeshCollider component is added to the digital model in Unity 3D.
Further, the digital physical coupling method includes: the camera of the head mounted device is calibrated before the camera of the head mounted device marks the second digital feature points.
Further, the camera is a front-facing camera of the head-mounted device.
The invention also provides a digital physical coupling system based on biological invariant features, comprising a memory and a processor, the memory storing a computer program, the processor invoking the computer program to perform the steps of the method as described above.
Compared with the prior art, the invention has the following advantages:
(1) The invention only needs a small number of marking points, can directly work in an operating room without preparation, and has small limitation.
(2) The invention can be used for a plurality of human body parts and has strong universality.
Drawings
Fig. 1 is a flow chart of a digital physical coupling method based on biological invariant features according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
Example 1
Referring to fig. 1, the present embodiment provides a digital physical coupling method based on biological invariant features, which includes the following steps:
S1: preprocessing a digital model of a human body part;
Specifically, importing the digital model into the Unity 3D software on the head-mounted device, and adjusting the digital model's units to be consistent with Unity's units;
aligning a center of the digital model with a virtual world coordinate system center of a camera of the headset;
a MeshCollider component is added to the digital model in Unity 3D.
S2: marking a first digital feature point on the preprocessed digital model;
S3: marking a second digital feature point on the human body part in reality through a camera of the head-mounted device, wherein the first digital feature point corresponds to the second digital feature point one by one and is positioned in the same virtual world coordinate system, and the virtual world coordinate system is a virtual world coordinate system of the camera;
the marking of the first digital characteristic point is specifically as follows:
A focal point is formed with the digital model surface using far-ray or gaze interaction of the head-mounted device, and a first digital feature point on the digital model is generated from coordinates of the focal point.
The process of forming a focus on a digital model by means of far rays is in particular:
And storing the focus position formed by the far rays and the digital model in a far pointer object by using an AR software development tool, and acquiring the current coordinate of the far pointer by triggering a pointer event to serve as the focus coordinate.
The marking of the second digital characteristic points is specifically as follows:
An identifiable marker is set on the real human body part, the marker is recognized by computer vision, the coordinate position of the marker relative to the camera is calculated from the recognition result, and this coordinate position is taken as the second digital feature point.
S4: and transforming the position of the digital model according to the first digital characteristic points and the second digital characteristic points which are in one-to-one correspondence, so as to realize digital physical coupling of the digital model and the human body part.
The conversion of the position of the digital model is specifically:
According to the coordinate point sets of at least two groups of one-to-one corresponding first digital feature points and second digital feature points in the virtual world coordinate system, a rotation matrix is calculated by a singular value decomposition method, a translation is calculated from the calculated rotation matrix and the center points of the at least two groups of coordinate point sets, the rotation and translation are combined to obtain a 4×4 transformation matrix, and the transformation matrix is used to change the position of the digital model.
The head-mounted device employed in this embodiment is HoloLens 2.
Digital models generated by general modeling software are in millimeter units, whereas Unity 3D uses meters; if such a model is imported into Unity 3D directly, it is magnified 1000 times. In addition, the center of a digital model generated by general modeling software deviates somewhat from the center computed automatically by Unity 3D, and if this is not adjusted it has a certain influence on system development. It is therefore necessary to preprocess the digital model before coupling. Step S1 specifically comprises:
S1.1, importing the model: importing the digital model into Unity 3D and adjusting the model's units to be consistent with Unity's;
S1.2, aligning centers: aligning the center of the digital model with the center of the virtual world coordinate system;
S1.3, adding a MeshCollider component to the digital model object.
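The unit conversion and re-centering in S1.1 and S1.2 can be sketched in ordinary code. The following Python sketch is an illustration only, not part of the patented system: the vertex-list representation and the millimeter-to-meter factor of 0.001 are assumptions based on the description above.

```python
def preprocess_vertices(vertices_mm):
    """Convert mesh vertices from millimeters to meters and translate
    them so the bounding-box center sits at the coordinate origin."""
    # S1.1: millimeter -> meter (avoids the 1000x magnification in Unity)
    verts = [[c * 0.001 for c in v] for v in vertices_mm]
    # S1.2: re-center the model on its bounding-box center
    mins = [min(v[i] for v in verts) for i in range(3)]
    maxs = [max(v[i] for v in verts) for i in range(3)]
    center = [(mins[i] + maxs[i]) / 2.0 for i in range(3)]
    return [[v[i] - center[i] for i in range(3)] for v in verts]
```

For example, a cube spanning 0 to 1000 mm on each axis maps to one spanning -0.5 m to 0.5 m, centered at the origin.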
The MeshCollider component gives the digital model surface a physical presence in the engine's logic; only models with such physics can interact in the AR environment, including moving the digital model and forming a focal point where a far ray meets the model surface.
Step S2 specifically comprises: forming a focal point with the surface of the digital model using the far-ray or gaze interaction of HoloLens 2, moving the focal point to the selected feature location, generating a feature point object at the focal position, and binding the feature point object to the digital model so that, once generated, it moves together with the digital model;
The specific method for acquiring the focal coordinates is: using the MRTK software development kit, the focal position formed by the far ray and other objects is stored in a far-pointer object; the current coordinates of the far pointer are obtained by triggering a pointer event, and a digital feature point is generated from these coordinates;
Step S3 specifically comprises: placing an identifiable marker at the position on the human body corresponding to the feature point, recognizing the marker by computer vision, calculating the pose (position and angle) of the marker relative to the camera, converting that pose into coordinates, and generating a feature point object at the resulting coordinate position.
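The coordinate generation in step S3 amounts to composing two rigid transforms: the marker-to-camera pose reported by the tracker and the camera-to-world pose maintained by the headset. A minimal Python illustration follows; the 4×4 row-major matrices are hypothetical values chosen for demonstration, not actual HoloLensARToolKit or HoloLens output.

```python
def transform_point(T, p):
    """Apply a 4x4 homogeneous transform (row-major nested lists) to a 3D point."""
    x, y, z = p
    return [T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3] for i in range(3)]

def marker_world_position(camera_to_world, marker_to_camera):
    """Virtual-world coordinate of the marker center (the marker frame's
    origin), used as a second digital feature point."""
    in_camera = transform_point(marker_to_camera, [0.0, 0.0, 0.0])
    return transform_point(camera_to_world, in_camera)
```

For example, a marker 0.5 m in front of the camera while the camera sits 1 m above the world origin yields a feature point at (0, 1, 0.5) in world coordinates.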
The tools used to identify the marker points are the HoloLensARToolKit software development kit and the HoloLens 2 front-facing camera.
The identifiable marker selected is a 40 mm × 40 mm square black-and-white marker matched to the HoloLensARToolKit software development kit.
In order for the feature points to be generated accurately at the marker center, the front-facing camera of HoloLens 2 must be calibrated once, and the real camera parameters calculated and configured into the HoloLensARToolKit software development kit.
The feature points on the digital model and the feature points on the human body model are in one-to-one correspondence and are in the same virtual world coordinate system, and the virtual world coordinate system is automatically established and maintained stable by HoloLens 2.
Step S4 specifically comprises: according to the two one-to-one corresponding coordinate point sets in the same coordinate system, calculating a rotation matrix by singular value decomposition, calculating a translation from the rotation matrix and the center points of the two point sets, combining them into a 4×4 transformation matrix, and using this transformation matrix to change the position of the digital model, thereby realizing digital physical coupling.
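The transform in step S4 is the classical least-squares rigid alignment computed by singular value decomposition (the Kabsch method). The sketch below is an illustration of that named technique with NumPy, not the system's actual code; note that with only two point pairs the rotation about the axis through them is underdetermined, so the example uses four non-collinear pairs.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (rotation + translation) mapping the
    src point set onto dst, via SVD (Kabsch). Returns a 4x4 matrix."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    c_src = src.mean(axis=0)            # center points of the two sets
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)  # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                      # rotation from SVD factors
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src               # translation from the two centers
    T = np.eye(4)                       # combine into a 4x4 transform
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

The determinant check matters in practice: with noisy feature points the unguarded SVD solution can return an improper rotation that mirrors the model.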
The present embodiment also provides a digital physical coupling system based on biological invariant features, comprising a memory storing a computer program and a processor invoking the computer program to perform the steps of the method as described above.
The foregoing describes in detail preferred embodiments of the present invention. It should be understood that numerous modifications and variations can be made in accordance with the concepts of the invention by one of ordinary skill in the art without undue burden. Therefore, all technical solutions which can be obtained by logic analysis, reasoning or limited experiments based on the prior art by the person skilled in the art according to the inventive concept shall be within the scope of protection defined by the claims.

Claims (7)

1. A digital physical coupling method based on biological invariant features, comprising the steps of:
preprocessing a digital model of a human body part;
marking a first digital feature point on the preprocessed digital model;
Marking a second digital feature point on the human body part in reality through a camera of the head-mounted device, wherein the first digital feature point corresponds to the second digital feature point one by one and is positioned in the same virtual world coordinate system, and the virtual world coordinate system is a virtual world coordinate system of the camera;
according to the first digital characteristic points and the second digital characteristic points which are in one-to-one correspondence, the positions of the digital model are transformed, so that digital physical coupling between the digital model and the human body part is realized;
the marking of the first digital characteristic point is specifically as follows:
forming a focus with the surface of the digital model by utilizing far-ray or staring interaction of the head-mounted equipment, and generating a first digital characteristic point on the digital model according to the coordinates of the focus;
The marking of the second digital characteristic points is specifically as follows:
Setting an identifiable marker on the real human body part, recognizing the marker by computer vision, calculating the coordinate position of the marker relative to the camera from the recognition result, and taking this coordinate position as the second digital feature point;
the conversion of the position of the digital model is specifically:
According to the coordinate point sets of at least two groups of one-to-one corresponding first digital feature points and second digital feature points in the virtual world coordinate system, a rotation matrix is calculated by a singular value decomposition method, a translation is calculated from the calculated rotation matrix and the center points of the at least two groups of coordinate point sets, the rotation and translation are combined to obtain a 4×4 transformation matrix, and the transformation matrix is used to change the position of the digital model.
2. The method for digital physical coupling based on biological invariant features according to claim 1, wherein the process of forming the focal point on the digital model by means of far rays is specifically:
And storing the focus position formed by the far rays and the digital model in a far pointer object by using an AR software development tool, and acquiring the current coordinate of the far pointer by triggering a pointer event to serve as the focus coordinate.
3. The digital physical coupling method based on the biological invariant feature of claim 1, wherein the preprocessing of the digital model of the human body part is specifically:
Importing the digital model into the Unity 3D software on the head-mounted device, and adjusting the digital model's units to be consistent with Unity's units;
The center of the digital model is aligned with the virtual world coordinate system center of the camera of the headset.
4. A digital physical coupling method based on biological invariant features of claim 3, wherein said preprocessing the digital model of the human body portion further comprises:
a MeshCollider component is added to the digital model in Unity 3D.
5. The digital physical coupling method based on biological invariant features of claim 1, wherein said digital physical coupling method comprises: the camera of the head mounted device is calibrated before the camera of the head mounted device marks the second digital feature points.
6. The method of digital physical coupling based on biological invariant features of claim 1 wherein said camera is a front-facing camera of a head-mounted device.
7. A digital physical coupling system based on biological invariant features, comprising a memory and a processor, the memory storing a computer program, the processor invoking the computer program to perform the steps of the method according to any of claims 1 to 6.
CN202110760876.6A 2021-07-06 2021-07-06 Digital physical coupling method and system based on biological invariant feature Active CN113610986B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110760876.6A CN113610986B (en) 2021-07-06 2021-07-06 Digital physical coupling method and system based on biological invariant feature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110760876.6A CN113610986B (en) 2021-07-06 2021-07-06 Digital physical coupling method and system based on biological invariant feature

Publications (2)

Publication Number Publication Date
CN113610986A (en) 2021-11-05
CN113610986B (en) 2024-04-23

Family

ID=78304087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110760876.6A Active CN113610986B (en) 2021-07-06 2021-07-06 Digital physical coupling method and system based on biological invariant feature

Country Status (1)

Country Link
CN (1) CN113610986B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533719A (en) * 2019-04-23 2019-12-03 以见科技(上海)有限公司 Augmented reality localization method and device based on environmental visual Feature point recognition technology
CN112017302A (en) * 2020-08-29 2020-12-01 南京翱翔智能制造科技有限公司 Real-time registration method of projection mark and machine vision based on CAD model

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10242456B2 (en) * 2011-06-23 2019-03-26 Limitless Computing, Inc. Digitally encoded marker-based augmented reality (AR)
US9264702B2 (en) * 2013-08-19 2016-02-16 Qualcomm Incorporated Automatic calibration of scene camera for optical see-through head mounted display

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110533719A (en) * 2019-04-23 2019-12-03 以见科技(上海)有限公司 Augmented reality localization method and device based on environmental visual Feature point recognition technology
CN112017302A (en) * 2020-08-29 2020-12-01 南京翱翔智能制造科技有限公司 Real-time registration method of projection mark and machine vision based on CAD model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An AR algorithm based on natural features; Qin Xuezhou; Xing Guanyu; Modern Computer (05); full text *
Research and implementation of a CPS virtual assembly system in an augmented reality environment; Han Feng; Zhang Heng; Zhu Lei; Liu Hu; Journal of Applied Optics (03); full text *

Also Published As

Publication number Publication date
CN113610986A (en) 2021-11-05

Similar Documents

Publication Publication Date Title
Harders et al. Calibration, registration, and synchronization for high precision augmented reality haptics
Champleboux et al. Accurate calibration of cameras and range imaging sensor: the NPBS method
CN108388341B (en) Man-machine interaction system and device based on infrared camera-visible light projector
CN109700550A (en) A kind of augmented reality method and device for dental operation
CN108305321B (en) Three-dimensional human hand 3D skeleton model real-time reconstruction method and device based on binocular color imaging system
CN109344714A (en) One kind being based on the matched gaze estimation method of key point
CN113505694B (en) Man-machine interaction method and device based on sight tracking and computer equipment
Moser et al. Evaluation of user-centric optical see-through head-mounted display calibration using a leap motion controller
WO2020145826A1 (en) Method and assembly for spatial mapping of a model, such as a holographic model, of a surgical tool and/or anatomical structure onto a spatial position of the surgical tool respectively anatomical structure, as well as a surgical tool
JPH0351407B2 (en)
WO2019136588A1 (en) Cloud computing-based calibration method, device, electronic device, and computer program product
Summers et al. Calibration for augmented reality experimental testbeds
Cao et al. Camera calibration using symmetric objects
Wang et al. Pose determination of human faces by using vanishing points
CN107993227B (en) Method and device for acquiring hand-eye matrix of 3D laparoscope
CN113610986B (en) Digital physical coupling method and system based on biological invariant feature
CN110310328B (en) Mixed reality operation registration method and device
Mischke et al. Recovering projection geometry: How a cheap camera can outperform an expensive stereo system
JP6210447B2 (en) Line-of-sight measuring device, method of displaying a gaze point, method of displaying a gaze region, and method of displaying a Gaussian distribution of a gaze point
CN114356078B (en) Person intention detection method and device based on fixation target and electronic equipment
Gelšvartas et al. Projection mapping user interface for disabled people
Genc et al. Optical see-through calibration with vision-based trackers: Propagation of projection matrices
CN113971835A (en) Control method and device of household appliance, storage medium and terminal device
Knoerlein et al. Comparison of tracker-based to tracker-less haptic device calibration
Zhang et al. [Poster] an accurate calibration method for optical see-through head-mounted displays based on actual eye-observation model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant