CN109102533A - Feature point positioning method based on mixed reality - Google Patents

Feature point positioning method based on mixed reality

Info

Publication number
CN109102533A
CN109102533A
Authority
CN
China
Prior art keywords
mixed reality
model
point cloud
characteristic point
human body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810630885.1A
Other languages
Chinese (zh)
Inventor
邱兆文
张健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heilongjiang Tuomeng Technology Co Ltd
Original Assignee
Heilongjiang Tuomeng Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Heilongjiang Tuomeng Technology Co Ltd filed Critical Heilongjiang Tuomeng Technology Co Ltd
Priority to CN201810630885.1A priority Critical patent/CN109102533A/en
Publication of CN109102533A publication Critical patent/CN109102533A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The present invention relates to a feature point positioning method based on mixed reality, comprising the following steps: (1) collecting human body feature points and recording point cloud data; (2) generating a two-dimensional code from the feature point cloud data; (3) recognizing the two-dimensional code with the camera carried by the mixed reality device to obtain the feature point cloud model; (4) performing feature comparison against the model in the mixed reality device, registering the two and computing the model error, to determine the similarity between the human body and the model. The invention constructs point cloud data from human body feature points, generates a two-dimensional code recognizable by the mixed reality device, and compares the decoded data with the model pre-stored in the device, thereby determining whether the model corresponds to this person's information and solving the problem of matching a mixed reality model to patient identity information.

Description

Feature point positioning method based on mixed reality
Technical field
The invention belongs to the fields of image recognition and medical image processing, and relates to a feature point positioning method based on mixed reality.
Background art
The mixed reality device is Microsoft's first untethered holographic computer; it lets users interact with digital content and with holograms placed in the surrounding real environment. Mixed reality currently enables preoperative simulation and planning as well as cross-space remote surgical interaction, thereby supporting precise surgery and greatly reducing operative risk. When one set of mixed reality devices serves multiple patients, the mixed reality model must be matched to the correct patient's identity information; if they cannot be matched correctly, the consequences are hard to predict. The present invention solves this problem by means of a feature point positioning method based on mixed reality.
Summary of the invention
The object of the present invention is to provide a feature point positioning method based on mixed reality that solves the problem of matching a mixed reality model to patient identity information.
The present invention is achieved through the following technical solution: a feature point positioning method based on mixed reality, the method comprising the following steps:
(1) collecting human body feature points and recording point cloud data;
(2) generating a two-dimensional code from the feature point cloud data;
(3) recognizing the two-dimensional code with the camera carried by the mixed reality device to obtain the feature point cloud model;
(4) performing feature comparison against the model in the mixed reality device, registering the two and computing the model error, to determine the similarity between the human body and the model.
Further, in step (1), the point cloud data is constructed by establishing a Cartesian coordinate system, choosing an arbitrary origin, and recording the coordinate information of each feature point to form the point cloud data.
Further, in the feature comparison of step (4), the comparison image uploaded by the mixed reality device is three-dimensional and the comparison image generated from the feature points is likewise three-dimensional; the error between the two groups of 3D image models is computed by point cloud registration and used to judge whether the human body information matches the model in the mixed reality device.
Further, the point cloud registration method is the ICP algorithm.
Further, the mixed reality device is Microsoft HoloLens glasses.
The above technical solution has the following positive effect: the present invention constructs point cloud data from human body feature points, generates a two-dimensional code recognizable by the mixed reality device, and compares the decoded data with the model pre-stored in the mixed reality device, thereby determining whether the model corresponds to this person's information and solving the problem of matching a mixed reality model to patient identity information.
Brief description of the drawings
Fig. 1 is a schematic diagram of the facial image feature point distribution;
Fig. 2 is the three-dimensional face model of the patient stored in the mixed reality device;
Fig. 3 is a schematic diagram of the feature point positioning method based on mixed reality of the invention;
Fig. 4 is a schematic diagram of the ICP registration result;
Fig. 5 is a schematic diagram of the registration error calculation.
Specific embodiments
The technical solution of the present invention is further described below with reference to the accompanying drawings, but this should not be construed as limiting the invention:
A feature point positioning method based on mixed reality comprises the following steps:
(1) collecting human body feature points and recording point cloud data;
(2) generating a two-dimensional code from the feature point cloud data;
(3) recognizing the two-dimensional code with the camera carried by the mixed reality device to obtain the feature point cloud model;
(4) performing feature comparison against the model in the mixed reality device, registering the two and computing the model error, to determine the similarity between the human body and the model.
Further, in step (1), the point cloud data is constructed by establishing a Cartesian coordinate system, choosing an arbitrary origin, and recording the coordinate information of each feature point to form the point cloud data.
Further, in the feature comparison of step (4), the comparison image uploaded by the mixed reality device is three-dimensional and the comparison image generated from the feature points is likewise three-dimensional; the error between the two groups of 3D image models is computed by point cloud registration and used to judge whether the human body information matches the model in the mixed reality device.
Further, the point cloud registration method is the ICP algorithm.
Further, the mixed reality device is Microsoft HoloLens glasses.
Embodiment 1
A method for extracting facial image feature point data comprises:
Step 1: Locate the face with 90 feature points, distributed as follows: 18 points mark the mouth, 14 the lower jaw, 12 the eyes, 6 the eyebrows, 4 the cheeks, 10 the nose, 4 the back of the neck, 10 the ears, and 12 the hair, as shown in Fig. 1.
Step 2: Establish a Cartesian coordinate system, choose an arbitrary origin, and record the coordinate information of each feature point to build the point cloud data. The point cloud data is a coordinate array with 90 rows and 3 columns; the 3 columns correspond to the x, y, z coordinate values, and each row represents a different feature point.
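As an illustration of Step 2, the 90 x 3 coordinate array can be built as follows. This is a minimal sketch: the region keys, the NumPy representation, and the synthetic coordinates are assumptions for illustration, not part of the patent.

```python
import numpy as np

# Per-region point counts from Embodiment 1 (they sum to 90);
# the dictionary keys themselves are illustrative names.
REGION_COUNTS = {
    "mouth": 18, "lower_jaw": 14, "eyes": 12, "eyebrows": 6,
    "cheeks": 4, "nose": 10, "back_of_neck": 4, "ears": 10, "hair": 12,
}

def build_point_cloud(coords):
    """Stack the (x, y, z) coordinates of the 90 feature points,
    measured from an arbitrary origin, into a 90 x 3 array."""
    cloud = np.asarray(coords, dtype=float)
    if cloud.shape != (90, 3):
        raise ValueError("expected 90 feature points with x, y, z columns")
    return cloud

# Synthetic coordinates standing in for real measurements:
rng = np.random.default_rng(0)
cloud = build_point_cloud(rng.normal(size=(90, 3)))
print(cloud.shape)  # (90, 3)
```

Each row is one feature point; the row order would follow whatever region ordering the acquisition step uses.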
Step 3: Convert the point cloud data into a two-dimensional code for storage, so that the mixed reality device can scan and recognize it.
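Step 3 does not specify a serialization format. The sketch below assumes one possible encoding: compress the coordinate array into a compact text payload, which a QR library (for example the third-party `qrcode` package, not used here) could then render as a scannable two-dimensional code.

```python
import base64
import json
import zlib

import numpy as np

def cloud_to_payload(cloud):
    """Serialize a 90 x 3 point cloud into a compact ASCII payload
    suitable for embedding in a two-dimensional code."""
    raw = json.dumps(np.round(cloud, 3).tolist()).encode("ascii")
    return base64.b64encode(zlib.compress(raw)).decode("ascii")

def payload_to_cloud(payload):
    """Inverse of cloud_to_payload: recover the coordinate array."""
    raw = zlib.decompress(base64.b64decode(payload))
    return np.array(json.loads(raw))

cloud = np.arange(270).reshape(90, 3) / 10.0
restored = payload_to_cloud(cloud_to_payload(cloud))
print(np.allclose(cloud, restored, atol=1e-3))  # True
```

Rounding to three decimals keeps the payload small enough for a dense QR symbol; the precision is an assumed trade-off, not specified by the patent.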
Embodiment 2
A feature point positioning method based on mixed reality comprises:
Step 1: The doctor wears the mixed reality device and scans the two-dimensional code carrying the patient's features to obtain the feature point cloud model; the process is shown in Fig. 3.
Step 2: Name the patient feature point cloud p and the model point cloud q, and compute the centre of gravity of each cloud: for a cloud of N points x_i, the centre of gravity is μ = (1/N) Σ_i x_i.
Then construct the cross-covariance matrix from the centres of gravity just obtained: Σ_pq = (1/N) Σ_i (p_i − μ_p)(q_i − μ_q)^T.
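The centre-of-gravity and cross-covariance computations of Step 2 can be sketched as follows, assuming the standard definitions (the patent's formula images are not reproduced in this text):

```python
import numpy as np

def centroid(cloud):
    """Centre of gravity of a point cloud: the mean over all points."""
    return cloud.mean(axis=0)

def cross_covariance(p, q):
    """3 x 3 cross-covariance of clouds p and q about their centroids;
    this is the matrix the subsequent ICP step builds its 4 x 4
    symmetric matrix from."""
    pc = p - centroid(p)
    qc = q - centroid(q)
    return pc.T @ qc / len(p)

rng = np.random.default_rng(1)
p = rng.normal(size=(90, 3))
q = p + np.array([1.0, 2.0, 3.0])  # q is p under a pure translation
print(np.allclose(centroid(q) - centroid(p), [1.0, 2.0, 3.0]))  # True
```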
Step 3: Next, enter the ICP algorithm to perform model registration. ICP can match two data sets with different numbers of points; the implementation uses the covariance matrix to construct a 4 × 4 symmetric matrix.
The eigenvector corresponding to the largest eigenvalue of this symmetric matrix yields the rotation parameters, from which the rotation R and the translation T of the whole rigid-body transform are obtained (R is the optimal rotation and T is the optimal translation vector). Applying the resulting rotation and translation registers the two images, after which the error can be computed; the registration result is shown in Fig. 4.
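Step 3's construction, building a 4 × 4 symmetric matrix from the cross-covariance and taking the eigenvector of its largest eigenvalue as the rotation quaternion, is the quaternion-based rigid registration used inside classic ICP (the Horn / Besl-McKay formulation). A self-contained sketch on synthetic data, assuming exact point correspondences:

```python
import numpy as np

def rigid_registration(p, q):
    """Estimate the rigid transform (R, T) with q_i ~= R @ p_i + T:
    build the 4 x 4 symmetric matrix N from the cross-covariance S and
    take the eigenvector of its largest eigenvalue as a unit quaternion."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    S = (p - pc).T @ (q - qc)          # 3 x 3 cross-covariance (unnormalized)
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    N = np.empty((4, 4))
    N[0, 0] = np.trace(S)
    N[0, 1:] = N[1:, 0] = delta
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    eigvals, eigvecs = np.linalg.eigh(N)
    q0, qx, qy, qz = eigvecs[:, np.argmax(eigvals)]   # unit quaternion
    R = np.array([
        [q0*q0 + qx*qx - qy*qy - qz*qz, 2*(qx*qy - q0*qz), 2*(qx*qz + q0*qy)],
        [2*(qx*qy + q0*qz), q0*q0 - qx*qx + qy*qy - qz*qz, 2*(qy*qz - q0*qx)],
        [2*(qx*qz - q0*qy), 2*(qy*qz + q0*qx), q0*q0 - qx*qx - qy*qy + qz*qz],
    ])
    T = qc - R @ pc
    return R, T

# Recover a known rigid transform from synthetic clouds:
rng = np.random.default_rng(2)
p = rng.normal(size=(90, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
T_true = np.array([0.5, -1.0, 2.0])
q = p @ R_true.T + T_true
R, T = rigid_registration(p, q)
print(np.allclose(R, R_true), np.allclose(T, T_true))  # True True
```

A full ICP would alternate this closed-form solve with nearest-neighbour re-matching until the error converges; the single solve above is the inner step the embodiment describes.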
Step 4: The error calculation is illustrated next. As shown in Fig. 5, MODEL is the p point cloud and DATA is the q point cloud; mid is the index of the middle point of the DATA point cloud, and the table iclosest records, for each point of the DATA cloud, the index of its matched point in the MODEL point cloud.
Step 5: For the registration points in the cloud, compute the Euclidean distance from the MODEL points to the middle-position point, so that within a window the Euclidean distance from any point to the DATA point cloud is no larger than the distance this point generates. Likewise compute the minimum Euclidean distance for each matched point in the range mid ± n, accumulate these distances, and return the accumulated result as the error.
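Steps 4 and 5 can be sketched as a windowed nearest-neighbour error. The window half-width `n` and the function name are assumptions (the patent leaves them unspecified), and a brute-force nearest-neighbour search stands in for the `iclosest` index table:

```python
import numpy as np

def registration_error(model, data, n=10):
    """For each DATA point within n indices of the middle point, take
    the minimum Euclidean distance to any MODEL point, and return the
    accumulated sum as the registration error."""
    mid = len(data) // 2
    lo, hi = max(0, mid - n), min(len(data), mid + n + 1)
    total = 0.0
    for point in data[lo:hi]:
        total += np.linalg.norm(model - point, axis=1).min()
    return total

model = np.arange(270, dtype=float).reshape(90, 3)
print(registration_error(model, model))            # 0.0 for identical clouds
print(registration_error(model, model + 1.0) > 0)  # True
```

An error near zero would indicate that the scanned patient's feature cloud matches the stored model; a large accumulated distance flags a mismatch.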
The present invention constructs point cloud data from human body feature points, generates a two-dimensional code recognizable by the mixed reality device, and compares the decoded data with the model pre-stored in the mixed reality device, thereby determining whether the model corresponds to this person's information and solving the problem of matching a mixed reality model to patient identity information.

Claims (5)

1. A feature point positioning method based on mixed reality, characterized in that the method comprises the following steps:
(1) collecting human body feature points and recording point cloud data;
(2) generating a two-dimensional code from the feature point cloud data;
(3) recognizing the two-dimensional code with the camera carried by the mixed reality device to obtain the feature point cloud model;
(4) performing feature comparison against the model in the mixed reality device, registering the two and computing the model error, to determine the similarity between the human body and the model.
2. The feature point positioning method based on mixed reality according to claim 1, characterized in that: in step (1), the point cloud data is constructed by establishing a Cartesian coordinate system, choosing an arbitrary origin, and recording the coordinate information of the feature points to form the point cloud data.
3. The feature point positioning method based on mixed reality according to claim 1, characterized in that: in the feature comparison of step (4), the comparison image uploaded by the mixed reality device is three-dimensional and the comparison image generated from the feature points is likewise three-dimensional; the error between the two groups of 3D image models is computed by point cloud registration and used to judge whether the human body information matches the model in the mixed reality device.
4. The feature point positioning method based on mixed reality according to claim 3, characterized in that: the point cloud registration method is the ICP algorithm.
5. The feature point positioning method based on mixed reality according to any one of claims 1-4, characterized in that: the mixed reality device is Microsoft HoloLens glasses.
CN201810630885.1A 2018-06-19 2018-06-19 Feature point positioning method based on mixed reality Pending CN109102533A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810630885.1A CN109102533A (en) 2018-06-19 2018-06-19 Feature point positioning method based on mixed reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810630885.1A CN109102533A (en) 2018-06-19 2018-06-19 Feature point positioning method based on mixed reality

Publications (1)

Publication Number Publication Date
CN109102533A true CN109102533A (en) 2018-12-28

Family

ID=64796949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810630885.1A Pending CN109102533A (en) 2018-06-19 2018-06-19 Feature point positioning method based on mixed reality

Country Status (1)

Country Link
CN (1) CN109102533A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400119A (en) * 2013-07-31 2013-11-20 南京融图创斯信息科技有限公司 Face recognition technology-based mixed reality spectacle interactive display method
CN103854270A (en) * 2012-11-28 2014-06-11 广州医学院第一附属医院 CT and MR inter-machine three dimensional image fusion registration method and system
CN103971079A (en) * 2013-01-28 2014-08-06 腾讯科技(深圳)有限公司 Augmented reality implementation method and device of two-dimensional code
CN104376334A (en) * 2014-11-12 2015-02-25 上海交通大学 Pedestrian comparison method based on multi-scale feature fusion
CN105188516A (en) * 2013-03-11 2015-12-23 奇跃公司 System and method for augmented and virtual reality
CN105469042A (en) * 2015-11-20 2016-04-06 天津汉光祥云信息科技有限公司 Improved face image comparison method
CN106248035A (en) * 2016-08-19 2016-12-21 苏州大学 The method and system that a kind of surface profile based on point cloud model accurately detects
CN106973569A (en) * 2014-05-13 2017-07-21 Pcp虚拟现实股份有限公司 Generation and the playback multimedia mthods, systems and devices of virtual reality
CN107223269A (en) * 2016-12-29 2017-09-29 深圳前海达闼云端智能科技有限公司 Three-dimensional scene positioning method and device
CN107678537A (en) * 2017-09-04 2018-02-09 全球能源互联网研究院有限公司 Assembly manipulation, the method and apparatus of simulation assembling are identified in augmented reality environment
CN107798702A (en) * 2016-08-30 2018-03-13 成都理想境界科技有限公司 A kind of realtime graphic stacking method and device for augmented reality
US20180103213A1 (en) * 2016-10-06 2018-04-12 Fyusion, Inc. Live style transfer on a mobile device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao Chunyu (赵春雨): "3D Reconstruction Based on Complex Point Clouds", China Master's Theses Full-text Database, Information Science and Technology *

Similar Documents

Publication Publication Date Title
US20200192464A1 (en) Gesture recognition using multi-sensory data
Gupta et al. Texas 3D face recognition database
JP4950787B2 (en) Image processing apparatus and method
WO2021052375A1 (en) Target image generation method, apparatus, server and storage medium
CN110503703A (en) Method and apparatus for generating image
US20210158028A1 (en) Systems and methods for human pose and shape recovery
JP4692526B2 (en) Gaze direction estimation apparatus, gaze direction estimation method, and program for causing computer to execute gaze direction estimation method
CN108140105A (en) Head-mounted display with countenance detectability
WO2019075666A1 (en) Image processing method and apparatus, terminal, and storage medium
WO2005020030A2 (en) Multi-modal face recognition
CN103593870A (en) Picture processing device and method based on human faces
CN111325846B (en) Expression base determination method, avatar driving method, device and medium
CN108388889B (en) Method and device for analyzing face image
JP4936491B2 (en) Gaze direction estimation apparatus, gaze direction estimation method, and program for causing computer to execute gaze direction estimation method
CN114219878A (en) Animation generation method and device for virtual character, storage medium and terminal
CN107749084A (en) A kind of virtual try-in method and system based on 3-dimensional reconstruction technology
CN112232128B (en) Eye tracking based method for identifying care needs of old disabled people
Lan et al. The application of 3D morphable model (3DMM) for real-time visualization of acupoints on a smartphone
CN108717730B (en) 3D character reconstruction method and terminal
Zhang et al. 3D statistical head modeling for face/head-related product design: a state-of-the-art review
JP2022074153A (en) System, program and method for measuring jaw movement of subject
JP4682372B2 (en) Gaze direction detection device, gaze direction detection method, and program for causing computer to execute gaze direction detection method
CN109124765A (en) Application method of the assisting in diagnosis and treatment system in neurosurgery based on mixed reality
CN111597926A (en) Image processing method and device, electronic device and storage medium
CN109102533A (en) A kind of characteristic point positioning method based on mixed reality

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20181228