CN114926613A - Method and system for enhancing reality of human body data and space positioning - Google Patents

Info

Publication number
CN114926613A
Authority
CN
China
Prior art keywords
module
camera
sensor
augmented reality
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210694903.9A
Other languages
Chinese (zh)
Inventor
蔡明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fast Diagnostics Nantong Digital Technology Co Ltd
Original Assignee
Fast Diagnostics Nantong Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fast Diagnostics Nantong Digital Technology Co Ltd filed Critical Fast Diagnostics Nantong Digital Technology Co Ltd
Priority to CN202210694903.9A priority Critical patent/CN114926613A/en
Publication of CN114926613A publication Critical patent/CN114926613A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Image registration using feature-based methods
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method and a system for human body data and spatial positioning augmented reality, belonging to the technical field of spatially positioned augmented reality. The system comprises a sensor input module, a three-dimensional registration module connected with the sensor input module, an image rendering module connected with the three-dimensional registration module, a virtual information module and a human-computer interaction module connected with the image rendering module, and an output display module connected with the image rendering module. Information is acquired by a camera or video-camera sensor input module and then input to the three-dimensional registration module for three-dimensional spatial positioning; the output pose is input to the image rendering module, and preset virtual information is output to a display screen. Spatial positioning is performed through the three-dimensional registration module, and virtual-real matching is performed through the rendering module to carry out augmented reality rendering.

Description

Method and system for augmented reality of human body data and space positioning
Technical Field
The invention relates to a human body data and spatial positioning augmented reality system, and in particular to a method and a system for human body data and spatial positioning augmented reality.
Background
Augmented reality is one of the important fields of modern scientific and technological development. It realizes interaction between virtual space and real space by computing the camera pose in real time and overlaying corresponding preset information on the image acquired by the camera. At present, augmented reality technology is widely applied in education, entertainment, medical treatment and many other fields.
An augmented reality system mainly comprises a positioning engine, a display module and an interaction module. The positioning engine locates the device in three-dimensional space from the data acquired by the sensors and outputs a pose (position and rotation angle); the display module then receives the pose and renders virtual information at the specified position; finally, the user carries out virtual-real interaction through the interaction module. The positioning engine is thus the foundation of an augmented reality system.
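For illustration only (this sketch is not part of the original disclosure), a minimal Python representation of such a pose, position plus rotation, and how it maps points from the device frame into the world frame; the rotation-matrix representation is an assumed choice:

```python
# Minimal pose sketch: position plus rotation, as a positioning engine
# might output it. An illustrative assumption, not the patent's data format.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Pose:
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))  # x, y, z
    rotation: np.ndarray = field(default_factory=lambda: np.eye(3))    # 3x3 rotation matrix

    def transform(self, points: np.ndarray) -> np.ndarray:
        """Map points (N x 3) from the device frame into the world frame."""
        return points @ self.rotation.T + self.position
```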
Existing implementations achieve an augmented reality effect by mounting or drawing virtual objects on video images, mainly using a single camera with AR technology to recognize and track the motion of a single human body. In interactive scenes where multiple people must undergo behavior analysis simultaneously, the camera can only focus on one human object at a given moment, so the interactive scenes that can be realized overall are rather limited, and the augmented reality effect is also unsatisfactory.
Disclosure of Invention
The purpose of the invention is as follows: a method and a system for human body data and spatial positioning augmented reality, to solve the problems existing in the prior art described above.
The technical scheme is as follows: a human body data and spatial positioning augmented reality system comprises a sensor input module, a three-dimensional registration module connected with the sensor input module, an image rendering module connected with the three-dimensional registration module, a virtual information module and a human-computer interaction module connected with the image rendering module, and an output display module connected with the image rendering module;
information is acquired by a camera or video-camera sensor input module and then input to the three-dimensional registration module for three-dimensional spatial positioning, the output pose is input to the image rendering module, and preset virtual information is output to a display screen;
and spatial positioning is performed through the three-dimensional registration module, and virtual-real matching is performed through the rendering module to carry out augmented reality rendering.
In a further example, the human-computer interaction module can interact with a virtual model or with information in the real scene through any one of a mouse, a keyboard, a numerical control panel, gesture recognition and voice recognition;
the human-computer interaction module has at least three modes, including a command interaction mode, a spatial interaction mode and a tool interaction mode.
In a further example, the three-dimensional registration module includes a hardware sensor component, a computer vision component connected to the hardware sensor component, and a hybrid tracking component connected to the computer vision component.
In a further example, the computer vision component includes artificial marker tracking, natural marker tracking, and simultaneous localization and mapping (SLAM); the pose of the camera or video camera in three-dimensional space is estimated through images.
In a further example, the hybrid tracking component is formed by combining a visual sensor with an electromagnetic sensor, a visual sensor with an inertial sensor, and a visual sensor with a global positioning system.
In a further example, the command interaction mode takes a preset action or state as an input command, each command corresponding to a specific operation on a virtual object; when the user makes a specific gesture or utters a specific voice command, the corresponding command is triggered;
the spatial interaction mode selects, through accurate positioning, points at the positions of the virtual objects in the scene within the real three-dimensional space, and realizes real-time positioning through the positions of those three-dimensional points;
the tool interaction mode realizes interaction through hardware interaction equipment, whose recognized actions serve as input commands.
In a further example, the hardware sensor module includes a mechanical sensor, a global positioning system coupled to the mechanical sensor, an inertial sensor coupled to the global positioning system, and an electromagnetic sensor coupled to the inertial sensor.
In a further example, the method comprises the following steps:
step 1, acquiring an image through a camera or video camera;
step 2, extracting features from the image to generate corresponding feature descriptions, then sequentially registering the features of consecutive images;
step 3, identifying the feature point information of the scene, and recovering the motion of the camera or video camera by inverse calculation from the changes between images;
step 4, converting the read-in depth map of the acquired environmental data and feature information into a three-dimensional point cloud by 3D calculation, and computing a normal vector for each point (see the sketch after this list);
step 5, using the ICP point cloud registration algorithm, registering the point cloud with normal vectors against the point cloud projected from the model by a ray casting algorithm according to the pose of the previous frame;
step 6, fusing the point cloud of the current frame into the mesh model according to the camera position;
and step 7, using the camera position calculated from the current image, projecting the point cloud at the current view angle from the model through the ray casting algorithm, and computing its normal vectors for registering the next input image.
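As an illustration of step 4, a minimal Python sketch of back-projecting a depth map into a three-dimensional point cloud and estimating a normal vector for each point; the pinhole intrinsics fx, fy, cx and cy are assumed placeholders, not values from the patent:

```python
# Minimal step-4 sketch: depth image (metres) -> point cloud -> per-point
# normals, assuming a pinhole camera model. Illustrative only.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W) into an H x W x 3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack((x, y, z), axis=-1)

def estimate_normals(points):
    """Per-pixel normals from central differences of neighbouring points."""
    dx = np.gradient(points, axis=1)   # derivative along image columns
    dy = np.gradient(points, axis=0)   # derivative along image rows
    n = np.cross(dx, dy)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.clip(norm, 1e-8, None)   # unit normals, H x W x 3
```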
Beneficial effects: the invention discloses an augmented reality system for human body data and spatial positioning that uses a three-dimensional tracking module and adopts different sensors for tracking and positioning according to the type of positioning, thereby ensuring the universality of the system. A hybrid tracking mode is designed in which tracking and positioning are realized by fusing multiple groups of sensors; combining hardware sensors with computer vision gives the system a wider operating range.
Drawings
FIG. 1 is a block diagram of the augmented reality system of the present invention;
FIG. 2 is a flow chart of spatial positioning according to the present invention;
FIG. 3 is a schematic diagram of feature matching of neighboring images according to the present invention.
Detailed Description
In the prior art addressed by the present method and system for human body data and spatial positioning augmented reality, an augmented reality effect is achieved by mounting or drawing virtual objects on a video image, and AR technology is mainly applied with a single camera to recognize and track the motion of a single human body. In scenes where multiple people must undergo behavior-analysis interaction simultaneously, the camera can only focus on one human object at a given moment, so the interaction scenes that can be realized overall are limited and the augmented reality effect is poor. The present invention is further illustrated by the following embodiments in combination with the accompanying drawings.
To realize spatial positioning, the system must solve the problem of "where am I" within a map, mainly through two kinds of tracking and positioning: hardware sensing and computer vision. Specifically, the invention includes a sensor input module, a three-dimensional registration module connected with the sensor input module, an image rendering module connected with the three-dimensional registration module, a virtual information module and a human-computer interaction module connected with the image rendering module, and an output display module connected with the image rendering module;
information is acquired by a camera or video-camera sensor input module and then input to the three-dimensional registration module for three-dimensional spatial positioning, the output pose is input to the image rendering module, and preset virtual information is output to a display screen; spatial positioning is performed through the three-dimensional registration module, and virtual-real matching is performed through the rendering module to carry out augmented reality rendering. The whole system acquires real-world information through sensors such as a camera and an IMU (Inertial Measurement Unit), inputs it to the three-dimensional registration module for three-dimensional spatial positioning, inputs the output pose to the graphics rendering module, combines it with the preset virtual information to output and display the virtual information on a screen, and allows the user to interact with the virtual information in real time through the human-computer interaction module.
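As an illustrative sketch of this data flow (the class and method names are hypothetical stand-ins for the patent's modules, not a real API):

```python
# Minimal sketch of the described module pipeline: sensor input ->
# three-dimensional registration -> image rendering -> output display.
class AugmentedRealityPipeline:
    def __init__(self, sensor_input, registration, renderer, display):
        self.sensor_input = sensor_input  # camera / IMU frames
        self.registration = registration  # three-dimensional registration module
        self.renderer = renderer          # image rendering module
        self.display = display            # output display module

    def step(self, virtual_info):
        frame = self.sensor_input.read()          # acquire real-world data
        pose = self.registration.localize(frame)  # estimate position and rotation
        composited = self.renderer.render(frame, pose, virtual_info)  # virtual-real fusion
        self.display.show(composited)             # output to the display screen
```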
To ensure the realism of the fusion between virtual objects and the real world, the real-world image and the virtual objects are rendered on a preset rendering plane; the imaging of the real-world camera is simulated through the virtual rendering plane and a virtual camera, and consistency between the camera pose data and the rendered frames is ensured by constructing the virtual scene and rendering simultaneously. In augmented reality applications, human-computer interaction means that the user can operate on a virtual model or on information in the real scene through a device, realizing interactive operation between the user and the device and interaction between the person and the virtual object. Human-computer interaction greatly improves the user's augmented reality experience and expands the scope of augmented reality applications; it can be realized through a mouse, a keyboard, a touch pad, gesture recognition, voice recognition and other means. In augmented reality applications the interaction generally falls into three modes, so the human-computer interaction module has at least three modes: a command interaction mode, a spatial interaction mode and a tool interaction mode. The human-computer interaction module can interact with a virtual model or with information in the real scene through any one of a mouse, a keyboard, a numerical control panel, gesture recognition and voice recognition, for operations such as moving and rotating. The command mode can also be combined with technologies such as gesture recognition and voice recognition: when the user makes a specific gesture or utters a specific phrase, the corresponding command is triggered, realizing interaction between the person and the virtual object.
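To make the command mode concrete, a minimal sketch of a dispatch table mapping recognized gestures or voice phrases to operations on a virtual object; all names here are illustrative assumptions, not part of the patent:

```python
# Minimal command-interaction sketch: preset gestures or voice phrases
# trigger preset operations on a virtual object. Names are illustrative.
COMMANDS = {
    "swipe_left":  lambda obj: obj.rotate(-15),   # rotate the virtual object
    "swipe_right": lambda obj: obj.rotate(15),
    "pinch":       lambda obj: obj.scale(0.9),    # shrink the virtual object
    "voice:reset": lambda obj: obj.reset_pose(),  # spoken command
}

def dispatch(recognized_input, virtual_object):
    """Trigger the operation bound to a recognized gesture or voice command."""
    action = COMMANDS.get(recognized_input)
    if action is not None:
        action(virtual_object)
```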
As a preferred aspect, the three-dimensional registration module includes a hardware sensor component, a computer vision component coupled to the hardware sensor component, and a hybrid tracking component coupled to the computer vision component. The sensors directly measure and estimate their own motion, and different sensors are adopted for tracking and positioning according to the type of positioning, ensuring the universality of the system. The three-dimensional registration module registers objects of the virtual environment into the three-dimensional space, ensuring geometric consistency between the virtual and the real.
As a preferred aspect, the computer vision component includes artificial marker tracking, natural marker tracking, and simultaneous localization and mapping (SLAM); the pose of the camera or video camera in three-dimensional space is estimated through images. An artificial marker is tracked through computer vision while the position of the camera or video camera in three-dimensional space is tracked at the same time, ensuring stable tracking against the artificial marker; when a natural marker must be positioned, images collected by the camera or video camera are used to estimate the three-dimensional motion and build a map; and in the absence of environmental information, the motion estimation of the camera or video camera and the three-dimensional space are modeled simultaneously.
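To illustrate the artificial-marker case, a minimal sketch that estimates the camera pose from the four detected corners of a square marker using OpenCV's solvePnP; the corner detection itself is outside the sketch, and marker_size, camera_matrix and dist_coeffs are assumed calibration inputs:

```python
# Minimal marker-pose sketch: recover camera pose from one square marker
# of known size, assuming a calibrated camera. Illustrative only.
import numpy as np
import cv2

def marker_pose(image_corners, marker_size, camera_matrix, dist_coeffs):
    """Estimate pose from the 4 detected corners (pixels) of a square marker."""
    s = marker_size / 2.0
    object_corners = np.array([[-s,  s, 0.0],   # marker corners in the marker frame
                               [ s,  s, 0.0],
                               [ s, -s, 0.0],
                               [-s, -s, 0.0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_corners,
                                  np.asarray(image_corners, dtype=np.float32),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None, None
    return rvec, tvec   # rotation (Rodrigues vector) and translation
```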
As a preferred solution, the hybrid tracking component is formed by combining a visual sensor with an electromagnetic sensor, a visual sensor with an inertial sensor, and a visual sensor with a global positioning system.
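One simple way such a combination can work, shown here as a hedged sketch rather than the patent's specific fusion method, is a complementary-filter-style scheme in which inertial integration predicts the pose at high rate and each slower visual fix corrects the drift; the blending weight alpha is an assumed tuning parameter:

```python
# Minimal visual-inertial fusion sketch: IMU dead reckoning predicts,
# visual position fixes correct. A sketch, not the patent's method.
import numpy as np

class VisualInertialFuser:
    def __init__(self, alpha=0.1):
        self.position = np.zeros(3)
        self.velocity = np.zeros(3)
        self.alpha = alpha  # weight given to each visual correction

    def predict(self, accel_world, dt):
        """Integrate an accelerometer sample (gravity already removed)."""
        self.velocity += accel_world * dt
        self.position += self.velocity * dt

    def correct(self, visual_position):
        """Blend in an absolute position fix from the vision tracker."""
        self.position = (1.0 - self.alpha) * self.position + self.alpha * visual_position
```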
Specifically, the command interaction mode takes a preset action or state as an input command, each command corresponding to a specific operation on the virtual object; when the user makes a specific gesture or utters a specific voice command, the corresponding command is triggered;
specifically, the spatial interaction mode is used for accurately positioning the positions of all virtual objects in a scene in a real three-dimensional space, and real-time positioning is realized by selecting points according to the positions of the three-dimensional space points; the method for selecting the virtual objects in the scene through the positions of the three-dimensional space points is a space point selection method, and the technology for realizing the human-computer interaction by utilizing the method is a space point interaction method.
The tool interaction mode realizes interaction through hardware interaction equipment, whose recognized actions serve as input commands.
Preferably, the hardware sensor module includes a mechanical sensor, a global positioning system connected to the mechanical sensor, an inertial sensor connected to the global positioning system, and an electromagnetic sensor connected to the inertial sensor.
In a further example, the method comprises the following steps:
step 1, acquiring an image through a camera or video camera;
step 2, extracting features from the image to generate corresponding feature descriptions, then sequentially registering the features of consecutive images;
step 3, identifying the feature point information of the scene, and recovering the motion of the camera or video camera by inverse calculation from the changes between images;
step 4, converting the acquired environmental data and feature information into a three-dimensional point cloud by 3D (three-dimensional) calculation, and computing a normal vector for each point;
step 5, using the ICP point cloud registration algorithm, registering the point cloud with normal vectors against the point cloud projected from the model by a ray casting algorithm according to the pose of the previous frame (see the ICP sketch after this list);
step 6, fusing the point cloud of the current frame into the mesh model according to the camera position;
and step 7, using the camera position calculated from the current image, projecting the point cloud at the current view angle from the model through the ray casting algorithm, and computing its normal vectors for registering the next input image.
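For illustration of the registration in step 5, a minimal point-to-point ICP sketch using the closed-form Kabsch solution and scipy's KD-tree for correspondence search; the patent registers against a point cloud ray-cast from the model, which is replaced here by a generic target cloud, and the iteration count is an assumed parameter:

```python
# Minimal point-to-point ICP sketch: iteratively match nearest neighbours
# and solve for the rigid transform. A sketch, not the patent's pipeline.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Align source (N x 3) to target (M x 3); return rotation R and translation t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)                 # nearest target point per source point
        matched = target[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)    # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T                      # Kabsch rotation
        if np.linalg.det(R_step) < 0:            # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        src = src @ R_step.T + t_step            # apply the incremental transform
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the overall transform
    return R, t
```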
The preferred embodiments of the present invention have been described in detail above with reference to the accompanying drawings; however, the present invention is not limited to the specific details of these embodiments. Various equivalent changes can be made to the technical solution of the present invention within its technical concept, and all such equivalent changes fall within the protection scope of the present invention.

Claims (8)

1. A human body data and spatial positioning augmented reality system, characterized in that: the system comprises a sensor input module, a three-dimensional registration module connected with the sensor input module, an image rendering module connected with the three-dimensional registration module, a virtual information module and a human-computer interaction module connected with the image rendering module, and an output display module connected with the image rendering module;
information is acquired by a camera or video-camera sensor input module and then input to the three-dimensional registration module for three-dimensional spatial positioning, the output pose is input to the image rendering module, and preset virtual information is output to a display screen;
and spatial positioning is performed through the three-dimensional registration module, and virtual-real matching is performed through the rendering module to carry out augmented reality rendering.
2. The human body data and spatial positioning augmented reality system as claimed in claim 1, wherein: the human-computer interaction module can interact with a virtual model or with information in the real scene through any one of a mouse, a keyboard, a numerical control panel, gesture recognition and voice recognition;
the human-computer interaction module has at least three modes, including a command interaction mode, a spatial interaction mode and a tool interaction mode.
3. The human body data and spatial positioning augmented reality system as claimed in claim 1, wherein: the three-dimensional registration module comprises a hardware sensor component, a computer vision component connected with the hardware sensor component, and a hybrid tracking component connected with the computer vision component.
4. The human body data and spatial positioning augmented reality system as claimed in claim 3, wherein: the computer vision component comprises an artificial marker, a natural marker, and simultaneous localization and mapping; and the pose of the camera or video camera in three-dimensional space is estimated through images.
5. The human body data and spatial positioning augmented reality system as claimed in claim 3, wherein: the hybrid tracking component is formed by combining a visual sensor with an electromagnetic sensor, a visual sensor with an inertial sensor, and a visual sensor with a global positioning system.
6. The human body data and spatial positioning augmented reality system as claimed in claim 2, wherein: the command interaction mode takes a preset action or state as an input command, each command corresponding to a specific operation on the virtual object, and when the user makes a specific gesture or utters a specific voice command, the corresponding command is triggered;
the spatial interaction mode selects, through accurate positioning, points at the positions of the virtual objects in the scene within the real three-dimensional space, and realizes real-time positioning through the positions of those three-dimensional points;
the tool interaction mode realizes interaction through hardware interaction equipment, whose recognized actions serve as input commands.
7. The human body data and spatial positioning augmented reality system as claimed in claim 3, wherein: the hardware sensor module comprises a mechanical sensor, a global positioning system connected with the mechanical sensor, an inertial sensor connected with the global positioning system, and an electromagnetic sensor connected with the inertial sensor.
8. A method of the human body data and spatial positioning augmented reality system as claimed in claim 7, characterized by comprising the following steps:
step 1, acquiring an image through a camera or video camera;
step 2, extracting features from the image to generate corresponding feature descriptions, then sequentially registering the features of consecutive images;
step 3, identifying the feature point information of the scene, and recovering the motion of the camera or video camera by inverse calculation from the changes between images;
step 4, converting the read-in depth map of the acquired environmental data and feature information into a three-dimensional point cloud by 3D calculation, and computing a normal vector for each point;
step 5, using the ICP point cloud registration algorithm, registering the point cloud with normal vectors against the point cloud projected from the model by a ray casting algorithm according to the pose of the previous frame;
step 6, fusing the point cloud of the current frame into the mesh model according to the camera position;
and step 7, using the camera position calculated from the current image, projecting the point cloud at the current view angle from the model through the ray casting algorithm, and computing its normal vectors for registering the next input image.
CN202210694903.9A 2022-06-20 2022-06-20 Method and system for enhancing reality of human body data and space positioning Withdrawn CN114926613A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210694903.9A CN114926613A (en) 2022-06-20 2022-06-20 Method and system for enhancing reality of human body data and space positioning

Publications (1)

Publication Number Publication Date
CN114926613A 2022-08-19

Family

ID=82815432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210694903.9A Withdrawn CN114926613A (en) 2022-06-20 2022-06-20 Method and system for enhancing reality of human body data and space positioning

Country Status (1)

Country Link
CN (1) CN114926613A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220819