CN104360729A - Multi-interactive method and device based on Kinect and Unity 3D - Google Patents

Multi-interactive method and device based on Kinect and Unity 3D

Info

Publication number
CN104360729A
CN104360729A (application CN201410381549.XA)
Authority
CN
China
Prior art keywords
kinect
unity3d
coordinate system
registration
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410381549.XA
Other languages
Chinese (zh)
Other versions
CN104360729B (en)
Inventor
王虓
郭新宇
吴升
温维亮
王传宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Research Center for Information Technology in Agriculture
Beijing Research Center of Intelligent Equipment for Agriculture
Original Assignee
Beijing Research Center for Information Technology in Agriculture
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Research Center for Information Technology in Agriculture filed Critical Beijing Research Center for Information Technology in Agriculture
Priority to CN201410381549.XA priority Critical patent/CN104360729B/en
Publication of CN104360729A publication Critical patent/CN104360729A/en
Application granted granted Critical
Publication of CN104360729B publication Critical patent/CN104360729B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Abstract

The invention relates to a multi-interaction method based on Kinect and Unity 3D. The method comprises the following steps: S1: adjusting the camera parameters in Unity 3D to match the effective detection range of the Kinect; S2: using the Kinect to determine the user coordinates and the ground-plane equation; S3: determining the virtual-scene coordinates according to relative position and registering the virtual model; S4: designing the interaction gestures and voice commands; S5: controlling model displacement, animation, and multimedia effects in Unity 3D; S6: fusing and displaying the picture rendered by the camera in Unity 3D and the image captured by the Kinect camera. The method uses the Kinect's support for speech recognition and human-skeleton tracking to add trigger modes for three-dimensional registration of the virtual model, provides users with more interaction modes through limb-motion recognition, improves the user experience, and uses the 3D engine of Unity 3D to process the model pose automatically, which greatly simplifies the steps required for three-dimensional registration. The invention further discloses a multi-interaction device based on Kinect and Unity 3D.

Description

Multi-interaction method and device based on Kinect and Unity3D
Technical field
The present invention relates to the field of computer augmented reality, and in particular to a multi-interaction method and device based on Kinect and Unity3D.
Background art
Augmented reality (AR) was first proposed in the 1990s and is now widely applied in medicine, education, industry, commerce, and many other fields. A widely cited definition, proposed in 1997 by Ronald Azuma of the University of North Carolina, identifies three main characteristics: it combines real and virtual, it is interactive in real time, and it is registered in 3D. The technology superimposes a virtual scene onto the real scene on the screen and allows participants to interact with the virtual scene. A typical augmented reality pipeline is: 1) acquire the scene image with an image acquisition device; 2) identify and track a calibration image or text in the scene, and compute its displacement and rotation matrices from its deformation; 3) register the position information of the corresponding virtual model in three-dimensional space according to the position and rotation matrix of the calibration image; 4) fuse the virtual model with the real scene and display the result on the screen.
However, current common techniques have the following defects: 1) the interaction mode is monotonous: registration of the virtual model can be triggered only by a calibration image or text; after registration only operations such as translation and rotation can be applied to the model; the model can only follow the motion of the calibration object; and the interaction modes are few and heavily constrained. 2) The three-dimensional registration algorithm is cumbersome: the model position and attitude must be determined in the feature-point coordinate system, then transformed into the camera coordinate system, and finally the virtual model and the real scene are fused and displayed according to screen coordinates. Current techniques therefore require multi-step computation in the three-dimensional registration phase of the virtual model, and the operation is neither concise nor automated.
Summary of the invention
The technical problem to be solved by the present invention, in view of the deficiencies of the prior art, is how to use the Kinect's support for speech recognition and human-skeleton tracking to add trigger modes for three-dimensional registration of the virtual model, to provide users with more interaction modes through limb-motion recognition and so improve the user experience, and how to use the 3D engine of Unity3D to process the model pose automatically, greatly simplifying the steps required for three-dimensional registration.
To this end, the present invention proposes a multi-interaction method based on Kinect and Unity3D, comprising:
S1: adjusting the camera parameters in Unity3D to match the effective detection range of the Kinect;
S2: using the Kinect to determine the user coordinates and the ground-plane equation;
S3: determining the virtual-scene coordinates according to relative position, and registering the virtual model;
S4: designing the interaction gestures and voice commands;
S5: controlling model displacement, animation, and multimedia effects in Unity3D;
S6: fusing and displaying the picture rendered by the camera in Unity3D and the image captured by the Kinect camera.
Further, step S1 comprises: placing the Kinect at a preset position in the real scene, and adjusting the real scene so that it lies within the effective detection range of the Kinect.
Further, step S1 comprises: adjusting the Field of View and Clipping Planes parameters of the camera in Unity3D.
Further, step S2 comprises:
S21: using the SkeletonFrame.FloorClipPlane property to determine the plane equation representing the ground, wherein the plane equation in the Kinect coordinate system is Ax+By+Cz+D=0, with plane normal vector (A, B, C), and the plane equation in the Unity3D coordinate system is y+E=0, with plane normal vector (0, 1, 0);
S22: rotating (A, B, C) so that it coincides with (0, 1, 0), which completes the registration of the Kinect coordinate system with the Unity3D coordinate system.
Further, the registration of the Kinect coordinate system with the Unity3D coordinate system comprises: when an arbitrary point (k1, k2, k3) in the Kinect coordinate system is transformed to Unity3D coordinates, it is rotated about the X axis by -arctan(B/C) and about the Z axis by arctan(A/B); the rotated coordinates are (k1·cosα - (k2·cosβ - k3·sinβ)·sinα, k1·sinα + (k2·cosβ - k3·sinβ)·cosα, k2·sinβ + k3·cosβ), where α = arctan(A/B) and β = -arctan(B/C).
Further, step S6 comprises:
S61: sampling or interpolating the two images;
S62: traversing the two processed images and comparing, for each pixel of the target image, the depth values of the corresponding points in the two images;
S63: setting the color value of each target-image point to the color value of the corresponding point with the smaller depth value.
Further, the registration of the virtual model may also be triggered by the user moving to a specific position, which triggers a preset model.
Further, the registration of the virtual model may also be triggered by the user's voice, which triggers registration of the corresponding model.
To this end, the present invention further proposes a multi-interaction device based on Kinect and Unity3D, comprising:
an adjusting module, for adjusting the camera parameters in Unity3D to match the effective detection range of the Kinect;
a coordinate and ground-equation determining module, for using the Kinect to determine the user coordinates and the ground-plane equation;
a virtual model registering module, for determining the virtual-scene coordinates according to relative position, and registering the virtual model;
a design module, for designing the interaction gestures and voice commands;
an effect determining module, for controlling model displacement, animation, and multimedia effects in Unity3D;
an image fusion module, for fusing and displaying the picture rendered by the camera in Unity3D and the image captured by the Kinect camera.
In the multi-interaction method based on Kinect and Unity3D disclosed by the invention, first, setting the position and attributes of the camera in Unity3D simplifies the conversion between the real-scene coordinate system and the virtual-scene coordinate system. Second, the Kinect obtains the user's corresponding coordinates in Unity and the plane equation representing the ground, so the three-dimensional registration coordinates can be determined from the relative positions of the virtual model to be registered, the ground, and the user; the mechanism for triggering registration is thus more flexible, since registration can be triggered when the user moves to a specific position or through the speech recognition module. Third, the interaction modes after model registration are enriched: the user can interact with the model through limb actions and voice. Finally, the Transform component and the Mecanim animation system in Unity3D simplify the implementation of model displacement and animation effects. The invention also discloses a multi-interaction device based on Kinect and Unity3D.
Brief description of the drawings
The features and advantages of the present invention can be understood more clearly with reference to the accompanying drawings, which are schematic and should not be construed as limiting the present invention in any way. In the drawings:
Fig. 1 shows a flow chart of the steps of a multi-interaction method based on Kinect and Unity3D in an embodiment of the present invention;
Fig. 2 shows a structural diagram of a multi-interaction device based on Kinect and Unity3D in an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the present invention provides a multi-interaction method based on Kinect and Unity3D, comprising the following specific steps:
Step S1: adjust the camera parameters in Unity3D to match the effective detection range of the Kinect. Specifically, place the Kinect at a preset position in the real scene and adjust the real scene so that it lies within the Kinect's effective detection range, which spans 1.2-3.6 m from the camera, 57 degrees horizontally, and 43 degrees vertically.
Further, the origin of the coordinate system of the data returned by the Kinect is the Kinect sensor itself, so the camera in Unity3D is placed at the coordinate origin to simplify computing the virtual model coordinates during three-dimensional registration. The Field of View and Clipping Planes parameters of the camera in Unity3D are then adjusted to the same values as the Kinect's effective range.
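The following is a minimal illustrative sketch (not part of the patent text) of this camera setup as a Unity C# script; the component name KinectCameraSetup is an assumption, and the values come from the ranges given in step S1.

    using UnityEngine;

    // Hypothetical setup script: aligns the Unity camera with the Kinect's
    // effective detection range described in step S1.
    public class KinectCameraSetup : MonoBehaviour
    {
        void Start()
        {
            Camera cam = GetComponent<Camera>();

            // The Kinect reports data relative to its sensor, so the sensor
            // becomes the origin of the virtual scene.
            transform.position = Vector3.zero;
            transform.rotation = Quaternion.identity;

            // Unity's fieldOfView is the vertical FOV in degrees; the
            // Kinect's vertical view angle is 43 degrees.
            cam.fieldOfView = 43f;

            // Clip rendering to the Kinect's effective depth range (1.2-3.6 m).
            cam.nearClipPlane = 1.2f;
            cam.farClipPlane = 3.6f;
        }
    }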
Step S2: use the Kinect to determine the user coordinates and the ground-plane equation.
Specifically, use the SkeletonFrame.FloorClipPlane property to determine the plane equation representing the ground. The plane equation in the Kinect coordinate system is Ax+By+Cz+D=0, with plane normal vector (A, B, C); the plane equation in the Unity3D coordinate system is y+E=0, with plane normal vector (0, 1, 0). Rotating (A, B, C) so that it coincides with (0, 1, 0) completes the registration of the Kinect coordinate system with the Unity3D coordinate system.
Further, the registration of the Kinect coordinate system with the Unity3D coordinate system comprises: when an arbitrary point (k1, k2, k3) in the Kinect coordinate system is transformed to Unity3D coordinates, it is rotated about the X axis by -arctan(B/C) and about the Z axis by arctan(A/B); the rotated coordinates are (k1·cosα - (k2·cosβ - k3·sinβ)·sinα, k1·sinα + (k2·cosβ - k3·sinβ)·cosα, k2·sinβ + k3·cosβ), where α = arctan(A/B) and β = -arctan(B/C).
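As an illustrative sketch only, the rotation above can be written as a small C# helper, assuming the floor-plane coefficients A, B, C have already been read from SkeletonFrame.FloorClipPlane; the class and method names are assumptions.

    using UnityEngine;

    public static class CoordinateRegistration
    {
        // Transforms a point (k1, k2, k3) in the Kinect coordinate system
        // into the Unity3D coordinate system, given the Kinect floor-plane
        // normal (A, B, C) from Ax + By + Cz + D = 0: rotate about the
        // X axis by beta = -arctan(B/C), then about the Z axis by
        // alpha = arctan(A/B), exactly as in the formula above.
        public static Vector3 KinectToUnity(Vector3 k, float A, float B, float C)
        {
            float alpha = Mathf.Atan(A / B);  // rotation about the Z axis
            float beta = -Mathf.Atan(B / C);  // rotation about the X axis

            // Rotation about X: mixes the y and z components.
            float y1 = k.y * Mathf.Cos(beta) - k.z * Mathf.Sin(beta);
            float z1 = k.y * Mathf.Sin(beta) + k.z * Mathf.Cos(beta);

            // Rotation about Z: mixes the x component with the rotated y.
            return new Vector3(
                k.x * Mathf.Cos(alpha) - y1 * Mathf.Sin(alpha),
                k.x * Mathf.Sin(alpha) + y1 * Mathf.Cos(alpha),
                z1);
        }
    }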
Step S3: determine the virtual-scene coordinates according to relative position, and register the virtual model.
Specifically, using the conversion formula from the Kinect coordinate system to the Unity3D coordinate system given above, the skeleton position points (Skeleton Point) returned by the API in the Kinect SDK are converted to the Unity3D coordinate system. The model is then positioned at the required coordinates according to the floor height in the Unity3D coordinate system, the user coordinates, and the relative position of the virtual model with respect to the user. Alternatively, a preset model is selected for registration, or words associated with models are added to a speech library, where each word corresponds to a model and is recognized by the Kinect speech module; after the user says a word present in the speech library, the three-dimensional model corresponding to that word is registered in the scene.
Step S4: design the interaction gestures and voice commands.
Specifically, design the interaction gestures by defining the set of limb actions for each operation. For example: hovering an arm represents selecting an object or clicking a button; moving an arm represents sliding the mouse or translating the model; moving the two hands apart or together represents scaling the model; holding the two hands as if around a ball and rotating them represents rotating the model. Simple interactions are also implemented by voice, for example showing or hiding the model and playing or pausing multimedia.
Step S5: control model displacement, animation, and multimedia effects in Unity3D.
Specifically, operate on the model according to the user's limb actions. The Transform component of the GameObject in Unity3D is used for operations such as translating, rotating, and scaling the model; the Mecanim animation system is used to control designed interactive model actions such as following the user, running, and guiding; and the Audio component and the Movie Textures component are used to control multimedia effects.
Step S6: fuse and display the picture rendered by the camera in Unity3D and the image captured by the Kinect camera.
Specifically, step S6 further comprises:
Step S61: sample or interpolate the two images so that both are scaled to the target image size.
Step S62: traverse the two processed images and compare, for each pixel of the target image, the depth values of the corresponding points in the two images.
Step S63: set the color value of each target-image point to the color value of the corresponding point with the smaller depth value, as sketched below.
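The following sketch of this per-pixel depth-based fusion assumes both images have already been scaled to the same size and that their color and depth buffers are available as flat arrays (the array layout and method name are assumptions):

    using UnityEngine;

    public static class DepthFusion
    {
        // Fuses the Unity render and the Kinect camera image per pixel:
        // for each target pixel, keep the color of whichever source point
        // has the smaller depth value, i.e. is nearer and therefore occludes.
        public static Color32[] Fuse(
            Color32[] virtualColor, float[] virtualDepth,
            Color32[] realColor, float[] realDepth)
        {
            var result = new Color32[virtualColor.Length];
            for (int i = 0; i < result.Length; i++)
            {
                result[i] = virtualDepth[i] < realDepth[i]
                    ? virtualColor[i]
                    : realColor[i];
            }
            return result;
        }
    }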
The multi-interaction method based on Kinect and Unity3D disclosed by the invention makes three-dimensional registration for augmented reality simple to operate. It uses the Kinect's support for speech recognition and human-skeleton tracking to add trigger modes for three-dimensional registration of the virtual model, and provides users with more interaction modes through limb-motion recognition, improving the user experience; it uses the 3D engine of Unity3D to process the model pose automatically, greatly simplifying the steps required for three-dimensional registration. In short, it comprehensively uses a motion-sensing interaction device and a three-dimensional game engine to simplify the three-dimensional registration flow, add three-dimensional registration trigger modes, enrich the user's interaction channels, and improve the user's operating experience.
As shown in Fig. 2, the present invention provides a multi-interaction device 10 based on Kinect and Unity3D, comprising: an adjusting module 101, a coordinate and ground-equation determining module 102, a virtual model registering module 103, a design module 104, an effect determining module 105, and an image fusion module 106.
Specifically, the adjusting module 101 adjusts the camera parameters in Unity3D to match the effective detection range of the Kinect; the coordinate and ground-equation determining module 102 uses the Kinect to determine the user coordinates and the ground-plane equation; the virtual model registering module 103 determines the virtual-scene coordinates according to relative position and registers the virtual model; the design module 104 designs the interaction gestures and voice commands; the effect determining module 105 controls model displacement, animation, and multimedia effects in Unity3D; and the image fusion module 106 fuses and displays the picture rendered by the camera in Unity3D and the image captured by the Kinect camera.
In the multi-interaction method based on Kinect and Unity3D disclosed by the invention, first, setting the position and attributes of the camera in Unity3D simplifies the conversion between the real-scene coordinate system and the virtual-scene coordinate system. Second, the Kinect obtains the user's corresponding coordinates in Unity and the plane equation representing the ground, so the three-dimensional registration coordinates can be determined from the relative positions of the virtual model to be registered, the ground, and the user; the mechanism for triggering registration is thus more flexible, since registration can be triggered when the user moves to a specific position or through the speech recognition module. Third, the interaction modes after model registration are enriched: the user can interact with the model through limb actions and voice. Finally, the Transform component and the Mecanim animation system in Unity3D simplify the implementation of model displacement and animation effects. The invention also discloses a multi-interaction device based on Kinect and Unity3D.
The above embodiments are only for illustrating the present invention and do not limit it; those of ordinary skill in the relevant technical field can make various changes and modifications without departing from the spirit and scope of the present invention, so all equivalent technical solutions also belong to the scope of the present invention, and the patent protection scope of the present invention shall be defined by the claims.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the present invention, and such modifications and variations all fall within the scope defined by the appended claims.

Claims (9)

1. A multi-interaction method based on Kinect and Unity3D, characterized by comprising the following specific steps:
S1: adjusting the camera parameters in Unity3D to match the effective detection range of the Kinect;
S2: using the Kinect to determine the user coordinates and the ground-plane equation;
S3: determining the virtual-scene coordinates according to relative position, and registering the virtual model;
S4: designing the interaction gestures and voice commands;
S5: controlling model displacement, animation, and multimedia effects in Unity3D;
S6: fusing and displaying the picture rendered by the camera in Unity3D and the image captured by the Kinect camera.
2. The method of claim 1, characterized in that step S1 further comprises: placing the Kinect at a preset position in the real scene, and adjusting the real scene so that it lies within the effective detection range of the Kinect.
3. The method of claim 1, characterized in that step S1 further comprises: adjusting the Field of View and Clipping Planes parameters of the camera in Unity3D.
4. The method of claim 1, characterized in that step S2 further comprises:
S21: using the SkeletonFrame.FloorClipPlane property to determine the plane equation representing the ground, wherein the plane equation in the Kinect coordinate system is Ax+By+Cz+D=0, with plane normal vector (A, B, C), and the plane equation in the Unity3D coordinate system is y+E=0, with plane normal vector (0, 1, 0);
S22: rotating (A, B, C) so that it coincides with (0, 1, 0), which completes the registration of the Kinect coordinate system with the Unity3D coordinate system.
5. The method of claim 4, characterized in that the registration of the Kinect coordinate system with the Unity3D coordinate system further comprises: when an arbitrary point (k1, k2, k3) in the Kinect coordinate system is transformed to Unity3D coordinates, it is rotated about the X axis by -arctan(B/C) and about the Z axis by arctan(A/B); the rotated coordinates are (k1·cosα - (k2·cosβ - k3·sinβ)·sinα, k1·sinα + (k2·cosβ - k3·sinβ)·cosα, k2·sinβ + k3·cosβ), where α = arctan(A/B) and β = -arctan(B/C).
6. The method of claim 1, characterized in that step S6 further comprises:
S61: sampling or interpolating the two images;
S62: traversing the two processed images and comparing, for each pixel of the target image, the depth values of the corresponding points in the two images;
S63: setting the color value of each target-image point to the color value of the corresponding point with the smaller depth value.
7. The method of claim 1, characterized in that the registration of the virtual model may also be triggered by the user moving to a specific position, which triggers a preset model.
8. The method of claim 1, characterized in that the registration of the virtual model may also be triggered by the user's voice, which triggers registration of the corresponding model.
9. A multi-interaction device based on Kinect and Unity3D, characterized by comprising:
an adjusting module, for adjusting the camera parameters in Unity3D to match the effective detection range of the Kinect;
a coordinate and ground-equation determining module, for using the Kinect to determine the user coordinates and the ground-plane equation;
a virtual model registering module, for determining the virtual-scene coordinates according to relative position, and registering the virtual model;
a design module, for designing the interaction gestures and voice commands;
an effect determining module, for controlling model displacement, animation, and multimedia effects in Unity3D;
an image fusion module, for fusing and displaying the picture rendered by the camera in Unity3D and the image captured by the Kinect camera.
CN201410381549.XA 2014-08-05 2014-08-05 Multi-interaction method and device based on Kinect and Unity3D Active CN104360729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410381549.XA CN104360729B (en) 2014-08-05 2014-08-05 Multi-interaction method and device based on Kinect and Unity3D

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410381549.XA CN104360729B (en) 2014-08-05 2014-08-05 Multi-interaction method and device based on Kinect and Unity3D

Publications (2)

Publication Number Publication Date
CN104360729A (en) 2015-02-18
CN104360729B CN104360729B (en) 2017-10-10

Family

ID=52527997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410381549.XA Active CN104360729B (en) 2014-08-05 2014-08-05 Multi-interaction method and device based on Kinect and Unity3D

Country Status (1)

Country Link
CN (1) CN104360729B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110169927A1 (en) * 2010-01-13 2011-07-14 Coco Studios Content Presentation in a Three Dimensional Environment
CN103181157A (en) * 2011-07-28 2013-06-26 三星电子株式会社 Plane-characteristic-based markerless augmented reality system and method for operating same
CN103049618A (en) * 2012-12-30 2013-04-17 江南大学 Intelligent home display method based on Kinect

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106125903B (en) * 2016-04-24 2021-11-16 林云帆 Multi-person interaction system and method
CN106125903A (en) * 2016-04-24 2016-11-16 林云帆 Multi-person interaction system and method
CN106791478A (en) * 2016-12-15 2017-05-31 山东数字人科技股份有限公司 Three-dimensional data real-time volumetric display system
CN107330978A (en) * 2017-06-26 2017-11-07 山东大学 Position-mapping-based augmented reality modeling experience system and method
CN107551551A (en) * 2017-08-09 2018-01-09 广东欧珀移动通信有限公司 Game effect construction method and device
CN107861714A (en) * 2017-10-26 2018-03-30 天津科技大学 Development method and system of a car show application based on Intel RealSense
CN108096836A (en) * 2017-12-20 2018-06-01 深圳市百恩互动娱乐有限公司 Method for making a game from live-action shooting of real people
CN109089017A (en) * 2018-09-05 2018-12-25 宁波梅霖文化科技有限公司 Magic virtual device
CN109782911A (en) * 2018-12-30 2019-05-21 广州嘉影软件有限公司 Two-person motion capture method and system based on virtual reality
CN109782911B (en) * 2018-12-30 2022-02-08 广州嘉影软件有限公司 Whole body motion capture method and system based on virtual reality
CN110728739A (en) * 2019-09-30 2020-01-24 杭州师范大学 Virtual human control and interaction method based on video stream
CN110728739B (en) * 2019-09-30 2023-04-14 杭州师范大学 Virtual human control and interaction method based on video stream
CN113709537A (en) * 2020-05-21 2021-11-26 云米互联科技(广东)有限公司 User interaction method based on 5G television, 5G television and readable storage medium
CN113709537B (en) * 2020-05-21 2023-06-13 云米互联科技(广东)有限公司 User interaction method based on 5G television, 5G television and readable storage medium
CN111913577A (en) * 2020-07-31 2020-11-10 武汉木子弓数字科技有限公司 Three-dimensional space interaction method based on Kinect

Also Published As

Publication number Publication date
CN104360729B (en) 2017-10-10

Similar Documents

Publication Publication Date Title
CN104360729A (en) Multi-interactive method and device based on Kinect and Unity 3D
AU2019356907B2 (en) Automated control of image acquisition via use of acquisition device sensors
US10902678B2 (en) Display of hidden information
Arth et al. The history of mobile augmented reality
US11417365B1 (en) Methods, systems and apparatuses for multi-directional still pictures and/or multi-directional motion pictures
US9898844B2 (en) Augmented reality content adapted to changes in real world space geometry
AU2013224660B2 (en) Automated frame of reference calibration for augmented reality
JP7337104B2 (en) Model animation multi-plane interaction method, apparatus, device and storage medium by augmented reality
US10482659B2 (en) System and method for superimposing spatially correlated data over live real-world images
TWI505709B (en) System and method for determining individualized depth information in augmented reality scene
JP6458371B2 (en) Method for obtaining texture data for a three-dimensional model, portable electronic device, and program
JP5920352B2 (en) Information processing apparatus, information processing method, and program
US9626801B2 (en) Visualization of physical characteristics in augmented reality
US9268410B2 (en) Image processing device, image processing method, and program
US20150185825A1 (en) Assigning a virtual user interface to a physical object
Reitmayr et al. Simultaneous localization and mapping for augmented reality
CN110457414A (en) Offline map processing, virtual objects display methods, device, medium and equipment
CN111373347B (en) Apparatus, method and computer program for providing virtual reality content
Unal et al. Distant augmented reality: Bringing a new dimension to user experience using drones
US20230037750A1 (en) Systems and methods for generating stabilized images of a real environment in artificial reality
JP6980802B2 (en) Methods, equipment and computer programs to provide augmented reality
Lv et al. Interaction design in augmented reality on the smartphone
Nivedha et al. Enhancing user experience through physical interaction in handheld augmented reality
Unal et al. Augmented Reality and New Opportunities for Cultural Heritage
Asiminidis Augmented and Virtual Reality: Extensive Review

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: BEIJING RESEARCH CENTER OF INTELLIGENT EQUIPMENT FOR AGRICULTURE

Free format text: FORMER OWNER: BEIJING AGRICULTURE INFORMATION TECHNOLOGY RESEARCH CENTER

Effective date: 20150804

Owner name: BEIJING AGRICULTURE INFORMATION TECHNOLOGY RESEARCH CENTER

Effective date: 20150804

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20150804

Address after: Room 318B, Block A, No. 11, Shuguangyuanzhong Road, Haidian District, Beijing 100097

Applicant after: Beijing Research Center of Intelligent Equipment for Agriculture

Applicant after: Beijing Research Center for Information Technology in Agriculture

Address before: Room 318B, Block A, No. 11, Shuguangyuanzhong Road, Haidian District, Beijing 100097

Applicant before: Beijing Research Center for Information Technology in Agriculture

GR01 Patent grant
GR01 Patent grant