CN115202485A - XR (Extended Reality) technology-based gesture-synchronized interactive exhibition hall display system - Google Patents

XR (Extended Reality) technology-based gesture-synchronized interactive exhibition hall display system

Info

Publication number
CN115202485A
Authority
CN
China
Prior art keywords
video
interactive
response
time
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211118126.XA
Other languages
Chinese (zh)
Other versions
CN115202485B (en)
Inventor
王亚刚
李元元
程思锦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Feidie Virtual Reality Technology Co ltd
Original Assignee
Shenzhen Feidie Virtual Reality Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Feidie Virtual Reality Technology Co ltd filed Critical Shenzhen Feidie Virtual Reality Technology Co ltd
Priority to CN202211118126.XA priority Critical patent/CN115202485B/en
Publication of CN115202485A publication Critical patent/CN115202485A/en
Application granted granted Critical
Publication of CN115202485B publication Critical patent/CN115202485B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09FDISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F19/00Advertising or display means not otherwise provided for
    • G09F19/12Advertising or display means not otherwise provided for using special optical effects
    • G09F19/18Advertising or display means not otherwise provided for using special optical effects involving the use of optical projection means, e.g. projection of images on clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Marketing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an XR technology-based gesture-synchronized interactive exhibition hall display system, which comprises: the action tracking end is used for dynamically tracking the user to obtain dynamic tracking data; the interactive rendering end is used for rendering a synchronous interactive response virtual animation based on the dynamic tracking data and the preset virtual space video; the fusion rendering end is used for aligning, fusing and rendering the synchronous interactive response virtual animation, the foreground video of the user in the dynamic video and the background video in the preset virtual space video, based on the dynamic tracking data and the standard view video of the user, to generate the interactive synchronous display animation; the real-time projection end is used for projecting the interactive synchronous display animation to the LED immersive display screen in real time to obtain a real-time interactive display result. The system renders the synchronous interactive response virtual animation and the foreground video of the user based on the dynamic tracking data, aligns, fuses and renders them with the preset virtual space video, and projects the result to obtain interactive animation with high fit and high precision.

Description

XR (Extended Reality) technology-based gesture-synchronized interactive exhibition hall display system
Technical Field
The invention relates to the technical field of XR (Extended Reality), in particular to a gesture-synchronized interactive exhibition hall display system based on XR technology.
Background
At present, existing gesture-synchronized interactive exhibition halls mostly adopt technologies such as VR (Virtual Reality) and AR (Augmented Reality). Not only must the user wear wearable equipment to perform gesture interaction, but the auditory and visual experience such systems can offer is limited, and their interaction latency is high. XR (Extended Reality) refers to VR/AR/MR and related technologies; it is regarded as the next-generation experience revolution and computing platform, an advanced stage of the fusion of the digital world and the physical world, and a revolutionary upgrade of computing power, connectivity and display. Accordingly, a small number of interactive exhibition halls already apply XR technology to improve the user experience.
However, existing XR technology-based gesture-synchronized interactive exhibition hall display systems still suffer from poor interaction fit and low interactive response precision.
Therefore, the invention provides a gesture-synchronized interactive exhibition hall display system based on XR technology.
Disclosure of Invention
The invention provides a gesture-synchronized interactive exhibition hall display system based on XR technology, which renders a synchronous interactive response virtual animation from dynamic tracking data obtained by dynamically tracking the user, aligns, fuses and renders it with the foreground video of the user, and projects the result to obtain highly fitting and highly accurate interactive animation.
The invention provides an XR technology-based gesture-synchronized interactive exhibition hall display system, characterized by comprising:
the action tracking end is used for dynamically tracking the user based on the dynamic video to obtain dynamic tracking data;
the interactive rendering end is used for rendering a synchronous interactive response virtual animation based on the dynamic tracking data and the preset virtual space video;
the fusion rendering end is used for aligning, fusing and rendering the synchronous interactive response virtual animation, the foreground video of the user in the dynamic video and the background video in the preset virtual space video based on the dynamic tracking data and the standard visual field video of the user to generate interactive synchronous display animation;
and the real-time projection end is used for projecting the interactive synchronous display animation to the LED immersive display screen in real time to obtain a real-time interactive display result.
Preferably, the motion tracking terminal includes:
the video acquisition module is used for acquiring a dynamic video in an entity scene based on a camera arranged in an entity space of the interactive exhibition hall;
and the dynamic tracking module is used for dynamically tracking the user entering the entity scene based on the dynamic video to obtain dynamic tracking data of the corresponding user.
Preferably, the interactive rendering end includes:
the intention determining module is used for analyzing the action intention of the user based on the relative pose track in the dynamic tracking data and the standard visual field video of the user;
and the interactive rendering module is used for rendering and generating synchronous interactive response virtual animation based on the action intention of the user, aligning, fusing and rendering the foreground video, the synchronous interactive response virtual animation and the background video of the user in the dynamic video, and generating interactive synchronous display animation.
Preferably, the intent determination module includes:
the coordinate transformation unit is used for determining a first real-time eye socket coordinate set of the user in the entity scene based on the relative pose track in the dynamic tracking data, and determining a second real-time eye socket coordinate set of the user in the preset virtual space video based on a coordinate transformation relation between the entity scene and the preset virtual space video and the first real-time eye socket coordinate set;
the visual field determining unit is used for determining a first visual field image corresponding to the first visual field range in real time in a preset virtual space video based on the second real-time eye socket coordinate set and the standard visual field range, and acquiring a first visual field video based on the first visual field image determined in real time;
the deviation determining unit is used for carrying out time sequence alignment on the real-time fixation point coordinates of the two eyes of the user and the second real-time eye socket coordinate set to obtain first time sequence alignment data and determining fixation deviation angle change data based on the first time sequence alignment data;
the range correction unit is used for carrying out time sequence alignment on the watching deviation angle change data and the first visual field video to obtain second time sequence alignment data, and carrying out range correction on a video frame at a corresponding moment in the first visual field video in the second time sequence alignment data based on the real-time watching deviation angle to obtain a standard visual field video of the user;
and the intention determining unit is used for determining the action track of the user based on the dynamic tracking data, determining the current interaction target of the user based on the action track and the standard visual field video, and taking the current interaction target and the action track as the action intention of the user.
Preferably, the interactive rendering module includes:
the response determining unit is used for determining a current interaction target and a corresponding interaction response result in the preset virtual space video based on the action intention and the interaction response result list;
the interactive rendering unit is used for rendering and generating synchronous interactive response virtual animation based on the current interactive target, the corresponding interactive response result and the preset virtual space video;
and the alignment rendering unit is used for performing alignment fusion rendering on the foreground video, the synchronous interactive response virtual animation and the background video of the user in the dynamic video to generate the interactive synchronous display animation.
Preferably, the response determination unit includes:
a category determination subunit, configured to determine an action category based on the action track in the action intention;
and the response determining subunit is used for determining the current interaction target and the corresponding interaction response result based on the current interaction target in the action category and the action intention and the interaction response result list.
Preferably, the interactive rendering unit includes:
the contour determining subunit is used for determining action duration based on the action track in the action intention and determining a dynamic contour of the current interaction target in the action duration based on the preset virtual space video;
the distance determining subunit is configured to perform time sequence alignment on the motion trajectory and the dynamic contour to obtain a time sequence alignment result, determine, based on the time sequence alignment result, a distance difference between a real-time end point of the motion trajectory and each contour point included in the dynamic contour, and determine, based on the distance difference between the real-time end point and each contour point included in the dynamic contour at the corresponding time, a real-time minimum distance difference;
the specific determination subunit is used for taking the time when the real-time minimum distance difference reaches the standard response distance as a response starting time and taking a contour point corresponding to the minimum distance difference at the response starting time in the dynamic contour as an action response point;
the target rendering subunit is used for determining the response action type of the current interactive target based on the interactive response result, performing time sequence dynamic analysis on the action track to obtain an action rate, determining a response degree based on the action rate, and rendering a response video of the current interactive target based on the response degree, the response action type, an action response point and a contour coordinate set of the dynamic contour at a response starting moment;
and the connection rendering subunit is used for determining a response preamble time period based on the starting time and the response starting time of the action track, calling an original local video of the current interactive target in the response preamble time period from the preset virtual space video, and performing connection rendering on the original local video and the response video to obtain the synchronous interactive response virtual animation.
Preferably, the fusion rendering end includes:
the coordinate alignment module is used for determining a relative pose track of the user in the entity scene in the dynamic tracking data, and unifying the coordinates of a foreground video and a synchronous interactive response virtual animation of the user in the dynamic video and a background video in a preset virtual space video based on the relative pose track to obtain a unified result;
and the fusion rendering module is used for performing fusion rendering on the dynamic foreground video, the synchronous interactive response virtual animation and the background video in the preset virtual space video based on the unified result to generate the interactive synchronous display animation.
Preferably, the real-time projection terminal includes:
the parameter determination module is used for determining projection parameters of the interactive synchronous display animation based on the initial projection parameters of the preset virtual space video;
and the real-time projection module is used for projecting the interactive synchronous display animation to the LED immersive display screen in real time based on the projection parameters to obtain a real-time interactive display result.
Preferably, the parameter determining module includes:
the transformation determining unit is used for determining a coordinate transformation matrix between the interactive synchronous display animation and the preset virtual space video;
and the parameter determining unit is used for determining the projection parameters of the interactive synchronous display animation based on the coordinate transformation matrix and the initial projection parameters of the preset virtual space video.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of an XR technology-based gesture-synchronized interactive exhibition hall display system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an action tracking end according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an interactive rendering end according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an intent determination module in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of an interactive rendering module according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a response determination unit according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating an interactive rendering unit according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a fusion rendering end according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a real-time projection end according to an embodiment of the present invention;
fig. 10 is a diagram illustrating a parameter determination module according to an embodiment of the invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it should be understood that they are presented herein only to illustrate and explain the present invention and not to limit the present invention.
Example 1:
the invention provides an XR technology-based attitude synchronization interaction exhibition hall display system, which comprises the following components in reference to figure 1:
the action tracking end is used for dynamically tracking the user based on the dynamic video to obtain dynamic tracking data;
the interactive rendering end is used for rendering a synchronous interactive response virtual animation based on the dynamic tracking data and the preset virtual space video;
the fusion rendering end is used for aligning, fusing and rendering the synchronous interactive response virtual animation, the foreground video of the user in the dynamic video and the background video in the preset virtual space video based on the dynamic tracking data and the standard visual field video of the user to generate interactive synchronous display animation;
and the real-time projection end is used for projecting the interactive synchronous display animation to the LED immersive display screen in real time to obtain a real-time interactive display result.
In this embodiment, the dynamic video is a video for monitoring users in the interactive exhibition hall physical scene.
In this embodiment, the dynamic tracking of the user based on the dynamic video is that: and tracking real-time pose data of the moving track and the changing action of the user in the interactive exhibition hall based on the dynamic video.
In this embodiment, the dynamic tracking data is data of dynamic tracking of the user obtained by tracking the user from the dynamic video.
In this embodiment, the preset virtual space video is a space video projected on an LED immersive display screen of an exhibition hall.
In this embodiment, the synchronous interactive response virtual animation is an interactive animation of an object performing action interaction with a user in the preset virtual space video rendered based on the dynamic tracking data and the preset virtual space video.
In this embodiment, the relative pose trajectory is a pose movement trajectory of the user in the three-dimensional coordinate system corresponding to the physical scene in the physical scene of the exhibition hall.
In this embodiment, the standard view video is the video seen in the standard view of the user.
In this embodiment, the foreground video is a local video portion of the dynamic video that includes the user.
In this embodiment, the interactive synchronous display animation is a video animation that is generated by aligning, fusing and rendering a foreground video of a user in the dynamic video, a synchronous interactive response virtual animation, and a background video and that includes an interactive action process of the user and an interactive object on the basis of a preset virtual space video.
In this embodiment, the real-time interactive display result is a display result obtained by projecting the interactive synchronous display animation to the LED immersive display screen in real time.
In this embodiment, the background video is a complete background video excluding the local video including the current interactive target and the user from the preset virtual space video.
The beneficial effects of the above technology are: the synchronous interactive response virtual animation is rendered from the dynamic tracking data obtained by dynamically tracking the user, aligned, fused and rendered with the foreground video of the user, and projected, so that interactive animation with high fit and high accuracy is obtained.
Example 2:
on the basis of the embodiment 1, the motion tracking terminal, referring to fig. 2, includes:
the video acquisition module is used for acquiring a dynamic video in an entity scene based on a camera arranged in an entity space of the interactive exhibition hall;
and the dynamic tracking module is used for dynamically tracking the user entering the entity scene based on the dynamic video to obtain dynamic tracking data of the corresponding user.
In this embodiment, the interactive exhibition hall is an exhibition hall to which the XR technology-based gesture-synchronized interactive exhibition hall display system of the present invention is applied.
In this embodiment, the physical space is a space in the interactive exhibition hall.
In this embodiment, the physical scene is a scene in a physical space in the interactive exhibition hall.
In this embodiment, the dynamic video is a video for monitoring a user in an entity scene of the interactive exhibition hall.
The beneficial effects of the above technology are: based on a dynamic video acquired by a camera in the interactive exhibition hall, the tracking of the movement track and the pose change of a user in the exhibition hall is realized.
Example 3:
on the basis of the embodiment 1, the interactive rendering end, referring to fig. 3, includes:
the intention determining module is used for analyzing the action intention of the user based on the relative pose track in the dynamic tracking data and the standard visual field video of the user;
and the interactive rendering module is used for rendering and generating synchronous interactive response virtual animation based on the action intention of the user, aligning, fusing and rendering the foreground video, the synchronous interactive response virtual animation and the background video of the user in the dynamic video, and generating interactive synchronous display animation.
In this embodiment, the relative pose trajectory is a pose movement trajectory of the user in the interactive exhibition hall in a three-dimensional coordinate system corresponding to the entity scene.
In this embodiment, the action intention consists of the action trajectory of the user's interactive action and the current interaction target (an object in the preset virtual space video), i.e., the intention of the interactive action, analyzed based on the relative pose trajectory in the dynamic tracking data and the standard view video of the user.
In this embodiment, the synchronous interactive response virtual animation is a display animation corresponding to an interactive response result of the current interactive target generated by rendering based on the action intention of the user.
In this embodiment, the interactive synchronous display animation is a display animation of an interaction result between a user and a current interaction target obtained by aligning, fusing and rendering a foreground video of the user in a dynamic video and a synchronous interactive response virtual animation.
The beneficial effects of the above technology are: rendering to generate a synchronous interactive response virtual animation of the current interactive target based on the relative pose track in the dynamic tracking data and the action intention of the user analyzed from the standard video of the user, and then aligning, fusing and rendering the foreground video corresponding to the user and the synchronous interactive response virtual animation to generate a display animation capable of displaying the interactive process of the user and the current interactive target.
Example 4:
on the basis of embodiment 3, the intention determining module, referring to fig. 4, includes:
the coordinate transformation unit is used for determining a first real-time eye socket coordinate set of the user in the entity scene based on the relative pose track in the dynamic tracking data, and determining a second real-time eye socket coordinate set of the user in the preset virtual space video based on a coordinate transformation relation between the entity scene and the preset virtual space video and the first real-time eye socket coordinate set;
the visual field determining unit is used for determining a first visual field image corresponding to the first visual field range in real time in a preset virtual space video based on the second real-time eye socket coordinate set and the standard visual field range, and acquiring a first visual field video based on the first visual field image determined in real time;
the deviation determining unit is used for carrying out time sequence alignment on the real-time fixation point coordinates of the two eyes of the user and the second real-time eye socket coordinate set to obtain first time sequence alignment data and determining fixation deviation angle change data based on the first time sequence alignment data;
the range correction unit is used for carrying out time sequence alignment on the watching deviation angle change data and the first visual field video to obtain second time sequence alignment data, and carrying out range correction on a video frame at a corresponding moment in the first visual field video in the second time sequence alignment data based on the real-time watching deviation angle to obtain a standard visual field video of the user;
and the intention determining unit is used for determining the action track of the user based on the dynamic tracking data, determining the current interaction target of the user based on the action track and the standard visual field video, and taking the current interaction target and the action track as the action intention of the user.
In this embodiment, the first real-time eye socket coordinate set is the set of eye socket contour coordinate points of the user, acquired in real time, in the coordinate system corresponding to the physical scene.
In this embodiment, the coordinate transformation relationship is a relationship of coordinate transformation between the entity scene and the preset virtual space.
In this embodiment, the second real-time eye socket coordinate set is the set of eye socket contour coordinate points of the user in the coordinate system corresponding to the preset virtual space video, determined from the first real-time eye socket coordinate set based on the coordinate transformation relationship between the entity scene and the preset virtual space video.
In this embodiment, the standard field-of-view range is the comfortable viewing angle of the user: 60 degrees horizontally and 55 degrees vertically.
In this embodiment, determining, in real time, a first view image corresponding to the first field-of-view range in the preset virtual space video based on the second real-time eye socket coordinate set and the standard field-of-view range includes:
taking the average coordinate value of the second real-time eye socket coordinate set as the field-of-view central point, and determining the first field-of-view range based on the field-of-view central point and the standard field-of-view range (the first field-of-view range is centered on the field-of-view central point; the angle between the line from the upper boundary to the central point and the line from the lower boundary to the central point equals the vertical comfortable viewing angle of the standard field-of-view range, and the angle between the line from the left boundary to the central point and the line from the right boundary to the central point equals the horizontal comfortable viewing angle of the standard field-of-view range);
and the local image seen in the preset virtual space video within the first visual field range is the first visual field image.
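A minimal sketch of this field-of-view construction is given below. It assumes the second real-time eye socket coordinates are already expressed as x/y points on the display plane of the preset virtual space video and that the viewing distance to that plane is known; the function and variable names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

H_FOV_DEG = 60.0  # horizontal comfortable viewing angle from the description
V_FOV_DEG = 55.0  # vertical comfortable viewing angle from the description

def first_view_range(eye_socket_coords, viewing_distance):
    """Return the axis-aligned first field-of-view rectangle on the display plane."""
    coords = np.asarray(eye_socket_coords, dtype=float)      # (N, 2) x/y points
    center = coords.mean(axis=0)                             # field-of-view central point
    half_w = viewing_distance * np.tan(np.radians(H_FOV_DEG / 2.0))
    half_h = viewing_distance * np.tan(np.radians(V_FOV_DEG / 2.0))
    return (center[0] - half_w, center[1] - half_h,
            center[0] + half_w, center[1] + half_h)          # (x_min, y_min, x_max, y_max)
```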
In this embodiment, the first-view video is a video obtained by sorting and connecting the first-view images from front to back based on the acquisition time of the first-view images.
In this embodiment, the real-time fixation point coordinates of both eyes of the user are coordinate values of the pupil centers of both eyes.
In this embodiment, the first time-series alignment data is data obtained by time-series aligning the real-time fixation point coordinates of both eyes of the user with the second real-time eye socket coordinate set.
In this embodiment, the gaze deviation angle change data is determined based on the first time-series alignment data as follows:
determining the actual field-of-view center points of the user's two eyes based on the average of the real-time fixation point coordinates of both eyes in the time-series alignment data, determining the assumed field-of-view center points based on the second real-time eye socket coordinate set, aligning the actual field-of-view center points with the assumed field-of-view center points according to the time-series alignment data to obtain alignment data, calculating the distance difference between each actual field-of-view center point and the corresponding assumed field-of-view center point, determining the horizontal distance from the assumed field-of-view center to the LED immersive display screen, and taking the arctangent of the ratio of the distance difference to the horizontal distance as the corresponding gaze deviation angle;
and summarizing all the gaze deviation angles contained in the time sequence alignment data to obtain gaze deviation angle change data.
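The per-frame computation just described can be sketched as follows, assuming the actual and assumed field-of-view center points have already been time-series aligned; names are illustrative only.

```python
import numpy as np

def gaze_deviation_angles(actual_centers, assumed_centers, screen_distance):
    """actual_centers, assumed_centers: (T, 2) aligned center points on the display plane.
    Returns the gaze deviation angle (radians) per frame: arctan(offset / screen distance)."""
    actual = np.asarray(actual_centers, dtype=float)
    assumed = np.asarray(assumed_centers, dtype=float)
    offsets = np.linalg.norm(actual - assumed, axis=1)   # per-frame distance difference
    return np.arctan2(offsets, screen_distance)          # gaze deviation angle change data
```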
In this embodiment, the second time series alignment data is time series aligned data obtained by time series aligning the gaze deviation angle change data and the first view video.
In this embodiment, performing range correction on a video frame at a corresponding time in the first-view video in the second time-series alignment data based on the real-time gaze deviation angle includes:
determining a range correction direction based on the real-time gaze deviation angle, and taking the tangent of the deviation angle multiplied by the horizontal distance from the assumed field-of-view center to the LED immersive display screen as the translation distance;
translating the corresponding division region of the video frame of the corresponding moment in the first view video in the preset virtual space video to the corresponding translation distance in the range correction direction to obtain a new division region, taking the video frame region corresponding to the new division region as the video frame of the standard view video, and obtaining the standard view video based on the new video frame.
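A hedged sketch of this range correction step follows: each frame's crop region is shifted by tan(gaze deviation angle) times the horizontal screen distance along the correction direction. The (x_min, y_min, x_max, y_max) region format and the unit direction vector are assumptions for illustration.

```python
import math

def correct_view_region(region, deviation_angle, direction, screen_distance):
    """Shift the per-frame division region of the first-view video to obtain the
    corresponding standard-view frame region."""
    shift = math.tan(deviation_angle) * screen_distance    # translation distance
    dx, dy = direction[0] * shift, direction[1] * shift
    x0, y0, x1, y1 = region
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)            # new division region
```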
In this embodiment, the standard view video is a video seen in the standard view of the user obtained by performing range correction on a video frame at a corresponding moment in the first view video in the second time series alignment data based on the real-time gaze deviation angle.
In this embodiment, the motion trajectory is the motion trajectory of the user in the interactive exhibition hall obtained based on the dynamic tracking data.
In this embodiment, determining the current interaction target of the user based on the action track and the standard view video includes:
determining the pointing target of the user's interactive action based on the interaction targets contained in the standard view video and the action track of the user, and taking that pointing target as the current interaction target of the user.
The beneficial effects of the above technology are: on the basis of transforming the eye socket coordinate set of the user in the entity scene into the coordinate system of the preset virtual space video, the field-of-view center point of the user is accurately corrected by tracking the standard field of view and the binocular fixation points, so that an accurate field of view and field-of-view image of the user can be obtained; combined with the action track of the user, the current interaction target is determined accurately, providing technical support for high-precision interactive response between the user and the interaction targets contained in the preset virtual space video.
Example 5:
on the basis of the embodiment 3, the interactive rendering module, referring to fig. 5, includes:
the response determining unit is used for determining a current interaction target and a corresponding interaction response result in the preset virtual space video based on the action intention and the interaction response result list;
the interactive rendering unit is used for rendering and generating synchronous interactive response virtual animation based on the current interactive target, the corresponding interactive response result and the preset virtual space video;
and the alignment rendering unit is used for performing alignment fusion rendering on the foreground video, the synchronous interactive response virtual animation and the background video of the user in the dynamic video to generate the interactive synchronous display animation.
In this embodiment, the interactive response result list is a list including the interactive response result corresponding to each current interactive target.
In this embodiment, based on the action intention and the interactive response result list, a current interactive target and a corresponding interactive response result in the preset virtual space video are determined, that is:
and determining a current interactive target based on the action intention, and determining an interactive response result corresponding to the current interactive target based on the interactive response result list and the current interactive target.
In this embodiment, the preset virtual space video is a prepared video projected in the interactive exhibition hall.
The beneficial effects of the above technology are: rendering a synchronous interactive response virtual animation based on the current interactive target and the interactive response result determined by the interactive response result list, aligning, fusing and rendering the foreground video corresponding to the user and the synchronous interactive response virtual animation, and generating a display animation capable of displaying the interactive process of the user and the current interactive target.
Example 6:
on the basis of embodiment 5, the response determination unit, referring to fig. 6, includes:
a category determination subunit, configured to determine an action category based on the action track in the action intention;
and the response determining subunit is used for determining the current interaction target and the corresponding interaction response result based on the current interaction target in the action category and the action intention and the interaction response result list.
In this embodiment, determining the motion category based on the motion trajectory in the motion intention is that:
and determining the action type corresponding to the current action intention based on the direction angle corresponding to the action track and a preset action type list (namely, the list of the action types corresponding to the direction angles containing different action tracks).
In this embodiment, the action categories are, for example: page turning, waving, etc.
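An illustrative sketch of the preset action category list lookup described above is given below; the angle bins and category names are placeholder assumptions, not values taken from the patent.

```python
ACTION_CATEGORY_LIST = [
    ((-30.0, 30.0), "page turning"),   # roughly horizontal sweep
    ((60.0, 120.0), "waving"),         # roughly vertical oscillation
]

def action_category(direction_angle_deg):
    """Map the direction angle of an action track to an action category."""
    for (low, high), name in ACTION_CATEGORY_LIST:
        if low <= direction_angle_deg <= high:
            return name
    return "unknown"
```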
In this embodiment, the current interaction target and the corresponding interaction response result are determined in the interaction response result list based on the action category and the current interaction target in the action intention.
The beneficial effects of the above technology are: the corresponding interaction response result is determined in the interaction response result list based on the action category and the current interaction target determined from the action track, so that the interaction response result is determined accurately from the user's action track and current interaction target, realizing high-precision interaction with objects in the preset virtual space video.
Example 7:
on the basis of embodiment 5, the interactive rendering unit, referring to fig. 7, includes:
the contour determining subunit is used for determining action duration based on the action track in the action intention and determining a dynamic contour of the current interaction target in the action duration based on the preset virtual space video;
the distance determining subunit is configured to perform time sequence alignment on the motion trajectory and the dynamic contour to obtain a time sequence alignment result, determine, based on the time sequence alignment result, a distance difference between a real-time end point of the motion trajectory and each contour point included in the dynamic contour, and determine, based on the distance difference between the real-time end point and each contour point included in the dynamic contour at the corresponding time, a real-time minimum distance difference;
the specific determination subunit is used for taking the time when the real-time minimum distance difference reaches the standard response distance as a response starting time and taking a contour point corresponding to the minimum distance difference at the response starting time in the dynamic contour as an action response point;
the target rendering subunit is used for determining the response action type of the current interactive target based on the interactive response result, performing time sequence dynamic analysis on the action track to obtain an action rate, determining a response degree based on the action rate, and rendering a response video of the current interactive target based on the response degree, the response action type, an action response point and a contour coordinate set of the dynamic contour at a response starting moment;
and the connection rendering subunit is used for determining a response preamble time period based on the starting time and the response starting time of the action track, calling an original local video of the current interactive target in the response preamble time period from the preset virtual space video, and performing connection rendering on the original local video and the response video to obtain the synchronous interactive response virtual animation.
In this embodiment, the motion duration is a duration between a start time and an end time of the motion trajectory determined based on the dynamic video.
In this embodiment, the dynamic contour is a change process of a contour of the current interaction target determined based on the preset virtual space video within the action duration.
In this embodiment, the time sequence alignment result is obtained by time sequence aligning the motion trajectory and the dynamic profile.
In this embodiment, the real-time endpoint is an endpoint of the motion trajectory determined in real time.
In this embodiment, the distance difference is a distance difference between the real-time end point of the motion trajectory in the time sequence alignment result and each contour point in the dynamic contour at the corresponding time.
In this embodiment, the real-time minimum distance difference is a minimum value of the distance differences between the real-time endpoint at the corresponding time and each contour point in the dynamic contour at the corresponding time.
In this embodiment, the standard response distance is a minimum distance difference between a corresponding action trajectory end point of the user and the current interactive target when the preset interactive target starts to respond to the interactive action of the user.
In this embodiment, the response start time is a time corresponding to when the real-time minimum distance difference in the time sequence alignment result first reaches the standard response distance.
In this embodiment, the action response point is a contour point corresponding to the minimum distance difference at the response start time in the dynamic contour.
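A hedged sketch of this response-start logic: after time-series alignment, find the first moment at which the minimum distance between the real-time end point of the action track and the dynamic contour reaches the standard response distance. The data layout (one end point and one contour point array per aligned frame) is an assumption.

```python
import numpy as np

def find_response_start(trajectory_endpoints, dynamic_contours, standard_response_distance):
    """Return (response_start_index, action_response_point), or None if never reached."""
    for t, (endpoint, contour) in enumerate(zip(trajectory_endpoints, dynamic_contours)):
        dists = np.linalg.norm(np.asarray(contour) - np.asarray(endpoint), axis=1)
        k = int(dists.argmin())                      # real-time minimum distance difference
        if dists[k] <= standard_response_distance:   # reaches the standard response distance
            return t, np.asarray(contour)[k]         # response start time, action response point
    return None
```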
In this embodiment, the response action type is a type of a response action that is made by the current interaction target for the interaction action of the user, which is determined based on the interaction response result.
In this embodiment, the action rate is an average rate of the user action determined by performing time sequence dynamic analysis on the action trajectory.
In this embodiment, the response degree is determined based on the action rate, that is:
and determining the response degree corresponding to the action rate based on the action rate and the action rate-response degree list.
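A sketch of the action rate-response degree list lookup follows; the thresholds and degrees below are placeholder assumptions used only to illustrate the mapping.

```python
ACTION_RATE_RESPONSE_DEGREE = [
    (0.2, 0.3),            # slow action -> mild response
    (0.5, 0.6),            # medium action -> moderate response
    (float("inf"), 1.0),   # fast action -> full response
]

def response_degree(action_rate):
    """Return the response degree corresponding to the given action rate."""
    for max_rate, degree in ACTION_RATE_RESPONSE_DEGREE:
        if action_rate <= max_rate:
            return degree
```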
In this embodiment, based on the response degree, the response action type and the action response point, and the set of contour coordinates of the dynamic contour at the response start time, rendering the response video of the current interaction target, that is:
and inputting the response degree, the response action type, the action response point and the contour coordinate set of the dynamic contour at the response starting moment into a rendering model (namely a preset model for rendering the response animation video of the interaction target), and obtaining the response animation video of the interaction action of the current interaction target to the user.
In this embodiment, the response preamble time period is a time period from the start time of the action track to the response start time.
In this embodiment, the original local video is a local video segment of the current interaction target in the preset virtual space video within the response preamble time period.
In this embodiment, the synchronous interactive response virtual animation is a whole video animation obtained by performing connection rendering on the original local video and the response video.
The beneficial effects of the above technology are: the response start time and the action response point are determined by alignment analysis of the user's action track against the preset virtual space video, the response action type is determined from the interaction response result, and, combined with the action rate obtained by time-series dynamic analysis of the action track, the interaction response of the current interaction target to the interactive action is rendered accurately.
Example 8:
on the basis of the embodiment 1, the fusion rendering end, referring to fig. 8, includes:
the coordinate alignment module is used for determining a relative pose track of the user in the entity scene in the dynamic tracking data, and unifying the coordinates of a foreground video of the user in the dynamic video, the synchronous interactive response virtual animation and a background video in a preset virtual space video based on the relative pose track to obtain a unified result;
and the fusion rendering module is used for performing fusion rendering on the dynamic foreground video, the synchronous interactive response virtual animation and the background video in the preset virtual space video based on the unified result to generate the interactive synchronous display animation.
In this embodiment, unifying coordinates of a foreground video, a background video, and a synchronous interactive response virtual animation of a user in a dynamic video based on a relative pose trajectory to obtain a unified result, includes:
determining the transformed relative pose track of the relative pose track in the coordinate system corresponding to the preset virtual space video, based on the coordinate transformation relation between the coordinate system of the entity scene (in which the relative pose track is expressed) and the coordinate system of the preset virtual space video; determining, based on the transformed relative pose track, the coordinate values of the dynamic foreground video in the coordinate system of the preset virtual space video; the coordinate values of the foreground video, of the synchronous interactive response virtual animation and of the background video, all in the coordinate system of the preset virtual space video, constitute the unified result.
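A minimal sketch of this coordinate unification step, assuming the scene-to-virtual coordinate transformation relation is available as a pre-calibrated 4x4 homogeneous transform; the function and argument names are illustrative assumptions.

```python
import numpy as np

def unify_coordinates(points_scene, scene_to_virtual):
    """Map (N, 3) physical-scene points into the preset virtual-space-video coordinate system."""
    pts = np.asarray(points_scene, dtype=float)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])      # to homogeneous coordinates
    mapped = homo @ np.asarray(scene_to_virtual, dtype=float).T
    return mapped[:, :3] / mapped[:, 3:4]                    # back to Cartesian coordinates
```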
In this embodiment, based on the unified result, the dynamic foreground video, the synchronous interactive response virtual animation and the background video are subjected to fusion rendering to generate the interactive synchronous display animation, that is:
based on the coordinate values in the unified result of the foreground video, the synchronous interactive response virtual animation and the background video in the coordinate system corresponding to the preset virtual space video, the dynamic foreground video, the synchronous interactive response virtual animation and the background video are fused and rendered to generate the interactive synchronous display animation.
The beneficial effects of the above technology are: based on the relative pose track, the foreground video and the synchronous interactive response virtual animation of the user and the background video in the preset virtual space video are subjected to coordinate unification and then are subjected to fusion rendering, so that the synchronous interactive response virtual animation of the current interactive target, which is generated by rendering and is obtained based on the foreground video of the user, and the background video in the preset virtual space video are subjected to fusion rendering, and the synchronous accurate interaction display of the user and the preset virtual space video is realized.
Example 9:
on the basis of the embodiment 1, the real-time projection terminal, referring to fig. 9, includes:
the parameter determination module is used for determining projection parameters of the interactive synchronous display animation based on the initial projection parameters of the preset virtual space video;
and the real-time projection module is used for projecting the interactive synchronous display animation to the LED immersive display screen in real time based on the projection parameters to obtain a real-time interactive display result.
In this embodiment, the initial projection parameters are initial parameters set when the preset virtual space video is projected on the LED immersive display screen in the exhibition hall, and include: size parameters and display parameters, etc.
In this embodiment, the projection parameters are the projection parameters that should be set when the interactive synchronous display animation is projected to the LED immersive display screen in real time, and include: projection size parameters, etc.
In this embodiment, the real-time interactive display result is a real-time display result obtained after the interactive synchronous display animation is projected to the LED immersive display screen in real time based on the projection parameters.
The beneficial effects of the above technology are: determining projection parameters for the interactive synchronous display animation realizes seamless switching from the preset virtual space video to the interactive synchronous display animation and ensures consistent projection effects before and after the interaction.
Example 10:
on the basis of the embodiment 9, the parameter determining module, referring to fig. 10, includes:
the transformation determining unit is used for determining a coordinate transformation matrix between the interactive synchronous display animation and the preset virtual space video;
and the parameter determining unit is used for determining the projection parameters of the interactive synchronous display animation based on the coordinate transformation matrix and the initial projection parameters of the preset virtual space video.
In this embodiment, determining a coordinate transformation matrix between the interactive synchronous display animation and the preset virtual space video includes:
carrying out time sequence alignment on video segments in an interactive period (from an action track starting time to a response ending time) in the interactive synchronous display animation and the preset virtual space video to obtain an aligned video;
based on each aligned frame in the aligned video, determining the first coordinate value of each first pixel point in the first video frame (belonging to the interactive synchronous display animation) of that aligned frame, determining the second coordinate value of the second pixel point in the second video frame (belonging to the preset virtual space video) whose pixel visual values (chromaticity, contrast and gray scale) are consistent with those of the first pixel point, and determining the coordinate transformation matrix between the interactive synchronous display animation and the preset virtual space video based on the coordinate transformation relation between each first coordinate value and the corresponding second coordinate value.
In this embodiment, determining projection parameters of the interactive synchronous display animation based on the coordinate transformation matrix and initial projection parameters of the preset virtual space video includes:
and determining the picture size ratio of the aligned frame based on the coordinate transformation matrix, and taking the product of the picture size ratio and the initial projection parameter as a corresponding projection parameter.
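The two steps above can be sketched as follows: fit a 2D affine coordinate transformation matrix from matched pixel coordinates of an aligned frame, then scale the initial projection size parameters by the resulting picture size ratio. Pixel matching (by chromaticity, contrast and gray scale) is assumed to have been done already; the affine/least-squares choice and all names are assumptions for illustration.

```python
import numpy as np

def estimate_transform(first_coords, second_coords):
    """first_coords, second_coords: (N, 2) matched pixel coordinates. Returns a 2x3 matrix."""
    src = np.asarray(first_coords, dtype=float)
    dst = np.asarray(second_coords, dtype=float)
    A = np.hstack([src, np.ones((src.shape[0], 1))])   # [x, y, 1] rows
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)        # (3, 2) least-squares solution
    return M.T                                         # (2, 3) coordinate transformation matrix

def projection_parameters(transform, initial_size):
    """Scale the initial projection size parameters by the picture size ratio of the transform."""
    ratio = float(np.mean([np.linalg.norm(transform[:, 0]), np.linalg.norm(transform[:, 1])]))
    return tuple(ratio * s for s in initial_size)
```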
The beneficial effects of the above technology are: based on the coordinate transformation matrix between the interactive synchronous display animation and the preset virtual space video, the initial projection parameters are transformed into projection parameters suitable for the interactive synchronous display animation, realizing seamless switching from the preset virtual space video to the interactive synchronous display animation and ensuring consistent projection effects before and after the interaction.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. An XR technology-based gesture-synchronized interactive exhibition hall display system, characterized by comprising:
the action tracking end is used for dynamically tracking the user based on the dynamic video to obtain dynamic tracking data;
the interactive rendering end is used for rendering a synchronous interactive response virtual animation based on the dynamic tracking data and the preset virtual space video;
the fusion rendering end is used for aligning, fusing and rendering the synchronous interactive response virtual animation, the foreground video of the user in the dynamic video and the background video in the preset virtual space video based on the dynamic tracking data and the standard visual field video of the user to generate interactive synchronous display animation;
and the real-time projection end is used for projecting the interactive synchronous display animation to the LED immersive display screen in real time to obtain a real-time interactive display result.
2. The XR technology based gesture-synchronized interactive exhibition hall display system of claim 1, wherein the motion tracking end comprises:
the video acquisition module is used for acquiring a dynamic video in an entity scene based on a camera arranged in an entity space of the interactive exhibition hall;
and the dynamic tracking module is used for dynamically tracking the user entering the entity scene based on the dynamic video to obtain dynamic tracking data of the corresponding user.
3. The XR technology-based gesture-synchronized interactive exhibition hall display system of claim 1, wherein the interactive rendering end comprises:
the intention determining module is used for analyzing the action intention of the user based on the relative pose track in the dynamic tracking data and the standard visual field video of the user;
and the interactive rendering module is used for rendering and generating synchronous interactive response virtual animation based on the action intention of the user, aligning, fusing and rendering the foreground video, the synchronous interactive response virtual animation and the background video of the user in the dynamic video, and generating interactive synchronous display animation.
4. The XR technology-based gesture-synchronized interactive exhibition hall display system of claim 3, wherein the intent determination module comprises:
the coordinate transformation unit is used for determining a first real-time eye socket coordinate set of the user in the entity scene based on the relative pose track in the dynamic tracking data, and determining a second real-time eye socket coordinate set of the user in the preset virtual space video based on a coordinate transformation relation between the entity scene and the preset virtual space video and the first real-time eye socket coordinate set;
the visual field determining unit is used for determining a first visual field image corresponding to a first visual field range in real time in a preset virtual space video based on the second real-time eye socket coordinate set and the standard visual field range, and acquiring the first visual field video based on the first visual field image determined in real time;
the deviation determining unit is used for carrying out time sequence alignment on the real-time fixation point coordinates of the two eyes of the user and the second real-time eye socket coordinate set to obtain first time sequence alignment data and determining fixation deviation angle change data based on the first time sequence alignment data;
the range correction unit is used for carrying out time sequence alignment on the watching deviation angle change data and the first visual field video to obtain second time sequence alignment data, and carrying out range correction on a video frame at a corresponding moment in the first visual field video in the second time sequence alignment data based on the real-time watching deviation angle to obtain a standard visual field video of a user;
and the intention determining unit is used for determining the action track of the user based on the dynamic tracking data, determining the current interaction target of the user based on the action track and the standard visual field video, and taking the current interaction target and the action track as the action intention of the user.
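Claim 4 recites the gaze-deviation and range-correction steps at the level of functional units. A minimal numerical sketch of those two computations, in Python/NumPy and under assumptions the claim does not make (a nominal viewing axis, a panoramic virtual-space frame corrected by sliding a horizontal crop window, a fixed pixels-per-radian factor), might look like this:

    import numpy as np

    def fixation_deviation_angle(eye_pos, gaze_point, view_axis):
        """Angle (radians) between the nominal viewing axis and the direction from
        the eye position to the time-aligned fixation point (all 3-vectors)."""
        d = gaze_point - eye_pos
        d = d / np.linalg.norm(d)
        a = view_axis / np.linalg.norm(view_axis)
        return float(np.arccos(np.clip(d @ a, -1.0, 1.0)))

    def correct_view_frame(panorama_frame, crop_x, crop_w, deviation_rad, px_per_rad):
        """Range-correct one first-visual-field frame by sliding the horizontal crop
        window of the panoramic virtual-space frame by the fixation deviation."""
        shift = int(round(deviation_rad * px_per_rad))
        x0 = int(np.clip(crop_x + shift, 0, panorama_frame.shape[1] - crop_w))
        return panorama_frame[:, x0:x0 + crop_w]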
5. The XR technology-based gesture-synchronized interactive exhibition hall display system of claim 3, wherein the interactive rendering module comprises:
the response determining unit is used for determining a current interaction target and a corresponding interaction response result in the preset virtual space video based on the action intention and the interaction response result list;
the interactive rendering unit is used for rendering and generating synchronous interactive response virtual animation based on the current interactive target, the corresponding interactive response result and the preset virtual space video;
and the alignment rendering unit is used for performing alignment fusion rendering on the foreground video of the user in the dynamic video, the synchronous interactive response virtual animation, and the background video to generate the interactive synchronous display animation.
6. The XR technology-based gesture-synchronized interactive exhibition hall display system of claim 5, wherein the response determining unit comprises:
the category determining subunit is used for determining the action category based on the action track in the action intention;
and the response determining subunit is used for determining the current interaction target and the corresponding interaction response result based on the action category, the current interaction target in the action intention, and the interaction response result list.
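Claims 5 and 6 treat the interaction response result list as given data. The toy sketch below shows one plausible shape for that lookup: classify the action track into a category, then index the list by category and current interaction target. Every key, threshold, and response name here is invented for illustration and is not taken from the patent.

    # Hypothetical shape of the interaction response result list: it maps an
    # (action category, interaction target) pair to a response action type.
    RESPONSE_LIST = {
        ("point", "exhibit_globe"): "highlight",
        ("swipe", "exhibit_globe"): "rotate_with_hand",
    }

    def classify_action(track):
        """Toy action-category rule: long lateral motion is a swipe, otherwise a point."""
        dx = abs(track[-1][0] - track[0][0])
        return "swipe" if dx > 0.3 else "point"       # threshold in normalized screen units

    def determine_response(track, current_target):
        """Return (current interaction target, interaction response result)."""
        category = classify_action(track)
        return current_target, RESPONSE_LIST.get((category, current_target))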
7. The XR technology-based gesture-synchronized interactive exhibition hall display system of claim 5, wherein the interactive rendering unit comprises:
the contour determining subunit is used for determining action duration based on the action track in the action intention and determining a dynamic contour of the current interaction target in the action duration based on the preset virtual space video;
the distance determining subunit is used for performing time sequence alignment on the action track and the dynamic contour to obtain a time sequence alignment result, determining, based on the time sequence alignment result, a distance difference between the real-time end point of the action track and each contour point included in the dynamic contour, and determining a real-time minimum distance difference based on the distance differences between the real-time end point and the contour points included in the dynamic contour at the corresponding time;
the specific determination subunit is used for taking the time at which the real-time minimum distance difference reaches the standard response distance as a response starting time, and taking the contour point corresponding to the minimum distance difference at the response starting time in the dynamic contour as an action response point;
the target rendering subunit is used for determining the response action type of the current interactive target based on the interactive response result, performing time sequence dynamic analysis on the action track to obtain an action rate, determining a response degree based on the action rate, and rendering a response video of the current interactive target based on the response degree, the response action type, the action response point, and the contour coordinate set of the dynamic contour at the response starting time;
and the connection rendering subunit is used for determining a response preamble time period based on the starting time and the response starting time of the action track, calling an original local video of the current interaction target in the response preamble time period from the preset virtual space video, and performing connection rendering on the original local video and the response video to obtain the synchronous interaction response virtual animation.
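Claim 7's trigger logic can be restated compactly: after time-aligning the action track with the target's dynamic contour, the response starting time is the first instant at which the minimum end-point-to-contour distance falls within the standard response distance, and the nearest contour point at that instant becomes the action response point. A small Python/NumPy sketch under the assumption that both sequences are sampled at the same instants:

    import numpy as np

    def find_response_start(track_end_points, contours, standard_response_distance):
        """track_end_points: list of (x, y) real-time end points of the action track,
        one per time step; contours: list of (K_t x 2) arrays giving the target's
        dynamic contour at the same time steps.
        Returns (response start index, action response point) or None."""
        for t, (end_pt, contour) in enumerate(zip(track_end_points, contours)):
            dists = np.linalg.norm(contour - np.asarray(end_pt), axis=1)
            k = int(np.argmin(dists))                  # real-time minimum distance difference
            if dists[k] <= standard_response_distance:
                return t, tuple(contour[k])
        return None

The mapping from action rate to response degree and the rendering of the response video are not reproduced here, since the claim does not disclose their concrete form.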
8. The XR technology-based gesture-synchronized interactive exhibition hall display system of claim 1, wherein the fusion rendering end comprises:
the coordinate alignment module is used for determining the relative pose track of the user in the entity scene from the dynamic tracking data, and unifying the coordinates of the foreground video of the user in the dynamic video, the synchronous interactive response virtual animation, and the background video in the preset virtual space video based on the relative pose track to obtain a unified result;
and the fusion rendering module is used for performing fusion rendering on the foreground video of the user in the dynamic video, the synchronous interactive response virtual animation, and the background video in the preset virtual space video based on the unified result to generate the interactive synchronous display animation.
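Claim 8 unifies the coordinates of the three layers through the user's relative pose track. As an illustration only, and under the simplifying assumption (not stated in the claim) that one pose sample reduces to a position plus a yaw angle, the per-frame unification can be expressed as a homogeneous transform applied to each layer's points:

    import numpy as np

    def pose_to_matrix(position_xyz, yaw_rad):
        """Rigid transform for one sample of the relative pose track: rotation about
        the vertical axis followed by a translation."""
        c, s = np.cos(yaw_rad), np.sin(yaw_rad)
        T = np.eye(4)
        T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        T[:3, 3] = position_xyz
        return T

    def unify_layer(points, T):
        """Express an Nx3 set of layer coordinates (foreground or response animation)
        in the background video's coordinate frame."""
        homo = np.hstack([points, np.ones((len(points), 1))])
        return (homo @ T.T)[:, :3]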
9. The XR technology-based gesture-synchronized interactive exhibition hall display system of claim 1, wherein the real-time projection terminal comprises:
the parameter determination module is used for determining projection parameters of the interactive synchronous display animation based on the initial projection parameters of the preset virtual space video;
and the real-time projection module is used for projecting the interactive synchronous display animation to the LED immersive display screen in real time based on the projection parameters to obtain a real-time interactive display result.
10. The XR technology-based gesture-synchronized interactive exhibition hall display system of claim 9, wherein the parameter determination module comprises:
the transformation determining unit is used for determining a coordinate transformation matrix between the interactive synchronous display animation and the preset virtual space video;
and the parameter determining unit is used for determining the projection parameters of the interactive synchronous display animation based on the coordinate transformation matrix and the initial projection parameters of the preset virtual space video.
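Claim 10 obtains the projection parameters from a coordinate transformation matrix and the initial projection parameters of the preset virtual space video. One simple reading, offered only as an assumption about how such parameters might compose (the patent fixes no matrix convention), is a plain matrix product:

    import numpy as np

    def derive_projection(initial_projection, coord_transform):
        """If `initial_projection` maps preset virtual-space coordinates to the LED
        screen and `coord_transform` maps display-animation coordinates into that
        virtual space, their product projects the animation directly."""
        return initial_projection @ coord_transform

    # Hypothetical usage with homogeneous 4-vectors:
    # screen_pt = derive_projection(P0, M) @ np.append(anim_pt, 1.0)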
CN202211118126.XA 2022-09-15 2022-09-15 XR (extended reality) technology-based gesture synchronous interactive exhibition hall display system Active CN115202485B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211118126.XA CN115202485B (en) 2022-09-15 2022-09-15 XR (extended reality) technology-based gesture synchronous interactive exhibition hall display system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211118126.XA CN115202485B (en) 2022-09-15 2022-09-15 XR (extended reality) technology-based gesture synchronous interactive exhibition hall display system

Publications (2)

Publication Number Publication Date
CN115202485A true CN115202485A (en) 2022-10-18
CN115202485B (en) 2023-01-06

Family

ID=83572141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211118126.XA Active CN115202485B (en) 2022-09-15 2022-09-15 XR (extended reality) technology-based gesture synchronous interactive exhibition hall display system

Country Status (1)

Country Link
CN (1) CN115202485B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101539804A (en) * 2009-03-11 2009-09-23 上海大学 Real time human-machine interaction method and system based on augmented virtual reality and anomalous screen
CN103426195A (en) * 2013-09-09 2013-12-04 天津常青藤文化传播有限公司 Method for generating three-dimensional virtual animation scenes watched through naked eyes
CN205793049U (en) * 2016-06-15 2016-12-07 苏州创捷传媒展览股份有限公司 Augmented reality scene news conference system
US20180165879A1 (en) * 2016-12-09 2018-06-14 Fyusion, Inc. Live augmented reality using tracking
CN107333121A (en) * 2017-06-27 2017-11-07 山东大学 The immersion solid of moving view point renders optical projection system and its method on curve screens
US20210027511A1 (en) * 2019-07-23 2021-01-28 LoomAi, Inc. Systems and Methods for Animation Generation
WO2021249414A1 (en) * 2020-06-10 2021-12-16 阿里巴巴集团控股有限公司 Data processing method and system, related device, and storage medium
CN112684894A (en) * 2020-12-31 2021-04-20 北京市商汤科技开发有限公司 Interaction method and device for augmented reality scene, electronic equipment and storage medium
US20220222900A1 (en) * 2021-01-14 2022-07-14 Taqtile, Inc. Coordinating operations within an xr environment from remote locations
CN113132707A (en) * 2021-04-16 2021-07-16 中德(珠海)人工智能研究院有限公司 Method and system for dynamically superposing character and virtual decoration environment in real time
CN114035681A (en) * 2021-10-29 2022-02-11 王朋 3D active stereo interactive immersive virtual reality CAVE system

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116614676A (en) * 2023-07-14 2023-08-18 南京维赛客网络科技有限公司 Method, system and storage medium for replaying virtual character animation in message synchronization
CN116614676B (en) * 2023-07-14 2023-09-12 南京维赛客网络科技有限公司 Method, system and storage medium for replaying virtual character animation in message synchronization
CN117834949A (en) * 2024-03-04 2024-04-05 清华大学 Real-time interaction prerendering method and device based on edge intelligence
CN117834949B (en) * 2024-03-04 2024-05-14 清华大学 Real-time interaction prerendering method and device based on edge intelligence

Also Published As

Publication number Publication date
CN115202485B (en) 2023-01-06

Similar Documents

Publication Publication Date Title
CN115202485B (en) XR (extended reality) technology-based gesture synchronous interactive exhibition hall display system
Arthur et al. Evaluating 3d task performance for fish tank virtual worlds
CN110675489B (en) Image processing method, device, electronic equipment and storage medium
CN111294665B (en) Video generation method and device, electronic equipment and readable storage medium
CN107105333A (en) A kind of VR net casts exchange method and device based on Eye Tracking Technique
US9001115B2 (en) System and method for three-dimensional visualization of geographical data
WO2020006519A1 (en) Synthesizing an image from a virtual perspective using pixels from a physical imager array
CN113035010B (en) Virtual-real scene combined vision system and flight simulation device
CN111275801A (en) Three-dimensional picture rendering method and device
CN106095106A (en) Virtual reality terminal and display photocentre away from method of adjustment and device
Moeslund et al. A natural interface to a virtual environment through computer vision-estimated pointing gestures
CN115359093A (en) Monocular-based gaze estimation and tracking method
CN107562185B (en) Light field display system based on head-mounted VR equipment and implementation method
CN111047713B (en) Augmented reality interaction system based on multi-vision positioning and operation method thereof
CN109427094B (en) Method and system for acquiring mixed reality scene
CN111881807A (en) VR conference control system and method based on face modeling and expression tracking
CN114926613A (en) Method and system for enhancing reality of human body data and space positioning
CN113866987A (en) Method for interactively adjusting interpupillary distance and image surface of augmented reality helmet display by utilizing gestures
CN112288890A (en) Model editing method and system
US20010043395A1 (en) Single lens 3D software method, system, and apparatus
CN109427093B (en) Mixed reality system
Hua et al. Calibration of an HMPD-based augmented reality system
Yoshimura et al. Appearance-based gaze estimation for digital signage considering head pose
CN109062413A (en) A kind of AR interactive system and method
Liu et al. Depth perception optimization based on multi-viewing spaces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 502-503, Floor 5, Building 5, Hongtai Smart Valley, No. 19, Sicheng Road, Tianhe District, Guangzhou, Guangdong 518000

Patentee after: Guangdong Feidie Virtual Reality Technology Co.,Ltd.

Address before: 518000 3311, 3rd floor, building 1, aerospace building, No.51, Gaoxin South nine road, Gaoxin community, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen FEIDIE Virtual Reality Technology Co.,Ltd.