CN111727924B - Mixed reality fish tank system in stereoscopic display environment and generation method


Info

Publication number
CN111727924B
CN111727924B
Authority
CN
China
Prior art keywords
fish
real
virtual
point
real fish
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010676771.8A
Other languages
Chinese (zh)
Other versions
CN111727924A (en)
Inventor
杨承磊
靳新培
耿文秀
刘娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University
Priority to CN202010676771.8A
Publication of CN111727924A
Application granted
Publication of CN111727924B

Classifications

    • A - HUMAN NECESSITIES
    • A01 - AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K - ANIMAL HUSBANDRY; CARE OF BIRDS, FISHES, INSECTS; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K63/00 - Receptacles for live fish, e.g. aquaria; Terraria
    • A01K63/003 - Aquaria; Terraria
    • A01K63/006 - Accessories for aquaria or terraria
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The present disclosure provides a mixed reality fish tank system and generation method in a stereoscopic display environment. The system obtains the position of a real fish and the position of the user's head in real time; compensates the obtained real fish position to recover the true position of the real fish in the Kinect camera space; maps the obtained real fish position information into the virtual scene, places a virtual real fish label near the real fish, and performs real-time collision detection between the real fish and virtual objects, so that virtual objects effectively avoid colliding with the real fish, while also detecting whether the real fish occludes the virtual scene so that scene information can be acquired effectively; and, using the imaging principle of stereoscopic display, accurately places virtual fish in the fish tank environment and integrates the virtual fish with that environment. The method and device achieve virtual-real fusion of the MR fish tank content; the generated stereoscopic picture is more vivid and realistic, providing the user with a convincing mixed reality experience.

Description

Mixed reality fish tank system in stereoscopic display environment and generation method
Technical Field
The disclosure belongs to the technical field of virtual reality interaction, and relates to a mixed reality fish tank system and a generating method in a stereoscopic display environment.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Fish tanks and fish have positive effects on people's physical and psychological health. O'Haire et al. found that fish tanks have a soothing effect: watching fish is relaxing and holds attention, making the tank a suitable way of simulating nature. In addition, the fish tank is a common ornamental fixture in Chinese households, with a simple structure and low cost, so a simple and vivid platform for ability training and personal development can be designed with Mixed Reality (MR) technology using the fish tank as a carrier.
Virtual-real fusion technology strives for seamless integration among people, the real environment and the virtual environment in order to achieve natural and vivid human-computer interaction, a common goal of current research. Constructing a mixed environment of virtual-real fusion involves not only the fused presentation of virtual and real environments but also key technologies such as high-precision positioning, optical display and multi-sensory interaction. However, some existing mixed reality systems simply superimpose the virtual environment onto the real environment without considering effective fusion of real objects with the virtual environment, so a true mixed reality effect is not achieved.
At present, the problems of fish tanks based on mixed reality technology mainly manifest as follows:
real fish swimming back and forth in the tank occlude the virtual scene to some extent, so the user cannot effectively acquire virtual scene information; and the virtual environment is merely superimposed on the real environment, without considering effective fusion of the real objects with the virtual environment, so the mixed reality effect is not achieved. Virtual-real occlusion therefore not only impairs the visual immersion of a mixed reality fish tank but also degrades the user's interactive experience.
Disclosure of Invention
In order to solve the above problems, the present disclosure provides a mixed reality fish tank system and generation method in a stereoscopic display environment, which achieve virtual-real fusion of the MR fish tank content and generate a more vivid, realistic stereoscopic picture, providing the user with a convincing mixed reality experience. The real fish in the tank is detected in real time to obtain its coordinate position, and a light-refraction method then compensates in real time for the influence of refraction on that position. On the basis of the position information, a stereoscopic rendering method displays real fish labels in the real fish tank in real time, and collision detection and avoidance between virtual and real fish are realized. In addition, the problem of the real fish occluding the virtual scene is solved, so the user can effectively acquire scene information.
According to some embodiments, the following technical scheme is adopted in the disclosure:
a mixed reality fish tank system in a stereoscopic display environment, comprising:
a position tracking module configured to obtain real fish positions and user head positions in real time;
the refraction compensation module is configured to compensate the acquired real fish position to obtain the real position of the real fish in the Kinect camera space;
the virtual content setting module is configured to map the obtained real fish position information into a virtual scene, place a virtual real fish label near the real fish, and perform real-time collision detection on the real fish and the virtual object, so that the virtual object effectively avoids collision of the real fish, and simultaneously detect whether the real fish blocks the virtual scene, so as to effectively acquire scene information;
and the stereoscopic display module is configured to accurately place the virtual fish in the fish tank environment by utilizing the imaging principle of stereoscopic display and integrate the virtual fish and the fish tank environment.
As an alternative embodiment, the position tracking module includes at least two RGB-D cameras, and the real fish position and the user head position are respectively obtained in real time.
As an alternative embodiment, the position tracking module detects the movement of a single fish by using a Gaussian mixture model separation algorithm on a depth image with resolution W × H.
As an alternative embodiment, the refraction compensation module calculates the refraction compensation value using Snell's law.
As an alternative embodiment, the virtual content setting module is configured to perform real fish occlusion detection; specifically, a button is configured, and whether a real fish occludes the line between the button and the human eye is determined. If occlusion exists, the user cannot see the button, and the virtual content setting module updates the button position.
As an alternative embodiment, the virtual content setting module is configured to display real fish tags in real time, configure a virtual tag near the position of a real fish in real time by using the position information of the real fish, label the real fish information, and move along with the real fish.
As an alternative embodiment, the virtual content setting module is configured to perform collision detection and avoidance of real fish and virtual fish, specifically including planning a roaming path of the virtual fish; and arranging collision bodies around the real fish, controlling the virtual fish to emit rays forward in the roaming process, performing collision detection to judge whether barriers exist in the front, and setting the avoiding direction of the virtual fish.
A method for generating a mixed reality fish tank in a stereoscopic display environment comprises the following steps:
real fish positions and user head positions are obtained in real time;
compensating the obtained real fish position to obtain the real position of the real fish in the Kinect camera space;
mapping the obtained real fish position information to a virtual scene, placing a virtual real fish label near the real fish, performing real-time collision detection on the real fish and a virtual object, controlling the virtual object to effectively avoid the collision of the real fish, and detecting whether the real fish blocks the virtual scene so as to effectively obtain scene information;
by utilizing the imaging principle of stereoscopic display, the virtual fish is accurately placed in the fish tank environment, and the virtual fish and the fish tank environment are integrated into a whole, so that the virtual-real integrated mixed reality fish tank is obtained.
As an alternative embodiment, the specific process of compensating the obtained real fish position to obtain the real position of the real fish in the Kinect camera space includes:
calculating the refraction angle of the refracted ray entering the Kinect camera from the positional relation between the virtual image P_K and the origin of the Kinect camera space coordinate system;
obtaining the incident angle of the ray according to Snell's law;
calculating, from the z coordinate of the object point P and the z coordinate of the refraction point i, the difference between the x coordinate of the object point P and the x coordinate of the refraction point i, and further determining the x and y coordinates of the object point P;
fitting the relation between the z coordinate of the object point before and after refraction and the depth position of the virtual image, and determining, from the depth position of the virtual image P_K in the water, the actual depth position before refraction, i.e., the z coordinate of the object point P.
As an alternative embodiment, the process of mapping the obtained real fish position information into the virtual scene includes: converting the positions of the real fish and the human eyes into the same coordinate system, namely the Kinect camera space used for skeletal tracking.
As an alternative embodiment, the specific process of detecting whether a real fish occludes the virtual scene includes: configuring a button in the virtual scene, and calculating the distance d from the real fish to the straight line between the human eyes and the button according to the positions of the button, the real fish and the human eyes;
setting an occlusion threshold according to the sizes of the button and the real fish;
comparing the calculated distance d with the occlusion threshold; if the distance d is less than or equal to the occlusion threshold, occlusion occurs and the position of the button needs to be updated.
As an alternative embodiment, the specific process of controlling the virtual object to effectively avoid the collision of the real fish includes: planning a roaming path of the virtual fish; and arranging collision bodies around the real fish, controlling the virtual fish to emit rays forward in the roaming process, performing collision detection to judge whether barriers exist in the front, and setting the avoiding direction of the virtual fish.
As an alternative embodiment, the specific process of planning the roaming path of the virtual fish includes:
adopting Perlin noise to randomly generate a smooth speed;
setting two parameters of a loitering period and a change probability, wherein the loitering period determines the time interval of angle change; and the change probability determines whether the angle change is carried out in the loitering period, a random number between 0 and 1 is generated in each loitering period, if the random number is smaller than the change probability, a random angle is set, the direction of the virtual fish is changed according to the angle, and otherwise, the direction of the virtual fish is not changed.
As an alternative embodiment, the specific process of setting the avoidance direction of the virtual fish includes:
acquiring a collision point and a ray reflection direction according to ray collision detection, calculating to obtain a reflection point, and calculating the distance from the reflection point to the collision point;
taking a point inside the fish tank, and taking the midpoint of this point and the reflection point as the evasion target point;
and calculating an avoidance direction vector according to the target point and the current position of the virtual fish, and avoiding according to the avoidance direction vector.
Compared with the prior art, the beneficial effects of the present disclosure are:
virtual-real fusion of the MR fish tank content is realized, the generated stereoscopic picture is more vivid and real, and the user is given a convincing mixed reality experience;
the technical scheme requires no additional user training, only a natural mode of interaction; it has a wide application range, suits environments such as laboratories, hospitals and ordinary family homes, and can be used for cognitive training, language learning, vocabulary learning or rehabilitation training.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and are not to limit the disclosure.
FIGS. 1(a) and 1(b) are system architecture diagrams;
fig. 2(a), 2(b) and 2(c) are a detection result and a foreground screenshot on a depth image by using a gaussian mixture model separation algorithm;
FIG. 3 is a schematic view of refraction;
FIG. 4(a) and FIG. 4(b) are schematic diagrams of experiments for analyzing the depth position change of the same position in the fish tank before and after refraction;
FIG. 5 is a fit of pre-refraction and post-refraction depth positions in Matlab;
FIGS. 6(a) - (c) show the effect of a user viewing a stereoscopic projection;
FIG. 7 is a schematic diagram of a button being occluded by a real fish;
FIG. 8 is a diagram of the screen coordinate system and the Kinect coordinate system;
FIGS. 9(a) - (c) are virtual tag following displays of real fish;
FIG. 10 is an obstacle avoidance schematic;
fig. 11(a) and 11(b) are comparison graphs of obstacle avoidance effects of virtual fish;
fig. 12(a) and 12(b) are diagrams illustrating the effect of virtual fish on avoiding real fish.
Detailed Description
the present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
A mixed reality fish tank system in a three-dimensional display environment mainly comprises a position tracking module, a refraction compensation module, a three-dimensional display module and a virtual content setting module.
A position tracking module: two RGB-D cameras obtain the real fish position and the user head position respectively in real time; this embodiment uses Kinect cameras as the example.
A refraction compensation module: and compensating the detected real fish position by refraction compensation to obtain the real position of the real fish in the Kinect camera space.
The stereoscopic display module: the imaging principle of stereoscopic display is utilized, virtual fish is accurately placed in a real fish tank environment, the virtual fish and the real fish tank environment are integrated by means of the display equipment, new experience of real sensory effect of a user is presented, and a virtual-real integrated mixed reality fish tank is realized.
A virtual content setting module: the virtual content setting utilizes the obtained real fish position information to map the real fish position information to a virtual scene, virtual real fish labels are placed near the real fish, real-time collision detection is carried out on the real fish and virtual objects, the virtual objects effectively avoid collision of the real fish, and meanwhile, whether the real fish blocks the virtual scene is detected, so that a user can effectively obtain scene information.
The position tracking module detects the movement of a single fish by applying a Gaussian mixture model separation algorithm to a depth image with resolution W × H (the Kinect depth image resolution is 512 × 424). This is a background subtraction method based on adaptive Gaussian mixture background modeling. Running the algorithm on the Kinect depth image also effectively avoids the influence of sudden illumination changes.
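As a minimal sketch of this detection step, the following Python code applies OpenCV's Gaussian-mixture background subtractor (MOG2) as a stand-in for the adaptive mixture model described above; the depth-normalization constant, morphology kernel and Kinect frame acquisition are illustrative assumptions, not values from the disclosure.

```python
# Sketch: single-fish detection on Kinect depth frames via a Gaussian
# mixture background model (OpenCV MOG2). Frame capture is stubbed out.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def detect_fish(depth_frame_mm: np.ndarray):
    """depth_frame_mm: 512x424 uint16 Kinect depth image in millimetres."""
    # Normalise depth to 8 bits; 4500 mm is an assumed maximum range.
    depth_8u = cv2.convertScaleAbs(depth_frame_mm, alpha=255.0 / 4500.0)
    fg = subtractor.apply(depth_8u)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)   # a single fish: largest blob
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # centroid pixel (u, v)
```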
The refraction compensation module calculates the refraction compensation using Snell's law; between the fish and the Kinect the light passes through only one layer of glass, which is thin enough that its effect can be ignored.
The stereoscopic display module uses stereoscopic vision and stereoscopic rendering technology to display the virtual scene information in the real fish tank environment with an on-screen stereoscopic effect.
The virtual content setting module comprises real fish shielding processing, real fish label real-time display and collision detection and avoidance, and specifically comprises the following steps:
1. Real fish occlusion processing: real fish occlusion processing includes real fish occlusion detection and button position updating. Real fish occlusion detection determines whether a real fish occludes the line between the button and the human eye; if occlusion exists, the user cannot see the button, and the button position is then updated.
2. Real-time display of real fish labels: to effectively construct the virtual-real fish tank environment, this embodiment displays real fish labels in real time in the virtual content; that is, using the real fish position information, a virtual label is placed near the real fish's position in real time and moves along with the fish, increasing the user's understanding of the fish.
3. Collision detection and avoidance: to avoid collisions between real fish and virtual fish in the mixed reality fish tank, this embodiment provides a collision detection and avoidance algorithm. The algorithm mainly comprises three parts: virtual fish roaming path planning, obstacle avoidance, and virtual fish position updating.
(1) Planning a roaming path of the virtual fish: the roaming of the virtual fish refers to that the virtual fish moves irregularly and purposelessly in the real fish tank.
(2) Obstacle avoidance: obstacle avoidance comprises two stages, obstacle detection and avoidance direction setting. Obstacle detection is based on collision detection: using the real-time real fish position from the label display, a collision body is placed around the real fish. The virtual fish emits rays forward during roaming and performs collision detection to judge whether an obstacle lies ahead. Once an obstacle is detected, the avoidance direction of the virtual fish is set so that the virtual fish does not collide with the real fish.
(3) Updating the position of the virtual fish: roaming path planning and obstacle avoidance mainly change the direction of the virtual fish; the position of the virtual fish must also be updated in real time to realize the swimming effect.
The real fish in the tank is detected in real time to obtain its coordinate position, and a refraction-based offset compensation method compensates the influence of refraction on that position in real time. On the basis of the position information, a stereoscopic rendering method displays real fish labels in the real fish tank in real time, collision detection and avoidance between virtual and real fish are realized, and the realism of the mixed reality interface is enhanced. In addition, the problem of the real fish occluding the button is solved. The technique realizes a virtual-real fused mixed reality fish tank whose stereoscopic picture is more vivid and real, providing users with a novel mixed reality experience. In an exemplary embodiment, FIG. 1(a) shows the system hardware architecture; the main hardware devices are a projector, a host, stereo glasses, a modified fish tank and RGB-D cameras (Kinect is used as the example). An ordinary fish tank was modified: one side of the tank carries a liquid-crystal dimming film for projection display, and the other side carries an infrared touch frame to support touch operation. To prevent the user from blocking the Kinect and disturbing real fish detection, the Kinect for real fish detection is placed at the side of the tank, while another Kinect is placed at the front of the tank for skeletal tracking, enabling the real fish occlusion processing.
FIG. 1(b) shows the functional architecture of the system. In a stereoscopic display environment the MR fish tank system mainly includes four parts: position tracking, refraction compensation, virtual content setting and stereoscopic display. First, position tracking is the foundation: the real fish position and the user head position are obtained in real time with the Kinects. Second, refraction compensation corrects the detected real fish position to obtain the real position of the fish in the Kinect camera space. Then, virtual content setting maps the obtained real fish position into the virtual scene, places a virtual real fish label near the real fish, and performs real-time collision detection between the real fish and virtual objects so that virtual objects effectively avoid the real fish, while also detecting whether the real fish occludes the virtual scene so the user can effectively acquire scene information. Finally, the virtual content is displayed stereoscopically in the real fish tank using the stereoscopic display principle.
The detection method applies the Gaussian mixture model separation algorithm to the Kinect depth image. In the depth image obtained by the Kinect, each pixel carries the depth of the corresponding point, and MapDepthFrameToCameraSpace converts the depth data to three-dimensional coordinates. The three-dimensional coordinate P_K of a moving object in the Kinect camera space can therefore be obtained from the object's pixel coordinates in the depth image. As the figures show, detection on the depth image yields a fairly ideal result in most cases, as in FIG. 2(a). Moreover, when the real fish swims close to the glass, no mirror image appears at the glass in the depth image, as in FIG. 2(b). In addition, since the depth image is unaffected by illumination, sudden illumination changes do not disturb real fish detection, as in FIG. 2(c).
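Conceptually, the depth-pixel-to-camera-space mapping performed by MapDepthFrameToCameraSpace is a pinhole back-projection; the sketch below illustrates it in Python with assumed, uncalibrated intrinsics rather than the values the Kinect SDK actually uses.

```python
# Sketch: back-projecting a depth pixel (u, v, depth) to a 3D point P_K.
FX, FY = 365.0, 365.0   # focal lengths in pixels (assumed)
CX, CY = 256.0, 212.0   # principal point for a 512x424 image (assumed)

def depth_pixel_to_camera(u: float, v: float, depth_mm: float):
    z = depth_mm / 1000.0           # metres
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return (x, y, z)                # P_K in Kinect camera space
```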
Fig. 3 shows a refraction diagram, and the calculation process is described as follows:
the method comprises the following steps: from virtual image PKAnd Kinect (origin of the Kinect camera space coordinate system), the refraction angle β of the refracted ray entering the Kinect camera can be calculated:
Figure BDA0002584330380000111
step two: according to Snell's law, the incident angle α of the light can be obtained:
Figure BDA0002584330380000121
step three: according to the z coordinate of the object point P and the z coordinate of the refraction point i, the difference value Deltax between the x coordinate of the object point P and the x coordinate of the refraction point i can be obtained:
Figure BDA0002584330380000122
step four: the x coordinate of the object point P can be found as follows:
Figure BDA0002584330380000123
likewise, the y coordinate of the object point P can be calculated.
Step 5: since a single incident ray cannot determine the actual position of the fish, the change in depth position before and after refraction was studied at the same positions in the fish tank. For 72 different positions in the tank, depth position information was collected with the Kinect before water was added (no refraction) and after water was added (refraction), and a regression equation between the two was fitted in Matlab: z_before = a * z_after + b. Using this regression equation, the actual depth position before refraction, i.e., the z coordinate of the object point P, is determined from the depth position of the virtual image P_K in the water. The actual position P of the fish in the water is then obtained from the incident ray and the actual depth position.
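The five steps can be combined into one routine. The following Python sketch assumes the glass plane is perpendicular to the Kinect z axis, treats the x and y components independently as in steps 3-4, and uses placeholder regression coefficients a, b in place of the Matlab fit.

```python
# Sketch: refraction compensation of a detected fish position (steps 1-5).
import math

N_WATER = 1.33          # refractive index of water
A, B = 1.0, 0.0         # placeholder coefficients of z_before = a*z_after + b

def _component(c_k, z_k, z_i, z_p):
    beta = math.atan2(c_k, z_k)                    # step 1: refraction angle
    alpha = math.asin(math.sin(beta) / N_WATER)    # step 2: Snell's law
    c_i = z_i * math.tan(beta)                     # refraction point on interface
    return c_i + (z_p - z_i) * math.tan(alpha)     # steps 3-4

def compensate(p_k, z_i):
    """p_k: virtual image (x_K, y_K, z_K) in Kinect camera space;
    z_i: depth of the refraction plane (the glass/water interface)."""
    x_k, y_k, z_k = p_k
    z_p = A * z_k + B                       # step 5: pre-refraction depth
    x_p = _component(x_k, z_k, z_i, z_p)    # x coordinate of the object point
    y_p = _component(y_k, z_k, z_i, z_p)    # likewise for y
    return (x_p, y_p, z_p)
```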
FIGS. 4(a) and 4(b) show the experiment for the z coordinate of the object point P calculated by the refraction compensation module. A long strip with 12 coins pasted on it is placed at the bottom of the fish tank as marker objects, and both before water is added (no refraction, FIG. 4(a)) and after water is added (refraction, FIG. 4(b)), the marker objects are manually clicked with the mouse on the Kinect depth image to record their depth positions. This is repeated for 6 sets of different depths, giving depth information for 12 × 6 = 72 different positions. After removing 9 abnormal samples (missing or abnormal depth values), 63 valid data points remain.
FIG. 5 shows the fitted relationship between the depth positions before and after refraction in Matlab. Matlab is used to fit the regression equation between the pre-refraction and post-refraction depth positions: z_before = a * z_after + b.
FIGS. 6(a) and 6(b) are the pictures seen by the left and right eyes of a user wearing stereo glasses; the two pictures differ, so the user's left and right eyes receive views offset by a shear (off-axis) projection transformation, producing parallax and hence a stereoscopic visual effect. FIG. 6(c) shows the ghosting in the picture seen by the naked eye: the stereoscopic projector used in this embodiment alternately projects the left-eye and right-eye pictures at 60 Hz, so the naked eye observes a ghosting effect.
As shown in fig. 7, the view of the user may be obstructed by the fish actually swimming in the fish tank. In order to reduce the occlusion of the real fish to the virtual scene and ensure that the user effectively obtains information, the embodiment provides a real fish occlusion processing method, and the real fish occlusion processing includes real fish occlusion detection and button position updating.
Before detection, the positions of the button, the real fish and the human eyes are firstly converted into the same coordinate system, namely a Kinect camera space for skeletal tracking.
Through a coordinate mapping method from the screen coordinate system to the Kinect coordinate system (FIG. 8), the Kinect camera space coordinates for skeletal tracking and the Unity screen coordinates can be interconverted, yielding the transformation matrices C'_1 and C'_2 between the two coordinate systems, where C'_1 aligns the screen coordinate system to the Kinect camera space coordinate system for skeletal tracking, and C'_2 aligns the Kinect camera space coordinate system for skeletal tracking to the screen coordinate system.
Through this coordinate mapping, the actual position P_button of the button in the Kinect camera space for skeletal tracking is obtained as P_button = C'_1 * P'_button, where P'_button is the coordinate of the button in the Unity screen coordinate system.
In the Kinect camera space for real fish detection, the pre-refraction actual position P_fish of the real fish is obtained from the refraction compensation module; a seven-parameter model, computed with the three-point method, gives the transformation matrix between the two Kinect camera space coordinate systems, which converts the position into the Kinect camera space for skeletal tracking.
The position P_eye of the human eye in the Kinect camera space for skeletal tracking is obtained directly from head position estimation.
The calculation process for judging occlusion is as follows:
1) According to the positions P_button, P_fish and P_eye of the button, the real fish and the human eye, the distance d from the real fish to the straight line between the human eye and the button is calculated:
d = |(P_fish - P_eye) × (P_button - P_eye)| / |P_button - P_eye| (1)
2) According to the sizes S_button and S_fish of the button and the real fish, the occlusion threshold T is set:
T = (S_button + S_fish) / 2 (2)
3) The distance d calculated by equation 1 is compared with the occlusion threshold T:
d ≤ T (3)
If the occlusion condition of equation 3 is met, the button is occluded by the real fish and its position needs to be updated. The update strategy is as follows: two candidate positions are set above and below the button, and when the occlusion condition is satisfied the button moves randomly to one of the candidate positions. To prevent the button from moving off the screen, upper and lower limits are set; if the limit in one direction is exceeded, the button moves in the opposite direction.
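A compact sketch of the occlusion test and the update strategy, assuming the threshold form of equation 2 and a vertical up/down relocation step; all coordinates are taken to be in the skeletal-tracking Kinect camera space.

```python
# Sketch: occlusion detection (equations 1-3) and button relocation.
import numpy as np

def is_occluded(p_eye, p_fish, p_button, s_button, s_fish):
    p_eye, p_fish, p_button = map(np.asarray, (p_eye, p_fish, p_button))
    line = p_button - p_eye
    # Equation 1: point-to-line distance via the cross product.
    d = np.linalg.norm(np.cross(p_fish - p_eye, line)) / np.linalg.norm(line)
    t = (s_button + s_fish) / 2.0      # equation 2 (assumed form)
    return d <= t                      # equation 3

def relocate(button_pos, offset, y_min, y_max):
    """Move the button randomly up or down; reverse if a limit is exceeded."""
    new_pos = np.asarray(button_pos, dtype=float).copy()
    step = offset if np.random.rand() < 0.5 else -offset
    if not (y_min <= new_pos[1] + step <= y_max):
        step = -step                   # move in the opposite direction
    new_pos[1] += step
    return new_pos
```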
FIG. 8 shows the mapping between screen coordinates and Kinect coordinates. Since the Kinect is placed directly facing the aquarium, the x-y plane of the Kinect camera space coordinate system is by default parallel to the x-y plane of the screen coordinate system, so the z value of the screen plane in Kinect camera space is fixed. The transformation between a three-dimensional rectangular coordinate system and a two-dimensional coordinate system is thereby simplified into a transformation between two two-dimensional coordinate systems.
A red circular paper sheet is fixed at the lower left corner and another at the upper right corner of the fish tank surface as two marker points, corresponding respectively to the origin (0, 0) and the point (Screen.width, Screen.height) of the screen coordinate system; screen coordinates depend on the screen resolution and are measured in pixels. The three-dimensional coordinates of the marker points in the Kinect camera space coordinate system, measured in meters, are recorded at the same time.
After the two pairs of corresponding two-dimensional and three-dimensional points are obtained, the rotation matrices R_1, R_2 and translation vectors t_1, t_2 between the two coordinate systems can be calculated. The transformation matrix aligning the screen coordinate system to the Kinect camera space coordinate system can be represented as C_1 = (R_1 | t_1), and the transformation matrix aligning the Kinect camera space coordinate system to the screen coordinate system as C_2 = (R_2 | t_2).
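With only two marker correspondences the alignment reduces to a 2D similarity (rotation, uniform scale and translation) on the fixed-z plane, matching the simplification above. A Python sketch, with illustrative marker coordinates:

```python
# Sketch: fitting the screen -> Kinect-plane transform from two markers.
import math
import numpy as np

def fit_2d_similarity(src0, src1, dst0, dst1):
    src0, src1, dst0, dst1 = map(np.asarray, (src0, src1, dst0, dst1))
    vs, vd = src1 - src0, dst1 - dst0
    scale = np.linalg.norm(vd) / np.linalg.norm(vs)
    theta = math.atan2(vd[1], vd[0]) - math.atan2(vs[1], vs[0])
    r = scale * np.array([[math.cos(theta), -math.sin(theta)],
                          [math.sin(theta),  math.cos(theta)]])
    t = dst0 - r @ src0
    return r, t                        # x_kinect = r @ x_screen + t

# Usage: screen origin and (Screen.width, Screen.height) against the
# measured Kinect x-y coordinates of the markers (values illustrative).
R1, t1 = fit_2d_similarity((0, 0), (1920, 1080), (-0.41, -0.30), (0.43, 0.31))
```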
FIGS. 9(a)-(c) show the virtual label of the real fish following it in real time as the fish swims; the red word Nemo above the real fish is its virtual label. To place the virtual label near the real fish, the key step is obtaining the real fish position in the virtual scene coordinate system. Using the refraction compensation module, the actual position P of the real fish in the Kinect camera space for real fish detection is obtained in real time. Meanwhile, with the transformation matrix C'_2 obtained in the real fish occlusion processing part, the real fish position in the Kinect camera space for skeletal tracking is converted into the position P_1 in the screen coordinate system, i.e., P_1 = C'_2 * P. However, since the real fish label to be added lives in Unity's world coordinates in the virtual scene, another coordinate mapping is needed to convert Unity screen coordinates to world coordinates. The Unity world coordinate system is a three-dimensional rectangular coordinate system, so converting two-dimensional screen coordinates into three-dimensional world coordinates requires extra z-value information, otherwise conversion errors occur. Here the z value of the real fish's actual position P in the Kinect camera space for skeletal tracking is used directly as that extra information and added to the screen-to-world conversion; ScreenToWorldPoint then yields the world coordinate P_2 corresponding to P_1, i.e., P_2 = ScreenToWorldPoint(P_1.x, P_1.y, P.z). Placing the virtual real fish label at the position P_2 obtained through this series of coordinate mappings matches the real fish with its virtual label.
Fig. 10 is an obstacle avoidance diagram showing the target point and the avoidance direction of the virtual fish. The avoidance direction calculation process is as follows:
1) The collision point P_hit and the ray reflection direction V_ref are acquired from ray collision detection, and the reflection point P_ref is calculated as shown in equation 4, where T is the distance from the reflection point to the collision point:
P_ref = P_hit + V_ref * T (4)
2) To prevent the virtual fish from leaving the fish tank, a point P_tank is taken inside the tank, and the midpoint of P_tank and P_ref is taken as the evasion target point P_goal, i.e.
P_goal = (P_tank + P_ref) / 2 (5)
3) From the target point P_goal and the current position P_fish of the virtual fish, the avoidance direction vector V_avoid is calculated as shown in equation 6:
V_avoid = P_goal - P_fish (6)
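A minimal sketch of equations 4-6 in Python, assuming the midpoint form of equation 5 and a chosen look-ahead distance T:

```python
# Sketch: avoidance direction from a ray hit (equations 4-6).
import numpy as np

def avoidance_direction(p_hit, v_ref, p_tank, p_fish, t=0.5):
    p_hit, v_ref, p_tank, p_fish = map(
        lambda a: np.asarray(a, dtype=float), (p_hit, v_ref, p_tank, p_fish))
    p_ref = p_hit + v_ref * t          # equation 4: reflection point
    p_goal = (p_tank + p_ref) / 2.0    # equation 5: evasion target (midpoint)
    v_avoid = p_goal - p_fish          # equation 6
    return v_avoid / np.linalg.norm(v_avoid)
```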
The roaming path planning provided by the disclosure includes two parts: roaming speed setting and direction setting of the virtual fish. Speed setting: to achieve a more natural swimming effect, the roaming speed is set dynamically; to change it smoothly, this embodiment randomly generates a smooth speed Speed_W using Perlin noise. Direction setting: its purpose is to prevent the virtual fish from always moving in the same direction, so that its movement is closer to reality. To make direction changes more natural, two parameters are set: the loitering period Period_W and the change probability Probability_W. Period_W determines the time interval between angle changes; Probability_W determines whether an angle change happens within a loitering period. In each loitering period a random number R_W between 0 and 1 is generated; if R_W is less than Probability_W, a random angle Angle_W is set and the direction of the virtual fish is changed accordingly, otherwise the direction is left unchanged.
As shown in FIG. 11(a), without obstacle avoidance the virtual fish overlaps the object representing the real fish; FIG. 11(b) shows the effect after adding obstacle avoidance, where the virtual fish changes its roaming direction after encountering the real fish.
FIG. 12(a) shows the initial roaming state of the virtual fish, which is swimming towards the real fish; FIG. 12(b) shows the virtual fish avoiding the detected real fish by changing its roaming direction. Roaming path planning and obstacle avoidance mainly change the direction of the virtual fish; the position of the virtual fish must also be updated in real time to realize the swimming effect. Using the roaming speed Speed_W obtained in roaming path planning, the real-time orientation R_fish of the virtual fish and the position P_prev of the virtual fish in the previous frame, the position P_cur of the virtual fish in the current frame is calculated as shown in equation 7, where d_p is the per-frame position increment of the virtual fish.
P_cur = P_prev + R_fish * Speed_W * d_p (7)
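The roaming pieces fit together as one per-frame update. In the Python sketch below, a tiny 1D value noise stands in for Perlin noise, the constants are illustrative assumptions, and d_p is taken as the frame time step.

```python
# Sketch: roaming direction/speed plus the equation-7 position update.
import math
import random
import numpy as np

def smooth_noise(t: float) -> float:
    """Cheap 1D value noise in [0, 1] (stand-in for Perlin noise)."""
    i, f = int(math.floor(t)), t - math.floor(t)
    lattice = lambda n: random.Random(n).random()   # hashed lattice value
    u = f * f * (3 - 2 * f)                         # smoothstep blend
    return lattice(i) * (1 - u) + lattice(i + 1) * u

PERIOD_W = 1.0   # loitering period in seconds (assumed)
PROB_W = 0.4     # change probability per period (assumed)

def roam_step(p_prev, r_fish, t, dt, timer):
    """One frame: returns the new position, direction and period timer."""
    speed_w = 0.2 + 0.3 * smooth_noise(t)      # smoothed roaming speed
    timer += dt
    if timer >= PERIOD_W:                      # once per loitering period...
        timer = 0.0
        if random.random() < PROB_W:           # ...maybe turn by Angle_W
            a = random.uniform(-math.pi, math.pi)
            c, s = math.cos(a), math.sin(a)
            r_fish = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]) @ r_fish
    p_cur = p_prev + r_fish * speed_w * dt     # equation 7
    return p_cur, r_fish, timer
```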
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Although the present disclosure has been described with reference to specific embodiments, it should be understood that the scope of the present disclosure is not limited thereto, and those skilled in the art will appreciate that various modifications and changes can be made without departing from the spirit and scope of the present disclosure.

Claims (5)

1. A mixed reality fish tank system in a stereoscopic display environment, characterized by comprising:
a position tracking module configured to obtain real fish positions and user head positions in real time;
a refraction compensation module configured to compensate the obtained real fish position to obtain the real position of the real fish in the Kinect camera space;
the virtual content setting module is configured to map the obtained real fish position information into a virtual scene, place a virtual real fish label near the real fish, and perform real-time collision detection on the real fish and the virtual object, so that the virtual object effectively avoids collision of the real fish, and simultaneously detect whether the real fish blocks the virtual scene, so as to effectively acquire scene information;
the virtual content setting module is configured to perform real fish occlusion detection, specifically, a button is configured to determine whether the occlusion of a real fish exists between the button and human eyes, if the occlusion exists, a user cannot see the button, and the virtual content setting module updates the position of the button;
the virtual content setting module is configured to display real fish tags in real time, configure a virtual tag near the position of the real fish in real time by using the position information of the real fish, mark the real fish information and move along with the real fish;
the virtual content setting module is configured to perform collision detection and avoidance of real fish and virtual fish, and specifically comprises planning a roaming path of the virtual fish; arranging collision bodies around the real fish, controlling the virtual fish to transmit rays forward in the roaming process and carrying out collision detection to judge whether barriers exist in front or not, and setting the avoiding direction of the virtual fish;
the three-dimensional display module is configured to accurately place the virtual fish in the fish tank environment by utilizing the imaging principle of three-dimensional display and integrate the virtual fish and the fish tank environment;
the specific process for planning the roaming path of the virtual fish comprises the following steps:
adopting Perlin noise to randomly generate a smooth speed;
setting two parameters of a loitering period and a change probability, wherein the loitering period determines the time interval of angle change; the change probability determines whether the angle change is carried out in the loitering period, a random number between 0 and 1 is generated in each loitering period, if the random number is smaller than the change probability, a random angle is set, the direction of the virtual fish is changed according to the angle, and otherwise, the direction of the virtual fish is not changed;
the specific process of setting the avoidance direction of the virtual fish comprises the following steps:
acquiring a collision point and a ray reflection direction according to ray collision detection, calculating to obtain a reflection point, and calculating the distance from the reflection point to the collision point;
taking a point inside the fish tank, and taking the midpoint of this point and the reflection point as the evasion target point;
and calculating an avoidance direction vector according to the target point and the current position of the virtual fish, and avoiding according to the avoidance direction vector.
2. The system of claim 1, wherein: the position tracking module comprises at least two RGB-D cameras, which respectively acquire the real fish position and the user head position in real time.
3. The system of claim 1, wherein: the position tracking module detects the movement of a single fish by using a Gaussian mixture model separation algorithm on a depth image with resolution W × H.
4. The system of claim 1, wherein: the refraction compensation module calculates the refraction compensation value using Snell's law.
5. A method for generating a mixed reality fish tank in a stereoscopic display environment, characterized by comprising the following steps:
real fish positions and user head positions are obtained in real time;
compensating the obtained real fish position to obtain the real position of the real fish in the Kinect camera space;
mapping the obtained real fish position information to a virtual scene, placing a virtual real fish label near the real fish, performing real-time collision detection on the real fish and a virtual object, controlling the virtual object to effectively avoid the collision of the real fish, and detecting whether the real fish blocks the virtual scene so as to effectively obtain scene information;
the virtual fish is accurately placed in the fish tank environment by utilizing the imaging principle of three-dimensional display, and the virtual fish and the fish tank environment are integrated into a whole to obtain a virtual-real integrated mixed reality fish tank;
the specific process of compensating the obtained real fish position to obtain the real position of the real fish in the Kinect camera space includes:
according to the positional relation between the virtual image P_K and the origin of the Kinect camera space coordinate system, calculating the refraction angle of the refracted ray entering the Kinect camera;
obtaining the incident angle of the ray according to Snell's law;
according to the z coordinate of the object point P and the z coordinate of the refraction point i, calculating the difference between the x coordinate of the object point P and the x coordinate of the refraction point i, and further determining the x and y coordinates of the object point P;
fitting the relation between the z coordinate of the object point before and after refraction and the depth position of the virtual image, and determining, from the depth position of the virtual image P_K in the water, the actual depth position before refraction, i.e., the z coordinate of the object point P;
the specific process of detecting whether a real fish occludes the virtual scene includes: configuring a button in the virtual scene, and calculating the distance d from the real fish to the straight line between the human eye and the button according to the positions of the button, the real fish and the human eye;
setting an occlusion threshold according to the sizes of the button and the real fish;
comparing the calculated distance d with the occlusion threshold; if the distance d is less than or equal to the occlusion threshold, occlusion occurs and the position of the button needs to be updated;
the specific process of controlling the virtual object to effectively avoid the collision of the real fish comprises the following steps: planning a roaming path of the virtual fish; arranging collision bodies around the real fish, controlling the virtual fish to transmit rays forward in the roaming process and carrying out collision detection to judge whether barriers exist in front or not, and setting the avoiding direction of the virtual fish;
the specific process for planning the roaming path of the virtual fish comprises the following steps:
adopting Perlin noise to randomly generate a smooth speed;
setting two parameters of a loitering period and a change probability, wherein the loitering period determines the time interval of angle change; the change probability determines whether the angle change is carried out in the loitering period, a random number between 0 and 1 is generated in each loitering period, if the random number is smaller than the change probability, a random angle is set, the direction of the virtual fish is changed according to the angle, and otherwise, the direction of the virtual fish is not changed;
the specific process of setting the avoidance direction of the virtual fish comprises the following steps:
acquiring a collision point and a ray reflection direction according to ray collision detection, calculating to obtain a reflection point, and calculating the distance from the reflection point to the collision point;
taking a point inside the fish tank, and taking the midpoint of this point and the reflection point as the evasion target point;
and calculating an avoidance direction vector according to the target point and the current position of the virtual fish, and avoiding according to the avoidance direction vector.
CN202010676771.8A 2020-07-14 2020-07-14 Mixed reality fish tank system in stereoscopic display environment and generation method Active CN111727924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010676771.8A CN111727924B (en) 2020-07-14 2020-07-14 Mixed reality fish tank system in stereoscopic display environment and generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010676771.8A CN111727924B (en) 2020-07-14 2020-07-14 Mixed reality fish tank system in stereoscopic display environment and generation method

Publications (2)

Publication Number Publication Date
CN111727924A CN111727924A (en) 2020-10-02
CN111727924B true CN111727924B (en) 2022-03-18

Family

ID=72655265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010676771.8A Active CN111727924B (en) 2020-07-14 2020-07-14 Mixed reality fish tank system in stereoscopic display environment and generation method

Country Status (1)

Country Link
CN (1) CN111727924B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103176733A (en) * 2011-12-20 2013-06-26 西安天动数字科技有限公司 Electronic interactive aquarium system
CN104375778A (en) * 2014-11-25 2015-02-25 湖南大学 Intelligent interactive aquarium display system
CN205485918U (en) * 2016-01-12 2016-08-17 上海盟云移软网络科技股份有限公司 Aquatic virtual reality experience system
US9635305B1 (en) * 2012-11-03 2017-04-25 Iontank, Ltd. Display apparatus including a transparent electronic monitor
CN107274438A (en) * 2017-06-28 2017-10-20 山东大学 Support single Kinect multi-human trackings system and method for mobile virtual practical application
CN107340870A (en) * 2017-07-13 2017-11-10 深圳市未来感知科技有限公司 A kind of fusion VR and AR virtual reality display system and its implementation
CN109471521A (en) * 2018-09-05 2019-03-15 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Virtual and real shielding interaction method and system in AR environment
CN109616179A (en) * 2018-12-07 2019-04-12 山东大学 Autism spectrum disorder mixed reality rehabilitation training system and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107390224A (en) * 2017-07-21 2017-11-24 歌尔科技有限公司 Obstacle detection method, device and virtual reality display device
CN108898676B (en) * 2018-06-19 2022-05-13 青岛理工大学 Method and system for detecting collision and shielding between virtual and real objects

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103176733A (en) * 2011-12-20 2013-06-26 西安天动数字科技有限公司 Electronic interactive aquarium system
US9635305B1 (en) * 2012-11-03 2017-04-25 Iontank, Ltd. Display apparatus including a transparent electronic monitor
CN104375778A (en) * 2014-11-25 2015-02-25 湖南大学 Intelligent interactive aquarium display system
CN205485918U (en) * 2016-01-12 2016-08-17 上海盟云移软网络科技股份有限公司 Aquatic virtual reality experience system
CN107274438A (en) * 2017-06-28 2017-10-20 山东大学 Support single Kinect multi-human trackings system and method for mobile virtual practical application
CN107340870A (en) * 2017-07-13 2017-11-10 深圳市未来感知科技有限公司 A kind of fusion VR and AR virtual reality display system and its implementation
CN109471521A (en) * 2018-09-05 2019-03-15 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Virtual and real shielding interaction method and system in AR environment
CN109616179A (en) * 2018-12-07 2019-04-12 山东大学 Autism spectrum disorder mixed reality rehabilitation training system and method

Also Published As

Publication number Publication date
CN111727924A (en) 2020-10-02

Similar Documents

Publication Publication Date Title
US11693242B2 (en) Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking
US8890812B2 (en) Graphical user interface adjusting to a change of user's disposition
US8743187B2 (en) Three-dimensional (3D) imaging based on MotionParallax
US10528125B2 (en) Method for operating a virtual reality system, and virtual reality system
Tomioka et al. Approximated user-perspective rendering in tablet-based augmented reality
US20150042640A1 (en) Floating 3d image in midair
Livingston et al. Pursuit of “X-ray vision” for augmented reality
US20190371072A1 (en) Static occluder
CN105992965A (en) Stereoscopic display responsive to focal-point shift
US10235806B2 (en) Depth and chroma information based coalescence of real world and virtual world images
KR20130108643A (en) Systems and methods for a gaze and gesture interface
CN107810634A (en) Display for three-dimensional augmented reality
JP2022122876A (en) image display system
US20210407125A1 (en) Object recognition neural network for amodal center prediction
CN111727924B (en) Mixed reality fish tank system in stereoscopic display environment and generation method
KR101177793B1 (en) Stereoscopic virtual experience apparatus and method
Heinrich et al. Effects of surface visualizations on depth perception in projective augmented reality
Ercan A 3D Topological tracking system for augmented reality
Lin Lightweight and Sufficient Two Viewpoint Connections for Augmented Reality
KR20100062774A (en) System and method for displaying 3d virtual image using moving picture input device
JPH0415772A (en) Visual line following type high speed image generation/ display method
CN117716419A (en) Image display system and image display method
Liu et al. 20‐4: An AI‐Driven Aquarium Guide System for Intelligent Museum
Tan et al. Invisible Mesh: Effects of X-Ray Vision Metaphors on Depth Perception in Optical-See-Through Augmented Reality
Yangjun Intuitive Robot Teleoperation Based on Haptic Feedback and 3D Visualization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant