CN106155281A - Stereo interaction method, stereoscopic display device and system thereof


Info

Publication number
CN106155281A
Authority
CN
China
Prior art keywords
virtual
space
actual
stereoscopic
interactive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510147807.2A
Other languages
Chinese (zh)
Other versions
CN106155281B (en)
Inventor
蒋凌锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SuperD Co Ltd
Original Assignee
Shenzhen Super Perfect Optics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Super Perfect Optics Ltd
Priority to CN201510147807.2A
Publication of CN106155281A
Application granted
Publication of CN106155281B
Legal status: Expired - Fee Related

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

The invention belongs to the field of stereoscopic display interaction technology and discloses a stereoscopic interaction method applied to an interaction scene between a stereoscopic display device and a stereoscopic interactive operation body. The method includes: identifying operation information of the stereoscopic interactive operation body in a non-contact state, where the non-contact state is a state in which the operation body moves relative to the stereoscopic display device without touching it; and judging whether the operation information meets a preset condition, and if so, triggering an interactive operation in the interaction scene. In this way, the invention achieves stereoscopic interaction in which the operation body has no mechanical contact with the stereoscopic display device, which reduces friction loss and improves the durability of both the operation body and the device. The invention also provides a stereoscopic display device that performs interactive operations with the stereoscopic interactive operation body, realizing stereoscopic interaction in the non-contact state and enriching interaction scenes, and a stereoscopic interaction system that is convenient for the operator to use.

Description

Stereoscopic interaction method, stereoscopic display device and system thereof
Technical Field
The present invention relates to the field of stereoscopic display interaction technology, and in particular to a stereoscopic interaction method, a stereoscopic display device, and a stereoscopic interaction system.
Background
Stereoscopic display is an image technology in which the left and right eyes receive different pictures and the brain superimposes and reproduces the image information, forming an image with stereoscopic directional effects such as front-back, up-down, left-right, and near-far.
At present, virtual objects constructed by stereoscopic display fall into two types by visual effect: objects that appear to the naked eye to protrude out of the screen, a technique known as out-of-screen parallax objects; and objects that appear to the naked eye to lie within the screen, a technique known as in-screen parallax objects. With either technique, operating the stereoscopic display device requires an operation body that touches the display screen of the device or a physical operation plane; the resulting mechanical friction easily damages the operation body, the display screen, and the physical operation plane.
Disclosure of Invention
The technical problem mainly solved by the invention is to provide a stereoscopic interaction method, a stereoscopic display device, and a system thereof that realize stereoscopic interaction in a non-contact state and enrich interaction scenes; because the stereoscopic interactive operation body has no mechanical contact with the stereoscopic display device, friction loss is reduced and the durability of both is improved.
To solve the foregoing technical problem, an embodiment of the present invention provides a stereoscopic interaction method, applied to an interaction scene of a stereoscopic display device and a stereoscopic interaction operator, including:
identifying operation information of the stereoscopic interactive operation body in a non-contact state, wherein the non-contact state refers to a state that the stereoscopic interactive operation body moves relative to the stereoscopic display equipment and is not in contact with the stereoscopic display equipment;
and judging whether the operation information meets a preset condition, and if so, triggering the interactive operation in the interactive scene.
Specifically, the operation information includes an actual motion trajectory of the stereoscopic interactive operation body relative to the stereoscopic display device and an operation instruction sent by the stereoscopic interactive operation body.
Further, the method further includes:
and generating a virtual motion track in a virtual operation space displayed by the stereoscopic display equipment according to the actual motion track.
Specifically, the generating a virtual motion trajectory in a virtual operation space displayed by the stereoscopic display device according to the actual motion trajectory includes:
setting an actual operation space;
acquiring the actual motion track formed by the motion of the stereoscopic interactive operation body relative to the stereoscopic display equipment in the actual operation space;
and carrying out space mapping on the actual motion track to generate the virtual motion track.
Specifically, the setting of the actual operating space specifically includes:
receiving an actual operation space setting instruction sent by the three-dimensional interactive operation body;
acquiring the actual position coordinate of the three-dimensional interactive operation body according to the actual operation space setting instruction;
and setting the actual operation space according to the actual position coordinates of the three-dimensional interactive operation body.
Further, the generating the virtual motion trajectory by performing spatial mapping on the actual motion trajectory specifically includes:
receiving an initial position setting instruction sent by the three-dimensional interactive operation body, wherein the initial position setting instruction is used for setting a first position coordinate of the three-dimensional interactive operation body in the actual operation space and a second position coordinate of the three-dimensional interactive operation body in the virtual operation space;
establishing a mapping relation between the actual operation space and the virtual operation space according to the first position coordinate and the second position coordinate;
and according to the mapping relation, performing spatial mapping on the actual motion track to generate the virtual motion track.
Further, the method further comprises:
receiving a zooming instruction sent by the three-dimensional interactive operation body, and acquiring a third position coordinate of the three-dimensional interactive operation body in the actual operation space;
and resetting the actual operation space according to the third position coordinate and the first position coordinate, and establishing a mapping relation between the virtual operation space and the reset actual operation space.
Further, the method further comprises:
receiving a locking operation instruction sent by the three-dimensional interactive operation body, and acquiring a locking position coordinate of the three-dimensional interactive operation body;
moving the three-dimensional interactive operation body, and acquiring an unlocking position coordinate when receiving an unlocking operation instruction sent by the three-dimensional interactive operation body;
and resetting the actual operation space according to the first position coordinate, the locking position coordinate and the unlocking position coordinate, and establishing a mapping relation between the virtual operation space and the reset actual operation space.
Optionally, the determining whether the operation information meets a preset condition, and if the preset condition is met, triggering an interactive operation in the interactive scene specifically includes:
judging whether the actual operation space coincides with the virtual operation space; if it does not, mapping the actual motion trajectory of the stereoscopic interactive operation body to the motion trajectory of a virtual cursor in the virtual operation space, and executing the operation corresponding to the operation instruction on the operation object when the distance between the motion trajectory of the virtual cursor and the operation object in the virtual operation space is smaller than a preset distance.
Optionally, if the actual operation space coincides with the virtual operation space, when a distance between the actual motion trajectory of the stereoscopic interactive operation body and an operation object in the virtual operation space is smaller than a preset distance, an operation corresponding to the operation instruction is executed on the operation object.
Preferably, the operation instruction includes an activation instruction, the operation object includes a plurality of display objects stacked in the depth direction of the virtual space, and the stereoscopic interactive operation body executes the activation instruction to perform an activation operation on the display objects.
Preferably, the operation instruction includes a cutting instruction, the operation object includes a virtual human tissue in the virtual operation space, and the stereoscopic interactive operation body executes the cutting instruction to perform a cutting operation on the virtual human tissue.
Further, the method further comprises:
and when the interactive operation in the interactive scene is triggered, sending a force feedback vibration instruction to the three-dimensional interactive operation body so as to enable the three-dimensional interactive operation body to perform force feedback vibration.
Further, before identifying the operation information that the stereoscopic interactive operation body is in the non-contact state, the method further includes:
and establishing data communication between the stereoscopic display equipment and the stereoscopic interactive operation body.
The embodiment of the present invention further provides a stereoscopic display device, configured to perform an interactive operation with a stereoscopic interactive operation body, where the stereoscopic display device includes a display unit, configured to display an interactive scene, and further includes:
the identification unit is used for identifying operation information of the stereoscopic interactive operation body in a non-contact state, wherein the non-contact state refers to a state that the stereoscopic interactive operation body moves relative to the stereoscopic display equipment and is not in contact with the stereoscopic display equipment;
and the interaction unit is used for judging whether the operation information meets a preset condition or not, and if so, triggering the interaction operation in the interaction scene.
Specifically, the operation information includes an actual motion trajectory of the stereoscopic interactive operation body relative to the stereoscopic display device and an operation instruction sent by the stereoscopic interactive operation body.
Specifically, the identification unit includes:
the actual operation space setting module is used for receiving an actual operation space setting instruction sent by the three-dimensional interactive operation body, acquiring the actual position coordinate of the three-dimensional interactive operation body according to the actual operation space setting instruction, and setting the actual operation space according to the actual position coordinate of the three-dimensional interactive operation body;
and the actual motion track acquisition module is used for acquiring the actual motion track formed by the motion of the stereoscopic interactive operation body relative to the stereoscopic display equipment in the actual operation space.
Further, the interaction unit includes:
the space conversion module is used for generating a virtual motion track in a virtual operation space displayed by the stereoscopic display equipment according to the actual motion track;
an interactive content processing module for judging whether the operation information satisfies a preset condition;
and the interaction event triggering module is used for triggering the interaction operation in the interaction scene when the preset condition is met.
Further, the spatial conversion module is further configured to:
receiving an initial position setting instruction sent by the three-dimensional interactive operation body, wherein the initial position setting instruction is used for setting a first position coordinate of the three-dimensional interactive operation body in the actual operation space and a second position coordinate of the three-dimensional interactive operation body in the virtual operation space;
establishing a mapping relation between the actual operation space and the virtual operation space according to the first position coordinate and the second position coordinate;
and according to the mapping relation, performing spatial mapping on the actual motion track to generate the virtual motion track.
Further, the spatial conversion module is further configured to:
receiving a zooming instruction sent by the three-dimensional interactive operation body, and acquiring a third position coordinate of the three-dimensional interactive operation body in the actual operation space;
and resetting the actual operation space according to the third position coordinate and the first position coordinate, and establishing a mapping relation between the virtual operation space and the reset actual operation space.
Further, the spatial conversion module is further configured to:
receiving a locking operation instruction sent by the three-dimensional interactive operation body, and acquiring a locking position coordinate of the three-dimensional interactive operation body; receiving an unlocking operation instruction sent when the three-dimensional interactive operation body is moved, and acquiring an unlocking position coordinate;
and resetting the actual operation space according to the first position coordinate, the locking position coordinate and the unlocking position coordinate, and establishing a mapping relation between the virtual operation space and the reset actual operation space.
Specifically, the interactive content processing module includes:
the first judgment submodule is used for judging whether the actual operation space is coincident with the virtual operation space or not, and if the actual operation space is not coincident with the virtual operation space, mapping the actual motion track of the three-dimensional interactive operation body into the motion track of a virtual cursor in the virtual operation space;
and the second judging submodule is used for judging whether the distance between the actual motion track of the three-dimensional interactive operation body or the motion track of the virtual cursor in the virtual operation space and the operation object in the virtual operation space is smaller than a preset distance.
Optionally, the operation instruction includes an activation instruction, the operation object includes a plurality of display objects stacked in the depth direction of the virtual space, and the interaction event triggering module is specifically configured to: and executing the activation instruction to perform activation operation on the display object.
Optionally, the operation instruction includes a cutting instruction, the operation object includes a virtual human tissue in the virtual operation space, and the interaction event triggering module is specifically configured to: and executing the cutting instruction to perform cutting operation on the virtual human body tissue.
Preferably, the apparatus further comprises:
and the force feedback unit is used for sending a force feedback vibration instruction to the three-dimensional interactive operation body when the interactive operation in the interactive scene is triggered so as to enable the three-dimensional interactive operation body to carry out force feedback vibration.
Preferably, the apparatus further comprises:
and the communication unit is used for establishing data communication between the stereoscopic display equipment and the stereoscopic interactive operation body.
The embodiment of the invention also provides a stereo interaction system which comprises a stereo interaction operation body and the stereo display equipment.
The embodiment of the invention realizes the three-dimensional interaction by identifying the operation information of the three-dimensional interaction operation body in a non-contact state and triggering the interaction operation when the operation information meets the preset condition; in addition, the stereoscopic interactive operation body does not have any mechanical contact with the stereoscopic display device, so that the friction loss is reduced, and the durability of the stereoscopic interactive operation body and the stereoscopic display device is improved.
Drawings
Fig. 1 is a schematic flow chart of a stereoscopic interaction method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of positioning a stereoscopic interactive operating body by using ultrasonic waves in an embodiment of the invention;
FIG. 3 is a schematic diagram of a coordinate plane for positioning a three-dimensional interactive effector using ultrasound in an embodiment of the present invention;
FIG. 4 is a timing diagram of time-division multiplexing for multi-point ultrasonic positioning in accordance with an embodiment of the present invention;
FIG. 5 is a schematic diagram of positioning a stereo interactive operator by using an infrared light source according to an embodiment of the present invention;
FIG. 6 is a schematic view of a camera structure for positioning a three-dimensional interactive operation body by using an infrared light source according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of the perspective structure used when positioning a stereoscopic interactive operation body with an infrared light source according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an embodiment of the present invention setting up a real operating space;
FIG. 9 is a schematic diagram of an example virtual pool game application according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating an example of an application for enhancing conventional mouse input according to an embodiment of the present invention;
FIG. 11 is a diagram illustrating a process of mapping an actual operating space and a virtual operating space according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a method for scaling an actual operating space range by using a stereo interactive operator according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating cursor movement in a virtual operating space displayed by a stereoscopic display device in accordance with an embodiment of the invention;
FIG. 14 is a schematic diagram of a method for unlocking when a virtual cursor is moved to another area after being locked by a stereoscopic interactive operator according to an embodiment of the present invention;
Figs. 15-17 are schematic diagrams of stereoscopic interactive operations that activate or select an operation object in a stereoscopic scene on a stereoscopically displaying mobile terminal, using the stereoscopic interaction method according to an embodiment of the present invention;
FIG. 18 is a schematic diagram of an embodiment of a method for stereoscopic interaction using a mouse pad and a ring auxiliary input device according to an embodiment of the present invention;
FIGS. 19-21 are application diagrams of a stereoscopic mechanical structure operated with one design of auxiliary input device according to an embodiment of the present invention;
FIG. 22 is a schematic diagram of an implementation of an embodiment of the present invention for a simulated engraving application;
FIG. 23 is a schematic diagram of an implementation of a virtual surgical training application in accordance with an embodiment of the present invention;
FIGS. 24-25 are schematic diagrams of a simulated pool game application implemented by an embodiment of the present invention;
FIG. 26 is a schematic structural diagram of a stereoscopic display apparatus according to an embodiment of the invention;
fig. 27 is a schematic structural diagram of a stereoscopic interactive system according to an embodiment of the invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and examples.
To facilitate understanding of the embodiments of the invention, some terms used in all embodiments are explained before the embodiments are described. Specifically, in all embodiments of the present invention, the stereoscopic scene displayed by the stereoscopic display device for interactive operation by the stereoscopic interactive operation body is referred to as the interaction scene; the operation space of the stereoscopic interactive operation body relative to the stereoscopic display device is referred to as the actual operation space, and the space where the interaction scene is located is the virtual operation space. The trajectory of the actual motion of the operation body relative to the stereoscopic display device in the actual operation space is referred to as the actual motion trajectory; for example, the position, contour, or posture of the operation body relative to the stereoscopic display device may constitute its actual motion trajectory. Mapping the actual motion trajectory of the operation body in the actual operation space into the virtual operation space forms the virtual motion trajectory. When the operation body operates on content in the virtual operation space, the issued instruction or command is called an operation instruction, and the content in the virtual operation space is called an operation object; for example, when activating a phone-book application in the virtual operation space, the activation instruction is the operation instruction and the phone-book application is the operation object.
It should be noted that, provided they do not conflict, the embodiments of the present invention and the features of those embodiments may be combined with one another within the scope of protection of the present invention. Additionally, although functional modules are divided in the device schematics and logical orders are shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the module division or flowchart sequence.
The following briefly describes a stereoscopic interaction method according to an embodiment of the present invention. In the embodiment of the invention, the stereoscopic interactive operation body and the stereoscopic display device are in a non-contact state, the operation information of the stereoscopic interactive operation body in the non-contact state is identified, and when the operation information meets the preset condition, the interactive operation is triggered, so that the interactive operation of the stereoscopic interactive operation body on the operation object in the stereoscopic display device is realized.
The embodiments of the present invention will be further explained with reference to the drawings.
Fig. 1 is a schematic flow chart of a stereoscopic interaction method according to an embodiment of the present invention, and as shown in fig. 1, the interaction method according to the embodiment of the present invention includes the following steps:
and S11, identifying operation information of the stereoscopic interactive operation body in a non-contact state, wherein the non-contact state refers to a state that the stereoscopic interactive operation body moves relative to the stereoscopic display equipment and is not in contact with the stereoscopic display equipment.
In the embodiment of the present invention, the operation information includes the actual motion trajectory of the stereoscopic interactive operation body relative to the stereoscopic display device and the operation instruction sent by the operation body. The actual motion trajectory may be a position coordinate, posture, or contour relative to the stereoscopic display device. The operation instruction may include various commands for operating an operation object displayed on the stereoscopic display device, and may be triggered by, but is not limited to, a key, a key combination, a capacitive touch slider, a special gesture of multiple finger sleeves, a pen-down action of a pen-shaped operation body, or an on-screen UI element on the stereoscopic interactive operation body.
In the embodiment of the present invention, before the stereoscopic interactive operation body performs interactive operation on the operation object, the actual operation space of the operation body may first be set; the operation body then moves within the actual operation space, and interactive operation on the operation object displayed by the stereoscopic display device is realized by identifying the operation body's actual motion trajectory. To set the actual operation space, an actual operation space setting instruction sent by the stereoscopic interactive operation body is received, the actual position coordinates of the operation body are acquired according to that instruction, and the actual operation space is set according to those coordinates; meanwhile, a virtual motion trajectory is generated in the virtual operation space displayed by the stereoscopic display device according to the actual motion trajectory.
And S12, judging whether the operation information meets a preset condition, and if so, triggering the interactive operation in the interactive scene.
In the embodiment of the present invention, the interactive operation may be triggered by determining whether the actual motion trajectory in the operation information satisfies a preset condition, or may be triggered by determining whether a virtual motion trajectory generated by mapping the actual motion trajectory in the operation information, for example, a virtual cursor, satisfies a preset condition.
In an embodiment of the present invention, generating a virtual motion trajectory in a virtual operation space displayed by the stereoscopic display device according to the actual motion trajectory includes:
setting an actual operation space;
acquiring the actual motion track formed by the motion of the stereoscopic interactive operation body relative to the stereoscopic display equipment in the actual operation space;
and performing space mapping according to the actual motion track to generate the virtual motion track. In the space mapping process, a mapping relationship between the actual operating space and the virtual operating space needs to be established.
In the embodiment of the present invention, performing spatial mapping according to the actual motion trajectory to generate the virtual motion trajectory may include the following embodiments:
receiving an initial position setting instruction sent by a three-dimensional interactive operation body, wherein the initial position setting instruction is used for setting a first position coordinate of the three-dimensional interactive operation body in the actual operation space and a second position coordinate of the three-dimensional interactive operation body in the virtual operation space;
establishing a mapping relation between the actual operation space and the virtual operation space according to the first position coordinate and the second position coordinate;
and according to the mapping relation, performing spatial mapping on the actual motion track to generate the virtual motion track.
Or,
receiving a zooming instruction sent by the three-dimensional interactive operation body, and acquiring a third position coordinate of the three-dimensional interactive operation body in the actual operation space;
and resetting the actual operation space according to the third position coordinate and the first position coordinate, and establishing a mapping relation between the virtual operation space and the reset actual operation space.
Or,
receiving a locking operation instruction sent by a three-dimensional interactive operation body, and acquiring a locking position coordinate of the three-dimensional interactive operation body, wherein the locking position coordinate is a position coordinate of the three-dimensional interactive operation body before movement;
moving the three-dimensional interactive operation body, and acquiring an unlocking position coordinate when receiving an unlocking operation instruction sent by the three-dimensional interactive operation body;
and resetting the actual operation space according to the first position coordinate, the locking position coordinate and the unlocking position coordinate, and establishing a mapping relation between the virtual operation space and the reset actual operation space.
In the embodiment of the present invention, the preset condition may be that the distance between the actual motion trajectory (or the virtual motion trajectory) and the operation object is smaller than a preset distance.
In the embodiment of the present invention, determining whether the operation information meets the preset condition and, if so, triggering the interactive operation in the interaction scene may specifically include: if the actual operation space does not coincide with the virtual operation space, mapping the actual motion trajectory of the stereoscopic interactive operation body to the motion trajectory of a virtual cursor in the virtual operation space, and executing the operation corresponding to the operation instruction on the operation object when the distance between the virtual cursor and the operation object in the virtual operation space is smaller than the preset distance; if the actual operation space coincides with the virtual operation space, executing the operation corresponding to the operation instruction on the operation object when the distance between the actual motion trajectory of the operation body and the operation object in the virtual operation space is smaller than the preset distance.
In this embodiment of the present invention, the operation instruction includes an activation instruction, and the operation object includes a plurality of display objects stacked in the depth direction of the virtual space, and then triggering the interactive operation in the interactive scene specifically includes: and the stereoscopic interactive operation body executes the activation instruction and performs activation operation on the display object.
In an embodiment of the present invention, the operation instruction includes a cutting instruction, and the operation object includes a virtual human tissue in the virtual operation space, and then triggering the interactive operation in the interactive scene specifically includes: and the three-dimensional interactive operation body executes the cutting instruction and performs cutting operation on the virtual human body tissue.
In an embodiment of the present invention, the method further comprises:
and when the interactive operation in the interactive scene is triggered, sending a force feedback vibration instruction to the three-dimensional interactive operation body so as to enable the three-dimensional interactive operation body to perform force feedback vibration.
In the embodiment of the present invention, before identifying the operation information that the stereoscopic interactive operation body is in the non-contact state, the method further includes:
and establishing data communication between the stereoscopic display equipment and the stereoscopic interactive operation body.
The embodiment of the invention realizes stereoscopic interaction by identifying the operation information of the stereoscopic interactive operation body in a non-contact state and triggering the interactive operation when the operation information meets the preset condition; compared with the prior art, the interactive operation disclosed by the embodiment of the invention is more convenient and enriches the interaction scenes. In addition, the stereoscopic interactive operation body has no mechanical contact with the stereoscopic display device, which reduces friction loss and improves the durability of both the operation body and the device.
The following describes a specific embodiment of the interaction method of the present invention with reference to the specific drawings.
In the embodiment of the present invention, in order to obtain the actual motion trajectory of the stereoscopic interactive operation body, the stereoscopic interactive operation body needs to be positioned first.
Fig. 2 is a schematic diagram of positioning a stereoscopic interactive operation body by ultrasonic waves according to an embodiment of the present invention. As shown in fig. 2, an ultrasonic transmitter 22 and a synchronization signal transmitter 23 are installed on the stereoscopic interactive operation body 21, and an ultrasonic receiver 24 and a synchronization signal receiver 25 are installed on the stereoscopic display device. The synchronization information exchanged between the synchronization signal transmitter 23 and the synchronization signal receiver 25 may be an RF radio-frequency signal or an infrared modulated light source, and is used to synchronize and time the ultrasonic time of arrival (TOA) or time difference of arrival (TDOA). To calculate the spatial position of the stereoscopic interactive operation body 21, the spatial position of the ultrasonic transmitter 22 on the operation body and the spatial positions of the ultrasonic receivers 24 on the stereoscopic display device positioning plane 26 must be expressed in a common coordinate system. As shown in fig. 3, assume the coordinates of the ultrasonic emission point 31 on the operation body relative to the stereoscopic display device positioning plane 32 are (x, y, z), and select the three best ultrasonic receiving points S1, S2, S3 on the positioning plane 32, where the distance between the first positioning point S1 and the operation body is D1, the distance between the second positioning point S2 and the operation body is D2, and the distance between the third positioning point S3 and the operation body is D3; these distance values are converted from the ultrasonic arrival times received at the three points and the speed of sound. The positioning plane 32 has width w (the distance between S1 and S2) and height h (the distance between S1 and S3); the x axis points from S1 toward S2, the y axis points from S1 toward S3, the z axis points toward the user perpendicular to the plane, and the coordinate origin is the middle point of the positioning plane 32, so that y1 = y2 = -h/2, y3 = h/2, x1 = x3 = -w/2, x2 = w/2, and z1 = z2 = z3 = 0. This gives the following relation 1:
x² + y² + z² + x1² + y1² + z1² - 2·x1·x - 2·y1·y - 2·z1·z = D1²
x² + y² + z² + x2² + y2² + z2² - 2·x2·x - 2·y2·y - 2·z2·z = D2²
x² + y² + z² + x3² + y3² + z3² - 2·x3·x - 2·y3·y - 2·z3·z = D3²
Substituting the known receiver coordinates simplifies the system to relation 2:
x² + y² + z² + x1² + y1² + z1² - 2·x1·x - 2·y1·y - 2·z1·z = D1²
-2·w·x = D2² - D1²
-2·h·y = D3² - D1²
Solving relation 2 for x and y and substituting them into the first equation yields z.
So far, the spatial coordinates of the stereoscopic interactive operator with respect to the plane of the stereoscopic display device can be obtained.
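For illustration only, the following Python sketch solves the two relations above for the emitter coordinates; the function name, example distances, and plane dimensions are assumptions, not part of the patent.

```python
import math

def locate_emitter(D1, D2, D3, w, h):
    """Trilaterate the ultrasonic emitter from three receiver distances.

    Per Fig. 3 (origin at the center of the positioning plane):
      S1 = (-w/2, -h/2, 0), S2 = (w/2, -h/2, 0), S3 = (-w/2, h/2, 0).
    D1..D3 are converted from ultrasonic time of arrival times the speed of sound.
    """
    x = (D1**2 - D2**2) / (2 * w)   # from -2wx = D2^2 - D1^2
    y = (D1**2 - D3**2) / (2 * h)   # from -2hy = D3^2 - D1^2
    # Substitute x, y back into the sphere equation around S1 to recover z.
    x1, y1 = -w / 2, -h / 2
    z_sq = D1**2 - (x - x1)**2 - (y - y1)**2
    z = math.sqrt(max(z_sq, 0.0))   # the emitter lies in front of the plane (z >= 0)
    return x, y, z

# Hypothetical distances in metres for a 0.4 m x 0.3 m positioning plane.
print(locate_emitter(0.50, 0.52, 0.55, w=0.4, h=0.3))
```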
Since ultrasound is a mechanical wave and occupies its channel exclusively when used for positioning, frequency division multiplexing or time division multiplexing can be adopted when tracking multiple points. For simplicity, time division multiplexing is one of the most reasonable methods for multi-point ultrasonic positioning, although the frame rate drops if the number of tracked points is large. Considering the number of points required to track finger motion, 2 to 3 ultrasonic emission points is most reasonable. FIG. 4 is a timing diagram of time-division multiplexing for multi-point ultrasonic positioning, in which the number of tracking points is 2, the synchronization signal is SYNC, the first transmission point is S1, and the second transmission point is S2. Here Tsync is the interval between the transmission of the synchronization signal SYNC and the transmission time of the first positioning point. In a typical electronic system, the controller at the transmitting end needs a certain time to trigger timing after sending the synchronization signal to the receiving end, so this delay must be determined by the properties of the device sending the synchronization signal; for example, a typical 38 kHz infrared signal usually requires an interval of 1-2 ms. Tgab is the interval between adjacent positioning points in the transmit timing, used to avoid channel aliasing. Assuming the speed of sound at normal temperature is Vs and the maximum positioning distance of the operation region is Smax, the ultrasonic flight time over the maximum positioning distance is Ts = Smax/Vs. Considering ultrasonic echo and the accumulation of acoustic energy, practical engineering experience suggests that Tgab > 5·Ts is a good setting, but Tgab should not be too large because it reduces the tracking rate. Twindow is the window time for a single positioning fix and is also the minimum interval between two transmissions of the synchronization signal; it is slightly greater than the sum of the above times. The time parameters must be corrected for delay after the receiving end measures the arrival time to ensure the measured distance is accurate; setting correct time parameters is essential, so the embodiment of the present invention provides the above guidance based on engineering experience.
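A minimal sketch of these timing rules, assuming the engineering values given above (Tgab > 5·Ts, a 1-2 ms SYNC latency); the function and parameter names are illustrative:

```python
def tdm_timing(Smax, Vs=346.0, n_points=2, Tsync=0.002):
    """Compute the TDM intervals described above (all names are illustrative).

    Smax: maximum positioning distance of the operation region (m)
    Vs: speed of sound at normal temperature (m/s)
    Tsync: transmitter-side latency of the SYNC signal (~1-2 ms for 38 kHz IR)
    """
    Ts = Smax / Vs                              # ultrasonic flight time at max distance
    Tgab = 5.5 * Ts                             # text suggests Tgab > 5*Ts, but not too large
    Twindow = 1.1 * (Tsync + n_points * Tgab)   # slightly above the sum of the parts
    return Ts, Tgab, Twindow

# 1 m operation region with 2 tracked emission points.
print(tdm_timing(Smax=1.0))
```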
Fig. 5 is a schematic diagram of positioning a stereoscopic interactive operation body by an infrared light source in an embodiment of the present invention. As shown in fig. 5, an infrared marker light source 52 is installed on the stereoscopic interactive operation body 51; generally, an 850 nm or 940 nm GaAs LED is used as the light source. The stereoscopic display device positioning plane 53 is provided with a binocular infrared camera 54 for capturing the light spots formed by the infrared marker light source on the operation body. The camera should include an 800 nm-1100 nm infrared band-pass filter 61 to filter out visible light and eliminate the influence of ambient light spots. The camera structure is shown in fig. 6 and includes, in addition to the infrared band-pass filter 61, a lens group 62 and a camera sensor 63. To calculate the spatial position of the stereoscopic interactive operation body 51, a perspective structure is required; fig. 7 is a schematic diagram of this structure, where C1 and C2 are the positions of the imaging sensors of the two cameras and the distance between them is D. Point P is the position of the infrared light source. When point P appears in the common view of the two cameras, its images in the two cameras exhibit parallax; after calibrating the numerical relationship between the pixel disparity and the camera optical-axis distance D, the spatial position of point P relative to the stereoscopic display device positioning plane 53 can easily be calculated by triangulation from the cameras' perspective model.
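For illustration, a minimal pinhole-model triangulation sketch; the function name and the calibration values in the example are assumptions:

```python
def triangulate_depth(disparity_px, focal_px, baseline_m):
    """Depth of point P from its pixel disparity in the two cameras.

    disparity_px: pixel difference of P's image between cameras C1 and C2
    focal_px: focal length in pixels, from the calibration step
    baseline_m: distance D between the imaging sensors C1 and C2
    """
    if disparity_px <= 0:
        raise ValueError("P must lie in the common field of view of both cameras")
    return focal_px * baseline_m / disparity_px

# 60 mm baseline and 800 px focal length; a 20 px disparity gives 2.4 m.
print(triangulate_depth(20, 800, 0.06))
```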
In the embodiment of the invention, an actual operation space needs to be set before the stereoscopic interactive operation body is used for interactive operation. In normal use, a user will often want to operate in three dimensions from an arbitrary position, so setting the actual operation space needs to be convenient.
A specific embodiment of setting the actual operation space is explained below with reference to fig. 8. As shown in fig. 8, there are two coordinate-system zero points, POS(0,0,0) and OP(0,0,0), corresponding respectively to the zero point of the stereoscopic display device positioning plane and the zero point of the actual operation space. When the actual operation space needs to be set, the stereoscopic interactive operation body sends an actual operation space setting instruction, and the position coordinates of the operation body at that moment are recorded. For example, the zero point OP(0,0,0) of the original actual operation space 81 has a corresponding positioning-plane coordinate POS(x0, y0, z0) in the coordinate system of the stereoscopic display device positioning plane 82. When the operator moves the operation body to a certain position, for example into a new actual operation space 83, and presses the zero-point function key 84 on the operation body to trigger the actual operation space setting instruction, the new coordinate POS(x1, y1, z1) of the operation body on the positioning plane 82 is taken as the new coordinate mapped to OP(0,0,0). Thus, by sending the actual operation space setting instruction and acquiring the actual position coordinates of the stereoscopic interactive operation body, the actual operation space of the operation body can be determined.
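A sketch of this zero-point setting flow, assuming a hypothetical event-handler API (none of these names come from the patent):

```python
class OperationSpace:
    """Zero-point setting for the actual operation space (Fig. 8)."""

    def __init__(self):
        self.zero = (0.0, 0.0, 0.0)   # POS coordinate mapped to OP(0,0,0)

    def on_set_space_instruction(self, pos):
        # Zero-point key 84 pressed: the operation body's current
        # positioning-plane coordinate becomes the new OP(0,0,0).
        self.zero = pos

    def to_op(self, pos):
        # Express a positioning-plane coordinate POS in the actual operation space.
        return tuple(p - z for p, z in zip(pos, self.zero))

space = OperationSpace()
space.on_set_space_instruction((0.2, 0.1, 0.3))   # POS(x1, y1, z1)
print(space.to_op((0.25, 0.15, 0.4)))             # OP coordinates of a later sample
```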
In the embodiment of the invention, in some cases the user can set the actual operation space to coincide completely with the virtual operation space, achieving "what you see is what you touch": the position of the stereoscopic interactive operation body is the position of the virtual cursor. In this case, the virtual cursor is generally not displayed, and the operation body interacts with the operation object directly. Fig. 9 is a schematic diagram of an example virtual billiards application: when a user plays virtual billiards on a horizontally placed stereoscopic display device 91, the stereoscopic interactive operation body 92 is the cue held by the player. The movement of the cue is an important input in the game, and the ball is directly visible to the naked eye; in this case the coordinate systems of the actual operation space and the virtual operation space coincide, and the spatial coordinates of the stereoscopic interactive operation body 92 relative to the positioning plane 93 of the stereoscopic display device 91 can be used in the positioning calculation directly.
When the position or size of the virtual operation space is inconvenient for the operator to place the operation body in directly, the actual operation space of the operation body needs to be mapped into the virtual operation space. For example, fig. 10 is a schematic diagram of an application that enhances conventional mouse input: the user wears a positionable stereoscopic interactive operation body 101 on a finger, and the positioning plane 102 is the user's mouse pad, so for work such as mechanical modeling and design it is more convenient for the operator to set the actual operation space in the space above the mouse pad. When the user needs to edit stereoscopic content in the stereoscopic display device, the user moves the finger wearing the finger sleeve in the space above the mouse pad, and the coordinates of the stereoscopic interactive operation body 101 relative to the positioning plane 102 are mapped to the stereoscopic coordinates of the cursor in the stereoscopic display device 103, which is convenient for editing the stereoscopic content in the display.
The process of mapping the actual operation space and the virtual operation space is described with reference to fig. 11. As shown in fig. 11, there are three coordinate-system zero points, POS(0,0,0), OP(0,0,0), and View(0,0,0), corresponding respectively to the zero points of the positioning plane 112, the actual operation space 111, and the virtual operation space 113. OP(0,0,0) has a corresponding coordinate POS(x0, y0, z0) in the positioning-plane coordinate system; the values of x0, y0, z0 can be determined by the actual operation space setting method above.
When the operator moves the stereoscopic interactive operation body to a certain position in the actual operation space, the new position is located at a new coordinate POS(x0+x, y0+y, z0+z) in the positioning-plane coordinate system, and the values of x, y, z, i.e., OP(x, y, z), are obtained by subtracting the saved POS(x0, y0, z0) corresponding to OP(0,0,0).
Assume the maximum values of the virtual operation space of the stereoscopic display device in the x, y, z directions are ViewX, ViewY, and ViewZ, and the maximum values of the actual operation space in the x, y, z directions are OPX, OPY, and OPZ; these maxima can also be set freely by the operator.
Then the displacement proportionality coefficients PX, PY, and PZ are:
PX=ViewX/OPX
PY=ViewY/OPY
PZ=ViewZ/OPZ
The coordinates of the virtual cursor 114 mapped into the virtual operation space are then View(PX·x, PY·y, PZ·z).
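The displacement mapping above can be sketched as follows; op_max and view_max stand for (OPX, OPY, OPZ) and (ViewX, ViewY, ViewZ), and the function name is an assumption:

```python
def map_to_virtual(op, op_max, view_max):
    """Map an actual-operation-space point OP(x, y, z) into the virtual space.

    op_max = (OPX, OPY, OPZ); view_max = (ViewX, ViewY, ViewZ).
    """
    scale = [v / o for v, o in zip(view_max, op_max)]    # PX, PY, PZ
    return tuple(p * s for p, s in zip(op, scale))       # View(PX·x, PY·y, PZ·z)

# A cursor halfway through a 0.2 m cube lands halfway into a 400-unit virtual cube.
print(map_to_virtual((0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (400, 400, 400)))
```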
In the embodiment of the invention, when the operator works with the stereoscopic interactive operation body, the size of the actual operation space sometimes needs to be changed. The most intuitive effect of this change is to alter the ratio between the distance the operation body moves and the distance the virtual cursor moves, that is, the speed of the virtual cursor. Fig. 12 is a schematic diagram of a method for scaling the range of the actual operation space with the stereoscopic interactive operation body. As shown in fig. 12, let the zero point of the original actual operation space 121 correspond to the coordinate POS(x0, y0, z0) in the coordinate system of the positioning plane 124 (whose zero point is POS(0,0,0)), and let the maximum extents of the actual operation space that fill the virtual operation space in the x, y, z directions be OPX, OPY, and OPZ. When the zoom-operation-space function key 125 on the stereoscopic interactive operation body 123 is pressed to trigger the zoom command and the operation body moves to a new position with coordinates POS(x1, y1, z1), this point is taken as the end point of the actual operation space, which yields
OPX’=x1-x0
OPY’=y1-y0
OPZ’=z1-z0
where OPX', OPY', and OPZ' are saved as the maximum extents of the scaled actual operation space 122 that fill the virtual operation space in the x, y, z directions. Since the operator may drag in any direction, in most cases the method that best matches the operator's habits is to determine the new zero point of the actual operation space from the position of the previous coordinate-system zero point; however, a UI selection interface can also be provided to let the operator choose the new zero-point position.
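A sketch of this rescaling step under the same assumed names as the earlier mapping sketch:

```python
def rescale_operation_space(zero_pos, end_pos, view_max):
    """Recompute extents and coefficients after the zoom key 125 is triggered."""
    op_max = tuple(e - z for e, z in zip(end_pos, zero_pos))  # OPX' = x1 - x0, ...
    scale = tuple(v / o for v, o in zip(view_max, op_max))    # new PX, PY, PZ
    return op_max, scale

# Dragging from POS(0.1, 0.1, 0.1) to POS(0.4, 0.35, 0.3) against a
# (600, 500, 400)-unit virtual space prints the new extents and coefficients.
print(rescale_operation_space((0.1, 0.1, 0.1), (0.4, 0.35, 0.3), (600, 500, 400)))
```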
It should be noted that when the stereoscopic interactive operation body has multiple positioning points, the geometric center calculated from the coordinate-mapped spatial coordinates of those points is used as the single coordinate required for the determination.
Fig. 13 is a schematic diagram of cursor movement in the virtual operation space displayed by the stereoscopic display device. As shown in fig. 13, there are two coordinate-system zero points, POS(0,0,0) and OP(0,0,0), corresponding respectively to the zero point of the positioning plane and the zero point of the actual operation space 132. When the operator moves the stereoscopic interactive operation body 131 in the actual operation space 132, the displayed virtual cursor 133 takes on the space-transformed, mapped coordinates in the virtual operation space 134, and its movement in the virtual operation space 134 is synchronized with the movement of the operation body 131. An operator may need to switch from coarse operation to fine operation, temporarily rest the hand in a position that is easy to hold steady so as to better control hand movement, and then resume the previous operation from the new position; the operation space is then discontinuous with the previous one, and this is the problem the following embodiment mainly solves.
In the embodiment of the invention, when the operator works with the stereoscopic interactive operation body, seamless switching of the actual operation space can be achieved by locking the virtual cursor. Fig. 14 is a schematic diagram of a method for locking the virtual cursor with the stereoscopic interactive operation body, moving to another region, and unlocking. As shown in fig. 14, assume the zero point of the actual operation space of the stereoscopic interactive operation body 141 corresponds to the coordinate POS(x0, y0, z0) in the coordinate system of the positioning plane 142 (whose zero point is POS(0,0,0)). When the lock-cursor function key 145 on the operation body 141 is pressed, the operation body's coordinate on the positioning plane 142 is POS(x1, y1, z1), and the mapping between the virtual-operation-space coordinate of the virtual cursor 144 and the positioning-plane coordinate of the operation body is suspended; that is, the position of the virtual cursor 144 is locked. When the operator moves the operation body 141 to a new position POS(x2, y2, z2) and presses the unlock-cursor function key 146, the zero-point coordinate of the actual operation space is set to POS(x0+(x2-x1), y0+(y2-y1), z0+(z2-z1)) and the mapping between the virtual cursor's virtual-operation-space coordinate and the operation body's positioning-plane coordinate resumes. The operator has thus locked the virtual cursor, moved to another region, and unlocked, and can continue the pre-lock operation at the new position without changing the coordinate mapping again, achieving seamless switching of the operation space. Note that in the embodiment of the present invention, when the operation body triggers the lock-cursor instruction, the virtual cursor's coordinate information is frozen; when it releases the lock, the current position of the operation body is obtained to compute the absolute displacement that occurred while the cursor was locked, and this change is used to correct the new zero point of the operation space, realizing seamless switching of the user's operation space.
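A sketch of the lock/unlock zero-point correction, with assumed class and method names:

```python
class CursorLock:
    """Lock/unlock zero-point correction for seamless space switching (Fig. 14)."""

    def __init__(self, zero):
        self.zero = list(zero)     # POS(x0, y0, z0) mapped to OP(0,0,0)
        self.lock_pos = None

    def lock(self, pos):
        # Lock-cursor key 145: record POS(x1, y1, z1) and suspend the mapping.
        self.lock_pos = pos

    def unlock(self, pos):
        # Unlock-cursor key 146 at POS(x2, y2, z2): shift the zero point by the
        # displacement made while locked, then resume the mapping.
        self.zero = [z + (p2 - p1)
                     for z, p1, p2 in zip(self.zero, self.lock_pos, pos)]
        self.lock_pos = None

cursor = CursorLock((0.0, 0.0, 0.0))
cursor.lock((0.1, 0.1, 0.1))
cursor.unlock((0.3, 0.2, 0.1))
print(cursor.zero)   # zero corrected by (0.2, 0.1, 0.0)
```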
As mobile terminals increasingly adopt stereoscopic display technology, the stereoscopic interaction method provided by the embodiment of the present invention can effectively realize three-dimensional interaction, so that human-machine interaction with a stereoscopic display device is no longer limited to a two-dimensional touch screen. The following describes, with reference to figs. 15-17, activating or selecting an operation object in a stereoscopic scene on a stereoscopically displaying mobile terminal using the stereoscopic interaction method of the present invention. When the operator moves the stereoscopic interactive operation body in the actual operation space, the spatial coordinates of the operation body are acquired, and its actual motion trajectory is mapped to a virtual cursor in the virtual operation space. When the spatial distance between the virtual cursor and an activatable or selectable operation object is smaller than a set threshold, the operation object is activated. As shown in fig. 15, the motion trajectory of the operation body 151 in the actual operation space 152 is mapped through the positioning plane 153 to a virtual cursor 155 in the virtual operation space 154; when the distance between the virtual cursor 155 and the "phone" icon is smaller than the preset threshold, the "phone" icon is activated and highlighted. As shown in fig. 16, the motion trajectory of the operation body 161 in the actual operation space 162 is mapped through the positioning plane 163 to the virtual cursor 165 in the virtual operation space 164; when the virtual cursor 165 moves to the "contacts" icon on the left, the distance between the mapped cursor and the "contacts" icon falls below the preset threshold, and the "contacts" icon is activated and highlighted. As shown in fig. 17, the motion trajectory of the operation body 171 in the actual operation space 172 is mapped through the positioning plane 173 to the virtual cursor 175 in the virtual operation space 174; when the virtual cursor 175 moves along the z axis so that the distance between the mapped cursor and the "short message" icon falls below the preset threshold, the "short message" icon is activated, objects at visually shallower levels are displayed transparently, and the activated object is highlighted.
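For illustration, a minimal distance-threshold activation check; the icon names and coordinates are invented:

```python
import math

def maybe_activate(cursor, objects, threshold):
    """Return the first object whose distance to the virtual cursor is below
    the preset threshold, as in Figs. 15-17."""
    for name, center in objects.items():
        if math.dist(cursor, center) < threshold:
            return name   # e.g. highlight the "phone" icon
    return None

icons = {"phone": (0.10, 0.20, 0.0), "contacts": (0.40, 0.20, 0.0)}
print(maybe_activate((0.12, 0.21, 0.0), icons, threshold=0.05))   # -> "phone"
```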
For a further understanding of the interaction method of the embodiments of the present invention, application examples of the interaction method are described below with reference to the drawings.
FIG. 18 is a schematic diagram of stereo interaction using a mouse pad and ring auxiliary input device.
The system consists of a positioning plane 181 designed to double as a mouse pad, a ring-shaped stereoscopic interactive operation body 182 worn on a finger, and a computer with a stereoscopic display device 183. The positioning system of the ring may be the aforementioned ultrasonic positioning system, or a binocular camera that identifies infrared marker points.
When the operator uses a mouse in the ordinary way, the positioning plane is no different from a common mouse pad; when the operator's hand leaves the mouse pad, the positioning plane 181 acquires in real time the spatial position of the ring-shaped stereoscopic interactive operation body 182 in the actual operation space 184 and reports the position information to the computer controlling the stereoscopic display device 183. Having obtained the position information of the ring, the computer can map the motion of each tracked finger and place the virtual cursor 185 associated with it within the scene. Generally, the coordinates of the virtual cursor 185 are defined by the geometric center of the positioning points of the plurality of stereoscopic interactive operation bodies. When the operator moves in the actual operation space 184 above the positioning plane, the computer system interprets the movement of each tracked finger and maps it to a corresponding interaction event, changing the scene displayed by the stereoscopic display device.
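A minimal sketch of the geometric-center rule just mentioned (the function name and the sample coordinates are illustrative):

```python
def cursor_from_rings(ring_positions):
    """Anchor the virtual cursor at the centroid of the tracked ring
    positioning points."""
    n = len(ring_positions)
    return tuple(sum(p[i] for p in ring_positions) / n for i in range(3))

# e.g. rings worn on the thumb and index finger:
cursor = cursor_from_rings([(0.10, 0.02, 0.05), (0.12, 0.01, 0.06)])
```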
Figs. 19-21 are application diagrams of operating a stereoscopic mechanical structure with the auxiliary input device in one design. As shown in fig. 19, this three-dimensional mechanical structure contains a considerable number of layers and internal structures. When the stereoscopic interactive operation body 191 is at the corresponding position of the actual operation space 192, the virtual cursor 195, obtained in the virtual operation space 194 through the coordinate mapping of the positioning plane 193, activates the portion of the designed object at the corresponding position. Correspondingly, the activated level is not displayed transparently. When the operator moves in the decreasing direction of the z-axis, the corresponding virtual cursor 195 also moves and activates the portion of the workpiece being operated on. As shown in fig. 20, the stereoscopic interactive operation body (a finger ring in the figure), at the corresponding position of the actual operation space 202, is mapped through the coordinates of the positioning plane 203 to the virtual cursor 205 in the virtual operation space 204, where a small piston component of the workpiece is activated. When the component is activated, the ring is controlled to generate a slight force feedback to alert the operator that a new object has been activated. When the operator draws the two fingers wearing the stereoscopic interactive operation bodies together into a holding/pinching gesture, the activated workpiece enters a grabbed state; from then on the workpiece moves with the operator's further movement, until the operator releases the holding gesture. The force feedback device can emit several very short bursts of force feedback during the pinch and release actions, producing a viscous sensation that simulates the feel of the object being picked up and released. For example, as shown in fig. 21, when the stereoscopic interactive operation body (a ring worn on a finger in the figure) is at the corresponding position of the actual operation space 212, it is mapped through the coordinates of the positioning plane 213 to the virtual cursor 215 in the virtual operation space 214; after activating the piston workpiece, the operator performs a pinching operation and then pulls the workpiece to move in the z-axis direction.
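The activate/grab/release flow described above behaves like a small state machine driving the ring's force feedback. A sketch under assumed names follows; the pulse counts and durations are illustrative, since the disclosure only specifies "slight" and "very short" feedback:

```python
from enum import Enum, auto

class GrabState(Enum):
    IDLE = auto()
    ACTIVATED = auto()   # cursor within range of a component
    GRABBED = auto()     # pinch gesture closed on the activated component

class GrabController:
    def __init__(self, haptics):
        self.state = GrabState.IDLE
        self.haptics = haptics            # assumed: exposes pulse(duration_ms)

    def on_activate(self):
        if self.state is GrabState.IDLE:
            self.state = GrabState.ACTIVATED
            self.haptics.pulse(20)        # slight feedback: new object activated

    def on_pinch(self):
        if self.state is GrabState.ACTIVATED:
            self.state = GrabState.GRABBED
            for _ in range(3):            # short bursts: viscous pick-up feel
                self.haptics.pulse(5)

    def on_release(self):
        if self.state is GrabState.GRABBED:
            self.state = GrabState.ACTIVATED
            for _ in range(3):            # short bursts: object released
                self.haptics.pulse(5)
```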
Fig. 22 is a schematic diagram of an implementation of a simulated engraving application in accordance with an embodiment of the present invention.
As shown in fig. 22, the stereoscopic interactive operation body 221 is held by the operator as a virtual graver, and the object to be engraved is displayed within the virtual operation space 222 of the stereoscopic display device. When the operator holds the stereoscopic interactive operation body 221 and moves it in the actual operation space 225 relative to the positioning plane 223, the virtual graver 224 synchronously cuts into the engraved virtual object, and the force feedback device works synchronously, with the intensity of the force feedback in direct proportion to the hardness of the engraved object and the intrusion speed, so that the operator obtains the tactile sensation of an object really being carved. In particular, when the positioning plane is the screen surface of a tablet computer, mobile phone or other device using stereoscopic display technology, the actual operation space coincides with the virtual operation space, and the virtual graver is the stereoscopic interactive operation body held by the operator; this configuration also falls within the protection scope of the present invention.
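The proportionality between feedback intensity, hardness and intrusion speed can be sketched as follows; the gain and the clamping to a maximum output are assumptions, since the disclosure only states direct proportion:

```python
def feedback_intensity(hardness, intrusion_speed, k=1.0, max_intensity=1.0):
    """Force feedback proportional to material hardness and intrusion speed,
    clamped to the actuator's maximum output (k and the clamp are assumed)."""
    return min(k * hardness * intrusion_speed, max_intensity)
```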
Fig. 23 is a schematic diagram of implementing a virtual surgery training application according to an embodiment of the present invention.
As shown in fig. 23, the stereoscopic interactive operation body 231 is held by the operator as a virtual scalpel, and the human tissue for the virtual surgery is displayed in the virtual operation space 232 of the stereoscopic display device. When the operator holds the stereoscopic interactive operation body 231 and moves it in the actual operation space 235 relative to the positioning plane 233, the virtual scalpel 234 synchronously cuts into the virtual human tissue, and the force feedback device works synchronously according to the progress of the simulated surgery so that the operator obtains a realistic tactile sensation. In particular, when the positioning plane is the screen surface of a tablet computer, mobile phone or other device using stereoscopic display technology, the actual operation space coincides with the virtual operation space, and the virtual scalpel is the stereoscopic interactive operation body held by the operator; this configuration also falls within the protection scope of the present invention.
Fig. 24-25 are schematic diagrams of a simulated pool game application implemented by an embodiment of the present invention.
As shown in fig. 24, the stereoscopic interactive operation body 241 is made in the form of a billiard cue. When the operator moves the stereoscopic interactive operation body 241 in the actual operation space 243 above the positioning plane 242, it is mapped to the virtual billiard cue 245 in the virtual operation space 244, which synchronously strikes the virtual billiard balls, while the force feedback device works synchronously so that the operator obtains a realistic sensation of impact. In particular, as shown in fig. 25, when the positioning plane 251 is the screen surface of a stereoscopic display device 252 such as a tablet computer or mobile phone using stereoscopic display technology, the actual operation space coincides with the virtual operation space, and the virtual billiard cue is the stereoscopic interactive operation body 253 held by the operator; this configuration also falls within the protection scope of the present invention.
In the embodiment of the invention, the operation information of the stereoscopic interactive operation body in a non-contact state relative to the stereoscopic display device is identified, and the interactive operation is triggered when the operation information meets the preset condition. In particular, the actual motion trajectory can be mapped to a virtual cursor in the virtual operation space, and the virtual cursor can be controlled, according to an operation instruction sent by the operation body, to operate on an operation object of the stereoscopic display device, thereby realizing stereoscopic interaction. In addition, since the operation body has no mechanical contact with the stereoscopic display device, friction loss is reduced, and the durability of both the operation body and the stereoscopic display device is improved.
Fig. 26 is a schematic structural diagram of a stereoscopic display device according to an embodiment of the invention. As shown in fig. 26, the stereoscopic display device, which performs interactive operations with a stereoscopic interactive operation body, includes:
an identifying unit 261, configured to identify operation information that the stereoscopic interactive operation body is in a non-contact state, where the non-contact state is a state in which the stereoscopic interactive operation body moves relative to the stereoscopic display device and is not in contact with the stereoscopic display device,
an interaction unit 262, configured to determine whether the operation information meets a preset condition, if so, trigger an interaction operation in the interaction scene,
and a display unit 263 for displaying the interactive scene.
In the embodiment of the present invention, the operation information includes an actual motion trajectory of the stereoscopic interactive operation body with respect to the stereoscopic display device and an operation instruction sent by the stereoscopic interactive operation body.
In this embodiment of the present invention, the identifying unit 261 includes:
an actual operating space setting module 2611, configured to receive an actual operating space setting instruction sent by the stereoscopic interactive operating body, acquire an actual position coordinate of the stereoscopic interactive operating body according to the actual operating space setting instruction, and set the actual operating space according to the actual position coordinate of the stereoscopic interactive operating body;
an actual motion trajectory obtaining module 2612, configured to determine an actual motion trajectory of the stereoscopic interactive operation body according to a motion trajectory of the stereoscopic interactive operation body in the actual operation space relative to the stereoscopic display device.
In the embodiment of the present invention, the identifying unit 261 cooperates with a positioning unit on the stereoscopic interactive operation body, such as an ultrasonic transmitter or an infrared light source, to obtain the spatial position coordinates, posture or contour of the stereoscopic interactive operation body relative to the positioning plane of the stereoscopic display device. It should be noted that, if the stereoscopic interactive operation body can independently acquire its own spatial position coordinates, posture or contour relative to the positioning plane of the stereoscopic display device, the identifying unit 261 may acquire them directly from the stereoscopic interactive operation body.
In an embodiment of the present invention, the interaction unit 262 includes:
the space conversion module 2621 is configured to perform image processing on the actual motion trajectory to generate a virtual motion trajectory in a virtual operation space displayed by the stereoscopic display device;
an interactive content processing module 2622 for determining whether the operation information satisfies a preset condition,
and the interaction event triggering module 2623 is configured to trigger an interaction operation in the interaction scene when the preset condition is met.
In an embodiment of the present invention, the spatial conversion module 2621 is further configured to:
receiving an initial position setting instruction sent by the three-dimensional interactive operation body, wherein the initial position setting instruction is used for setting a first position coordinate of the three-dimensional interactive operation body in the actual operation space and a second position coordinate of the three-dimensional interactive operation body in the virtual operation space;
establishing a mapping relation between the actual operation space and the virtual operation space according to the first position coordinate and the second position coordinate;
and generating the virtual motion track from the actual motion track according to the mapping relation.
or is further configured to:
receiving a zooming instruction sent by the three-dimensional interactive operation body, and acquiring a third position coordinate to which the three-dimensional interactive operation body moves;
resetting the actual operation space according to the third position coordinate and the first position coordinate, and establishing a mapping relation between the virtual operation space and the reset actual operation space;
or is further configured to:
receiving a locking operation instruction sent by the three-dimensional interactive operation body, and acquiring a locking position coordinate of the three-dimensional interactive operation body; receiving an unlocking operation instruction sent when the three-dimensional interactive operation body is moved, and acquiring an unlocking position coordinate;
and resetting the actual operation space according to the first position coordinate, the locking position coordinate and the unlocking position coordinate, and establishing a mapping relation between the virtual operation space and the reset actual operation space.
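A minimal sketch of the first two configurations (initial mapping and zoom reset) follows; a single uniform scale factor and all names are assumptions, and the lock/unlock reset follows the zero-point correction sketched earlier:

```python
class SpaceMapper:
    def __init__(self, first_actual, second_virtual, scale=1.0):
        # Initial-position instruction: bind the first position coordinate in
        # the actual space to the second position coordinate in the virtual space.
        self.anchor_actual = first_actual
        self.anchor_virtual = second_virtual
        self.scale = scale

    def to_virtual(self, actual):
        """Map an actual motion-trajectory point into the virtual space."""
        return tuple(self.anchor_virtual[i]
                     + self.scale * (actual[i] - self.anchor_actual[i])
                     for i in range(3))

    def rescale(self, third_actual, new_scale):
        """Zoom instruction: reset the actual operation space around the third
        position coordinate and re-establish the mapping with a new scale,
        keeping the cursor position continuous across the reset."""
        self.anchor_virtual = self.to_virtual(third_actual)
        self.anchor_actual = third_actual
        self.scale = new_scale
```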
In the embodiment of the present invention, the space conversion module 2621 receives the spatial coordinate data of the stereoscopic interactive operation body. When the stereoscopic interactive operation body can independently obtain its own spatial position coordinates, posture or contour relative to the positioning plane of the stereoscopic display device, the space conversion module 2621 obtains the spatial coordinate data transmitted from the stereoscopic interactive operation body through the wireless communication unit. The received coordinate data are based on the coordinates of the positioning plane; they are converted in the space conversion module 2621 into coordinates in the virtual operation space and stored in a first-in first-out manner. The first-in first-out data represent the motion trajectory of the stereoscopic interactive operation body in the virtual operation space over the preceding period.
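The first-in first-out store is naturally a bounded queue. A sketch with an assumed buffer length, taking the actual-to-virtual conversion as an injected function:

```python
from collections import deque

class TrajectoryBuffer:
    def __init__(self, to_virtual, maxlen=128):   # maxlen is an assumption
        self.to_virtual = to_virtual              # e.g. SpaceMapper.to_virtual
        self.points = deque(maxlen=maxlen)        # oldest samples drop first

    def push(self, plane_coord):
        """Convert a positioning-plane coordinate into the virtual operation
        space and append it to the FIFO."""
        self.points.append(self.to_virtual(plane_coord))

    def recent_trajectory(self):
        """The motion trajectory over the preceding period, oldest first."""
        return list(self.points)
```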
In this embodiment of the present invention, the interactive content processing module 2622 includes:
a first judging submodule 26221, configured to judge whether the actual operation space and the virtual operation space coincide, and, if they do not coincide, to map the motion trajectory of the stereoscopic interactive operation body to the motion trajectory of a virtual cursor in the virtual operation space;
and a second judging submodule 26222, configured to judge whether the distance between the motion trajectory of the stereoscopic interactive operation body, or of the virtual cursor in the virtual operation space, and the operation object in the virtual operation space is smaller than a preset distance.
In this embodiment of the present invention, the interactive content processing module 2622 reads the motion trajectory data from the space conversion module 2621 and judges whether the actual operation space coincides with the virtual operation space; if they do not coincide, it maps the motion trajectory of the operation body to the motion trajectory of the virtual cursor in the virtual operation space, and further judges whether the trajectory in the virtual operation space contacts or collides with the coordinates of a virtual object in the interactive scene.
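The two-step judgment can be sketched as follows; the preset distance and all names are illustrative, and `to_virtual` stands for any actual-to-virtual mapping such as the one sketched above:

```python
import math

PRESET_DISTANCE = 0.02  # assumed preset distance, in virtual-space units

def _distance(a, b):
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

def trajectory_hits_object(spaces_coincide, trajectory, obj_center, to_virtual):
    """First judgment: map the trajectory to the virtual cursor only when the
    actual and virtual spaces do not coincide. Second judgment: test whether
    any trajectory point comes within the preset distance of the object."""
    if not spaces_coincide:
        trajectory = [to_virtual(p) for p in trajectory]
    return any(_distance(p, obj_center) < PRESET_DISTANCE for p in trajectory)
```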
In an embodiment of the present invention, the operation instruction includes an activation instruction, the operation object includes a plurality of display objects stacked along the depth direction of the virtual space, and the interaction event triggering module 2623 is specifically configured to execute the activation instruction to perform an activation operation on the display objects.
In an embodiment of the present invention, the operation instruction includes a cutting instruction, the operation object includes virtual human tissue in the virtual operation space, and the interaction event triggering module 2623 is specifically configured to execute the cutting instruction to perform a cutting operation on the virtual human tissue.
In this embodiment of the present invention, the interaction event triggering module 2623 receives the determination result sent by the interactive content processing module 2622, and triggers an interaction operation when the operation information meets a preset condition, where the interaction operation is determined according to an operation instruction included in the operation information.
In an embodiment of the present invention, the apparatus further includes:
and a force feedback unit 264, configured to send a force feedback vibration instruction to the stereoscopic interactive operation body when the interactive operation in the interactive scene is triggered, so that the stereoscopic interactive operation body performs force feedback vibration.
In an embodiment of the present invention, the apparatus further includes:
a communication unit 265, configured to establish data communication between the stereoscopic display device and the stereoscopic interactive operator.
It should be noted that the information interaction and execution processes between the units and modules of the stereoscopic display device in the embodiment of the present invention are based on the same concept as the method embodiments of the present invention, so the specific contents of the method embodiments are also applicable to the stereoscopic display device. Moreover, the units or modules in the embodiment of the present invention can be implemented as separate hardware or software, and their functions can be implemented by separate hardware or software as required.
In the embodiment of the invention, the operation information of the stereoscopic interactive operation body in a non-contact state relative to the stereoscopic display device is identified, and the interactive operation is triggered when the operation information meets the preset condition. In particular, the actual motion trajectory can be mapped to a virtual cursor in the virtual operation space, and the virtual cursor can be controlled, according to an operation instruction sent by the operation body, to operate on an operation object of the stereoscopic display device, thereby realizing stereoscopic interaction. In addition, since the operation body has no mechanical contact with the stereoscopic display device, friction loss is reduced, and the durability of both the operation body and the stereoscopic display device is improved.
Fig. 27 is a schematic structural diagram of a stereoscopic interaction system according to an embodiment of the invention. As shown in fig. 27, the system includes: a stereoscopic interactive operation body 271 and a stereoscopic display device 272, where the specific structural composition and functions of the stereoscopic display device 272 are as described in the foregoing device embodiments of the present invention and are not repeated here. The stereoscopic interactive operation body 271 specifically includes:
the protocol bit unit 2711 is capable of generating a specific signal, such as a signal generated by ultrasonic TOA or TDOA, infrared marker image recognition, etc., and cooperating with the stereoscopic display device to obtain the spatial position coordinates, posture or contour of the stereoscopic interactive operator with respect to the positioning plane of the stereoscopic display device. The unit can work independently to obtain the space position coordinates, postures or contours of the stereoscopic interactive operation body relative to the positioning plane of the stereoscopic display equipment, for example, an inertial guidance system (such as a gyroscope) formed by MEMS devices is installed;
a force feedback unit 2712 for improving the interactive experience,
the operation instruction triggering unit 2713 is configured to trigger specific interactive operation by an operator through a triggering instruction, where a typical triggering method may be a key, a capacitive touch slider, or the like.
A communication unit 2714, configured to perform data connection with the stereoscopic display device, so as to implement data communication.
In the embodiment of the invention, the operation information of the stereoscopic interactive operation body in a non-contact state relative to the stereoscopic display device is identified, and the interactive operation is triggered when the operation information meets the preset condition, thereby realizing stereoscopic interaction; moreover, since the operation body has no mechanical contact with the stereoscopic display device, friction loss is reduced, and the durability of both the operation body and the stereoscopic display device is improved.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (27)

1. The stereo interaction method is applied to an interaction scene of stereo display equipment and a stereo interaction operation body, and is characterized in that: the method comprises the following steps:
identifying operation information of the stereoscopic interactive operation body in a non-contact state, wherein the non-contact state refers to a state that the stereoscopic interactive operation body moves relative to the stereoscopic display equipment and is not in contact with the stereoscopic display equipment;
and judging whether the operation information meets a preset condition, and if so, triggering the interactive operation in the interactive scene.
2. The method of claim 1, wherein: the operation information comprises an actual motion track of the stereoscopic interactive operation body relative to the stereoscopic display equipment and an operation instruction sent by the stereoscopic interactive operation body.
3. The method of claim 2, further comprising:
and generating a virtual motion track in a virtual operation space displayed by the stereoscopic display equipment according to the actual motion track.
4. The method of claim 3, wherein: the generating of the virtual motion track in the virtual operation space displayed by the stereoscopic display device according to the actual motion track includes:
setting an actual operation space;
acquiring the actual motion track formed by the motion of the stereoscopic interactive operation body relative to the stereoscopic display equipment in the actual operation space;
and carrying out space mapping on the actual motion track to generate the virtual motion track.
5. The method of claim 4, wherein: the setting of the actual operating space specifically includes:
receiving an actual operation space setting instruction sent by the three-dimensional interactive operation body;
acquiring the actual position coordinate of the three-dimensional interactive operation body according to the actual operation space setting instruction;
and setting the actual operation space according to the actual position coordinates of the three-dimensional interactive operation body.
6. The method of claim 4, wherein: the generating the virtual motion trajectory by performing spatial mapping on the actual motion trajectory specifically includes:
receiving an initial position setting instruction sent by the three-dimensional interactive operation body, wherein the initial position setting instruction is used for setting a first position coordinate of the three-dimensional interactive operation body in the actual operation space and a second position coordinate of the three-dimensional interactive operation body in the virtual operation space;
establishing a mapping relation between the actual operation space and the virtual operation space according to the first position coordinate and the second position coordinate;
and according to the mapping relation, performing spatial mapping on the actual motion track to generate the virtual motion track.
7. The method of claim 6, wherein: the method further comprises the following steps:
receiving a zooming instruction sent by the three-dimensional interactive operation body, and acquiring a third position coordinate of the three-dimensional interactive operation body in the actual operation space;
and resetting the actual operation space according to the third position coordinate and the first position coordinate, and establishing a mapping relation between the virtual operation space and the reset actual operation space.
8. The method of claim 6, wherein: the method further comprises the following steps:
receiving a locking operation instruction sent by the three-dimensional interactive operation body, and acquiring a locking position coordinate of the three-dimensional interactive operation body;
moving the three-dimensional interactive operation body, and acquiring an unlocking position coordinate when receiving an unlocking operation instruction sent by the three-dimensional interactive operation body;
and resetting the actual operation space according to the first position coordinate, the locking position coordinate and the unlocking position coordinate, and establishing a mapping relation between the virtual operation space and the reset actual operation space.
9. The method of claim 4, wherein: judging whether the operation information meets a preset condition, if so, triggering the interactive operation in the interactive scene, specifically comprising:
judging whether the actual operation space is coincident with the virtual operation space, if the actual operation space is not coincident with the virtual operation space, mapping the actual motion track of the three-dimensional interactive operation body to be the motion track of a virtual cursor in the virtual operation space, and executing the operation corresponding to the operation instruction on the operation object when the distance between the motion track of the virtual cursor and the operation object in the virtual operation space is smaller than a preset distance.
10. The method of claim 9, wherein:
and if the actual operation space is coincident with the virtual operation space, executing the operation corresponding to the operation instruction on the operation object when the distance between the actual motion trajectory of the three-dimensional interactive operation body and the operation object in the virtual operation space is smaller than a preset distance.
11. The method according to claim 9 or 10, characterized in that: the operation instruction comprises an activation instruction, the operation object comprises a plurality of display objects which are arranged in a stacked mode along the depth direction of the virtual space, and the stereoscopic interactive operation body executes the activation instruction and performs activation operation on the display objects.
12. The method according to claim 9 or 10, characterized in that: the operation instruction comprises a cutting instruction, the operation object comprises virtual human body tissues in the virtual operation space, and the three-dimensional interactive operation body executes the cutting instruction and performs cutting operation on the virtual human body tissues.
13. The method according to any one of claims 1 to 10, wherein: the method further comprises the following steps:
and when the interactive operation in the interactive scene is triggered, sending a force feedback vibration instruction to the three-dimensional interactive operation body so as to enable the three-dimensional interactive operation body to perform force feedback vibration.
14. The method according to any one of claims 1 to 10, wherein: before identifying the operation information that the three-dimensional interactive operation body is in a non-contact state, the method further comprises the following steps:
and establishing data communication between the stereoscopic display equipment and the stereoscopic interactive operation body.
15. The stereoscopic display equipment is used for carrying out interactive operation with a stereoscopic interactive operation body, and comprises a display unit and a display unit, wherein the display unit is used for displaying an interactive scene, and the stereoscopic display equipment is characterized in that: further comprising:
the identification unit is used for identifying operation information of the stereoscopic interactive operation body in a non-contact state, wherein the non-contact state refers to a state that the stereoscopic interactive operation body moves relative to the stereoscopic display equipment and is not in contact with the stereoscopic display equipment;
and the interaction unit is used for judging whether the operation information meets a preset condition or not, and if so, triggering the interaction operation in the interaction scene.
16. The apparatus of claim 15, wherein: the operation information comprises an actual motion track of the stereoscopic interactive operation body relative to the stereoscopic display equipment and an operation instruction sent by the stereoscopic interactive operation body.
17. The apparatus of claim 16, wherein: the identification unit includes:
the actual operation space setting module is used for receiving an actual operation space setting instruction sent by the three-dimensional interactive operation body, acquiring the actual position coordinate of the three-dimensional interactive operation body according to the actual operation space setting instruction, and setting the actual operation space according to the actual position coordinate of the three-dimensional interactive operation body;
and the actual motion track acquisition module is used for acquiring the actual motion track formed by the motion of the stereoscopic interactive operation body relative to the stereoscopic display equipment in the actual operation space.
18. The apparatus of claim 17, wherein: the interaction unit includes:
the space conversion module is used for generating a virtual motion track in a virtual operation space displayed by the stereoscopic display equipment according to the actual motion track;
an interactive content processing module for judging whether the operation information satisfies a preset condition,
and the interaction event triggering module is used for triggering the interaction operation in the interaction scene when the preset condition is met.
19. The apparatus of claim 18, wherein: the spatial conversion module is further configured to:
receiving an initial position setting instruction sent by the three-dimensional interactive operation body, wherein the initial position setting instruction is used for setting a first position coordinate of the three-dimensional interactive operation body in the actual operation space and a second position coordinate of the three-dimensional interactive operation body in the virtual operation space;
establishing a mapping relation between the actual operation space and the virtual operation space according to the first position coordinate and the second position coordinate;
and according to the mapping relation, performing spatial mapping on the actual motion track to generate the virtual motion track.
20. The apparatus of claim 18, wherein: the spatial conversion module is further configured to:
receiving a zooming instruction sent by the three-dimensional interactive operation body, and acquiring a third position coordinate of the three-dimensional interactive operation body in the actual operation space;
and resetting the actual operation space according to the third position coordinate and the first position coordinate, and establishing a mapping relation between the virtual operation space and the reset actual operation space.
21. The apparatus of claim 18, wherein: the spatial conversion module is further configured to:
receiving a locking operation instruction sent by the three-dimensional interactive operation body, and acquiring a locking position coordinate of the three-dimensional interactive operation body; receiving an unlocking operation instruction sent when the three-dimensional interactive operation body is moved, and acquiring an unlocking position coordinate;
and resetting the actual operation space according to the first position coordinate, the locking position coordinate and the unlocking position coordinate, and establishing a mapping relation between the virtual operation space and the reset actual operation space.
22. The apparatus of claim 18, wherein: the interactive content processing module comprises:
the first judgment submodule is used for judging whether the actual operation space is coincident with the virtual operation space or not, and if the actual operation space is not coincident with the virtual operation space, mapping the actual motion track of the three-dimensional interactive operation body into the motion track of a virtual cursor in the virtual operation space;
and the second judging submodule is used for judging whether the distance between the actual motion track of the three-dimensional interactive operation body or the motion track of the virtual cursor in the virtual operation space and the operation object in the virtual operation space is smaller than a preset distance.
23. The apparatus according to any one of claims 19 to 22, wherein: the operation instruction comprises an activation instruction, the operation object comprises a plurality of display objects which are stacked along the depth direction of the virtual space, and the interaction event triggering module is specifically configured to: and executing the activation instruction to perform activation operation on the display object.
24. The apparatus according to any one of claims 19 to 22, wherein: the operation instruction comprises a cutting instruction, the operation object comprises a virtual human body tissue in the virtual operation space, and the interaction event triggering module is specifically configured to: and executing the cutting instruction to perform cutting operation on the virtual human body tissue.
25. The apparatus according to any one of claims 15 to 22, wherein: the apparatus further comprises:
and the force feedback unit is used for sending a force feedback vibration instruction to the three-dimensional interactive operation body when the interactive operation in the interactive scene is triggered so as to enable the three-dimensional interactive operation body to carry out force feedback vibration.
26. The apparatus according to any one of claims 15 to 22, wherein: the apparatus further comprises:
and the communication unit is used for establishing data communication between the stereoscopic display equipment and the stereoscopic interactive operation body.
27. The three-dimensional interactive system is characterized in that: comprising a stereoscopic interactive operator and a stereoscopic display device according to any of claims 15 to 26.
CN201510147807.2A 2015-03-31 2015-03-31 Stereo interaction method, stereoscopic display device and its system Expired - Fee Related CN106155281B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510147807.2A CN106155281B (en) 2015-03-31 2015-03-31 Stereo interaction method, stereoscopic display device and its system

Publications (2)

Publication Number Publication Date
CN106155281A true CN106155281A (en) 2016-11-23
CN106155281B CN106155281B (en) 2018-05-01

Family

ID=57338065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510147807.2A Expired - Fee Related CN106155281B (en) 2015-03-31 2015-03-31 Stereo interaction method, stereoscopic display device and its system

Country Status (1)

Country Link
CN (1) CN106155281B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102591550A (en) * 2011-01-04 2012-07-18 中国移动通信集团公司 Zoom control method and device of terminal interface contents
CN103456223A (en) * 2012-06-01 2013-12-18 苏州敏行医学信息技术有限公司 Laparoscopic surgery simulation system based on force feedback
CN103246351A (en) * 2013-05-23 2013-08-14 刘广松 User interaction system and method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951087A (en) * 2017-03-27 2017-07-14 联想(北京)有限公司 A kind of exchange method and device based on virtual interacting plane
CN106951087B (en) * 2017-03-27 2020-02-21 联想(北京)有限公司 Interaction method and device based on virtual interaction plane

Also Published As

Publication number Publication date
CN106155281B (en) 2018-05-01

Similar Documents

Publication Publication Date Title
EP3425481B1 (en) Control device
CN103995584B (en) Stereo interaction method and its display device, operation stick and system
KR101639066B1 (en) Method and system for controlling virtual model formed in virtual space
US20150352437A1 (en) Display control method for head mounted display (hmd) and image generation device
WO2014141504A1 (en) Three-dimensional user interface device and three-dimensional operation processing method
WO2015180497A1 (en) Motion collection and feedback method and system based on stereoscopic vision
CN103793060B (en) A kind of user interactive system and method
US7190378B2 (en) User interface for augmented and virtual reality systems
WO2016185845A1 (en) Interface control system, interface control device, interface control method and program
WO2014147858A1 (en) Three-dimensional unlocking device, three-dimensional unlocking method and program
CN109313502B (en) Tap event location using selection device
WO2007100204A1 (en) Stereovision-based virtual reality device
JP2017027206A (en) Information processing apparatus, virtual object operation method, computer program, and storage medium
EP3262505B1 (en) Interactive system control apparatus and method
US20140347329A1 (en) Pre-Button Event Stylus Position
KR102516096B1 (en) Information processing system and information processing method
KR20150040580A (en) virtual multi-touch interaction apparatus and method
WO2017021902A1 (en) System and method for gesture based measurement of virtual reality space
US9658685B2 (en) Three-dimensional input device and input system
KR100446236B1 (en) No Contact 3-Dimension Wireless Joystick
CN106155281B (en) Stereo interaction method, stereoscopic display device and its system
CN104267833A (en) Man-machine interface system
CN106066689B (en) Man-machine interaction method and device based on AR or VR system
CN108459716B (en) Method for realizing multi-person cooperation to complete task in VR
US20230214004A1 (en) Information processing apparatus, information processing method, and information processing program

Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into force of request for substantive examination
GR01: Patent grant
TR01: Transfer of patent right
    Effective date of registration: 20180711
    Address after: 518000 Room 201, building A, 1 front Bay Road, Shenzhen Qianhai cooperation zone, Shenzhen, Guangdong
    Patentee after: SUPERD Co.,Ltd.
    Address before: 518053 East Guangdong H-1 East 101, overseas Chinese town, Nanshan District, Shenzhen.
    Patentee before: SHENZHEN SUPER PERFECT OPTICS Ltd.
CF01: Termination of patent right due to non-payment of annual fee
    Granted publication date: 20180501