CN106774935B - Display device - Google Patents
Display device
- Publication number: CN106774935B
- Application number: CN201710013282.2A
- Authority: CN (China)
- Prior art keywords: target object, infrared, action information, signal, scanning
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
Abstract
The present invention provides a display device comprising: a scanning unit for scanning a target object and identifying its features to obtain a scanning signal; a modeling unit for generating, according to the scanning signal, a 3D model conforming to the features of the target object; a motion detection unit for detecting the motion state of the target object to obtain motion information; and a virtual presentation unit for displaying the 3D model in a virtual image and causing the 3D model to present a motion state matching the motion information. The invention presents a 3D model of the participant in the virtual image together with the participant's actual motions, so that the user obtains a virtual-environment experience with a stronger sense of immersion; the invention therefore has very high practical value.
Description
Technical Field
The invention relates to the technical field of virtual reality display, in particular to a display device.
Background
Virtual Reality (VR) technology is an important branch of simulation technology. It combines simulation with computer graphics, human-machine interfaces, multimedia, sensing, networking, and other technologies, and is a challenging interdisciplinary frontier of research. Virtual reality technology mainly covers the simulated environment and the sensing and perception devices.
With the continuous development of virtual reality technology, virtual reality head-mounted display devices (VR devices, VR headsets, or VR glasses for short) have found very wide application. A VR device seals the user off from outside vision and guides the user into the sensation of being present in a virtual environment, providing a realistic three-dimensional visual experience.
At present, however, a VR device only presents a pre-programmed virtual image to the user, and the user cannot be inserted into that image. For example, when different users experience the same program on their VR devices, the character models in the virtual image are fixed by the manufacturer, which weakens each user's sense of immersion and prevents a truly lifelike, immersive experience.
Disclosure of Invention
The invention aims to solve the problem that existing virtual reality display devices cannot provide users with an immersive, personalized experience.
To achieve the above object, in one aspect, an embodiment of the present invention provides a display apparatus including:
the scanning unit is used for scanning and identifying the characteristics of the target object to obtain a scanning signal;
the modeling unit is used for generating, according to the scanning signal, a 3D model conforming to the features of the target object;
the motion detection unit is used for detecting the motion state of the target object to obtain motion information;
and the virtual presentation unit is used for displaying the 3D model in the virtual image and enabling the 3D model to present a motion state matching the motion information.
Further, the scanning unit includes:
the image-controlled CCD camera is used for scanning and photographing the target object through 360 degrees to obtain captured images of the target object;
and the image operation module, with a built-in preset image processing algorithm, is used for performing feature recognition and collection of the target object on the captured images according to the image processing algorithm to obtain the scanning signal for generating the 3D model.
Further, during operation, the CCD camera moves around the target object with an accuracy of 0.3 pixel, its lens rotationally tracks the target object with an accuracy of 0.05 degree, and the scanning and photographing of the target object are completed in less than 10 milliseconds.
Further, the motion detection unit includes:
the infrared emitter is arranged on a preset positioning point and used for emitting an infrared positioning signal;
the infrared receiver is arranged on the target object and used for receiving the infrared positioning signal and determining, according to how the infrared positioning signal is received, the difference between the target object's distance and a reference distance;
and the first processor is used for determining motion information representing the motion state of the target object according to the determined difference between the target object's distance and the reference distance.
Further, the infrared transmitter is also used for transmitting an infrared test signal;
the infrared receiver is also used for receiving an infrared test signal and determining the change of the relative distance between the target object and the positioning point according to the receiving condition of the infrared test signal;
the first processor is further configured to control the infrared emitter to emit an infrared test signal before determining the motion information of the target object, and if the infrared receiver determines, through the infrared test signal, that the relative distance between the target object and the locating point is unchanged and the unchanged duration reaches a preset threshold, determine that the distance between the current position of the target object and the infrared emitter is the reference distance, and control the infrared receiver to emit an infrared locating signal.
Further, the display apparatus further includes: a micro control unit, the micro control unit comprising:
a signal adjustment circuit for performing idealized extraction of the motion information detected by the motion detection unit, including: removing signal glitches from the motion information and/or inserting preset signal pulses into the motion information;
the A/D conversion circuit is used for performing analog-to-digital conversion on the idealized, extracted motion information;
the timing synchronization circuit is used for applying a time-slot delay to the motion information after the analog-to-digital conversion;
and the second processor is used for sending the time-slot-delayed motion information to the virtual presentation unit.
Further, the timing synchronization circuit delays the motion information after the analog-to-digital conversion by adding buffer time slots and/or by signal feedback.
Further, the second processor comprises:
a first buffer, a second buffer and a transmitter;
the first buffer is used for receiving and buffering the motion information sent by the timing synchronization circuit in a first time slot, and for sending the buffered motion information to the transmitter in a second time slot; the second buffer is used for receiving and buffering the motion information sent by the timing synchronization circuit in the second time slot, and for sending the buffered motion information to the transmitter in the first time slot; the working cycle of the second processor comprises at least one first time slot and at least one second time slot arranged alternately;
the transmitter is used for transmitting the action information received by the transmitter to the virtual presenting unit.
Further, the display apparatus further includes:
the filtering unit is used for denoising the motion information sent by the second processor;
the virtual presentation unit is specifically configured to receive the motion information after noise reduction by the filtering unit.
Further, the display apparatus further includes:
at least one vibration element disposed on the target object;
and the tactile feedback unit is connected to each of the at least one vibration element and used, when the 3D model in the virtual image generates tactile information, for acquiring the tactile information and generating a corresponding tactile feedback control signal to drive the corresponding vibration element to vibrate, thereby feeding the tactile sensation back to the target object.
The scheme of the invention has the following beneficial effects:
the scheme of the invention can present the 3D model of the participant in the virtual image together with the actual action of the participant, so that the user can obtain the virtual environment experience with stronger substitution feeling, thereby having very high practical value.
Drawings
FIG. 1 is a schematic structural diagram of a display device according to the present invention;
FIG. 2 is a schematic diagram of a micro control unit of the display device according to the present invention;
fig. 3 is a logical relationship diagram between detailed structures of the display device of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
The invention provides a solution to the problem that existing virtual reality display devices cannot give users an immersive, personalized experience.
In one aspect, an embodiment of the present invention provides a display apparatus, as shown in fig. 1, including:
the scanning unit is used for scanning and identifying the characteristics of the target object to obtain a scanning signal; the target object may refer to a person or an object participating in the virtual image, and the number is not limited to one;
a modeling unit for generating a 3D model conforming to the characteristics of the target object based on the scanning signal;
the motion detection unit is used for detecting the motion state of the target object to obtain motion information;
and the virtual presenting unit is used for displaying the 3D model in the virtual image and enabling the 3D model to present a motion state matched with the motion information.
Evidently, this embodiment can present a 3D model of the participant in the virtual image together with the participant's actual motions, so that the user obtains a virtual-environment experience with a stronger sense of immersion; the embodiment therefore has very high practical value.
The display device of the present embodiment will be described in detail with reference to practical applications.
Illustratively, the scanning unit of the present embodiment includes an image-controlled CCD camera and an image operation module.
The image-controlled CCD camera scans and photographs the target object through 360 degrees to obtain captured images of the target object; the image operation module, with its built-in preset image processing algorithm, performs feature recognition and collection on those captured images according to the algorithm to obtain the scanning signal used to generate the 3D model.
Specifically, the CCD camera of the present embodiment mainly comprises a shooting light source, a rotating lens, a CCD image sensor, and input/output interfaces to the image operation module, and is driven by image capture and processing software.
Preferably, during operation the CCD camera moves around the target object with an accuracy of 0.3 pixel, its lens rotationally tracks the target object with an accuracy of 0.05 degree, and it completes the scanning and photographing of the target object in less than 10 milliseconds. Under these parameters, the camera can rapidly acquire fine-grained 360-degree images of the target object while avoiding deviations caused by the target object inadvertently moving during the scan.
The image processing algorithms built into the image operation module may include, but are not limited to, image accumulation, weighted averaging, cyclic decision, and compensation algorithms. Based on these algorithms, data outside the target object in the captured images can be effectively filtered out, so that only feature information of the target object is obtained, and the collected feature information is accurate and specific to each target object. For example, if the target object is a person, features such as height, weight, hair color, skin color, and clothing can be recognized and extracted; these feature data serve as the scanning signal.
It should be noted that determining feature data of an object according to a captured image of the object is already implemented in the prior art, and thus, will not be described again by way of example.
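Although the patent does not disclose its algorithms in code, the background-filtering idea described above can be illustrated with a minimal frame-differencing sketch. The function name, the threshold value, and the plain-list grayscale representation are all assumptions for illustration, not the patent's actual method:

```python
def extract_target_features(frame, background, threshold=30):
    """Hypothetical frame-differencing step: keep only pixels that differ
    from a reference background by more than `threshold`, mimicking the
    'filter out data outside the target object' idea above.
    Images are plain 2-D lists of grayscale values (an assumption)."""
    return [
        [abs(f - b) > threshold for f, b in zip(row_f, row_b)]
        for row_f, row_b in zip(frame, background)
    ]
```

The resulting boolean mask marks candidate target pixels; a real system would feed such a mask into the feature recognition stage rather than use it directly.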
After the scanning signal is determined, the modeling unit generates from it a 3D model of the target object's features. In practice, the 3D model is not limited to representing only the outline of the target object itself; for example, if the target object is the user, the model may also represent the user's clothing and worn equipment. Nor must the 3D model reflect the complete target object: with the user as the target object, the final model may cover only part of the body, such as an arm or the torso.
After the 3D model is determined, the virtual presentation unit can render it in the virtual image. In practice, the virtual presentation unit of this embodiment may be a virtual reality head-mounted display device, such as a VR headset or VR glasses. While it works, the motion detection unit detects the motion state of the target object in real time and feeds the corresponding motion information back to the virtual presentation unit, which then controls the 3D model of the target object to present the corresponding motion in the virtual image.
Specifically, the motion detection unit of the present embodiment includes an infrared transmitter, an infrared receiver, and a first processor. Wherein,
the infrared emitter is arranged on a preset positioning point and used for emitting an infrared positioning signal;
the infrared receiver is arranged on the target object and used for receiving the infrared positioning signal and determining the difference value between the target object and a reference distance according to the receiving condition of the infrared positioning signal;
the first processor determines motion information representing a motion state of the target object according to the difference between the determined target object and the reference distance.
Taking a user as the target object, in practical applications the infrared receivers of this embodiment may be placed on the user's limbs and torso. Before the motion information is determined, the user needs to stand still near the positioning point for a preset threshold duration. While the user is stationary, the first processor takes the distance between each infrared receiver and the infrared transmitter as a reference distance, and thereafter determines relative changes of that distance against the reference.
In determining this reference, the first processor controls the infrared transmitter to transmit an infrared test signal; the infrared receiver receives the test signal, and the change in the relative distance between the user and the positioning point is determined from how the test signal is received.
If the infrared test signal shows that the relative distance between the user and the positioning point is unchanged, and the unchanged duration reaches the aforementioned preset threshold, it can be concluded that the user is standing still and waiting for positioning calibration; at this point the first processor records the distance between the target object's current position (or each infrared receiver) and the infrared transmitter as the reference distance. Once the reference distance is determined, the infrared transmitter can be controlled to emit an infrared positioning signal so as to formally detect the user's motion information.
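The calibration procedure described above — wait until the target holds still for a preset duration, record the reference distance, then express motion as the difference from that reference — can be sketched as follows. The class name, tolerance, and threshold values are illustrative assumptions:

```python
class PositionCalibrator:
    """Sketch of the first processor's calibration logic: once the
    measured receiver-to-transmitter distance stops changing for a
    preset duration, record it as the reference distance; thereafter
    motion is expressed as the difference from that reference.
    (Threshold and tolerance values are illustrative assumptions.)"""

    def __init__(self, still_threshold_s=2.0, tolerance_m=0.01):
        self.still_threshold_s = still_threshold_s
        self.tolerance_m = tolerance_m
        self.reference_distance = None
        self._last = None
        self._still_since = None

    def feed(self, distance_m, t_s):
        """Feed one distance sample; returns True once calibrated."""
        if self._last is None or abs(distance_m - self._last) > self.tolerance_m:
            self._still_since = t_s  # target moved: restart the stillness timer
        elif t_s - self._still_since >= self.still_threshold_s:
            self.reference_distance = distance_m  # held still long enough
        self._last = distance_m
        return self.reference_distance is not None

    def delta(self, distance_m):
        """Difference between a new measurement and the reference distance."""
        return distance_m - self.reference_distance
```

After `feed` returns True, each subsequent `delta` value corresponds to the "difference from the reference distance" from which the motion information is derived.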
After the user's motion information is determined, the display device of this embodiment further classifies and selects the collected motion signals and outputs them synchronously to the virtual presentation unit through a micro control unit, whose main purpose is to reduce the delay of the motion signals during transmission.
Specifically, as shown in fig. 2, the micro control unit of the present embodiment includes:
a signal adjustment circuit for performing idealized extraction of the motion information detected by the motion detection unit, including removing signal glitches from the motion information and/or inserting preset signal pulses into it. Removing the glitches eliminates interference pulses that arise when the motion information is disturbed by other factors, while inserting preset signal pulses increases the motion information's resistance to interference.
The A/D conversion circuit performs analog-to-digital conversion on the idealized, extracted motion information. In practical application, the A/D conversion circuit of this embodiment can switch freely between 8-bit and 10-bit data widths according to requirements (data volume, processing speed, power consumption, and so on); its minimum resolution is 0.003 V, its conversion rate is at least 1.3 MSPS, and its maximum adjustment range is 3.3 V;
the timing synchronization circuit applies a time-slot delay to the motion information after analog-to-digital conversion. In practice, it delays the converted motion information by N whole periods by adding buffer time slots and/or by signal feedback;
the second processor sends the delayed motion information to the virtual presentation unit. As an illustrative example, the second processor is connected to the virtual presentation unit through a network cable, which provides high-bandwidth transmission of the motion information. In addition, taking the user as the target object, the N-whole-period delay of the timing synchronization circuit allows the second processor to send the motion information for one coherent user motion to the virtual presentation unit as a whole once that motion is complete, so that the 3D model in the virtual image renders the corresponding motion smoothly and continuously.
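As a rough, hypothetical model of the micro control unit's processing chain — glitch removal, analog-to-digital conversion at the stated 3.3 V full scale, and an N-period delay — the following sketch strings the three stages together. All function names, and the median-of-three choice for deglitching, are assumptions; note that 3.3 V / 1024 steps ≈ 0.0032 V, consistent with the stated 0.003 V minimum resolution at 10 bits:

```python
from collections import deque

FULL_SCALE_V = 3.3  # maximum adjustment range stated in the description

def deglitch(samples):
    """Median-of-three filter: one plausible way the signal adjustment
    circuit removes single-sample glitches ('burrs')."""
    out = list(samples)
    for i in range(1, len(samples) - 1):
        out[i] = sorted(samples[i - 1:i + 2])[1]
    return out

def quantize(voltage, bits=10):
    """Ideal A/D conversion of a 0..3.3 V input to an n-bit code;
    supports the 8-bit and 10-bit widths named in the text."""
    levels = (1 << bits) - 1
    v = min(max(voltage, 0.0), FULL_SCALE_V)
    return int(v / FULL_SCALE_V * levels + 0.5)

def delay_n_periods(codes, n):
    """FIFO delay by N whole periods, modelling the timing
    synchronization circuit's time-slot delay."""
    fifo = deque([None] * n)
    out = []
    for c in codes:
        fifo.append(c)
        out.append(fifo.popleft())
    return out

def mcu_pipeline(samples, bits=10, delay=1):
    """Chain the three stages in the order the description lists them."""
    cleaned = deglitch(samples)
    codes = [quantize(v, bits) for v in cleaned]
    return delay_n_periods(codes, delay)
```

This is a behavioral sketch of what the analog circuits accomplish, not a model of their electrical implementation.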
Preferably, the second processor of this embodiment includes:
a first buffer (e.g., Static Random Access Memory (SRAM)), a second buffer (e.g., SRAM), and a transmitter;
the first buffer receives and buffers the motion information sent by the timing synchronization circuit in a first time slot, and sends the buffered motion information to the transmitter in a second time slot;
the second buffer receives and buffers the motion information sent by the timing synchronization circuit in the second time slot, and sends the buffered motion information to the transmitter in the first time slot; the working cycle of the second processor comprises at least one first time slot and at least one second time slot arranged alternately;
the transmitter sends the motion information it receives to the virtual presentation unit.
Clearly, in the second processor of this embodiment the first and second buffers work alternately: at any moment, one is receiving motion information while the other is sending motion information to the transmitter. This scheme greatly increases the processor's data throughput, prevents large volumes of motion information from delaying its timely transmission, and keeps the motions displayed by the 3D model in the virtual picture closer in timing to those of the actual target object.
Of course, the second processor of this embodiment is not limited to exactly two buffers; any technical solution in which several buffers are configured to operate in alternation falls within the scope of the present invention.
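The alternating first/second-buffer scheme is essentially double ("ping-pong") buffering. A minimal sketch, assuming a simple slot-driven interface (the class and method names are illustrative):

```python
class PingPongBuffer:
    """Sketch of the second processor's alternating buffers: in each
    time slot one buffer receives motion samples while the other is
    handed to the transmitter, and the roles swap every slot.
    (Interface and naming are assumptions, not the patent's design.)"""

    def __init__(self):
        self._buffers = ([], [])
        self._write = 0  # index of the buffer currently receiving

    def receive(self, sample):
        """Store one motion sample in the currently receiving buffer."""
        self._buffers[self._write].append(sample)

    def end_slot(self):
        """Close the current time slot: return the batch accumulated in
        the slot just ended (ready for the transmitter) and swap the
        buffer roles for the next slot."""
        batch = list(self._buffers[self._write])
        self._buffers[self._write].clear()
        self._write ^= 1
        return batch
```

In hardware the drain and fill happen concurrently; this single-threaded sketch only captures the role-swapping logic that lets reception and transmission never contend for the same buffer.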
In addition, with further reference to fig. 2, the micro control unit of the present embodiment may further include:
and the memory, used for backing up the motion information before idealized extraction; this information is recorded as historical data for subsequent related operations. In practical applications, if the display device of this embodiment is used to train a pilot to fly an airplane, for instance, the corresponding raw motion information has a certain evaluation value.
In addition, as a preferable aspect, the display device of the present embodiment may further include:
the filtering unit denoises the motion information sent by the second processor, and the virtual presentation unit specifically receives the motion information after this noise reduction, improving how faithfully the 3D model reproduces the target object's motions.
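The patent does not specify the filtering unit's denoising method; a simple causal moving-average filter is one plausible stand-in (the window size is an assumption):

```python
def moving_average(samples, window=3):
    """Causal moving average over the last `window` samples — one
    plausible form of the filtering unit's noise reduction (the
    window size is an assumption, not taken from the patent)."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

A causal window is used deliberately: the filter must run on live motion information, so it can only look at past samples.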
In addition, in order to further increase the user experience, the display device of the present embodiment further includes:
at least one vibration element disposed on the target object;
and the tactile feedback unit, connected to each of the at least one vibration element, is used, when the 3D model in the virtual image generates tactile information, for acquiring the tactile information and generating a corresponding tactile feedback control signal to drive the corresponding vibration element to vibrate, thereby feeding the tactile sensation back to the target object.
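The tactile feedback path can be sketched as a mapping from touch events generated by the 3D model to drive commands for the vibration elements. The region names, element indices, and 0–1 intensity scale below are all hypothetical:

```python
# Hypothetical mapping from model contact regions to vibration elements.
ELEMENT_FOR_REGION = {"left_hand": 0, "right_hand": 1, "torso": 2}

def feedback_signals(touch_events):
    """Translate (region, intensity) touch events from the virtual image
    into per-element vibration commands; events for regions with no
    mapped element are dropped, and intensities are clamped to 0..1."""
    commands = []
    for region, intensity in touch_events:
        element = ELEMENT_FOR_REGION.get(region)
        if element is not None:
            commands.append((element, min(max(intensity, 0.0), 1.0)))
    return commands
```

Each returned `(element, intensity)` pair would correspond to one tactile feedback control signal driving one vibration element.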
Based on the above description, the logical structure relationship of the display device of the present embodiment is as shown in fig. 3.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (7)
1. A display device, comprising:
the scanning unit is used for scanning and identifying the characteristics of the target object to obtain a scanning signal;
the modeling unit is used for generating, according to the scanning signal, a 3D model conforming to the features of the target object;
the motion detection unit is used for detecting the motion state of the target object to obtain motion information;
the virtual presentation unit is used for displaying the 3D model in a virtual image and enabling the 3D model to present a motion state matching the motion information;
the scanning unit includes:
the image control CCD camera is used for scanning and shooting the target object for 360 degrees to obtain a shot image of the target object;
the image operation module is internally provided with a preset image processing algorithm and is used for carrying out feature recognition and collection on the target object on the shot image of the target object according to the image processing algorithm to obtain a scanning signal for generating a 3D model;
in the working process, the CCD camera moves around the target object with the precision of 0.3 pixel, the lens rotationally tracks the target object with the precision of 0.05 degree, and the scanning and shooting of the target object are completed in less than 10 milliseconds;
the motion detection unit includes:
the infrared emitter is arranged on a preset positioning point and used for emitting an infrared positioning signal;
the infrared receiver is arranged on the target object and used for receiving the infrared positioning signal and determining, according to how the infrared positioning signal is received, the difference between the target object's distance and a reference distance;
and the first processor is used for determining motion information representing the motion state of the target object according to the determined difference between the target object's distance and the reference distance.
2. The display device according to claim 1,
the infrared transmitter is also used for transmitting an infrared test signal;
the infrared receiver is also used for receiving an infrared test signal and determining the change of the relative distance between the target object and the positioning point according to the receiving condition of the infrared test signal;
the first processor is further configured to control the infrared emitter to emit an infrared test signal before determining the motion information of the target object; if the infrared receiver determines, through the infrared test signal, that the relative distance between the target object and the positioning point is unchanged and the unchanged duration reaches a preset threshold, the first processor determines the distance between the current position of the target object and the infrared emitter as the reference distance and controls the infrared emitter to emit an infrared positioning signal.
3. The display device according to claim 1, further comprising:
a micro control unit, the micro control unit comprising:
a signal adjustment circuit for performing idealized extraction of the motion information detected by the motion detection unit, comprising: removing signal burrs of the action information and/or inserting preset signal pulses into the action information;
the A/D conversion circuit is used for carrying out analog-to-digital conversion on the idealized extracted action information;
the time sequence synchronization circuit is used for carrying out time slot delay on the action information after the analog-to-digital conversion;
and the second processor is used for sending the action information after the time slot delay to the virtual presentation unit.
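The glitch-removal step of claim 3's signal adjustment circuit can be approximated in software by a short median filter, which suppresses single-sample spikes while passing genuine level changes. This is only one possible realization of "removing signal glitches" (the claim names no specific filter); `deglitch` is a hypothetical name:

```python
def deglitch(samples):
    """Replace each interior sample with the median of itself and its
    two neighbours. An isolated spike ('glitch') is always outvoted by
    its neighbours, while a sustained step survives unchanged."""
    if len(samples) < 3:
        return list(samples)
    out = list(samples)
    for i in range(1, len(samples) - 1):
        out[i] = sorted(samples[i - 1:i + 2])[1]  # median of three
    return out
```

For instance, the spike in `[0, 0, 9, 0, 0]` is removed, while the step in `[0, 0, 5, 5, 5]` is preserved.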
4. The display device according to claim 3,
the time sequence synchronization circuit delays the action information after the analog-to-digital conversion by adding buffer time slots and/or signal feedback.
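Delaying a signal by added buffer time slots, as in claim 4, amounts to passing samples through a fixed-length delay line. A software sketch (the class name `SlotDelay` is illustrative; in hardware this would be a chain of registers):

```python
from collections import deque

class SlotDelay:
    """Delay a sample stream by a fixed number of time slots.

    The delay line is pre-filled with None placeholders, so each pushed
    sample emerges exactly `slots` pushes later, and the first `slots`
    outputs are None while the pipeline fills."""

    def __init__(self, slots):
        self.line = deque([None] * slots)

    def push(self, sample):
        """Enter one sample and emit the sample from `slots` pushes ago."""
        self.line.append(sample)
        return self.line.popleft()
```

With `slots=2`, the first two outputs are empty placeholders and the third output is the first sample pushed.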
5. The display device according to claim 3,
the second processor comprises:
a first buffer, a second buffer and a transmitter;
the first buffer is used for receiving and buffering, in a first time slot, the action information sent by the time sequence synchronization circuit, and for sending the buffered action information to the transmitter in a second time slot; the second buffer is used for receiving and buffering, in the second time slot, the action information sent by the time sequence synchronization circuit, and for sending the buffered action information to the transmitter in the first time slot; a working cycle of the second processor comprises at least one first time slot and at least one second time slot arranged alternately;
and the transmitter is used for sending the action information it receives to the virtual presentation unit.
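The two-buffer scheme of claim 5 is a classic ping-pong (double) buffer: while one buffer fills, the other drains, and their roles swap at every slot boundary, so reception and transmission never contend for the same buffer. A software sketch under that reading (the class and method names are hypothetical):

```python
class PingPongBuffer:
    """Two buffers alternate roles each time slot: one receives action
    information while the other is handed to the transmitter."""

    def __init__(self):
        self.buffers = ([], [])
        self.fill_index = 0  # which buffer is receiving this slot

    def receive(self, item):
        """Buffer one piece of action information in the current slot."""
        self.buffers[self.fill_index].append(item)

    def end_of_slot(self):
        """At the slot boundary, swap roles and return the contents the
        transmitter sends during the next slot (what was just buffered)."""
        sending = self.buffers[self.fill_index]
        self.fill_index = 1 - self.fill_index
        out = list(sending)
        sending.clear()
        return out
```

Data received in the first slot is thus transmitted in the second slot, and vice versa, matching the alternating time slots of the claim.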
6. The display device according to claim 3, further comprising:
a filtering unit for denoising the action information sent by the second processor;
the virtual presentation unit is specifically configured to receive the action information after noise reduction by the filtering unit.
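Claim 6 does not name a denoising method; a moving average is one of the simplest candidates for smoothing sampled action information and serves here purely as an illustration (`moving_average` is a hypothetical name):

```python
def moving_average(samples, window=3):
    """Smooth a sample stream with a sliding-window mean of `window`
    samples. Output is shorter than the input by window - 1, since only
    full windows are averaged; inputs shorter than the window pass
    through unchanged."""
    if window < 1 or len(samples) < window:
        return list(samples)
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]
```

Wider windows suppress more noise at the cost of added smoothing lag, which matters for an interactive display and would be tuned against the pipeline's latency budget.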
7. The display device according to claim 1, further comprising:
at least one vibration element arranged on the target object;
and a tactile feedback unit connected to the at least one vibration element, for acquiring tactile information when the 3D model in the virtual image generates tactile information, and for generating a corresponding tactile feedback control signal to drive the corresponding vibration element to vibrate, so as to deliver tactile feedback to the target object.
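The patent leaves the form of the tactile feedback control signal open. One common realization is to map each contact event from the virtual 3D model to a PWM-style drive level for the corresponding vibration element; the sketch below assumes that scheme and invents all names (`haptic_control_signals`, `max_duty`):

```python
def haptic_control_signals(contacts, num_elements, max_duty=255):
    """Map tactile events onto per-element drive signals.

    `contacts` is a list of (element_index, intensity) pairs with
    intensity in [0.0, 1.0]. Returns one duty-cycle value per vibration
    element (0 = off); overlapping events on one element keep the
    strongest level, and out-of-range indices are ignored."""
    duties = [0] * num_elements
    for index, intensity in contacts:
        if 0 <= index < num_elements:
            level = int(max(0.0, min(1.0, intensity)) * max_duty)
            duties[index] = max(duties[index], level)
    return duties
```

A full contact on element 0 and a half-strength contact on element 2 would drive those two elements while leaving element 1 idle.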
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710013282.2A CN106774935B (en) | 2017-01-09 | 2017-01-09 | Display device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106774935A CN106774935A (en) | 2017-05-31 |
CN106774935B true CN106774935B (en) | 2020-03-31 |
Family
ID=58951232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710013282.2A Active CN106774935B (en) | 2017-01-09 | 2017-01-09 | Display device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106774935B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111178127B (en) * | 2019-11-20 | 2024-02-20 | 青岛小鸟看看科技有限公司 | Method, device, equipment and storage medium for displaying image of target object |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102831401A (en) * | 2012-08-03 | 2012-12-19 | 樊晓东 | Method and system for tracking, three-dimensionally superposing and interacting target object without special mark |
CN103150020A (en) * | 2013-03-14 | 2013-06-12 | 上海电机学院 | Three-dimensional finger control operation method and system |
KR20150058733A (en) * | 2013-11-21 | 2015-05-29 | 오테리 테크놀러지스 인코포레이티드 | A method using 3d geometry data for virtual reality image presentation and control in 3d space |
CN105183147A (en) * | 2015-08-03 | 2015-12-23 | 众景视界(北京)科技有限公司 | Head-mounted smart device and method thereof for modeling three-dimensional virtual limb |
CN105955455A (en) * | 2016-04-15 | 2016-09-21 | 北京小鸟看看科技有限公司 | Device and method for adding object in virtual scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||