CN214959905U - Multi-camera combined imaging technology applied to VR (virtual reality), AR (augmented reality) or MR (mixed reality) - Google Patents
- Publication number
- CN214959905U CN214959905U CN202121223106.XU CN202121223106U CN214959905U CN 214959905 U CN214959905 U CN 214959905U CN 202121223106 U CN202121223106 U CN 202121223106U CN 214959905 U CN214959905 U CN 214959905U
- Authority
- CN
- China
- Prior art keywords
- unit
- processing unit
- imaging
- electrically connected
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The utility model relates to the field of imaging technology, and specifically to a multi-camera combined imaging technique applied to VR, AR, or MR. The system comprises a camera unit (1); an auxiliary signal input unit (2); a data processing unit (3) electrically connected to the camera unit (1) and the auxiliary signal input unit (2), respectively; a data transmission unit (4) electrically connected to the data processing unit (3); a secondary processing unit (5) electrically connected to the data transmission unit (4); and an imaging unit (6) electrically connected to the secondary processing unit (5). The data processing unit (3) receives and processes information from the camera unit (1) and the auxiliary signal input unit (2); the processed information is transmitted to the secondary processing unit (5) for further processing and is then imaged by the imaging unit (6). Multiple cameras replace the human visual field, a three-dimensional reconstruction is performed, usage factors of the real environment are calculated, the real environment is displayed in virtualized form, and the auxiliary equipment replaces the human eye in confirming and avoiding obstacles in the environment.
Description
Technical Field
The utility model relates to the field of imaging technology, and specifically to a multi-camera combined imaging technique applied to VR, AR, or MR.
Background
With the development of science and technology, virtual devices such as augmented reality (AR) and virtual reality (VR) have entered everyday life. In recent years, mixed reality (MR) has also developed; MR can reconstruct a real object in three dimensions, virtualize it, and present the virtualized result to multiple people, thereby enabling multi-person interaction. However, different virtual devices have different characteristics. VR presents a virtual world to the user through a display and is characterized by a sense of immersion. AR is an extension or augmentation of the real world into more dimensions, for example marker-based augmented reality, location- or velocity-based augmented reality, projection-based augmented reality, and scene-understanding-based augmented reality, used for recognition, translation, driving assistance, and so on. MR superimposes real things onto a virtual world: a two-dimensional video captured by a camera is reconstructed in three dimensions by an algorithm, a virtual three-dimensional object is generated and presented to multiple people, and multi-person interaction is achieved.
In actual use, each type of virtual device has its advantages but also its limitations. VR requires a bulky head-mounted display; AR is mostly a planar display lacking the three-dimensional information of objects; MR can reconstruct real-life objects in the virtual world but is mostly applied in industries such as education and training, so its range of application is narrow. Real society is a large, complex environment; once virtualized, it is difficult for a single virtual device to present it completely, and images deform severely when displayed at a 1:1 scale. Combining the advantages of multiple virtual devices, re-integrating the virtual designs processed by different devices, truly presenting the real environment with information such as speed, position, depth, and gray scale, restoring real society at a 1:1 scale, and realizing multi-person interaction would benefit the development of virtual equipment and broaden its fields of application. Therefore, developing a multi-camera combined imaging technique for VR, AR, or MR has great value and meets deep application needs.
SUMMARY OF THE UTILITY MODEL
The aim of the utility model is to provide a multi-camera combined imaging technique applied to VR, AR, or MR, in which multiple cameras replace the visual sense, three-dimensional reconstruction calculates the usage factors of the real environment, the real environment is displayed in virtualized form, and the auxiliary equipment replaces the human eye in confirming and avoiding obstacles in the environment.
In order to achieve the above object, the utility model provides the following technical scheme:
A multi-camera combined imaging technique applied to VR, AR, or MR, comprising:
a plurality of camera units;
an auxiliary signal input unit, provided with a locator, a speed sensor, and a microphone;
a data processing unit, whose input end is electrically connected to the output end of the camera unit and the output end of the auxiliary signal input unit, respectively;
a data transmission unit, whose input end is electrically connected to the output end of the data processing unit;
a secondary processing unit, whose input end is electrically connected to the output end of the data transmission unit;
an imaging unit, which is electrically connected to the output end of the secondary processing unit.
The data processing unit receives and processes the information from the camera unit and the auxiliary signal input unit; the processed information is transmitted through the data transmission unit to the secondary processing unit, which processes it further, and the result is imaged by the imaging unit.
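The dataflow just described (camera and auxiliary signals into the data processing unit, then over the data transmission unit to the secondary processing unit and on to the imaging unit) can be sketched as a simple staged pipeline. All class and function names below are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One capture: camera pixels plus auxiliary signals (hypothetical layout)."""
    pixels: list      # raw image data from one camera unit
    position: tuple   # locator reading
    speed: float      # speed-sensor reading

def data_processing_unit(frames):
    # Fuse the camera frames and auxiliary signals into a coarse scene model.
    return {"n_views": len(frames),
            "mean_speed": sum(f.speed for f in frames) / len(frames)}

def secondary_processing_unit(scene):
    # Refine the coarse model into a renderable virtual scene.
    scene["rendered"] = True
    return scene

def imaging_unit(scene):
    # Final presentation stage (3D display + surround sound in the patent).
    return f"{scene['n_views']} views at {scene['mean_speed']:.1f} m/s"

frames = [Frame([0], (0, 0), 1.0), Frame([0], (1, 0), 3.0)]
print(imaging_unit(secondary_processing_unit(data_processing_unit(frames))))
```

The data transmission unit is elided here; in this sketch it would simply carry the dictionary between the two processing stages.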
The camera unit is provided with a plurality of projection generators and a plurality of signal collectors; the projection generators and signal collectors are located at different positions on the VR, AR, or MR device; each projection generator projects signals in multiple directions, and the signal collectors at different positions receive the multiple projected signals simultaneously.
The auxiliary signal input unit further comprises a user input end, which is provided with a voice input end and a touch-screen input end.
The data processing unit is provided with a solid-state memory card, an AI chip, a central processing unit, and a graphics processor; the solid-state memory card is electrically connected to the AI chip, the central processing unit, and the graphics processor, respectively.
The secondary processing unit is a three-dimensional image processor and may be provided with a flash memory.
The imaging unit is provided with a 3D display screen and stereo surround speakers; the 3D display screen and the stereo surround speakers are each electrically connected to the secondary processing unit. The 3D display screen faithfully restores the external visual field and displays it at a 1:1 scale; the stereo surround speakers play the sound of the external environment and play back the calculated, simulated changes.
The data processing unit comprehensively processes the images from the different camera units and the signals received by the auxiliary signal input unit, preliminarily simulates the environment information, judges the moving speed and direction of the device from the optical path differences and gray-scale information of the projected signals received by the plurality of signal collectors, and reconstructs a three-dimensional scene of real objects.
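Judging the device's moving speed and direction from successive observations of the same mark could, in the simplest case, reduce to differencing two reconstructed positions over a known time step. The following is a minimal sketch under that assumption, not the patent's actual algorithm:

```python
import math

def estimate_motion(p0, p1, dt):
    """Return (speed, heading in degrees) from two 2-D positions dt seconds apart.

    p0, p1 are (x, y) positions of the tracked mark relative to the device;
    a changing relative position implies the device itself is moving.
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    speed = math.hypot(dx, dy) / dt            # magnitude of displacement per second
    heading = math.degrees(math.atan2(dy, dx)) # direction of motion
    return speed, heading

# Example: the mark appears to shift 3 m east and 4 m north over 1 s.
speed, heading = estimate_motion((0.0, 0.0), (3.0, 4.0), 1.0)
print(speed, heading)  # 5.0 m/s at roughly 53.13 degrees
```

A real system would average over many marks and filter the estimates; this shows only the core geometry.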
The secondary processing unit turns the external environment information supplied by the data processing unit into a virtual scene, simulates the three-dimensional information of people, objects, and the environment in combination with user information, and supplies speed and position information; in addition, it calculates how the surrounding environment is changing and simulates the result of the change.
The working principle of the utility model is as follows:
Among the multiple camera units, the plurality of projection generators project signals in multiple directions, and the signal collectors located at different positions receive the projected signals simultaneously and collect the optical path differences of the same mark. The auxiliary signal input unit acquires information such as position, speed, and sound. The data processing unit comprehensively processes the information from the camera unit and the auxiliary signal input unit, analyzes and calculates it, identifies and distinguishes changes in the optical path difference, gray-scale information, relative position, and so on of the same mark, preliminarily simulates the environment information, judges the moving speed and direction of the device, and reconstructs a three-dimensional scene of real objects. The secondary processing unit, a three-dimensional image processor, performs secondary processing on the information processed by the data processing unit: it combines the external environment information, the three-dimensional real-object scene, and the simulated virtual scene, simulates the three-dimensional information of people, objects, and the environment, calculates how the surrounding environment is changing, and simulates the result of the change. The imaging unit receives the information from the secondary processing unit, performs the imaging, and restores the visual field of a real person.
The advantages of the utility model are:
1. The user input end, which includes touch-screen input and voice input, allows manual intervention and adjustment of the external environment, or targeted identification of the environmental characteristics of a particular part according to manual instructions, so that the imaging of the external environment better matches actual needs.
2. The imaging unit is provided with a 3D display screen and stereo surround speakers, and the visual field of a real person is restored at a 1:1 ratio, so that real environment information can be displayed more faithfully and without distortion when the device is used; the 1:1 ratio matches conventional perception and helps the user recognize the environment information.
3. Multiple cameras replace the visual sense of the human eye, combined with position, speed, and other information; an external object is reconstructed in three dimensions by a complex algorithm, the real environment is displayed in virtualized form, and a 1:1 restoration of the real person's visual field is achieved, so that the device can interact with reality in time and confirm information under various complex conditions during use. By comparing the transmitted visual-field sense with the image, the usage factors of the real environment are calculated, possible environmental changes are predicted, and through these calculated factors the auxiliary equipment replaces the human eye in confirming and avoiding obstacles in the environment.
Drawings
In order to illustrate the embodiments of the present utility model or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present utility model; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of the present utility model;
Fig. 2 is a system flow diagram of the present utility model;
1. camera unit; 101. projection generator; 102. signal collector; 2. auxiliary signal input unit; 201. locator; 202. speed sensor; 203. microphone; 204. user input end; 2041. voice input end; 2042. touch-screen input end; 3. data processing unit; 301. solid-state memory card; 302. AI chip; 303. central processing unit; 304. graphics processor; 4. data transmission unit; 5. secondary processing unit; 6. imaging unit; 601. 3D display screen; 602. stereo surround speakers.
Detailed Description
The technical solutions of the present utility model will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present utility model. Based on these embodiments, all other embodiments obtained by a person skilled in the art without creative effort fall within the protection scope of the present utility model.
Example 1
As shown in figs. 1 and 2, the multi-camera combined imaging technique applied to VR, AR, or MR comprises:
a plurality of camera units 1;
an auxiliary signal input unit 2, provided with a locator 201, a speed sensor 202, and a microphone 203;
a data processing unit 3, whose input end is electrically connected to the output end of the camera unit 1 and the output end of the auxiliary signal input unit 2, respectively;
a data transmission unit 4, whose input end is electrically connected to the output end of the data processing unit 3;
a secondary processing unit 5, whose input end is electrically connected to the output end of the data transmission unit 4;
an imaging unit 6, which is electrically connected to the output end of the secondary processing unit 5.
The data processing unit 3 receives and processes the information from the camera unit 1 and the auxiliary signal input unit 2; the information processed by the data processing unit 3 is transmitted through the data transmission unit 4 to the secondary processing unit 5, which processes it further, and the processed information is imaged by the imaging unit 6.
With this arrangement, the visual sense of the human eye, combined with position, speed, and other information, is replaced by the multiple camera units; the real environment is displayed in virtualized form by a complex algorithm, a 1:1 restoration of the real person's visual field is realized, and the device can interact with reality in time and confirm information under various complex conditions during use. By comparing the transmitted visual-field sense with the image, the usage factors of the real environment are calculated, possible environmental changes are predicted, and through these calculated factors the auxiliary equipment replaces the human eye in confirming and avoiding obstacles in the environment.
Example 2
As shown in fig. 1, the camera unit 1 is provided with a plurality of projection generators 101 and a plurality of signal collectors 102; the projection generators 101 and the signal collectors 102 are located at different positions on the VR, AR, or MR device; each projection generator 101 projects signals in multiple directions, and the signal collectors 102 located at different positions receive the multiple projected signals simultaneously.
With this arrangement, the optical path differences acquired for the same mark by the plurality of signal collectors are processed by the data processing unit to give the depth information of an object, and the differences in brightness signals give its gray-scale information, which facilitates the three-dimensional reconstruction used for identifying the external environment.
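The depth-from-optical-path-difference idea resembles standard stereo triangulation. As a hedged illustration only (the patent does not specify a formula), a rectified two-collector setup recovers depth as Z = f * B / d, where f is the focal length in pixels, B the baseline between collectors, and d the disparity of the same mark between the two views:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole-stereo depth for a rectified pair: Z = f * B / d.

    A mark seen by collectors a known baseline apart appears shifted
    (disparity); the smaller the shift, the farther the mark.
    """
    if disparity_px <= 0:
        raise ValueError("mark must appear shifted between the two collectors")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 10 cm baseline, 4 px disparity.
print(depth_from_disparity(800.0, 0.10, 4.0))  # 20.0 metres
```

Running this over every matched mark yields the per-object depth map that the data processing unit would use for three-dimensional reconstruction.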
Example 3
As shown in fig. 1, the auxiliary signal input unit 2 further comprises a user input end 204; the user input end 204 is provided with a voice input end 2041 and a touch-screen input end 2042.
With this arrangement, the user input end, including touch-screen input and voice input, allows manual intervention and adjustment of the external environment, or targeted identification of the environmental characteristics of a particular part according to manual instructions, so that the imaging of the external environment better matches actual needs.
Example 4
As shown in fig. 1, the data processing unit 3 is provided with a solid-state memory card 301, an AI chip 302, a central processing unit 303, and a graphics processor 304; the solid-state memory card 301 is electrically connected to the AI chip 302, the central processing unit 303, and the graphics processor 304, respectively.
With this arrangement, the solid-state memory card stores received or processed information, and the AI chip is used for deep learning and simulation; the central processing unit is the chip mainly used for calculation and logic operations; the graphics processor is the chip that processes the main graphics and three-dimensional images. The data processing unit rapidly analyzes and calculates the optical path differences, gray-scale information, and relative position changes of the same mark coming from the camera unit and the auxiliary signal input unit, identifies and distinguishes the views of the multiple cameras, and performs three-dimensional reconstruction of external real objects; the processing speed is high, the reconstruction effect is realistic, and the finally simulated external environment is true and accurate.
Example 5
As shown in fig. 1, the secondary processing unit 5 is a three-dimensional image processor and may be provided with a flash memory.
With this arrangement, the secondary processing unit, a three-dimensional image processor, performs secondary processing on the information, reconstructs the external environment in three dimensions, and assists the simulation of the external environment with information such as speed, position, and sound, making remote interaction more immersive. From changes in the external environment it pre-judges how the environment will change, so that a changing external environment can be assessed promptly and effectively, improving safety.
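The pre-judging of environmental change can be illustrated by the simplest possible predictor, a constant-velocity extrapolation of a tracked object's position. This is an assumed illustration, not the method the patent claims:

```python
def predict_position(pos, vel, dt):
    """Extrapolate a tracked object's position dt seconds ahead,
    assuming its velocity stays constant over the horizon."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

# Example: an object at (1, 2) m moving (0.5, -1.0) m/s, predicted 2 s ahead.
print(predict_position((1.0, 2.0), (0.5, -1.0), 2.0))  # (2.0, 0.0)
```

A practical system would replace this with a filtered estimate (for example a Kalman-style tracker), but the safety use case is the same: warn the user before the predicted position intersects their path.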
Example 6
As shown in fig. 1, the imaging unit 6 is provided with a 3D display screen 601 and stereo surround speakers 602; the 3D display screen 601 and the stereo surround speakers 602 are each electrically connected to the secondary processing unit 5.
With this arrangement, the imaging unit is provided with the 3D display screen and stereo surround speakers, and the visual field of a real person is restored at a 1:1 ratio, so that real environment information can be displayed more faithfully and without distortion when the device is used; the 1:1 ratio matches conventional perception and helps the user recognize the environment information.
The above description covers only specific embodiments of the present utility model, but the protection scope of the present utility model is not limited thereto. Any change or substitution that a person skilled in the art can easily conceive within the technical scope disclosed by the present utility model shall fall within the protection scope of the present utility model. Therefore, the protection scope of the present utility model shall be subject to the protection scope of the claims.
Claims (6)
1. A multi-camera combined imaging technique applied to VR, AR, or MR, comprising:
a plurality of camera units (1);
an auxiliary signal input unit (2), the auxiliary signal input unit (2) being provided with a locator (201), a speed sensor (202), and a microphone (203);
a data processing unit (3), the input end of the data processing unit (3) being electrically connected to the output end of the camera unit (1) and the output end of the auxiliary signal input unit (2), respectively;
a data transmission unit (4), the input end of the data transmission unit (4) being electrically connected to the output end of the data processing unit (3);
a secondary processing unit (5), the input end of the secondary processing unit (5) being electrically connected to the output end of the data transmission unit (4);
an imaging unit (6), the imaging unit (6) being electrically connected to the output end of the secondary processing unit (5);
wherein the data processing unit (3) receives and processes the information from the camera unit (1) and the auxiliary signal input unit (2); the information processed by the data processing unit (3) is transmitted through the data transmission unit (4) to the secondary processing unit (5), which processes it further, and the processed information is imaged by the imaging unit (6).
2. The multi-camera combined imaging technique applied to VR, AR, or MR according to claim 1, wherein: the camera unit (1) is provided with a plurality of projection generators (101) and a plurality of signal collectors (102); the plurality of projection generators (101) and the plurality of signal collectors (102) are located at different positions on the VR, AR, or MR device; each projection generator (101) projects signals in multiple directions, and the signal collectors (102) located at different positions receive the multiple projected signals simultaneously.
3. The multi-camera combined imaging technique applied to VR, AR, or MR according to claim 1, wherein: the auxiliary signal input unit (2) further comprises a user input end (204); the user input end (204) is provided with a voice input end (2041) and a touch-screen input end (2042).
4. The multi-camera combined imaging technique applied to VR, AR, or MR according to claim 2, wherein: the data processing unit (3) is provided with a solid-state memory card (301), an AI chip (302), a central processing unit (303), and a graphics processor (304); the solid-state memory card (301) is electrically connected to the AI chip (302), the central processing unit (303), and the graphics processor (304), respectively.
5. The multi-camera combined imaging technique applied to VR, AR, or MR according to claim 1, wherein: the secondary processing unit (5) is a three-dimensional image processor and may be provided with a flash memory.
6. The multi-camera combined imaging technique applied to VR, AR, or MR according to claim 1, wherein: the imaging unit (6) is provided with a 3D display screen (601) and stereo surround speakers (602); the 3D display screen (601) and the stereo surround speakers (602) are each electrically connected to the secondary processing unit (5).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202121223106.XU CN214959905U (en) | 2021-06-02 | 2021-06-02 | Multi-camera combined imaging technology applied to VR (virtual reality), AR (augmented reality) or MR (mixed reality)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202121223106.XU CN214959905U (en) | 2021-06-02 | 2021-06-02 | Multi-camera combined imaging technology applied to VR (virtual reality), AR (augmented reality) or MR (mixed reality)
Publications (1)
Publication Number | Publication Date |
---|---|
CN214959905U true CN214959905U (en) | 2021-11-30 |
Family
ID=79053999
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202121223106.XU Active CN214959905U (en) | 2021-06-02 | 2021-06-02 | Multi-camera combined imaging technology applied to VR (virtual reality), AR (augmented reality) or MR (mixed reality)
Country Status (1)
Country | Link |
---|---|
CN (1) | CN214959905U (en) |
- 2021-06-02: Application CN202121223106.XU filed in China; granted as CN214959905U (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110221690B (en) | Gesture interaction method and device based on AR scene, storage medium and communication terminal | |
CN108830894A (en) | Remote guide method, apparatus, terminal and storage medium based on augmented reality | |
CN106066701B (en) | A kind of AR and VR data processing equipment and method | |
JP2022044647A (en) | Mixed reality system with multi-source virtual content compositing and method of generating virtual content using the same | |
CN101631257A (en) | Method and device for realizing three-dimensional playing of two-dimensional video code stream | |
CN110211222B (en) | AR immersion type tour guide method and device, storage medium and terminal equipment | |
CN114175097A (en) | Generating potential texture proxies for object class modeling | |
CN112492380A (en) | Sound effect adjusting method, device, equipment and storage medium | |
CN109920000B (en) | Multi-camera cooperation-based dead-corner-free augmented reality method | |
CN108259764A (en) | Video camera, image processing method and device applied to video camera | |
CN107562185B (en) | Light field display system based on head-mounted VR equipment and implementation method | |
WO2017042070A1 (en) | A gazed virtual object identification module, a system for implementing gaze translucency, and a related method | |
WO2023280082A1 (en) | Handle inside-out visual six-degree-of-freedom positioning method and system | |
WO2021151380A1 (en) | Method for rendering virtual object based on illumination estimation, method for training neural network, and related products | |
CN214959905U (en) | Multi-camera combined imaging technology applied to VR (virtual reality), AR (augmented reality) or MR (mixed reality) | |
CN112288876A (en) | Long-distance AR identification server and system | |
CN109816791B (en) | Method and apparatus for generating information | |
CN116958344A (en) | Animation generation method and device for virtual image, computer equipment and storage medium | |
CN113206988A (en) | Multi-camera combined imaging technology applied to VR (virtual reality), AR (augmented reality) or MR (mixed reality) | |
CN111047713A (en) | Augmented reality interaction system based on multi-view visual positioning | |
CN115984437A (en) | Interactive three-dimensional stage simulation system and method | |
CN116012459A (en) | Mouse positioning method based on three-dimensional sight estimation and screen plane estimation | |
CN115268626A (en) | Industrial simulation system | |
Mori et al. | An overview of augmented visualization: observing the real world as desired | |
CN115222917A (en) | Training method, device and equipment for three-dimensional reconstruction model and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
GR01 | Patent grant | ||