SPHERICAL OMNIDIRECTIONAL VIDEO-SHOOTING SYSTEM
This invention relates to a spherical omnidirectional video-shooting system. The system has the innovative function of recording, producing and sending spherical video in real time, via wireless signal, to remote devices. In particular, the invention has the objective of recording images and video content in 'immersive omnidirectional' 360° mode, and is a miniaturized system comprising two or more different cameras, combined in geometric arrangements inside a chassis.
The limit of modern cameras is that they record video with a very narrow angle of view, losing many important details. It is even more difficult to rotate the optics to shoot what is behind the camera while comparing it with the images or video taken from the original angle at the same time.
This system and its integrated software offer enormous potential: the system houses two or more video lenses and can merge, in real time, the movies from all the camera signals,
generating omnidirectional movies. The film thus obtained can be cut to the desired portion of the video, allowing users to create their own timeline with multiple shots and then upload it to the internet or send it to remote devices. The system can also assist and/or integrate mobile phone cameras, providing the ability to record omnidirectional content either by exploiting the optics present in this system, or by using the front and rear optics present in modern phones, smartphones and tablets with technologies specially designed for this kind of equipment, as shown in Fig. 16.
The classic shots typically used in planetarium or hemispherical IMAX-type theaters are created to give the public the illusion of being at the center of the scene, in an immersive environment, such as outdoors on a starry night, with the sky filling the entire curved screen onto which it is projected. The classic video recorded by current cameras, even with an optimized lens, is produced at 30 frames per second with a photographic lens of circular shape, whose distortion and loss of resolution grow with the distance from the center of the captured image and are therefore greatest at the
perimeter points of the image. This image is usually cropped to a rectangular shape, traditionally derived from the shape of paintings and theater scenes. In this way, a large part of the useful surface is wasted: a photo with a rectangular aspect ratio of 1:1.85, as used for moving images, is able to record only 53% of this circular area.
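The 53% figure can be verified with a short calculation: the largest 1.85:1 rectangle inscribed in a circular image has a diagonal equal to the circle's diameter. The following sketch checks this (variable names are illustrative):

```python
import math

# Largest rectangle of aspect ratio 1.85:1 inscribed in a circular
# image of radius r: its diagonal equals the circle's diameter, so
# w^2 + h^2 = (2r)^2 with w = 1.85 * h.
aspect = 1.85                          # width / height of the film frame
r = 1.0                                # unit radius of the circular image
h = 2 * r / math.sqrt(1 + aspect**2)   # height from the diagonal constraint
w = aspect * h
fraction = (w * h) / (math.pi * r**2)  # rectangle area over circle area
print(f"{fraction:.0%}")               # prints 53%
```

This confirms the figure quoted above: roughly 53% of the circular image area survives the rectangular crop.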
The earlier documents most relevant to this invention are: US-A-7710463, US-A1-2009/0278917 and US-A-7003136.
The object of this invention is solving the above prior art problems. The proposed mechanism is highly miniaturized, watertight and water resistant up to 20 atmospheres of pressure, and it can be used in movies, video surveillance, commercials, security, video inspections in inaccessible or contaminated places, by the military aboard aerial and underwater drones, as a parachuted or endoscopic probe, and on board phones or smartphones. This system is dedicated to recording images and videos, and more particularly to digitally recording images and
360-degree panoramic videos; a special version enables immersive stereoscopic shooting, that is, a system of cameras that records a panoramic
field of vision at 360° in 3D in digital mode. The system is composed of multiple cameras, at least four in the stereoscopic version, coupled in pairs for each shooting quadrant to provide stereoscopic vision: the cameras of the system are able to capture images covering an entire 360° panorama to create sequences of immersive 3D images and video, a 3D movie or a stereoscopic panoramic animation.
With the optics of the new digital cameras, with greater resolution and picture quality of 5, 10 or more megapixels, recording at 30, 60, 100 or more frames per second, together with the constantly growing speed and processing power of electronic processors, the foundations have been laid for 360° omnidirectional three-dimensional shooting systems, whose cameras can capture image and video data to create three-dimensional images and animation for immersive movies.
In detail, the USB 3 protocol is used for the optics, but the system is already prepared to accept higher-speed video streams, such as Thunderbolt 2 or higher, without limitations, being an open system able to receive new components. The video streams coming from the different optics are merged
into one omnidirectional image by the processor of the system, recorded in the storage memory inside the system and/or sent to remote devices via the high-speed wireless module integrated into the system. The movies shot with this innovative technology are designed to be projected on flat screens, but also in theaters with a hemispherical dome. A video created for a flat screen and then displayed on a curved surface, such as a dome or an IMAX screen, is distorted if shot with classic cameras, while videos recorded with the 360° immersive technology remain correct once projected on both flat and hemispherical screens, and can serve operations rooms, traffic control or surveillance programs and television formats. The immersive movies can be "navigated" with three-dimensional eyeglasses and visors, which also produce sounds to offer the viewer a realistic view, as if he were at the center of the scene.
To create the illusion of a continuous picture able to fill the inside of a hemisphere, it is necessary to shoot the images with an appropriate camera system that covers the entire field of vision, producing images appropriate for such a semi-curved, cylindrical or hemispherical screen:
these are precisely the technologies that are the object of this patent. The importance and relevance of the technical innovation of this invention thus lie in creating 3D spherical video through a system that includes several (two or more) video cameras, mounted on a modular structure shaped as a sphere, pentagon, dodecahedron or any other shape, without limitations in size and shape, to shoot in video mode the entire surrounding landscape, spherically at 360° from the shooting point, with no dead spots. Another innovation is the possibility of realizing immersive shooting by taking advantage of the front and rear optics of modern mobile phones, smartphones and tablets. This technology can also be used, for example, on classic cameras mounted on a pole that shoot different angles, combining the various video signals from each camera to create an omnidirectional image.
In this system, unlike other systems, the video shots are merged in real time by the on-board software, which uses appropriate mathematical image-fusion algorithms hosted on the microprocessor of the device. The resulting high-resolution video projection of a movie that covers the entire (here spherical) field of vision
has no visual distortion, so as to give the viewer the feeling of being immersed in a spherical dome, with the perception of being inside the hemisphere, much like the synthetic-vision mode of a virtual-reality video game, just as the scene would be perceived by a viewer in reality, without any optical distortion. This is because the system, with two or more high-definition optics of 2 or more megapixels, has shot in real time the entire optical field at 360° around the shooting point.
Compared with existing solutions, the system of this invention has the particularity of being miniaturized, self-powered and portable, but above all of assembling the images internally without the aid of other devices. It is therefore a unique and innovative shooting system, with wide-angle lenses, able to record in stereoscopic mode and to realize omnidirectional images by merging the signals coming from the different optics, already assembled and processed in real time, ready to be sent via the wireless system, in real time, to remote devices.
Video and photo contents are also stored on an on-board memory card. Furthermore, the system is
equipped with a GPS module for locating or tracing the movements of the shots taken, so that analyses can be produced with respect to the path, and a rangefinder, which records the data necessary to obtain distance parameters; all these data are assembled together and stored in the storage system as metadata.
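The bundling of GPS, rangefinder and motion data into per-shot metadata might be sketched as follows; the field names and structure are assumptions for illustration, not taken from the description:

```python
import json
import time

def frame_metadata(gps_fix, range_m, gyro_deg_s):
    """Bundle per-frame sensor readings into metadata, as the
    description suggests; all field names here are illustrative."""
    return {
        "timestamp": time.time(),                    # acquisition time
        "gps": {"lat": gps_fix[0], "lon": gps_fix[1]},
        "rangefinder_m": range_m,                    # distance to the framed subject
        "gyro_deg_s": list(gyro_deg_s),              # rotation rates, for path analysis
    }

# Hypothetical readings: a fix near Turin, 12.4 m to the subject, small rotation.
meta = frame_metadata((45.07, 7.69), 12.4, (0.1, -0.3, 0.0))
print(json.dumps(meta)[:60])
```

Metadata of this kind, stored next to each frame, is what later allows the path analyses and distance parameters mentioned above to be reconstructed.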
Other highly innovative aspects are described below.
In recent years, there has been much talk of interactive TV, although so far the results have been poor. Only partial, unsatisfactory and utopian ideas have come out, which do not provide much to the end user of the service, while this system introduces important new innovative concepts. The systems on the market today are created to be controlled by voice commands and hand signals to the TV and the equipment with touch gestures, but there is still no real, concrete real-time interactivity with the television "object" or with the network.
Networks have an obsolete design and the viewer is totally passive: he can only watch the images that are presented to him by the director, without being able to interact with them in any
way: the only possible interaction is changing the channel.
Moreover, all video systems, including video surveillance, have a very big limitation: they record images within a fixed angle, even in the best cases with wide-angle (>90°) and PTZ cameras; this recording mode, framing only a part of what surrounds the camera, is obviously very disadvantageous and has constituted, at least so far, an insurmountable limit.
None of the current systems offers real-time interaction between camera and user and vice versa, and no network transmits images, content or immersive omnidirectional videos that are interactive with the users.
Networks like YouTube and similar, even if they allow users to upload content online, do not allow any interaction with what is proposed, certainly not in real time.
This is the innovation of the present system: new systems can be designed starting from four new innovative concepts, fundamental to the system of this patent application:
1. The usage, by television networks, of omnidirectional real-time cameras, which
are able to record an omnidirectional 360° x 180° video, because they are equipped with multiple remotely controlled optics, introducing new concepts for video direction as well: compared with "standard" PAL or Full HD devices, our system device offers many advantages, namely:
• Vision at 360° x 180°
• Resolution more than 10 times that of standard cameras, up to 100 million pixels or more.
• Real-time live streaming: the viewer becomes the director of the video transmitted in real time, building, as he wants, his personal and unique schedule within the event, as if he were present at the shooting point of the omnidirectional cameras, deciding to follow the event from the angle he prefers and looking where he prefers, without having to adapt to the decisions of the director or operator who is shooting the event, thus crossing the insurmountable limits of current video.
• Ubiquity: the viewer can observe as if he were present at several points of the event, or at several events, simultaneously.
2. Sharing and uploading content: users can not only use the part of the video they want, by selecting
it from the video recorded by the omnidirectional cameras of this system, but will also be able to upload their own content, also recorded with these omnidirectional cameras, thanks to a real-time upload streaming server. It will be possible to enjoy and "navigate" the contents created by other connected users, who can interact or participate effectively in shared video sessions (e.g., teleconferencing). This will be the new real frontier of interactive TV: every user becomes a television producer, and it is possible to simultaneously enjoy movies uploaded by other users via their PC, phone or tablet and participate in omnidirectional remote sessions.
3. Video management software that combines various video sources in real time (real-time video stitching). At the same time, this software is able to merge video signals coming from classic cameras with single optics, to build three-dimensional objects and scenes for the biometric analysis of people.
4. Thanks to applications for tablets, mobile phones and smart TV, the user will have the option to choose and crop, within a 360° omni-directional spherical video, the portions he desires to create
new scenes. This will allow the user not to miss any detail of the space surrounding the omnidirectional camera object of this patent, unlike classic cameras, which miss everything that is not exactly in front of the camera at that moment, i.e. most of what surrounds them, making them very limited and, when used for surveillance purposes, very vulnerable.
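The cropping of a viewing window from the 360° spherical video, described in point 4, can be sketched as follows. This is a simplified latitude/longitude crop of an equirectangular frame (real viewers additionally reproject to a perspective image); the function name and defaults are illustrative:

```python
import numpy as np

def crop_view(equi, yaw_deg, pitch_deg, hfov_deg=90, vfov_deg=60):
    """Cut a viewing window out of an equirectangular 360°x180° frame.
    Simplified sketch: a direct lat/lon crop rather than a full
    perspective reprojection; all parameter names are illustrative."""
    h, w = equi.shape[:2]
    cx = int((yaw_deg % 360) / 360 * w)          # center column from yaw
    cy = int((90 - pitch_deg) / 180 * h)         # center row from pitch
    dw = int(hfov_deg / 360 * w) // 2
    dh = int(vfov_deg / 180 * h) // 2
    cols = np.arange(cx - dw, cx + dw) % w       # wrap horizontally around 360°
    rows = np.clip(np.arange(cy - dh, cy + dh), 0, h - 1)
    return equi[np.ix_(rows, cols)]

# A dummy 360x180 frame, one pixel per degree; look backwards (yaw 270°).
frame = np.zeros((180, 360, 3), dtype=np.uint8)
view = crop_view(frame, yaw_deg=270, pitch_deg=0)
print(view.shape)  # prints (60, 90, 3)
```

The horizontal wrap-around is what lets the user look "behind" the shooting point, which a classic fixed camera cannot offer.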
Starting from these new innovative and revolutionary concepts, it is easy to understand, even for people not skilled in multimedia, the immense potential offered by the system of this patent and the possible uses in many different applications.
Thanks to the many options available, this system provides an innovative way to combine or cut out portions of video from different video signals, taken with different optics, even of different quality and resolution.
Conversely, using the same concept, different video signals pointing in one direction can be merged to realize a video of a single three-dimensional object, for analysis of its shape and size using telemetry and/or biometric data.
These and other advantages of the invention described above, which will be highlighted below, are achieved with a system as described in claim 1. Preferred embodiments and non-trivial variations of the present invention are the subject matter of the dependent claims.
It is understood and clear that all the appended claims form an integral part of the present description.
This invention will be better described by some examples of the final product, given by way of non-limiting example, with reference to the related drawings, in which:
Figure 1 illustrates an embodiment of the system and the printed circuit board, where the various electronic components and the digital micro-cameras with optics greater than 2 megapixels are placed;
Figure 2 illustrates one of the possible embodiments of the inventive system, having assembled therein the chassis 3 of spherical shape with the use of only two optics 2, the figure also showing the monitor 9 and the function keys;
Figure 3 illustrates one of the possible embodiments of the assembled system, the chassis 3, in this case of spherical shape, with the use of three optics 2, the figure also showing the status LED 10, the slot for the memory card 13, the threading 11 for any tripod or mounting brackets, and the USB 3 port 12 for data exchange;
Figure 4 illustrates the side view of one of the possible embodiments of the assembled system, where the chassis 5 of hemispherical shape accommodates six optics 2, the figure highlighting the possibly mounted tripod 4;
Figure 5 illustrates the side view of one of the possible embodiments of the assembled system, the chassis 6 of spherical shape housing the optics 2, the figure highlighting the possibly mounted tripod 4;
Figure 6 illustrates a perspective view of one of the possible embodiments of the system, partially disassembled, with the chassis 17 of spherical shape housing the optics 2, there being also visible the electronic components and the microprocessor 1 housed on the printed circuit 16;
Figures 7a, 7b illustrate a front and top view of the radial mounting of the various shooting
cones 19, which overlap by 15-20 degrees and cover the entire shooting hemisphere;
Figure 8 illustrates a scheme of the union of the various videos recorded with the use of the chassis with six optics and the related video merge for spherical imaging;
Figure 9 shows a diagram of the merge of the various video recorded by different optics, merged via the algorithms for the video-stitching software present in the microprocessor;
Figure 10 is the schematics of the communication interface between electronic cameras, microprocessor, battery and related video outputs for USB 3, wireless module and other devices integrated into the system;
Figure 11 shows an example of the methods and procedures by which the various video signals and metadata are assembled and then sent in real time to remote devices, as shown in Figure 16;
Figure 12 illustrates an example of two frames recorded by adjacent optics inside the device; Figure 13 shows the processing stage and Figure 14 the merge of these frames with a 15-20° overlap 19;
Figure 15 illustrates how the device is able
to send pictures, video and metadata in real time to remote devices through the wireless module and interact with such equipment.
Figures 16a, 16b, 16c illustrate how the highly innovative software on board of the device can also be used on board of modern smartphones, tablets and mobile phones in general, as well as classic cameras such as those already installed for video surveillance, transforming them into devices capable of recording 360° omnidirectional content via front and rear cameras, or via cameras in the system without restrictions in number: Figure 16 illustrates the two cameras and the angles of the recording views, Figure 16a rear, Figure 16b front and Figure 16c side;
Figure 17 illustrates one version of the device held in the hand 18, with travel support 4.
The system of the invention is substantially constituted by the following components:
- a chassis, preferably made of aluminum or composite material, containing electronics and two or more optics;
- a processor and other electronic components that can be updated with new software and
hardware components;
- a series of digital cameras of 2 megapixels or more, with wide-angle lenses having a viewing and recording angle >90°;
- a frame structure of the chassis, preferably made of aluminum or composite material, containing the electronics and multiple optics coupled in pairs for each shooting quadrant (in the stereoscopic version);
- a possible extension tube for mounting, preferably 5-25 cm long;
- frame plate for mounting;
- an interface and/or the possibility of installing a cable for data transfer, in particular of USB 3 type or faster;
- battery power;
- wireless module;
- GPS module;
- status LEDs (at least two);
- an optional parachute, and a tripod in the parachuted-probe version;
- display (at least one);
- gyroscope;
- GSM module;
- memory storage;
- telemetric laser rangefinder;
- a motor for a wheeled movement system, or propulsion for moving on the ground, in the air, on surfaces, in liquids or in conduits;
- stereo microphone (at least one).
The invention relates to a system comprising at least two lenses that create an omnidirectional video camera: an image and video recording system that can be used to create a 3D immersive environment at 360°, also stereoscopic. The system uses at least two cameras, with preferred variations of even six or eleven cameras; in the 3D stereoscopic version the lenses are doubled for each shooting quadrant. The optics have a resolution >2 megapixels and a camera angle >100°, arranged with overlapping visual frames to capture image data covering the entire 360° scene, and oriented so as to have a shooting overlap of at least 15°.
The collected data are processed by the chip on the device, which records the data from the different cameras; they can be sent in real time to remote devices via a wireless interface or via a cable with USB 3 technology or greater bandwidth. The camera system can be used to create a 3D model
of a real-world scene at 360°, using triangulation of the image data in the overlapping fields of view.
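In the simplest rectified case, triangulating a point seen by two overlapping optics reduces to the classic stereo relation Z = f * B / d. The following sketch uses illustrative values (baseline, focal length and disparity are assumptions, not figures from the patent):

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Classic two-camera triangulation for rectified views:
    depth Z = focal length * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

# Hypothetical setup: two optics 6 cm apart, 800 px focal length,
# and a feature shifted by 40 px between the two overlapping frames.
z = depth_from_disparity(baseline_m=0.06, focal_px=800, disparity_px=40)
print(z)  # prints 1.2 (meters)
```

A denser version of this computation over the whole overlap region is what would yield the 3D model of the scene mentioned above.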
Inside the object there are also a microphone to record sounds in stereo mode, a geolocation (GPS) module, a wireless module, a rangefinder, a Bluetooth module, a GSM-4G module, battery power, memory storage and possibly an optional parachute triggered by an accelerometer.
Another aspect of this invention is describing a system of modular cameras with interchangeable parts, in which one configuration is sufficient for photographing an entire hemisphere of the visual field; in addition, it is possible to shoot with one operator and a single camera that can be easily carried on one's shoulder, in cars, planes, drones, helicopters or other mobile devices, thanks to its very small size and weight.
In particular, the microprocessor inside the device has executable instructions and fusion algorithms of the recorded data comprising the steps of:
- acquiring images;
- transferring the sequence of acquired image data from the plurality of cameras;
- acquiring audio data from the microphone in synchronism with the acquisition of the image data;
- processing the data acquired from the multiple cameras, whose fields of vision overlap on the recorded image by about 15-20°;
- processing the acquired telemetry data;
- assembling, via the microprocessor, the encoded data of the individual images and the acquired audio data, to provide as a result a spherical image, product of the merge of the various optics joined together;
- producing a spherical video file, and recording the immersive 360° video in the internal memory of the device;
- sending the images via the wireless module to remote systems (cloud/internet).
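The steps above can be sketched, purely for illustration, as the following pipeline, in which every component is a stub standing in for the real hardware and for the patented fusion algorithms:

```python
from dataclasses import dataclass, field

@dataclass
class Recorder:
    """Minimal sketch of the fusion pipeline listed above; every
    method and field name is illustrative, not from the patent."""
    storage: list = field(default_factory=list)  # internal memory
    sent: list = field(default_factory=list)     # wireless transmissions

    def stitch(self, frames, overlap_deg=15):
        # Placeholder for the real image-fusion algorithm on the
        # microprocessor, which merges the overlapping views.
        return {"panorama": frames, "overlap_deg": overlap_deg}

    def run(self, frames, audio, telemetry):
        pano = self.stitch(frames)                 # merge overlapping views
        clip = {"video": pano, "audio": audio,     # assemble encoded data
                "meta": telemetry}                 # attach telemetry metadata
        self.storage.append(clip)                  # record to internal memory
        self.sent.append(clip)                     # stream to remote devices
        return clip

rec = Recorder()
clip = rec.run(frames=["f1", "f2"], audio=b"...", telemetry={"range_m": 3.2})
print(len(rec.storage), len(rec.sent))  # prints 1 1
```

Each call to run() corresponds to one pass through the enumerated steps, producing one stored and one transmitted clip.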
The system is water resistant up to 20 atmospheres of pressure, and it is also resistant to weathering and chemicals. It also has the ability to be connected to a flexible tube for moving within the human body, in the endoscopic-probe version, or within ducts to be video-inspected, in the video surveillance application.
A microprocessor built into the camera, using algorithms, processes the data acquired by the cameras: it crops the scanned data from the first, the second and then the third camera; scales the data from the three cameras; rotates the image data produced; then adjusts one or more visual properties of the rotated images, varying exposure, color, brightness and contrast; and finally merges them into a single frame, with each image overlapping by about 20%. These frames are produced by the processor board 30 times per second, to create a smooth video display visible through an external system on board the camera, which has a detector for the telemetry data and transfers the data outside through a physical port and via a wireless module, with a transfer speed of at least 1 gigabyte per second.
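The final merge step, in which adjacent frames overlap by about 20%, could be realized for example with a linear cross-fade over the overlap region. The patent does not specify the blending function, so the following is only a sketch under that assumption:

```python
import numpy as np

def blend_overlap(left, right, overlap_frac=0.2):
    """Merge two horizontally adjacent frames by linearly cross-fading
    the ~20% overlap region; a sketch, not the patented algorithm."""
    h, w = left.shape[:2]
    ov = int(w * overlap_frac)                        # width of overlap band
    alpha = np.linspace(1.0, 0.0, ov)[None, :, None]  # feathering weights
    seam = left[:, -ov:] * alpha + right[:, :ov] * (1 - alpha)
    return np.hstack([left[:, :-ov], seam, right[:, ov:]])

# Two dummy 4x10 frames with constant brightness 100 and 200.
a = np.full((4, 10, 3), 100.0)
b = np.full((4, 10, 3), 200.0)
out = blend_overlap(a, b)
print(out.shape)  # prints (4, 18, 3)
```

The cross-fade hides the seam between optics; done for every adjacent pair, it yields the single merged frame the processor emits 30 times per second.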
The system includes a card slot for accepting a memory card where image data are stored; it also includes a wireless module in which the microprocessor has executable instructions, including the transmission functions of the spherical video to remote devices, PC, tablets, internet, mobile phones.
The system also includes a small parachute
driven by an accelerometer: the parachute system offers the possibility of releasing the device over inaccessible territories or areas, or wherever an area is to be put under remote surveillance.
The system can also be provided in a particular version with a miniaturized motor and a propulsion system, to become self-propelled in air, in liquids, on the ground, within conduits, or for use as an endoscopic probe.
A gyroscope and an accelerometer are also provided, to determine the rotational acceleration. These data are stored in the system as rotation metadata.
The system also includes a Global Positioning System ("GPS") to determine changes in the position of the camera system during the movement of the camera itself: these data are stored in the system as global GPS position metadata, and can then be analyzed to produce charts or patterns.
The system also has, in its ROM memory, an algorithm for displaying the spherical video files and the directional sound created by the camera: the videos taken are transferred wirelessly to a spherical video viewer. This viewer is controlled by a computer, creating, for the processing system,
a three-dimensional video "navigable" with a mouse or in touchscreen mode in a virtual environment.
The system also has a module with GSM data transmission with 4G LTE technology or higher.
In the preferred embodiment, the system weighs around 180 grams including the chassis, optics, electronics and battery, and its diameter is 10 cm, or 2 cm in the endoscopic version, which may be reduced to 1 mm or less with modern nanotechnology.
In summary, the peculiarity of this invention is the ability to record the entire surrounding world in real time, by merging the video shot from the various optics so as to cancel the deformation of the image, and by merging the two images from a single shooting quadrant to obtain a stereoscopic image (for stereoscopic vision). The acquired data can also be used for creating panoramic images with various configurations of two-dimensional audio-video compression: this gives the operator the ability to view the entire visual frame, including what he does not perceive behind, above and below him, in a single moment and in a single image or video, unlike the human eye, which has a field of vision of approximately 90°.
This is useful for control rooms or for video security, with the immense advantage of drastically reducing the number of cameras and related monitors present in the control room.
The sequences of images or videos generated are then sent through the wireless module on board the equipment and, through appropriate software, are usable on standard personal computers, touch screens or via the Internet, with an application that makes the whole 360° scene navigable, or through the new stereoscopic eye viewers.
The cameras are oriented in a radial manner with respect to a structure made of plastic, aluminum or composite material. Each camera has a field of view >90°, which overlaps with the field of view of the adjacent digital optics. In the stereoscopic version, to create stereoscopic vision, there are two digital cameras for each quadrant.
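As a rough consistency check (not a formula from the description), the number of radially mounted cameras needed to close a full 360° ring follows from each camera's field of view minus the required overlap with its neighbor:

```python
import math

def cameras_per_ring(fov_deg, overlap_deg=15):
    """How many radially mounted cameras cover a full 360° ring,
    given the per-camera field of view and the minimum overlap
    with each neighbor; an illustrative check, not from the patent."""
    effective = fov_deg - overlap_deg  # new angle each camera contributes
    return math.ceil(360 / effective)

print(cameras_per_ring(100))  # prints 5: 100° optics, 15° overlap
```

With >100° optics and the at-least-15° overlap stated earlier, a handful of cameras per ring suffices, consistent with the small camera counts (two to eleven) the description mentions.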