US20230396747A1 - Multiple camera sensor system - Google Patents
- Publication number
- US20230396747A1 (U.S. application Ser. No. 18/350,866)
- Authority
- US
- United States
- Prior art keywords
- camera
- view
- virtual camera
- sensors
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B37/00—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
- G03B37/04—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with cameras or projectors providing touching or overlapping fields of view
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/158—Switching image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/58—Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Definitions
- the present invention discloses a multiple camera sensor system and a method for any application and for a large number of different fields.
- the invention can be used in areas as industry and production support, medical appliances, AV production, Video Conferencing, Broadcast, Surveillance and Security among others.
- the Security and surveillance camera solutions are generally based on one or several single fixed mounted or remotely controlled moving CCTV/PTZ (pan, tilt, zoom) cameras.
- the cameras are mounted in strategic positions for the most efficient coverage of the areas of interest.
- the conferencing/meeting rooms are usually equipped with one single camera mounted in close relation to the monitor (on the screen frame) with the incoming signal/image.
- the common conferencing solutions are with wide or ultrawide lenses, or cameras with 2-3 lenses mounted together in a fan shape from a zero point of view, this to be able to provide the wanted coverage of the meeting room, table, blackboard or other presenting area, sometimes as much as 180 degrees.
- the ultrawide and wide lenses have the disadvantage that the wider the lens is, the smaller the objects of interest appear.
- the existing solutions therefore often suffer from the choice between nearness and wideness, making them unsatisfying in larger meetings because of the long distances to the participants at the far end.
- Some high-end conferencing solutions can offer physically and mechanically moving single-lens PTZ cameras placed close to the monitor. These solutions can have software-controlled zoom and focus and can sometimes place the talking person in the center. They suffer, however, from relying on software-controlled mechanically moving cameras. Auto-producing with one camera and several participants results in over-panning, zooming and tilting (moving too much and too far in all axes), sound recognition issues, and poor image control, camera stability and image quality.
- the domestic users, bloggers and semiprofessionals, business to business and others that are streaming to the internet are either using single web cameras or semiprofessional cameras.
- This can be single PTZ camera solutions or custom streaming cameras virtually doing face tracking and framing inside a larger high-resolution image than the output streaming resolution.
- the cameras and the camera-robots used in studio production are generally mechanically complicated, very large in size and area and space demanding for camera movements.
- an object of the present disclosure is to overcome or at least mitigate at least some of the drawbacks related to cameras.
- the present invention discloses a multi camera sensor system providing a target image by a virtual camera with a virtual camera field of view localized at a virtual camera point of view, including a number of camera sensors each with respective sensor fields of view consecutively positioned on one or more attachment devices adjusted to be attached to one or more surfaces, defining an n-dimensional space when attached to the one or more surfaces spanned by the positions of the camera sensors, a processing device at least adjusted to determine a current virtual camera point of view within the n-dimensional space, select image data captured by a first set of the camera sensors having sensor field of views which at least in combination are covering the current virtual camera field of view associated with the current virtual camera point of view, create the target image within the current virtual camera field of view by stitching the selected image data, and a transmission media inherently provided in or connected to the one or more attachment devices adjusted to transmit image data captured by the camera sensors and control data between the camera sensors and the processing device.
- the present invention also discloses a corresponding method.
- FIG. 1 Illustrates how image sensors according to aspects of the present invention create overlapping images through a fixed point of origin and its angular field of view
- FIGS. 2 a and 2 b conceptually display two different stitching modules differentiated by their input
- FIG. 3 a illustrates image sensors mounted on a flexible multilayer PCB that can be shaped into any form and length
- FIGS. 3 b - d show how image sensors mounted on a flexible multilayer sensor tape can be given any form, shape and length.
- FIG. 3 e is a closer view of how an image sensor may be mounted on a flexible multilayer PCB
- FIG. 4 is an illustration of parameters included in possible mathematical equations defining the relationship between them
- FIGS. 5 a and 5 b are illustrations of other parameters included in possible mathematical equations defining the relationship between them.
- FIG. 6 illustrates a software architecture according to aspects of the present invention at a high level
- FIG. 7 illustrates a switch logic system simplifying scalability when a centralized processing server according to aspects of the present invention is running with a fixed number of serial channels.
- the different aspects of the present invention solve certain problems related to existing camera technology by introducing the possibility of getting several different camera inputs from the same source, or moving shots/images, without physically moving any cameras. It will minimize the physical presence of existing camera technology, size, weight and design, and also to a large extent avoid physical laws like gravity, vibrations, acceleration, retardation and speed.
- This may be done by means of a software-based virtually moving camera using multiple camera sensors mounted consecutively along some attachment solution.
- This attachment solution may include e.g. tape or strips.
- although this solution is referred to as a certain concrete implementation, it must be understood that this is for the purpose of exemplifying only, and that the attachment solution may be implemented in many different ways.
- the camera sensors are placed along the tape with a distance that provides image overlap; the overlapping images are stitched together in software into a virtual camera, or used as single cameras along the strip. This is generally illustrated in FIG. 1 .
- a virtual camera always consists of data from more than one image sensor.
- the virtual camera can be placed anywhere along the axis of the strip, even on arbitrary positions between two image sensors. For such positions, the field of view (FOV) and the point of view (POV) must be computed and updated according to interpolated and weighted pixel data from the active image sensors.
- Real-time 3D reconstruction algorithms will use data from several adjacent image sensors to ensure correctness in perspective changes.
- the active image is generated by a virtual camera located between the cameras with respective field of views A and B. This results in a perspective change which is mathematically defined within the perspective of A and B. By using interpolation, weighted pixels and 3D reconstruction, the correct perspective is generated for the active image.
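The interpolation and weighted-pixel step described above can be illustrated with a minimal sketch. This is not the patent's actual algorithm (which also applies 3D reconstruction for perspective correction); the function name and the simple linear weighting by virtual camera position are assumptions for illustration only:

```python
import numpy as np

def blend_virtual_view(img_a, img_b, t):
    """Approximate the view of a virtual camera located at fraction t
    (0.0 = at sensor A, 1.0 = at sensor B) between two aligned sensors
    by linearly weighting their overlapping pixel data.

    Simplification: the perspective-correcting 3D reconstruction the
    patent describes is omitted here.
    """
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must lie between the two sensors (0..1)")
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    # Weight each sensor's contribution by proximity to the virtual POV.
    return ((1.0 - t) * a + t * b).astype(img_a.dtype)
```

At t = 0.5 the virtual camera sits midway between A and B and each sensor contributes equally.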
- FIGS. 2 a and b are illustrating two examples of stitching modules differentiated by the input.
- the stitching module of FIG. 2 a is taking raw image sensor data as input, producing an AB image.
- the stitching module of FIG. 2 b is taking an AB image as input, producing an ABCD image.
- the camera strip can be placed/mounted and used in all the three axes (xyz) and over long distances/meters with a large amount (e.g. 100 or more) of camera sensors on the strip.
- the camera sensors can be used and mounted as is with adhesive, but in other aspects they can be applied and built into any hardware casing/housing and be given nearly any shape and form. Examples of such different shapes and forms, being straight, convex, concave or a combination, are illustrated in FIGS. 3 a - 3 d.
- the image sensors may be mounted on a flexible multilayer PCB that can be shaped into any form and length.
- Each image sensor's placement on the tape (and in space) and its individual view will be calculated and calibrated in software, providing precise positioning for each sensor on the PCB, and given as input to the software modules to obtain an accurate output image.
- the image sensor tape can be given any form, shape and length. In non-linear (convex/concave) applications, the image sensors' placement on the tape (and in space) and their individual points of view will be calculated and calibrated and given as input to the software modules to obtain an accurate output image.
- the image sensor PCB-tape may have a dedicated sensor power layer, multichannel camera control and switching layers, signal transportation layers, isolation layers, and cooling and adhesive layers for mounting as is, or in a casing/housing.
- Examples of the relationship between the image sensor field of view (α), the distance between image sensors (d1) and the minimum distance to the theoretical stitching area (d2) may be as follows:
- d2 = d1 / (2·tan(α/2)), equivalently d1 = 2·d2·tan(α/2)
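Assuming the relationship d2 = d1 / (2·tan(α/2)) between sensor baseline and overlap onset, the minimum stitching distance can be computed as in this sketch (the function name is illustrative):

```python
import math

def min_stitch_distance(d1, fov_deg):
    """Minimum distance d2 at which the fields of view of two adjacent
    sensors (baseline d1, field of view fov_deg in degrees) begin to
    overlap, assuming d2 = d1 / (2 * tan(alpha / 2))."""
    alpha = math.radians(fov_deg)
    return d1 / (2.0 * math.tan(alpha / 2.0))
```

For example, two sensors mounted 10 cm apart with a 90-degree field of view would start to overlap 5 cm in front of the tape.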
- a high-resolution multicamera solution where a large number of 40-megapixel (cellphone) cameras and higher (sensors with lenses) are mounted in an image cluster bracket on one of the above-mentioned flexible multilayer PCB tapes. The cameras are then mounted consecutively with a defined distance, to provide the required images regarding the distance to the focus of interest.
- the camera sensors are mounted along the tape with an individual distance that provides camera image overlap on a straight line, or on concave or convex surfaces, as discussed and illustrated above.
- the overlapping images are controlled by software, stitched together into a real time virtual camera movement.
- the camera tape can also provide several simultaneously single images along the strip. This means getting moving camera images without moving any physical cameras. All movements are made in software, and no moving parts need to be present.
- the camera strip can be placed/mounted and used in all the three spatial axes (xyz) and over long distances/many meters with a large amount of camera sensors mounted on the strip.
- Several strips may be connected together either physically or virtually, to create one or more pictures from a point of view not necessarily directly localized on a camera strip, but also on a location in the spatial area defined by the camera strip axis.
- the resulting virtual camera point of view can be on any point (x m , y n ), where m and n are floating-point numbers between (0,M) and (0,N), respectively, and where x M and y N represent the respective maximum lengths of the two perpendicularly provided camera strips.
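A small helper can illustrate how a fractional virtual position along one strip maps to its two bracketing physical sensors and an interpolation weight. This is a hedged sketch of the indexing, not code from the patent:

```python
def neighbor_sensors(m, M):
    """For a fractional virtual position m along a strip with sensors
    indexed 0..M, return the indices of the two bracketing physical
    sensors and the interpolation weight toward the higher index."""
    if not 0 <= m <= M:
        raise ValueError("virtual position must lie on the strip")
    lo = int(m)              # sensor just below the virtual position
    hi = min(lo + 1, M)      # sensor just above (clamped at the end)
    return lo, hi, m - lo    # weight 0.0 means exactly at sensor `lo`
```

The same lookup applied independently on two perpendicular strips yields the four sensors bracketing a 2D virtual point (x m , y n ).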
- the camera sensor and lenses are mounted in a flexible multilayer PCB sensor cluster-bracket.
- This cluster bracket should be provided with the modularity and flexibility to easily change and use different lens and sensor configurations depending on the application in use.
- the flexible multilayer PCB tape is according to different aspects of the present invention designed and made from several layers to connect to the camera sensor cluster mounting bracket with the camera sensor.
- the PCB tape should preferably have a dedicated camera power layer, multichannel camera control and switching layers, signal transportation layers, isolation layers, shield layer and an adhesive layer.
- the multilayer flexible PCB tape is designed to be produced in modular lengths and can be cut to the desired length.
- the camera sensors and lenses on the tape may be high-resolution mass-produced cellphone camera modules of 40 megapixels and higher. These image sensors provide resolutions of 4-8 K, which is higher than HD (broadcast) resolution or video resolution published on the internet. This makes it possible to do software-based virtual moving and zooming within the 4-8 K frame.
- the camera sensors on the camera strips and the communication and transmission of pixel values and control data are controlled by a remotely or locally (or both) localized software.
- the software handles the camera-tape switching protocol on all camera sensors for distribution on several channels. Examples of this are illustrated in FIGS. 6 and 7 . In the following, this example is discussed referring to the different modules in these figures.
- the centralized processing server consists of hardware, operating system and software to facilitate the processing of incoming input signals from the image sensor strip.
- the hardware features input/output (I/O) modules reading, converting or translating the input signals and computing power through a conventional or application specific server PC with processing units suitable for image handling.
- a centralized processing server can occur as a single setup, with multiple servers, locally or remotely, and does not remove a potential need of intelligent or processing units built into the image sensor strip.
- An image capturer consists of hardware interfacing the image sensor circuitry and the image reader module, making data from all active image sensors readable.
- the image reader actively selects image sensor data made ready by the image capturer based on the necessary data for the trailing processes.
- a video format converter can be used in advance of the stitching module to optimize time complexity of stitching algorithms.
- the stitching module stitches two input image matrices into one image represented as a matrix with an increased row range.
- the inputs can either originate from raw data from image sensors, or from single iteration outputs from the stitching module resulting in a second-degree stitch—stitching two already stitched images.
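The first- and second-degree pairwise stitching can be sketched as follows, under the simplifying assumption of equally sized, row-aligned images with a known column overlap (the actual module also performs warping, lens correction and format conversion). All names are illustrative:

```python
import numpy as np

def stitch_pair(left, right, overlap):
    """Stitch two equally sized image matrices that share `overlap`
    columns, averaging the shared region. Feeding two raw sensor
    images yields a 1st-degree stitch (AB); feeding two 1st-degree
    outputs yields a 2nd-degree stitch (ABCD)."""
    # Average the overlapping columns to smooth the seam.
    shared = (left[:, -overlap:].astype(np.float64)
              + right[:, :overlap].astype(np.float64)) / 2.0
    return np.hstack([left[:, :-overlap],
                      shared.astype(left.dtype),
                      right[:, overlap:]])
```

Because the output has the same matrix form as the input, the function composes with itself, which is exactly the "stitching two already stitched images" property described above.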
- Multithreading is initiated to facilitate concurrent processing, making n preview threads available.
- Preview threads can be used internally in the software to cache processed images, output raw preview video, or be forwarded to calibration software, improving total system performance and output.
- the image warper performs manipulative operations to image matrices through rotation and skewing to optimize stitching areas and output fit.
- the color correction module corrects image coloring based on the most recent output image or image pairs, ensuring color consistency in image output.
- the stitching trailing from color correction operates in the same manner as the previous stitching module, stitching corrected images prior to the output pipeline.
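A per-channel gain match toward the most recent output is one simple way to model the color-correction step described above; the patent does not specify the algorithm, so this sketch is purely illustrative:

```python
import numpy as np

def match_color(image, reference):
    """Scale each channel of `image` so its mean matches the channel
    means of the most recent output `reference` - a simple per-channel
    gain model of color consistency between consecutive outputs."""
    img = image.astype(np.float64)
    for c in range(img.shape[2]):
        ref_mean = reference[..., c].mean()
        img_mean = img[..., c].mean()
        if img_mean > 0:
            img[..., c] *= ref_mean / img_mean  # per-channel gain
    return np.clip(img, 0, 255).astype(np.uint8)
```

A production system would typically restrict the statistics to the overlap region shared by the two images rather than whole-frame means.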
- a video format converter converts back from an algorithm-optimized format to the desired output format.
- the output router has the possibility to output one or several images to a connected or programmed path.
- Image sensors in series transmit image data concurrently to a centralized processing server, where the Image Capturer, Image Reader and Video Format Converter ensure time-synchronized grouped image data to the Stitching Module.
- the trailing software modules compare and correct the data to produce an accurate output image.
- a switch logic system simplifying scalability when the centralized processing server is running with a fixed number of serial channels.
- the switch logic can be utilized to enable different segments of the image sensor circuitry.
- the stitching software can be seen as simulating the human eyes.
- the human vision consists of images from two eyes/retinas (two viewpoints) that are split vertically and assembled in our brain into a coherent and “plausible” virtual representation of the environment. (binocular stereopsis)
- the distance between the human eyes is providing the depth, length and distance information.
- the stitching software may in certain aspects of the present invention handle stitching on two levels in the image processing.
- A, B, C, D and E are cameras or camera input according to the corresponding denotations in FIG. 1 .
- F is an imaginary camera adjacent to E.
- Stitching Module 1 as illustrated in FIG. 2 a is based on 1st degree stitching where two image sensors continuously feed the software module with time-synchronized image data, creating a virtual image output in a similar way as the human eyes. As static parameters for image sensors are known, this module should also perform lens corrections concerning viewpoint/point of origin, focal point and parallax offsets to provide the 2nd degree stitching with the same initial conditions as illustrated in FIG. 2 b .
- module 2 of FIG. 2 b is handling the stitching between the virtual AB and CD.
- Module 2 is adapted to continuously stitch AB, CD and further neighbors EF, and virtually enables movements of the point of view along the image sensor tape. Stitching an AB-CD is only performed when needed, for example during a virtual movement crossing the boundary between AB and CD, resulting in a 4 times wider image, where coloring and skewing are performed based on the most recently outputted viewable image to ensure consistency in image output.
- the system would have the capability to compute depth and distance.
- the virtual output image would preferably be composed of either a 1st degree stitch output, a 2nd degree stitch output or a higher order output, but rarely from a single image sensor.
- the software should preferably calibrate each image sensor based on a reference model and optimize the equality between the image sensors by inheriting parameters from the closest neighbor image sensor.
- the supplementary aspects above address lens correction issues concerning the point of focus and parallax movements between objects in the foreground and background that occur during movement in the different axes. This is important since the background and foreground objects in the current output point of view have to move at the right speed within the image during movement.
- the software knows the “whole” image, while output is produced in the same way as e.g. zooming in on an image on a smartphone and moving in x/y, i.e. the software selects a column range and row range according to the desired aspect ratio of the output.
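The column/row range selection described above can be sketched as a crop-window computation; the function name, parameters and the 16:9 default are illustrative assumptions, not the patent's implementation:

```python
def crop_window(frame_w, frame_h, cx, cy, zoom, aspect=16 / 9):
    """Select the column and row ranges of a virtual camera window
    centred at (cx, cy) inside a large source frame: the window width
    is the frame width divided by the zoom factor, and the height
    follows the desired output aspect ratio."""
    w = frame_w / zoom
    h = w / aspect
    # Clamp the window so it stays entirely inside the source frame.
    x0 = min(max(cx - w / 2, 0), frame_w - w)
    y0 = min(max(cy - h / 2, 0), frame_h - h)
    return (int(round(x0)), int(round(x0 + w)),
            int(round(y0)), int(round(y0 + h)))
```

Animating (cx, cy) over time produces a virtual pan, and animating zoom a virtual zoom, without moving any physical camera.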
- the distance between the image sensors provides depth and distance information
- the software calibrates every image sensor and determines what to process and what not to.
- when virtually moving, the software output is a virtual representation: an image made from two or more images, rarely from a single sensor.
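The depth information provided by the spacing between sensors corresponds to the textbook stereo relation depth = f·B/disparity; the patent does not state this formula explicitly, so the following is an assumed illustration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard stereo relation: two sensors a known baseline apart
    see the same point shifted by `disparity_px` pixels, yielding its
    distance as depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With a focal length of 1000 px and a 10 cm baseline, a 50-pixel disparity places the point 2 m from the strip.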
- since aspects according to the present invention include a large number of image sensors, high-precision optical measurements like distance to object and image scan etc. can be provided. This information can be precomputed and used in the finished output, or for reference to graphical engines, 3D models, virtual productions, or output to other applications for other use.
- the camera sensor strips as discussed in the present application can be used as is, mounted directly on an existing surface, on a set decoration piece, on a wall in a meeting room etc. Because of this flexibility, the present invention can be used in several industries and can go into several housings/casings depending on the application and use. This housing can be given nearly any shape or form to fit the industry, application and environment. The applications and fields of the present invention are further discussed in the following.
- the present invention will provide a better view and a possibility of virtually moving the high-resolution camera coverage to the area of interest without loss of image quality.
- the invention will be able to cover larger and wider areas than existing PTZ solutions without remotely and physically moving the camera with a camera controller, joystick or preset positioning.
- the invention is software controlled only and does not rely on any moving parts, robotic controllers or motors that are needed for controlling PTZ cameras. This means less risk of mechanical and camera controller malfunctions.
- When using it in this field, the invention will provide a software-controlled high-resolution multicamera solution with a large number of cameras available for dedicated, ready-framed and focused presets of the participants for automated meeting production.
- the invention can be used on existing monitors, placed strategically over and under the monitor, to give a software-stitched eye direction correction.
- the invention can be placed in several areas, axes and directions in the room, providing a different presence of the room and meeting than existing systems.
- the invention will be able to deliver a much wider and deeper presentation in educational setups during lectures, people working on smart/white/blackboards due to multiple strips of cameras placed strategically in the lecture room.
- the invention can be placed anywhere—obtaining the desired viewing area of a meeting room, participants, educational setups or whiteboards.
- the invention is enhancing the opportunities with video conferencing compared to real-life meetings.
- the invention will provide the domestic and semiprofessional market with a professional production tool providing setups that can make still images from different angles and moving camera images in all axes.
- the invention introduces the possibility to make still images from several angles and moving camera shots without physically moving any camera in the setup. This can be in a studio, events or arenas inside or outside.
- the invention can be fully integrated into the set design itself.
- the invention can be mounted in a given shape and camera trajectory designed housing that relates or blends into the studio design.
- the camera arrangement is small and lightweight, and can easily be used on the road by journalists, press rooms and others that want several camera angles or moving images in their camera setup. Further, the camera arrangement can be placed strategically alongside stages, sports arenas and events, being able to virtually provide moving camera shots of moving talents and athletes during performance.
- the invention differs radically from existing camera setups in any known situation, industry or area.
- the concept of using multiple High Definition lenses mounted along a strip/tape is new.
- the invention provides you with the possibility of individual single framed shots from different places, lengths and angles on the tape.
- the invention gives the possibility of virtual movements along the tape in any desired axis, without actually moving any physical camera.
- the wanted trajectory or movement can be done as is or inside a housing designed into any shape or form.
- the invention is therefore extremely adaptable to any industry or use where there is a need for different angles, viewpoints and/or camera movement, all provided from only one signal, one camera source.
- the invention, due to its physical design, weight and size, is less space demanding in most applications and installations.
- the invention provides several different camera inputs (many cameras) from the same source or moving shots/images without physically moving any cameras.
- the invention does not depend on the known physical laws regarding movement: earth gravity, acceleration, retardation, vibration, speed, fast camera movements and hard starts or stops do not affect the camera movement in any way.
- Point of view A singular point in 3D space from which, depending on the viewing or capturing direction, an infinite number of unique fields of view can be obtained.
- Field of view In photography, the field of view is the environment or scene visible through the camera. The field of view is made up of the origin point (point of view) of the image sensor and the angle of view (the viewing angle defined by the lens).
- the topology and arrangement of multiple cameras is based on having a common virtual point of view, which is a projection of all real points of view.
- Such camera systems have the possibility of generating several fields of view based on viewing direction, but only a single point of view, as the projected virtual point of view is fixed.
- standing at a single location in a room provides multiple fields of view based on where the human looks, but only a single point of view. To alter the point of view, the human is required to walk or move around in the room.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Burglar Alarm Systems (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
The present invention discloses a multiple camera sensor system and a method for any application and for a large number of different fields. The present invention solves certain problems related to existing camera technology by introducing the possibility of getting several different camera inputs from the same source, or moving shots/images, without physically moving any cameras. It will minimize the physical presence of existing camera technology in size, weight and design, and also to a large extent avoid physical effects like gravity, vibrations, acceleration, retardation and speed. This may be implemented by means of a software-based virtually moving camera using multiple camera sensors mounted consecutively along some attachment solution. This attachment solution may include e.g. tape or strips. The invention can be utilized in areas like industry and production support, medical appliances, AV production, Video Conferencing, Broadcast, Surveillance and Security among others.
Description
- This application is a continuation of U.S. patent application Ser. No. 18/040,067, entitled MULTIPLE CAMERA SENSOR SYSTEM, having a National Phase Entry date of Jan. 31, 2023 for PCT/EP2021/071911, filed Aug. 5, 2021, entitled MULTIPLE CAMERA SENSOR SYSTEM which is related to and claims priority to Norwegian Application Serial No. 20200879, filed Aug. 5, 2020, the entirety of all of which are incorporated herein by reference.
- The present invention discloses a multiple camera sensor system and a method for any application and for a large number of different fields. The invention can be used in areas such as industry and production support, medical appliances, AV production, Video Conferencing, Broadcast, Surveillance and Security among others.
- There are several challenges related to existing camera technology, each in the respective field of which the camera technology is utilized.
- Security and surveillance camera solutions are generally based on one or several single fixed-mounted or remotely controlled moving CCTV/PTZ (pan, tilt, zoom) cameras.
- The cameras are mounted in strategic positions for the most efficient coverage of the areas of interest.
- This can often mean that there are vast areas visually unattended or in blind spots. An insufficient number of cameras, or difficult or wrong placement, can demand extensive zooming in to reach the area of interest, giving the surveillance limited image quality and insufficient information.
- In video conferencing, the conferencing/meeting rooms (and on computers) are usually equipped with one single camera mounted in close relation to the monitor (on the screen frame) with the incoming signal/image.
- Common conferencing solutions use wide or ultrawide lenses, or cameras with 2-3 lenses mounted together in a fan shape from a single zero point of view, in order to provide the wanted coverage of the meeting room, table, blackboard or other presenting area, sometimes as much as 180 degrees.
- Ultrawide and wide lenses have the disadvantage that objects of interest appear smaller the wider the lens is. Existing solutions therefore often suffer from a forced choice between nearness and wideness, and are unsatisfying in larger meetings because of the long distance to the participants at the far end.
- Some high-end conferencing solutions can offer physically and mechanically moving single-lens PTZ cameras placed close to the monitor. These solutions can have software-controlled zoom and focus and can sometimes place the talking person in the center. They still depend on software-controlled mechanically moving cameras. Auto-producing with one camera and several participants will result in over-panning, zooming and tilting (moving too much and too far in all axes), sound recognition issues and poor image control, camera stability and image quality.
- The large number of industrial camera solutions made for inspection or surveillance of different production processes are built from multiple single-camera installations. This means mounting many cameras for larger installations, with multiple cabling and connections to the final output.
- The domestic users, bloggers and semiprofessionals, business to business and others that are streaming to the internet are either using single web cameras or semiprofessional cameras. This can be single PTZ camera solutions or custom streaming cameras virtually doing face tracking and framing inside a larger high-resolution image than the output streaming resolution.
- Throughout the history of film and TV production, moving cameras have been a crucial part of the storytelling. Moving images have been, and still are, made with many different technical solutions: cameras on bikes, cars, trolleys, dollies on tracks, cranes, steadicams and robotic cameras, free-roaming or on tracks. All the different techniques have their advantages and disadvantages, and the problems remain regardless of whether the camera is physically moved by a human or by a robot.
- These are all basic physical problems: gravitation, weight of the equipment, physical torque, long acceleration and retardation of the camera, hard stops and starts, limitations on speed, and different types of resistance and resonance. Uneven floors and speed give vibrations during camera movement, whether moving directly on the floor or on a roof- or floor-mounted robotic trolley track.
- All known camera systems and camera moving setups suffer from the limitations of being a physical object with a weight and a volume, and are therefore subject to the laws of physics.
- The cameras and the camera-robots used in studio production are generally mechanically complicated, very large in size and area and space demanding for camera movements.
- There are safety issues with most of the camera robotic systems on the market. This often prohibits the presence of people in studios during productions where robotic cameras are in use.
- Therefore there is a need for a system solving the camera related problems discussed above in various fields and applications.
- In view of the above, an object of the present disclosure is to overcome or at least mitigate at least some of the drawbacks related to cameras.
- In particular, the present invention discloses a multi camera sensor system providing a target image by a virtual camera with a virtual camera field of view localized at a virtual camera point of view, including a number of camera sensors each with respective sensor fields of view consecutively positioned on one or more attachment devices adjusted to be attached to one or more surfaces, defining an n-dimensional space when attached to the one or more surfaces spanned by the positions of the camera sensors, a processing device at least adjusted to determine a current virtual camera point of view within the n-dimensional space, select image data captured by a first set of the camera sensors having sensor field of views which at least in combination are covering the current virtual camera field of view associated with the current virtual camera point of view, create the target image within the current virtual camera field of view by stitching the selected image data, and a transmission media inherently provided in or connected to the one or more attachment devices adjusted to transmit image data captured by the camera sensors and control data between the camera sensors and the processing device. The present invention also discloses a corresponding method.
- FIG. 1 illustrates how image sensors according to aspects of the present invention create overlapping images through a fixed point of origin and its angular field of view,
- FIGS. 2 a and 2 b conceptually display two different stitching modules differentiated by their input,
- FIG. 3 a illustrates image sensors mounted on a flexible multilayer PCB that can be shaped into any form and length,
- FIGS. 3 b-d show how image sensors mounted on a flexible multilayer sensor tape can be given any form, shape and length,
- FIG. 3 e is a closer view of how an image sensor may be mounted on a flexible multilayer PCB,
- FIG. 4 is an illustration of parameters included in possible mathematical equations defining the relationship between them,
- FIGS. 5 a and 5 b are illustrations of other parameters included in possible mathematical equations defining the relationship between them,
- FIG. 6 illustrates a software architecture according to aspects of the present invention at a high level,
- FIG. 7 illustrates a switch logic system simplifying scalability when a centralized processing server according to aspects of the present invention is running with a fixed number of serial channels.
- The different aspects of the present invention solve certain problems related to existing camera technology by introducing the possibility of getting several different camera inputs from the same source, or moving shots/images, without physically moving any cameras. They minimize the physical presence of existing camera technology in size, weight and design, and also to a large extent avoid physical effects like gravity, vibrations, acceleration, retardation and speed.
- This may be done by means of a software-based virtually moving camera using multiple camera sensors mounted consecutively along some attachment solution. This attachment solution may include e.g. tape or strips. In the following, even if this solution is referred to as a certain concrete implementation, it must be understood that this is for the purpose of exemplification only, and that the attachment solution may be implemented in many different ways.
- According to some aspects of the present invention, the camera sensors are placed along the tape with a distance that provides image overlap; the overlapping images are software-stitched together into a virtual camera, or used as single cameras along the strip. This is generally illustrated in FIG. 1.
- A virtual camera always consists of data from more than one image sensor. The virtual camera can be placed anywhere along the axis of the strip, even at arbitrary positions between two image sensors. For such positions, the field of view (FOV) and the point of view (POV) must be computed and updated according to interpolated and weighted pixel data from the active image sensors. Real-time 3D reconstruction algorithms will use data from several adjacent image sensors to ensure correctness in perspective changes.
- As shown in FIG. 1, the active image is generated by a virtual camera located between the cameras with respective fields of view A and B. This results in a perspective change which is mathematically defined within the perspectives of A and B. By using interpolation, weighted pixels and 3D reconstruction, the correct perspective is generated for the active image.
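As an illustration of the interpolation and pixel weighting mentioned above, the following sketch blends two overlapping, pre-warped sensor images with distance weights to approximate the view of a virtual camera between A and B. This is not the invention's actual algorithm (which also applies 3D reconstruction); the function name and data layout are our own assumptions.

```python
# Hypothetical sketch: distance-weighted blending of two overlapping sensor
# images for a virtual camera positioned between them. A simple linear pixel
# blend stands in for full perspective-correct 3D reconstruction.
def blend_virtual_view(img_a, img_b, t):
    """t in [0, 1]: 0 = at sensor A, 1 = at sensor B.
    img_a and img_b are equal-sized grids (lists of rows) of pixel values
    already warped into a common overlap region."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("virtual camera must lie between the two sensors")
    w_a, w_b = 1.0 - t, t
    return [
        [w_a * pa + w_b * pb for pa, pb in zip(row_a, row_b)]
        for row_a, row_b in zip(img_a, img_b)
    ]

a = [[10.0, 20.0], [30.0, 40.0]]
b = [[20.0, 40.0], [50.0, 60.0]]
mid = blend_virtual_view(a, b, 0.5)  # virtual camera halfway between A and B
```

Moving t continuously from 0 to 1 corresponds to the virtual movement along the strip between the two sensors.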
FIGS. 2 a and 2 b illustrate two examples of stitching modules differentiated by their input. The stitching module of FIG. 2 a takes raw image sensor data as input, producing an AB image. The stitching module of FIG. 2 b takes an AB image as input, producing an ABCD image.
- The camera strip can be placed/mounted and used in all three axes (x, y, z) and over long distances/meters with a large number (e.g. 100 or more) of camera sensors on the strip.
- According to some aspects of the present invention, the camera sensors can be used and mounted as is with adhesive, but in other aspects, applied to and built into any hardware casing/housing and given nearly any shape and form. Examples of such different shapes and forms, being straight, convex, concave or a combination, are illustrated in FIGS. 3 a-3 d.
- The image sensors may be mounted on a flexible multilayer PCB that can be shaped into any form and length. Each image sensor's placement on the tape (and in space) and its individual view will be calculated and calibrated in software, providing precise positioning for each sensor on the PCB and given as input to the software modules to obtain an accurate output image.
- This applies for both linear and non-linear surfaces (convex/concave and multi-shape applications).
- The image sensor tape can be given any form, shape and length. In non-linear (convex/concave) applications, each image sensor's placement on the tape (and in space) and its individual point of view will be calculated, calibrated and given as input to the software modules to obtain an accurate output image.
- As indicated in the closer view in FIG. 3 e, the image sensor PCB-tape may have a dedicated sensor power layer, multichannel camera control and switching layers, signal transportation layers, isolation layers, and cooling and adhesive layers for mounting as is, or in a casing/housing.
- This provides the ability to be applied in several vertical business areas, with different applications, and to work under different conditions: weather, pressure, water, altitude etc. Referring now to
FIG. 4 , examples of relationship between the image sensor field of view (θ), distance between image sensors (d1) and minimum distance to theoretical stitching area (d2) may be as follows. -
d1 = 2*d2/tan(θ2)
- Referring now to
FIG. 5 a , the mathematical expression for how the image sensors are physically displaced when mounted on non-linear surfaces, may be as follows: -
PD = 2Rπ*ψ/360
- Referring now to
FIG. 5 b , the mathematical expression for how the image sensors are physically displaced when mounted on convex and concave surfaces, may be as follows: -
d2 = R*sin(ψ/2)*tan(90 − θ1/2 ± ψ)
- where positive ψ is used for convex arcs, and negative ψ is used for concave arcs.
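The three relations above can be sketched in code as follows. This is our own reading of the figure labels (θ2 as the angle between a sensor's outer edge ray and the strip, ψ as the arc angle between sensors, R as the surface radius, θ1 as the sensor field of view); since FIGS. 4, 5 a and 5 b are not reproduced here, these interpretations are assumptions.

```python
import math

def min_stitching_distance(d1, theta2_deg):
    """Flat strip: d1 = 2*d2/tan(theta2), rearranged for the minimum
    distance d2 at which adjacent fields of view begin to overlap."""
    return d1 * math.tan(math.radians(theta2_deg)) / 2.0

def arc_distance(R, psi_deg):
    """Physical displacement along a curved surface: PD = 2*R*pi*psi/360."""
    return 2.0 * R * math.pi * psi_deg / 360.0

def curved_stitching_distance(R, psi_deg, theta1_deg, convex=True):
    """d2 = R*sin(psi/2)*tan(90 - theta1/2 +/- psi);
    positive psi for convex arcs, negative psi for concave arcs."""
    sign = 1.0 if convex else -1.0
    angle_deg = 90.0 - theta1_deg / 2.0 + sign * psi_deg
    return R * math.sin(math.radians(psi_deg / 2.0)) * math.tan(math.radians(angle_deg))
```

For a given mounting surface, these relations let the sensor spacing be chosen so that the overlap needed for stitching starts at the intended working distance.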
- According to aspects of the present invention, a high-resolution multicamera solution is provided, where a large number of 40-megapixel (cellphone-class) cameras and higher (sensors with lenses) are mounted in an image cluster bracket on one of the above-mentioned flexible multilayer PCB tapes. The cameras are mounted consecutively with a defined distance, to provide the required images with regard to the distance to the focus of interest.
- The camera sensors are mounted along the tape with an individual distance that provides camera image overlap on a straight line, or on concave or convex surfaces, as discussed and illustrated above.
- The overlapping images are controlled by software, stitched together into a real time virtual camera movement. The camera tape can also provide several simultaneously single images along the strip. This means getting moving camera images without moving any physical cameras. All movements are made in software, and no moving parts need to be present.
- The camera strip can be placed/mounted and used in all three spatial axes (x, y, z) and over long distances/many meters with a large number of camera sensors mounted on the strip. Several strips may be connected together, either physically or virtually, to create one or more pictures from a point of view not necessarily localized directly on a camera strip, but also at a location in the spatial area defined by the camera strip axes. For example, if a first straight camera strip defines an x-axis of (x0−xM) and a second straight camera strip provided perpendicularly to the first defines a y-axis of (y0−yN), the resulting virtual camera point of view can be at any point (xm, yn), where m and n are floating-point numbers between (0,M) and (0,N), respectively, and where xM and yN represent the respective maximum lengths of the two perpendicularly provided camera strips.
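The perpendicular-strip example above can be sketched as a simple mapping from fractional sensor indices to a point of view in the spanned area. The function name, metre units and uniform sensor spacing are illustrative assumptions.

```python
# Hypothetical sketch: two perpendicular strips, one along x with M sensors
# and one along y with N sensors, span a 2D area; the virtual camera point
# of view can be any (xm, yn) with fractional indices m and n.
def virtual_pov(m, n, M, N, strip_length_x, strip_length_y):
    """Map fractional sensor indices (m, n) to a point of view in metres,
    assuming uniformly spaced sensors along each strip."""
    if not (0 <= m <= M and 0 <= n <= N):
        raise ValueError("point of view must lie within the spanned area")
    return (m / M * strip_length_x, n / N * strip_length_y)

pov = virtual_pov(2.5, 7.5, 10, 10, 4.0, 4.0)  # between sensors on both axes
```

Fractional m and n correspond to virtual camera positions between physical sensors, where the image data is interpolated as described earlier.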
- As already mentioned, the camera sensors and lenses are mounted in a flexible multilayer PCB sensor cluster bracket. This cluster bracket should provide the modularity and flexibility to easily change and use different lens and sensor configurations depending on the application in use.
- As illustrated in
FIG. 3 e , the flexible multilayer PCB tape is according to different aspects of the present invention designed and made from several layers to connect to the camera sensor cluster mounting bracket with the camera sensor. - Further, the PCB tape should preferably have a dedicated camera power layer, multichannel camera control and switching layers, signal transportation layers, isolation layers, shield layer and an adhesive layer.
- In certain aspects of the present invention, the multilayer flexible PCB tape is designed to be produced in modular lengths and can be cut to the desired length.
- According to different aspects of the present invention, the camera sensors and lenses on the tape may be high-resolution, mass-produced cellphone camera modules of 40 megapixels and higher. These image sensors provide a resolution of 4-8K, which is higher than HD (broadcast) or the video resolutions published on the internet. This makes it possible to do virtual moving and zooming in software within the 4-8K frame. As mentioned above, the camera sensors on the camera strips, and the communication and transmission of pixel values and control data, are controlled by remotely or locally (or both) localized software.
- According to certain aspects of the invention, the software handles the camera-tape switching protocol on all camera sensors for distribution over several channels. Examples of this are illustrated in
FIGS. 6 and 7. In the following, this example is discussed referring to the different modules in these figures.
- The centralized processing server consists of hardware, an operating system and software to facilitate the processing of incoming input signals from the image sensor strip. Conceptually, the hardware features input/output (I/O) modules reading, converting or translating the input signals, and computing power through a conventional or application-specific server PC with processing units suitable for image handling. A centralized processing server can occur as a single setup or with multiple servers, locally or remotely, and does not remove a potential need for intelligence or processing units built into the image sensor strip.
- The software modules below outline an architectural concept for the necessary software modules. Other modules may occur as a necessity between modules, the module order may change, and some modules may be combined or removed.
- An image capturer consists of hardware interfacing the image sensor circuitry and the image reader module, making data from all active image sensors readable.
- The image reader actively selects image sensor data made ready by the image capturer based on the necessary data for the trailing processes.
- A video format converter can be used in advance of the stitching module to optimize time complexity of stitching algorithms.
- Using stitching algorithms, the stitching module stitches two input image matrices into one image represented as a matrix with an increased row range. The inputs can either originate from raw data from image sensors, or from single iteration outputs from the stitching module resulting in a second-degree stitch—stitching two already stitched images.
- Multithreading is initiated to facilitate concurrent processing, making n preview threads available. Preview threads can be used internally in the software to cache processed images, to output raw preview video, or to be forwarded to calibration software, improving total system performance and output.
- The image warper performs manipulative operations to image matrices through rotation and skewing to optimize stitching areas and output fit.
- The color correction module corrects image coloring based on the most recent output image or image pairs, ensuring color consistency in image output.
- The stitching trailing from color correction operates in the same manner as the previous stitching module, stitching corrected images prior to the output pipeline.
- A video format converter converts back from an algorithm-optimized format to the desired output format.
- The output router has the possibility to output one or several images to a connected or programmed path.
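The module chain described above can be sketched as a simple pipeline. This is an illustrative sketch only: the module names follow the text, while all bodies are placeholder pass-throughs standing in for the real image processing.

```python
# Placeholder modules; images are represented as flat lists of pixel values.
def capture(frames):        return frames                    # image capturer
def read(frames):           return [f for f in frames if f]  # image reader (select active)
def convert_in(frames):     return frames                    # format converter (optimize)
def stitch(frames):         return [sum(frames, [])]         # stitching module
def warp(frames):           return frames                    # image warper
def color_correct(frames):  return frames                    # color correction
def convert_out(frames):    return frames                    # back to output format
def route(frames):          return frames[0]                 # output router

def pipeline(raw):
    stages = [capture, read, convert_in, stitch, warp, color_correct, convert_out]
    for stage in stages:
        raw = stage(raw)
    return route(raw)

out = pipeline([[1, 2], [], [3, 4]])  # two active sensors stitched into one image
```

In a real system each stage would operate on image matrices and run concurrently, as the preview-thread discussion above indicates.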
- Note that the software architecture is subject to change in order to optimize outputs based on the inputs, according to how inputs and output are defined in this patent document.
- Image sensors in series transmit image data concurrently to a centralized processing server, where the Image Capturer, Image Reader and Video Format Converter ensure time-synchronized, grouped image data for the Stitching Module.
- The trailing software modules compare and correct the data to produce an accurate output image.
- A switch logic system simplifies scalability when the centralized processing server is running with a fixed number of serial channels. The switch logic can be utilized to enable different segments of the image sensor circuitry.
- According to one aspect of the present invention, the stitching software can be seen as simulating the human eyes. Human vision consists of images from two eyes/retinas (two viewpoints) that are split vertically and assembled in our brain into a coherent and “plausible” virtual representation of the environment (binocular stereopsis).
- The distance between the human eyes provides the depth, length and distance information.
- The stitching software may in certain aspects of the present invention handle stitching on two levels in the image processing. In the following discussion, A, B, C, D and E are cameras or camera input according to the corresponding denotations in
FIG. 1. F is an imaginary camera adjacent to E.
- An example of first-degree stitching is illustrated in
FIG. 2 a when raw data from image sensors is provided as input: A+B=(h1*w1+h2*w2)=h1*2w1 -
Stitching Module 1 as illustrated in FIG. 2 a is based on 1st degree stitching, where two image sensors continuously feed the software module with time-synchronized image data, creating a virtual image output in a similar way to the human eyes. As the static parameters of the image sensors are known, this module should also perform lens corrections concerning viewpoint/point of origin, focal point and parallax offsets, to provide the 2nd degree stitching with the same initial conditions, as illustrated in FIG. 2 b. The 2nd stitching module takes an AB image as input, producing an ABCD image: AB+CD=(h1*2w1+h1*2w1)=h1*4w1.
- Consequently,
module 2 of FIG. 2 b is handling the stitching between the virtual AB and CD. Module 2 is adapted to continuously stitch AB, CD and further neighbors such as EF, virtually enabling movements of the point of view along the image sensor tape. Stitching an AB-CD is only performed when needed, for example during a virtual movement crossing the boundary between AB and CD, resulting in a 4 times wider image, where color and skewing corrections are performed based on the most recently outputted viewable image to ensure consistency in image output.
- An expansion of this would be to virtually enable movements of the point of view in the two-dimensional x,y-space defined by two substantially perpendicularly provided camera strips as discussed above.
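The width arithmetic of the two stitching degrees above (A+B giving h1×2w1, AB+CD giving h1×4w1) can be sketched as follows, with overlap handling, lens correction and blending deliberately omitted.

```python
# Sketch: each stitch joins two equal-height images side by side, doubling
# the width (A+B -> AB is 1st degree, AB+CD -> ABCD is 2nd degree).
# Images are represented as lists of pixel rows.
def stitch_pair(left, right):
    assert len(left) == len(right), "stitching requires equal image heights"
    return [lr + rr for lr, rr in zip(left, right)]

h, w = 2, 3
A = [["A"] * w for _ in range(h)]
B = [["B"] * w for _ in range(h)]
C = [["C"] * w for _ in range(h)]
D = [["D"] * w for _ in range(h)]

AB = stitch_pair(A, B)       # 1st degree: h x 2w
CD = stitch_pair(C, D)
ABCD = stitch_pair(AB, CD)   # 2nd degree: h x 4w
```

In the real modules the overlapping regions would be blended rather than simply concatenated, but the height and width bookkeeping is the same.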
- Based on the known origins of each image sensor and a reference object, the system would have the capability to compute depth and distance. The virtual output image would preferably be composed of either a 1st degree stitch output, a 2nd degree stitch output or a higher-order output, but rarely from a single image sensor. The software should preferably calibrate each image sensor based on a reference model and optimize the equality between the image sensors by inheriting parameters from the closest neighboring image sensor.
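The text does not specify how depth is computed from the known sensor origins; one common approach consistent with the description is stereo triangulation between two neighboring sensors, sketched here under an assumed pinhole model (focal length in pixels, baseline equal to the sensor spacing).

```python
# Assumed stereo triangulation sketch (not the invention's stated method):
# depth Z follows from focal length f (pixels), baseline B (metres, the
# spacing between two sensors) and the pixel disparity of a matched feature.
def stereo_depth(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("object at infinity or not visible in both sensors")
    return focal_px * baseline_m / disparity_px

z = stereo_depth(focal_px=800.0, baseline_m=0.1, disparity_px=20.0)  # metres
```

With many sensors along the strip, such pairwise estimates could be averaged across several baselines to improve precision.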
- Supplementary to the discussion of stitching above, below are some aspects according to the present invention to be taken into account:
- Output images such as AB and ABCD are available for all software processes in binary matrix form (pixel- and color-matrices etc.), and the active image is only a data selection which also includes the corresponding data (matrices) to create a final output viewable image.
- Singular stitching should be set up as a continuous process, always stitching adjacent lenses (AB, CD, EF).
- Stitching an AB-CD should only be performed when needed, for example during a virtual movement crossing the boundary between AB and CD, resulting in a 4 times wider image, where colors and skewing are performed based on the most recent outputted viewable image to ensure consistency in image output.
- The supplementary aspects above address lens correction issues concerning the point of focus and parallax movements between objects in the foreground and background that occur during movement along the different axes. This is important since the background and foreground objects in the current output point of view have to move at the right speed within the image during movement.
- According to different aspects of the present invention, the software knows the “whole” image, while output is produced in the same way as e.g. zooming in on an image on a smartphone and moving in x/y, i.e. the software selects a column range and a row range according to the desired output aspect ratio.
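The smartphone-style zoom/move selection described above amounts to choosing a row and column range from the stitched image. A minimal sketch follows, with the function name and argument layout as our own assumptions.

```python
# Sketch: the software holds the whole stitched image and outputs a
# row/column range chosen by the virtual pan position (x, y) and the
# desired output size (out_w x out_h).
def crop_view(image, x, y, out_w, out_h):
    """image: list of pixel rows; (x, y): top-left of the virtual view."""
    if y + out_h > len(image) or x + out_w > len(image[0]):
        raise ValueError("virtual view exceeds the stitched image")
    return [row[x:x + out_w] for row in image[y:y + out_h]]

full = [[10 * r + c for c in range(8)] for r in range(4)]  # 8x4 stitched image
view = crop_view(full, x=2, y=1, out_w=3, out_h=2)
```

Animating x and y over time yields a virtual pan, and shrinking or growing the crop (then rescaling to the output resolution) yields a virtual zoom, all without moving any physical camera.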
- The distance between the image sensors provides depth and distance information; the software calibrates every image sensor and decides what to process and what not to. When virtually moving a virtual representation, the software output is usually an image made from two or more images, rarely from a single sensor.
- As aspects according to the present invention include a large number of image sensors, high-precision optical measurements like distance to object, image scans etc. can be provided. This information can be precomputed and used in the finished output, or as reference for graphical engines, 3D models, virtual productions, or output to other applications for other uses.
- Lens field of view (viewing angles), size of the overlapping area and focus point determine the necessity of singular 1-1 clean stitching, or of processing already completed 1-1 stitched images in the secondary stitching module (stitching pre-stitched 1-1s).
- The camera sensor strips as discussed in the present application can be used as is, mounted directly on an existing surface, on a set decoration piece, on a wall in a meeting room etc. Because of this flexibility, the present invention can be used in several industries and can go into several housings/casings depending on the application and use. This housing can be given nearly any shape or form to fit the industry, application and environment. The applications and fields of the present invention are further discussed in the following.
- Security and Surveillance.
- In this field, the present invention will provide a better view and a possibility of virtually moving the high-resolution camera coverage to the area of interest without loss of image quality.
- The invention will be able to cover larger and wider areas than existing PTZ solutions without remotely and physically moving the camera with a camera controller, joystick or preset positioning.
- The invention is software controlled only and does not rely on any of the moving parts, robotic controllers or motors that are needed for controlling PTZ cameras. This means less risk of mechanical and camera controller malfunctions.
- Video Conferencing and AV Applications
- When used in this field, the invention will provide a software-controlled, high-resolution multicamera solution with a large number of cameras available for dedicated, ready-framed and focused presets of the participants for automated meeting production. The invention can be used with existing monitors, placed strategically above and below the monitor to give software-stitched eye direction correction.
- These dedicated framed shots can be combined with cameras for total overview images, a choice of listening images and dedicated document cameras.
- The invention can be placed in several areas, axes and directions in the room, providing a different presence of the room and the meeting than existing systems.
- The invention will be able to deliver a much wider and deeper presentation in educational setups during lectures, with people working on smart/white/blackboards, due to multiple strips of cameras placed strategically in the lecture room.
- Through interactive virtualization, meeting participants will have the ability to navigate and control their own point of view from the entire covered area, improving the sense of presence. The invention can be placed anywhere—obtaining the desired viewing area of a meeting room, participants, educational setups or whiteboards. Along with built-in support for augmented-/virtual-/mixed-reality, the invention is enhancing the opportunities with video conferencing compared to real-life meetings.
- Industrial Appliances, Production and Installations
- In this field, the invention's ability to be customized, given nearly any form, number of cameras and length, and configured and integrated in several casings/housings, makes the invention extremely versatile for any industrial purpose, inside or outside.
- Domestic, Business and Semiprofessional Camera Production for Livestreaming or Other Video Publications.
- The invention will provide the domestic and semiprofessional market with a professional production tool providing setups that can make still images from different angles and moving camera images in all axes.
- Professional Studio, News, Sports and Arena Production Inside and Outside.
- The invention introduces the possibility to make still images from several angles and moving camera shots without physically moving any camera in the setup. This can be in a studio, events or arenas inside or outside.
- The invention can be fully integrated into the set design itself. The invention can be mounted in a given shape and camera trajectory designed housing that relates or blends into the studio design.
- One of the advantages of the present invention is that the camera arrangement is small and lightweight, and can easily be used on the road by journalists, press rooms and others that want several camera angles or moving images in their camera setup. Further, the camera arrangement can be placed strategically alongside stages, sports arenas and events, being able to virtually provide moving camera shots of moving talents and athletes during performance.
- The invention differs radically from existing camera setups in any known situation, industry or area.
- The concept of using multiple High Definition lenses mounted along a strip/tape is new. The invention provides the possibility of individual single framed shots from different places, lengths and angles on the tape.
- The invention gives the possibility of virtual movements along the tape in any desired axis, without actually moving any physical camera.
- The wanted trajectory or movement can be achieved as is, or inside a housing designed into any shape or form.
- The invention is therefore extremely adaptable to any industry or use where there is a need for different angles, viewpoints and/or camera movement, all provided from only one signal, one camera source.
- The invention, due to its physical design, weight and size, is less space demanding in most applications and installations.
- The invention provides several different camera inputs (many cameras) from the same source or moving shots/images without physically moving any cameras.
- The invention does not depend on the known physical laws governing movement: earth gravity, acceleration, retardation, vibration, speed, fast camera movements, and hard starts or stops do not affect the virtual camera movement in any way.
- It must be emphasized that the terminology “comprise/comprises” as used in this specification is chosen to specify the presence of stated features, numbers, steps or components, but does not preclude the presence or addition of one or more other functions, numbers, steps, components or groups thereof. It should also be noted that the word “a” or “an” preceding an element does not exclude the presence of a plurality thereof.
- CCTV—Closed-circuit television
- FOV—Field of view
- PCB—Printed circuit board
- POV—Point of view
- PTZ—Pan, tilt, zoom
- Point of view (POV)—A single point in 3D space from which, depending on the viewing or capturing direction, an infinite number of unique fields of view can be obtained.
- Field of view (FOV)—In photography, the field of view is the environment or scene visible through the camera. It is defined by the origin point (the point of view) of the image sensor and by the angle of view (the viewing angle defined by the lens).
- In stereoscopic cameras and 360° panoramic cameras, the topology and arrangement of the multiple cameras are based on having a common virtual point of view, which is a projection of all the real points of view. Such camera systems can generate several fields of view based on viewing direction, but only a single point of view, as the projected virtual point of view is fixed. Similar to human perception: standing at a single location in a room provides multiple fields of view depending on where the human looks, but only a single point of view. To alter the point of view, the human is required to walk or move around in the room.
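To make the POV/FOV distinction concrete, the following is a minimal 2D sketch (all function and parameter names are hypothetical, not from the patent): a camera at a fixed point of view can yield many different fields of view simply by changing its viewing direction, and a target point may fall inside one field of view but outside another.

```python
import math

def in_field_of_view(pov, direction, half_angle_deg, target):
    """Return True if `target` lies within the camera's field of view.

    pov            -- (x, y) point of view of the camera
    direction      -- (x, y) unit vector of the viewing direction
    half_angle_deg -- half of the angle of view defined by the lens
    target         -- (x, y) point to test
    """
    vx, vy = target[0] - pov[0], target[1] - pov[1]
    norm = math.hypot(vx, vy)
    if norm == 0:
        return True  # target coincides with the point of view
    cos_angle = (vx * direction[0] + vy * direction[1]) / norm
    cos_angle = max(-1.0, min(1.0, cos_angle))  # guard against rounding
    return math.degrees(math.acos(cos_angle)) <= half_angle_deg

# Same point of view, two different fields of view (viewing directions):
pov = (0.0, 0.0)
print(in_field_of_view(pov, (1.0, 0.0), 30.0, (5.0, 1.0)))  # looking along +x: True
print(in_field_of_view(pov, (0.0, 1.0), 30.0, (5.0, 1.0)))  # looking along +y: False
```

Changing `direction` changes the field of view; only moving `pov` itself changes the point of view, which is exactly what the virtual camera of the invention does without moving physical hardware.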
Claims (4)
1. A multi camera sensor system providing a target image by a virtual camera with a virtual camera field of view localized at a virtual camera point of view, comprising:
a number of camera sensors each with respective sensor fields of view consecutively positioned on one or more attachment devices adjusted to be attached to one or more surfaces, defining an n-dimensional space when attached to the one or more surfaces spanned by the positions of the camera sensors,
a transmission media inherently provided in or connected to the one or more attachment devices adjusted to transmit image data captured by the camera sensors and control data between the camera sensors and a processing device, characterized in that the processing device at least adjusted to:
determine a current virtual camera point of view located between a first set of camera sensors within the n-dimensional space, select image data captured by the first set of the camera sensors having sensor field of views which at least in combination are covering the current virtual camera field of view associated with the current virtual camera point of view, and create the target image within the current virtual camera field of view by stitching the selected image data.
2. A multi camera sensor system according to claim 1, wherein the virtual camera point of view is movable, and the processing device is further adjusted to:
determine an updated virtual camera point of view located between the number of camera sensors within the n-dimensional space different from the current virtual camera point of view,
re-select image data captured by a second set of the number of camera sensors if the first set of the camera sensors do not have sensor field of views which at least in combination are covering the updated virtual camera field of view associated with the updated virtual camera point of view, and
re-create the target image within the updated virtual camera field of view by re-stitching the selected image data.
3. A method for providing a target image by a virtual camera with a virtual camera field of view localized at a virtual camera point of view, characterized in:
determining a current virtual camera point of view located between a number of camera sensors within an n-dimensional space defined by the positions of the number of camera sensors, each with respective sensor fields of view, consecutively positioned on one or more attachment devices attached to one or more surfaces,
selecting image data captured by a first set of the number of camera sensors having sensor field of views which at least in combination are covering the current virtual camera field of view associated with the current virtual camera point of view,
creating the target image within the current virtual camera field of view by stitching the selected image data,
transmitting image data captured by the camera sensors and control data between the camera sensors and the processing device on a transmission media inherently provided in or connected to the one or more attachment devices.
4. A method according to claim 3 including the additional steps of:
determining an updated virtual camera point of view located between the number of camera sensors within the n-dimensional space different from the current virtual camera point of view,
re-selecting image data captured by a second set of the number of camera sensors if the first set of the camera sensors do not have sensor field of views which at least in combination cover the updated virtual camera field of view associated with the updated virtual camera point of view, and
re-creating the target image within the updated virtual camera field of view by re-stitching the selected image data.
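The select-and-stitch steps recited in the claims can be sketched as follows for a one-dimensional strip of sensors. This is a toy illustration under stated assumptions, not the patented implementation: sensor fields of view are modeled as intervals along the strip, "stitching" is stand-in string concatenation, and all names (`select_sensors`, `render_target_image`, `capture`) are hypothetical.

```python
def select_sensors(sensors, virtual_fov):
    """Select sensors whose fields of view overlap the virtual camera FOV.

    sensors     -- list of (sensor_id, (fov_start, fov_end)) along the strip
    virtual_fov -- (start, end) interval the virtual camera must cover
    """
    lo, hi = virtual_fov
    return [sid for sid, (s, e) in sensors if s < hi and e > lo]

def render_target_image(sensors, capture, virtual_fov):
    """Create the target image by 'stitching' the selected image data.

    capture -- callable sensor_id -> image data (a string stand-in here)
    """
    selected = select_sensors(sensors, virtual_fov)
    return "+".join(capture(sid) for sid in selected)

# Four sensors on a strip; moving the virtual point of view re-selects
# a different set of sensors and re-stitches the target image (claims 2 and 4).
strip = [(0, (0.0, 2.0)), (1, (1.5, 3.5)), (2, (3.0, 5.0)), (3, (4.5, 6.5))]
cap = lambda sid: f"img{sid}"
print(render_target_image(strip, cap, (1.0, 3.0)))  # img0+img1
print(render_target_image(strip, cap, (3.2, 5.5)))  # img1+img2+img3
```

Moving the virtual field of view from `(1.0, 3.0)` to `(3.2, 5.5)` triggers the re-selection and re-stitching described in claims 2 and 4, with no physical camera movement.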
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/350,866 US11831851B1 (en) | 2020-08-05 | 2023-08-23 | Multiple camera sensor system |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NO20200879A NO346392B1 (en) | 2020-08-05 | 2020-08-05 | Multiple camera sensor system and method of providing a target image by a virtual camera |
NO20200879 | 2020-08-05 | ||
PCT/EP2021/071911 WO2022029246A1 (en) | 2020-08-05 | 2021-08-05 | Multiple camera sensor system |
US202318040067A | 2023-01-31 | 2023-01-31 | |
US18/350,866 US11831851B1 (en) | 2020-08-05 | 2023-08-23 | Multiple camera sensor system |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/040,067 Continuation US20230269354A1 (en) | 2020-08-05 | 2021-08-05 | Multiple camera sensor system |
PCT/EP2021/071911 Continuation WO2022029246A1 (en) | 2020-08-05 | 2021-08-05 | Multiple camera sensor system |
Publications (2)
Publication Number | Publication Date |
---|---|
US11831851B1 US11831851B1 (en) | 2023-11-28 |
US20230396747A1 true US20230396747A1 (en) | 2023-12-07 |
Family
ID=77358286
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/040,067 Abandoned US20230269354A1 (en) | 2020-08-05 | 2021-08-05 | Multiple camera sensor system |
US18/350,866 Active US11831851B1 (en) | 2020-08-05 | 2023-08-23 | Multiple camera sensor system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/040,067 Abandoned US20230269354A1 (en) | 2020-08-05 | 2021-08-05 | Multiple camera sensor system |
Country Status (5)
Country | Link |
---|---|
US (2) | US20230269354A1 (en) |
EP (1) | EP4193587A1 (en) |
JP (1) | JP7419601B2 (en) |
NO (1) | NO346392B1 (en) |
WO (1) | WO2022029246A1 (en) |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6522325B1 (en) * | 1998-04-02 | 2003-02-18 | Kewazinga Corp. | Navigable telepresence method and system utilizing an array of cameras |
AU761950B2 (en) * | 1998-04-02 | 2003-06-12 | Kewazinga Corp. | A navigable telepresence method and system utilizing an array of cameras |
EP1110413A1 (en) * | 1999-06-11 | 2001-06-27 | Emile Hendriks | Acquisition of 3-d scenes with a single hand held camera |
US6778207B1 (en) * | 2000-08-07 | 2004-08-17 | Koninklijke Philips Electronics N.V. | Fast digital pan tilt zoom video |
CN102714690A (en) | 2009-09-04 | 2012-10-03 | 布瑞特布里克有限公司 | Mobile wide-angle video recording system |
EP2860699A1 (en) * | 2013-10-11 | 2015-04-15 | Telefonaktiebolaget L M Ericsson (Publ) | Technique for view synthesis |
JP6452360B2 (en) | 2013-12-19 | 2019-01-16 | キヤノン株式会社 | Image processing apparatus, imaging apparatus, image processing method, and program |
EP3452340B1 (en) * | 2016-05-05 | 2020-09-30 | Harman International Industries, Incorporated | Systems and methods for driver assistance |
JP7165140B2 (en) * | 2017-05-10 | 2022-11-02 | グラバンゴ コーポレイション | Tandem Camera Array for Efficient Placement |
US11049218B2 (en) * | 2017-08-11 | 2021-06-29 | Samsung Electronics Company, Ltd. | Seamless image stitching |
JP7132730B2 (en) | 2018-03-14 | 2022-09-07 | キヤノン株式会社 | Information processing device and information processing method |
US11501572B2 (en) * | 2018-03-26 | 2022-11-15 | Nvidia Corporation | Object behavior anomaly detection using neural networks |
US10659698B2 (en) * | 2018-09-19 | 2020-05-19 | Canon Kabushiki Kaisha | Method to configure a virtual camera path |
US10623660B1 (en) * | 2018-09-27 | 2020-04-14 | Eloupes, Inc. | Camera array for a mediated-reality system |
US20220109822A1 (en) * | 2020-10-02 | 2022-04-07 | Facebook Technologies, Llc | Multi-sensor camera systems, devices, and methods for providing image pan, tilt, and zoom functionality |
2020
- 2020-08-05 NO NO20200879A patent/NO346392B1/en unknown
2021
- 2021-08-05 US US18/040,067 patent/US20230269354A1/en not_active Abandoned
- 2021-08-05 WO PCT/EP2021/071911 patent/WO2022029246A1/en active Application Filing
- 2021-08-05 EP EP21755493.0A patent/EP4193587A1/en active Pending
- 2021-08-05 JP JP2023507970A patent/JP7419601B2/en active Active
2023
- 2023-08-23 US US18/350,866 patent/US11831851B1/en active Active
Also Published As
Publication number | Publication date |
---|---|
NO346392B1 (en) | 2022-07-04 |
US20230269354A1 (en) | 2023-08-24 |
WO2022029246A1 (en) | 2022-02-10 |
EP4193587A1 (en) | 2023-06-14 |
US11831851B1 (en) | 2023-11-28 |
JP2023531322A (en) | 2023-07-21 |
NO20200879A1 (en) | 2022-02-07 |
JP7419601B2 (en) | 2024-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7224382B2 (en) | Immersive imaging system | |
KR102023587B1 (en) | Camera Rig and Stereoscopic Image Capture | |
US7710463B2 (en) | Method and system for compensating for parallax in multiple camera systems | |
JP3320541B2 (en) | Image processing method and apparatus for forming an image from a plurality of adjacent images | |
US5063441A (en) | Stereoscopic video cameras with image sensors having variable effective position | |
US10154194B2 (en) | Video capturing and formatting system | |
EP2569951B1 (en) | System and method for multi-viewpoint video capture | |
CN102665087A (en) | Automatic shooting parameter adjusting system of three dimensional (3D) camera device | |
CN105072314A (en) | Virtual studio implementation method capable of automatically tracking objects | |
KR20150050172A (en) | Apparatus and Method for Selecting Multi-Camera Dynamically to Track Interested Object | |
JP2007517264A (en) | Multidimensional imaging apparatus, system and method | |
Gurrieri et al. | Acquisition of omnidirectional stereoscopic images and videos of dynamic scenes: a review | |
US11778297B1 (en) | Portable stereoscopic image capturing camera and system | |
KR101725024B1 (en) | System for real time making of 360 degree VR video base on lookup table and Method for using the same | |
JP2004056779A (en) | Image inputting device | |
KR20150141172A (en) | Method and system of generating images for multi-surface display | |
US10419681B2 (en) | Variable field of view multi-imager | |
KR101704362B1 (en) | System for real time making of panoramic video base on lookup table and Method for using the same | |
US9258546B2 (en) | Three-dimensional imaging system and image reproducing method thereof | |
US11831851B1 (en) | Multiple camera sensor system | |
KR20180092411A (en) | Method and apparatus for transmiting multiple video | |
CN116260955A (en) | Digital image stereoscopic shooting system with image space projection posture correction function | |
CN213461928U (en) | Panoramic camera and electronic device | |
JP4183466B2 (en) | Method for generating omnidirectional binocular stereoscopic image | |
Ikeda et al. | Panoramic movie generation using an omnidirectional multi-camera system for telepresence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
AS | Assignment |
Owner name: MUYBRIDGE AS, NORWAY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TOMREN, ANDERS;ESPELAND, HAKON;REEL/FRAME:064446/0498 Effective date: 20230126 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |