EP3494692A1 - Method and apparatus for obtaining enhanced resolution images - Google Patents
- Publication number
- EP3494692A1 (application EP17757950.5A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- view
- field
- image data
- imaging system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
Definitions
- the present disclosure relates generally to imaging and tracking systems and methods, and more particularly to imaging systems and methods that allow for imaging of a field of view using more than one image plane, where the image planes work in a cooperative (e.g., complementary) manner to generate an image of the field of view, e.g., an image with enhanced details.
- Imaging systems can be employed in a variety of applications. For example, in surveillance, autonomous driving, and virtual reality applications, imaging systems can provide still or video images of a field of view. Further, in some cases, imaging systems can detect changes in the field of view at a distance, such as movement of a vehicle or a person, the presence of a traffic sign, or changes in a traffic light. Some imaging systems can provide depth information for all or a portion of a field of view. For example, some imaging systems can use a pair of cameras, placed a fixed, known distance apart from one another, to extract depth information using, for example, binocular disparity, i.e., by detecting the difference in coordinates of similar features within two stereo images.
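The binocular-disparity approach mentioned above reduces to a simple relation for a rectified stereo pair. The following sketch is illustrative only (not part of the disclosure); it assumes a pinhole-camera model, and the focal length, baseline, and disparity values are made-up examples:

```python
# Sketch: recovering depth from binocular disparity for a rectified stereo
# pair, Z = f * B / d (assumed pinhole model; all values illustrative).

def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth of a scene point from its pixel disparity between two cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px

# A feature shifted 20 px between two cameras 0.5 m apart (f = 1000 px)
# lies at 1000 * 0.5 / 20 = 25 m.
distance = depth_from_disparity(1000.0, 0.5, 20.0)
```

Note that depth resolution degrades with distance: a one-pixel disparity error matters far more for distant points than near ones.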
- an imaging system which includes an optical system configured to receive light from a field of view and direct the received light to a plurality of image planes.
- the imaging system can also include a first plurality of image detectors optically coupled to a first image plane.
- the first plurality of image detectors can be configured to detect at least a portion of the light directed by the optical system to the first image plane and generate first image data corresponding to at least a portion of the field of view.
- the imaging system can also include a second plurality of image detectors optically coupled to at least another image plane.
- the second plurality of image detectors can be configured to detect at least a portion of the light directed by the optical system to the at least another image plane and generate second image data corresponding to complementary portions of the field of view.
- an imaging method for obtaining enhanced images comprises receiving light from a field of view and directing the received light to a plurality of image planes. At least a portion of the light directed to a first image plane is detected, at the first image plane, using a first plurality of detectors optically coupled to the first image plane, and used to generate first image data corresponding to at least a portion of the field of view. At least a portion of the light directed to at least another image plane is detected, using a second plurality of image detectors optically coupled to the at least another image plane, to generate second image data corresponding to other (e.g., complementary) portions of the field of view.
- the second image data can include data unique to the at least another image plane and/or data overlapping with at least a portion of the first image data.
- a processor coupled to the plurality of image planes receives the first and second image data and generates a first image of the field of view, corresponding to the first image data, and a second image of the field of view, corresponding to the second image data.
- the first and second image data can be combined to generate combined image data of the field of view and an enhanced image of the field of view can be generated using the combined image data.
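One way the combination described above might work, sketched here purely for illustration (the 2x resolution ratio, array shapes, and nearest-neighbour upsampling are assumptions, not the patent's method), is to upsample the wide-field data and inset the higher-resolution data covering a sub-region:

```python
import numpy as np

# Sketch: combining wide-field image data with higher-resolution data for a
# sub-region of the field of view (illustrative shapes and scale factor).

def combine(first: np.ndarray, second: np.ndarray, top: int, left: int,
            scale: int = 2) -> np.ndarray:
    """Upsample the wide-field image, then inset the detailed patch.

    `first`  : low-resolution image of the whole field of view
    `second` : high-resolution data for the region whose top-left corner,
               in `first` coordinates, is (top, left)
    """
    # Nearest-neighbour upsampling of the wide-field image.
    combined = first.repeat(scale, axis=0).repeat(scale, axis=1)
    h, w = second.shape
    combined[top * scale: top * scale + h,
             left * scale: left * scale + w] = second
    return combined

first = np.zeros((4, 4), dtype=np.uint8)        # whole field of view
second = np.full((4, 4), 255, dtype=np.uint8)   # 2x-detail patch of a 2x2 region
enhanced = combine(first, second, top=1, left=1)  # 8x8 combined image
```

A production system would blend the seam and register the two data sets before insetting; this sketch only shows the data flow.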
- the optical system can be configured to split the light received from the field of view into two or more portions and direct said portions to different image planes.
- the optical system can include an image splitter (e.g., image splitter system) configured to split the light received from the field of view.
- the image splitter can be at least one of a prism-type image splitter, a flat Pellicle-type image splitter, or a curved Pellicle-type image splitter.
- the optical system can further include a processor.
- the processor can be in communication with the first and second plurality of image detectors and configured to receive the first and second image data and generate a first image of the field of view, corresponding to the first image data, and a second image of the field of view, corresponding to the second image data.
- the second image data can include data unique to the at least another image plane and/or data overlapping with at least a portion of the first image data.
- the processor can be configured to combine the first and second image data to generate combined image data of the field of view, and generate an image of the field of view using the combined image data.
- the processor can further be configured to combine the first and second images of the field of view to generate a combined image of the field of view having at least one of 1) higher resolution or 2) additional depth information than that of the first image or the second image.
- the first plurality of image detectors can further be configured to obtain a video image of the field of view, and the processor can be configured to analyze the obtained video image to identify one or more moving objects within the field of view.
- the processor can further be configured to control the second plurality of image detectors to acquire image data of at least a portion of the field of view having at least one of the identified one or more moving objects included therein. Additionally or alternatively, the processor can receive image data of at least one moving object from the second plurality of image detectors and generate a video image of the at least one moving object at a higher image resolution than an image resolution of the first image. Further, the processor can combine the first and the second image data to form an image of the field of view. Specifically, the processor can combine the image data to form the final image and need not first generate a first image based on the first image data and a second image based on the second image data in order to form an image of the field of view.
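The motion-driven control loop described above can be sketched with simple frame differencing. This is illustrative only (the threshold, frame contents, and bounding-box strategy are assumptions); a real processor would hand the returned region to the second plurality of detectors:

```python
import numpy as np

# Sketch: locating a moving object by differencing two video frames from the
# first detector, then deriving a region of interest for the second detector.

def moving_object_roi(prev: np.ndarray, curr: np.ndarray, thresh: int = 25):
    """Return the bounding box (top, left, bottom, right) of changed pixels,
    or None if nothing moved."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh
    if not diff.any():
        return None
    rows = np.where(diff.any(axis=1))[0]
    cols = np.where(diff.any(axis=0))[0]
    return rows[0], cols[0], rows[-1] + 1, cols[-1] + 1

prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[3:5, 2:6] = 200                 # a bright object appears
roi = moving_object_roi(prev, curr)  # bounding box of the moving object
```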
- the imaging system can include at least one display element.
- the display element can be in communication with the processor and configured to display at least one of the first or second image and/or an image formed by combining the first and the second images.
- the imaging system can include a user interface.
- the user interface can be in communication with the processor and configured to receive input instructions from a user of the imaging system.
- the user interface can be configured to receive instructions requesting additional information regarding a specific portion of the first image.
- the processor can, in response to the instructions, control the second plurality of image detectors to acquire image data of at least a portion of the field of view corresponding to the specific portion of the image. Additionally, or alternatively, the processor can generate a plurality of second images corresponding to one or more portions of the first image according to a predefined pattern.
- At least one of the first plurality of image detectors and the second plurality of image detectors can comprise at least one of a substantially flat geometry or a substantially curved geometry.
- at least one of the image planes can include a plurality of detecting elements that are positioned at an angle relative to one another.
- vectors orthogonal to the surface of any two adjacent detecting elements associated with an image plane can form an acute angle, e.g., an angle in a range of about 0.01 degrees to about 10 degrees relative to one another.
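The angular constraint above can be checked from the surface normals of two adjacent detecting elements. The following sketch is illustrative (the normal vectors and the 2-degree tilt are assumptions):

```python
import math

# Sketch: the acute angle between the surface normals of two adjacent
# detecting elements, checked against the disclosed 0.01-10 degree range.

def normals_angle_deg(n1, n2) -> float:
    """Angle, in degrees, between two surface-normal vectors."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm = (math.sqrt(sum(a * a for a in n1))
            * math.sqrt(sum(b * b for b in n2)))
    # Clamp guards against rounding slightly outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

n1 = (0.0, 0.0, 1.0)                                   # first element, flat
tilt = math.radians(2.0)                               # second element, tilted
n2 = (math.sin(tilt), 0.0, math.cos(tilt))
angle = normals_angle_deg(n1, n2)                      # about 2 degrees
assert 0.01 <= angle <= 10.0                           # within the stated range
```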
- at least two image planes from the plurality of image planes can be positioned at an angle relative to one another.
- the optical system can include a wide-angle lens, the wide-angle lens being configured to direct the light received from a wide field-of-view to the plurality of image planes.
- the optical system can include a wide-angle lens that can form a curved focal plane.
- the wide-angle lens can comprise a fisheye lens.
- the optical system can include any suitable optical element known in the art.
- the optical system can include at least one lens and at least one beam splitter coupled to the at least one optical lens.
- the beam splitter can be configured to divert at least one portion of the light to the first image plane and divert at least another portion of the light to the at least another image plane.
- the optical system can include a plurality of optical paths, for example four optical paths in an x-cube type prism. Each optical path can have an independent or separate optical axis.
- the imaging system can include an image capture device that includes at least one wide-angle lens.
- the wide-angle lens can be configured to collect light received from a wide-angle field of view.
- FIG. 1A is a schematic illustration of an example of an optical system according to some embodiments disclosed herein.
- FIG. 1B is a schematic illustration of another example of an optical system according to some embodiments disclosed herein.
- FIG. 2 is a block diagram of an imaging system according to some embodiments disclosed herein.
- FIG. 3A is a schematic illustration of an example of an image plane that can be obtained using an image detecting array according to the embodiments disclosed herein.
- FIG. 3B is a schematic illustration of another example of an image plane that can be obtained using an image detecting array according to the embodiments disclosed herein.
- FIG. 3C is a schematic illustration of an example of a combined image plane formed using the image detecting arrays shown in FIG. 3A and 3B.
- FIG. 4A is a schematic illustration of an image that can be obtained using the embodiments disclosed herein.
- FIG. 4B is a schematic illustration of another image that can be obtained using the embodiments disclosed herein.
- FIG. 4C is a schematic illustration of an image that can be obtained from combining the images shown in FIG. 4A and FIG. 4B.
- FIGs. 5A-5B are schematic illustrations of an image of a moving object that can be obtained using the embodiments disclosed herein.
- FIG. 6 is a schematic illustration of an example of a combined image plane formed using image detecting arrays according to some embodiments disclosed herein.
- FIG. 7 is a schematic illustration of an example imaging system according to some embodiments disclosed herein.
- FIG. 8 is an illustrative example of image planes that can be used in operation of self-driving automobiles according to the embodiments disclosed herein.
- the present disclosure relates to systems and methods for obtaining still or video images of a field of view and displaying high resolution images of at least a portion of the field of view, while presenting information regarding the context of that portion within the field of view.
- Embodiments described herein can provide for simultaneous and rapid generation of high resolution images corresponding to different portions of the field of view. Such rapid generation of the high resolution images can be suitable for a variety of applications, and in particular, for object tracking applications in which high resolution images of one or more objects are generated, as those object(s) move within the field of view.
- a system according to the present teachings can operate by directing light from a field of view onto two or more image planes that are optically coupled to one or more image detectors.
- the two or more image planes can work in a complementary manner to extract 3D information within overlapping regions of the field of view.
- Such imaging systems can be used in a wide range of applications, for example in autonomous driving and in virtual reality applications.
- optical systems employ a plurality of image planes to acquire image data from multiple portions of a field of view and combine such image data (e.g., digitally stitch the image data associated with different image planes) to obtain a composite image of the field of view.
- the composite image can exhibit a resolution greater than the resolution of images that can be formed from the image data acquired at the individual image planes.
- resolution is used herein consistent with its common usage in the art, and particularly in the field of digital imaging, to refer to the number of light sensitive pixels per unit length or per unit area in an image.
- a 35mm film used in professional photography has an effective resolution of approximately 20 megapixels per frame when coupled with high quality diffraction limited optics.
- image plane is also used herein consistent with its known meaning in the art.
- image plane can refer to a plane on which the image of at least a portion of the field of view is formed, e.g., the surface of a detector.
- the image plane can be flat, curved, or have any other suitable shape known in the art.
- sensors or image detecting elements described herein can be placed in image planes in any suitable manner. For example, different sensors can each have a separate image plane. Further, each image plane can be a part or a region of a larger image plane.
- image planes can be described as regions of a larger image plane.
- FIG. 1A is a schematic illustration of an example of an optical system 100 according to some embodiments disclosed herein.
- the imaging system 100 can include an optical system 102 having one or more optical elements for collecting light from a field of view (not shown).
- the optical system 102 can include a converging lens 103 and a diverging lens 104.
- the converging lens 103 and the diverging lens 104 can, in combination, direct the collected light onto a beam splitter 105.
- the converging lens 103 and the diverging lens 104 can collectively collimate the light received from the field of view and direct the collimated light to the beam splitter 105.
- the optical system 102 can include an image capture device including at least one wide-angle lens that collects light received from a wide-angle field of view.
- a portion of the light incident on the beam splitter 105 can be reflected by the beam splitter 105 to a first image plane 106. Another portion of the incident light can be passed through the beam splitter 105 to a second image plane 108.
- the beam splitter 105 can direct the one or more portions of the light incident thereon using any scheme known in the art.
- a mirror 110 can be used to receive the portion of the light forwarded by the beam splitter 105 and direct the received light (e.g., by reflection) onto the first image plane 106.
- An image detector 107 can be optically coupled to the first image plane 106 and configured to detect at least a portion of the light that is directed to the first image plane in order to acquire first image data of the field of view or at least a portion of the field of view.
- a second image detector 109 can also be optically coupled to the second image plane 108. Similarly, the second image detector 109 can be configured to acquire second image data of the field of view or at least a portion thereof. As discussed in more detail below, in some embodiments, the second image data can be combined with the first image data to construct a higher resolution image than an image constructed based on the first image data acquired by the first image detector 107 or second image detector 109 individually.
- an optical system 100 can include additional image planes and/or detectors.
- the optical system 100 can include a third image plane and/or a third image detector, to which light emanating from the field of view can be directed.
- additional beam splitters or other optical components can be used to direct one or more portions of the light within the optical system 100.
- Such configurations can allow for multiple higher resolution images to be created simultaneously from different portions of the field of view.
- in some embodiments, two image planes (e.g., each receiving an output of a beam splitter), each having a two-dimensional array of sensors offset from the other, can be used.
- such image planes can allow for multiple overlapping regions in the horizontal and vertical directions, removing any dead zones.
- Image sensors generally have dead zones surrounding active sensing pixels in the form of calibration pixels, readout electronics or packaging. The dead zones preclude contiguous horizontal or vertical tiling of multiple image sensors.
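The dead-zone argument above can be made concrete with a one-dimensional sketch (illustrative only; sensor width, dead-border width, and offsets are assumptions): two sensors tiled edge to edge on one plane leave a seam of dead columns, and a half-sensor-offset sensor on a second plane covers that seam.

```python
# Sketch: offset image planes removing the dead-zone seam between tiled
# sensors (1-D model; widths and offsets are illustrative).

def covered(sensor_lefts, width, dead=1):
    """Columns of the field imaged by at least one sensor.
    Each sensor is `width` columns wide with `dead` non-imaging columns
    on each side."""
    cols = set()
    for left in sensor_lefts:
        cols.update(range(left + dead, left + width - dead))
    return cols

w = 8
plane_a = covered([0, w], w)       # two sensors tiled edge to edge
plane_b = covered([w // 2], w)     # one sensor on a second, offset plane

seam = {w - 1, w}                  # dead columns at the plane-A sensor joint
assert not seam <= plane_a         # plane A alone cannot image the seam
assert set(range(1, 2 * w - 1)) <= (plane_a | plane_b)  # union has no gap
```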
- FIG. 1B is a schematic illustration of another example of an optical system 100' according to some embodiments disclosed herein.
- the imaging system 100' can include two or more optical systems 102, 111 for collecting light from two or more corresponding fields of view (not shown).
- Each optical system 102, 111 can include any suitable optical component or system available in the art. Further, the optical system can comprise a plurality of optical paths, each having an independent optical axis. For example, as shown in FIG. 1B, each optical system 102, 111 can include a converging lens 103, 110 and a diverging lens 104, 112. Further, each optical system 102, 111 can direct the collected light onto one or more corresponding image planes 106, 108. Each image plane 106, 108 can include one or more image detectors 107, 109. Each image detector 107, 109 can be configured to detect at least a portion of the light that is directed to that image plane in order to generate image data of its corresponding field of view or at least a portion of the corresponding field of view.
- the first image plane 108 can be configured to receive at least a portion of the light that is directed from a first portion of the field of view (not shown) to the first optical system 102.
- the second image plane 106 can be configured to receive at least a portion of the light directed from a second portion of the field of view (not shown) to the second optical system 111.
- the first detector 109 can be configured to detect at least a portion of the light that is directed to the first image plane in order to generate first image data of the first portion of the field of view.
- the second detector 107 can be configured to detect at least a portion of the light that is directed to the second image plane in order to generate second image data of the second portion of the field of view.
- the first and second portions of the field of view can be complementary and/or overlapping regions.
- the first and second portions can each represent a subset of the field of view. These subsets of the field of view can be selected such that, once combined, they span and represent the entire field of view.
- the first and second portions of the field of view can be overlapping regions.
- the first portion of the field of view can represent the entire field of view and the second portion of the field of view can represent a subset/segment of the field of view that overlaps with the first portion.
- the first and second portions can be complementary regions (each can be a subset of the field of view), which can also include at least some overlapping regions.
- the first and second image planes can correspond to complementary fields of view.
- the first field of view can include a portion of the second field of view or be a subset of the second field of view.
- the first and second fields of view can be or include one or more portions of a larger field of view.
- the first and second fields of view can be complementary fields of view that together represent one or more portions of a larger field of view. Accordingly, the first image data and the second image data can be combined to create an image of the field of view represented by the first and second fields of view.
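One illustrative way to combine image data from two complementary, slightly overlapping fields of view into one image of the larger field is a simple stitch that averages the shared columns. This sketch is an assumption for exposition (real stitching would register the views and blend more carefully):

```python
import numpy as np

# Sketch: stitching two horizontally adjacent views that share `overlap`
# columns into one image of the larger field of view (illustrative sizes).

def stitch(left_img: np.ndarray, right_img: np.ndarray,
           overlap: int) -> np.ndarray:
    """Concatenate two views, averaging the overlapping columns."""
    shared = (left_img[:, -overlap:].astype(np.float32)
              + right_img[:, :overlap].astype(np.float32)) / 2
    return np.hstack([left_img[:, :-overlap].astype(np.float32),
                      shared,
                      right_img[:, overlap:].astype(np.float32)])

left = np.full((2, 6), 100, dtype=np.uint8)   # first field of view
right = np.full((2, 6), 200, dtype=np.uint8)  # second, complementary view
pano = stitch(left, right, overlap=2)         # 2 x 10 combined image
```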
- FIG. 2 is an example of an imaging system 200 according to some embodiments disclosed herein.
- the imaging system 200 can include a processor 205.
- the processor 205 can be any processor or processing system known in the art.
- the processor 205 can be a data processing apparatus, e.g., a programmable processor, a computer, or multiple computers.
- the processor 205 can be electrically coupled to one or more image detectors.
- the processor 205 can be coupled to a first image detector 107 and a second image detector 108.
- Each of the detectors 107, 109 can be disposed in one or more corresponding image planes.
- the first image detector 107 can be disposed in a first image plane 106 and the second image detector 108 can be disposed in a second image plane 109.
- the processor 205 can be configured to collect image data acquired by the image detectors, combine the collected data, and use the combined data to generate an image of a field of view 299. Specifically, as shown in FIG.
- the first image detector 107 can be disposed in a first image plane 106 and configured to detect at least a portion of the light (LI) directed from at least a first portion of the field of view and generate first image data (Dl) corresponding to the detected portion of the light.
- the second detector 108 can be disposed in a second image plane 109 and configured to detect at least a portion of the light directed (L2) from at least a second portion of the field of view and generate second image data (D2) corresponding to the detected portion of the light.
- the first and second image data can correspond to complementary and/or overlapping regions.
- the processor 205 can receive the data Dl, D2 from the first 107 and the second 108 image detectors and process and combine the collected image data to construct a resulting combined image. Further, as discussed in more detail below, the processor 205 can form an image of a field of view 299 based on the image data acquired by the first 107 and second 108 image detectors and analyze that image to identify one or more objects of interest, if any, in the field of view 299.
- the processor 205 can be configured to select a portion of the field of view 299 for which image data is collected by one or more detectors and utilize other detectors to collect additional image data corresponding to the selected portion of the field of view.
- the processor 205 can use the additional image data to generate enhanced images of the field of view and/or enhanced images of the selected portion of the field of view.
- the processor 205 can select a portion (P1) of the field of view 299, for which image data Dl is acquired by the first image detector 107.
- the processor can also collect second image data D2, corresponding to the selected portion (P1) of the field of view 299, using the second detector 108.
- the first image data Dl and the second image data D2 can be used to generate an enhanced image of the field of view and the selected portion (P1) of the field of view.
- the first and second image data Dl, D2 can be used to generate an image having a higher resolution than an image constructed solely based on the first (Dl) or the second (D2) image data.
- the processor 205 can process image data from complementary regions of the field of view 299 to obtain enhanced images of the field of view 299.
- the processor can detect one or more objects of interest in the first image and configure the second detector 108 to obtain image data corresponding to a portion of the field of view in which the one or more objects of interest are detected. This allows for obtaining enhanced images of the portions of the field of view containing the detected objects of interest (e.g., images having higher resolution).
- the images obtained by the first and second image detectors 107, 108 are not limited to static images.
- the first and/or second image detectors can detect and generate data suitable for forming dynamic (e.g., video) images.
- the processor can be configured to analyze dynamic images (video) obtained by the first detector to detect one or more objects of interest and control the second detector to obtain more detailed images (static or dynamic) of the portion of field of view containing the detected objects of interest.
- additional information regarding the field of view can be obtained by utilizing independent optical paths and/or independent optical lenses.
- image data obtained (by utilizing independent optical paths and/or independent optical lenses) from overlapping regions of a field of view can be used to extract depth information.
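A minimal sketch of how depth-bearing disparity might be extracted from the overlapping region of two image planes is one-dimensional block matching along a row. This assumes rectified, horizontally offset views; the window size, search range, and row contents are illustrative:

```python
import numpy as np

# Sketch: 1-D block matching in the overlapping region of two image planes.
# The disparity is the shift minimising the sum of absolute differences.

def row_disparity(row_a: np.ndarray, row_b: np.ndarray, x: int,
                  win: int = 2, max_d: int = 4) -> int:
    """Disparity at column x of row_a, searched over shifts 0..max_d."""
    patch = row_a[x - win: x + win + 1].astype(np.int32)
    costs = []
    for d in range(max_d + 1):
        cand = row_b[x - d - win: x - d + win + 1].astype(np.int32)
        costs.append(int(np.abs(patch - cand).sum()))
    return int(np.argmin(costs))

row_b = np.array([0, 0, 50, 90, 50, 0, 0, 0, 0, 0, 0, 0], dtype=np.uint8)
row_a = np.roll(row_b, 3)             # the same feature, shifted 3 px
d = row_disparity(row_a, row_b, x=6)  # recovers the 3-px disparity
```

Combined with the depth relation Z = f * B / d, such disparities yield the depth information discussed here.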
- Depth information can be very valuable in video surveillance, autonomous driving, virtual reality, and many other applications.
- the optical system 200 can also include a buffer 206, which can be used to store (e.g., temporarily store) image data acquired by the first 107 and the second 108 image detectors.
- the buffer 206 can be connected to the processor 205 and configured such that the processor 205 can communicate with the buffer 206 to store image data in the buffer 206 and/or retrieve image data from the buffer 206.
- the buffer 206 can be any buffer and/or storage medium known in the art. Further, the buffer 206 can be an integral portion of the optical system 200 and/or any component included in the optical system 200. Alternatively or additionally, the buffer 206 can be an independent component coupled to the optical system 200.
- each of the first 107 or second 108 image detectors can include an integrated buffer for temporary storage of the image data acquired by that image detector 107, 108.
- the processor 205 can also be coupled to one or more interfaces.
- the processor 205 can be coupled to one or more display elements 209, 210 and/or a user interface 208.
- the processor 205 is coupled to a primary display 209 and a secondary display 210.
- the primary 209 and secondary 210 displays can receive data (e.g., data corresponding to the first and second image data, Dl and D2) from the processor 205, and display images corresponding to the received data.
- the processor 205 can cause the display of a graphical element 211, such as a call-out box, in a display element (e.g., the primary display 209).
- the graphical element 211 can highlight the portion of the first image that has been reproduced as the second image at a higher resolution, and display the higher resolution image (e.g., in the secondary display 210).
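The call-out behaviour above can be sketched as drawing a one-pixel rectangle outline on the first image to mark the zoomed region. This is illustrative only; the image size, box coordinates, and marker value are assumptions:

```python
import numpy as np

# Sketch: a call-out box highlighting the region of the first image that is
# reproduced at higher resolution on the secondary display.

def draw_callout(img: np.ndarray, top: int, left: int, bottom: int,
                 right: int, value: int = 255) -> np.ndarray:
    """Return a copy of `img` with a 1-pixel rectangle outline drawn on it."""
    out = img.copy()
    out[top, left:right] = value          # top edge
    out[bottom - 1, left:right] = value   # bottom edge
    out[top:bottom, left] = value         # left edge
    out[top:bottom, right - 1] = value    # right edge
    return out

frame = np.zeros((10, 10), dtype=np.uint8)  # stand-in for the first image
marked = draw_callout(frame, 2, 3, 7, 8)    # outline the zoomed region
```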
- the term "display," as used herein, can refer to a device, such as a computer monitor or a television monitor, that allows visual presentation of data, e.g., images, and it can also refer to at least a portion of a screen provided by such a device for viewing images.
- each of the primary and secondary display 209, 210 can refer to a computer monitor or a window provided on a computer screen.
- the optical system 200 can include a user interface 208 connected to the processor 205.
- the user interface 208 can allow a user (not shown) to interact with the optical system 200 and/or control the functions and/or components of the optical system 200.
- the user interface module 208 can include any suitable interface hardware known in the art.
- the user interface module 208 can be a mouse, keyboard, stylus, trackpad, or other input device. In some embodiments, these input devices can be used in combination with the primary and secondary displays 209, 210 to allow a user to select a portion of the field of view that is displayed.
- the optical system 200 can further include other or additional storage elements, for example a permanent storage medium 207.
- the permanent storage medium 207 can store information, such as the first and second image data Dl, D2 and/or the resulting images for later access and/or review.
- the processor 205 can be in communication with the permanent storage medium 207 and arranged to cause the transfer and/or retrieval of the image data and/or images constructed from the image data to/from the permanent storage medium 207.
- the information (e.g., data or images) stored in the permanent storage medium 207 can be used in processing and/or display of data and images.
- one or more elements that can display and save the image for later retrieval and display can be utilized.
- a hybrid system that can both display and save image data for later retrieval and review can be utilized.
- the first image detector 107 can acquire the image data Dl by detecting light collected from the field of view 299 using an optical system, such as optical system 102 shown in FIGs. 1A-1B.
- the optical system 102 can include any suitable optical element available in the art.
- the optical system 102 can include a wide-angle lens (e.g., lens 103, shown in FIGs. 1A-1B, can be a wide-angle lens) that can capture the light directed from a scene within at least one portion of the field of view 299.
- the wide-angle lens can be any suitable wide-angle lens known in the art.
- the wide-angle lens can be a fisheye lens that can capture light received from a 180-degree field of view (e.g., a wide-angle field of view) and direct that light to the first image detector 107.
- the first image detector 107 (which, as described below with reference to FIGs. 3A-3C, can typically include a plurality of image detecting elements) can convert the optical photons incident thereon into electrical signals.
- the first image detector 107 can use any suitable technique known in the art for converting the incident light into corresponding electrical signals.
- These electrical signals can be stored (e.g., under the control of the processor 205) in a storage medium, such as buffer 206.
- the processor 205 can retrieve the stored image data (Dl) from the buffer 206 and operate on the image data (Dl) in a manner known in the art to form a first image of the scene.
- the processor 205 can optionally correct for any geometrical distortions of the image data by employing any suitable scheme known in the art.
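One common geometric model such a correction might assume for a fisheye lens is the equidistant projection r = f * theta, which maps a ray's incidence angle to a radial image distance and can be inverted exactly. The model choice and the numeric values here are illustrative assumptions, not the patent's scheme:

```python
import math

# Sketch: the equidistant fisheye model r = f * theta and its inversion,
# one possible basis for correcting the geometric distortion of a
# wide-angle image (focal length and angle values illustrative).

def fisheye_radius(f_px: float, theta_rad: float) -> float:
    """Radial image distance of a ray arriving at angle theta."""
    return f_px * theta_rad

def incidence_angle(f_px: float, r_px: float) -> float:
    """Inverse mapping: incidence angle of the ray that landed at radius r."""
    return r_px / f_px

f = 400.0
theta = math.radians(45.0)
r = fisheye_radius(f, theta)                 # radial distance in pixels
assert math.isclose(incidence_angle(f, r), theta)
```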
- a wide-angle lens can be used in combination with a curved Pellicle-type image splitter to direct the light received from the field of view to one or more image planes.
- images having resolutions of about 8,000 by 4,000 pixels can be obtained using two image planes (e.g., a pair of detectors) coupled to a curved Pellicle-type image splitter that connects to a wide-angle lens.
- FIGs. 3A-3B are schematic illustrations of image planes 106, 109 that can be obtained using one or more image detecting arrays according to the embodiments disclosed herein.
- the image detectors (e.g., the first 107 and the second 108 image detectors) can each be implemented as a plurality of image detecting elements that are fixedly coupled to a corresponding image plane.
- for example, as shown in FIGs. 3A-3B, each image detector 107, 108 can be implemented as a plurality of image detecting elements 301, 302, 303 (for the first image detector 107) and 301', 302', 303' (for the second image detector 108) that are fixedly coupled to the image planes 106, 109 in which the image detectors 107, 108 are utilized.
- any suitable image detecting element known in the art can be used to form the image detectors (e.g., the first 107 and second 108 image detectors) described herein.
- an image detecting element can include, without limitation, any of a Charge-Coupled Device (CCD), a Complementary Metal-Oxide Semiconductor (CMOS), a Thermal Imaging device, or any other imaging device known in the art and suitable for use with the embodiments disclosed herein.
- the image detecting elements 301, 302, 303, 301 ', 302', 303' can be arranged in any suitable manner available in the art.
- the image detecting elements 301, 302, 303, 301', 302', 303' can be arranged as an array and/or a matrix of image detecting elements that are configured to capture and detect the light directed to each image plane 106, 109.
- Arrangements of image detecting elements having various densities (i.e., the number of image detecting elements per unit area of the image plane) can be used.
- the density of the arrangements of image detecting elements used can depend on various factors, such as the type of application in which the imaging system is used, the type of image detecting elements, etc.
- the image detecting elements 301, 302, 303, 301', 302', 303' can be configured in any suitable manner known in the art. Further, the image detecting elements 301, 302, 303, 301', 302', 303' can be configured to communicate with the processor 205 and/or to transmit and/or receive image data or other information to/from the processor 205 using any suitable technique known in the art. Furthermore, the image detecting elements can be configured such that the image data acquired by the image detecting elements exhibits a predetermined total resolution.
- the image detecting elements 301, 302, 303, 301', 302', 303' can be configured such that the image data acquired by those elements exhibits a total resolution greater than about 1 megapixel per square inch (e.g., in a range of about 1 megapixel to about 20 megapixels per square inch).
- the linear density of the image detecting elements can be selected based on a variety of factors, such as a desired resolution for the second image, the physical constraints of the imaging system, cost, power consumption, etc.
- six image detecting elements (three per image plane), each an approximately 15 mm x 11 mm array of 2.4 μm pixels, can be disposed along an extent of about 66 mm. This arrangement can result in a resolution of approximately 120 megapixels.
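The per-detector pixel-count arithmetic can be checked directly (illustrative only; the combined total usable resolution depends on how much the detector fields of view overlap, which is why it can differ from a simple sum):

```python
# Per-detector pixel counts for a 15 mm x 11 mm array of 2.4 um pixels.
pixel_pitch = 2.4e-6          # meters
width, height = 15e-3, 11e-3  # meters

pixels_x = round(width / pixel_pitch)   # pixels along the 15 mm side
pixels_y = round(height / pixel_pitch)  # pixels along the 11 mm side
megapixels_per_detector = pixels_x * pixels_y / 1e6

# Six such detectors (three per image plane) span about 66 mm laterally.
# Overlapping regions image the same parts of the field of view, so the
# combined resolution is below the raw sum of the individual detectors.
total_raw_megapixels = 6 * megapixels_per_detector
print(pixels_x, pixels_y, round(megapixels_per_detector, 1))
```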
- FIG. 3C is a schematic illustration of an example of a combined image plane 316 that can be formed using overlapping image data obtained from the detecting arrays shown in FIG. 3A and 3B.
- the combined image plane 316 combines the image plane 106, formed using image detecting elements 301, 302, 303, with the image plane 109, formed using image detecting elements 301', 302', 303'.
- the image data obtained by the image detecting elements 301, 302, 303 utilized to form the first image plane 106 can overlap with the image data obtained using image detecting elements 301', 302', 303' utilized to form the second image plane 109.
- These overlapping regions 311, 312, 313 can provide additional information regarding the field of view.
- the overlapping regions 311, 312, 313 can be used to obtain images having higher resolution than an image that can be obtained individually from the image detecting elements 301, 302, 303 in the first image plane 106 and/or an image that can be obtained individually from the image detecting elements 301', 302', 303' in the second image plane 109.
- the overlapping regions 311, 312, 313 can be used to obtain additional information (e.g. , additional, more focused images) regarding the regions of field of view corresponding to the overlapping regions 311, 312, 313.
- FIG. 4A is a schematic illustration of a first image 401 that can be obtained using the embodiments disclosed herein.
- FIG. 4B is a schematic illustration of a second image 402 that can be obtained using the embodiments disclosed herein.
- FIG. 4C is a schematic illustration of an image 403 that can be obtained from combining (e.g. , digitally) the first 401 and second 402 images shown in FIG. 4A and FIG. 4B.
- FIG. 4A illustrates an image 401 containing the sun and part of a house, obtained from an overall field of view in a first image plane 106.
- FIG. 4B shows another image 402 containing a person walking and the complementary portion of the house shown in FIG. 4A.
- the processor 205 can combine the first image 401 and second image 402 to yield the image 403 of the entire field of view.
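The digital combination of two complementary images into one full-field image can be sketched as follows (a minimal sketch assuming the two halves are already registered; the `combine_halves` helper and the simple averaging of the shared columns are assumptions of this illustration, not the method the processor 205 necessarily uses):

```python
import numpy as np

def combine_halves(left, right, overlap=0):
    """Stitch two horizontally complementary images into one frame.

    left, right: 2-D arrays covering the left and right portions of the
    field of view; `overlap` columns are shared by both inputs and are
    averaged in the output.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    if overlap == 0:
        return np.hstack([left, right])
    # Average the shared columns, then paste the unique parts around them.
    shared = (left[:, -overlap:] + right[:, :overlap]) / 2.0
    return np.hstack([left[:, :-overlap], shared, right[:, overlap:]])
```

Real systems would typically blend the overlap more carefully (e.g., feathering) rather than averaging it uniformly.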
- the processor 205 can also determine range information for the overlapping areas. For example, in the example shown in FIGs. 4A-4C, the processor can determine the range of elements such as the house, the person, and the sun as these elements move into the overlapping regions.
- the processor can utilize any appropriate image recognition software to detect motions and overlapping regions.
- the complementary and/or overlapping image planes disclosed herein provide an economically feasible manner (as compared to presently available technology) of constructing the equivalent of large monolithic sensor/image detecting devices.
- the complementary and/or overlapping image planes disclosed herein can be used to construct a mosaic of contiguous images.
- an effective resolution of at least about 10 megapixels per square inch, and preferably 1000 megapixels per square inch can be obtained.
- the term "effective resolution," as used herein, refers to the resolution that an equivalent monolithic 2-dimensional sensor would have. As an example, a sensor having 20,000 by 20,000 pixels can have an effective resolution of 400 megapixels.
- the processor 205 can analyze the first image 401 to detect one or more objects of interest. Additionally or alternatively, the processor 205 can be configured to detect changes in one or more objects of interest in the field of view. For example, the processor 205 can be configured to detect changes in the location (e.g., motion or movement) of an object of interest in the field of view.
- the processor 205 can utilize any suitable image recognition algorithm and/or method known in the art to detect the objects disposed in the field of view.
- the processor 205 can be configured (e.g. , via loading appropriate software) to detect human subjects, vehicles, or any other object of interest in the field of view.
- Any suitable image recognition algorithms can be employed. For example, image recognition techniques such as those disclosed in U.S. Patent No. 6,301,396 entitled “Nonfeedback-based Machine Vision Methods for Determining a Calibration Relationship Between a Camera and a Moveable Object," and U.S. Patent No.
- a predictive Kalman-type filter can be utilized to calculate best estimate probabilistic predictions for objects being tracked.
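A minimal constant-velocity Kalman-type filter for a tracked object's 1-D position can be sketched as below (an illustration only; the state model, noise parameters `q` and `r`, and the `kalman_step` helper are assumptions of this sketch, since the source does not specify the filter's design):

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=0.1):
    """One predict/update cycle for the state [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])             # only position is observed
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance
    # Predict: propagate the state and its uncertainty forward one frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z.
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Iterating `kalman_step` over successive detections yields best-estimate probabilistic predictions of where a tracked object will appear in the next frame.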
- successive images can be obtained according to a predetermined pattern. For example, when used for autonomous driving, successive frames can be subtracted to identify the sky, the road, lane boundaries, and vehicles moving within each lane. Vehicles and people (e.g., on the sidewalk) can fit pre-determined patterns of movement.
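The frame-subtraction idea can be sketched as follows (a simple illustration; a production system would add noise-aware thresholding, morphological cleanup, and per-lane logic):

```python
import numpy as np

def moving_mask(prev_frame, curr_frame, threshold=10):
    """Return a boolean mask of pixels that changed between two frames.

    Static scenery (sky, road surface, lane markings) subtracts away;
    moving vehicles and pedestrians survive the threshold.
    """
    # Cast to a signed type first so the subtraction cannot wrap around.
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return diff > threshold
```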
- the processor 205 can be configured to track one or more objects that are moving within the field of view.
- FIGs. 5A and 5B schematically depict a vehicle 501, 501' within a field of view 599, 599' (e.g., a parking lot) within corresponding image planes (spanned by X-W lines in FIG. 5A and Y-Z lines in FIG. 5B).
- the processor 205 (FIG. 2) can analyze the first image 599 (e.g., a wide- angle image of the parking lot) to detect the moving vehicle 501.
- the processor 205 can analyze the second image 599' to detect the moving vehicle 501' in its new location.
- the processor 205 can utilize any suitable image recognition software known in the art to detect the moving vehicle 501. Further, in obtaining the second image 599', the processor 205 can utilize information obtained from the first image 599 to determine the arrangement of the image detecting elements (e.g., whether overlapping image detecting elements should be used). Any reasonable and/or suitable arrangement of image detecting elements can be used. Further, the image detecting elements can be arranged to acquire overlapping image data from different portions of the field of view.
- a plurality of image detecting elements arranged horizontally can be used to acquire image data that overlap in the horizontal direction. This can be useful for surveillance, where it may be desirable to provide digital panning with high resolution.
- three planes, each having a 2-D array of image detecting elements can be used to provide continuous coverage of a larger 2-D image. This can allow digital pan and tilt with high resolution.
- one image plane can contain additional optics, such as a focusing lens that allows a larger field of view to be condensed, e.g., onto a single image detecting element. This can allow different image planes to provide different image resolutions.
- one image plane can have a single image detecting element that is configured to provide a continuous lower resolution image. Such image detecting elements can be useful when a user is viewing the field of view in a zoomed out view.
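Such a continuous lower-resolution overview can also be approximated digitally by block-averaging high-resolution image data (a sketch; the `block_average` helper is an illustrative assumption, and it requires the downsampling factor to divide the image dimensions):

```python
import numpy as np

def block_average(img, factor):
    """Downsample a 2-D image by averaging factor x factor pixel blocks."""
    h, w = img.shape
    assert h % factor == 0 and w % factor == 0
    # Reshape so each block occupies two axes, then average those axes.
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
```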
- an image plane can utilize an image detecting element that detects radiation in a different portion of the electromagnetic spectrum, such as thermal imaging data.
- such a detecting element can allow for lower resolution imaging in such spectra.
- image detecting elements associated with one image plane can be infrared detecting elements that are configured to detect infrared radiation, and the image detecting elements associated with another image plane can be configured to detect visible radiation (e.g., wavelengths in a range of 400 nm (nanometers) to 700 nm).
- a polarizing beam splitter or filter can be used to provide different polarization or different subsets of detected light to the different image planes.
- complementary images of one or more portions of the field of view can be combined to provide enhanced information regarding the field of view.
- the image data obtained via detecting elements associated with one image plane can have no overlap with the image data obtained via detecting elements associated with another image plane. In that case, upon digitally stitching the two sets of image data, an image of the field of view is obtained.
- FIG. 6 illustrates a combined image plane 616 formed using complementary and overlapping image arrays arranged according to some embodiments disclosed herein.
- the imaging system 600 shown in FIG. 6 can include an optical system 602 that collects the light directed from a field of view 699.
- the optical system 602 can include one or more optical elements for receiving and directing the light received from the field of view 699.
- the optical system 602 includes a converging lens 603 for receiving and directing the light received from the field of view 699.
- the imaging system 600 can further include additional optical elements for directing the light forwarded by the optical system 602.
- any suitable optical element known in the art can be used.
- the converging lens 603 and any other optical element utilized in the imaging system 600 can be coupled to a beam splitter 605 and configured to direct the light collected by the optical system 602 onto the beam splitter 605.
- the converging lens 103 and the diverging lens 104 can collectively collimate the light received from the field of view and direct the collimated light to the beam splitter 105.
- At least one portion of the light incident on the beam splitter 605 can be reflected by the beam splitter 605 to one or more complementary image detectors A1, A2.
- the complementary image detectors A1, A2 can be included in an image plane A.
- Other portions of the incident light can be passed through the beam splitter 605 to other complementary image detectors B1, B2, which are disposed in an image plane B.
- any suitable beam splitter known in the art can be used to direct the one or more portions of the light incident on the beam splitter onto the image detectors.
- a curved pellicle image splitter is used to shape the image and forward one or more portions of the incident light onto the image detectors A1, A2, B1, B2.
- a flat pellicle image splitter can be used.
- each image plane A, B can include one or more image detectors (e.g., detectors A1, A2, B1, B2) and image detecting elements (e.g., image detecting elements 301, 302, 303, shown in FIG. 3A).
- the image detecting elements can be arranged in any suitable manner available in the art.
- the image detecting elements can be arranged as an array and/or a matrix of image detecting elements that are configured to capture and detect the light directed to each image plane A, B.
- the image detectors A1, A2, B1, B2 can be complementary image detectors that are configured such that the data obtained using these detectors, once combined, can be processed to form an image 616 of at least a portion of a region of interest in the field of view 699.
- in this example, the imaging system 600 images the region of interest using four complementary image detectors A1, A2, B1, B2.
- when the image data (DA1, DB1, DA2, DB2) acquired using these image detectors is combined, an image 616 of the entire region of interest can be obtained.
- complementary image detectors A1, A2, B1, B2 can detect light emitted by overlapping regions 619 of the field of view and generate image data DA1, DA2, DB1, DB2 from portions of the field of view 699 that at least overlap in one or more regions. These overlapping regions can be used to match the complementary image data DA1, DA2, DB1, DB2 during the combination process and/or to generate more detailed images (e.g., images having higher resolution) from the regions of the field of view corresponding to the overlapping regions.
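Overlap-based matching of the kind described above can be sketched with phase correlation, which estimates the translation that best aligns two detectors' shared image data (an illustration using NumPy FFTs; the matching method actually used by the system is not specified in the source):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) by which image b is shifted relative to a."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fb * np.conj(Fa)
    cross /= np.abs(cross) + 1e-12            # keep only the phase information
    corr = np.fft.ifft2(cross).real           # sharp peak at the relative shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    h, w = a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Once the shift between two overlapping data sets is known, the complementary images can be pasted into a common frame at the correct offset.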
- Embodiments of the present invention can create images of a field of view having high resolution. For example, in one embodiment, by combining images associated with several individual image planes, images having a resolution as high as 15,000 by 54,400 pixels can be obtained using four image detectors.
- FIG. 7 is a schematic illustration of an example imaging system that utilizes a combined image plane having a curved geometry.
- the imaging system 700 can include an optical system 702 that is coupled with one or more image detectors A1, A2, B1, B2.
- Each image detector A1, A2, B1, B2 can include one or more image detectors (e.g., detectors 107, 109, shown in FIG. 1A) and image detecting elements (e.g., image detecting elements 301, 302, 303, shown in FIG. 3A).
- the image detectors A1, A2, B1, B2 can be arranged such that once combined they form a curved structure.
- the image detectors A1, A2, B1, B2 can include complementary and/or overlapping regions.
- the image detectors A1, A2 are tilted relative to one another.
- the image detectors B1, B2 are tilted relative to one another.
- the vectors V1, V2 perpendicular to the image planes A1, A2 form an angle α1 relative to one another
- vectors V3 and V4 that are orthogonal to the image detectors B1, B2 form an angle β1 relative to one another.
- the angles α1 and β1 can be the same. In other embodiments, the angles α1 and β1 can be different.
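The tilt angle between two detector planes can be recovered from their normal vectors via the dot product (the 15-degree example below uses hypothetical values; the actual geometry is design-dependent):

```python
import numpy as np

def angle_between(v1, v2):
    """Angle in degrees between two plane-normal vectors."""
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Two detectors whose normals are 15 degrees apart (hypothetical values).
alpha1 = angle_between([0.0, 0.0, 1.0],
                       [np.sin(np.radians(15)), 0.0, np.cos(np.radians(15))])
```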
- Embodiments disclosed herein can be used in various imaging applications.
- the image data acquired by three image detectors in three individual, corresponding, image planes can be combined to construct a composite image having a resolution of about 15,000 by 5,500 pixels.
- An image splitter, such as a Pellicle-type image splitter, can be used to split the light from the field of view without causing any distortion in the resulting image.
- while a Pellicle-type image splitter is discussed herein, one skilled in the art should appreciate that any suitable optical element and/or image splitter can be used with the embodiments disclosed herein, for example, prism-based optics.
- Embodiments disclosed herein can also be used in autonomous operation of motor vehicles (e.g. , self-driving cars).
- image sensors arranged according to the embodiments disclosed herein can be coupled to two or more image planes to obtain images having high resolution, images with increased/reduced pixel size, images that capture the speed of moving vehicles, etc.
- FIG. 8 is an illustrative example of the various image planes that can be utilized in operating a self-driving automobile/car according to embodiments disclosed herein.
- image plane 803 can use one or more optical detecting elements (sensors) optimized for detecting objects on the road, such as potholes.
- Image plane 804 can utilize one or more optical detecting elements (sensors) having a higher resolution. Such optical detecting elements can detect elements, such as street signs and traffic lights more accurately. Further, such optical detecting elements can be configured to detect street signs and traffic lights that may be positioned at a distance from the self-driving car. Further, image planes 801, 802 can include optical detecting elements optimized for reading roadside signs.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662370875P | 2016-08-04 | 2016-08-04 | |
PCT/US2017/045598 WO2018027182A1 (en) | 2016-08-04 | 2017-08-04 | Method and apparatus for obtaining enhanced resolution images |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3494692A1 true EP3494692A1 (en) | 2019-06-12 |
Family
ID=59702827
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP17757950.5A Pending EP3494692A1 (en) | 2016-08-04 | 2017-08-04 | Method and apparatus for obtaining enhanced resolution images |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP3494692A1 (en) |
WO (1) | WO2018027182A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11689811B2 (en) | 2011-09-19 | 2023-06-27 | Epilog Imaging Systems, Inc. | Method and apparatus for obtaining enhanced resolution images |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5960125A (en) | 1996-11-21 | 1999-09-28 | Cognex Corporation | Nonfeedback-based machine vision method for determining a calibration relationship between a camera and a moveable object |
US5974169A (en) | 1997-03-20 | 1999-10-26 | Cognex Corporation | Machine vision methods for determining characteristics of an object using boundary points and bounding regions |
US6734911B1 (en) | 1999-09-30 | 2004-05-11 | Koninklijke Philips Electronics N.V. | Tracking camera using a lens that generates both wide-angle and narrow-angle views |
US6833843B2 (en) | 2001-12-03 | 2004-12-21 | Tempest Microsystems | Panoramic imaging and display system with canonical magnifier |
US7750936B2 (en) | 2004-08-06 | 2010-07-06 | Sony Corporation | Immersive surveillance system interface |
WO2010048618A1 (en) * | 2008-10-24 | 2010-04-29 | Tenebraex Corporation | Systems and methods for high resolution imaging |
US9137433B2 (en) * | 2011-09-19 | 2015-09-15 | Michael Mojaver | Super resolution binary imaging and tracking system |
WO2016015623A1 (en) * | 2014-07-28 | 2016-02-04 | Mediatek Inc. | Portable device with adaptive panoramic image processor |
- 2017-08-04 WO PCT/US2017/045598 patent/WO2018027182A1/en unknown
- 2017-08-04 EP EP17757950.5A patent/EP3494692A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2018027182A1 (en) | 2018-02-08 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
 | STAA | Information on the status of an ep patent application or granted ep patent | STATUS: UNKNOWN |
 | STAA | Information on the status of an ep patent application or granted ep patent | STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
 | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | ORIGINAL CODE: 0009012 |
 | STAA | Information on the status of an ep patent application or granted ep patent | STATUS: REQUEST FOR EXAMINATION WAS MADE |
 | 17P | Request for examination filed | Effective date: 20190221 |
 | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
 | AX | Request for extension of the european patent | Extension state: BA ME |
 | DAV | Request for validation of the european patent (deleted) | |
 | DAX | Request for extension of the european patent (deleted) | |
 | STAA | Information on the status of an ep patent application or granted ep patent | STATUS: EXAMINATION IS IN PROGRESS |
 | 17Q | First examination report despatched | Effective date: 20201005 |
 | RAP3 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: EPILOG IMAGING SYSTEMS INC. |