WO2014023231A1 - System and method for ultra-high-resolution, wide-field-of-view optical imaging - Google Patents


Info

Publication number
WO2014023231A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, magixoom, registration, point, nflfs
Prior art date
Application number
PCT/CN2013/081002
Other languages
English (en)
Chinese (zh)
Inventor
贾伟
Original Assignee
泰邦泰平科技(北京)有限公司
Priority date
Filing date
Publication date
Application filed by 泰邦泰平科技(北京)有限公司 filed Critical 泰邦泰平科技(北京)有限公司
Publication of WO2014023231A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624 Studio circuits for obtaining an image which is composed of whole input images, e.g. splitscreen
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • the present invention relates to the field of optical imaging, and more particularly to an imaging system and method that achieve high resolution in both wide-field-of-view images and narrow-field-of-view images. Background Art
  • optical imaging systems are single-lens imaging systems, including short-focus imaging systems and telephoto imaging systems.
  • the short-focus imaging system has the ability to capture macroscopic wide-angle images of wide field of view
  • the telephoto imaging system has the ability to take close-up images of microscopic details of narrow field of view.
  • ordinary imaging offers on the order of 1 to 10 million pixels, with a correspondingly limited visual resolution (LW/PH) value.
  • LW/PH (line widths per picture height) is a measure of visual resolution.
  • a higher-resolution system is difficult to build: no general-purpose ultra-high-resolution sensor exists, or such sensors can only be obtained at great cost, and high-resolution optics are likewise difficult to manufacture. Therefore, to shoot wide-angle images one chooses a short-focus wide-angle lens, and to take close-up details one must choose a telephoto narrow-field lens.
  • CMOS: complementary metal oxide semiconductor.
  • a display of ordinary resolution can then be used to view either the entire field of view or a local field of view at ordinary resolution, achieving clear (or near-clear) images for both the wide field of view and the narrow field of view.
  • such a system provides good uniformity between macro wide-angle imaging and micro-detail imaging, but because the product has a large aperture and a small depth of field, it is difficult to keep scenes of large depth entirely within the depth of field.
  • one disadvantage is therefore the small depth of field.
  • the system is not capable of focus-free shooting.
  • another disadvantage is that the system is designed around a single ultra-high-resolution CMOS sensor, whose cost is high; a further disadvantage is that the required accuracy of the optical imaging lens is very high, making implementation expensive. The system is also unable to perform dynamic detail-tracking shooting.
  • with an optical imaging system whose sensor resolution and display resolution are close to each other, a short-focus wide-angle lens must be chosen for wide-angle imaging and a telephoto narrow-field lens for close-up detail; one cannot have both. In capturing dynamic details, it is difficult to keep a close-up shot on the target for long, even for an experienced photographer.
  • the present invention provides a wide-field-of-view ultra-high-resolution optical imaging system, namely the Magixoom (Magic Zoom) system based on ultra-high-resolution imaging technology, comprising: an NFLFS array composed of M rows and N columns of narrow-field-of-view long-focus imaging subsystems NFLFS; the fields of view of adjacent NFLFS overlap each other, and the main optical axes of all NFLFS converge at a point, or in the neighborhood of a point, which is the optical center of the Magixoom system, where M and N are natural numbers greater than or equal to 1, and at least one of M and N is greater than 1 and not equal to 2;
  • the horizontal field of view of the NFLFS at the i-th row and j-th column is ω_h + Ω_h + ε_hij, and its vertical field of view is ω_v + Ω_v + ε_vij;
  • the horizontal field of view HFOV of the Magixoom system is N·ω_h + Ω_h + ε_h;
  • the vertical field of view VFOV of the Magixoom system is M·ω_v + Ω_v + ε_v;
  • ω_h is the design angle between the main optical axes of horizontally adjacent NFLFS, and ω_v is the design angle between the main optical axes of vertically adjacent NFLFS;
  • Ω_h is the field-of-view overlap angle of horizontally adjacent NFLFS, approximately the angle subtended at the optical center of the system by the edges of the horizontal field-of-view overlap region at infinity of the horizontally adjacent NFLFS;
  • Ω_v is the field-of-view overlap angle of vertically adjacent NFLFS, approximately the angle subtended at the optical center of the system by the edges of the vertical field-of-view overlap region at infinity of the vertically adjacent NFLFS;
  • ε_hij and ε_vij are, respectively, the horizontal and vertical field-angle errors of the NFLFS at the i-th row and j-th column, and ε_h and ε_v are, respectively, the horizontal and vertical field-angle errors of the Magixoom system;
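The field-of-view formulas above can be checked numerically; the following sketch (function and parameter names are mine, not the patent's) computes the system HFOV/VFOV from the array size and the design angles, with the field-angle error terms assumed zero:

```python
def system_fov(M, N, omega_h, omega_v, Omega_h, Omega_v, eps_h=0.0, eps_v=0.0):
    """HFOV/VFOV (degrees) of an M-row x N-column NFLFS array.

    omega_h, omega_v: design angles between the main optical axes of
    horizontally / vertically adjacent NFLFS; Omega_h, Omega_v: the
    corresponding field-of-view overlap angles; eps_h, eps_v: system
    field-angle error terms (taken as zero in this sketch).
    """
    hfov = N * omega_h + Omega_h + eps_h
    vfov = M * omega_v + Omega_v + eps_v
    return hfov, vfov

# The 3 x 3 embodiment described later in the text:
hfov, vfov = system_fov(3, 3, omega_h=16.2, omega_v=12.05,
                        Omega_h=1.5, Omega_v=1.25)
```

For that 3 × 3 embodiment the formulas give HFOV = 3·16.2° + 1.5° = 50.1° and VFOV = 3·12.05° + 1.25° = 37.4°, consistent with the example given below.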
  • An image processing apparatus configured to process an array image having adjacent overlapping characteristics captured by the NFLFS array to obtain a wide field of view ultra-high resolution image;
  • a system control device coupled to the image processing device and the NFLFS array for controlling the operation of the various components of the Magixoom system.
  • the image processing device includes an image processing device of a full-calculation mode or an image processing device of a data mapping mode according to different product application requirements.
  • the image processing device of the full-calculation mode includes: an image projection module, an image overlap region feature point finding module, an image registration splicing module, an image fusion module, and an image cropping module, wherein
  • the image projection module projects the sensor coordinate system of each array image to the system coordinate system to obtain a corresponding projection image in the system coordinate system;
  • An image overlap area feature point finding module searches for feature points in an overlapping area of the projected image
  • the image registration splicing module uses the feature points to find corresponding registration points in the corresponding overlapping images, and filters the registration point pairs to find valid registration point pairs; according to the valid registration point pairs found, it calculates the relative positional relationship between the projection images of the respective sensors and splices the images in the system coordinate system;
  • the image fusion module adjusts the hue, brightness, and saturation between adjacent images after stitching to achieve a smooth transition between hue, brightness, and saturation between adjacent images;
  • the image cropping module takes an inscribed quadrilateral of the stitched image and cuts off the portions outside the quadrilateral to obtain the image output.
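The fusion module described above adjusts adjacent images for a smooth transition; a minimal cross-fade between two horizontally adjacent grayscale images illustrates the idea (linear alpha blending is my stand-in here, not the patent's exact fusion algorithm):

```python
import numpy as np

def blend_horizontal(left, right, overlap):
    """Splice two same-height grayscale images whose last/first
    `overlap` columns cover the same scene, with a linear cross-fade
    so that brightness transitions smoothly across the seam."""
    h, wl = left.shape
    alpha = np.linspace(1.0, 0.0, overlap)               # weight of `left`
    seam = left[:, wl - overlap:] * alpha + right[:, :overlap] * (1 - alpha)
    return np.hstack([left[:, :wl - overlap], seam, right[:, overlap:]])

a = np.full((4, 6), 100.0)   # darker left image
b = np.full((4, 6), 140.0)   # brighter right image
out = blend_horizontal(a, b, overlap=3)
```

In the three overlap columns the output ramps from the left image's level to the right image's level, which is the smooth-transition behavior the fusion module is responsible for.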
  • the image processing device of the data mapping mode includes a pixel mapping module configured to map and generate the Magixoom system image according to the captured array image and a fixed a priori mapping relationship between the array image and the Magixoom system image, applicable when the actual shooting object distance is close to the designed object distance;
  • the fixed a priori mapping relationship between the array image and the Magixoom system image refers to: an array image generated by actual shooting or by simulation during system design is processed through image projection, overlap-region feature point searching, registration stitching, fusion and cropping, thereby establishing a pixel mapping relationship between each pixel of the array image and each pixel of the Magixoom system image.
  • the image processing device further includes:
  • the micro-registration module is used to calculate the offset in the relative positional relationship between adjacent projected images caused by the difference between the accurately measured actual object distance and the designed object distance;
  • the pixel mapping relationship is corrected according to the offset to obtain an updated pixel mapping relationship.
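The micro-registration correction above can be sketched as a coordinate shift applied to the stored mapping; the dictionary layout from output pixel to (sensor, x, y) is my assumption for illustration, not the patent's data structure:

```python
def update_mapping(mapping, offsets):
    """mapping: {(i, j): (p, x, y)} from output pixel to sensor pixel;
    offsets: {p: (dx, dy)} measured by micro-registration at the actual
    object distance. Returns the corrected pixel mapping."""
    return {ij: (p, x + offsets.get(p, (0, 0))[0],
                     y + offsets.get(p, (0, 0))[1])
            for ij, (p, x, y) in mapping.items()}

prior = {(0, 0): (1, 10, 20), (0, 1): (2, 5, 7)}
updated = update_mapping(prior, {2: (1, -1)})   # sensor 2 shifted by (+1, -1)
```

Only output pixels sourced from the shifted sensor are touched; the rest of the a priori mapping is reused unchanged, which is the point of the data mapping mode.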
  • a Magixoom method, the method comprising:
  • composing the NFLFS array from M rows and N columns of narrow-field-of-view long-focus imaging subsystems NFLFS, the fields of view of adjacent NFLFS overlapping each other, and the main optical axes of all NFLFS converging at a point or in a neighborhood of the point,
  • the point being the optical center of the Magixoom system, where M and N are natural numbers greater than or equal to 1, and at least one of M and N is greater than 1 and not equal to 2; setting the horizontal field of view of the NFLFS at the i-th row and j-th column to ω_h + Ω_h + ε_hij and its vertical field of view to ω_v + Ω_v + ε_vij; setting the horizontal field of view HFOV of the Magixoom system to N·ω_h + Ω_h + ε_h and the vertical field of view VFOV of the Magixoom system to M·ω_v + Ω_v + ε_v, where 180° > ω_h > 0°, 90° > Ω_h > 0°, 180° > ω_v > 0° and 90° > Ω_v > 0°;
  • a system control device coupled to the image processing device and the NFLFS array for controlling the operation of the various components of the Magixoom system
  • An image processing apparatus for processing an array image having adjacent overlapping characteristics taken by the NFLFS array to obtain a wide field of view ultra-high resolution image.
  • the image processing device includes an image processing device of a full-calculation mode or an image processing device of a data mapping mode according to different product application requirements.
  • the image processing device of the full-calculation mode includes: an image projection module, an image overlap region feature point finding module, an image registration splicing module, an image fusion module, and an image cropping module, wherein
  • the image projection module projects the sensor coordinate system of each array image to the system coordinate system to obtain a corresponding projection image in the system coordinate system;
  • An image overlap area feature point finding module searches for feature points in an overlapping area of the projected image
  • the image registration splicing module uses the feature points to find corresponding registration points in the corresponding overlapping images, and filters the registration point pairs to find valid registration point pairs; according to the valid registration point pairs found, it calculates the relative positional relationship between the projection images of the respective sensors and splices the images in the system coordinate system;
  • the image fusion module adjusts the hue, brightness, and saturation between adjacent images after stitching to achieve a smooth transition between hue, brightness, and saturation between adjacent images;
  • the image cropping module takes an inscribed quadrilateral of the stitched image and cuts off the portions outside the quadrilateral to obtain the image output.
  • the specific method for searching feature points in the overlap region includes:
  • A11. At the current search point, construct intersecting lines through the point (including, but not limited to, a cross of two lines), and use a fixed step size in each direction along the intersecting lines to determine two regional-extreme-value judging auxiliary points on each line;
  • A12. Calculate the absolute value of the sum of the pixel value differences between the auxiliary points and the search point on each line of the cross, and sum these absolute values over the two lines; if the summation value is higher than a preset condition threshold, record the search point as a current regional extreme value feature point coordinate in the extreme feature point list;
  • A13. Move the search point and, following the same method as steps A11 and A12, traverse the entire overlap area to obtain all search points whose summation values are higher than the preset condition threshold, yielding the list of regional extreme value feature points of the overlap area.
  • the regional extreme value feature points obtained are sorted by summation value from large to small, and the top extreme points of the sorting are selected as the candidate regional extreme value feature points;
  • Diff2 = f(i + step5, j − step6) + f(i − step7, j + step8) − 2·f(i, j)
  • f(i, j) is the pixel value of the search point P(i, j), where i and j are positive real numbers;
  • Diff2 is the sum of the pixel value differences between the auxiliary judging points P(i + step5, j − step6), P(i − step7, j + step8) and the search point P(i, j) on the second line of the cross;
  • the summation of the |Diff| values over the two lines of the cross is compared against the preset condition threshold.
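Steps A11 to A13 and the Diff formula above can be sketched as follows; a single shared step size stands in for step1 through step8, and the cross is taken along the two diagonals (both simplifications are mine):

```python
import numpy as np

def extremum_features(img, region, step=2, thresh=60.0):
    """Traverse the overlap `region` (row0, row1, col0, col1) of a
    grayscale image. For each search point P(i, j), take two auxiliary
    points at +/-step along each of the two diagonal lines of a cross,
    compute |f(+) + f(-) - 2 f(P)| per line, sum over both lines, and
    keep points whose summation exceeds `thresh`."""
    f = img.astype(float)
    r0, r1, c0, c1 = region
    feats = []
    for i in range(r0 + step, r1 - step):
        for j in range(c0 + step, c1 - step):
            d1 = abs(f[i + step, j + step] + f[i - step, j - step] - 2 * f[i, j])
            d2 = abs(f[i + step, j - step] + f[i - step, j + step] - 2 * f[i, j])
            s = d1 + d2
            if s > thresh:
                feats.append((s, i, j))
    feats.sort(reverse=True)   # strongest candidate extrema first
    return feats

img = np.zeros((11, 11))
img[5, 5] = 255.0             # a single bright local extremum
feats = extremum_features(img, (0, 11, 0, 11))
```

The bright pixel scores highest because both cross lines see a large difference between the auxiliary points and the search point, which is exactly the regional-extreme-value criterion of step A12.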
  • the image registration and splicing between the registration template and the registered template according to the acquired overlap-region feature points includes, but is not limited to, the following specific methods and steps:
  • using the MSAD method to find candidate registration point pairs: use the cross line of a selected regional extreme value feature point of the registration template as the registration cross line, traverse the registration area of the registered image and calculate the SAD at each search point, find the MSAD and the corresponding image coordinate point pair, and take it as the candidate registration point pair of the current regional extreme value feature point;
  • the abscissa differences and ordinate differences of all registration point pairs are averaged respectively, yielding the horizontal and vertical registration-splicing translation data for all pixel points of the adjacent projected image, and hence their corresponding coordinates in the system coordinate system.
  • the using the MSAD method to obtain candidate registration point pairs includes:
  • Step 1: calculate the SAD at each search point Q(ii, jj) of the registered image, SAD(ii, jj) = Σ |f(P′) − g(Q′)|, summed over corresponding points P′ on the registration cross line of P(i, j) and Q′ on the cross line of Q(ii, jj), and take the minimum as the MSAD;
  • Step 2: take the coordinates of the MSAD point corresponding to the registration template and the registered template as a candidate registration point pair; the calculation formula takes a cross of two lines as an example, and crosses of one line or multiple lines are also applicable;
  • step9, step10, step11 and step12 are the registration step sizes;
  • P(i, j) is a regional extreme value feature point of the registration image;
  • Q(ii, jj) is the registration point of the registered image corresponding to the regional extreme value feature point;
  • SAD (Sum of Absolute Differences) is the sum of the absolute values of the differences between corresponding items of two corresponding sequences;
  • MSAD (Minimum Sum of Absolute Differences) refers to the minimum of multiple SAD values.
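A runnable sketch of the MSAD search follows; it uses one shared step size and a diagonal cross with an exhaustive scan, so the patent's step9 through step12 and the exact cross geometry are only paraphrased:

```python
import numpy as np

def msad_register(ref, tgt, p, search, step=2):
    """For feature point p = (i, j) of the registration image `ref`,
    slide its cross line over the `search` window (r0, r1, c0, c1) of
    the registered image `tgt`; at each candidate Q(ii, jj) compute the
    SAD between cross-line pixels around p and around Q, and return the
    minimum-SAD point (the MSAD point) as the candidate registration
    point for p, together with its SAD value."""
    i, j = p
    offs = [(0, 0), (-step, -step), (step, step), (-step, step), (step, -step)]
    ref_vals = [float(ref[i + di, j + dj]) for di, dj in offs]
    r0, r1, c0, c1 = search
    best_sad, best_q = None, None
    for ii in range(r0 + step, r1 - step):
        for jj in range(c0 + step, c1 - step):
            sad = sum(abs(float(tgt[ii + di, jj + dj]) - v)
                      for (di, dj), v in zip(offs, ref_vals))
            if best_sad is None or sad < best_sad:
                best_sad, best_q = sad, (ii, jj)
    return best_q, best_sad

ref = np.zeros((12, 12)); ref[5, 5] = 255.0
tgt = np.zeros((12, 12)); tgt[6, 7] = 255.0     # same scene shifted by (1, 2)
q, sad = msad_register(ref, tgt, (5, 5), (0, 12, 0, 12))
```

For a pure translation, the MSAD point recovers the shifted location of the feature exactly, and the coordinate difference between P and Q is the translation data used in the splicing step.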
  • the reasonable selection of the current candidate registration point pairs includes: for the current candidate registration point pair (P_k1(i, j), Q_k1(ii, jj)), take any other registration point pair (P_k2(i, j), Q_k2(ii, jj)), calculate the distances |P_k1 P_k2| and |Q_k1 Q_k2| respectively, and accumulate reasonable selection points according to how close the ratio |P_k1 P_k2| / |Q_k1 Q_k2| is to 1;
  • the reasonable selection points of each candidate registration point pair so obtained are sorted, and the top N_k candidate registration point pairs of the sorting are selected as the registration point pairs.
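The distance-consistency filtering can be sketched as below; scoring a pair by how close the ratio of the distances |P_k1 P_k2| and |Q_k1 Q_k2| is to 1 is my reading of the garbled formula, so treat the exact scoring rule as an assumption:

```python
from itertools import combinations
from math import dist  # Python 3.8+

def select_pairs(pairs, keep):
    """pairs: list of candidate registration point pairs (P, Q), each
    point an (x, y) tuple. Each pair is scored against every other pair
    by the consistency of the distances |P_k1 P_k2| and |Q_k1 Q_k2|
    (a ratio of 1.0 means the two images agree perfectly); the `keep`
    best-scoring pairs are retained as registration point pairs."""
    scores = [0.0] * len(pairs)
    for a, b in combinations(range(len(pairs)), 2):
        dp = dist(pairs[a][0], pairs[b][0])
        dq = dist(pairs[a][1], pairs[b][1])
        if dp > 0 and dq > 0:
            s = min(dp, dq) / max(dp, dq)
            scores[a] += s
            scores[b] += s
    ranked = sorted(range(len(pairs)), key=lambda k: scores[k], reverse=True)
    return [pairs[k] for k in ranked[:keep]]

# three pairs related by a pure translation, plus one inconsistent outlier:
cands = [((0, 0), (10, 10)), ((5, 0), (15, 10)),
         ((0, 5), (10, 15)), ((2, 2), (40, 40))]
kept = select_pairs(cands, keep=3)
```

The outlier's distances to the other points disagree badly between the two images, so its score stays low and it is filtered out before the translation averaging step.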
  • the method further includes:
  • the coordinate translation data of the last accurate registration is used as the coordinate translation data of the current registration.
  • the image processing device of the data mapping mode includes a pixel mapping module configured to map and generate the Magixoom system image according to the captured array image and a fixed a priori mapping relationship between the array image and the Magixoom system image, applicable when the actual shooting object distance is close to the designed object distance;
  • the fixed a priori mapping relationship between the array image and the Magixoom system image refers to: an array image generated by actual shooting or simulation of the system is processed through image projection, overlap-region feature point searching, registration stitching, fusion and cropping, thereby establishing a pixel mapping relationship between the array image and the Magixoom system image;
  • the Magixoom system image is then mapped and generated by looking up this pixel mapping relationship.
  • the pixel mapping relationship between the pixel values of the output image of the Magixoom system and the pixel values of each sensor array image is as follows:
  • R(i, j) is the pixel value at coordinates (i, j) of the Magixoom image;
  • p is the number of the sensor image that affects the pixel value at those Magixoom image coordinates;
  • f_p(x_p, y_p) is the pixel value at the point (x_p, y_p) of the sensor image numbered p;
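In data mapping mode the output image is generated by table lookup rather than by full computation; a minimal numpy sketch (nearest-pixel, one source sensor per output pixel; the patent's R(i, j) may also blend several sensors in overlap regions) might look like:

```python
import numpy as np

def apply_mapping(sensor_images, src_p, src_x, src_y):
    """sensor_images: array of shape (P, H, W), one image per sensor p;
    src_p, src_x, src_y: integer arrays of the output image's shape
    giving, per output pixel R(i, j), the sensor number p and the
    source coordinates (x_p, y_p) it maps from (x = column, y = row)."""
    return sensor_images[src_p, src_y, src_x]

imgs = np.stack([np.full((2, 2), 10), np.full((2, 2), 20)])  # sensors 0 and 1
p = np.array([[0, 1], [1, 0]])
x = np.zeros((2, 2), dtype=int)
y = np.zeros((2, 2), dtype=int)
out_img = apply_mapping(imgs, p, x, y)
```

Because the mapping is precomputed a priori, generating each output frame is a single gather operation, which is the performance advantage of the data mapping mode over the full-calculation mode.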
  • the image processing device further includes:
  • the micro-registration module is used to calculate the offset in the relative positional relationship between adjacent projected images caused by the difference between the accurately measured actual object distance and the designed object distance;
  • the pixel mapping relationship is corrected according to the offset to obtain an updated pixel mapping relationship.
  • system control device is further configured to:
  • the Magixoom system operates in a photographing mode, a dynamic tracking mode or a macro mode, wherein in the photographing mode the NFLFS array is placed under unified exposure control by the system control device to take a one-time photograph of the wide-field scene;
  • the photographed image is sent to the image processing apparatus for processing, and the processed photographed image is stored in the memory.
  • the unified exposure control means that the system control device uniformly controls the electronic shutters of the sensors of all NFLFS so that the sensors expose at the same time; exposure at the same time includes cases where the NFLFS have the same exposure duration and cases where they differ; in the case of different exposure durations, the exposure period of an NFLFS with a longer exposure duration covers the exposure period of an NFLFS with a shorter exposure duration.
  • the NFLFS array is controlled by a unified exposure control
  • the system control device defines a dynamic tracking finder frame according to the coordinates of the tracked object in the imaging system and the framing range of the application design;
  • the image processing device performs splicing pre-processing and splicing processing only on the image within the dynamic tracking finder frame, and stores the spliced image within the dynamic tracking finder frame to the memory;
  • the splicing pre-processing includes image projection transformation;
  • the image projection transformation refers to projecting the array image within the dynamic tracking finder frame, formed by the NFLFS array whose image planes are distributed along a spherical surface, onto a specified plane tangent to the sphere centered at the optical center of the Magixoom system, within the Magixoom system field of view, to obtain a planarly distributed image array.
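The projection onto a tangent plane is, in effect, a rectilinear (gnomonic-style) mapping; a minimal per-axis sketch, under my assumption that each ray's horizontal and vertical view angles are measured from the principal axis at the optical center:

```python
from math import tan, radians

def to_tangent_plane(az_deg, el_deg, f=1.0):
    """Map a viewing direction, given by its horizontal (az) and
    vertical (el) view angles in degrees relative to the principal
    axis, to coordinates on the plane tangent to the view sphere at
    the principal point, at distance f from the optical center.
    Per-axis x = f*tan(angle) is the usual rectilinear relation; the
    patent itself does not name a specific projection formula."""
    return f * tan(radians(az_deg)), f * tan(radians(el_deg))

# the main optical axis of a horizontally adjacent NFLFS, tilted by the
# 16.2 degree design angle from the 3 x 3 example below, lands at:
x_off, _ = to_tangent_plane(16.2, 0.0)
```

Applying this mapping to every pixel direction flattens the spherically distributed sub-images into one common plane, after which ordinary planar splicing applies.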
  • the image of the wide-field scene seen through the display and the browsing software is a down-sampled version of the originally captured image, reducing the amount of data;
  • a close-up image within the field-of-view scene seen through the display and the browsing software is a local close-up within the wide-field scene, or even an interpolated and magnified version of such a local close-up;
  • in dynamic tracking mode or macro mode, the display and the video display software show a close-up video of the dynamic scene within the dynamic tracking finder frame, or a video taken by the macro imaging system.
  • the macro imaging subsystem, either an independent macro imaging subsystem operating in macro mode or one integrated in the Magixoom system, independently performs macro photography or video shooting under the control of the system control device.
  • the dynamic tracking auxiliary imaging system has a field of view approximately the same as that of the NFLFS array, and is used to assist the Magixoom system in acquiring the coordinates of the tracked target when the dynamic tracking mode is running, transmitting the coordinates to the Magixoom system in real time; or it is used for imaging at object distances lying between those of the macro system and the Magixoom system.
  • the image processing apparatus may include one or more of the following modules to make the Magixoom system have better performance:
  • the color correction module replaces the automatic white balance function of each NFLFS to correct the color distortion of the formed image
  • a geometric correction module for performing geometric distortion correction on each NFLFS image
  • a brightness correction module for performing brightness distortion correction on each NFLFS image
  • the NFLFS array is a 3 × 3 NFLFS array, wherein the angle between the main optical axes of horizontally adjacent NFLFS is 16.2°, the angle between the main optical axes of vertically adjacent NFLFS is 12.05°, the horizontal overlap angle is 1.5°,
  • the vertical overlap angle is 1.25°,
  • the HFOV is 50.1°,
  • and the VFOV is 37.4°.
  • the Magixoom system of the present invention utilizes an M x N narrow field of view telephoto imaging system array, enabling ultra-high resolution imaging, simultaneous wide-angle and close-up imaging, and the ability to track dynamic targets for close-up tracking.
  • the imaging resolution of the imaging system of the present invention is much greater than the display resolution.
  • the optical resolution of the Magixoom system greatly exceeds the resolution of conventional imaging systems with the same optical specifications at general industrial accuracy, with an ultra-high LW/PH value.
  • the resolution of ultra-high resolution images imaged by the NFLFS array and spliced by the image processing device is much larger than the general display resolution. Therefore, the display of the wide-field scene must first sample the ultra-high resolution image.
  • the imaging system with this feature is named Magic Zoom, the Magixoom system.
  • FIG. 1 is a block diagram showing the principle composition of a Magixoom system according to an embodiment of the present invention
  • FIG. 2 is a horizontal cross-sectional view of a 3 x 3 NFLFS array of a Magixoom system in accordance with an embodiment of the present invention
  • Figure 3a and Figure 3b are schematic diagrams of the front and back optical centers in a 3 × 3 NFLFS array, respectively;
  • FIG. 4 is a cross-sectional view of the NFLFS line in the horizontal direction of the Magixoom system according to an embodiment of the present invention
  • Figure 5 is a cross-sectional view of a vertical row of NFLFS in the vertical direction of the Magixoom system in accordance with an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an image array and an ultra-high-resolution image in a photographing mode according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of an image array, a dynamic tracking framing frame, and a dynamic tracking image in a dynamic tracking mode according to an embodiment of the present invention.
  • FIG. 8 is a schematic flow chart of a Magixoom method according to an embodiment of the present invention.
  • FIG. 9 is a schematic flowchart of a full-process calculation mode according to an embodiment of the present invention.
  • Figure 10 is a schematic diagram of nine array images taken by the 3x3 Magixoom system.
  • FIG. 11 is a schematic diagram of determining auxiliary point selection according to an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of a template crossover line of MSAD registration according to an embodiment of the present invention.
  • FIG. 13 is a schematic diagram showing an overlapping area formed by an image 1 and an image 2 according to an embodiment of the present invention.
  • FIG. 14 is a schematic diagram of a fusion result obtained by using a coverage fusion algorithm according to an embodiment of the present invention.
  • FIG. 15 is a schematic diagram of an irregular polygon image obtained by stitching according to an embodiment of the present invention.
  • FIG. 16 is a schematic diagram of cutting an irregular polygon image according to an embodiment of the present invention.
  • FIG. 17 is a schematic flowchart of a data mapping mode according to an embodiment of the present invention.
  • FIG. 18 is a schematic diagram of two image overlapping structures according to an embodiment of the present invention.
  • FIG. 19 is a schematic flowchart of a data mapping mode based on micro-registration according to an embodiment of the present invention.

Detailed Description
  • FIG. 1 illustrates a Magixoom system in accordance with an embodiment of the present invention.
  • the Magixoom system includes an NFLFS array 10, an image processing device 12, and a system control device 14.
  • the NFLFS array is M rows and N columns.
  • the fields of view of adjacent NFLFS 100 overlap each other, and the main optical axes of the NFLFS 100 converge at a point or a neighborhood of the point, which is the optical center of the Magixoom system, where M and N are both natural numbers greater than or equal to 1, and at least one of M and N is greater than 1 and not equal to 2.
  • the fields of view of adjacent NFLFS in the narrow-field-of-view long-focus imaging subsystem array overlap slightly at the designed object distance; the angle between the main optical axes of adjacent NFLFS in the same row or the same column is smaller than the field angle of each NFLFS in that row or column.
  • the initial overlap point in the object direction of the overlapping regions is closer to the near end of the design depth of field of the Magixoom system.
  • the system control unit 14 is connected to the image processing unit 12 and the NFLFS array 10 of M x N to control the operation of the various components of the Magixoom system.
  • the horizontal field of view of the narrow-field-of-view long-focus imaging subsystem NFLFS 100 at the i-th row and j-th column is ω_h + Ω_h + ε_hij, and its vertical field angle is ω_v + Ω_v + ε_vij;
  • the horizontal field of view HFOV of the Magixoom system is N·ω_h + Ω_h + ε_h;
  • the vertical field of view VFOV of the Magixoom system is M·ω_v + Ω_v + ε_v;
  • Ω_h, the field-of-view overlap angle of the horizontally adjacent NFLFS, is approximately the angle subtended at the optical center of the system by the edge of the horizontal field-of-view overlap region at infinity of the horizontally adjacent NFLFS; ω_h is the angle between the main optical axes of the horizontally adjacent NFLFS 100, with 180° > ω_h > 0° and 90° > Ω_h > 0°;
  • Ω_v, the field-of-view overlap angle of vertically adjacent NFLFS, is approximately the angle subtended at the optical center of the system by the edge of the vertical field-of-view overlap region at infinity of the vertically adjacent NFLFS, with 180° > ω_v > 0° and 90° > Ω_v > 0°.
  • the main optical axes of the NFLFS 100 converge at an intersection point, which is the optical center of the Magixoom system; the optical center is located in front of or behind the Magixoom system.
  • if the intersection of the main optical axes of all NFLFS is behind the NFLFS, the intersection is called the back optical center, and the Magixoom system is thus a rear-optical-center system.
  • if the intersection of the main optical axes of all NFLFS is in front of the NFLFS, the intersection is called the front optical center, and the Magixoom system is thus a front-optical-center system.
  • Figures 3a and 3b are schematic illustrations of the front and back optical centers in a 3 X 3 NFLFS array.
  • the required field of view of the Magixoom system and the resolution required at different object distances are determined according to the specific application requirements and cost of the Magixoom system; according to the resolution of the lens, the resolution of the CCD/CMOS photoelectric sensor and other factors, the design parameters of the Magixoom system's M × N NFLFS array are determined, such as the number of rows and columns of NFLFS, ω_h, ω_v, Ω_h and Ω_v.
  • the image processing device 12 performs splicing processing on the array images having the adjacent overlapping characteristics taken by the narrow field of view telephoto imaging system array to obtain an ultra-high resolution wide field of view image.
  • the wide field of view scene image after the stitching process can be stored in the memory 22.
  • the image processing apparatus 12 performs splicing processing on the array image having the adjacent overlapping characteristics, and the image splicing technique in the prior art is used, and details are not described herein again.
  • the system control device 14 in the Magixoom system of the embodiment of the present invention may further include a mode control function.
  • the system control device 14 is connected to each of the NFLFS and to the image processing device 12. With the system control unit 14, the user can choose to have the Magixoom system operate in a photographing mode or a dynamic tracking mode or a macro mode.
  • the photographing mode includes a panoramic spherical mode and a panoramic planar mode.
  • in the panoramic spherical mode, the image processing device 12 directly splices the array images taken by the NFLFS array, whose image planes are distributed along an approximate spherical surface, to obtain a panoramic image whose image plane lies along the approximate spherical surface, without performing projection-transform splicing preprocessing; in the panoramic plane mode, the image processing device 12 performs splicing preprocessing including image projection transformation and then performs splicing processing to obtain an image of the planar shooting mode, wherein the image projection transformation refers to projecting the array image formed by the NFLFS array, with image planes distributed along a spherical surface, onto a specified plane tangent to the sphere centered at the optical center of the Magixoom system, within the Magixoom system's field of view, to obtain a planarly distributed image array.
  • the NFLFS array is placed under unified exposure control by the system control device 14 to photograph the wide-field scene once, obtaining an image array as shown in FIG. 6; the captured images are sent to the image processing apparatus 12 for stitching preprocessing and stitching processing to obtain the ultra-high-resolution image contained within the outer edge of the image array shown in FIG. 6, and the stitched ultra-high-resolution image is stored in the memory 22.
  • unified exposure control means that the system control device 14 uniformly controls the electronic shutters of all NFLFS sensors so that they expose at the same time; exposure at the same time covers both the case where every NFLFS has the same exposure duration and the case of different exposure durations, and in the latter case the exposure period of an NFLFS with a longer exposure duration covers the exposure period of an NFLFS with a shorter exposure duration.
  • the narrow-field-of-view telephoto imaging system array photographs the whole scene under the same shutter control; the NFLFS array is uniformly exposed by the system control device 14 to photograph the whole scene and obtain the image array shown in FIG. 6.
  • the system control device 14 defines a dynamic tracking finder frame as shown in FIG. 7 according to the coordinates and framing range of the tracked object, and the image processing device 12 performs stitching preprocessing
  • and stitching processing only on the image within the dynamic tracking finder frame, obtaining the image within the frame as shown in FIG. 7, and stores that image to the memory 22.
  • the stitching preprocessing includes an image projection transformation, where the image projection transformation projects the array image within the dynamic tracking finder frame, formed by NFLFS whose image planes are distributed along the sphere,
  • onto a specified tangent plane of the sphere centered on the Magixoom system's optical center, within the system's field of view, to obtain a planarly distributed image array.
  • in the dynamic tracking mode or the macro mode, the user can view, through the display and the video display software, the dynamic close-up video within the dynamic tracking finder frame or the video captured by the macro imaging system.
  • the dynamic target object is identified by the pattern recognition function of the software in the image processing device to determine its position coordinates; the finder frame is then determined from those coordinates and the required framing range.
  • the tracked scene is then shot and the output dynamic video is processed; if the finder frame of the tracked dynamic target object spans the shooting fields of view of more than two NFLFS, the images within the frame are stitched.
  • a wide field of view imaging system can be set as a dynamic tracking aid on the side of the Magixoom system.
  • the imaging system 20 is configured to acquire the coordinates of the tracked target when the Magixoom system operates in the dynamic tracking mode and to transmit them to the Magixoom system in real time.
  • the dynamic tracking assisted imaging system 20 is also coupled to the system control unit 14.
  • the imaging field of view of the dynamic tracking assisted imaging system 20 coincides with the field of view of the Magixoom system, allowing the Magixoom system and the auxiliary imaging system to simultaneously capture the exact same scene with the same scene coordinates.
  • the coordinates of the tracked dynamic target object are determined by tracking it in the displayed image captured by the auxiliary imaging system, using an external tool such as a mouse, or a finger on the touch screen.
  • the dynamic video data are then selected in the Magixoom system according to the coordinates of the tracked dynamic target object and the framing required by the application, and the dynamic video of the tracked object is output; if the finder frame of the tracked object spans the shooting fields of view of more than two NFLFS, the images within the frame are stitched to produce the dynamic tracking video.
  • the Magixoom system can also include a macro imaging system 18 coupled to system control unit 14.
  • the Magixoom system generally shoots only scenes with distant objects, letting the user see a wide field of view while viewing small scenes in close-up; owing to its focus position and angle of view, however, this mode is not suitable for macro imaging. Therefore, when the Magixoom system needs to capture an object at macro distance, that is, when operating in the macro mode, the M × N NFLFS array does not operate under the control of the system control device 14, and the macro imaging system 18 independently performs macro photo or video capture.
  • the memory may be located in the Magixoom system or external to the Magixoom system.
  • the image processing device is connected to the external memory through the data line.
  • the Magixoom system further includes a display 16, browsing software, and video browsing software for the user to view images and videos of the Magixoom system.
  • the browsing software and video browsing software can be installed into the system control device 14.
  • the image of the wide-field scene seen through the display 16 and the browsing software is an image obtained by sampling the originally captured image to reduce the amount of data;
  • the close-up image within the wide-field scene seen through the display 16 and the browsing software is a partial close-up image within the wide-field scene, or even an interpolated and magnified version of such a local close-up image.
  • the originally captured image may be stored in a memory for viewing by the user via display 16 and browsing software, or may be sent directly from image processing device 12 to display 16 for viewing by the user.
  • the imaging resolution of the Magixoom system is much greater than the display resolution.
  • the optical resolution of the Magixoom system greatly exceeds the resolution of conventional imaging systems with the same optical specifications at general industrial accuracy, with an ultra-high LW/PH value.
  • the imaging resolution of the Magixoom system is much larger than an ordinary display resolution. Therefore, when a display of ordinary resolution shows the imaging of the system of the present invention, the ultra-high-resolution wide-field image is sampled and adapted to the display resolution for wide-field display. For any local position in the image formed by the ultra-high-resolution imaging system of the present invention, image pixels of display-resolution size are acquired centered on that position and shown on an ordinary display, yielding a partial close-up high-resolution image.
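The display behaviour described above can be sketched as follows. This is an illustrative fragment only; the function names `wide_view` and `close_up` are hypothetical, and nearest-neighbour decimation stands in for whatever sampling the product uses.

```python
import numpy as np

def wide_view(image, disp_h, disp_w):
    """Downsample the ultra-high-resolution image to the display
    resolution by regular sampling (nearest-neighbour decimation)."""
    h, w = image.shape[:2]
    rows = (np.arange(disp_h) * h) // disp_h
    cols = (np.arange(disp_w) * w) // disp_w
    return image[rows][:, cols]

def close_up(image, center, disp_h, disp_w):
    """Extract a display-resolution window centred on `center`
    (row, col): a local close-up at full imaging resolution."""
    h, w = image.shape[:2]
    r0 = min(max(center[0] - disp_h // 2, 0), h - disp_h)
    c0 = min(max(center[1] - disp_w // 2, 0), w - disp_w)
    return image[r0:r0 + disp_h, c0:c0 + disp_w]
```

Both views have the display's pixel count, but the close-up keeps the full captured detail of its neighbourhood, which is the effect the paragraph above describes.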
  • the shooting application software in the image processing apparatus 12 sets the dynamic finder frame according to the specified tracking dynamic position and takes out the data in the dynamic framing frame in real time for shooting, thereby obtaining a dynamic shooting video for tracking the dynamic object.
  • Each NFLFS uses EDOF or EIDOF for large depth of field imaging.
  • EDOF and EIDOF are both prior art and are not described further here.
  • the image processing device 12 may include one or more of the following modules to improve stitching accuracy, stitching efficiency and system image quality: a color correction module, which replaces the automatic white balance function of each NFLFS and corrects color distortion in each NFLFS image; a geometric correction module for geometric distortion correction of each NFLFS image; a brightness correction module for brightness distortion correction of each NFLFS image; and an EDOF or EIDOF decoding module for decoding images made by NFLFS using EDOF or EIDOF technology.
  • the Magixoom system in an embodiment of the invention may also include an output interface 24 coupled to system control unit 14. Through the output interface 24, it is possible to externally communicate with the system control device 14 to perform related operations, such as accessing the memory 22 to acquire an image.
  • the M × N NFLFS array 100 is a 3 × 3 NFLFS array 100.
  • the vertical field resolution LW/PH of the Magixoom system in this example is close to 5783, and the horizontal field of view resolution LW/PH is close to 7800.
  • the spatial resolution at 100m object distance (corresponding to each pixel) is 1.15cm-1.19cm.
  • Each NFLFS in the Magixoom system uses EIDOF technology, and the target object is clearly imaged 3m ahead, and each NFLFS has a depth of field of 2m to infinity;
  • the Magixoom system consists of 3 x 3 sub-NFLFS.
  • Each NFLFS CCD uses a 1/4" target surface (approximately 4.5mm diagonal, 2.7mm high and 3.6mm wide).
  • the resolution of the CCD is 2592 × 1944;
  • the angle of view in the diagonal direction of the NFLFS lens imaged as shown in Figure 4 and Figure 5 should be 22°, corresponding to a horizontal field of view of the NFLFS lens of approximately 17.7°;
  • the NFLFS lens has a vertical field of view of 13.3°.
  • an optical characteristic of the system is that the fields of view of adjacent NFLFS lenses overlap.
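The horizontal and vertical fields of view of this example follow from the sensor geometry. The check below is an illustrative computation, not part of the patent: it derives both angles from the 1/4" target surface (3.6 mm × 2.7 mm, 4.5 mm diagonal) and the 22° diagonal angle of view, assuming a rectilinear lens.

```python
import math

# 1/4" CCD target surface: ~4.5 mm diagonal, 3.6 mm wide, 2.7 mm high.
SENSOR_W, SENSOR_H, SENSOR_D = 3.6, 2.7, 4.5

def fov_from_diagonal(diag_fov_deg):
    """Derive the horizontal and vertical fields of view of a
    rectilinear lens on this sensor from its diagonal field of view."""
    # focal length (mm) implied by the diagonal angle of view
    f = (SENSOR_D / 2) / math.tan(math.radians(diag_fov_deg) / 2)
    hfov = 2 * math.degrees(math.atan((SENSOR_W / 2) / f))
    vfov = 2 * math.degrees(math.atan((SENSOR_H / 2) / f))
    return hfov, vfov

hfov, vfov = fov_from_diagonal(22.0)
```

Running it gives a horizontal field of view of about 17.7° and a vertical field of view of about 13.3°, consistent with the stated vertical value of the example.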
  • according to different product application requirements, the image processing apparatus is either an image processing apparatus of the full-calculation mode or an image processing apparatus of the data-mapping mode.
  • the image processing device of the full calculation mode may further include: an image projection module, an image overlap region feature point finding module, an image registration splicing module, an image fusion module, and an image cropping module (not shown), wherein
  • Image projection module projecting the sensor coordinate system of each array image to the system coordinate system to obtain a corresponding projection image in the system coordinate system;
  • Image overlapping area feature point finding module finding feature points in an overlapping area of the projected image
  • Image registration splicing module Using the feature points, find the corresponding registration points in the corresponding overlapping images, and screen the registration point pairs to find valid registration point pairs; according to the valid registration point pairs found, calculate the relative positional relationship between the projected images of the respective sensors, and perform image stitching in the system coordinate system;
  • Image fusion module adjusts the hue, brightness, and saturation between adjacent images after stitching to achieve a smooth transition between hue, brightness, and saturation between adjacent images;
  • the image processing apparatus of the data mapping mode may further include: a pixel mapping module, where
  • the pixel mapping module is configured, for the object distance of the actual photographing and the object distance of the product design, to generate the Magixoom system image from the captured array image according to the fixed a priori mapping relationship between the array image and the Magixoom system image.
  • the fixed a priori mapping relationship between the array image and the Magixoom system image refers to: for an array image generated by actual shooting or by simulation during system design, a pixel mapping relationship between the array image and the Magixoom system image is established through image projection, overlap-region feature point searching, registration stitching, fusion and cropping.
  • the image processing apparatus may further include:
  • the micro-registration module is used to obtain, by accurate calculation, the offset of the relative positional relationship between adjacent projected images caused by the difference between the actual object distance and the product-design object distance;
  • the pixel mapping relationship is corrected according to this offset, yielding an updated pixel mapping relationship.
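The data-mapping idea above can be sketched as follows. This is a minimal illustration under the assumption that the mapping table stores weighted (sensor, source pixel, destination pixel) entries; the entry layout and both function names are hypothetical, not from the patent.

```python
import numpy as np

def apply_pixel_map(sensor_images, pix_map, out_shape):
    """Generate the output image from the array images via a precomputed
    pixel-mapping table.  `pix_map` is a list of
    (sensor_idx, src_row, src_col, dst_row, dst_col, weight) entries;
    weights of entries sharing a destination pixel are assumed to sum to 1."""
    out = np.zeros(out_shape, dtype=float)
    for p, sr, sc, dr, dc, w in pix_map:
        out[dr, dc] += w * sensor_images[p][sr, sc]
    return out

def micro_register(pix_map, offsets):
    """Micro-registration sketch: correct the mapping for an
    object-distance change by shifting each entry's source address by
    the per-sensor offset (drow, dcol)."""
    return [(p, sr + offsets[p][0], sc + offsets[p][1], dr, dc, w)
            for p, sr, sc, dr, dc, w in pix_map]
```

Because the table is fixed a priori, generating the output image is a pure lookup-and-accumulate pass; micro-registration only rewrites the table, not the per-frame work.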
  • FIG. 8 is a schematic flow chart of a Magixoom method according to an embodiment of the present invention. Referring to Figure 8, the process includes:
  • Step 81 setting an NFLFS array consisting of M rows and N columns of narrow field of view telephoto imaging subsystem NFLFS, the fields of view of adjacent NFLFS overlapping each other, and the main optical axis of each NFLFS converges at a point or a neighborhood of the point Inside, this point is the optical center of the Magixoom system;
  • both M and N are natural numbers greater than or equal to 1, and at least one of M and N is greater than 1 and is not equal to 2.
  • Step 83 setting the horizontal field of view HFOV of the Magixoom system to N·α_h + ω_h + ε_h,
  • and the vertical field of view VFOV of the Magixoom system to M·α_v + ω_v + ε_v;
  • where α_h is the design angle between the main optical axes of horizontally adjacent NFLFS, α_v is the design angle between the main optical axes of vertically adjacent NFLFS; ω_h is the field-of-view overlap angle of horizontally adjacent NFLFS, approximately
  • the angle subtended at the system optical center by the edge of the horizontal overlap region of horizontally adjacent NFLFS at infinite object distance; ω_v is the field-of-view overlap angle of vertically adjacent NFLFS, approximately the angle subtended at the system optical center by the edge of the vertical overlap region of vertically adjacent NFLFS at infinite object distance;
  • ε_h and ε_v are respectively the horizontal and vertical field-angle errors of the NFLFS in the i-th row and j-th column;
  • Step 84 a system control device, connected to the image processing device and the NFLFS array, for controlling the operation of each component of the Magixoom system;
  • Step 85 An image processing apparatus, configured to process an array image having adjacent overlapping characteristics captured by the NFLFS array to obtain a wide field of view ultra-high resolution image.
  • the image processing device includes an image processing device of a full-calculation mode or an image processing device of a data mapping mode according to different product application requirements.
  • the image processing device of the full calculation mode includes: image projection, image overlap region feature point search, image registration stitching, image fusion, and image cropping, wherein
  • Image projection Projecting the sensor coordinate system of each array image to the system coordinate system, and obtaining a corresponding projection image in the system coordinate system;
  • Image overlap area feature point finding Find feature points in the overlapping area of the projected image;
  • Image registration stitching Use the feature points to find the corresponding registration points in the corresponding overlapping images, and screen the registration point pairs to find valid registration point pairs; according to the valid registration point pairs found, calculate the relative positional relationship between the sensor images, and stitch the images in the system coordinate system;
  • Image fusion Adjust the hue, brightness, and saturation between adjacent images after stitching to achieve a smooth transition between hue, brightness, and saturation between adjacent images;
  • Image cropping Take the inscribed quadrilateral of the image obtained after stitching, and cut off the portion other than the quadrilateral to obtain the image output.
  • the image processing apparatus supports three implementations of image processing: the first is a full-process calculation mode, the second is a data mapping mode based on prior knowledge, and the third is a data mapping mode based on micro-registration.
  • the specific method for image projection is as follows: the image projection projects the sensor coordinate system of each array image into the system coordinate system to obtain the corresponding projected image in the system coordinate system; that is, based on a known image projection matrix,
  • each pixel of the array image in its sensor coordinate system is projected and interpolated to yield the projected image in the system coordinate system.
  • the sensor coordinate system is the image coordinate system in which a scene is projected into an image according to the optical parameters of the lens, the sensor photosensitive plane and the sensor parameters; the sensor coordinate systems are mutually independent. The system coordinate system is the coordinate system of the final output image of the imaging system of the present invention.
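The projection step can be sketched as follows, assuming a 3 × 3 projection (homography) matrix mapping sensor coordinates to system coordinates. Nearest-neighbour sampling stands in for the interpolation mentioned above, and the function name is hypothetical.

```python
import numpy as np

def project_image(src, H, out_shape):
    """Warp a sensor image into the system coordinate frame with a 3x3
    projection matrix H (sensor coords -> system coords).  Uses inverse
    mapping: for each output pixel, find its source pixel via H^-1."""
    Hinv = np.linalg.inv(H)
    out = np.zeros(out_shape, dtype=src.dtype)
    rows, cols = np.indices(out_shape)
    ones = np.ones_like(rows)
    # homogeneous system-frame coordinates (x, y, 1) of every output pixel
    sys_pts = np.stack([cols.ravel(), rows.ravel(), ones.ravel()])
    sx, sy, sw = Hinv @ sys_pts
    sx = np.rint(sx / sw).astype(int)
    sy = np.rint(sy / sw).astype(int)
    valid = (0 <= sx) & (sx < src.shape[1]) & (0 <= sy) & (sy < src.shape[0])
    flat = out.reshape(-1)
    flat[valid] = src[sy[valid], sx[valid]]
    return out
```

Inverse mapping is the usual choice here because it leaves no holes in the output; a production implementation would replace the rounding with sub-pixel interpolation.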
  • the corresponding registration points are found in the corresponding overlapping images by using the feature points, and the registration point pairs are screened to find an effective registration point pair; the calculated sensors are obtained according to the found effective registration point pairs.
  • the hue, brightness, and saturation are adjusted between adjacent images after stitching to achieve a smooth transition between hue, brightness, and saturation between adjacent images;
  • the image projection step and the overlap-region feature point searching step have no fixed order; that is, the image projection step may also come after, or run at the same time as, the feature point searching step; similarly, there is no fixed order between the image fusion step and the image cropping step.
  • FIG. 9 is a schematic flowchart of the full-process calculation mode according to an embodiment of the present invention. Referring to Figure 9, the process includes: image projection based on prior knowledge, feature point searching in overlapping regions, normal-flow processing or exception processing, image registration stitching, image fusion, and image cropping. After the feature points of the overlapping regions are found, image registration and stitching may be performed either after normal-flow processing or after exception processing. Specifically,
  • Step 901 Perform image projection on the captured array image according to a preset projection matrix;
  • the image projection includes: a projection method that calculates the projection matrix from the relationships between the feature points of adjacent captured images, and a projection method based on a prior projection matrix; wherein,
  • the above feature point finding algorithms are relatively mature and their correctness is guaranteed, but their computational complexity is comparatively high.
  • a feature point finding algorithm based on regional extrema is therefore proposed.
  • a feature point of the Magixoom system has the extremum property of its local region; such an extremum is a good extreme point under a given threshold condition and carries a large amount of information.
  • the regional extreme points are obtained as follows:
  • A11 in the vicinity of the boundary of the overlapping area, select a starting search point for the effective regional-extremum feature points, and on any pair of intersecting lines passing through the starting search point, determine at a fixed step in each direction the auxiliary judgment points of the regional-extremum feature point on the two lines of the cross;
  • A12 calculate the absolute value of the sum of the pixel-value differences between the auxiliary points and the search point on each line of the cross, and sum the absolute values over the two lines; if the summed value is higher than a preset condition threshold,
  • the search point is recorded as a current regional-extremum feature point in the extremum feature point list.
  • the effective regional-extremum feature points are selected adjacent to the boundary of the overlapping region between array images.
  • from the starting search point, the auxiliary judgment points of the regional-extremum feature point are determined on each of any two intersecting lines passing through that point.
  • the feature points must be extreme points of the overlapping area.
  • f(i, j) is the pixel value of the search point P(i, j); i and j are positive real numbers;
  • the value for the second line of the cross is the absolute value of the sum of the pixel-value differences between the auxiliary judgment points P(i + step5, j - step6), P(i + step7, j + step8) and the search point P(i, j);
  • a preset condition threshold is given;
  • step is the sampling step size.
  • FIG. 11 is a schematic diagram of auxiliary judgment point selection according to an embodiment of the present invention.
  • the point P(i, j) is the intersection point; one line of pixels through P(i, j) is taken as the first line of the cross, and the other line of pixels through P(i, j)
  • is taken as the second line of the cross; the positional relationship of the different pixel points under the sampling step is as shown in the figure.
  • moving the search point and repeating steps A11 and A12, traverse the entire overlapping area to obtain all search points whose summed value is higher than the preset condition threshold, giving the list of regional-extremum feature points of the overlapping area.
  • following steps A11 to A13, the extremum feature points of the overlapping region can also be found using three or more intersecting lines.
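Steps A11 to A13 can be sketched as follows. This is an illustrative reading only: it uses one horizontal and one vertical cross line with a single symmetric step, whereas the patent allows arbitrary intersecting lines and per-direction step sizes; the function name is hypothetical.

```python
import numpy as np

def regional_extremum_points(img, threshold, step=2):
    """Slide a search point over the (overlap) region; on two
    intersecting lines through it, take two auxiliary points per line
    at distance `step`, sum the pixel-value differences per line, take
    the absolute value, and add the two lines' values (A12).  Points
    whose summed value exceeds the preset threshold are recorded as
    regional-extremum feature points (A13)."""
    h, w = img.shape
    pts = []
    for i in range(step, h - step):
        for j in range(step, w - step):
            c = float(img[i, j])
            line1 = abs((c - img[i, j - step]) + (c - img[i, j + step]))
            line2 = abs((c - img[i - step, j]) + (c - img[i + step, j]))
            s = line1 + line2
            if s > threshold:
                pts.append((i, j, s))
    # strongest responses first, convenient for the later A14 screening
    pts.sort(key=lambda t: -t[2])
    return pts
```

A pixel that is locally extreme differs from its neighbours in the same direction on both sides, so the per-line sums reinforce instead of cancelling; that is why this response singles out extrema rather than edges.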
  • the extremum feature points can also be screened to make them more accurate. The screening further includes:
  • A14 sort the obtained regional-extremum feature points by their summed value from large to small, taking the top-ranked extreme points as candidate regional-extremum feature points;
  • A15 update the list of regional-extremum feature points of the overlapping area:
  • the W1 × W2 template information amount is calculated centered on each candidate extreme point, the template information amounts are sorted, and the N candidate extreme points with the largest template information are taken.
  • N, W1 and W2 can be determined according to actual needs; W1 × W2 is the template size centered on the candidate extreme point.
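The template screening of steps A14/A15 can be sketched as follows. The patent does not fix the information measure, so grey-level variance is used here purely as a stand-in; the function name is hypothetical.

```python
import numpy as np

def filter_by_template_information(img, candidates, n_keep, w1=5, w2=5):
    """Around each candidate extremum point take a w1 x w2 template,
    score its information content (here: grey-level variance as an
    assumed measure), and keep the n_keep highest-scoring candidates."""
    h, w = img.shape
    scored = []
    for (i, j) in candidates:
        r0, r1 = max(i - w1 // 2, 0), min(i + w1 // 2 + 1, h)
        c0, c1 = max(j - w2 // 2, 0), min(j + w2 // 2 + 1, w)
        scored.append(((i, j), float(np.var(img[r0:r1, c0:c1]))))
    scored.sort(key=lambda t: -t[1])
    return [p for p, _ in scored[:n_keep]]
```

Keeping only high-information templates discards points whose neighbourhood is too uniform to register reliably in the next stage.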
  • the image registration stitching algorithm includes:
  • an image registration algorithm based on the projected images is adopted; since there is no rotation or stretching between the projected images, only a translation relationship, accurate registration can be performed based on the correlation of the projected images.
  • the correlation criterion for accurate registration is to find the minimum of the sum of absolute differences over the cross lines.
  • the registration area is selected as follows: from the optical structure of the product and the object-distance range of the product's application scene, or from the positional relationship of the same scene in adjacent sensor images actually captured by the product together with the actual
  • object-distance range, the maximum variation range of the corresponding overlap area of adjacent images is obtained and the maximum registration range is determined.
  • the exact registration point pair corresponding to each extremum feature point of the registration region is then calculated, that is, the search for the precise registration points is performed.
  • FIG. 12 is a schematic diagram of a template crossover line of MSAD registration according to an embodiment of the present invention.
  • the cross lines through each registration-region extremum feature point are used as the template cross lines for MSAD registration, and the image point Q(ii, jj) in the search registration region with the smallest template cross-line SAD is found
  • and taken as the candidate registration point of the current extremum feature point of the registration area. That is, the candidate registration point corresponding to the registration-region extremum feature point P(i, j) is denoted Q(ii, jj), and each extremum feature point P(i, j) together with its candidate registration point Q(ii, jj) forms a candidate registration point pair.
  • finding candidate registration point pairs using the MSAD method includes:
  • Step 1 calculate the MSAD according to the formula;
  • step9, step10, step11 and step12 are the registration step sizes;
  • P(i, j) is an extremum feature point of the registration image region;
  • SAD (Sum of Absolute Differences) is the sum of the absolute values of the element-wise differences of two corresponding sequences; MSAD (Minimum Sum of Absolute Differences) is the minimum among multiple SAD values.
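The SAD/MSAD search can be sketched as follows. This is an illustrative cross-line template matcher: the cross-arm length, the search-window handling and the function names are assumptions, not the patent's exact formula.

```python
import numpy as np

def sad(a, b):
    """Sum of Absolute Differences of two equal-length sequences."""
    return float(np.abs(np.asarray(a, float) - np.asarray(b, float)).sum())

def msad_match(ref_img, search_img, p, search_range, arm=3):
    """Find the candidate registration point Q for extremum point P:
    compare the cross-line template through P in the reference image
    with every position in the search window of the adjacent image and
    return the position with the Minimum SAD."""
    i, j = p
    tmpl = np.concatenate([ref_img[i, j - arm:j + arm + 1],
                           ref_img[i - arm:i + arm + 1, j]])
    best, best_q = float('inf'), None
    for ii in range(i - search_range, i + search_range + 1):
        for jj in range(j - search_range, j + search_range + 1):
            cand = np.concatenate([search_img[ii, jj - arm:jj + arm + 1],
                                   search_img[ii - arm:ii + arm + 1, jj]])
            s = sad(tmpl, cand)
            if s < best:
                best, best_q = s, (ii, jj)
    return best_q, best
```

Because only a translation relates the projected images, comparing one-dimensional cross lines instead of full 2-D patches keeps the per-candidate cost low while still anchoring both axes.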
  • the precise registration point pairs are then selected and serve as the reference positional relationship for the final image stitching; that is, a distance-difference integration algorithm is used to screen the N candidate registration point pairs for rationality.
  • for any two candidate registration point pairs (P_k1(i, j), Q_k1(ii, jj)) and (P_k2(i, j), Q_k2(ii, jj)), the distances |P_k1 P_k2| and |Q_k1 Q_k2| are calculated, and the absolute difference | |P_k1 P_k2| - |Q_k1 Q_k2| | is accumulated as the rationality measure of the pair;
  • A23 the rationality measures of the candidate registration point pairs are sorted from small to large, and the top N' registration point pairs are selected;
  • the most reasonable N' registration point pairs serve as the reference positional relationship for the final image fusion;
  • N' can be determined according to actual needs.
  • the selected N' registration point pairs serve as reference points for the registration relationship between two adjacent images of the array image.
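The distance-difference screening can be sketched as follows. This is an illustrative reading: summing the score against all other pairs is an assumed aggregation, and the function name is hypothetical.

```python
import math

def screen_registration_pairs(pairs, n_keep):
    """Under a pure translation, the distance between any two P points
    must equal the distance between their Q mates.  Score each pair
    (P_k, Q_k) by summing | |P_k1 P_k2| - |Q_k1 Q_k2| | against every
    other pair, then keep the n_keep pairs with the smallest scores as
    the reference positional relationship for stitching."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    scores = []
    for k1, (p1, q1) in enumerate(pairs):
        s = sum(abs(dist(p1, p2) - dist(q1, q2))
                for k2, (p2, q2) in enumerate(pairs) if k2 != k1)
        scores.append((s, k1))
    scores.sort()
    return [pairs[k] for _, k in scores[:n_keep]]
```

Mismatched pairs disagree with every correct pair at once, so their accumulated score grows quickly and they fall to the bottom of the ranking.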
  • for a selected point, its position coordinate information in image 1 is P_1(x_1, y_1)
  • and its position coordinate information in image 2 is P_2(x_2, y_2);
  • the left and right boundaries of the overlap (fusion) region are known,
  • respectively x_left and x_right. The fusion methods include but are not limited to the following: image fusion using the following gradient fusion algorithm, in which the pixel value at any point of the overlap region
  • is calculated as: f(x_p) = ((x_right - x_p) / (x_right - x_left)) · f_1(x_p) + ((x_p - x_left) / (x_right - x_left)) · f_2(x_p),
  • where x_p is the horizontal position coordinate of the selected point, and f_1, f_2 are the pixel values of the two images at that point.
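The gradient fusion formula above can be sketched as follows for a horizontal overlap band; the function name is hypothetical.

```python
import numpy as np

def gradient_fuse(img1, img2, x_left, x_right):
    """Gradient (linear ramp) fusion of the overlap columns
    [x_left, x_right): img1's weight falls from 1 to 0 across the
    overlap while img2's rises from 0 to 1, i.e.
      f(x) = (x_right - x)/(x_right - x_left) * f1(x)
           + (x - x_left)/(x_right - x_left) * f2(x)."""
    out = img1.astype(float).copy()
    xs = np.arange(x_left, x_right)
    w2 = (xs - x_left) / float(x_right - x_left)
    out[:, x_left:x_right] = (1 - w2) * img1[:, x_left:x_right] \
                             + w2 * img2[:, x_left:x_right]
    out[:, x_right:] = img2[:, x_right:]
    return out
```

The linear ramp makes hue and brightness change gradually across the seam, which is exactly the smooth transition the fusion step is meant to achieve.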
  • FIG. 14 is a schematic diagram of a fusion result obtained using the coverage fusion algorithm according to an embodiment of the present invention. The figure shows the coverage-fusion result of two images; the black bold line marks where the grey values of the two images jump at the seam. In this embodiment of the present invention, the coverage fusion algorithm applies fusion processing only to the jump along the black bold line.
  • Step 905 Perform cropping processing on the image subjected to the image fusion processing, and output the image obtained by the cropping processing.
  • is the maximum line number of the irregular polygon.
  • a pixel point matrix is constructed according to each pixel point of the overlapping area of the array image, and multiplied by the relation matrix to obtain the pixel value of each pixel of the overlapping area.
  • Step 181 Pre-establish a pixel mapping relationship.
  • f_p(x, y) is the pixel value of the point (x, y) of the sensor image numbered p;
  • Step 801 pre-establishing a pixel mapping relationship
  • Step 803 calculate in real time and fine-tune the pixel mapping relationship;
  • the pre-established pixel mapping relationship is updated according to the fine-tuning correction, generating a corrected pixel mapping relationship.
  • Step 806 output the large-resolution image.
  • the image processing apparatus further includes:
  • the correction process is as follows: after acquiring the array image, feature points and precise registration points are searched for in some overlapping regions, and the offset (Δx, Δy) of the relative positional relationship between adjacent images is calculated; according to this offset, the image address, output image coordinate and weighting factor in the original pixel mapping relationship table are corrected to obtain a new pixel mapping relationship.

Abstract

The present invention relates to a "Magixoom" method and system. The system comprises an array of narrow-field-of-view long-focus optical imaging subsystems, an image processing device with an image stitching function and other functions, and a system control device with a system control function. In the present invention, the formed image has a very high resolution. Depending on the selected design criteria, obtaining a wide-angle image with a wide field of view and obtaining a close-up image with a narrow field of view can be achieved at the same time by selecting the field of view.
PCT/CN2013/081002 2012-08-07 2013-08-07 Système et procédé d'imagerie optique de très grande résolution et à large champ de vision WO2014023231A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210279808.9A CN102821238B (zh) 2012-03-19 2012-08-07 宽视场超高分辨率成像系统
CN201210279808.9 2012-08-07

Publications (1)

Publication Number Publication Date
WO2014023231A1 true WO2014023231A1 (fr) 2014-02-13

Family

ID=50067552

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/081002 WO2014023231A1 (fr) 2012-08-07 2013-08-07 Système et procédé d'imagerie optique de très grande résolution et à large champ de vision

Country Status (2)

Country Link
CN (1) CN102821238B (fr)
WO (1) WO2014023231A1 (fr)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104853089A (zh) * 2015-04-16 2015-08-19 深圳市威德视科技开发有限公司 一种实现完整高清图片的方法和装置
CN105872475A (zh) * 2016-05-20 2016-08-17 北京格灵深瞳信息技术有限公司 一种监控摄像装置
CN105933660A (zh) * 2016-05-20 2016-09-07 北京格灵深瞳信息技术有限公司 一种监控摄像装置
CN106815808A (zh) * 2017-01-20 2017-06-09 长沙全度影像科技有限公司 一种利用分块运算的图像拼接方法
US9867538B2 (en) 2016-03-21 2018-01-16 Canon Kabushiki Kaisha Method for robust eye tracking and ophthalmologic apparatus therefor
CN110673607A (zh) * 2019-09-25 2020-01-10 优地网络有限公司 Feature point extraction method and device for dynamic scenes, and terminal equipment
CN110910485A (zh) * 2019-12-16 2020-03-24 山东东艺数字科技有限公司 Immersive CAVE video production method
CN111833384A (zh) * 2020-05-29 2020-10-27 武汉卓目科技有限公司 Fast registration method and device for visible-light and infrared images
CN111971711A (zh) * 2018-04-10 2020-11-20 深圳华大智造科技有限公司 Fluorescence image registration method, gene sequencer and system, and storage medium
CN112082909A (zh) * 2020-04-24 2020-12-15 西安理工大学 Method for calculating cell flow velocity in a dual-linear-array scanning imaging system
CN112509038A (zh) * 2020-12-15 2021-03-16 华南理工大学 Adaptive image template cropping method and system combined with visual simulation, and storage medium
CN112995514A (zh) * 2021-03-03 2021-06-18 上海悦易网络信息技术有限公司 Method and device for obtaining the shooting object distance of an industrial camera
CN113313659A (zh) * 2021-04-25 2021-08-27 中国人民解放军火箭军工程大学 High-precision image stitching method under multi-aircraft cooperative constraints
CN113645462A (zh) * 2021-08-06 2021-11-12 深圳臻像科技有限公司 3D light field conversion method and device
CN113689321A (zh) * 2021-08-23 2021-11-23 陈凤妹 Image information transmission method and device based on stereoscopic projection encryption
CN113689339A (zh) * 2021-09-08 2021-11-23 北京经纬恒润科技股份有限公司 Image stitching method and device
CN113822987A (zh) * 2021-09-22 2021-12-21 杭州趣村游文旅集团有限公司 Automatic color-matching method between adjacent 3D real-scene models
CN113840123A (zh) * 2020-06-24 2021-12-24 上海赫千电子科技有限公司 Image processing device for vehicle-mounted images, and automobile
CN113947526A (zh) * 2020-07-16 2022-01-18 四川大学 Fast stitching method with improved scale-invariant feature transform
CN114070981A (zh) * 2021-11-09 2022-02-18 南通大学 Panoramic imaging device and panoramic imaging method for irregularly shaped pipelines
CN114257703A (зh) * 2021-12-14 2022-03-29 成都信和创业科技有限责任公司 Automatic detection method and device for image stitching and fusion in a four-tube low-light night-vision device
CN114286017A (zh) * 2021-11-15 2022-04-05 华能国际电力股份有限公司上海石洞口第二电厂 Free-viewpoint image acquisition method and system for very-large-scale power equipment
CN114636546A (zh) * 2022-03-10 2022-06-17 杭州海康威视数字技术股份有限公司 System for imaging synchronization detection
CN114897699A (zh) * 2022-05-25 2022-08-12 中国电建集团中南勘测设计研究院有限公司 Infrared image stitching method and device for wind turbine blades
CN115524343A (zh) * 2022-09-29 2022-12-27 哈尔滨工业大学 Mesoscopic characterization method for the physical structure of ice crystals
CN115797995A (zh) * 2022-11-18 2023-03-14 北京的卢铭视科技有限公司 Face liveness detection method, electronic device and storage medium
CN117876222A (zh) * 2024-03-12 2024-04-12 昆明理工大学 Unmanned-aerial-vehicle image stitching method for weak-texture lake-surface scenes
CN114636546B (zh) * 2022-03-10 2024-05-14 杭州海康威视数字技术股份有限公司 System for imaging synchronization detection

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102821238B (zh) * 2012-03-19 2015-07-22 北京泰邦天地科技有限公司 Wide-field-of-view ultra-high-resolution imaging system
CN103969958B (zh) * 2013-01-25 2016-03-30 上海微电子装备有限公司 Multi-exposure field-of-view stitching system and method
CN103338343A (zh) * 2013-05-29 2013-10-02 山西绿色光电产业科学技术研究院(有限公司) Method and device for seamless stitching of multiple images referenced to a panoramic image
CN104079808B (zh) * 2014-07-13 2017-05-10 西安电子科技大学 Ultra-high-resolution wide-field imaging system
US9307133B1 (en) 2015-02-11 2016-04-05 Pho Imaging Limited System and method of imaging for increasing image resolution
CN104702971B (zh) * 2015-03-24 2018-02-06 西安邮电大学 High-dynamic-range imaging method for a camera array
UA114549C2 (uk) * 2015-09-16 2017-06-26 Сергій Іванович Мірошниченко Multi-sensor video image former based on optoelectronic converters
KR102140882B1 (ko) * 2015-12-29 2020-08-04 코어포토닉스 리미티드 Dual-aperture zoom digital camera with automatically adjustable tele field of view (FOV)
CN107389687B (zh) * 2017-06-27 2020-10-09 北京航空航天大学 Electronic-component appearance image acquisition device and acquisition method
CN109636763B (zh) * 2017-10-09 2022-04-01 小元感知(北京)科技有限公司 Intelligent compound-eye surveillance system
CN107959767B (zh) * 2017-12-14 2020-04-10 中国科学院长春光学精密机械与物理研究所 Focusing and light-adjustment method guided by television tracking results
CN108234958B (zh) * 2018-02-06 2024-04-05 长沙学院 Elevated-spray fire truck and imaging system and imaging method therefor
CN108989739B (zh) * 2018-07-24 2020-12-18 上海国茂数字技术有限公司 Full-view video conference live-streaming system and method
CN108881873B (zh) * 2018-07-31 2019-12-17 杭州一隅千象科技有限公司 Method, device and system for high-resolution image fusion
CN109040601B (zh) * 2018-09-05 2020-06-26 清华-伯克利深圳学院筹备办公室 Multi-scale unstructured gigapixel VR panoramic photography system
CN109379522A (zh) * 2018-12-06 2019-02-22 Oppo广东移动通信有限公司 Imaging method, imaging device, electronic device and medium
CN109639997B (zh) * 2018-12-20 2020-08-21 Oppo广东移动通信有限公司 Image processing method, electronic device and medium
CN109379528A (zh) * 2018-12-20 2019-02-22 Oppo广东移动通信有限公司 Imaging method, imaging device, electronic device and medium
CN109639974A (zh) * 2018-12-20 2019-04-16 Oppo广东移动通信有限公司 Control method, control device, electronic device and medium
CN109600551A (zh) * 2018-12-29 2019-04-09 Oppo广东移动通信有限公司 Imaging method, imaging device, electronic device and medium
CN110493526B (zh) 2019-09-10 2020-11-20 北京小米移动软件有限公司 Image processing method, apparatus, device and medium based on multiple camera modules
CN114205522B (zh) 2020-01-23 2023-07-18 华为技术有限公司 Telephoto shooting method and electronic device
CN112085106A (zh) * 2020-09-10 2020-12-15 江苏提米智能科技有限公司 Image recognition method and device applied to multi-image fusion, electronic device and storage medium
CN113066011B (zh) * 2021-04-07 2022-11-11 合肥英睿系统技术有限公司 Image processing method, device, system, medium and electronic device
CN115272077B (zh) * 2022-07-29 2023-06-06 西安羚控电子科技有限公司 Image stitching method and system based on field-of-view fusion
CN116471490B (zh) * 2023-06-19 2023-08-29 清华大学 Variable-illumination gigapixel light-field intelligent imaging system, method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050031197A1 (en) * 2000-10-04 2005-02-10 Knopp David E. Method and apparatus for producing digital orthophotos using sparse stereo configurations and external models
US20090141043A1 (en) * 2007-11-30 2009-06-04 Hitachi, Ltd. Image mosaicing apparatus for mitigating curling effect
CN101976449A (zh) * 2010-11-25 2011-02-16 上海合合信息科技发展有限公司 Method for capturing and stitching multiple text images
CN102821238A (zh) * 2012-03-19 2012-12-12 北京泰邦天地科技有限公司 Wide-field-of-view ultra-high-resolution imaging system
CN102819835A (zh) * 2012-07-26 2012-12-12 中国航天科工集团第三研究院第八三五七研究所 Method for screening feature-point matching pairs in image stitching

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006098784A (ja) * 2004-09-29 2006-04-13 Toshiba Corp Viewing-angle control device and display device
JP5110350B2 (ja) * 2006-09-29 2012-12-26 Nltテクノロジー株式会社 Optical element, and illumination optical device, display device and electronic apparatus using the same
JP4586869B2 (ja) * 2008-03-13 2010-11-24 ソニー株式会社 Liquid crystal display device and electronic apparatus

Also Published As

Publication number Publication date
CN102821238A (zh) 2012-12-12
CN102821238B (zh) 2015-07-22

Similar Documents

Publication Publication Date Title
WO2014023231A1 (fr) 2014-02-13 System and method for ultra-high-resolution, wide-field-of-view optical imaging
WO2021073331A1 (fr) Terminal-device-based zoom blurred image acquisition device and method
US9325899B1 (en) Image capturing device and digital zooming method thereof
CN108600576B (zh) Image processing apparatus, method and system, and computer-readable recording medium
WO2017088678A1 (fr) Long-exposure panoramic image capture apparatus and method
JP5389697B2 (ja) In-camera generation of high-quality synthesized panoramic images
JP5969992B2 (ja) Stereoscopic (3D) panorama creation on a portable device
US20060120712A1 (en) Method and apparatus for processing image
WO2020007320A1 (fr) Multi-view image fusion method and apparatus, computer device, and storage medium
US20060078214A1 (en) Image processing based on direction of gravity
CN105530431A (zh) Reflective panoramic imaging system and method
CN112085659B (zh) Dome-camera-based panoramic stitching and fusion method, system and storage medium
JP2002503893A (ja) Virtual reality camera
JP2006211139A (ja) Imaging apparatus
WO2014183385A1 (fr) Terminal and image processing method therefor
JP3907008B2 (ja) Method and means for increasing the depth of field for photographs
JP2016129289A (ja) Image processing apparatus, imaging apparatus, image processing method, program, and storage medium
WO2021184302A1 (fr) Image processing method and apparatus, imaging device, movable carrier, and storage medium
CN108513057B (zh) Image processing method and device
US20230412755A1 (en) Image stitching in the presence of a full field of view reference image
JP2020053774A (ja) Imaging device and image recording method
TWI398716B (zh) Use the flash to assist in detecting focal lengths
WO2018196854A1 (fr) Photographing method, photographing apparatus, and mobile terminal
CN103546680B (zh) Distortion-free omnidirectional fisheye camera device and implementation method
JP6665917B2 (ja) Image processing apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13828244

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13828244

Country of ref document: EP

Kind code of ref document: A1