WO2017020150A1 - Image processing method, apparatus, and camera (一种图像处理方法、装置及摄像机) - Google Patents

Image processing method, apparatus, and camera

Info

Publication number
WO2017020150A1
WO2017020150A1 (PCT/CN2015/085641)
Authority
WO
WIPO (PCT)
Prior art keywords
image
coordinate system
jitter
target
angle
Prior art date
Application number
PCT/CN2015/085641
Other languages
English (en)
French (fr)
Inventor
俞利富
李泽飞
曹子晟
王铭钰
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2015/085641 (WO2017020150A1)
Priority to CN201580071829.3A (CN107113376B)
Publication of WO2017020150A1
Priority to US15/884,985 (US10594941B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/242 Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/681 Motion detection
    • H04N 23/6811 Motion detection based on the image signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/681 Motion detection
    • H04N 23/6812 Motion detection based on additional sensors, e.g. acceleration sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/682 Vibration or motion blur correction
    • H04N 23/683 Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N 25/61 Noise processing, e.g. detecting, correcting, reducing or removing noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G06V 10/243 Aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, and video camera.
  • A camera converts an optical image signal into an electrical signal for subsequent transmission and storage; the lens is a key component of the camera, and lenses are classified by their angle of view.
  • Camera lenses include standard lenses, wide-angle lenses, and so on; the angle of view of a wide-angle lens can reach 90 degrees or more.
  • The angle of view of a fisheye lens is close to or equal to 180 degrees, which makes it suitable for shooting a wide range of scenery.
  • the existing anti-shake technology is generally realized by a mechanical structure.
  • a common method is to use a physical pan/tilt to achieve anti-shake by three-axis rotation compensation.
  • However, a physical gimbal is not only costly but also requires the user to carry extra equipment, which is inconvenient.
  • the embodiment of the invention provides an image processing method, a device and a camera, which can better achieve anti-shake to obtain a higher quality image.
  • In a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
  • acquiring a captured image obtained by an image acquisition device;
  • acquiring jitter information associated with the captured image generated when the image acquisition device captures it, where the jitter information includes jitter angle information;
  • determining a target image from the acquired image based on the jitter information.
  • In a second aspect, an embodiment of the present invention provides another image processing method, where the method includes:
  • acquiring a captured image obtained by an image acquisition device;
  • acquiring jitter information associated with the captured image generated when the image acquisition device captures it, where the jitter information includes jitter angle information;
  • performing distortion correction on the captured image according to the jitter information and preset distortion correction parameters;
  • selecting a target image from the distortion-corrected image based on the jitter information.
  • the embodiment of the present invention further provides an image processing apparatus, including:
  • An image acquisition module configured to acquire a captured image obtained by the image collection device
  • a jitter acquisition module configured to acquire jitter information associated with the captured image when the image capture device is photographed, where the jitter information includes jitter angle information;
  • a processing module configured to determine a target image from the acquired image according to the jitter information.
  • the embodiment of the present invention further provides a camera, where the camera includes: a lens, an image sensor, and an image processor, where
  • the image sensor is configured to collect image data through the lens
  • the image processor is connected to the image sensor and is configured to obtain a captured image from the image data collected by the image sensor, to acquire the jitter information associated with the captured image generated when the image capturing device captures it, and to determine a target image from the acquired image according to the jitter information; the jitter information includes jitter angle information.
  • an embodiment of the present invention further provides another image processing apparatus, where the apparatus includes:
  • a first acquiring module configured to acquire a captured image obtained by the image capturing device
  • a second acquiring module configured to acquire jitter information associated with the captured image generated when the image capturing device is photographed, where the jitter information includes jitter angle information;
  • a first processing module configured to perform distortion correction on the acquired image according to the jitter information and preset distortion correction parameters
  • a second processing module configured to select a target image from the distortion-corrected image according to the jitter information.
  • the embodiment of the present invention further provides another camera, where the camera includes: a lens, an image sensor, and an image processor, where
  • the image sensor is configured to collect image data through the lens
  • the image processor is connected to the image sensor and is configured to obtain a captured image from the image data collected by the image sensor, and to acquire jitter information associated with the captured image generated when the image capturing device captures it, where the jitter information includes jitter angle information;
  • the captured image is subjected to distortion correction according to the jitter information and a preset distortion correction parameter, and the target image is selected from the distortion-corrected image according to the jitter information.
  • An embodiment of the present invention further provides an aerial vehicle comprising an aircraft body and a camera mounted on the aircraft body, the camera being the camera mentioned in the fourth aspect or the sixth aspect.
  • An embodiment of the present invention further provides an aerial vehicle comprising an aircraft body, an image acquisition device, and a processor, wherein the image acquisition device is mounted on the aircraft body, the processor has a data connection to the image acquisition device, and the processor includes the image processing apparatus mentioned in the third aspect or the image processing apparatus mentioned in the fifth aspect.
  • the embodiment of the invention performs image processing based on the jitter information, and the anti-shake function is completed during the processing, and no additional anti-shake hardware devices such as a pan/tilt are needed, which saves cost and is convenient for the user to use.
  • FIG. 1 is a schematic flow chart of an image processing method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flow chart of another image processing method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a method for determining a coordinate mapping relationship according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of the mapping between points in the captured image coordinate system and points in the target image coordinate system according to an embodiment of the present invention.
  • Figure 5 is a schematic diagram of the rotation of the world coordinate system to the tangent plane coordinate system
  • FIG. 6 is a schematic diagram of rotation of a tangent plane coordinate system to a target image coordinate system
  • FIG. 7 is a schematic diagram of distortion correction according to an embodiment of the present invention.
  • FIG. 8 is a schematic flowchart diagram of another image processing method according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram showing the relationship between a distortion corrected image and a target image according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of a processing module in the image processing apparatus of FIG. 10;
  • FIG. 12 is a schematic structural diagram of a camera according to an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present invention.
  • FIG. 14 is a schematic structural diagram of a first processing module in the image processing apparatus of FIG. 13.
  • FIG. 15 is a schematic structural diagram of another camera according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention.
  • the method in the embodiment of the present invention can be applied to various image capturing devices and implemented by one or more image processors. Specifically, the method includes:
  • S101 Acquire a captured image obtained by the image acquisition device.
  • The image capture device may be any smart device capable of capturing images, such as various cameras and video cameras. After a data connection is established with the image acquisition device, the captured image can be obtained from it.
  • The image capturing device may specifically use a wide-angle lens such as a fisheye lens, in which case the captured image is a fisheye image or the like.
  • S102 Acquire jitter information associated with the acquired image generated when the image capturing device is photographed, where the jitter information includes jitter angle information.
  • The jitter information may refer to the angular position change of the image capturing device, at the moment the current picture is captured, relative to the last time an image was captured.
  • The jitter information may also refer to the angular position change of the image capture device relative to a reference position when the current picture is captured; the reference position may be the initial position when the image capture device starts shooting. The jitter information mainly includes angle information of the image capturing device in three directions, specifically a pitch angle, a yaw angle, and a roll angle.
  • an angle sensor may be disposed in the image acquisition device to detect jitter information of the image acquisition device, for example, by a gyroscope.
  • the original data can be obtained directly from the angle sensor, and then the relevant angle information can be calculated, and the relevant angle information can be directly obtained from some angle sensors with computing power.
  • The angle sensor may be directly disposed on and fixedly connected to the image acquisition device. It may also be fixedly connected to an external device that is rigidly connected to the image capturing device, such as an aircraft on which the image capturing device is mounted; shaking of such an external device directly causes the image capturing device to shake.
  • An angle sensor fixedly attached to such an external device can therefore directly detect the jitter information of the image capturing device. It can be understood that if the external device is flexibly connected to the image capturing device, so that the angle sensor cannot accurately detect the motion of the image capturing device, the angle sensor should not be placed on the external device.
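Where the angle sensor only provides raw angular-rate data, the relevant angle information can be computed by integration, as the text notes. A minimal sketch, assuming fixed-interval gyro samples in rad/s (function and parameter names are illustrative, not from the patent; a production pipeline would fuse in accelerometer data to limit drift):

```python
def integrate_gyro(samples, dt):
    """Integrate angular-rate samples (rad/s) into pitch/yaw/roll angles (rad).

    `samples` is a list of (pitch_rate, yaw_rate, roll_rate) tuples taken at a
    fixed interval `dt` seconds; rectangular integration is used for brevity.
    """
    pitch = yaw = roll = 0.0
    for p, y, r in samples:
        pitch += p * dt
        yaw += y * dt
        roll += r * dt
    return pitch, yaw, roll
```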
  • S103 Determine a target image from the acquired image according to the jitter information.
  • S103 may be implemented as follows: perform coordinate transformation according to the jitter information to obtain a coordinate mapping relationship between the plane coordinate system of the acquired image and a target coordinate system; then determine the texture information of each pixel in the collected image (the gray value corresponding to the pixel) to obtain a texture mapping relationship. Based on the coordinate mapping relationship and the texture mapping relationship, together with the preset distortion correction parameters, the conversion from the acquired image to the target image is finally performed, yielding the distortion-corrected target image.
  • S103 may alternatively be implemented as follows: perform coordinate transformation according to the jitter information to obtain a coordinate mapping relationship between the plane coordinate system of the acquired image and a target coordinate system; then determine the texture information of each pixel in the captured image (the gray value corresponding to the pixel) to obtain the texture mapping relationship.
  • A conversion from the acquired image to an initial image is obtained, yielding a distortion-corrected image.
  • Shake compensation is then performed, and the shake-compensated target image is obtained from the distortion-corrected image.
  • Jitter compensation refers to: determining an initial region based on a preset angle of view in the distortion-corrected image, and then compensating the initial region according to the jitter information to obtain a target region; the image covered by the target region is the target image.
  • the preset viewing angle is smaller than the angle of view when the image capturing device captures the captured image.
  • The preset viewing angle may be the angle of view of a standard image, while the captured image corresponds to a larger angle of view, such as that of a fisheye image.
  • Alternatively, the acquired image corresponds to a standard field of view, and the preset field of view is smaller than the standard field of view.
  • For example, if the pitch angle information indicates that the image capturing device turned downward by a first angle, the initial region is moved in the reverse direction, upward, by the first angle;
  • if the yaw angle information indicates that the image capturing device turned to the left by a second angle,
  • the initial region is moved in the reverse direction, to the right, by the second angle.
  • The rest of the jitter compensation is done by analogy.
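The reverse-shift compensation above can be sketched as follows, assuming the region is tracked by its angular center and that positive pitch/yaw denote up/right; the sign convention and names are chosen for illustration and are not specified in the patent:

```python
def compensate_region(center, jitter):
    """Shift the initial crop region opposite to the detected jitter.

    `center` and `jitter` are (pitch, yaw) angles in degrees. Subtracting the
    jitter moves the region in the reverse direction: a downward device
    rotation (negative pitch) moves the region up, a leftward rotation moves
    it right, matching the compensation rule in the text.
    """
    pitch_c, yaw_c = center
    pitch_j, yaw_j = jitter
    return (pitch_c - pitch_j, yaw_c - yaw_j)
```

For example, a device pitched 5 degrees down with no yaw yields a region center 5 degrees above the uncompensated one.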
  • the embodiment of the invention performs image processing based on the jitter information, and the anti-shake function is completed during the processing, and no additional anti-shake hardware devices such as a pan/tilt are needed, which saves cost and is convenient for the user to use.
  • FIG. 2 is a schematic flowchart of another image processing method according to an embodiment of the present invention.
  • the method in the embodiment of the present invention can be applied to various image capturing devices and implemented by one or more image processors. Specifically, the method includes:
  • S201 Acquire an acquired image obtained by the image acquisition device.
  • the captured image can be captured by various cameras.
  • the captured image may be a wide-angle image.
  • a target image whose angle of view is smaller than the angle of view of the wide-angle image is subsequently selected from the wide-angle image.
  • S202 Acquire jitter information associated with the captured image generated when the image capturing device is photographed, where the jitter information includes jitter angle information.
  • The jitter information refers to the angular position change of the image capture device, when the current captured image is obtained, relative to the last time a captured image was obtained; or the jitter information refers to the angular position change of the image capture device relative to a reference position when the image is acquired. The jitter angle information includes a pitch angle, a yaw angle, and/or a roll angle.
  • the jitter information may be acquired by a sensor such as a gyroscope disposed on the image capture device.
  • The jitter information can be used in the subsequent plane coordinate mapping and in the jitter compensation that selects the target area.
  • the jitter information mainly includes the three-dimensional angle information mentioned above.
  • The jitter information may be calculated based on information such as the angular velocity sensed by a gyroscope or the like.
  • S203 Perform time calibration on the acquired captured image and the jitter information.
  • the time calibration is performed to ensure that the data collected by the sensor such as the gyroscope is aligned with the captured image collected by the camera, and the jitter information is associated with the acquired captured image.
  • Specifically, the shooting time at which the captured image was taken is obtained, together with the jitter data sensed by a sensor such as a gyroscope at that shooting time and the jitter data at the time the previous captured image was taken; from these, the jitter information of the acquired image is computed.
  • The shooting time and the sensing times of the two pieces of jitter data must correspond correctly, so that the jitter information is accurate at the moment of capture and errors in subsequent processing are reduced or even avoided.
  • After the time calibration, the determination of the target image from the acquired image is triggered, that is, the execution of S204 described below is triggered.
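One plausible way to perform the time calibration of S203, assuming the gyro samples carry timestamps on the same clock as the camera (an assumption; the patent does not specify the mechanism), is to linearly interpolate between the two gyro samples that bracket the frame's shooting time:

```python
from bisect import bisect_left

def jitter_at(frame_t, gyro_ts, gyro_angles):
    """Linearly interpolate the gyro angle at a frame's capture time.

    `gyro_ts` is a sorted list of gyro sample timestamps and `gyro_angles`
    the angle sensed at each sample; this aligns the jitter data with the
    captured frame as S203 requires.
    """
    i = bisect_left(gyro_ts, frame_t)
    if i == 0:
        return gyro_angles[0]          # frame precedes all samples
    if i == len(gyro_ts):
        return gyro_angles[-1]         # frame follows all samples
    t0, t1 = gyro_ts[i - 1], gyro_ts[i]
    a0, a1 = gyro_angles[i - 1], gyro_angles[i]
    w = (frame_t - t0) / (t1 - t0)
    return a0 + w * (a1 - a0)
```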
  • S204 Determine, according to the jitter angle information, a mapping relationship between the image coordinate system in which the acquired image is located and the target coordinate system.
  • the S204 can specifically perform coordinate transformation and mapping through a fixed world coordinate system.
  • The transformation from the image coordinate system in which the acquired image is located to the world coordinate system can be calculated, and the transformation from the target coordinate system to the world coordinate system can be calculated, thereby indirectly obtaining the mapping relationship between the image coordinate system of the acquired image and the target coordinate system.
  • S206 Map each point on the acquired image to the corresponding position point in the target coordinate system, according to the determined mapping relationship between the image coordinate system of the acquired image and the target coordinate system and the distortion correction parameters;
  • S207 Acquire a pixel value of each point of the acquired image, and generate a target image according to the acquired pixel value and the position point of each point.
  • Mapping each point on the acquired image to the corresponding position point of the target coordinate system in S206 and acquiring the pixel value of each point of the captured image in S207 can be performed simultaneously.
  • FIG. 3 is a schematic flowchart of a method for determining a coordinate mapping relationship according to an embodiment of the present invention.
  • the method corresponds to the foregoing S204, and specifically includes:
  • S301 Establish a mapping relationship between the image coordinate system in which the acquired image is located and the world coordinate system according to the jitter angle information.
  • S302 Determine a mapping relationship between the target coordinate system and the world coordinate system according to the jitter angle information.
  • S303 Determine, according to the world coordinate system, a mapping relationship between the image coordinate system in which the acquired image is located and the target coordinate system.
  • S302 may specifically include: establishing a mapping relationship between the world coordinate system and a preset tangent plane coordinate system according to the jitter angle information; establishing a mapping relationship between the target coordinate system and the preset tangent plane coordinate system according to the jitter angle information; and determining the mapping relationship between the target coordinate system and the world coordinate system via the preset tangent plane coordinate system.
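The composition of mappings in S302 amounts to chaining rotations. A hedged sketch with illustrative axis and angle choices (the patent does not fix a rotation convention, so the specific axes here are assumptions):

```python
import math

def matmul3(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_x(t):
    """Rotation by angle t (rad) about the X axis."""
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_z(t):
    """Rotation by angle t (rad) about the Z axis."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def target_to_world(theta, phi):
    """Chain target -> tangent plane (rotation about X by theta) with
    tangent plane -> world (rotation about Z by phi); the axis choices are
    illustrative only."""
    return matmul3(rot_z(phi), rot_x(theta))
```

With zero jitter angles the composed matrix reduces to the identity, i.e. the target coordinate system coincides with the world coordinate system.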
  • The tangent plane coordinate system is the coordinate system of the tangent plane at any point on a hemispherical surface whose center of sphere is the center of the lens of the image capturing device.
  • Let the coordinate system of the fisheye image (the distorted image) collected by the imaging system be the image coordinate system O2-uv, where O2 is the center point of the image and u, v run along the horizontal and vertical axes of the image, respectively.
  • The actual world coordinate system Or-XYZ is determined for a fisheye lens of a given focal length. For ease of description and without loss of generality, it is assumed that the object space is distributed on a hemispherical surface centered at the lens center Or; intuitively, the image formed in O2-uv is the object space projected onto the image sensor along the optical axis.
  • The camera body coordinate system Ob-XbYbZb is fixed to the camera and also to a sensor such as a gyroscope, so the change parameters of this coordinate system can be obtained in real time from the gyroscope.
  • The coordinate system of the target image (which may be a standard-view image) is O1-xy, i.e., the image coordinates of the image that is finally output.
  • The tangent plane coordinate system O-X'Y' is located at a certain point O on the spherical surface mentioned in the second point; it can be regarded as the object-space coordinate system corresponding to the image coordinate system in the third point.
  • The hardware components of the system are: a gyroscope, a camera (with a fisheye lens), and a GPU.
  • the positional relationship between the gyroscope and the camera (with the fisheye lens) is fixed.
  • the corrected image is output in real time.
  • the specific calculation process includes:
  • The target image coordinates are transformed, via the tangent plane coordinate system, into the world coordinate system.
  • The resulting final screen image is the target image.
  • The center point of the target image is O1;
  • the corresponding point in the fisheye scene (that is, the actual scene) is a point O on the spherical surface;
  • consider a point P1(x, y) on the target image;
  • O1P1 is proportional to the length of OP, which satisfies the geometric transformation.
  • The mapping relationship is from the screen point P1 to the tangent plane point P, not to the spherical point Ps; the angles between OrP and the Z axis and the X axis are θ' and φ', respectively.
  • The scaling factor k has the same value as above.
  • The coefficient m guarantees the normalization of the screen coordinates, with the screen width and height treated equally.
  • From the coordinate value (x, y) of any point P1 in the target image, the coordinate value (xp, yp, zp) of the corresponding point P in the world coordinate system can be obtained.
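As an illustration of obtaining P's world coordinates from the angles, here is the standard spherical parameterization, assuming the second angle (truncated in the text after θ') is an azimuth φ' measured from the X axis; this is an assumption, not the patent's own formula:

```python
import math

def world_point(theta, phi):
    """Unit-sphere point whose direction OrP makes angle `theta` with the
    Z axis and whose projection onto the XY plane makes angle `phi` with the
    X axis; an illustrative parameterization of (xp, yp, zp)."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```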
  • P2 is the image point of P under ideal pinhole (aperture) imaging. Due to fisheye distortion, the real image point is P'2, which is located in the pre-correction input texture, i.e., the fisheye image.
  • O2 is the position of O in the fisheye image, which is the center point of the image.
  • In this way, the texture information (the gray value corresponding to a certain pixel point) at a specific pixel can be obtained and assigned to the corresponding point on the target image.
  • Since u2 and v2 are floating point numbers, accessing the pixel requires interpolation; this process can be completed by the GPU.
  • This completes the mapping process from the screen coordinates P1(x, y) to the spherical coordinate point P(xp, yp, zp) and then to the fisheye image point P'2(u, v), including both the coordinate mapping and the texture mapping.
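Because u2 and v2 are non-integer, sampling the fisheye texture requires interpolation. A CPU-side sketch of the bilinear interpolation that a GPU texture unit would perform (names are illustrative):

```python
import math

def bilinear(img, u, v):
    """Sample `img` (a list of rows of gray values) at non-integer (u, v)
    by bilinear interpolation; u is the column coordinate, v the row
    coordinate, with edge clamping at the image border."""
    u0, v0 = int(math.floor(u)), int(math.floor(v))
    du, dv = u - u0, v - v0
    u1 = min(u0 + 1, len(img[0]) - 1)
    v1 = min(v0 + 1, len(img) - 1)
    top = img[v0][u0] * (1 - du) + img[v0][u1] * du
    bot = img[v1][u0] * (1 - du) + img[v1][u1] * du
    return top * (1 - dv) + bot * dv
```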
  • the fisheye lens model of the formula (14) is:
  • the above formula uses only one distortion correction parameter.
  • the model can be modified to
  • With n taken as 2, the camera is calibrated at the factory to obtain {ki}, and {ki} is passed to the GPU as a uniform variable during correction.
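Formula (14) itself is not reproduced in this text, but polynomial fisheye models of this kind commonly map the incidence angle θ to an image radius via odd powers of θ. A sketch under that assumption (the exact form used in the patent may differ):

```python
def fisheye_radius(theta, f, k):
    """Image radius for incidence angle `theta` (rad) under an assumed
    polynomial fisheye model r = f * (theta + k1*theta^3 + k2*theta^5 + ...).

    `f` is the focal length and `k` the calibrated coefficient list {ki};
    with k empty this reduces to the ideal equidistant projection r = f*theta.
    """
    r = theta
    for i, ki in enumerate(k, start=1):
        r += ki * theta ** (2 * i + 1)
    return f * r
```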
  • the coordinate transformation and distortion correction of each pixel from the fisheye image to the target image is performed by the GPU.
  • the embodiment of the invention performs image processing based on the jitter information, and the anti-shake function is completed during the processing, and no additional anti-shake hardware devices such as a pan/tilt are needed, which saves cost and is convenient for the user to use.
  • FIG. 8 is a schematic flowchart of another image processing method according to an embodiment of the present invention.
  • the method in the embodiment of the present invention can be applied to various image capturing devices and implemented by one or more image processors. Specifically, the method includes:
  • S801 Acquire a captured image obtained by the image acquisition device.
  • The image capture device may be any smart device capable of capturing images, such as various cameras and video cameras. After a data connection is established with the image acquisition device, the captured image can be obtained from it.
  • The image capturing device may use a wide-angle lens such as a fisheye lens, in which case the captured image is a fisheye image or the like.
  • S802 Acquire jitter information associated with the acquired image generated when the image capturing device is photographed, where the jitter information includes jitter angle information.
  • the jitter information may refer to position change angle information of the image capturing device relative to the last time the image was captured when the current picture is captured.
  • the jitter information may also refer to position change angle information of the image capture device relative to the reference position when the current picture is captured, and the reference position may be an initial position when the image capture device starts shooting. It mainly includes angle information of the image capturing device in three directions, specifically, a pitch angle, a yaw angle, and a roll angle.
  • an angle sensor may be disposed in the image acquisition device to detect jitter information of the image acquisition device, for example, by a gyroscope.
  • the original data can be obtained directly from the angle sensor, and then the relevant angle information can be calculated, and the relevant angle information can be directly obtained from some angle sensors with computing power.
  • The angle sensor may be directly disposed on and fixedly connected to the image acquisition device. It may also be fixedly connected to an external device that is rigidly connected to the image capture device, such as an aircraft on which the image capture device is mounted; shaking of such an external device directly causes the image capturing device to shake, so an angle sensor fixedly connected to the external device can directly detect the jitter information of the image capturing device. It can be understood that if the external device is flexibly connected to the image capturing device, so that the angle sensor cannot accurately detect the motion of the image capturing device, the angle sensor should not be placed on the external device.
  • S803 Perform distortion correction on the acquired image according to the jitter information and preset distortion correction parameters.
  • the target image selected from the distortion corrected image may be a part of the distortion corrected image.
  • S804 may specifically include: determining an initial region from the corrected image, performing jitter compensation on the initial region according to the jitter information to determine a target region, and obtaining the target image corresponding to the target region.
  • the determining the initial region from the corrected image comprises: determining an initial region from the distortion corrected image according to a preset angle of view.
  • the S804 may specifically include: determining, according to the jitter information and the preset angle of view, an image of the area defined by the angle of view as the target image from the distortion corrected image.
  • the distortion corrected image 901 is at a first angle of view, and the target image is from a distortion corrected image 901 according to a preset angle of view (second view) Field angle) A portion of the image captured.
  • This part of the image is obtained by jitter compensation based on the jitter information, and the shaded portion of the figure does not exist in the final output target image.
  • if the image capture device is rotated downward by a certain angle, then, for the target image's angle of view, the image in the dotted frame would be acquired if no shake compensation were performed. By compensating based on that angle and moving the region up a corresponding distance in the distortion-corrected image, the stabilized target image in the solid-line frame is obtained.
  • the shake compensation is performed, and the target image after the shake compensation is obtained in the distortion-corrected image.
  • the jitter compensation refers to: determining an initial region based on a preset angle of view in the distortion-corrected image, and then compensating the initial region according to the jitter information to obtain a target region; the image covered by the target region is the target image.
  • the preset viewing angle is smaller than the angle of view when the image capturing device captures the captured image.
  • for example, the preset angle of view may be a standard angle of view while the captured image corresponds to a wider angle of view, such as that of a fisheye image; or the captured image may correspond to a standard angle of view while the preset angle of view is smaller than the standard angle of view.
  • if the pitch angle information indicates that the image capture device has tilted downward by a first angle, the initial region is moved in the opposite direction, upward, by a distance corresponding to the first angle; if the yaw angle information indicates that the image capture device has turned to the left by a second angle, the initial region is moved to the right by a distance corresponding to the second angle. The remaining jitter compensation is performed by analogy.
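  • The region shift described above can be sketched as follows; this is a minimal illustration assuming a pinhole model, in which a rotation by angle θ displaces the scene by roughly `focal_px · tan(θ)` pixels. The focal length in pixels, the function name, and the sign conventions are assumptions for illustration, not taken from the embodiment:

```python
import math

def compensate_crop_origin(x0, y0, pitch_deg, yaw_deg, focal_px):
    """Shift the top-left corner of the initial crop region opposite to
    the detected jitter (hypothetical helper). Under a pinhole model, a
    rotation by angle t moves the scene by roughly focal_px * tan(t)
    pixels, so the crop region moves by the same amount to cancel it."""
    dx = focal_px * math.tan(math.radians(yaw_deg))    # left turn -> crop moves right
    dy = focal_px * math.tan(math.radians(pitch_deg))  # downward tilt -> crop moves up
    return x0 + dx, y0 - dy
```

With zero jitter angles the crop origin is unchanged; a 45° yaw with a 1000-px focal length shifts it by about 1000 px.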
  • a step of checking the magnitude of the jitter angle may be included: if each angle value in the jitter angle information, such as the pitch angle, the yaw angle, and the roll angle, is less than a preset angle threshold, S803 is triggered; if one or more angle values in the jitter angle information exceed the preset angle threshold, this indicates that the user may be actively moving the image capture device to capture an image, so S803 need not be executed and S801 or S802 continues to be executed.
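  • The angle check in this step can be sketched as a simple gate; the 5° default threshold is an illustrative assumption, since the embodiment leaves the preset angle threshold unspecified:

```python
def should_stabilize(pitch_deg, yaw_deg, roll_deg, threshold_deg=5.0):
    """Trigger the S803 correction only when every jitter angle stays
    below the preset threshold; any larger angle is treated as the user
    intentionally moving the camera (threshold value is illustrative)."""
    return all(abs(a) < threshold_deg for a in (pitch_deg, yaw_deg, roll_deg))
```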
  • before performing distortion correction on the acquired image according to the jitter information and the preset distortion correction parameters, the method may further include: performing time calibration on the acquired image and the jitter information; after the time calibration succeeds, triggering the step of determining the target image from the acquired image according to the jitter information.
  • performing distortion correction on the acquired image according to the jitter information and the preset distortion correction parameters in the embodiment of the present invention may specifically include: determining, according to the jitter angle information, a mapping relationship between the image coordinate system in which the acquired image is located and the target coordinate system; and performing distortion correction on the acquired image according to the determined mapping relationship and the preset distortion correction parameters to obtain a corrected image.
  • determining the mapping relationship between the image coordinate system in which the acquired image is located and the target coordinate system according to the jitter angle information may specifically include: establishing, according to the jitter angle information, a mapping relationship between the image coordinate system in which the acquired image is located and the world coordinate system; determining, according to the jitter angle information, a mapping relationship between the target coordinate system and the world coordinate system; and determining, based on the world coordinate system, the mapping relationship between the image coordinate system in which the acquired image is located and the target coordinate system.
  • determining the mapping relationship between the target coordinate system and the world coordinate system according to the jitter angle information may specifically include: establishing, according to the jitter angle information, a mapping relationship between the world coordinate system and a preset tangent plane coordinate system; establishing, according to the jitter angle information, a mapping relationship between the target coordinate system and the preset tangent plane coordinate system; and determining the mapping relationship between the target coordinate system and the world coordinate system via the preset tangent plane coordinate system. The tangent plane coordinate system may refer to the coordinate system of a tangent plane at any point on a hemispherical surface centered on the lens center of the image capture device.
  • performing distortion correction on the acquired image according to the determined mapping relationship between the image coordinate system of the acquired image and the target coordinate system and the preset distortion correction parameters, to obtain a corrected image, includes: mapping each point of the acquired image to a corresponding position in the target coordinate system according to the determined mapping relationship and the distortion correction parameters; and acquiring the pixel value of each point of the acquired image and generating the corrected image from the acquired pixel values and the mapped positions.
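  • The per-point mapping and pixel-value lookup can be sketched as an inverse ("pull") remap; the nearest-neighbour sampling and the `mapping` callback are illustrative choices, since the embodiment only states that points are mapped and pixel values gathered:

```python
def remap_image(src, mapping):
    """Build the output image by fetching, for each target pixel, the
    source pixel designated by `mapping` (a hypothetical callback
    (tx, ty) -> (sx, sy)), with nearest-neighbour sampling and border
    clamping. `src` is a row-major list of gray values."""
    h, w = len(src), len(src[0])
    out = [[0] * w for _ in range(h)]
    for ty in range(h):
        for tx in range(w):
            sx, sy = mapping(tx, ty)
            sx = min(max(int(round(sx)), 0), w - 1)  # clamp to image bounds
            sy = min(max(int(round(sy)), 0), h - 1)
            out[ty][tx] = src[sy][sx]
    return out
```

An identity mapping reproduces the source; a horizontal-shift mapping slides pixels with edge clamping.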
  • the coordinate mapping and the distortion correction involved in the above may refer to the specific description of the related content in the embodiments corresponding to FIG. 2 to FIG. 7.
  • the embodiment of the invention performs image processing based on the jitter information and completes the anti-shake function during the processing. No additional anti-shake hardware such as a gimbal (pan/tilt head) is needed, which saves cost and is convenient for users.
  • FIG. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • the apparatus in the embodiment of the present invention may be configured in various cameras, or may be used alone.
  • the device includes:
  • the image acquisition module 11 is configured to acquire the acquired image obtained by the image collection device.
  • the jitter acquisition module 12 is configured to acquire the jitter information associated with the captured image generated when the image capture device is photographed, where the jitter information includes jitter angle information;
  • the processing module 13 is configured to determine a target image from the acquired image according to the jitter information.
  • the image capture device may be a camera, video camera, or other smart device capable of capturing images, data-connected to the data processing apparatus. After the data connection to the image capture device is established, the image acquisition module 11 can obtain the captured image from it.
  • the image capturing device may specifically be a wide-angle lens such as a fisheye lens, and the captured image collected by the image capturing device corresponds to a fisheye image or the like.
  • the jitter information acquired by the jitter acquisition module 12 may refer to the angular position change of the image capture device when the current image is captured relative to when the previous image was captured; alternatively, it may refer to the angular position change of the image capture device relative to a reference position when the captured image is obtained, where the reference position may be the initial position when the image capture device starts shooting. The jitter angle information mainly includes angle information in three directions: the pitch angle, the yaw angle, and/or the roll angle.
  • the jitter acquisition module 12 may obtain the jitter information of the image capture device from an angle sensor configured in the image capture device, for example a gyroscope.
  • the jitter acquisition module 12 can obtain raw data directly from the angle sensor and then calculate the relevant angle information, or obtain the relevant angle information directly from angle sensors that have their own computing capability.
  • the angle sensor may be disposed directly on the image capture device and fixedly connected to it. It may also be fixedly connected to an external device that is rigidly connected to the image capture device, such as an aircraft on which the image capture device is mounted; shaking of such an external device directly causes the image capture device to shake, so an angle sensor fixedly connected to the external device can directly detect the jitter information of the image capture device. It can be understood that if the external device is flexibly connected to the image capture device, so that the angle sensor cannot accurately detect the jitter of the image capture device, the angle sensor should not be placed on the external device.
  • the target image may be an image obtained by the processing module 13 after performing coordinate transformation and distortion correction based on the jitter information, or a partial image selected, according to the jitter information, from the image processed by the coordinate transformation and distortion correction.
  • apparatus of the embodiment of the present invention may further include:
  • the calibration module 14 is configured to perform time calibration on the acquired captured image and the jitter information, and to issue trigger information to the processing module 13 after the time calibration succeeds. The calibration module 14 aligns the data acquired by the image acquisition module 11 and the jitter acquisition module 12 in time, ensuring that the captured image and the jitter information correspond to the same moment.
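  • Time calibration can be sketched as matching each frame timestamp to the nearest gyro sample; the tolerance, data layout, and names below are assumptions for illustration, as the embodiment does not specify a mechanism:

```python
import bisect

def align_jitter_to_frame(frame_ts, gyro_ts, gyro_samples, max_skew=0.005):
    """Return the gyro sample whose timestamp is closest to frame_ts,
    or None when the gap exceeds max_skew seconds (calibration fails).
    gyro_ts must be sorted ascending; max_skew is an assumed tolerance."""
    i = bisect.bisect_left(gyro_ts, frame_ts)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(gyro_ts)]
    best = min(candidates, key=lambda j: abs(gyro_ts[j] - frame_ts))
    if abs(gyro_ts[best] - frame_ts) > max_skew:
        return None  # no gyro sample close enough to this frame
    return gyro_samples[best]
```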
  • processing module 13 may specifically include:
  • the mapping relationship determining unit 131 is configured to determine, according to the shaking angle information, a mapping relationship between the image coordinate system in which the captured image is located and the target coordinate system;
  • the processing unit 132 is configured to determine a target image from the acquired image according to the determined mapping relationship between the image coordinate system of the acquired image and the target coordinate system.
  • mapping relationship determining unit 131 may specifically include:
  • a first determining sub-unit 1311 configured to establish, according to the shaking angle information, a mapping relationship between an image coordinate system in which the captured image is located and a world coordinate system;
  • a second determining sub-unit 1312 configured to determine, according to the jitter angle information, a mapping relationship between the target coordinate system and the world coordinate system;
  • the third determining sub-unit 1313 is configured to determine, according to the world coordinate system, a mapping relationship between the image coordinate system in which the captured image is located and the target coordinate system.
  • the second determining sub-unit 1312 is configured to establish a mapping relationship between the world coordinate system and the preset tangent plane coordinate system according to the shaking angle information; and establish a target coordinate system according to the shaking angle information. Determining a mapping relationship between the preset tangent plane coordinate systems; determining a mapping relationship between the target coordinate system and the world coordinate system according to the preset tangent plane coordinate system.
  • the tangent plane coordinate system refers to a tangent plane coordinate system at any point on the hemispherical surface with the center of the lens of the image capturing device as the center of the sphere.
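  • The mapping between the target coordinate system and the world coordinate system built from the three jitter angles can be sketched as a rotation matrix; the Z-Y-X (yaw, pitch, roll) composition order is an assumption, as the embodiment only states that the mapping is determined from the angles:

```python
import math

def rotation_matrix(pitch, yaw, roll):
    """Compose the rotation relating the jittered camera frame to the
    reference (world) frame from the three jitter angles, in radians.
    Composition order Rz(yaw) @ Ry(pitch) @ Rx(roll) is illustrative."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    rz = [[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]   # yaw about z
    ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]   # pitch about y
    rx = [[1, 0, 0], [0, cr, -sr], [0, sr, cr]]   # roll about x
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(matmul(rz, ry), rx)
```

Zero angles give the identity, and the result is always orthonormal, as expected of a rotation.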
  • the processing unit 132 is configured to acquire preset distortion correction parameters, and according to the determined mapping relationship between the image coordinate system of the acquired image and the target coordinate system, and the distortion correction parameter, Each point on the acquired image is mapped to a corresponding position point of the target coordinate system; the pixel value of each point of the acquired image is acquired, and the target image is generated according to the acquired pixel value and the position point of each point.
  • the image processing apparatus of the embodiment of the invention may be embodied in a processor, and the processor may be part of an aerial vehicle comprising an aircraft body, an image capture device, and the processor implementing the image processing apparatus of the embodiment of the invention, wherein the image capture device is mounted on the aircraft body and the processor is data-connected to the image capture device.
  • the embodiment of the invention performs image processing based on the jitter information, and the anti-shake function is completed during the processing, and no additional anti-shake hardware devices such as a pan/tilt are needed, which saves cost and is convenient for the user to use.
  • FIG. 12 is a schematic structural diagram of a camera according to an embodiment of the present invention.
  • the camera of the embodiment of the present invention includes a lens 01, an image sensor 02, and an image processor 03, where
  • the image sensor 02 is configured to collect image data by using the lens 01;
  • the image processor 03 is connected to the image sensor 02, and is configured to obtain a captured image from the image data collected by the image sensor 02, acquire the jitter information associated with the captured image that is generated when the image capture device shoots, and determine a target image from the captured image according to the jitter information, where the jitter information includes jitter angle information.
  • the jitter information refers to the angular position change of the image capture device when the captured image is obtained relative to when the previous image was captured; the jitter angle information includes a pitch angle, a yaw angle, and/or a roll angle.
  • the jitter information may refer to position change angle information of the image capturing device relative to the reference position when the captured image is obtained.
  • the image processor 03 is further configured to perform time calibration on the captured image and the jitter information; after the time calibration succeeds, the target image is determined from the captured image according to the jitter information.
  • the lens 01 may be a fisheye lens, and the captured image is correspondingly a fisheye image.
  • the image processor 03 is specifically configured to determine, according to the shaking angle information, a mapping relationship between the image coordinate system in which the captured image is located and the target coordinate system; and according to the determined image coordinate of the captured image A mapping relationship to the target coordinate system determines a target image from the acquired image.
  • the image processor 03 is specifically configured to establish, according to the shaking angle information, a mapping relationship between the image coordinate system in which the captured image is located and the world coordinate system; and determine the target coordinate system according to the shaking angle information. a mapping relationship between the world coordinate systems; and a mapping relationship between the image coordinate system in which the captured image is located and the target coordinate system is determined according to the world coordinate system.
  • the image processor 03 is specifically configured to establish a mapping relationship between the world coordinate system and the preset tangent plane coordinate system according to the shaking angle information; and establish a target coordinate system according to the shaking angle information to the preset a mapping relationship between the tangent plane coordinate systems; determining a mapping relationship between the target coordinate system and the world coordinate system according to the preset tangent plane coordinate system.
  • the tangent plane coordinate system refers to a tangent plane coordinate system at any point on a hemispherical surface centered on the center of the lens 01 of the image capturing device.
  • the image processor 03 is specifically configured to acquire a preset distortion correction parameter; and according to the determined mapping relationship between the image coordinate system of the acquired image and the target coordinate system, and the distortion correction parameter, Mapping each point on the acquired image to a corresponding position point of the target coordinate system; acquiring a pixel value of each point of the acquired image, and generating a target image according to the acquired pixel value and the position point of each point.
  • the camera of an embodiment of the invention may be part of an aerial vehicle comprising an aircraft body and the camera, the camera being mounted on the aircraft body.
  • the embodiment of the invention performs image processing based on the jitter information, and the anti-shake function is completed during the processing, and no additional anti-shake hardware devices such as a pan/tilt are needed, which saves cost and is convenient for the user to use.
  • FIG. 13 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present invention.
  • the apparatus of the embodiment of the present invention may be configured in various cameras, or may be used alone. Specifically, the apparatus of the embodiment of the present invention includes:
  • the first acquiring module 21 is configured to acquire a captured image obtained by the image capture device;
  • the second acquiring module 22 is configured to acquire the jitter information associated with the captured image that is generated when the image capture device shoots, where the jitter information includes jitter angle information;
  • the first processing module 23 is configured to perform distortion correction on the acquired image according to the jitter information and preset distortion correction parameters;
  • the second processing module 24 is configured to select a target image from the distortion-corrected image according to the jitter information.
  • the target image selected by the second processing module 24 is a part of the image in the distortion corrected image.
  • the second processing module 24 is configured to determine an initial region from the corrected image, perform jitter compensation on the initial region according to the jitter information to determine a target region, and obtain the target image corresponding to the target region.
  • the second processing module 24 is configured to determine an initial region from the distortion-corrected image according to a preset angle of view.
  • the second processing module 24 is configured to determine, according to the jitter information and the preset viewing angle, an image of the area defined by the viewing angle from the distortion corrected image as a target. image.
  • the image capture device may be a smart device capable of capturing images, such as various cameras and video cameras. After the first acquiring module 21 is connected to the external image capture device, it can obtain the captured image from the image capture device.
  • the image capturing device may be a wide-angle lens such as a fisheye lens, and the collected image collected by the image capturing device corresponds to a fisheye image or the like.
  • the jitter information acquired by the second acquiring module 22 may refer to position change angle information of the image capturing device relative to the last time the image was captured when the current picture is captured.
  • the jitter information may also refer to position change angle information of the image capture device relative to the reference position when the current picture is captured, and the reference position may be an initial position when the image capture device starts shooting. It mainly includes angle information of the image capturing device in three directions, specifically, a pitch angle, a yaw angle, and a roll angle.
  • the second acquiring module 22 may obtain the jitter information of the image capture device detected by an angle sensor configured in the image capture device, for example a gyroscope.
  • the raw data can be obtained directly from the angle sensor and the relevant angle information then calculated from it; alternatively, the relevant angle information can be obtained directly from angle sensors that have their own computing capability.
  • the angle sensor may be disposed directly on the image capture device and fixedly connected to it. It may also be fixedly connected to an external device that is rigidly connected to the image capture device, such as an aircraft on which the image capture device is mounted; shaking of such an external device directly causes the image capture device to shake, so an angle sensor fixedly connected to the external device can directly detect the jitter information of the image capture device. It can be understood that if the external device is flexibly connected to the image capture device, so that the angle sensor cannot accurately detect the jitter of the image capture device, the angle sensor should not be placed on the external device.
  • the second processing module 24 specifically determines an initial region from the distortion-corrected image according to a preset angle of view.
  • the first processing module 23 first performs coordinate transformation according to the jitter information, obtaining a coordinate mapping relationship between the plane coordinate system of the acquired image and a target coordinate system; it then determines the texture information of each pixel in the acquired image (the gray value corresponding to the pixel point), obtaining a texture mapping relationship. Based on the coordinate mapping relationship and the texture mapping relationship, and further on the preset distortion correction parameters, a conversion from the acquired image to an initial image is obtained, yielding the distortion-corrected image.
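  • The role of the preset distortion correction parameters can be illustrated with an equidistant fisheye model, mapping each pixel of the desired corrected (rectilinear) image back to its position in the fisheye capture; the r = f·θ model and both focal lengths are assumptions for illustration, since a real lens needs calibrated parameters:

```python
import math

def rect_to_fisheye_coord(tx, ty, w, h, f_rect, f_fish):
    """Map one pixel of the desired rectilinear (corrected) image back
    to its position in an equidistant fisheye capture. The equidistant
    projection r = f * theta stands in for the 'preset distortion
    correction parameters'; all values here are illustrative."""
    cx, cy = w / 2.0, h / 2.0
    x, y = tx - cx, ty - cy
    r_rect = math.hypot(x, y)
    if r_rect == 0:
        return cx, cy                       # optical axis maps to itself
    theta = math.atan2(r_rect, f_rect)      # angle off the optical axis
    r_fish = f_fish * theta                 # equidistant fisheye radius
    scale = r_fish / r_rect
    return cx + x * scale, cy + y * scale
```

The image center maps to itself, while off-center pixels are pulled toward the center, matching the characteristic fisheye compression away from the axis.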
  • the second processing module 24 further performs jitter compensation based on the jitter information and the preset angle of view, and obtains the target image after the jitter compensation in the distortion corrected image.
  • the jitter compensation refers to: determining an initial region based on the preset angle of view in the distortion-corrected image, and then compensating the initial region according to the jitter information to obtain a target region; the image covered by the target region is the target image.
  • the preset viewing angle is smaller than the angle of view when the image capturing device captures the captured image.
  • for example, the preset angle of view may be a standard angle of view while the captured image corresponds to a wider angle of view, such as that of a fisheye image; or the captured image may correspond to a standard angle of view while the preset angle of view is smaller than the standard angle of view.
  • if the pitch angle information indicates that the image capture device has tilted downward by a first angle, the second processing module 24 moves the initial region upward by a distance corresponding to the first angle; if the yaw angle information indicates that the image capture device has turned to the left by a second angle, the second processing module 24 moves the initial region to the right by a distance corresponding to the second angle. The remaining jitter compensation is performed by analogy.
  • the apparatus may further include a detection determination module, configured to detect the jitter angle: if each angle value in the jitter angle information, including the pitch angle, yaw angle, and roll angle, is less than a preset angle threshold, trigger information is sent to the first processing module 23 to trigger distortion correction; if one or more angle values in the jitter angle information exceed the preset angle threshold, this indicates that the user may be actively moving the image capture device to capture an image, so the distortion correction and target-image processing are not performed. The subsequent operations of the first processing module 23 and the second processing module 24 may be stopped, or the detection determination module may send trigger information to the first acquiring module 21 and the second acquiring module 22 to trigger acquisition of new data.
  • the second processing module 24 is specifically configured to determine an initial region from the distortion corrected image according to a preset angle of view.
  • the jitter information refers to the angular position change of the image capture device when the captured image is obtained relative to when the previous image was captured; the jitter angle information includes a pitch angle, a yaw angle, and/or a roll angle.
  • apparatus of the embodiment of the present invention may further include:
  • the calibration module 25 is configured to perform time calibration on the acquired captured image and the jitter information, and to send trigger information to the first processing module 23 after the time calibration succeeds. The calibration module 25 aligns the data acquired by the first acquiring module 21 and the second acquiring module 22 in time, ensuring that the captured image and the jitter information correspond to the same moment, that is, that the jitter information is the jitter generated when the image capture device captured that image.
  • the first processing module 23 may specifically include:
  • the relationship determining unit 231 is configured to determine, according to the shaking angle information, a mapping relationship between the image coordinate system in which the captured image is located and the target coordinate system;
  • the image determining unit 232 is configured to perform distortion correction on the acquired image according to the determined mapping relationship between the image coordinate system of the captured image and the target coordinate system and the preset distortion correction parameter, to obtain a corrected image.
  • the relationship determining unit 231 may specifically include:
  • a first determining subunit 2311 configured to establish, according to the shaking angle information, a mapping relationship between an image coordinate system in which the captured image is located and a world coordinate system;
  • a second determining subunit 2312 configured to determine, according to the jitter angle information, a mapping relationship between the target coordinate system and the world coordinate system;
  • the third determining subunit 2313 is configured to determine, according to the world coordinate system, a mapping relationship between the image coordinate system in which the captured image is located and the target coordinate system.
  • the second determining subunit 2312 is specifically configured to establish, according to the jitter angle information, a mapping relationship between the world coordinate system and the preset tangent plane coordinate system; establish, according to the jitter angle information, a mapping relationship between the target coordinate system and the preset tangent plane coordinate system; and determine the mapping relationship between the target coordinate system and the world coordinate system via the preset tangent plane coordinate system.
  • the tangent plane coordinate system refers to a tangent plane coordinate system at any point on the hemispherical surface with the center of the lens of the image capturing device as the center of the sphere.
  • the image determining unit 232 is specifically configured to: according to the determined mapping relationship between the image coordinate system of the acquired image and the target coordinate system and the distortion correction parameter, each of the captured images A point is mapped to a corresponding position point of the target coordinate system; a pixel value of each point of the acquired image is obtained, and the corrected image is generated according to the acquired pixel value and the position point of each point.
  • the image processing apparatus of the embodiment of the invention may be embodied in a processor, and the processor may be part of an aerial vehicle comprising an aircraft body, an image capture device, and the processor implementing the image processing apparatus of the embodiment of the invention, wherein the image capture device is mounted on the aircraft body and the processor is data-connected to the image capture device.
  • the embodiment of the invention performs image processing based on the jitter information, and the anti-shake function is completed during the processing, and no additional anti-shake hardware devices such as a pan/tilt are needed, which saves cost and is convenient for the user to use.
  • FIG. 15 is a schematic structural diagram of a camera according to an embodiment of the present invention.
  • the camera of the embodiment of the present invention includes: a lens 04, an image sensor 05, and an image processor 06, wherein
  • the image sensor 05 is configured to collect image data through the lens 04;
  • the image processor 06 is connected to the image sensor 05 and is configured to obtain a captured image from the image data collected by the image sensor 05; acquire the jitter information associated with the captured image that is generated when the image capture device shoots, where the jitter information includes jitter angle information; perform distortion correction on the captured image according to the jitter information and preset distortion correction parameters; and select a target image from the distortion-corrected image according to the jitter information.
  • the target image selected by the image processor 06 is a part of the image in the distortion corrected image.
  • the image processor 06 is configured to determine an initial region from the corrected image, and perform jitter compensation on the initial region according to the jitter information to determine a target region. A target image corresponding to the target area is obtained.
  • the image processor 06 is configured to determine an initial region from the distortion-corrected image according to a preset angle of view.
  • the image processor 06 is configured to determine, according to the jitter information and the preset angle of view, an image of the area defined by the angle of view as the target image from the distortion corrected image. .
  • the image processor 06 is specifically configured to determine an initial region from the distortion-corrected image according to a preset angle of view.
  • the jitter information refers to the angular position change of the image capture device when the captured image is obtained relative to when the previous image was captured; or the jitter information refers to the angular position change of the image capture device relative to a reference position when the captured image is obtained. The jitter angle information includes a pitch angle, a yaw angle, and/or a roll angle.
  • the image processor 06 is further configured to perform time calibration on the acquired captured image and the jitter information; after the time calibration is successful, determine the target image from the acquired image according to the jitter information.
  • the lens 04 may be a fisheye lens, and the captured image is correspondingly a fisheye image.
  • the image processor 06 is specifically configured to determine, according to the jitter angle information, a mapping relationship between the image coordinate system in which the captured image is located and the target coordinate system, and to perform distortion correction on the captured image according to the determined mapping relationship and the preset distortion correction parameters to obtain a corrected image.
  • the image processor 06 is specifically configured to establish, according to the shaking angle information, a mapping relationship between the image coordinate system in which the acquired image is located and the world coordinate system; and determine the target coordinate system according to the shaking angle information. a mapping relationship between the world coordinate systems; and a mapping relationship between the image coordinate system in which the captured image is located and the target coordinate system is determined according to the world coordinate system.
  • the image processor 06 is configured to establish, according to the jitter angle information, a mapping between the world coordinate system and a preset tangent-plane coordinate system; to establish, according to the jitter angle information, a mapping between the target coordinate system and the preset tangent-plane coordinate system; and to determine, via the preset tangent-plane coordinate system, the mapping between the target coordinate system and the world coordinate system.
  • the tangent-plane coordinate system is the coordinate system of the plane tangent at an arbitrary point of a hemisphere centered at the center of the lens 04 of the image capture device.
  • the image processor 06 is configured to map each point of the captured image to the corresponding position in the target coordinate system according to the determined mapping between the image coordinate system of the captured image and the target coordinate system and the distortion correction parameters; and to obtain the pixel value of each point of the captured image and generate the corrected image from the obtained pixel values and positions.
  • the camera of an embodiment of the invention may be part of an aerial vehicle comprising an aircraft body and the camera, the camera being mounted on the aircraft body.
  • the embodiment of the invention performs image processing based on the jitter information, and the anti-shake function is completed during the processing, and no additional anti-shake hardware devices such as a pan/tilt are needed, which saves cost and is convenient for the user to use.
  • the related apparatus and method disclosed may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division into modules or units is only a division by logical function; in actual implementation there may be other divisions: for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including a number of instructions for causing a computer processor to perform all or part of the steps of the methods of the various embodiments of the present invention.
  • the foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Geometry (AREA)

Abstract

Embodiments of the present invention provide an image processing method, device, and camera. The method includes: obtaining a captured image produced by an image capture device; obtaining jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information; and determining a target image from the captured image according to the jitter information. The present invention performs image processing based on jitter information and accomplishes image stabilization during processing, without additional anti-shake hardware such as a gimbal, saving cost and making the system convenient for users.

Description

Image processing method, device, and camera

Technical field

The present invention relates to the field of image processing technology, and in particular to an image processing method, device, and camera.

Background

A camera converts an optical image signal into an electrical signal for convenient subsequent transmission and storage, and the lens is its key component. Classified by field of view, camera lenses include standard lenses, wide-angle lenses, and so on. A wide-angle lens can reach a field of view of more than 90 degrees, and among wide-angle lenses the fisheye lens has a field of view close or equal to 180 degrees, which suits it for shooting scenery and objects over a wide area.

If the camera shakes while shooting an object, artifacts such as rolling-shutter "jello" and water-ripple patterns may appear in the footage, seriously degrading image quality.

Existing anti-shake techniques are generally implemented with mechanical structures; a common approach is a physical gimbal that compensates through three-axis rotation. A physical gimbal, however, is not only costly but must also be carried by the user separately, which is inconvenient.

Summary

Embodiments of the present invention provide an image processing method, device, and camera that achieve effective image stabilization and produce higher-quality footage.
In a first aspect, an embodiment of the present invention provides an image processing method, the method including:

obtaining a captured image produced by an image capture device;

obtaining jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information;

determining a target image from the captured image according to the jitter information.

In a second aspect, an embodiment of the present invention further provides another image processing method, the method including:

obtaining a captured image produced by an image capture device;

obtaining jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information;

performing distortion correction on the captured image according to the jitter information and preset distortion correction parameters;

selecting a target image from the distortion-corrected image according to the jitter information.

In a third aspect, an embodiment of the present invention correspondingly provides an image processing device, including:

an image obtaining module for obtaining a captured image produced by an image capture device;

a jitter obtaining module for obtaining jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information;

a processing module for determining a target image from the captured image according to the jitter information.

In a fourth aspect, an embodiment of the present invention correspondingly provides a camera, the camera including a lens, an image sensor, and an image processor, wherein:

the image sensor is configured to collect image data through the lens;

the image processor, connected to the image sensor, is configured to obtain a captured image from the image data collected by the image sensor, obtain jitter information associated with the captured image and generated while the image capture device was shooting, and determine a target image from the captured image according to the jitter information, wherein the jitter information includes jitter angle information.

In a fifth aspect, an embodiment of the present invention further provides another image processing device, the device including:

a first obtaining module for obtaining a captured image produced by an image capture device;

a second obtaining module for obtaining jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information;

a first processing module for performing distortion correction on the captured image according to the jitter information and preset distortion correction parameters;

a second processing module for selecting a target image from the distortion-corrected image according to the jitter information.

In a sixth aspect, an embodiment of the present invention correspondingly provides another camera, the camera including a lens, an image sensor, and an image processor, wherein:

the image sensor is configured to collect image data through the lens;

the image processor, connected to the image sensor, is configured to obtain a captured image from the image data collected by the image sensor and obtain jitter information associated with the captured image and generated during shooting, wherein the jitter information includes jitter angle information; to perform distortion correction on the captured image according to the jitter information and preset distortion correction parameters; and to select a target image from the distortion-corrected image according to the jitter information.

In a seventh aspect, an embodiment of the present invention further provides an aerial photography aircraft including an aircraft body and a camera mounted on the aircraft body, the camera being the camera of the fourth aspect or the camera of the sixth aspect.

In an eighth aspect, an embodiment of the present invention further provides an aerial photography aircraft including an aircraft body, an image capture device, and a processor, wherein the image capture device is mounted on the aircraft body, the processor is data-connected to the image capture device, and the processor includes the image processing device of the third aspect or the image processing device of the fifth aspect.

The embodiments of the present invention perform image processing based on jitter information and accomplish image stabilization during processing, without additional anti-shake hardware such as a gimbal, saving cost and making the system convenient for users.
Brief description of the drawings

Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention;

Fig. 2 is a flowchart of another image processing method according to an embodiment of the present invention;

Fig. 3 is a flowchart of one method of determining a coordinate mapping according to an embodiment of the present invention;

Fig. 4 illustrates the mapping between points in the target image coordinate system and corresponding points in the world coordinate system according to an embodiment of the present invention;

Fig. 5 illustrates the rotation from the world coordinate system to the tangent-plane coordinate system;

Fig. 6 illustrates the rotation from the tangent-plane coordinate system to the target image coordinate system;

Fig. 7 illustrates distortion correction according to an embodiment of the present invention;

Fig. 8 is a flowchart of another image processing method according to an embodiment of the present invention;

Fig. 9 illustrates the relationship between the distortion-corrected image and the target image according to an embodiment of the present invention;

Fig. 10 is a structural diagram of an image processing device according to an embodiment of the present invention;

Fig. 11 is a structural diagram of the processing module of the image processing device of Fig. 10;

Fig. 12 is a structural diagram of a camera according to an embodiment of the present invention;

Fig. 13 is a structural diagram of another image processing device according to an embodiment of the present invention;

Fig. 14 is a structural diagram of the first processing module of the image processing device of Fig. 13;

Fig. 15 is a structural diagram of a camera according to an embodiment of the present invention.
Detailed description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.

Referring to Fig. 1, a flowchart of an image processing method according to an embodiment of the present invention, the method can be applied in various image capture apparatuses and implemented by one or more image processors. Specifically, the method includes:

S101: Obtain a captured image produced by an image capture device.

The image capture device may be any smart shooting device capable of taking images, such as a still or video camera. After a data connection to the image capture device is established, the captured image can be obtained from it. The image capture device may in particular use a wide-angle lens such as a fisheye lens, in which case the captured image is correspondingly a fisheye image or the like.

S102: Obtain jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information.

When shooting video or in burst mode, the jitter information may be the angle information describing the change in the position of the image capture device at the time the current picture was taken relative to the time the previous picture was taken. It may also be the angle information describing the change relative to a reference position, which may be the initial position of the image capture device when shooting started. It mainly comprises angle information of the image capture device in three directions, specifically the pitch angle, yaw angle, and roll angle.

Specifically, an angle sensor such as a gyroscope may be arranged in the image capture device to detect its jitter information. The raw data may be obtained directly from the angle sensor and the relevant angle information computed from it, or the angle information may be obtained directly from angle sensors that have computing capability.

Preferably, the angle sensor may be mounted directly on the image capture device and fixedly connected to it. It may also be fixedly connected to external equipment rigidly connected to the image capture device, for example an aircraft on which the image capture device is mounted; shaking of such external equipment directly causes shaking of the image capture device, so an angle sensor fixed to the external equipment can directly detect the jitter information of the image capture device. It will be appreciated that if the external equipment is flexibly connected to the image capture device, so that the angle sensor cannot accurately detect the state of the image capture device, the angle sensor cannot be placed on that equipment.

S103: Determine a target image from the captured image according to the jitter information.

S103 may specifically be: first perform a coordinate transformation according to the jitter information, obtaining a coordinate mapping from the planar coordinate system of the captured image to a target coordinate system; then determine the texture information of each pixel of the captured image (the gray value at each pixel), obtaining a texture mapping. Based on the coordinate mapping and texture mapping, and further on preset distortion correction parameters, the captured image is finally converted into the target image, i.e., the distortion-corrected target image is obtained.

Further, S103 may also be: first perform a coordinate transformation according to the jitter information, obtaining the coordinate mapping from the planar coordinate system of the captured image to a target coordinate system; then determine the texture information of each pixel (the gray value at each pixel), obtaining a texture mapping. Based on these mappings and the preset distortion correction parameters, convert the captured image into an initial, distortion-corrected image. Then, based on the jitter information and a preset field of view, perform jitter compensation and obtain the jitter-compensated target image from the distortion-corrected image.

Jitter compensation specifically means: determine an initial region in the distortion-corrected image based on the preset field of view, then compensate the initial region according to the jitter information to obtain a target region; the image covered by the target region is the target image. In the embodiments of the present invention, the preset field of view is smaller than the field of view with which the image capture apparatus shot the captured image. Specifically, the preset field of view may be the field of view of a standard image, while the captured image corresponds to the field of view of, for example, a fisheye image. Alternatively, the captured image may correspond to a standard field of view, with the preset field of view smaller than the standard field of view.

Specifically, during compensation, if the pitch angle information in the jitter information indicates that the image capture device rotated downward by a first angle, the initial region is moved upward, in the opposite direction, by the first angle; if the yaw angle information indicates that the device rotated leftward by a second angle, the initial region is moved rightward by the second angle; and so on, completing the jitter compensation.
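The compensation rule just described (shift the crop region opposite to the measured rotation) can be sketched as follows. This is a minimal illustration: converting an angle to a pixel offset through a focal length in pixels is an assumed pinhole model, not something stated in the text.

```python
import math

def compensated_center(cx, cy, pitch_rad, yaw_rad, focal_px):
    """Shift the initial crop-region center opposite to the jitter.

    pitch_rad > 0 means the device rotated down, so the region moves up
    (smaller y, since image y grows downward); yaw_rad > 0 means the
    device rotated left, so the region moves right (larger x).
    """
    dx = focal_px * math.tan(yaw_rad)     # leftward rotation -> rightward shift
    dy = -focal_px * math.tan(pitch_rad)  # downward rotation -> upward shift
    return cx + dx, cy + dy
```

With no jitter the region is unchanged; a down-and-left rotation moves the crop center up and to the right.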
The embodiments of the present invention perform image processing based on jitter information and accomplish image stabilization during processing, without additional anti-shake hardware such as a gimbal, saving cost and making the system convenient for users.
Referring to Fig. 2, a flowchart of another image processing method according to an embodiment of the present invention, the method can be applied in various image capture apparatuses and implemented by one or more image processors. Specifically, the method includes:

S201: Obtain a captured image produced by an image capture device.

The captured image can be taken with any of various cameras. It may be a wide-angle image, from which a target image with a field of view smaller than that of the wide-angle image is subsequently selected.

S202: Obtain jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information.

Specifically, the jitter information is angle information describing the change in position of the image capture device when the captured image was obtained, relative to when the previous captured image was obtained, or relative to a reference position; the jitter angle information includes: a pitch angle, a yaw angle, and/or a roll angle.

The jitter information may be collected by a sensor, such as a gyroscope, arranged on the image capture device. It is used both in the subsequent planar coordinate mapping and in the jitter compensation that selects the target region. The jitter information mainly comprises the three-dimensional angle information mentioned above and may be computed from quantities such as the angular acceleration sensed by the gyroscope or similar sensors.

S203: Time-calibrate the obtained captured image and jitter information.

Time calibration ensures that the data collected by the gyroscope or other sensors is aligned in time with the captured image collected by the camera, guaranteeing that the jitter information is associated with the obtained captured image.

Specifically, the shooting time at which the captured image was taken is obtained, then the jitter data sensed by the gyroscope or other sensors at that shooting time; the jitter information of the captured image is derived from that jitter data and the jitter data at the time the previous captured image was taken. The shooting time and the sensing times of the two jitter measurements must be used to ensure that the jitter information for the current capture is correct, so as to reduce or even avoid errors in subsequent processing.

After the time calibration succeeds, trigger information is issued to trigger determining the target image from the captured image, i.e., to trigger the execution of S204 below.
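One way to realize the time calibration described above is to pick, for each frame timestamp, the gyroscope sample closest in time and reject frames with no sample close enough. This is a minimal sketch; the tolerance value and the `(timestamp, angles)` sample layout are assumptions, not part of the text.

```python
import bisect

def align_jitter_to_frame(frame_ts, gyro_samples, tolerance):
    """Return the jitter angles nearest in time to a frame timestamp.

    gyro_samples: list of (timestamp, (pitch, yaw, roll)), sorted by time.
    Returns the angles if a sample lies within `tolerance` of the frame
    time, else None (calibration failed for this frame).
    """
    if not gyro_samples:
        return None
    times = [t for t, _ in gyro_samples]
    i = bisect.bisect_left(times, frame_ts)
    # the nearest sample is either just before or just after the frame time
    candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
    best = min(candidates, key=lambda j: abs(times[j] - frame_ts))
    if abs(times[best] - frame_ts) <= tolerance:
        return gyro_samples[best][1]
    return None
```

A frame whose nearest gyro sample is too far away is treated as failing calibration and can be skipped rather than stabilized with stale angles.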
S204: Determine, according to the jitter angle information, the mapping from the image coordinate system of the captured image to the target coordinate system.

S204 may perform the coordinate transformation and mapping via the fixed world coordinate system: compute the transformation from the image coordinate system of the captured image to the world coordinate system and the transformation between the target coordinate system and the world coordinate system, thereby obtaining indirectly the mapping from the image coordinate system of the captured image to the target coordinate system.

S205: Obtain preset distortion correction parameters.

S206: According to the determined mapping from the image coordinate system of the captured image to the target coordinate system and the distortion correction parameters, map every point of the captured image to the corresponding position in the target coordinate system.

S207: Obtain the pixel value of every point of the captured image, and generate the target image from the obtained pixel values and positions.

It will be appreciated that mapping every point of the captured image to the corresponding position in the target coordinate system in S206 and obtaining the pixel value of every point in S207 may be carried out simultaneously.
Specifically, referring to Fig. 3, a flowchart of one method for determining the coordinate mapping according to an embodiment of the present invention, corresponding to S204 above, the method includes:

S301: Establish, according to the jitter angle information, the mapping between the image coordinate system of the captured image and the world coordinate system.

S302: Determine, according to the jitter angle information, the mapping between the target coordinate system and the world coordinate system.

S303: Determine, via the world coordinate system, the mapping from the image coordinate system of the captured image to the target coordinate system.

S302 may specifically include: establish, according to the jitter angle information, the mapping between the world coordinate system and a preset tangent-plane coordinate system; establish, according to the jitter angle information, the mapping between the target coordinate system and the preset tangent-plane coordinate system; determine, via the preset tangent-plane coordinate system, the mapping between the target coordinate system and the world coordinate system. Here the tangent-plane coordinate system is the coordinate system of the plane tangent at an arbitrary point of a hemisphere centered at the lens center of the image capture device.

The coordinate transformations and the distortion correction of the captured image (fisheye image) in the embodiments of the present invention are described in detail below.

First, let the coordinate system of the fisheye image (distorted image) collected by the imaging system be the image coordinate system O2-uv, where O2 is the center point of the image and u, v run along the horizontal and vertical axes of the image.

Second, the actual world coordinate system is Or-XYZ. For a fisheye lens of a given focal length, for ease of description and without loss of generality, assume its object space is distributed over a hemisphere centered at the lens center Or; the image formed in the image space O2-uv can then be viewed intuitively as the projection of the object space onto the image sensor along the optical axis.

Third, the camera body coordinate system Ob-XbYbZb is fixed to the camera and likewise to the gyroscope or other sensors, whose measurements provide the variation parameters of this coordinate system in real time.

Fourth, the coordinate system of the target image (which may be a standard-view image) is O1-xy, the image coordinates of the final output image.

Fifth, the tangent-plane coordinate system O-X'Y' at some point O on the spherical surface of the second item; this coordinate system can be regarded as the object coordinate system corresponding to the target image coordinate system of the fourth item.

The hardware units of the system are: a gyroscope, a camera (with a fisheye lens), and a GPU. The positional relationship between the gyroscope and the camera (with its fisheye lens) is fixed.
The coordinate mapping and distortion correction of the method of the embodiments of the present invention proceed as follows.

Obtain the offline calibration parameters: the coordinates of the fisheye image center point; the length and width of the fisheye image; the length and width of the standard image; the focal-length-related correction parameter f; the field of view γ.

Obtain the per-frame input parameters: the change in camera position relative to its position in the previous frame, comprising the pitch angle θ, the yaw angle φ, and the roll angle ρ; the fisheye image of the current frame; the pitch angular velocity θ̇; the yaw angular velocity φ̇; and the roll angular velocity ρ̇.

Output the corrected image in real time.

Specifically, first set the camera frame rate, then let the camera start working, collecting fisheye images in real time while the camera's position change parameters are read from the gyroscope in real time. Time calibration ensures that the gyroscope data and the camera data are aligned in time. Finally the collected data, together with the previously obtained offline calibration data, are fed into a GPU (Graphics Processing Unit), which computes the final target image and outputs it. The computation proceeds as follows.

First, compute the transformation of the target image coordinates, via the tangent-plane coordinate system, into the world coordinate system.

As shown in Fig. 4, the actual scene corresponding to the original fisheye image lies in the coordinate system Or-XYZ, on the hemisphere shown; the resulting screen image is the target image. The center point of the target image is O1, and the corresponding point in the fisheye scene (the actual scene) is a point O on the sphere. A point P1(x, y) on the target image corresponds to a point P on the plane tangent to the sphere at O; |O1P1| is proportional to |OP|, satisfying the geometric transformation. The mapping runs from the screen point P1 to the tangent-plane point P, not to the spherical point Ps. The angles between OrP and the Z axis and the X axis are θ' and φ', respectively. The position of the tangent point O is determined by the view parameters θ and φ; the sphere radius and the image radius both default to 1.
1) Establish the tangent-plane coordinate system X'OY' at O, as shown in Fig. 5. The axis OY' is perpendicular to OrO and is the direction of change of the pitch angle θ; the axis OX' is perpendicular to OrO and is the direction of change of the yaw angle φ. The dashed box on the sphere corresponds to the edge of the display screen.

2) The coordinate system Or-XYZ is transformed into the coordinate system O-X'Y'Z' as follows: rotate about the Z axis by φ, rotate about the X axis by θ, and translate Or to O along the axis OrO.

Let the coordinates of P in O-X'Y'Z' be (x', y', 0) and its coordinates in Or-XYZ be (xp, yp, zp); then

(xp, yp, zp)ᵀ = Rz(φ) · Rx(θ) · (x', y', 1)ᵀ    (1)

where Rz(φ) (2) and Rx(θ) (3) are the rotation matrices about the Z axis by φ and about the X axis by θ, and the third component 1 accounts for the unit translation OrO. [Equations (1)-(3) are rendered as images in the source; the form above is a reconstruction consistent with the described transformation.]

3) Coordinate transformation from the target image coordinate system xo1y to the tangent-plane coordinate system X'OY':

When the roll angle ρ = 0, there is no relative rotation between xo1y and X'OY', only translation, so

(x', y') = k · (x, y)    (4)

where (x, y) is a vector in xo1y, (x', y') is a vector in X'OY', and the scale factor k is related to the input parameter γ (field of view). Combining Figs. 5 and 6, along the axis ox' we have

|OA| = |OrO| · tan(γ/2)    (5)

Since

|OrO| = 1    (6)

it follows that

|OA| = tan(γ/2)    (7)

and since |o1a1| = 1    (8)

we obtain k = tan(γ/2)    (9)

Similarly, along the axis oy' the scale factor k takes the same value.

If the axial rotation is considered, i.e. ρ ≠ 0, the screen coordinates must additionally be rotated, and equation (4) becomes

(x', y') = k · m · Rρ · (x, y)    (10)

where Rρ is the in-plane rotation by ρ. The coefficient m keeps the screen coordinates normalized; when the screen width and height are equal, m takes a fixed value [equation (10) and the expression for m are rendered as images in the source].

This yields relation (11), expressing (x', y') as a function of (x, y), ρ, and γ.

Combining equations (1) and (11), the coordinates (xp, yp, zp) of the corresponding point P in the world coordinate system can be obtained from the coordinates (x, y) of any point P1 in the target image.

4) In Fig. 4, the following relations are readily obtained [rendered as images in the source; reconstructed from the definitions of θ' and φ']:

θ' = arccos( zp / √(xp² + yp² + zp²) )    (12)

φ' = arctan( yp / xp )    (13)
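Steps 2) and 3) above compose into a single mapping from a target-image point to world coordinates. The sketch below follows that composition; the rotation order (roll in the screen plane, then pitch about X, then yaw about Z) and the unit sphere radius are assumptions consistent with the description, not the patent's exact image-rendered matrices.

```python
import math

def screen_to_world(x, y, pitch, yaw, roll, fov):
    """Map a normalized target-image point (x, y in [-1, 1]) to the point
    on the tangent plane at the view direction, in world coordinates.

    k = tan(fov / 2) scales screen units to tangent-plane units (Eq. (9));
    the tangent point sits at unit distance along the rotated Z axis.
    """
    k = math.tan(fov / 2.0)
    # in-plane roll of the screen coordinates (Eq. (10) with m omitted)
    xr = math.cos(roll) * x - math.sin(roll) * y
    yr = math.sin(roll) * x + math.cos(roll) * y
    xt, yt = k * xr, k * yr           # tangent-plane coordinates (x', y')
    # rotate (x', y', 1) into the world frame: about X by pitch, then Z by yaw
    x1 = xt
    y1 = math.cos(pitch) * yt - math.sin(pitch) * 1.0
    z1 = math.sin(pitch) * yt + math.cos(pitch) * 1.0
    xp = math.cos(yaw) * x1 - math.sin(yaw) * y1
    yp = math.sin(yaw) * x1 + math.cos(yaw) * y1
    return xp, yp, z1
```

The screen center always lands on the unit sphere (the tangent point O), whatever the view angles, which matches |OrO| = 1 in equation (6).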
Next, compute the transformation from the world coordinate system into the fisheye image coordinates.

As shown in Fig. 7, P2 is the pinhole image point of P; because of fisheye distortion the real image point is P'2, which lies in the image before distortion correction (the input fisheye texture). In the figure, O2 is the position of O in the fisheye image, the image center point.

The fisheye lens model used for correction is

r = f · θ'    (14)

where r is the distance from P'2 to the fisheye image center Of, written OfP'2, and f is a focal-length-related parameter. Once r is obtained, the coordinates of P'2 in the fisheye image, written (u, v), can be computed according to

u = r · cos φ',  v = r · sin φ'    (15)

[equation (15) is rendered as an image in the source; the form above is a reconstruction from the surrounding definitions].

Combining equations (1), (11), (12), (13), (14), and (15) gives the correspondence between the coordinates (x, y) of a point P1 on the target image and the coordinates (u, v) of the corresponding point P'2 on the fisheye image.

At the same time, the texture information at specific pixels of the fisheye image (the gray value at a given pixel) can be obtained, and texture information is thereby assigned to the points of the target image. Because u2, v2 are floating-point numbers, accessing a pixel requires interpolation, which the GPU can perform.

This completes the mapping from the screen coordinates P1(x, y) to P(xp, yp, zp) in the spherical coordinate system and on to the fisheye image point P'2(u, v), including both the coordinate mapping and the texture mapping.

The fisheye correction mainly considers radial distortion. Equation (14) uses the fisheye lens model

r = f · θ'    (16)

This uses only a single distortion correction parameter; to increase the correction accuracy, the model can be modified to

r = f · Σᵢ₌₁..ₙ kᵢ · θ'^(2i-1)    (17)

with n = 2 sufficing [equation (17) is rendered as an image in the source; the odd-polynomial form above is a reconstruction]. The {kᵢ} are calibrated at the factory, and during correction are passed to the GPU as uniform variables. The GPU performs the coordinate transformation and distortion correction of every pixel from the fisheye image to the target image.
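Equations (14)-(17) amount to projecting the ray OrP onto the fisheye image by its angle from the optical axis. Below is a minimal sketch, with the image center at the origin and the polynomial coefficients `ks` as illustrative placeholders for the factory-calibrated {kᵢ}.

```python
import math

def world_to_fisheye(xp, yp, zp, f, ks=(1.0,)):
    """Project a world-space ray onto the fisheye image.

    Uses the equidistant model r = f * theta' (Eq. (14)); passing more
    coefficients in `ks` applies the odd-polynomial refinement
    r = f * sum(k_i * theta'**(2*i - 1)) (Eq. (17)).
    Returns (u, v) relative to the fisheye image center.
    """
    norm = math.sqrt(xp * xp + yp * yp + zp * zp)
    theta = math.acos(zp / norm)   # theta': angle from the optical axis (Eq. (12))
    phi = math.atan2(yp, xp)       # phi': azimuth in the image plane (Eq. (13))
    r = f * sum(k * theta ** (2 * i - 1) for i, k in enumerate(ks, start=1))
    return r * math.cos(phi), r * math.sin(phi)  # Eq. (15)
```

A ray along the optical axis maps to the image center; a ray at 45° from the axis lands at r = f·π/4 under the single-parameter model.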
The embodiments of the present invention perform image processing based on jitter information and accomplish image stabilization during processing, without additional anti-shake hardware such as a gimbal, saving cost and making the system convenient for users.
Referring to Fig. 8, a flowchart of another image processing method according to an embodiment of the present invention, the method can be applied in various image capture apparatuses and implemented by one or more image processors. Specifically, the method includes:

S801: Obtain a captured image produced by an image capture device.

The image capture device may be any smart shooting device capable of taking images, such as a still or video camera. After a data connection is established, the captured image can be obtained from it. The image capture device may use a wide-angle lens such as a fisheye lens, in which case the captured image is correspondingly a fisheye image or the like.

S802: Obtain jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information.

When shooting video or in burst mode, the jitter information may be the angle information describing the change in the position of the image capture device at the time the current picture was taken relative to the previous picture, or relative to a reference position, which may be the initial position of the device when shooting started. It mainly comprises angle information in three directions, specifically the pitch, yaw, and roll angles.

Specifically, an angle sensor such as a gyroscope may be arranged in the image capture device to detect its jitter information. The raw data may be obtained directly from the angle sensor and the angle information computed from it, or the angle information may be obtained directly from angle sensors with computing capability.

Preferably, the angle sensor may be mounted directly on the image capture device and fixedly connected to it, or fixedly connected to external equipment rigidly connected to the image capture device, for example an aircraft carrying the device; shaking of such external equipment directly causes shaking of the image capture device, so an angle sensor fixed to it can directly detect the jitter information. If the external equipment is flexibly connected to the image capture device, so that the angle sensor cannot accurately detect the device's state, the sensor cannot be placed on that equipment.

S803: Perform distortion correction on the captured image according to the jitter information and preset distortion correction parameters.

S804: Select a target image from the distortion-corrected image according to the jitter information.

The target image selected from the distortion-corrected image may be a part of the distortion-corrected image.

S804 may specifically include: determine an initial region in the corrected image, perform jitter compensation on the initial region according to the jitter information, determine a target region, and obtain the target image corresponding to that target region. Determining the initial region in the corrected image includes: determining the initial region in the distortion-corrected image according to a preset field of view.

Alternatively, S804 may specifically include: according to the jitter information and a preset field of view, determine from the distortion-corrected image the image of the region bounded by that field of view as the target image.

In the embodiments of the present invention, as shown specifically in Fig. 9, the distortion-corrected image 901 has a first field of view, while the target image is a part cropped from the distortion-corrected image 901 according to the preset (second) field of view. This part is obtained after jitter compensation according to the jitter information; the shaded part of the figure does not appear in the final output target image. As Fig. 9 shows, after the image capture device has rotated downward by some angle, without jitter compensation the image inside the dashed box would be obtained for the target field of view; compensating for that angle by moving upward a certain distance within the distortion-corrected image yields the stabilized target image inside the solid box.

First a coordinate transformation is performed according to the jitter information, obtaining the coordinate mapping from the planar coordinate system of the captured image to a target coordinate system; then the texture information of each pixel of the captured image (the gray value at each pixel) is determined, obtaining a texture mapping. Based on these mappings and the preset distortion correction parameters, the captured image is converted into an initial image, the distortion-corrected image.

Then, based on the jitter information and the preset field of view, jitter compensation is performed, and the jitter-compensated target image is obtained from the distortion-corrected image.

Jitter compensation specifically means: determine an initial region in the distortion-corrected image based on the preset field of view, then compensate the initial region according to the jitter information to obtain a target region; the image covered by the target region is the target image. In the embodiments of the present invention, the preset field of view is smaller than the field of view with which the image capture apparatus shot the captured image. Specifically, the preset field of view may be the field of view of a standard image, the captured image corresponding to, for example, a fisheye field of view; alternatively the captured image may correspond to a standard field of view, with the preset field of view smaller than the standard field of view.

Specifically, during compensation, if the pitch angle information in the jitter information indicates that the image capture device rotated downward by a first angle, the initial region is moved upward, in the opposite direction, by the first angle; if the yaw angle information indicates that the device rotated leftward by a second angle, the initial region is moved rightward by the second angle; and so on, completing the jitter compensation.

Optionally, after the jitter information is detected, a step of checking the jitter angles may be included: if every angle value in the jitter angle information (pitch, yaw, and roll) is smaller than a preset angle threshold, S803 is triggered; if one or more angle values exceed the preset threshold, the user is probably moving the image capture device deliberately while shooting, so S803 need not be executed, and S801 or S802 continues instead.

Further optionally, before performing distortion correction on the captured image according to the jitter information and the preset distortion correction parameters, the method further includes:

time-calibrating the obtained captured image and jitter information, to ensure that the obtained jitter information is associated with the obtained captured image;

after the time calibration succeeds, triggering the determination of the target image from the captured image according to the jitter information.

For the specific implementation of the time calibration of the captured image and jitter information, refer to the detailed description of the corresponding content in the embodiments of Figs. 2 to 7.
Further optionally, performing distortion correction on the captured image according to the jitter information and the preset distortion correction parameters may specifically include:

determining, according to the jitter angle information, the mapping from the image coordinate system of the captured image to the target coordinate system;

performing distortion correction on the captured image according to the determined mapping and the preset distortion correction parameters, obtaining the corrected image.

Further, determining the mapping from the image coordinate system of the captured image to the target coordinate system according to the jitter angle information may specifically include:

establishing, according to the jitter angle information, the mapping between the image coordinate system of the captured image and the world coordinate system;

determining, according to the jitter angle information, the mapping between the target coordinate system and the world coordinate system;

determining, via the world coordinate system, the mapping from the image coordinate system of the captured image to the target coordinate system.

Further, determining the mapping between the target coordinate system and the world coordinate system according to the jitter angle information may specifically include:

establishing, according to the jitter angle information, the mapping between the world coordinate system and a preset tangent-plane coordinate system;

establishing, according to the jitter angle information, the mapping between the target coordinate system and the preset tangent-plane coordinate system;

determining, via the preset tangent-plane coordinate system, the mapping between the target coordinate system and the world coordinate system.

The tangent-plane coordinate system mentioned above is the coordinate system of the plane tangent at an arbitrary point of a hemisphere centered at the lens center of the image capture device.

Further optionally, performing distortion correction on the captured image according to the determined mapping and the preset distortion correction parameters to obtain the corrected image includes:

mapping, according to the determined mapping from the image coordinate system of the captured image to the target coordinate system and the distortion correction parameters, every point of the captured image to the corresponding position in the target coordinate system;

obtaining the pixel value of every point of the captured image, and generating the corrected image from the obtained pixel values and positions.

For the coordinate mapping and distortion correction involved above, refer to the detailed description of the corresponding content in the embodiments of Figs. 2 to 7.

The embodiments of the present invention perform image processing based on jitter information and accomplish image stabilization during processing, without additional anti-shake hardware such as a gimbal, saving cost and making the system convenient for users.
The image processing devices and cameras of the embodiments of the present invention are described in detail below.

Referring to Fig. 10, a structural diagram of an image processing device according to an embodiment of the present invention, the device can be built into various cameras or used standalone. Specifically, the device includes:

an image obtaining module 11 for obtaining a captured image produced by an image capture device;

a jitter obtaining module 12 for obtaining jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information;

a processing module 13 for determining a target image from the captured image according to the jitter information.

The image capture device may be any smart shooting device capable of taking images, such as a still or video camera, data-connected to the data processing device. Once connected, the image obtaining module 11 can obtain the captured image from the image capture device. The image capture device may in particular use a wide-angle lens such as a fisheye lens, in which case the captured image is correspondingly a fisheye image or the like.

The jitter information obtained by the jitter obtaining module 12 may be the angle information describing the change in position of the image capture device when the captured image was obtained relative to when the previous captured image was obtained, the jitter angle information including: a pitch angle, a yaw angle, and/or a roll angle; or it may be the angle information describing the change relative to a reference position.

When shooting video or in burst mode, the jitter information may be the angle information describing the device's position change relative to the previous picture, or relative to a reference position, which may be the initial position when shooting started. It mainly comprises angle information in three directions, specifically pitch, yaw, and roll.

The jitter obtaining module 12 may obtain the jitter information of the image capture device from an angle sensor, such as a gyroscope, arranged in the device. The module 12 may obtain raw data directly from the angle sensor and compute the relevant angle information, or obtain the angle information directly from angle sensors with computing capability.

Preferably, the angle sensor may be mounted directly on the image capture device and fixedly connected to it, or fixedly connected to external equipment rigidly connected to the image capture device, for example an aircraft carrying the device; shaking of such external equipment directly causes shaking of the image capture device, so an angle sensor fixed to it can directly detect the jitter information. If the external equipment is flexibly connected to the image capture device, so that the angle sensor cannot accurately detect the device's state, the sensor cannot be placed on that equipment.

The target image may be the image obtained by the processing module 13 after the jitter-based coordinate transformation and distortion correction, or a part of the processed image further cropped out according to the jitter information after that transformation and correction.

Further optionally, the device of this embodiment may also include:

a calibration module 14 for time-calibrating the obtained captured image and jitter information and, after the time calibration succeeds, issuing trigger information to the processing module.

The calibration module 14 time-calibrates the data obtained by the image obtaining module 11 and the jitter obtaining module 12, ensuring that the captured image and the jitter information are aligned in time.

Further specifically, as shown in Fig. 11, the processing module 13 may include:

a mapping determination unit 131 for determining, according to the jitter angle information, the mapping from the image coordinate system of the captured image to the target coordinate system;

a processing unit 132 for determining the target image from the captured image according to the determined mapping.

Optionally, the mapping determination unit 131 may include:

a first determination subunit 1311 for establishing, according to the jitter angle information, the mapping between the image coordinate system of the captured image and the world coordinate system;

a second determination subunit 1312 for determining, according to the jitter angle information, the mapping between the target coordinate system and the world coordinate system;

a third determination subunit 1313 for determining, via the world coordinate system, the mapping from the image coordinate system of the captured image to the target coordinate system.

Further optionally, the second determination subunit 1312 is specifically configured to establish, according to the jitter angle information, the mapping between the world coordinate system and a preset tangent-plane coordinate system; establish, according to the jitter angle information, the mapping between the target coordinate system and the preset tangent-plane coordinate system; and determine, via the preset tangent-plane coordinate system, the mapping between the target coordinate system and the world coordinate system.

Specifically, the tangent-plane coordinate system is the coordinate system of the plane tangent at an arbitrary point of a hemisphere centered at the lens center of the image capture device.

Further optionally, the processing unit 132 is specifically configured to obtain preset distortion correction parameters; map, according to the determined mapping from the image coordinate system of the captured image to the target coordinate system and the distortion correction parameters, every point of the captured image to the corresponding position in the target coordinate system; obtain the pixel value of every point of the captured image; and generate the target image from the obtained pixel values and positions.

The image processing device of this embodiment may be incorporated in a processor, and the processor may be part of an aerial photography aircraft comprising an aircraft body, an image capture device, and the processor including the image processing device of this embodiment, wherein the image capture device is mounted on the aircraft body and the processor is data-connected to the image capture device.

It should be noted that for the specific implementation of the modules, units, and subunits of the image processing device of this embodiment, refer to the description of the specific implementation of the corresponding method steps in the embodiments of Figs. 1 to 8.

The embodiments of the present invention perform image processing based on jitter information and accomplish image stabilization during processing, without additional anti-shake hardware such as a gimbal, saving cost and making the system convenient for users.
Further, referring to Fig. 12, a structural diagram of a camera according to an embodiment of the present invention, the camera includes a lens 01, an image sensor 02, and an image processor 03, wherein:

the image sensor 02 is configured to collect image data through the lens 01;

the image processor 03, connected to the image sensor 02, is configured to obtain a captured image from the image data collected by the image sensor 02, obtain jitter information associated with the captured image and generated during shooting, and determine a target image from the captured image according to the jitter information, wherein the jitter information includes jitter angle information.

Optionally, the jitter information is the angle information describing the change in position of the image capture device when the captured image was obtained relative to when the previous captured image was obtained, the jitter angle information including: a pitch angle, a yaw angle, and/or a roll angle; or it may be the angle information describing the change relative to a reference position.

Further optionally, the image processor 03 is also configured to time-calibrate the obtained captured image and jitter information and, after the time calibration succeeds, determine the target image from the captured image according to the jitter information.

Further optionally, the lens 01 is a fisheye lens and the captured image is a fisheye image.

Further optionally, the image processor 03 is specifically configured to determine, according to the jitter angle information, the mapping from the image coordinate system of the captured image to the target coordinate system, and to determine the target image from the captured image according to the determined mapping.

Further optionally, the image processor 03 is specifically configured to establish, according to the jitter angle information, the mapping between the image coordinate system of the captured image and the world coordinate system; determine, according to the jitter angle information, the mapping between the target coordinate system and the world coordinate system; and determine, via the world coordinate system, the mapping from the image coordinate system of the captured image to the target coordinate system.

Further optionally, the image processor 03 is specifically configured to establish, according to the jitter angle information, the mapping between the world coordinate system and a preset tangent-plane coordinate system; establish, according to the jitter angle information, the mapping between the target coordinate system and the preset tangent-plane coordinate system; and determine, via the preset tangent-plane coordinate system, the mapping between the target coordinate system and the world coordinate system.

Further optionally, the tangent-plane coordinate system is the coordinate system of the plane tangent at an arbitrary point of a hemisphere centered at the center of the lens 01 of the image capture device.

Further optionally, the image processor 03 is specifically configured to obtain preset distortion correction parameters; map, according to the determined mapping and the distortion correction parameters, every point of the captured image to the corresponding position in the target coordinate system; obtain the pixel value of every point of the captured image; and generate the target image from the obtained pixel values and positions.

The camera of this embodiment may be part of an aerial photography aircraft comprising an aircraft body and the camera, the camera being mounted on the aircraft body.

It should be noted that for the specific implementation of the components of the camera of this embodiment, refer to the description of the corresponding method steps in the embodiments of Figs. 1 to 8.

The embodiments of the present invention perform image processing based on jitter information and accomplish image stabilization during processing, without additional anti-shake hardware such as a gimbal, saving cost and making the system convenient for users.
Referring to Fig. 13, a structural diagram of another image processing device according to an embodiment of the present invention, the device can be built into various cameras or used standalone. Specifically, the device includes:

a first obtaining module 21 for obtaining a captured image produced by an image capture device;

a second obtaining module 22 for obtaining jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information;

a first processing module 23 for performing distortion correction on the captured image according to the jitter information and preset distortion correction parameters;

a second processing module 24 for selecting a target image from the distortion-corrected image according to the jitter information.

Optionally, the target image selected by the second processing module 24 is a part of the distortion-corrected image.

Optionally, the second processing module 24 is specifically configured to determine an initial region in the corrected image, perform jitter compensation on the initial region according to the jitter information, determine a target region, and obtain the target image corresponding to that target region.

Optionally, the second processing module 24 is specifically configured to determine the initial region in the distortion-corrected image according to a preset field of view.

Optionally, the second processing module 24 is specifically configured to determine, according to the jitter information and a preset field of view, the image of the region bounded by that field of view in the distortion-corrected image as the target image.

The image capture device may be any smart shooting device capable of taking images, such as a still or video camera. Once the first obtaining module 21 is data-connected to the external image capture device, the captured image can be obtained from it. The image capture device may use a wide-angle lens such as a fisheye lens, in which case the captured image is correspondingly a fisheye image or the like.

When shooting video or in burst mode, the jitter information obtained by the second obtaining module 22 may be the angle information describing the change in position of the image capture device relative to the previous picture, or relative to a reference position, which may be the initial position when shooting started. It mainly comprises angle information in three directions, specifically pitch, yaw, and roll.

The second obtaining module 22 may detect the jitter information via an angle sensor, such as a gyroscope, arranged in the image capture device. Raw data may be obtained directly from the angle sensor and the angle information computed from it, or the angle information may be obtained directly from angle sensors with computing capability.

Preferably, the angle sensor may be mounted directly on the image capture device and fixedly connected to it, or fixedly connected to external equipment rigidly connected to the image capture device, for example an aircraft carrying the device; shaking of such external equipment directly causes shaking of the image capture device, so an angle sensor fixed to it can directly detect the jitter information. If the external equipment is flexibly connected to the image capture device, so that the angle sensor cannot accurately detect the device's state, the sensor cannot be placed on that equipment.

The second processing module 24 specifically determines the initial region in the distortion-corrected image according to the preset field of view.

The first processing module 23 first performs a coordinate transformation according to the jitter information, obtaining the coordinate mapping from the planar coordinate system of the captured image to a target coordinate system; then determines the texture information of each pixel of the captured image (the gray value at each pixel), obtaining a texture mapping. Based on these mappings and the preset distortion correction parameters, it converts the captured image into an initial image, the distortion-corrected image.

The second processing module 24 then performs jitter compensation based on the jitter information and the preset field of view and obtains the jitter-compensated target image from the distortion-corrected image.

Jitter compensation specifically means: determine an initial region in the distortion-corrected image based on the preset field of view, then compensate the initial region according to the jitter information to obtain a target region; the image covered by the target region is the target image. In the embodiments of the present invention, the preset field of view is smaller than the field of view with which the image capture apparatus shot the captured image. Specifically, it may be the field of view of a standard image, the captured image corresponding to, for example, a fisheye field of view; alternatively the captured image may correspond to a standard field of view, with the preset field of view smaller than the standard field of view.

Specifically, during compensation, if the pitch angle information in the jitter information indicates that the image capture device rotated downward by a first angle, the second processing module 24 moves the initial region upward, in the opposite direction, by the first angle; if the yaw angle information indicates a leftward rotation by a second angle, the second processing module 24 moves the initial region rightward by the second angle; and so on, completing the jitter compensation.

Optionally, a detection and judgment module may also be included to check the jitter angles after the jitter information is detected: if every angle value in the jitter angle information (pitch, yaw, and roll) is smaller than a preset angle threshold, it sends trigger information to the first processing module 23 to trigger the distortion correction; if one or more angle values exceed the preset threshold, the user is probably moving the image capture device deliberately while shooting, so the distortion correction and target-image processing need not be executed: the subsequent operations of the first processing module 23 and the second processing module 24 can be stopped, or the detection and judgment module can send trigger information to the first obtaining module 21 and the second obtaining module 22 to trigger obtaining the relevant data.

Further optionally, the second processing module 24 is specifically configured to determine the initial region in the distortion-corrected image according to the preset field of view.

Optionally, the jitter information is the angle information describing the change in position of the image capture device when the captured image was obtained relative to when the previous captured image was obtained; the jitter angle information includes: a pitch angle, a yaw angle, and/or a roll angle.

Further optionally, the device of this embodiment may also include:

a calibration module 25 for time-calibrating the obtained captured image and jitter information and, after the time calibration succeeds, issuing trigger information to the first processing module 23.

The calibration module 25 calibrates the data obtained by the first obtaining module 21 and the second obtaining module 22 to ensure that the captured image and the jitter information are aligned in time, guaranteeing that the jitter information is the position-change angle information generated when the image capture device shot that captured image.

Further optionally, as shown in Fig. 14, the first processing module 23 may include:

a relation determination unit 231 for determining, according to the jitter angle information, the mapping from the image coordinate system of the captured image to the target coordinate system;

an image determination unit 232 for performing distortion correction on the captured image according to the determined mapping and the preset distortion correction parameters, obtaining the corrected image.

Optionally, the relation determination unit 231 may include:

a first determination subunit 2311 for establishing, according to the jitter angle information, the mapping between the image coordinate system of the captured image and the world coordinate system;

a second determination subunit 2312 for determining, according to the jitter angle information, the mapping between the target coordinate system and the world coordinate system;

a third determination subunit 2313 for determining, via the world coordinate system, the mapping from the image coordinate system of the captured image to the target coordinate system.

Optionally, the second determination subunit 2312 is specifically configured to establish, according to the jitter angle information, the mapping between the world coordinate system and a preset tangent-plane coordinate system; establish, according to the jitter angle information, the mapping between the target coordinate system and the preset tangent-plane coordinate system; and determine, via the preset tangent-plane coordinate system, the mapping between the target coordinate system and the world coordinate system.

Specifically, the tangent-plane coordinate system is the coordinate system of the plane tangent at an arbitrary point of a hemisphere centered at the lens center of the image capture device.

Further optionally, the image determination unit 232 is specifically configured to map, according to the determined mapping from the image coordinate system of the captured image to the target coordinate system and the distortion correction parameters, every point of the captured image to the corresponding position in the target coordinate system; obtain the pixel value of every point of the captured image; and generate the corrected image from the obtained pixel values and positions.

The image processing device of this embodiment may be incorporated in a processor, and the processor may be part of an aerial photography aircraft comprising an aircraft body, an image capture device, and the processor including the image processing device of this embodiment, wherein the image capture device is mounted on the aircraft body and the processor is data-connected to the image capture device.

It should be noted that for the specific implementation of the modules, units, and subunits of the image processing device of this embodiment, refer to the description of the corresponding method steps in the embodiments of Figs. 1 to 8.

The embodiments of the present invention perform image processing based on jitter information and accomplish image stabilization during processing, without additional anti-shake hardware such as a gimbal, saving cost and making the system convenient for users.
Further, referring to Fig. 15, a structural diagram of a camera according to an embodiment of the present invention, the camera includes: a lens 04, an image sensor 05, and an image processor 06, wherein:

the image sensor 05 is configured to collect image data through the lens 04;

the image processor 06, connected to the image sensor 05, is configured to obtain a captured image from the image data collected by the image sensor 05 and obtain jitter information associated with the captured image and generated during shooting, wherein the jitter information includes jitter angle information; to perform distortion correction on the captured image according to the jitter information and preset distortion correction parameters; and to select a target image from the distortion-corrected image according to the jitter information.

Optionally, the target image selected by the image processor 06 is a part of the distortion-corrected image.

Optionally, the image processor 06 is specifically configured to determine an initial region in the corrected image, perform jitter compensation on the initial region according to the jitter information, determine a target region, and obtain the target image corresponding to that target region.

Optionally, the image processor 06 is specifically configured to determine the initial region in the distortion-corrected image according to a preset field of view.

Optionally, the image processor 06 is specifically configured to determine, according to the jitter information and a preset field of view, the image of the region bounded by that field of view in the distortion-corrected image as the target image.

Further optionally, the jitter information is the angle information describing the change in position of the image capture device when the captured image was obtained relative to when the previous captured image was obtained, or relative to a reference position; the jitter angle information includes: a pitch angle, a yaw angle, and/or a roll angle.

Further optionally, the image processor 06 is also configured to time-calibrate the obtained captured image and jitter information and, after the time calibration succeeds, determine the target image from the captured image according to the jitter information.

Further optionally, the lens 04 is a fisheye lens and the captured image is a fisheye image.

Further optionally, the image processor 06 is specifically configured to determine, according to the jitter angle information, the mapping from the image coordinate system of the captured image to the target coordinate system, and to perform distortion correction on the captured image according to the determined mapping and the preset distortion correction parameters, obtaining the corrected image.

Further optionally, the image processor 06 is specifically configured to establish, according to the jitter angle information, the mapping between the image coordinate system of the captured image and the world coordinate system; determine, according to the jitter angle information, the mapping between the target coordinate system and the world coordinate system; and determine, via the world coordinate system, the mapping from the image coordinate system of the captured image to the target coordinate system.

Further optionally, the image processor 06 is specifically configured to establish, according to the jitter angle information, the mapping between the world coordinate system and a preset tangent-plane coordinate system; establish, according to the jitter angle information, the mapping between the target coordinate system and the preset tangent-plane coordinate system; and determine, via the preset tangent-plane coordinate system, the mapping between the target coordinate system and the world coordinate system.

Further optionally, the tangent-plane coordinate system is the coordinate system of the plane tangent at an arbitrary point of a hemisphere centered at the center of the lens 04 of the image capture device.

Further optionally, the image processor 06 is specifically configured to map, according to the determined mapping from the image coordinate system of the captured image to the target coordinate system and the distortion correction parameters, every point of the captured image to the corresponding position in the target coordinate system; obtain the pixel value of every point of the captured image; and generate the corrected image from the obtained pixel values and positions.

The camera of this embodiment may be part of an aerial photography aircraft comprising an aircraft body and the camera, the camera being mounted on the aircraft body.

It should be noted that for the specific implementation of the components of the camera of this embodiment, refer to the description of the corresponding method steps in the embodiments of Figs. 1 to 8.

The embodiments of the present invention perform image processing based on jitter information and accomplish image stabilization during processing, without additional anti-shake hardware such as a gimbal, saving cost and making the system convenient for users.
In the several embodiments provided by the present invention, it should be understood that the disclosed related devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into modules or units is only a division by logical function, and in actual implementation there may be other divisions: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.

Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or the whole or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including a number of instructions for causing a computer processor to perform all or part of the steps of the methods of the various embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above are merely embodiments of the present invention and do not thereby limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (74)

  1. An image processing method, characterized by comprising:
    obtaining a captured image produced by an image capture device;
    obtaining jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information;
    determining a target image from the captured image according to the jitter information.
  2. The method of claim 1, characterized in that the jitter information is angle information describing the change in position of the image capture device when the captured image was obtained relative to when the previous captured image was obtained; the jitter angle information includes: a pitch angle, a yaw angle, and/or a roll angle.
  3. The method of claim 1, characterized in that the jitter information is angle information describing the change in position of the image capture device when the captured image was obtained relative to a reference position; the jitter angle information includes: a pitch angle, a yaw angle, and/or a roll angle.
  4. The method of claim 1, characterized in that before determining the target image from the captured image according to the jitter information, the method further comprises:
    time-calibrating the obtained captured image and jitter information;
    after the time calibration succeeds, triggering the determining of the target image from the captured image according to the jitter information.
  5. The method of claim 1, characterized in that the captured image is a fisheye image.
  6. The method of claim 1, characterized in that determining the target image from the captured image according to the jitter information comprises:
    determining, according to the jitter angle information, a mapping from the image coordinate system of the captured image to a target coordinate system;
    determining the target image from the captured image according to the determined mapping from the image coordinate system of the captured image to the target coordinate system.
  7. The method of claim 6, characterized in that determining the mapping from the image coordinate system of the captured image to the target coordinate system according to the jitter angle information comprises:
    establishing, according to the jitter angle information, a mapping between the image coordinate system of the captured image and a world coordinate system;
    determining, according to the jitter angle information, a mapping between the target coordinate system and the world coordinate system;
    determining, via the world coordinate system, the mapping from the image coordinate system of the captured image to the target coordinate system.
  8. The method of claim 7, characterized in that determining the mapping between the target coordinate system and the world coordinate system according to the jitter angle information comprises:
    establishing, according to the jitter angle information, a mapping between the world coordinate system and a preset tangent-plane coordinate system;
    establishing, according to the jitter angle information, a mapping between the target coordinate system and the preset tangent-plane coordinate system;
    determining, via the preset tangent-plane coordinate system, the mapping between the target coordinate system and the world coordinate system.
  9. The method of claim 8, characterized in that the tangent-plane coordinate system is the coordinate system of the plane tangent at an arbitrary point of a hemisphere centered at the lens center of the image capture device.
  10. The method of any one of claims 6-9, characterized in that determining the target image from the captured image according to the determined mapping from the image coordinate system of the captured image to the target coordinate system comprises:
    obtaining preset distortion correction parameters;
    mapping, according to the determined mapping from the image coordinate system of the captured image to the target coordinate system and the distortion correction parameters, every point of the captured image to the corresponding position in the target coordinate system;
    obtaining the pixel value of every point of the captured image, and generating the target image from the obtained pixel values and positions.
  11. An image processing method, characterized by comprising:
    obtaining a captured image produced by an image capture device;
    obtaining jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information;
    performing distortion correction on the captured image according to the jitter information and preset distortion correction parameters;
    selecting a target image from the distortion-corrected image according to the jitter information.
  12. The method of claim 11, characterized in that the selected target image is a part of the distortion-corrected image.
  13. The method of claim 11, characterized in that selecting the target image from the distortion-corrected image according to the jitter information comprises:
    determining an initial region in the corrected image, performing jitter compensation on the initial region according to the jitter information, determining a target region, and obtaining the target image corresponding to the target region.
  14. The method of claim 13, characterized in that determining the initial region in the corrected image comprises:
    determining the initial region in the distortion-corrected image according to a preset field of view.
  15. The method of claim 11, characterized in that selecting the target image from the distortion-corrected image according to the jitter information comprises:
    determining, according to the jitter information and a preset field of view, the image of the region bounded by the field of view in the distortion-corrected image as the target image.
  16. The method of claim 11, characterized in that the jitter information is angle information describing the change in position of the image capture device when the captured image was obtained relative to when the previous captured image was obtained; the jitter angle information includes: a pitch angle, a yaw angle, and/or a roll angle.
  17. The method of claim 11, characterized in that the jitter information is angle information describing the change in position of the image capture device when the captured image was obtained relative to a reference position; the jitter angle information includes: a pitch angle, a yaw angle, and/or a roll angle.
  18. The method of claim 11, characterized in that before performing distortion correction on the captured image according to the jitter information and the preset distortion correction parameters, the method further comprises:
    time-calibrating the obtained captured image and jitter information;
    after the time calibration succeeds, triggering the determining of the target image from the captured image according to the jitter information.
  19. The method of claim 11, characterized in that the captured image is a fisheye image.
  20. The method of claim 11, characterized in that performing distortion correction on the captured image according to the jitter information and the preset distortion correction parameters comprises:
    determining, according to the jitter angle information, a mapping from the image coordinate system of the captured image to a target coordinate system;
    performing distortion correction on the captured image according to the determined mapping from the image coordinate system of the captured image to the target coordinate system and the preset distortion correction parameters, obtaining the corrected image.
  21. The method of claim 20, characterized in that determining the mapping from the image coordinate system of the captured image to the target coordinate system according to the jitter angle information comprises:
    establishing, according to the jitter angle information, a mapping between the image coordinate system of the captured image and a world coordinate system;
    determining, according to the jitter angle information, a mapping between the target coordinate system and the world coordinate system;
    determining, via the world coordinate system, the mapping from the image coordinate system of the captured image to the target coordinate system.
  22. The method of claim 21, characterized in that determining the mapping between the target coordinate system and the world coordinate system according to the jitter angle information comprises:
    establishing, according to the jitter angle information, a mapping between the world coordinate system and a preset tangent-plane coordinate system;
    establishing, according to the jitter angle information, a mapping between the target coordinate system and the preset tangent-plane coordinate system;
    determining, via the preset tangent-plane coordinate system, the mapping between the target coordinate system and the world coordinate system.
  23. The method of claim 22, characterized in that the tangent-plane coordinate system is the coordinate system of the plane tangent at an arbitrary point of a hemisphere centered at the lens center of the image capture device.
  24. The method of any one of claims 20-23, characterized in that performing distortion correction on the captured image according to the determined mapping from the image coordinate system of the captured image to the target coordinate system and the preset distortion correction parameters, obtaining the corrected image, comprises:
    mapping, according to the determined mapping from the image coordinate system of the captured image to the target coordinate system and the distortion correction parameters, every point of the captured image to the corresponding position in the target coordinate system;
    obtaining the pixel value of every point of the captured image, and generating the corrected image from the obtained pixel values and positions.
  25. An image processing device, characterized by comprising:
    an image obtaining module for obtaining a captured image produced by an image capture device;
    a jitter obtaining module for obtaining jitter information associated with the captured image and generated while the image capture device was shooting, wherein the jitter information includes jitter angle information;
    a processing module for determining a target image from the captured image according to the jitter information.
  26. The device of claim 25, characterized in that the jitter information is angle information describing the change in position of the image capture device when the captured image was obtained relative to when the previous captured image was obtained; the jitter angle information includes: a pitch angle, a yaw angle, and/or a roll angle.
  27. The device of claim 25, characterized in that the jitter information is angle information describing the change in position of the image capture device when the captured image was obtained relative to a reference position; the jitter angle information includes: a pitch angle, a yaw angle, and/or a roll angle.
  28. The device of claim 25, characterized by further comprising:
    a calibration module for time-calibrating the obtained captured image and jitter information and, after the time calibration succeeds, issuing trigger information to the processing module.
  29. The device of claim 25, characterized in that the processing module comprises:
    a mapping determination unit for determining, according to the jitter angle information, a mapping from the image coordinate system of the captured image to a target coordinate system;
    a processing unit for determining the target image from the captured image according to the determined mapping from the image coordinate system of the captured image to the target coordinate system.
  30. The device of claim 29, characterized in that the mapping determination unit comprises:
    a first determination subunit for establishing, according to the jitter angle information, a mapping between the image coordinate system of the captured image and a world coordinate system;
    a second determination subunit for determining, according to the jitter angle information, a mapping between the target coordinate system and the world coordinate system;
    a third determination subunit for determining, via the world coordinate system, the mapping from the image coordinate system of the captured image to the target coordinate system.
  31. The device of claim 30, characterized in that the second determination subunit is specifically configured to establish, according to the jitter angle information, a mapping between the world coordinate system and a preset tangent-plane coordinate system; establish, according to the jitter angle information, a mapping between the target coordinate system and the preset tangent-plane coordinate system; and determine, via the preset tangent-plane coordinate system, the mapping between the target coordinate system and the world coordinate system.
  32. The device of claim 31, characterized in that the tangent-plane coordinate system is the coordinate system of the plane tangent at an arbitrary point of a hemisphere centered at the lens center of the image capture device.
  33. The device of any one of claims 29-32, characterized in that the processing unit is specifically configured to obtain preset distortion correction parameters; map, according to the determined mapping from the image coordinate system of the captured image to the target coordinate system and the distortion correction parameters, every point of the captured image to the corresponding position in the target coordinate system; obtain the pixel value of every point of the captured image; and generate the target image from the obtained pixel values and positions.
  34. A camera, characterized by comprising: a lens, an image sensor, and an image processor, wherein:
    the image sensor is configured to collect image data through the lens;
    the image processor, connected to the image sensor, is configured to obtain a captured image from the image data collected by the image sensor, obtain jitter information associated with the captured image and generated while the image capture device was shooting, and determine a target image from the captured image according to the jitter information, wherein the jitter information includes jitter angle information.
  35. The camera of claim 34, characterized in that the jitter information is angle information describing the change in position of the image capture device when the captured image was obtained relative to when the previous captured image was obtained; the jitter angle information includes: a pitch angle, a yaw angle, and/or a roll angle.
  36. The camera of claim 34, characterized in that the jitter information is angle information describing the change in position of the image capture device when the captured image was obtained relative to a reference position; the jitter angle information includes: a pitch angle, a yaw angle, and/or a roll angle.
  37. The camera of claim 34, characterized in that the image processor is further configured to time-calibrate the obtained captured image and jitter information and, after the time calibration succeeds, determine the target image from the captured image according to the jitter information.
  38. The camera of claim 34, characterized in that the lens is a fisheye lens and the captured image is a fisheye image.
  39. The camera of claim 34, characterized in that the image processor is specifically configured to determine, according to the jitter angle information, a mapping from the image coordinate system of the captured image to a target coordinate system, and to determine the target image from the captured image according to the determined mapping from the image coordinate system of the captured image to the target coordinate system.
  40. The camera of claim 39, characterized in that the image processor is specifically configured to establish, according to the jitter angle information, a mapping between the image coordinate system of the captured image and a world coordinate system; determine, according to the jitter angle information, a mapping between the target coordinate system and the world coordinate system; and determine, via the world coordinate system, the mapping from the image coordinate system of the captured image to the target coordinate system.
  41. The camera of claim 40, characterized in that the image processor is specifically configured to establish, according to the jitter angle information, a mapping between the world coordinate system and a preset tangent-plane coordinate system; establish, according to the jitter angle information, a mapping between the target coordinate system and the preset tangent-plane coordinate system; and determine, via the preset tangent-plane coordinate system, the mapping between the target coordinate system and the world coordinate system.
  42. The camera of claim 41, characterized in that the tangent-plane coordinate system is the coordinate system of the plane tangent at an arbitrary point of a hemisphere centered at the lens center of the image capture device.
  43. The camera of any one of claims 39-42, characterized in that the image processor is specifically configured to obtain preset distortion correction parameters; map, according to the determined mapping from the image coordinate system of the captured image to the target coordinate system and the distortion correction parameters, every point of the captured image to the corresponding position in the target coordinate system; obtain the pixel value of every point of the captured image; and generate the target image from the obtained pixel values and positions.
  44. 一种图像处理装置,其特征在于,包括:
    第一获取模块,用于获取图像采集装置得到的采集图像;
    第二获取模块,用于获取所述图像采集装置拍摄时产生的与所述采集图像关联的抖动信息,其中,所述抖动信息包括抖动角度信息;
    第一处理模块,用于根据所述抖动信息和预置的畸变校正参数对所述采集图像进行畸变校正;
    第二处理模块,用于根据抖动信息从所述畸变校正后的图像中选择出目标图像。
  45. 如权利要求44所述的装置,其特征在于,所述第二处理模块所选择出的目标图像为所述畸变校正后的图像中的一部分图像。
  46. 如权利要求44所述的装置,其特征在于,
    所述第二处理模块,具体用于从校正后的图像中确定出初始区域,并根据所述抖动信息对所述初始区域进行抖动补偿,确定出目标区域并得到该目标区域所对应的目标图像。
  47. 如权利要求46所述的装置,其特征在于,
    所述第二处理模块,具体用于根据预置的视场角从所述畸变校正后的图像中确定出初始区域。
  48. 如权利要求44所述的装置,其特征在于,
    所述第二处理模块,具体用于根据抖动信息和预置的视场角,从所述畸变校正后的图像中确定出所述视场角所限定区域的图像作为目标图像。
  49. 如权利要求44所述的装置,其特征在于,所述抖动信息是指所述图像采集装置在得到所述采集图像时相对于上一次得到采集图像时的位置变化角度信息;所述抖动角度信息包括:俯仰角、偏航角、和/或横滚角。
  50. 如权利要求44所述的装置,其特征在于,所述抖动信息是指所述图像采集装置在得到所述采集图像时相对于参考位置的位置变化角度信息;所述抖动角度信息包括:俯仰角、偏航角、和/或横滚角。
  51. 如权利要求44所述的装置,其特征在于,还包括:
    校正模块,用于对获取的采集图像和抖动信息进行时间校准;在时间校准 成功后,向所述第一处理模块发出触发信息。
  52. 如权利要求44所述的装置,其特征在于,所述第一处理模块包括:
    关系确定单元,用于根据抖动角度信息确定出所述采集图像所在的图像坐标系到目标坐标系的映射关系;
    图像确定单元,用于根据确定出的所述采集图像所在的图像坐标系到目标坐标系的映射关系和预置的畸变校正参数对所述采集图像进行畸变校正,得到校正后的图像。
  53. 如权利要求52所述的装置,其特征在于,所述关系确定单元包括:
    第一确定子单元,用于根据抖动角度信息建立所述采集图像所在的图像坐标系到世界坐标系之间的映射关系;
    第二确定子单元,用于根据抖动角度信息确定出所述目标坐标系到所述世界坐标系之间的映射关系;
    第三确定子单元,用于根据世界坐标系,确定出所述采集图像所在的图像坐标系到目标坐标系的映射关系。
  54. 如权利要求53所述的装置,其特征在于,
    所述第二确定子单元,具体用于根据抖动角度信息建立世界坐标系到预置的切平面坐标系之间的映射关系;根据抖动角度信息建立目标坐标系到所述预置的切平面坐标系之间的映射关系;根据所述预置的切平面坐标系,确定出所述目标坐标系到所述世界坐标系之间的映射关系。
  55. 如权利要求54所述的装置,其特征在于,所述切平面坐标系是指:在以所述图像采集装置的镜头中心为球心的半球面上,任意一点所在的切平面坐标系。
  56. The apparatus of any one of claims 52 to 55, wherein
    the image determination unit is specifically configured to: map each point of the captured image to a corresponding location point in the target coordinate system according to the determined mapping relationship from the image coordinate system of the captured image to the target coordinate system and the distortion correction parameters; and obtain the pixel value of each point of the captured image and generate the corrected image according to the obtained pixel value and location point of each point.
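Claim 56's image generation — map every source point to its target location and deposit its pixel value there — can be sketched as a forward remap. Here `mapping` stands in for the combined coordinate-system mapping and distortion-correction parameters (any callable; the patent's actual model is not reproduced), and nearest-pixel rounding is used for brevity. A production implementation would typically inverse-map and interpolate to avoid holes.

```python
def remap(src, mapping, out_w, out_h):
    """Generate the corrected image by forward-mapping every source pixel.

    src: 2-D list of pixel values indexed as src[v][u].
    mapping(u, v) -> (x, y): target-coordinate location of source pixel.
    Target cells no source pixel lands on are left as None.
    """
    out = [[None] * out_w for _ in range(out_h)]
    for v, row in enumerate(src):
        for u, value in enumerate(row):
            x, y = mapping(u, v)
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < out_w and 0 <= yi < out_h:
                out[yi][xi] = value  # deposit the pixel value at its target location
    return out
```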
  57. A camera, comprising: a lens, an image sensor, and an image processor, wherein
    the image sensor is configured to capture image data through the lens; and
    the image processor, connected to the image sensor, is configured to: obtain a captured image from the image data captured by the image sensor; obtain jitter information that is associated with the captured image and generated during shooting, wherein the jitter information includes jitter angle information; perform distortion correction on the captured image according to the jitter information and preset distortion correction parameters; and select a target image from the distortion-corrected image according to the jitter information.
  58. The camera of claim 57, wherein the target image selected by the image processor is a portion of the distortion-corrected image.
  59. The camera of claim 57, wherein
    the image processor is specifically configured to determine an initial region from the corrected image, perform jitter compensation on the initial region according to the jitter information, determine a target region, and obtain the target image corresponding to the target region.
  60. The camera of claim 59, wherein
    the image processor is specifically configured to determine the initial region from the distortion-corrected image according to a preset field-of-view angle.
  61. The camera of claim 57, wherein
    the image processor is specifically configured to determine, from the distortion-corrected image according to the jitter information and a preset field-of-view angle, the image of the region delimited by the field-of-view angle as the target image.
  62. The camera of claim 57, wherein the jitter information is information on the angle by which the position of the image capture device changed between the time the captured image was obtained and the time the previous captured image was obtained; and the jitter angle information includes a pitch angle, a yaw angle, and/or a roll angle.
  63. The camera of claim 57, wherein the jitter information is information on the angle by which the position of the image capture device changed, relative to a reference position, at the time the captured image was obtained; and the jitter angle information includes a pitch angle, a yaw angle, and/or a roll angle.
  64. The camera of claim 57, wherein
    the image processor is further configured to perform time alignment on the obtained captured image and jitter information and, after the time alignment succeeds, to determine the target image from the captured image according to the jitter information.
  65. The camera of claim 57, wherein the lens is a fisheye lens and the captured image is a fisheye image.
  66. The camera of claim 57, wherein
    the image processor is specifically configured to: determine, according to the jitter angle information, a mapping relationship from the image coordinate system of the captured image to a target coordinate system; and perform distortion correction on the captured image according to the determined mapping relationship from the image coordinate system of the captured image to the target coordinate system and the preset distortion correction parameters, to obtain a corrected image.
  67. The camera of claim 66, wherein
    the image processor is specifically configured to: establish, according to the jitter angle information, a mapping relationship between the image coordinate system of the captured image and a world coordinate system; determine, according to the jitter angle information, a mapping relationship between the target coordinate system and the world coordinate system; and determine, based on the world coordinate system, the mapping relationship from the image coordinate system of the captured image to the target coordinate system.
  68. The camera of claim 67, wherein
    the image processor is specifically configured to: establish, according to the jitter angle information, a mapping relationship between the world coordinate system and a preset tangent-plane coordinate system; establish, according to the jitter angle information, a mapping relationship between the target coordinate system and the preset tangent-plane coordinate system; and determine, based on the preset tangent-plane coordinate system, the mapping relationship between the target coordinate system and the world coordinate system.
  69. The camera of claim 68, wherein the tangent-plane coordinate system is the coordinate system of the tangent plane at an arbitrary point on a hemisphere whose center is the lens center of the image capture device.
  70. The camera of any one of claims 66 to 69, wherein
    the image processor is specifically configured to: map each point of the captured image to a corresponding location point in the target coordinate system according to the determined mapping relationship from the image coordinate system of the captured image to the target coordinate system and the distortion correction parameters; and obtain the pixel value of each point of the captured image and generate the corrected image according to the obtained pixel value and location point of each point.
  71. An aerial photography aircraft, comprising: an aircraft body and a camera mounted on the aircraft body, the camera being the camera of any one of claims 34 to 43.
  72. An aerial photography aircraft, comprising: an aircraft body and a camera mounted on the aircraft body, the camera being the camera of any one of claims 57 to 70.
  73. An aerial photography aircraft, comprising: an aircraft body, an image capture device, and a processor, wherein the image capture device is mounted on the aircraft body, the processor is in data connection with the image capture device, and the processor comprises the image processing apparatus of any one of claims 25 to 33.
  74. An aerial photography aircraft, comprising: an aircraft body, an image capture device, and a processor, wherein the image capture device is mounted on the aircraft body, the processor is in data connection with the image capture device, and the processor comprises the image processing apparatus of any one of claims 44 to 56.
PCT/CN2015/085641 2015-07-31 2015-07-31 一种图像处理方法、装置及摄像机 WO2017020150A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2015/085641 WO2017020150A1 (zh) 2015-07-31 2015-07-31 一种图像处理方法、装置及摄像机
CN201580071829.3A CN107113376B (zh) 2015-07-31 2015-07-31 一种图像处理方法、装置及摄像机
US15/884,985 US10594941B2 (en) 2015-07-31 2018-01-31 Method and device of image processing and camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/085641 WO2017020150A1 (zh) 2015-07-31 2015-07-31 一种图像处理方法、装置及摄像机

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/884,985 Continuation US10594941B2 (en) 2015-07-31 2018-01-31 Method and device of image processing and camera

Publications (1)

Publication Number Publication Date
WO2017020150A1 true WO2017020150A1 (zh) 2017-02-09

Family

ID=57942212

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/085641 WO2017020150A1 (zh) 2015-07-31 2015-07-31 一种图像处理方法、装置及摄像机

Country Status (3)

Country Link
US (1) US10594941B2 (zh)
CN (1) CN107113376B (zh)
WO (1) WO2017020150A1 (zh)


Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018037944A (ja) * 2016-09-01 2018-03-08 ソニーセミコンダクタソリューションズ株式会社 撮像制御装置、撮像装置および撮像制御方法
CN109313455B (zh) * 2017-11-16 2021-09-28 深圳市大疆创新科技有限公司 智能眼镜及其控制云台的方法、云台、控制方法和无人机
CN109059843A (zh) * 2018-10-31 2018-12-21 中国矿业大学(北京) 一种具有摄像机角度检测功能的装置
CN110809781B (zh) * 2018-11-15 2024-02-27 深圳市大疆创新科技有限公司 一种图像处理方法、控制终端及存储介质
US10380724B1 (en) * 2019-01-28 2019-08-13 StradVision, Inc. Learning method and learning device for reducing distortion occurred in warped image generated in process of stabilizing jittered image by using GAN to enhance fault tolerance and fluctuation robustness in extreme situations
CN111213159A (zh) * 2019-03-12 2020-05-29 深圳市大疆创新科技有限公司 一种图像处理方法、装置及系统
CN110276734B (zh) * 2019-06-24 2021-03-23 Oppo广东移动通信有限公司 图像畸变校正方法和装置
CN112129317B (zh) * 2019-06-24 2022-09-02 南京地平线机器人技术有限公司 信息采集时间差确定方法、装置以及电子设备、存储介质
CN110415196B (zh) * 2019-08-07 2023-12-29 上海视云网络科技有限公司 图像校正方法、装置、电子设备及可读存储介质
CN110956585B (zh) * 2019-11-29 2020-09-15 深圳市英博超算科技有限公司 全景图像拼接方法、装置以及计算机可读存储介质
CN111669499B (zh) * 2020-06-12 2021-11-19 杭州海康机器人技术有限公司 一种视频防抖方法、装置及视频采集设备
CN112418086A (zh) * 2020-11-23 2021-02-26 浙江大华技术股份有限公司 一种规则框校正方法、装置、电子设备及存储介质
CN112489114B (zh) * 2020-11-25 2024-05-10 深圳地平线机器人科技有限公司 图像转换方法、装置、计算机可读存储介质及电子设备
CN113096192B (zh) * 2021-04-25 2024-05-07 西安四维图新信息技术有限公司 图像传感器内参标定方法、装置、设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101390383A (zh) * 2006-02-23 2009-03-18 松下电器产业株式会社 图像修正装置、方法、程序、集成电路、系统
CN102082910A (zh) * 2009-12-01 2011-06-01 索尼公司 图像拍摄设备、图像拍摄方法以及图像拍摄程序
JP2012252213A (ja) * 2011-06-03 2012-12-20 Fujifilm Corp 撮像装置、プログラム及び撮像装置のブレ補正方法
CN103414844A (zh) * 2013-08-27 2013-11-27 北京奇艺世纪科技有限公司 视频抖动修正方法及装置
CN103929635A (zh) * 2014-04-25 2014-07-16 哈尔滨工程大学 一种uuv纵横摇时的双目视觉图像补偿方法
CN104574567A (zh) * 2015-01-07 2015-04-29 苏州科技学院 一种车载立体影像记录装置及其信号处理方法
WO2015093083A1 (ja) * 2013-12-17 2015-06-25 オリンパス株式会社 撮像装置及び撮像装置の制御方法

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19942900B4 (de) * 1998-09-08 2004-01-22 Ricoh Company, Ltd. Vorrichtung zur Korrektur von Bildfehlern, die durch ein Kameraverwackeln hervorgerufen werden
US7015954B1 (en) * 1999-08-09 2006-03-21 Fuji Xerox Co., Ltd. Automatic video system using multiple cameras
US6833843B2 (en) * 2001-12-03 2004-12-21 Tempest Microsystems Panoramic imaging and display system with canonical magnifier
JP2004234379A (ja) * 2003-01-30 2004-08-19 Sony Corp 画像処理方法、画像処理装置及び画像処理方法を適用した撮像装置、表示装置
JP3862688B2 (ja) * 2003-02-21 2006-12-27 キヤノン株式会社 画像処理装置及び画像処理方法
JP2005252626A (ja) * 2004-03-03 2005-09-15 Canon Inc 撮像装置および画像処理方法
JP2005252625A (ja) * 2004-03-03 2005-09-15 Canon Inc 撮像装置および画像処理方法
JP2008020716A (ja) * 2006-07-13 2008-01-31 Pentax Corp 像ぶれ補正装置
JP5074322B2 (ja) * 2008-08-05 2012-11-14 オリンパス株式会社 画像処理装置、画像処理方法、画像処理プログラム、及び、撮像装置
JP2010050745A (ja) * 2008-08-21 2010-03-04 Canon Inc 画像処理装置およびその方法
JP5523017B2 (ja) * 2009-08-20 2014-06-18 キヤノン株式会社 画像処理装置及び画像処理方法
EP2640059B1 (en) * 2010-11-11 2018-08-29 Panasonic Intellectual Property Corporation of America Image processing device, image processing method and program
CN102714696B (zh) * 2010-11-11 2016-03-02 松下电器(美国)知识产权公司 图像处理装置、图像处理方法及摄影装置
CN102611849A (zh) * 2012-03-20 2012-07-25 深圳市金立通信设备有限公司 一种手机拍照防抖系统及方法
CN104380709B (zh) * 2012-06-22 2018-05-29 富士胶片株式会社 摄像装置及其动作控制方法
JP5762587B2 (ja) * 2013-04-15 2015-08-12 キヤノン株式会社 画像処理装置および画像処理方法
JP6257207B2 (ja) * 2013-08-01 2018-01-10 キヤノン株式会社 像振れ補正装置およびその制御方法、レンズ鏡筒、並びに撮像装置


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674665A (zh) * 2018-07-03 2020-01-10 杭州海康威视系统技术有限公司 图像处理方法、装置、森林防火系统及电子设备
CN110674665B (zh) * 2018-07-03 2023-06-30 杭州海康威视系统技术有限公司 图像处理方法、装置、森林防火系统及电子设备
CN109493391A (zh) * 2018-11-30 2019-03-19 Oppo广东移动通信有限公司 摄像头标定方法和装置、电子设备、计算机可读存储介质
CN115134537A (zh) * 2022-01-18 2022-09-30 长城汽车股份有限公司 一种图像处理方法、装置及车辆

Also Published As

Publication number Publication date
CN107113376A (zh) 2017-08-29
US10594941B2 (en) 2020-03-17
US20180160045A1 (en) 2018-06-07
CN107113376B (zh) 2019-07-19

Similar Documents

Publication Publication Date Title
WO2017020150A1 (zh) 一种图像处理方法、装置及摄像机
WO2021227359A1 (zh) 一种无人机投影方法、装置、设备及存储介质
US10645284B2 (en) Image processing device, image processing method, and recording medium storing program
CN111344644B (zh) 用于基于运动的自动图像捕获的技术
CN109309796B (zh) 使用多个相机获取图像的电子装置和用其处理图像的方法
WO2019113966A1 (zh) 一种避障方法、装置和无人机
CN111800589B (zh) 图像处理方法、装置和系统,以及机器人
JP2019510234A (ja) 奥行き情報取得方法および装置、ならびに画像取得デバイス
WO2019183845A1 (zh) 云台的控制方法、装置、系统、计算机存储介质及无人机
KR102452575B1 (ko) 광학식 이미지 안정화 움직임에 의한 이미지의 변화를 보상하기 위한 장치 및 방법
JP2017017689A (ja) 全天球動画の撮影システム、及びプログラム
WO2021104308A1 (zh) 全景深度测量方法、四目鱼眼相机及双目鱼眼相机
CN112204946A (zh) 数据处理方法、装置、可移动平台及计算机可读存储介质
WO2020024182A1 (zh) 一种参数处理方法、装置及摄像设备、飞行器
WO2023236508A1 (zh) 一种基于亿像素阵列式相机的图像拼接方法及系统
WO2020019175A1 (zh) 图像处理方法和设备、摄像装置以及无人机
US11657477B2 (en) Image processing device, image processing system, imaging device, image processing method, and recording medium storing program code
US11128814B2 (en) Image processing apparatus, image capturing apparatus, video reproducing system, method and program
CN114390186A (zh) 视频拍摄方法及电子设备
WO2020062024A1 (zh) 基于无人机的测距方法、装置及无人机
US20210092306A1 (en) Movable body, image generation method, program, and recording medium
WO2021052217A1 (zh) 一种进行图像处理和框架体控制的控制装置
WO2017057426A1 (ja) 投影装置、コンテンツ決定装置、投影方法、および、プログラム
TW201911239A (zh) 立體環景影片產生方法及裝置
CN112313942A (zh) 一种进行图像处理和框架体控制的控制装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15899929

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15899929

Country of ref document: EP

Kind code of ref document: A1