WO2019000664A1 - Information processing method and electronic device - Google Patents

Information processing method and electronic device

Info

Publication number
WO2019000664A1
WO2019000664A1 (PCT/CN2017/102938)
Authority
WO
WIPO (PCT)
Prior art keywords
trajectory information
camera module
electronic device
corrected
image
Prior art date
Application number
PCT/CN2017/102938
Other languages
English (en)
French (fr)
Inventor
张帆 (ZHANG, Fan)
Original Assignee
联想(北京)有限公司 (Lenovo (Beijing) Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 联想(北京)有限公司 (Lenovo (Beijing) Limited)
Publication of WO2019000664A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/64 - Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681 - Motion detection
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/68 - Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682 - Vibration or motion blur correction

Definitions

  • The present invention relates to information processing technologies in the field of communications, and in particular to an information processing method and an electronic device.
  • The main object of the present invention is to provide an information processing method and an electronic device, so as to solve the above problems in the prior art.
  • To this end, the present invention provides an information processing method, which is applied to an electronic device and includes:
  • turning on a second camera module to capture a first object to obtain at least two candidate images of the first object;
  • An embodiment of the present invention further provides an electronic device, where the electronic device includes a first camera module and a second camera module, and the electronic device further includes:
  • a control unit configured to: turn on the second camera module to capture a first object to obtain at least two candidate images of the first object; and, when a shooting instruction for the first camera module is detected, control the first camera module to capture a second object to obtain an initial image of the second object;
  • a calculating unit configured to select at least two reference images from the at least two candidate images of the first object based on the occurrence time of the shooting instruction for the first camera module, and to calculate, based on the selected at least two reference images, first trajectory information generated by the electronic device when the shooting instruction is executed;
  • a correcting unit configured to correct the initial image of the second object based on the first trajectory information to obtain a corrected image of the second object.
  • The first camera module captures an initial image of the second object, and the trajectory information of the electronic device is then calculated at least from the reference images captured by the second camera module; the initial image is further corrected based on the trajectory information of the electronic device to obtain a corrected image.
  • In this way, when the electronic device shakes while capturing an image, the images captured by the other camera module are used to correct the image, which avoids the inaccurate correction caused by sensor parameter drift when the correction is performed by a sensor and improves the accuracy of the corrected image.
  • FIG. 1 is a schematic flowchart of an information processing method according to an embodiment of the present invention.
  • FIG. 2 is a first schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 3 is a second schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a scenario of an embodiment of the present invention.
  • the techniques of this disclosure may be implemented in the form of hardware and/or software (including firmware, microcode, etc.). Additionally, the techniques of this disclosure may take the form of a computer program product on a computer readable medium storing instructions for use by or in connection with an instruction execution system.
  • a computer readable medium can be any medium that can contain, store, communicate, propagate or transport the instructions.
  • a computer readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium.
  • Specific examples of the computer readable medium include: a magnetic storage device such as a magnetic tape or a hard disk (HDD); an optical storage device such as a compact disk (CD-ROM); a memory such as a random access memory (RAM) or a flash memory; and/or a wired/wireless communication link.
  • An embodiment of the present invention provides an information processing method, which is applied to an electronic device, where the electronic device includes a first camera module and a second camera module. As shown in FIG. 1, the method includes:
  • Step 101: turning on the second camera module to capture a first object to obtain at least two candidate images of the first object;
  • Step 102: when a shooting instruction for the first camera module is detected, controlling the first camera module to capture a second object to obtain an initial image of the second object;
  • Step 103: selecting at least two reference images from the at least two candidate images of the first object based on the occurrence time of the shooting instruction for the first camera module, and calculating, based on the selected at least two reference images, first trajectory information generated by the electronic device when the shooting instruction is executed;
  • Step 104: correcting the initial image of the second object based on the first trajectory information to obtain a corrected image of the second object.
  • The camera modules in the electronic device may be cameras.
  • The first and second camera modules may be two front cameras, two rear cameras, or one front camera and one rear camera; the possibilities are not exhaustively listed here.
  • The second camera module may be turned on when the electronic device is started and kept in the working state, or the second camera module may be turned on when step 102 is performed.
  • The first object may be captured on a shooting period, and the shooting period may be 1 ms or less, which is not limited here.
  • In step 102, the second object is captured; only one picture may be taken, giving one initial image of the second object.
  • In step 103, at least two reference images are selected from the at least two candidate images of the first object based on the occurrence time of the shooting instruction for the first camera module, and the first trajectory information generated by the electronic device when the shooting instruction is executed is calculated based on the selected at least two reference images.
  • The occurrence time of the shooting instruction is taken as the center time, N ms before and after it as the time period, and a duration of 2N ms is selected as the duration over which the reference images are chosen;
  • the first trajectory information generated by the electronic device when the shooting instruction is executed is then calculated based on the reference images.
  • Alternatively, when a shooting instruction for the first camera module is detected, the second camera module is controlled to capture the first object to obtain at least two candidate images of the first object (that is, step 101 is performed at the same time as step 102);
  • the shooting duration of the second camera module is set to a preset duration, for example M ms (which may be 10 ms or longer and is not limited here);
  • all the candidate images captured by the second camera module within the shooting duration are used as reference images, and the first trajectory information generated by the electronic device when the shooting instruction is executed is calculated based on the selected at least two reference images.
  • Step 104 is then executed: the initial image of the second object is corrected by using the first trajectory information to obtain a corrected image of the second object.
  • Correcting the image by using the first trajectory information may include: from the corrected w'xi, w'yi, w'zi and sxi, syi, szi, the trajectory of the first camera is obtained, which is the blur kernel (point spread function). Finally, the classic Wiener filter can be used to restore the image before blurring, which can be calculated as follows:
  • I' = F{I}·F{psf}* / (|F{psf}|^2 + Φ), where I denotes the initial image, psf the blur kernel, F{} the Fourier transform, F{}* its complex conjugate, |F{psf}| the modulus and Φ a preset constant;
  • the inverse Fourier transform is then applied to I' to obtain the final corrected image.
  • Electronic devices usually have front and rear cameras, and the angular velocity is the same at different points of a rigid body.
  • The image sequence of the second camera can therefore be used to assist in estimating the rotation component of the first camera, while the translation component of the first camera is estimated from the calibrated distance information; the trajectory information of the electronic device is thus obtained and the corrected image of the second object is obtained.
  • The first camera module captures an initial image of the second object, and the trajectory information of the electronic device can then be obtained at least from the reference images captured by the second camera module; the initial image is corrected based on this trajectory information to obtain a corrected image.
  • In this way, when the electronic device shakes while capturing an image, the images captured by the other camera module are used to correct the image, which avoids the inaccurate correction caused by sensor parameter drift when the correction is performed by a sensor and improves the accuracy of the corrected image.
  • An embodiment of the present invention provides an information processing method, which is applied to an electronic device, where the electronic device includes a first camera module and a second camera module. As shown in FIG. 1, the method includes:
  • Step 101: turning on the second camera module to capture a first object to obtain at least two candidate images of the first object;
  • Step 102: when a shooting instruction for the first camera module is detected, controlling the first camera module to capture a second object to obtain an initial image of the second object;
  • Step 103: selecting at least two reference images from the at least two candidate images of the first object based on the occurrence time of the shooting instruction for the first camera module, and calculating, based on the selected at least two reference images, first trajectory information generated by the electronic device when the shooting instruction is executed;
  • Step 104: correcting the initial image of the second object based on the first trajectory information to obtain a corrected image of the second object.
  • The camera modules in the electronic device may be cameras.
  • The first and second camera modules may be two front cameras, two rear cameras, or one front camera and one rear camera; the possibilities are not exhaustively listed here.
  • The difference between this embodiment and the first embodiment is that, in order to correct the blurred photo captured by the first camera, the second camera and a first sensor (a gyroscope) work simultaneously: the second camera enters a burst mode and captures a low-resolution, high-frame-rate image sequence, while the gyroscope records the rotational angular velocity of the device at a higher sampling frequency.
  • The second camera module may be turned on when the electronic device is started and kept in the working state, or may be turned on when step 102 is performed; step 101 may also be started when a sensor detects that the electronic device shakes, where the shaking can be detected by an acceleration sensor, for example by determining that the electronic device shakes when a large acceleration is detected within a short time.
  • The first object may be captured on a shooting period, and the shooting period may be 1 ms or less, which is not limited here.
  • In step 102, the second object is captured; only one picture may be taken, giving one initial image of the second object.
  • In step 103, at least two reference images are selected from the at least two candidate images of the first object based on the occurrence time of the shooting instruction for the first camera module, and the first trajectory information generated by the electronic device when the shooting instruction is executed is calculated based on the selected at least two reference images; the difference from the first embodiment is that, in addition to the foregoing two modes, this embodiment can also use a third mode that involves the sensor:
  • The method further includes: collecting angular velocities to obtain the angular velocities and sampling times of at least two sampling points; the angular velocity collection by the first sensor may run all the time while only buffering the angular velocities and sampling times within a certain duration, or may be started under control when step 102 is performed.
  • The method further includes:
  • selecting at least two reference sampling points based on the occurrence time of the shooting instruction for the first camera module and the sampling times of the at least two sampling points, which may be: taking the window of N ms before and after the occurrence time of the shooting instruction (2N ms in total, where N is an integer) as the selection duration, and then selecting the reference sampling points that fall within this window from the buffered sampling points and their corresponding sampling times.
  • The initial image of the second object is corrected based on the first trajectory information to obtain a corrected image of the second object, including:
  • correcting the initial image of the second object based on the corrected second trajectory information to obtain a corrected image of the second object.
  • Correcting the second trajectory information based on at least two translation vectors and/or at least two rotation angles in the first trajectory information includes:
  • the method may further include: calculating, based on at least two translation vectors in the first trajectory information, the translation vectors corresponding to at least two reference sampling points;
  • the homography matrix H is decomposed into a rotation matrix R, a translation vector s and a normal vector n, which can be done as follows: H = K2·(R + s·n^T)·K2^(-1), giving the spatial homography
  • H2 = [R s; 0 0 0 1].
  • Next, the second trajectory information of the camera is calculated from the data collected at the sampling points of the first sensor, that is, the gyroscope. The specific steps are as follows:
  • the correction method can be, but is not limited to, a linear correction:
  • w'xi = (wx – wx1 – wx2 – … – wxT)(t(i+1) – ti)/(tT – t1)
  • w'yi = (wy – wy1 – wy2 – … – wyT)(t(i+1) – ti)/(tT – t1), and similarly for w'zi
  • the estimation method can be, but is not limited to, a linear method: sxi = sx(t(i+1) – ti)/(tT – t1), and similarly for syi and szi.
  • Determining the calibrated homography Hd between the two cameras may include:
  • fixing two test charts at two selected positions and selecting two positions for placing the mobile phone or other camera device, ensuring that when the phone is at either position its two cameras can each capture a complete test chart;
  • when the phone is fixed at the first position, the first camera and the second camera capture the corresponding test charts to obtain images I11 and I12; when the phone moves to the second position, images I21 and I22 are obtained.
  • H1 is calculated from I11 and I12; H2 is calculated from I21 and I22; solving H1·Hd = Hd·H2 yields Hd.
  • Different positions can be chosen to capture multiple sets {I11, I12, I21, I22}, and a more stable Hd can be calculated using least-squares estimation.
  • Step 104 is then executed: the initial image of the second object is corrected by using the first trajectory information to obtain a corrected image of the second object.
  • Correcting the image by using the first trajectory information may include: from the corrected w'xi, w'yi, w'zi and sxi, syi, szi, the trajectory of the first camera is obtained, which is the blur kernel (point spread function).
  • The classic Wiener filter can then be used to restore the image before blurring:
  • I' = F{I}·F{psf}* / (|F{psf}|^2 + Φ), where I denotes the initial image, psf the blur kernel, F{} the Fourier transform, F{}* its complex conjugate, |F{psf}| the modulus and Φ a preset constant.
  • Electronic devices usually have front and rear cameras, and the angular velocity is the same at different points of a rigid body.
  • The image sequence of the second camera can therefore be used to assist in estimating the rotation component of the first camera, while the translation component of the first camera is estimated from the calibrated distance information; the trajectory information of the electronic device is thus obtained and the corrected image of the second object is obtained.
  • The first camera module captures an initial image of the second object, and the trajectory information of the electronic device can then be obtained at least from the reference images captured by the second camera module; the initial image is corrected based on this trajectory information to obtain a corrected image.
  • In this way, when the electronic device shakes while capturing an image, the images captured by the other camera module are used to correct the image, which avoids the inaccurate correction caused by sensor parameter drift when the correction is performed by a sensor and improves the accuracy of the corrected image.
  • An embodiment of the present invention provides an electronic device. As shown in FIG. 2, the electronic device includes a first camera module 21 and a second camera module 22, and further includes:
  • a control unit 23 configured to turn on the second camera module to capture a first object to obtain at least two candidate images of the first object, and, when a shooting instruction for the first camera module is detected, control the first camera module to capture a second object to obtain an initial image of the second object;
  • a calculating unit 24 configured to select at least two reference images from the at least two candidate images of the first object based on the occurrence time of the shooting instruction for the first camera module, and to calculate, based on the selected at least two reference images, first trajectory information generated by the electronic device when the shooting instruction is executed;
  • a correcting unit 25 configured to correct the initial image of the second object based on the first trajectory information to obtain a corrected image of the second object.
  • The camera modules in the electronic device may be cameras.
  • The first and second camera modules may be two front cameras, two rear cameras, or one front camera and one rear camera; the possibilities are not exhaustively listed here.
  • The control unit 23 may capture the first object on a shooting period, and the shooting period may be 1 ms or less, which is not limited here.
  • The capture of the second object may consist of taking only one picture, giving one initial image of the second object.
  • The occurrence time of the shooting instruction is taken as the center time, N ms before and after it as the time period, and a duration of 2N ms is selected as the duration over which the reference images are chosen;
  • N is an integer; for example, the window may be 5 ms on each side, that is, the candidate images within 10 ms around the occurrence time of the shooting instruction are selected as the reference images;
  • the first trajectory information generated by the electronic device when the shooting instruction is executed is then calculated based on the reference images.
  • The shooting duration of the second camera module is set to a preset duration, for example M ms (which may be 10 ms or longer and is not limited here);
  • all the candidate images captured by the second camera module within the shooting duration are used as reference images, and the first trajectory information generated by the electronic device when the shooting instruction is executed is calculated based on the selected at least two reference images.
  • The correcting unit then corrects the image by using the first trajectory information, which may include: from the corrected w'xi, w'yi, w'zi and sxi, syi, szi, the trajectory of the first camera is obtained, which is the blur kernel (point spread function).
  • The classic Wiener filter can then be used to restore the image before blurring:
  • I' = F{I}·F{psf}* / (|F{psf}|^2 + Φ), where I denotes the initial image, psf the blur kernel, F{} the Fourier transform, F{}* its complex conjugate, |F{psf}| the modulus and Φ a preset constant.
  • Electronic devices usually have front and rear cameras, and the angular velocity is the same at different points of a rigid body.
  • The image sequence of the second camera can therefore be used to assist in estimating the rotation component of the first camera, while the translation component of the first camera is estimated from the calibrated distance information; the trajectory information of the electronic device is thus obtained and the corrected image of the second object is obtained.
  • The first camera module captures an initial image of the second object, and the trajectory information of the electronic device can then be obtained at least from the reference images captured by the second camera module; the initial image is corrected based on this trajectory information to obtain a corrected image.
  • In this way, when the electronic device shakes while capturing an image, the images captured by the other camera module are used to correct the image, which avoids the inaccurate correction caused by sensor parameter drift when the correction is performed by a sensor and improves the accuracy of the corrected image.
  • This embodiment of the present invention differs from the third embodiment in that, in order to remove the blur from the photo captured by the first camera, the second camera and a first sensor (a gyroscope) work simultaneously: the second camera enters a burst mode and captures a low-resolution, high-frame-rate image sequence, while the gyroscope records the rotational angular velocity of the device at a higher sampling frequency.
  • The electronic device of this embodiment further includes: a sensing unit 26 configured to collect angular velocities to obtain the angular velocities and sampling times of at least two sampling points;
  • the calculating unit is configured to select at least two reference sampling points based on the occurrence time of the shooting instruction for the first camera module and the sampling times of the at least two sampling points, and to calculate, based on the angular velocities corresponding to the at least two reference sampling points, second trajectory information generated by the electronic device when the shooting instruction is executed, where the second trajectory information at least includes rotation angles between at least two adjacent reference sampling points.
  • The angular velocity collection by the first sensor may run all the time while only buffering the angular velocities and sampling times within a certain duration, or may be started under control when step 102 is performed.
  • The correcting unit is configured to correct the second trajectory information based on at least two translation vectors and/or at least two rotation angles in the first trajectory information;
  • the initial image of the second object is corrected based on the corrected second trajectory information to obtain a corrected image of the second object.
  • Correcting the second trajectory information based on at least two translation vectors and/or at least two rotation angles in the first trajectory information includes:
  • the calculating unit is configured to calculate, based on at least two translation vectors in the first trajectory information, the translation vectors corresponding to at least two reference sampling points;
  • the homography matrix H is decomposed into a rotation matrix R, a translation vector s and a normal vector n, which can be done as follows: H = K2·(R + s·n^T)·K2^(-1), giving the spatial homography
  • H2 = [R s; 0 0 0 1].
  • Next, the second trajectory information of the camera is calculated from the data collected at the sampling points of the first sensor, that is, the gyroscope. The specific steps are as follows:
  • the correction method can be, but is not limited to, a linear correction:
  • w'xi = (wx – wx1 – wx2 – … – wxT)(t(i+1) – ti)/(tT – t1)
  • w'yi = (wy – wy1 – wy2 – … – wyT)(t(i+1) – ti)/(tT – t1), and similarly for w'zi
  • the estimation method can be, but is not limited to, a linear method: sxi = sx(t(i+1) – ti)/(tT – t1), and similarly for syi and szi.
  • Determining the calibrated homography Hd between the two cameras may include:
  • fixing two test charts at two selected positions and selecting two positions for placing the mobile phone or other camera device, ensuring that when the phone is at either position its two cameras can each capture a complete test chart;
  • when the phone is fixed at the first position, the first camera and the second camera capture the corresponding test charts to obtain images I11 and I12; when the phone moves to the second position, images I21 and I22 are obtained.
  • H1 is calculated from I11 and I12; H2 is calculated from I21 and I22; solving H1·Hd = Hd·H2 yields Hd.
  • Different positions can be chosen to capture multiple sets {I11, I12, I21, I22}, and a more stable Hd can be calculated using least-squares estimation.
  • The correcting unit finally uses the first trajectory information to correct the initial image of the second object to obtain a corrected image of the second object.
  • Correcting the image by using the first trajectory information may include: from the corrected w'xi, w'yi, w'zi and sxi, syi, szi, the trajectory of the first camera is obtained, which is the blur kernel (point spread function). Finally, the classic Wiener filter can be used to restore the image before blurring, which can be calculated as follows:
  • I' = F{I}·F{psf}* / (|F{psf}|^2 + Φ), where I denotes the initial image, psf the blur kernel, F{} the Fourier transform, F{}* its complex conjugate, |F{psf}| the modulus and Φ a preset constant;
  • the inverse Fourier transform is then applied to I' to obtain the final corrected image.
  • Electronic devices usually have front and rear cameras, and the angular velocity is the same at different points of a rigid body.
  • The image sequence of the second camera can therefore be used to assist in estimating the rotation component of the first camera, while the translation component of the first camera is estimated from the calibrated distance information; the trajectory information of the electronic device is thus obtained and the corrected image of the second object is obtained.
  • The first camera module captures the second object, that is, a certain person, and obtains an initial image; at the same time, the images captured by the second camera module around the shooting time are extracted.
  • The four images within 10 ms of the shooting time are images 1-4, and the first trajectory information is obtained based on images 1-4; the first sensor simultaneously obtains the angular velocities of the sampling points in three dimensions, giving the second trajectory information.
  • Although the figure does not show how the initial image is corrected, it can be seen from the foregoing processing of this embodiment that the initial image is corrected based on the first trajectory information and the second trajectory information obtained from the second camera module and the sensor respectively.
  • The first camera module captures the second object to obtain an initial image, the trajectory information of the electronic device can then be calculated at least from the reference images captured by the second camera module, and the corrected image can be obtained by correcting the initial image based on the trajectory information of the electronic device.
  • In this way, when the electronic device shakes while capturing an image, the images captured by the other camera module are used to correct the image, which avoids the inaccurate correction caused by sensor parameter drift when the correction is performed by a sensor and improves the accuracy of the corrected image.
  • the above various units may be combined and implemented in one unit, or any one of the units may be split into a plurality of units.
  • at least some of the functions of one or more of the units may be combined with at least some of the functions of the other units and implemented in one unit.
  • at least one of the above units may be implemented at least partially as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on a substrate, a system on a package or an application-specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable way of integrating or packaging a circuit, or by a suitable combination of software, hardware and firmware.
  • at least one of the above-described respective units may be implemented at least in part as a computer program element, and when the program is executed by a computer, the functions of the respective units may be performed.
  • The methods of the foregoing embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • Based on such an understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc),
  • which includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, an apparatus, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention discloses an information processing method and an electronic device. The method includes: turning on a second camera module to capture a first object to obtain at least two candidate images of the first object; when a shooting instruction for a first camera module is detected, controlling the first camera module to capture a second object to obtain an initial image of the second object; based on the occurrence time of the shooting instruction for the first camera module, selecting at least two reference images from the at least two candidate images of the first object, and calculating, based on the selected at least two reference images, first trajectory information generated by the electronic device when the shooting instruction is executed; and correcting the initial image of the second object based on the first trajectory information to obtain a corrected image of the second object.

Description

Information processing method and electronic device
Technical Field
The present invention relates to information processing technologies in the field of communications, and in particular to an information processing method and an electronic device.
Background
When a user takes a snapshot with an electronic device, especially one with a shooting function, hand shake is often unavoidable and the photo comes out blurred. In such cases shooting with a tripod, picking or fusing sharp photos from a burst, and optical image stabilization are all ineffective. Single-image deblurring methods can alleviate the blur; although handheld devices are equipped with inertial sensors (including a gyroscope and an accelerometer), sensor noise and drift still cause the trajectory and the blur kernel to be estimated inaccurately.
Summary
The main object of the present invention is to provide an information processing method and an electronic device, so as to solve the above problems in the prior art.
To achieve the above object, the present invention provides an information processing method, applied to an electronic device, including:
turning on a second camera module to capture a first object to obtain at least two candidate images of the first object;
when a shooting instruction for a first camera module is detected, controlling the first camera module to capture a second object to obtain an initial image of the second object;
based on the occurrence time of the shooting instruction for the first camera module, selecting at least two reference images from the at least two candidate images of the first object, and calculating, based on the selected at least two reference images, first trajectory information generated by the electronic device when the shooting instruction is executed;
correcting the initial image of the second object based on the first trajectory information to obtain a corrected image of the second object.
An embodiment of the present invention further provides an electronic device, which includes a first camera module and a second camera module, and further includes:
a control unit configured to turn on the second camera module to capture a first object to obtain at least two candidate images of the first object, and, when a shooting instruction for the first camera module is detected, control the first camera module to capture a second object to obtain an initial image of the second object;
a calculating unit configured to select, based on the occurrence time of the shooting instruction for the first camera module, at least two reference images from the at least two candidate images of the first object, and calculate, based on the selected at least two reference images, first trajectory information generated by the electronic device when the shooting instruction is executed;
a correcting unit configured to correct the initial image of the second object based on the first trajectory information to obtain a corrected image of the second object.
With the information processing method and electronic device proposed by the present invention, the first camera module captures the second object to obtain an initial image, the trajectory information of the electronic device can then be calculated at least from the reference images captured by the second camera module, and the initial image is corrected based on the trajectory information of the electronic device to obtain a corrected image. In this way, when the electronic device shakes while taking a picture, the images captured by the other camera module are used to correct the picture, which avoids the inaccurate correction caused by sensor parameter drift when a sensor performs the correction, and improves the accuracy of the corrected image.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an information processing method according to an embodiment of the present invention;
FIG. 2 is a first schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 3 is a second schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a scenario according to an embodiment of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
The terms used herein are only for describing specific embodiments and are not intended to limit the present disclosure. The terms "comprise", "include" and the like used herein indicate the presence of the stated features, steps, operations and/or components, but do not exclude the presence or addition of one or more other features, steps, operations or components.
All terms used herein (including technical and scientific terms) have the meanings commonly understood by those skilled in the art, unless otherwise defined. It should be noted that the terms used herein should be interpreted as having meanings consistent with the context of this specification, and should not be interpreted in an idealized or overly rigid manner.
Where an expression such as "at least one of A, B and C" is used, it should generally be interpreted in the sense in which those skilled in the art commonly understand it (for example, "a system having at least one of A, B and C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B and C, etc.). Where an expression such as "at least one of A, B or C" is used, it should likewise be interpreted in the sense in which those skilled in the art commonly understand it (for example, "a system having at least one of A, B or C" shall include, but not be limited to, systems having A alone, B alone, C alone, A and B, A and C, B and C, and/or A, B and C, etc.). Those skilled in the art should further understand that virtually any disjunctive conjunction and/or phrase presenting two or more alternative items, whether in the specification, the claims or the drawings, should be understood to contemplate the possibility of including one of the items, either of the items, or both items. For example, the phrase "A or B" should be understood to include the possibility of "A", or "B", or "A and B".
Some block diagrams and/or flowcharts are shown in the accompanying drawings. It should be understood that some blocks of the block diagrams and/or flowcharts, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer or other programmable data processing apparatus, so that the instructions, when executed by the processor, create means for implementing the functions/operations described in the block diagrams and/or flowcharts.
Therefore, the techniques of the present disclosure may be implemented in the form of hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of the present disclosure may take the form of a computer program product on a computer-readable medium storing instructions, the computer program product being usable by, or in connection with, an instruction execution system. In the context of the present disclosure, a computer-readable medium may be any medium that can contain, store, communicate, propagate or transport instructions. For example, the computer-readable medium may include, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus, device or propagation medium. Specific examples of the computer-readable medium include: a magnetic storage device such as a magnetic tape or a hard disk (HDD); an optical storage device such as an optical disc (CD-ROM); a memory such as a random access memory (RAM) or a flash memory; and/or a wired/wireless communication link.
Embodiment 1
An embodiment of the present invention provides an information processing method, applied to an electronic device, where the electronic device includes a first camera module and a second camera module. As shown in FIG. 1, the method includes:
Step 101: turning on the second camera module to capture a first object to obtain at least two candidate images of the first object;
Step 102: when a shooting instruction for the first camera module is detected, controlling the first camera module to capture a second object to obtain an initial image of the second object;
Step 103: based on the occurrence time of the shooting instruction for the first camera module, selecting at least two reference images from the at least two candidate images of the first object, and calculating, based on the selected at least two reference images, first trajectory information generated by the electronic device when the shooting instruction is executed;
Step 104: correcting the initial image of the second object based on the first trajectory information to obtain a corrected image of the second object.
Here, the camera modules in the electronic device may be cameras; the first and second camera modules may be two front cameras, two rear cameras, or one front camera and one rear camera, and the possibilities are not exhaustively listed here.
It should be noted that, in step 101 of this embodiment, the second camera module may be turned on when the electronic device is started and kept in the working state, or may be turned on when step 102 is performed.
Further, in step 101, the first object may be captured on a shooting period, and the shooting period may be 1 ms or less, which is not limited here.
In the above step 102, the capture of the second object may consist of taking only one picture, giving one initial image of the second object.
In step 103, selecting at least two reference images from the at least two candidate images of the first object based on the occurrence time of the shooting instruction for the first camera module, and calculating, based on the selected at least two reference images, the first trajectory information generated by the electronic device when the shooting instruction is executed, can be done in two modes:
Mode 1:
When the first camera module shoots, the occurrence time of the shooting instruction is taken as the center time, N ms before and after it as the time period, and a duration of 2N ms is selected as the duration over which the reference images are chosen, where N is an integer; for example, N = 5, that is, the candidate images within 10 ms around the occurrence time of the shooting instruction are selected as the reference images;
then the first trajectory information generated by the electronic device when the shooting instruction is executed is calculated based on the reference images; a minimal selection sketch follows.
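As an illustration of Mode 1, the sketch below selects the buffered candidate frames whose timestamps fall within N ms on either side of the shutter time. The frame-buffer representation (a list of (timestamp, image) pairs) and the function name are assumptions made for this example, not structures defined by the embodiment.

```python
def select_reference_frames(frame_buffer, shutter_time_ms, n_ms=5):
    """Return the candidate frames captured within [shutter - N ms, shutter + N ms].

    frame_buffer    : list of (timestamp_ms, image) pairs from the second camera module
    shutter_time_ms : occurrence time of the shooting instruction for the first module
    n_ms            : half-width of the selection window (N = 5 gives the 10 ms window above)
    """
    lo, hi = shutter_time_ms - n_ms, shutter_time_ms + n_ms
    return [frame for t, frame in frame_buffer if lo <= t <= hi]
```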
Mode 2:
When it is determined that a shooting instruction for the first camera module is detected, the second camera module is controlled to capture the first object to obtain at least two candidate images of the first object (that is, step 101 is performed at the same time as step 102);
then the shooting duration of the second camera module is set to a preset duration, for example M ms (which may be 10 ms or longer and is not limited here);
all candidate images captured by the second camera module within the shooting duration are used as reference images, and the first trajectory information generated by the electronic device when the shooting instruction is executed is calculated based on the selected at least two reference images.
Finally, step 104 is performed: the initial image of the second object is corrected by using the first trajectory information, and a corrected image of the second object is obtained.
Specifically, correcting the image by using the first trajectory information may include: from the corrected w'xi, w'yi, w'zi and sxi, syi, szi we can obtain the trajectory of the first camera, which is the blur kernel (point spread function). Finally, the classic Wiener filter can be used to restore the image before blurring, which can be calculated as follows:
I' = F{I}·F{psf}* / (|F{psf}|^2 + Φ); where w'xi, w'yi, w'zi and sxi, syi, szi denote the rotation angles and translation vectors in the three dimensions respectively, I denotes the initial image, psf is a three-dimensional matrix, F{} denotes the Fourier transform, |F{psf}| denotes the modulus, Φ is a preset constant, and F{}* denotes the complex conjugate;
I' is then inverse-Fourier-transformed to obtain the final corrected image.
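For illustration, the numpy sketch below builds a blur kernel from per-sample image-plane shifts and applies the Wiener formula above to a grayscale image. How the rotation angles and translations are projected to pixel shifts, the kernel size and the value of Φ (phi) are assumptions made for the example; the embodiment itself describes psf as a three-dimensional matrix, while the sketch uses a single 2-D kernel.

```python
import numpy as np

def psf_from_trajectory(dx, dy, size=31):
    """Accumulate per-sample image-plane shifts (in pixels) into a blur kernel."""
    psf = np.zeros((size, size))
    c = size // 2
    for x, y in zip(dx, dy):
        ix, iy = int(round(c + x)), int(round(c + y))
        if 0 <= ix < size and 0 <= iy < size:    # shifts are assumed to fit inside the kernel
            psf[iy, ix] += 1.0
    return psf / psf.sum()

def wiener_deblur(image, psf, phi=1e-2):
    """I' = F{I} * conj(F{psf}) / (|F{psf}|^2 + phi), followed by the inverse transform."""
    H = np.fft.fft2(psf, s=image.shape)          # F{psf}, zero-padded to the image size
    G = np.fft.fft2(image)                       # F{I}
    I_hat = G * np.conj(H) / (np.abs(H) ** 2 + phi)
    restored = np.real(np.fft.ifft2(I_hat))      # inverse Fourier transform of I'
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    return np.roll(restored, shift=(-cy, -cx), axis=(0, 1))  # undo the kernel-centre offset
```

A call such as wiener_deblur(blurred, psf_from_trajectory(dx, dy)) would then return the restored image.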
Electronic devices, especially smartphones, usually have front and rear cameras, and the angular velocity is the same at different points of a rigid body. The image sequence information of the second camera can therefore be used to assist in estimating the rotation component of the first camera, while the translation component of the first camera is estimated from the pre-calibrated distance information, so that the trajectory information of the electronic device is obtained and the corrected image of the second object is obtained.
It can be seen that, with the above solution, the first camera module captures the second object to obtain an initial image; the trajectory information of the electronic device can then be calculated at least from the reference images captured by the second camera module, and the initial image is corrected based on this trajectory information to obtain a corrected image. In this way, when the electronic device shakes while taking a picture, the images captured by the other camera module are used to correct the picture, which avoids the inaccurate correction caused by sensor parameter drift when a sensor performs the correction, and improves the accuracy of the corrected image.
Embodiment 2
An embodiment of the present invention provides an information processing method, applied to an electronic device, where the electronic device includes a first camera module and a second camera module. As shown in FIG. 1, the method includes:
Step 101: turning on the second camera module to capture a first object to obtain at least two candidate images of the first object;
Step 102: when a shooting instruction for the first camera module is detected, controlling the first camera module to capture a second object to obtain an initial image of the second object;
Step 103: based on the occurrence time of the shooting instruction for the first camera module, selecting at least two reference images from the at least two candidate images of the first object, and calculating, based on the selected at least two reference images, first trajectory information generated by the electronic device when the shooting instruction is executed;
Step 104: correcting the initial image of the second object based on the first trajectory information to obtain a corrected image of the second object.
Here, the camera modules in the electronic device may be cameras; the first and second camera modules may be two front cameras, two rear cameras, or one front camera and one rear camera, and the possibilities are not exhaustively listed here.
This embodiment differs from Embodiment 1 in that, in order to correct the blurred photo taken by the first camera, the second camera and a first sensor (a gyroscope) work simultaneously: the second camera enters a burst mode and captures a low-resolution, high-frame-rate image sequence, while the gyroscope records the rotational angular velocity of the device at a higher sampling frequency.
It should be noted that, in step 101 of this embodiment, the second camera module may be turned on when the electronic device is started and kept in the working state, or may be turned on when step 102 is performed; in addition, step 101 may also start to be performed when a sensor detects that the electronic device shakes, where the shaking of the electronic device can be detected by an acceleration sensor, for example, when a large acceleration is detected within a short time, it is determined that the electronic device shakes.
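A minimal sketch of such an accelerometer-based shake trigger is given below. The threshold, the window length and the callback-style interface are assumptions chosen for illustration; the embodiment only states that a large acceleration within a short time indicates shake.

```python
import math
from collections import deque

def make_shake_detector(threshold=3.0, window=10):
    """Flag shake when the net acceleration (gravity removed, in m/s^2) exceeds
    `threshold` within the last `window` samples."""
    samples = deque(maxlen=window)

    def on_accel_sample(ax, ay, az):
        # magnitude of the acceleration minus nominal gravity
        samples.append(abs(math.sqrt(ax * ax + ay * ay + az * az) - 9.81))
        return max(samples) > threshold          # True -> start step 101 (burst capture)

    return on_accel_sample
```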
Further, in step 101, the first object may be captured on a shooting period, and the shooting period may be 1 ms or less, which is not limited here.
In the above step 102, the capture of the second object may consist of taking only one picture, giving one initial image of the second object.
In step 103, at least two reference images are selected from the at least two candidate images of the first object based on the occurrence time of the shooting instruction for the first camera module, and the first trajectory information generated by the electronic device when the shooting instruction is executed is calculated based on the selected at least two reference images. The difference from Embodiment 1 is that, in addition to the foregoing two modes, this embodiment can also use the parameters of the sensor and adopt a third mode:
Mode 3:
The method further includes: collecting angular velocities to obtain the angular velocities and sampling times of at least two sampling points; the angular velocity collection by the first sensor may run all the time while only the angular velocities and sampling times within a certain duration are buffered, or the angular velocity collection may be started under control when step 102 is performed.
Correspondingly, when the at least two reference images are selected from the at least two candidate images of the first object based on the occurrence time of the shooting instruction for the first camera module, the method further includes:
selecting at least two reference sampling points based on the occurrence time of the shooting instruction for the first camera module and the sampling times of the at least two sampling points;
calculating, based on the angular velocities corresponding to the at least two reference sampling points, second trajectory information generated by the electronic device when the shooting instruction is executed; where the second trajectory information at least includes rotation angles between at least two adjacent reference sampling points.
Selecting at least two reference sampling points based on the occurrence time of the shooting instruction for the first camera module and the sampling times of the at least two sampling points may be: based on the occurrence time of the shooting instruction, taking 2N ms before and after it as the selection duration, and then selecting, from the buffered sampling points and their corresponding sampling times, the reference sampling points that fall within this 2N ms selection duration, where N is an integer.
Correspondingly, correcting the initial image of the second object based on the first trajectory information to obtain the corrected image of the second object includes:
correcting the second trajectory information based on at least two translation vectors and/or at least two rotation angles in the first trajectory information;
correcting the initial image of the second object based on the corrected second trajectory information to obtain the corrected image of the second object.
Specifically, correcting the second trajectory information based on at least two translation vectors and/or at least two rotation angles in the first trajectory information includes:
correcting the rotation angles between at least two adjacent reference sampling points in the second trajectory information based on at least one rotation angle in the first trajectory information to obtain corrected second trajectory information;
further, it may also include: calculating, based on at least two translation vectors in the first trajectory information, the translation vectors corresponding to at least two reference sampling points;
adding the translation vectors corresponding to the at least two reference sampling points to the corrected second trajectory information.
Specifically, from the burst sequence of the second camera, the motion of the camera between two adjacent frames is back-calculated according to a classic optical-flow algorithm. The steps are as follows (a code sketch of these steps is given after the list):
extracting feature points from each of the at least two reference images;
matching the feature points of two adjacent reference frames to obtain a set of feature point pairs;
calculating, from the coordinates of the feature point pairs, the homography H of the two frames on the imaging plane;
decomposing the homography matrix H into a rotation matrix R, a translation vector s and a normal vector n, which can be done as follows:
H = K2·(R + s·n^T)·K2^(-1), from which the spatial homography is obtained:
H2 = [R s; 0 0 0 1];
calculating, from the pre-calibrated distance information between the two cameras, expressed as the homography Hd, the homography of the first camera during this process, that is, H1 = Hd·H2·Hd^(-1);
based on H1 = [R1 s1; 0 0 0 1], decomposing it into the rotation matrix R1 and the translation vector s1, and then calculating the rotation angles wx, wy and wz about the three coordinate axes.
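The sketch below is one possible realization of these steps. It uses OpenCV ORB features with RANSAC as a stand-in for the classic optical-flow matching, assumes the second camera's intrinsic matrix K2 and a rigid 4x4 Hd are available, and keeps only the first of the candidate homography decompositions; the function names are illustrative, not part of the embodiment.

```python
import cv2
import numpy as np

def interframe_homography(img_a, img_b):
    """Homography H mapping frame A onto frame B, from ORB feature matches."""
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    return H

def first_camera_motion(img_a, img_b, K2, Hd):
    """Rotation angles (wx, wy, wz) and translation s1 of the first camera between
    two frames of the second camera, via H1 = Hd * H2 * Hd^-1."""
    H = interframe_homography(img_a, img_b)
    # H = K2 (R + s n^T) K2^-1; decomposeHomographyMat returns up to four candidates,
    # the first is kept here for brevity (a real system must disambiguate them).
    _, Rs, ss, _ = cv2.decomposeHomographyMat(H, K2)
    R, s = Rs[0], ss[0].reshape(3)               # s is known only up to the plane depth
    H2 = np.eye(4)
    H2[:3, :3], H2[:3, 3] = R, s                 # H2 = [R s; 0 0 0 1]
    H1 = Hd @ H2 @ np.linalg.inv(Hd)             # motion seen by the first camera
    R1, s1 = H1[:3, :3], H1[:3, 3]
    wx, wy, wz = cv2.Rodrigues(R1)[0].ravel()    # axis-angle ~ rotation about x, y, z
    return (wx, wy, wz), s1
```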
Next, the second trajectory information of the camera is calculated from the data collected at the sampling points of the first sensor, that is, the gyroscope. The specific steps are as follows:
reading the angular velocity and sampling time of each sampling point;
calculating the interval between two adjacent sampling points from the difference of their sampling times;
calculating, from the angular velocities and the intervals, the rotation angles wxi, wyi and wzi between two adjacent sampling points.
Although the gyroscope data has a higher sampling frequency and is finer, noise causes drift during integration, so wx, wy and wz need to be used to correct wxi, wyi and wzi. Assume that the gyroscope sampling points covered by two adjacent image frames are i = 1, 2, …, T. The correction method can be, but is not limited to, a linear correction:
w'xi=(wx–wx1–wx2–…–wxT)(ti+1–ti)/(tT–t1)
w'yi=(wy–wy1–wy2–…–wyT)(ti+1–ti)/(tT–t1)
w'zi=(wz–wz1–wz2–…–wzT)(ti+1–ti)/(tT–t1)
Meanwhile, motion sensors such as gyroscopes cannot accurately sense the translation of the camera, so we use the translation vector s to estimate the translation components at the gyroscope sampling points. The estimation method can be, but is not limited to, a linear method (a combined sketch of the gyro integration, the drift correction above and this translation estimate follows the formulas below):
sxi=sx(ti+1–ti)/(tT–t1)
syi=sy(ti+1–ti)/(tT–t1)
szi=sz(ti+1–ti)/(tT–t1)。
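The sketch below integrates the gyro samples between two image frames and applies the linear formulas above. The array layout is an assumption, and the published correction formula is read here as a residual that is distributed linearly over the samples and added to the raw per-sample angles; it could also be read as replacing them, so treat this as one interpretation rather than the definitive procedure.

```python
import numpy as np

def corrected_gyro_trajectory(omega, t, w_frame, s_frame):
    """Per-sample rotation angles and translations between two image frames.

    omega   : (T, 3) gyro angular velocities (rad/s) at sampling points i = 1..T
    t       : (T,)   sampling times of those points (seconds)
    w_frame : (3,)   rotation angles (wx, wy, wz) between the two frames, from the images
    s_frame : (3,)   translation vector s between the two frames, from the images
    """
    dt = np.diff(t)                          # t(i+1) - t(i), shape (T-1,)
    span = t[-1] - t[0]                      # tT - t1
    w_raw = omega[:-1] * dt[:, None]         # raw per-sample angles wxi, wyi, wzi
    residual = w_frame - w_raw.sum(axis=0)   # (wx - wx1 - ... - wxT), per axis
    w_corr = w_raw + residual * (dt / span)[:, None]   # linear drift correction (additive reading)
    s_i = s_frame * (dt / span)[:, None]     # sxi = sx (t(i+1)-t(i)) / (tT - t1), etc.
    return w_corr, s_i
```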
Since the scenario provided by this embodiment can be applied to a scenario with two cameras, the calibrated distance between the two cameras, the homography Hd, needs to be determined first, which may include:
selecting two positions and fixing two test charts at them, and selecting two positions for placing the mobile phone or another camera device, ensuring that when the phone is at either position its two cameras can each capture a complete test chart. When the phone is fixed at the first position, the first camera and the second camera capture the corresponding test charts to obtain images I11 and I12; when the phone moves to the second position, images I21 and I22 are obtained.
H1 is calculated from I11 and I12; H2 is calculated from I21 and I22. Solving the equation H1·Hd = Hd·H2 yields Hd.
Further, in order to reduce the influence of errors, different positions can be chosen to capture multiple sets {I11, I12, I21, I22}, and a more stable Hd can be calculated using least-squares estimation, as sketched below.
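One standard way to compute a stable Hd from several (H1, H2) pairs is to vectorize H1·Hd = Hd·H2 with Kronecker products and take the smallest singular vector of the stacked system. The embodiment does not spell out its estimator, so the following is a sketch of this linear least-squares approach, not the embodiment's exact method.

```python
import numpy as np

def calibrate_hd(pairs):
    """Least-squares estimate of Hd from pairs (H1, H2) satisfying H1 Hd = Hd H2.

    Each relation is rewritten as (I kron H1 - H2.T kron I) vec(Hd) = 0 and the
    stacked system is solved for its smallest right singular vector.
    """
    n = pairs[0][0].shape[0]                      # 3 for planar homographies, 4 for [R s; 0 1]
    eye = np.eye(n)
    rows = [np.kron(eye, H1) - np.kron(H2.T, eye) for H1, H2 in pairs]
    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A)
    Hd = vt[-1].reshape(n, n, order="F")          # vec() stacks columns, hence order='F'
    return Hd / Hd[-1, -1]                        # fix the scale (a homography is defined up to scale)
```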
Finally, step 104 is performed: the initial image of the second object is corrected by using the first trajectory information, and a corrected image of the second object is obtained.
Specifically, correcting the image by using the first trajectory information may include: from the corrected w'xi, w'yi, w'zi and sxi, syi, szi we can obtain the trajectory of the first camera, which is the blur kernel (point spread function). Finally, the classic Wiener filter can be used to restore the image before blurring:
I' = F{I}·F{psf}* / (|F{psf}|^2 + Φ); where w'xi, w'yi, w'zi and sxi, syi, szi denote the rotation angles and translation vectors in the three dimensions respectively, I denotes the initial image, psf is a three-dimensional matrix, F{} denotes the Fourier transform, |F{psf}| denotes the modulus, and Φ is a preset constant.
Electronic devices, especially smartphones, usually have front and rear cameras, and the angular velocity is the same at different points of a rigid body. The image sequence information of the second camera can therefore be used to assist in estimating the rotation component of the first camera, while the translation component of the first camera is estimated from the pre-calibrated distance information, so that the trajectory information of the electronic device is obtained and the corrected image of the second object is obtained.
It can be seen that, with the above solution, the first camera module captures the second object to obtain an initial image; the trajectory information of the electronic device can then be calculated at least from the reference images captured by the second camera module, and the initial image is corrected based on this trajectory information to obtain a corrected image. In this way, when the electronic device shakes while taking a picture, the images captured by the other camera module are used to correct the picture, which avoids the inaccurate correction caused by sensor parameter drift when a sensor performs the correction, and improves the accuracy of the corrected image.
Embodiment 3
An embodiment of the present invention provides an electronic device. As shown in FIG. 2, the electronic device includes a first camera module 21 and a second camera module 22, and further includes:
a control unit 23 configured to turn on the second camera module to capture a first object to obtain at least two candidate images of the first object, and, when a shooting instruction for the first camera module is detected, control the first camera module to capture a second object to obtain an initial image of the second object;
a calculating unit 24 configured to select, based on the occurrence time of the shooting instruction for the first camera module, at least two reference images from the at least two candidate images of the first object, and calculate, based on the selected at least two reference images, first trajectory information generated by the electronic device when the shooting instruction is executed;
a correcting unit 25 configured to correct the initial image of the second object based on the first trajectory information to obtain a corrected image of the second object.
Here, the camera modules in the electronic device may be cameras; the first and second camera modules may be two front cameras, two rear cameras, or one front camera and one rear camera, and the possibilities are not exhaustively listed here.
Further, the control unit 23 may capture the first object on a shooting period, and the shooting period may be 1 ms or less, which is not limited here.
The above capture of the second object may consist of taking only one picture, giving one initial image of the second object.
The calculating unit can operate in either of two modes:
Mode 1:
When the first camera module shoots, the occurrence time of the shooting instruction is taken as the center time, N ms before and after it as the time period, and a duration of 2N ms is selected as the duration over which the reference images are chosen, where N is an integer; for example, it may be 5 ms, that is, the candidate images within 10 ms around the occurrence time of the shooting instruction are selected as the reference images;
then the first trajectory information generated by the electronic device when the shooting instruction is executed is calculated based on the reference images.
Mode 2:
When it is determined that a shooting instruction for the first camera module is detected, the second camera module is controlled to capture the first object to obtain at least two candidate images of the first object;
then the shooting duration of the second camera module is set to a preset duration, for example M ms (which may be 10 ms or longer and is not limited here);
all candidate images captured by the second camera module within the shooting duration are used as reference images, and the first trajectory information generated by the electronic device when the shooting instruction is executed is calculated based on the selected at least two reference images.
Finally, the correcting unit corrects the image by using the first trajectory information, which may include: from the corrected w'xi, w'yi, w'zi and sxi, syi, szi we can obtain the trajectory of the first camera, which is the blur kernel (point spread function). Finally, the classic Wiener filter can be used to restore the image before blurring:
I' = F{I}·F{psf}* / (|F{psf}|^2 + Φ); where w'xi, w'yi, w'zi and sxi, syi, szi denote the rotation angles and translation vectors in the three dimensions respectively, I denotes the initial image, psf is a three-dimensional matrix, F{} denotes the Fourier transform, |F{psf}| denotes the modulus, and Φ is a preset constant.
Electronic devices, especially smartphones, usually have front and rear cameras, and the angular velocity is the same at different points of a rigid body. The image sequence information of the second camera can therefore be used to assist in estimating the rotation component of the first camera, while the translation component of the first camera is estimated from the pre-calibrated distance information, so that the trajectory information of the electronic device is obtained and the corrected image of the second object is obtained.
It can be seen that, with the above solution, the first camera module captures the second object to obtain an initial image; the trajectory information of the electronic device can then be calculated at least from the reference images captured by the second camera module, and the initial image is corrected based on this trajectory information to obtain a corrected image. In this way, when the electronic device shakes while taking a picture, the images captured by the other camera module are used to correct the picture, which avoids the inaccurate correction caused by sensor parameter drift when a sensor performs the correction, and improves the accuracy of the corrected image.
Embodiment 4
This embodiment of the present invention differs from Embodiment 3 in that, in order to remove the blur from the photo taken by the first camera, the second camera and a first sensor (a gyroscope) work simultaneously: the second camera enters a burst mode and captures a low-resolution, high-frame-rate image sequence, while the gyroscope records the rotational angular velocity of the device at a higher sampling frequency.
Based on FIG. 2 and referring to FIG. 3, the electronic device of this embodiment further includes: a sensing unit 26 configured to collect angular velocities to obtain the angular velocities and sampling times of at least two sampling points;
correspondingly, the calculating unit is configured to select at least two reference sampling points based on the occurrence time of the shooting instruction for the first camera module and the sampling times of the at least two sampling points, and to calculate, based on the angular velocities corresponding to the at least two reference sampling points, second trajectory information generated by the electronic device when the shooting instruction is executed; where the second trajectory information at least includes rotation angles between at least two adjacent reference sampling points.
The angular velocity collection that obtains the angular velocities and sampling times of at least two sampling points may run all the time while only the angular velocities and sampling times within a certain duration are buffered; or, the angular velocity collection may be started under control when step 102 is performed.
The correcting unit is configured to correct the second trajectory information based on at least two translation vectors and/or at least two rotation angles in the first trajectory information;
and to correct the initial image of the second object based on the corrected second trajectory information to obtain the corrected image of the second object.
Specifically, correcting the second trajectory information based on at least two translation vectors and/or at least two rotation angles in the first trajectory information includes:
correcting the rotation angles between at least two adjacent reference sampling points in the second trajectory information based on at least one rotation angle in the first trajectory information to obtain corrected second trajectory information;
further, the calculating unit is configured to calculate, based on at least two translation vectors in the first trajectory information, the translation vectors corresponding to at least two reference sampling points;
and to add the translation vectors corresponding to the at least two reference sampling points to the corrected second trajectory information.
Specifically, from the burst sequence of the second camera, the motion of the camera between two adjacent frames is back-calculated according to a classic optical-flow algorithm. The steps are as follows:
extracting feature points from each of the at least two reference images;
matching the feature points of two adjacent reference frames to obtain a set of feature point pairs;
calculating, from the coordinates of the feature point pairs, the homography H of the two frames on the imaging plane;
decomposing the homography matrix H into a rotation matrix R, a translation vector s and a normal vector n, which can be done as follows:
H = K2·(R + s·n^T)·K2^(-1), from which the spatial homography is obtained:
H2 = [R s; 0 0 0 1];
calculating, from the pre-calibrated distance information between the two cameras, expressed as the homography Hd, the homography of the first camera during this process, that is, H1 = Hd·H2·Hd^(-1);
based on H1 = [R1 s1; 0 0 0 1], decomposing it into the rotation matrix R1 and the translation vector s1, and then calculating the rotation angles wx, wy and wz about the three coordinate axes.
Next, the second trajectory information of the camera is calculated from the data collected at the sampling points of the first sensor, that is, the gyroscope. The specific steps are as follows:
reading the angular velocity and sampling time of each sampling point;
calculating the interval between two adjacent sampling points from the difference of their sampling times;
calculating, from the angular velocities and the intervals, the rotation angles wxi, wyi and wzi between two adjacent sampling points.
Although the gyroscope data has a higher sampling frequency and is finer, noise causes drift during integration, so wx, wy and wz need to be used to correct wxi, wyi and wzi. Assume that the gyroscope sampling points covered by two adjacent image frames are i = 1, 2, …, T. The correction method can be, but is not limited to, a linear correction:
w'xi=(wx–wx1–wx2–…–wxT)(ti+1–ti)/(tT–t1)
w'yi=(wy–wy1–wy2–…–wyT)(ti+1–ti)/(tT–t1)
w'zi=(wz–wz1–wz2–…–wzT)(ti+1–ti)/(tT–t1)
Meanwhile, motion sensors such as gyroscopes cannot accurately sense the translation of the camera, so we use the translation vector s to estimate the translation components at the gyroscope sampling points. The estimation method can be, but is not limited to, a linear method:
sxi=sx(ti+1–ti)/(tT–t1)
syi=sy(ti+1–ti)/(tT–t1)
szi=sz(ti+1–ti)/(tT–t1)。
Since the scenario provided by this embodiment can be applied to a scenario with two cameras, the calibrated distance between the two cameras, the homography Hd, needs to be determined first, which may include:
selecting two positions and fixing two test charts at them, and selecting two positions for placing the mobile phone or another camera device, ensuring that when the phone is at either position its two cameras can each capture a complete test chart. When the phone is fixed at the first position, the first camera and the second camera capture the corresponding test charts to obtain images I11 and I12; when the phone moves to the second position, images I21 and I22 are obtained.
H1 is calculated from I11 and I12; H2 is calculated from I21 and I22. Solving the equation H1·Hd = Hd·H2 yields Hd.
Further, in order to reduce the influence of errors, different positions can be chosen to capture multiple sets {I11, I12, I21, I22}, and a more stable Hd can be calculated using least-squares estimation.
Finally, the correcting unit corrects the initial image of the second object by using the first trajectory information to obtain a corrected image of the second object.
Specifically, correcting the image by using the first trajectory information may include: from the corrected w'xi, w'yi, w'zi and sxi, syi, szi we can obtain the trajectory of the first camera, which is the blur kernel (point spread function). Finally, the classic Wiener filter can be used to restore the image before blurring, which can be calculated as follows:
I' = F{I}·F{psf}* / (|F{psf}|^2 + Φ); where w'xi, w'yi, w'zi and sxi, syi, szi denote the rotation angles and translation vectors in the three dimensions respectively, I denotes the initial image, psf is a three-dimensional matrix, F{} denotes the Fourier transform, |F{psf}| denotes the modulus, Φ is a preset constant, and F{}* denotes the complex conjugate;
I' is then inverse-Fourier-transformed to obtain the final corrected image.
Electronic devices, especially smartphones, usually have front and rear cameras, and the angular velocity is the same at different points of a rigid body. The image sequence information of the second camera can therefore be used to assist in estimating the rotation component of the first camera, while the translation component of the first camera is estimated from the pre-calibrated distance information, so that the trajectory information of the electronic device is obtained and the corrected image of the second object is obtained.
For the processing scenario provided by this embodiment, reference may be made to FIG. 4, in which the first camera module captures the second object, namely a certain person, and obtains an initial image; at the same time, the four images captured by the second camera module within 10 ms of the shooting time, namely images 1-4, are extracted, and the first trajectory information is obtained based on images 1-4; the first sensor simultaneously obtains the angular velocities of the sampling points in three dimensions, yielding the second trajectory information. Although the figure does not show how the initial image is corrected, it can be seen from the foregoing processing of this embodiment that the initial image is corrected based on the first trajectory information and the second trajectory information obtained from the second camera module and the sensor respectively.
It can be seen that, with the above solution, the first camera module captures the second object to obtain an initial image; the trajectory information of the electronic device can then be calculated at least from the reference images captured by the second camera module, and the corrected image can be obtained by correcting the initial image based on the trajectory information of the electronic device. In this way, when the electronic device shakes while taking a picture, the images captured by the other camera module are used to correct the picture, which avoids the inaccurate correction caused by sensor parameter drift when a sensor performs the correction, and improves the accuracy of the corrected image.
It can be understood that the above units may be combined and implemented in one unit, or any one of them may be split into multiple units. Alternatively, at least some of the functions of one or more of these units may be combined with at least some of the functions of other units and implemented in one unit. According to embodiments of the present invention, at least one of the above units may be implemented at least partially as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on chip, a system on a substrate, a system on a package or an application-specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable way of integrating or packaging a circuit, or by an appropriate combination of software, hardware and firmware. Alternatively, at least one of the above units may be implemented at least partially as a computer program unit which, when run by a computer, can perform the functions of the corresponding unit.
It should be noted that, as used herein, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or apparatus that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
From the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, an apparatus, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not limit the patent scope of the present invention. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, shall likewise fall within the patent protection scope of the present invention.

Claims (10)

  1. An information processing method, applied to an electronic device, the electronic device including a first camera module and a second camera module, the method comprising:
    turning on the second camera module to capture a first object to obtain at least two candidate images of the first object;
    when a shooting instruction for the first camera module is detected, controlling the first camera module to capture a second object to obtain an initial image of the second object;
    based on the occurrence time of the shooting instruction for the first camera module, selecting at least two reference images from the at least two candidate images of the first object, and calculating, based on the selected at least two reference images, first trajectory information generated by the electronic device when the shooting instruction is executed;
    correcting the initial image of the second object based on the first trajectory information to obtain a corrected image of the second object.
  2. The method according to claim 1, wherein the method further comprises: collecting angular velocities to obtain angular velocities and sampling times of at least two sampling points;
    correspondingly, when the at least two reference images are selected from the at least two candidate images of the first object based on the occurrence time of the shooting instruction for the first camera module, the method further comprises:
    selecting at least two reference sampling points based on the occurrence time of the shooting instruction for the first camera module and the sampling times of the at least two sampling points;
    calculating, based on the angular velocities corresponding to the at least two reference sampling points, second trajectory information generated by the electronic device when the shooting instruction is executed; wherein the second trajectory information at least includes rotation angles between at least two adjacent reference sampling points.
  3. The method according to claim 2, wherein correcting the initial image of the second object based on the first trajectory information to obtain the corrected image of the second object comprises:
    correcting the second trajectory information based on at least two translation vectors and/or at least two rotation angles in the first trajectory information;
    correcting the initial image of the second object based on the corrected second trajectory information to obtain the corrected image of the second object.
  4. The method according to claim 3, wherein correcting the second trajectory information based on at least two translation vectors and/or at least two rotation angles in the first trajectory information comprises:
    correcting the rotation angles between at least two adjacent reference sampling points in the second trajectory information based on at least one rotation angle in the first trajectory information to obtain corrected second trajectory information.
  5. The method according to claim 3, wherein correcting the second trajectory information based on at least two translation vectors and/or at least two rotation angles in the first trajectory information comprises:
    calculating, based on at least two translation vectors in the first trajectory information, translation vectors corresponding to at least two reference sampling points;
    adding the translation vectors corresponding to the at least two reference sampling points to the corrected second trajectory information.
  6. An electronic device, comprising a first camera module and a second camera module, and further comprising:
    a control unit configured to turn on the second camera module to capture a first object to obtain at least two candidate images of the first object, and, when a shooting instruction for the first camera module is detected, control the first camera module to capture a second object to obtain an initial image of the second object;
    a calculating unit configured to select, based on the occurrence time of the shooting instruction for the first camera module, at least two reference images from the at least two candidate images of the first object, and calculate, based on the selected at least two reference images, first trajectory information generated by the electronic device when the shooting instruction is executed;
    a correcting unit configured to correct the initial image of the second object based on the first trajectory information to obtain a corrected image of the second object.
  7. The electronic device according to claim 6, further comprising: a sensing unit configured to collect angular velocities to obtain angular velocities and sampling times of at least two sampling points;
    correspondingly, the calculating unit is configured to select at least two reference sampling points based on the occurrence time of the shooting instruction for the first camera module and the sampling times of the at least two sampling points, and to calculate, based on the angular velocities corresponding to the at least two reference sampling points, second trajectory information generated by the electronic device when the shooting instruction is executed; wherein the second trajectory information at least includes rotation angles between at least two adjacent reference sampling points.
  8. The electronic device according to claim 7, wherein the correcting unit is configured to correct the second trajectory information based on at least two translation vectors and/or at least two rotation angles in the first trajectory information, and to correct the initial image of the second object based on the corrected second trajectory information to obtain the corrected image of the second object.
  9. The electronic device according to claim 8, wherein the calculating unit is configured to correct the rotation angles between at least two adjacent reference sampling points in the second trajectory information based on at least one rotation angle in the first trajectory information to obtain corrected second trajectory information.
  10. The electronic device according to claim 8, wherein the calculating unit is configured to calculate, based on at least two translation vectors in the first trajectory information, translation vectors corresponding to at least two reference sampling points, and to add the translation vectors corresponding to the at least two reference sampling points to the corrected second trajectory information.
PCT/CN2017/102938 2017-06-29 2017-09-22 Information processing method and electronic device WO2019000664A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710516465.6A CN107370941B (zh) 2017-06-29 2017-06-29 一种信息处理方法及电子设备
CN201710516465.6 2017-06-29

Publications (1)

Publication Number Publication Date
WO2019000664A1 true WO2019000664A1 (zh) 2019-01-03

Family

ID=60305843

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/102938 WO2019000664A1 (zh) 2017-06-29 2017-09-22 一种信息处理方法及电子设备

Country Status (2)

Country Link
CN (1) CN107370941B (zh)
WO (1) WO2019000664A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107392850B (zh) 2017-06-30 2020-08-25 联想(北京)有限公司 图像处理方法及其系统
CN108280815B (zh) * 2018-02-26 2021-10-22 安徽新闻出版职业技术学院 一种面向监控场景结构的几何校正方法
CN109410130B (zh) * 2018-09-28 2020-12-04 华为技术有限公司 图像处理方法和图像处理装置
CN110800282B (zh) * 2018-11-20 2021-07-27 深圳市大疆创新科技有限公司 云台调整方法、云台调整设备、移动平台及介质
CN110648285A (zh) * 2019-08-02 2020-01-03 杭州电子科技大学 基于惯性测量装置下的快速运动去模糊的方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101356802A (zh) * 2006-04-11 2009-01-28 松下电器产业株式会社 摄像装置
CN101742122A (zh) * 2009-12-21 2010-06-16 汉王科技股份有限公司 一种去除视频抖动的方法和系统
US7817187B2 (en) * 2007-06-27 2010-10-19 Aptina Imaging Corporation Image blur correction using a secondary camera
CN106444220A (zh) * 2015-08-12 2017-02-22 三星电机株式会社 相机模块

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006129236A (ja) * 2004-10-29 2006-05-18 Sanyo Electric Co Ltd リンギング除去装置およびリンギング除去プログラムを記録したコンピュータ読み取り可能な記録媒体
JP2006295238A (ja) * 2005-04-05 2006-10-26 Olympus Imaging Corp 撮像装置
EP3117597A1 (en) * 2014-03-12 2017-01-18 Sony Corporation Method, system and computer program product for debluring images
EP3120537B1 (en) * 2014-03-19 2020-02-26 Sony Corporation Control of shake blur and motion blur for pixel multiplexing cameras

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101356802A (zh) * 2006-04-11 2009-01-28 松下电器产业株式会社 摄像装置
US7817187B2 (en) * 2007-06-27 2010-10-19 Aptina Imaging Corporation Image blur correction using a secondary camera
CN101742122A (zh) * 2009-12-21 2010-06-16 汉王科技股份有限公司 一种去除视频抖动的方法和系统
CN106444220A (zh) * 2015-08-12 2017-02-22 三星电机株式会社 相机模块

Also Published As

Publication number Publication date
CN107370941B (zh) 2020-06-23
CN107370941A (zh) 2017-11-21

Similar Documents

Publication Publication Date Title
WO2019000664A1 (zh) 一种信息处理方法及电子设备
Karpenko et al. Digital video stabilization and rolling shutter correction using gyroscopes
JP4527152B2 (ja) カメラのモーションブラー関数を決定する手段を有するデジタルイメージ取得システム
EP2849428B1 (en) Image processing device, image processing method, image processing program, and storage medium
US8264553B2 (en) Hardware assisted image deblurring
Hanning et al. Stabilizing cell phone video using inertial measurement sensors
EP3443736B1 (en) Method and apparatus for video content stabilization
US11611697B2 (en) Optical image stabilization movement to create a super-resolution image of a scene
US9025859B2 (en) Inertial sensor aided instant autofocus
WO2018223381A1 (zh) 一种视频防抖方法及移动设备
JP2019510234A (ja) 奥行き情報取得方法および装置、ならびに画像取得デバイス
EP3296952B1 (en) Method and device for blurring a virtual object in a video
WO2013062742A1 (en) Sensor aided video stabilization
US11375097B2 (en) Lens control method and apparatus and terminal
US20130162786A1 (en) Image processing apparatus, imaging apparatus, image processing method, and program
JPH1124122A (ja) 手ぶれ画像補正方法および手ぶれ画像補正装置並びにその方法をコンピュータに実行させるためのプログラムを記録したコンピュータ読み取り可能な記録媒体
KR20160140193A (ko) 영상 보정 회로 및 그 보정 방법
CN107231526B (zh) 图像处理方法以及电子设备
CN112414400B (zh) 一种信息处理方法、装置、电子设备和存储介质
CN112204946A (zh) 数据处理方法、装置、可移动平台及计算机可读存储介质
CN110119189B (zh) Slam系统的初始化、ar控制方法、装置和系统
WO2019205087A1 (zh) 图像增稳方法和装置
CN108260360B (zh) 场景深度计算方法、装置及终端
JP2017060091A (ja) 姿勢推定装置、姿勢推定方法及びプログラム
CN111955005B (zh) 处理360度图像内容的方法和系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17915560

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20.05.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17915560

Country of ref document: EP

Kind code of ref document: A1