WO2019062214A1 - Method for capturing a panoramic image with a mobile device, mobile device, computer-readable storage medium, and computer product - Google Patents

Method for capturing a panoramic image with a mobile device, mobile device, computer-readable storage medium, and computer product

Info

Publication number
WO2019062214A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
mobile device
images
camera
vector
Application number
PCT/CN2018/091385
Other languages
English (en)
French (fr)
Inventor
李维国
赵天月
Original Assignee
京东方科技集团股份有限公司
Application filed by 京东方科技集团股份有限公司
Priority to EP18859941.9A priority Critical patent/EP3691246A4/en
Priority to US16/335,991 priority patent/US11381738B2/en
Publication of WO2019062214A1 publication Critical patent/WO2019062214A1/zh

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • the present disclosure relates to the field of image capture, and more particularly to a method, mobile device, computer readable storage medium, and computer product for a mobile device to capture a panoramic image.
  • smart mobile devices can support multiple camera modes. The panoramic mode requires the user to remain relatively fixed at the shooting point and rotate the lens along a single spatial direction (usually horizontal or vertical) while images are captured continuously to obtain the final photo.
  • when a panoramic image is captured by a mobile device, the device is usually set to capture a plurality of images continuously at fixed time intervals; however, the moving speed of the human hand adjusting the camera cannot be kept constant, which results in poor quality of the captured images.
  • the present disclosure is directed to a method of automatically and continuously capturing images and outputting a panoramic image by using orientation information provided by a sensor within the mobile device.
  • according to a first aspect, a method for a mobile device to capture a panoramic image is provided, including: the camera captures a first image at a first moment, and the sensor senses a first pointing vector of the camera when the first image is captured; the sensor senses a measured pointing vector of the camera at a second moment; in a case where the amount of change of the measured pointing vector relative to the first pointing vector is greater than or equal to a first trigger-shooting threshold, or the time interval between the second moment and the first moment is greater than or equal to a second trigger-shooting threshold, the camera captures a second image, and the sensor senses a second pointing vector of the camera when the second image is captured; and in a case where it is determined that a shooting stop condition is satisfied, a panoramic image is generated based on the plurality of captured images.
  • the method may further include marking the first pointing vector as a direction vector of the first image and marking the second pointing vector as a direction vector of the second image.
  • the mobile device may further include a display screen; the method may further include: displaying the currently captured images in the display screen if the shooting stop condition is not satisfied.
  • the method may further include: determining a pixel offset of the second image relative to the first image; and determining a relative position of the first image and the second image based on the pixel offset of the second image relative to the first image.
  • the determining of the pixel offset of the second image relative to the first image may further include: determining an image overlap region of the first image and the second image; and determining the pixel offset of the second image relative to the first image according to the image overlap region.
  • the determining of the pixel offset of the second image relative to the first image may further include: obtaining a reference ratio of pixel offset to direction-vector variation; and determining the pixel offset of the second image relative to the first image according to the reference ratio and the amount of change of the direction vector of the second image relative to the direction vector of the first image.
  • the reference ratio is obtained based on a pixel offset and a change amount of a direction vector between two images initially captured in the plurality of images.
  • the method may further include: determining an edge region of the image overlap region of the first image and the second image; and correcting the relative position of the first image and the second image according to the edge region.
  • the generating of the panoramic image may further include: stitching the plurality of images according to the relative positions between the plurality of images; and selecting the image within the maximized effective rectangular region as the panoramic image.
  • according to a second aspect, a mobile device is provided, including: a camera configured to capture a first image at a first moment; a sensor configured to sense a first pointing vector of the camera when the first image is captured, and to sense a measured pointing vector of the camera at a second moment; and a processor configured to, in a case where the amount of change of the measured pointing vector relative to the first pointing vector is greater than or equal to a first trigger-shooting threshold or the time interval between the second moment and the first moment is greater than or equal to a second trigger-shooting threshold, control the camera to capture a second image and control the sensor to sense a second pointing vector of the camera when the second image is captured, and, in a case where it is determined that a shooting stop condition is satisfied, generate a panoramic image based on the plurality of captured images.
  • the processor may be further configured to mark the first pointing vector as a direction vector of the first image and mark the second pointing vector as a direction vector of the second image.
  • the mobile device may further include: a display screen configured to display the currently captured images in the display screen if the shooting stop condition is not satisfied.
  • the processor may be further configured to determine a pixel offset of the second image relative to the first image, and to determine a relative position of the first image and the second image according to the pixel offset of the second image relative to the first image.
  • the processor may be further configured to: determine an image overlap region of the first image and the second image; and determine the pixel offset of the second image relative to the first image according to the image overlap region.
  • the processor may be further configured to: obtain a reference ratio of pixel offset to direction-vector variation; and determine the pixel offset of the second image relative to the first image according to the reference ratio and the amount of change of the direction vector of the second image relative to the direction vector of the first image.
  • the reference ratio is obtained based on a pixel offset and a change amount of a direction vector between two images initially captured in the plurality of images.
  • the processor may be further configured to: determine an edge region of the image overlap region of the first image and the second image; and correct the relative position of the first image and the second image according to the edge region.
  • the processor may be further configured to: stitch the plurality of images according to a relative position between the plurality of images; and select an image within the maximized effective rectangular region as a panorama image.
  • according to a third aspect, a mobile device for performing the method of the first aspect is provided; the mobile device may include: an image capturing unit, a sensing unit, a control unit, and a position determining unit.
  • according to a fourth aspect, a computer readable storage medium is provided for storing a computer program, the computer program comprising instructions for performing one or more steps of the method of the first aspect of the present disclosure.
  • according to a fifth aspect, a computer product is provided, comprising one or more processors configured to execute computer instructions to perform one or more steps of the method of the first aspect of the present disclosure.
  • since the dual-trigger shooting condition of camera pointing change and shooting time interval change is adopted, the method can adapt more flexibly to manual adjustment of the camera and thus capture high-quality images.
  • FIG. 1 is a schematic diagram showing a coordinate system for identifying a space in which a mobile device is located;
  • FIG. 2 is a diagram showing the correspondence of direction parameters in the coordinate system of the space in which the mobile device shown in FIG. 1 is located;
  • FIG. 3 is a flowchart illustrating a method for a mobile device to capture a panoramic image, in accordance with an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram showing display of respective captured images on a display screen of a mobile device, in accordance with an embodiment of the present disclosure
  • FIG. 5 is a schematic diagram illustrating a mobile device for capturing a panoramic image, in accordance with an embodiment of the present disclosure.
  • FIG. 1 is a schematic diagram showing a coordinate system for identifying a space in which a mobile device is located, and the coordinate system is used to identify a pointing of a mobile device camera; and
  • FIG. 2 is a diagram showing the correspondence of direction parameters in the coordinate system of the space in which the mobile device shown in FIG. 1 is located.
  • in the X-Y-Z coordinate system shown in FIG. 1, when the mobile device is held in landscape orientation, the X axis corresponds to the up-down direction, the Y axis corresponds to the left-right direction, and the Z axis corresponds to the front-rear direction.
  • as shown in FIG. 2, Pitch denotes rotation of the device about the Z axis from its horizontal position, i.e. the change in the relative height of the user's left and right hands when holding the mobile device for shooting; Roll denotes the angle of rotation about the Y axis, i.e. the tilt angle of the mobile device, which corresponds to the vertical field-of-view extension angle of the camera; and Yaw denotes rotation about the X axis, which corresponds to the horizontal field-of-view extension angle of the camera.
  • the direction vector {Pitch, Roll, Yaw} can therefore uniquely determine the spatial attitude angle of the mobile device, i.e. the direction in which the camera is pointing.
  • This vector can serve as a direction vector for identifying the orientation parameters of the image captured by the mobile device.
  • the user can adjust the pointing of the camera in the X-Y two-dimensional plane, and the sensor equipped in the mobile device can sense the pointing vector of the camera in real time during the movement of the camera.
  • the direction vector ⁇ Pitch, Roll, Yaw ⁇ can be obtained in real time by calling the relevant API of the mobile device.
  • the mobile device here may be, for example, a smart phone, a tablet, a camera, or the like that supports photographing.
  • the sensor provided in the mobile device for sensing the pointing vector of the camera may be, for example, an orientation sensor, which may be a combination of a magnetic field sensor and an acceleration sensor, or any other sensor or combination of sensors capable of sensing orientation; the disclosure does not limit this.
  • FIG. 3 is a flow chart illustrating a method for a mobile device to take a panoramic photo, in accordance with an embodiment of the present disclosure.
  • when the user selects the panorama shooting mode, aims the camera at the subject, and presses the shutter button, the panorama shooting process starts.
  • the camera captures the first image at the first moment, and the sensor senses the first pointing vector of the camera when the first image is captured (S310).
  • the mobile device can store the first image and the first pointing vector.
  • the sensor senses the measured pointing vector of the camera at the second moment (S320).
  • the measured pointing vector can be dynamically changed correspondingly as the pointing direction of the camera changes.
  • the camera captures a second image, and the sensor senses a second pointing vector of the camera when the second image is captured (S330).
  • the mobile device can store the second image and the second pointing vector sensed by the sensor when the second image is captured. It is then determined whether the shooting stop condition is satisfied (S340). When the shooting stop condition is satisfied, a panoramic image is generated based on the plurality of images that have been captured (S350). When the shooting stop condition is not satisfied, the process returns to step S320.
  • as long as the shooting stop condition is not satisfied, the above photographing process is performed cyclically, and the mobile device automatically and continuously captures multiple images according to its judgment of the trigger-shooting conditions. That is, when the amount of change of the camera's current angle relative to the angle at which the previous image was captured is greater than or equal to the first trigger-shooting threshold, or the change of the current time relative to the time at which the previous image was captured is greater than or equal to the second trigger-shooting threshold, the camera of the mobile device automatically captures the next image.
  • the sensor of the mobile device can continue to sense the change in the orientation of the camera, and when either of the change in the direction of the camera or the change in the shooting time interval satisfies the condition again, the camera of the mobile device can take an image again. Therefore, the process of loop shooting can continue until the shooting stop condition is satisfied.
  • the first image and the second image referred to herein are only used to denote two adjacent, consecutively captured images, and are not limited to the two initially captured images.
  • the first image and the second image described herein may be the first and second captured images, the third and fourth captured images, or the nth and (n+1)th captured images, where n is an integer greater than or equal to 1.
  • the amount of change of the second pointing vector relative to the first pointing vector may be the angular change of any component of the {Pitch, Roll, Yaw} vector shown in FIG. 2, for example the angular change of the fastest-changing component of the {Pitch, Roll, Yaw} vector.
  • the first trigger-shooting threshold may be 2° and the second trigger-shooting threshold may be 50 ms.
  • when the change in the pointing direction of the camera is greater than or equal to 2°, or the shooting time interval is greater than or equal to 50 ms, the next image is captured.
  • the first trigger capture threshold and the second trigger capture threshold may be fixed, or may be changed to other suitable values according to user needs and actual shooting conditions.
  • the shooting stop condition may be satisfied in any one of the following cases: the area swept by the camera has traversed all areas within the current viewing range (e.g., the finder frame presented by the display screen); the user presses the shooting stop button to actively terminate shooting, in which case, if an untraversed area remains, the shooting system automatically selects the largest possible area as the final imaged area; a predetermined number of images have been captured; shooting has lasted for a predetermined length of time; the running of another application within the mobile device causes an interruption of shooting; and, in another case, the limit of the storage capacity of the shooting system may be reached.
  • the above case is only an example of a shooting stop condition, and is not a limitation thereto.
  • as described above, embodiments of the present disclosure provide a method for a mobile device to capture a panoramic image, in which the camera is triggered to automatically and continuously capture images whenever either the change in camera pointing sensed by a sensor equipped in the mobile device or the shooting time interval satisfies a predetermined condition.
  • these trigger conditions enable the mobile device to capture multiple images at appropriate spacing, so that the captured images are neither too far apart nor too close together: images that are too far apart share no common region and cannot be stitched, while images that are too close differ little and waste shooting resources, so capturing images at an appropriate distance is necessary. Since the user's manual adjustment of the camera pointing is irregular, the dual-trigger shooting condition of pointing change and time change effectively avoids these problems and produces an ideal panoramic image. The method is therefore more flexible in capturing images than a single trigger condition such as a fixed angle change or a fixed time interval, and in particular can better adapt to the moving speed of the camera.
  • although in the embodiments of the present disclosure the pointing of the camera is adjusted by the user holding the mobile device, the embodiments are not limited thereto; the mobile device may also be placed on a movable apparatus that controls the movement of the camera of the mobile device.
  • because a hand-held mobile device is prone to shake or drift during shooting, the captured panoramic image may be distorted or misaligned, affecting the user experience.
  • therefore, the positions of the captured images may be further determined to generate a high-quality panoramic image. This operation is described in detail below.
  • the method for a mobile device to capture a panoramic image may further include the steps of marking the first pointing vector as the direction vector of the first image and marking the second pointing vector as the direction vector of the second image.
  • This step can be performed each time an image is taken, that is, each image has a corresponding direction vector, which is the pointing vector of the camera that the sensor senses when the image is taken.
  • This step is intended to add orientation information to the captured image, which orientation information can be used to determine the relative position of the plurality of captured images in order to generate the desired panoramic image in a subsequent step.
  • from the start of shooting until it ends, i.e. while the shooting stop condition is not satisfied, the captured images can be displayed in real time on the display screen of the mobile device, specifically within the finder frame on the display screen. For example, when the first image is captured, it is displayed full screen in the finder frame on the display screen.
  • as the user keeps adjusting the camera pointing, the mobile device automatically captures the next image whenever the amount of change of the camera pointing is greater than or equal to the first trigger-shooting threshold, or the shooting time interval is greater than or equal to the second trigger-shooting threshold.
  • each time an image is captured, the mobile device can add the newly captured image in real time to the finder frame on the display for display.
  • the finder frame can be the full-screen effective display area of the display.
  • although FIG. 4 shows three captured images, the number of captured images may be larger and is not limited to three, and the relative positions at which the captured images are displayed in FIG. 4 are also only illustrative; this disclosure does not limit this.
  • the mobile device may determine the relative positions of the captured images in the background, and the mobile device may also display the captured images in real time on the display screen according to the determined relative positions.
  • after the first image and the second image are captured, the mobile device may determine a pixel offset of the second image relative to the first image, and determine the relative position of the first image and the second image according to the pixel offset of the second image relative to the first image.
  • in a first case, when the first image and the second image are the two initially captured images, the mobile device may determine an image overlap region of the first image and the second image, and determine the pixel offset of the second image relative to the first image according to the image overlap region.
  • for example, the image overlap region of the first image and the second image may be detected using an image detection method. As another example, the mobile device may determine the moving direction of the second image relative to the first image according to the direction vector of the first image and the direction vector of the second image, and then select, along that moving direction, a region of the first image that is likely to overlap the second image. This region may be of any suitable size, such as 4*4 pixels, 8*8 pixels, or 16*16 pixels. After the region is selected on the first image, the position of the matching image content is quickly detected on the second image, and overlapping the two identical blocks determines the pixel offset of the second image relative to the first image.
  • in a second case, when the first image and the second image are not the two initially captured images but any two adjacent images captured later, the pixel offset between the two images need not be determined by detecting their overlap region; a more convenient method can be considered.
  • since, within the same shooting scene, the pixel offset between two captured images and the change in camera angle when the two images were captured follow a fixed relationship, that relationship can be used to estimate the positions of subsequently captured images.
  • specifically, for two images captured in the scene, the ratio of the relative pixel offset ΔP along the X (or Y) axis to the direction-vector change Δθ of the two images (i.e., ΔP/Δθ) should be a constant value.
  • therefore, when determining the relative position of a first image and a second image captured later, the mobile device obtains a reference ratio of pixel offset to direction-vector change, and determines the pixel offset of the second image relative to the first image according to the reference ratio and the amount of change of the direction vector of the second image relative to the direction vector of the first image.
  • the reference ratio is obtained based on a pixel offset between the two images initially captured in the plurality of images and a variation amount of the direction vector.
  • by using the fixed relationship between pixel offset and angular change, the pixel offset of a newly captured image relative to the previously captured image can be determined quickly, which saves most of the computation time when determining the relative positions of adjacent images.
  • the fixed relationship between pixel offset and angular change applies to any two images captured in the same scene, whether or not the two images are adjacent and whether or not they are the two initially captured images.
  • since the pixel offset between the two initially captured images has already been determined in the first case, it is convenient to also calculate the change of the direction vectors of those two images, take the ratio of the pixel offset to the direction-vector change, and store it in the mobile device as the reference ratio for the scene, for use in the second case.
  • after the relative positions of the captured images have been determined via the reference ratio, this kind of extrapolation by mathematical equations may leave a slight deviation between the determined relative positions of the images and their actual relative positions, so the determined relative positions can be corrected to generate the panoramic image more accurately.
  • the relative position of the image in the second case described above can be corrected by the following steps.
  • the mobile device can determine an edge region of the image overlap region of each pair of adjacent images, and correct the relative position of the two adjacent images according to the edge region. Since the relative positions of the captured images have already been determined in the previous steps, i.e. the deviation between the currently determined relative positions and the actual relative positions is relatively small, the subsequent steps only fine-tune the positions of the images.
  • the mobile device can correct the positions of each pair of adjacent images in turn. Specifically, it can detect a few (for example, 1-2) edge pixel rows of the image overlap region of each pair of adjacent images, and then fine-tune the relative position of the two adjacent images within a small range.
  • for example, one edge pixel row of the overlap region can be detected in the first image and compared against several edge pixel rows of the overlap region in the second image.
  • it should be understood that determining the image overlap region and determining only the edge region of the overlap region require different amounts of computation and time: the latter is smaller than the former, since it obviously only needs to examine a few pixel rows at the edge of the overlap region. Therefore, in the embodiments of the present disclosure, the extrapolation-plus-correction approach requires less computation and is faster than directly determining the overlap region of every pair of adjacent images, and, more importantly, does so without sacrificing the accuracy of the determined pixel offsets.
  • when the captured images are displayed in real time in the finder frame on the display screen, the images whose relative positions have been determined can be scaled down proportionally as the number of captured images increases, so that they are all presented within the finder frame.
  • for example, when the first image is captured, it may be displayed full screen in the finder frame.
  • when the camera moves and the second image is captured, the area displaying the first image is reduced to leave room for the second image, which is displayed at the corresponding position based on the determined relative position of the first image and the second image.
  • when the camera continues to move and the third image is captured, the display positions and sizes of the first and second images are adjusted according to the relative position of the second image and the third image, leaving appropriate space to display the third image at the corresponding position.
  • the same steps apply as the fourth, fifth, ..., Nth images are captured, until shooting is complete.
  • by displaying the captured images in the finder frame in real time, the framing area can be presented to the user while indicating the movement trajectory of the images already captured and the areas of the framing area not yet captured; the user can then adjust the pointing of the camera accordingly, so as to traverse the framing area as completely as possible and capture the images with the fullest coverage.
  • generating the panoramic image may include: stitching the plurality of images according to the relative positions between the plurality of images; and selecting the image within the maximized effective rectangular region as the panoramic image. For example, when shooting stops because the user's desired shooting purpose has been achieved, the maximized effective rectangular area covered by images may be selected and cropped, the multiple images within the cropped area smoothly stitched into a panoramic image, and the panoramic image finally presented on the display.
  • as described above, embodiments of the present disclosure provide a method for a mobile device to capture a panoramic image, by which the relative position of each image can be determined based on the direction vector added to the captured images, and each image can be displayed at the corresponding position of the display screen according to the determined relative positions, so that while the framing area is presented to the user, the user is shown the movement trajectory of the captured images and the areas of the framing area not yet captured; the user can then adjust the pointing of the camera accordingly, so as to traverse the framing area as completely as possible and capture the images with the fullest coverage, and, based on the determined relative positions of the images, the images within the maximized effective rectangular region are selected, cropped, and smoothly stitched, and are finally displayed as a panoramic image on the display screen.
  • FIG. 5 is a schematic diagram showing a mobile device in accordance with an embodiment of the present disclosure.
  • the mobile device 500 can include: a camera 510 configured to capture a first image at a first moment; a sensor 520 configured to sense a first pointing vector of the camera 510 when the first image is captured, and to sense a measured pointing vector of the camera 510 at a second moment; and a processor 530 configured to, in a case where the amount of change of the measured pointing vector relative to the first pointing vector is greater than or equal to a first trigger-shooting threshold, or the time interval between the second moment and the first moment is greater than or equal to a second trigger-shooting threshold, control the camera 510 to capture a second image and control the sensor 520 to sense a second pointing vector of the camera 510 when the second image is captured, and, in a case where it is determined that the shooting stop condition is satisfied, generate a panoramic image based on the plurality of images that have been captured.
  • as long as the shooting stop condition is not satisfied, the above photographing process is performed cyclically, and the mobile device automatically and continuously captures multiple images according to its judgment of the trigger-shooting conditions. That is, when the amount of change of the camera's current angle relative to the angle at which the previous image was captured is greater than or equal to the first trigger-shooting threshold, or the change of the current time relative to the time at which the previous image was captured is greater than or equal to the second trigger-shooting threshold, the camera of the mobile device captures the next image.
  • the sensor of the mobile device can continue to sense the change in the orientation of the camera, and when either of the change in the direction of the camera or the change in the shooting time interval satisfies the condition again, the camera of the mobile device can take an image again. Therefore, the process of loop shooting can continue until the shooting stop condition is satisfied.
  • the first image and the second image referred to herein are only used to denote two adjacent, consecutively captured images, and are not limited to the first and second images actually captured. In fact, they may be the first and second captured images, the second and third captured images, or the nth and (n+1)th captured images, where n is an integer greater than or equal to 1.
  • the mobile device 500 can be a device that supports photographing, such as a smart phone, a tablet, a camera, and the like.
  • sensor 520 may be an orientation sensor that may be a combination of a magnetic field sensor and an acceleration sensor, or may be a combination of other sensors or sensors capable of sensing an orientation.
  • the mobile device can also store the first image and the first pointing vector, as well as the second image and the second pointing vector sensed by the sensor when the second image is captured.
  • the amount of change of the second pointing vector relative to the first pointing vector may be the angular change of any component of the {Pitch, Roll, Yaw} vector shown in FIG. 2, or the angular change of the fastest-changing component of the {Pitch, Roll, Yaw} vector.
  • the first trigger-shooting threshold may be 2° and the second trigger-shooting threshold may be 50 ms.
  • when the change in the pointing direction of the camera is greater than or equal to 2°, or the shooting time interval is greater than or equal to 50 ms, the next image is captured.
  • the first trigger-shooting threshold and the second trigger-shooting threshold may be fixed, or may be changed to other suitable values depending on user needs and actual shooting conditions.
  • the shooting stop condition may include any one of the following cases: the user presses the shooting stop button, a predetermined number of images have been captured, shooting has lasted for a predetermined length of time, or the running of another application in the mobile device causes an interruption of shooting. It should be understood that the above cases are only examples of shooting stop conditions and are not intended to be limiting.
  • as described above, embodiments of the present disclosure provide a mobile device for capturing a panoramic image, which can trigger its camera to automatically and continuously capture images whenever either the change in camera pointing sensed by the sensor equipped in it or the shooting time interval satisfies a predetermined condition.
  • when the camera pointing changes quickly, the mobile device uses the pointing change reaching the first trigger-shooting threshold as the condition that triggers capture of the next image; when the pointing changes slowly, it uses the shooting interval reaching the second trigger-shooting threshold as the condition that triggers capture of the next image, so that the mobile device can capture multiple images at appropriate spacing and the captured images are neither too far apart nor too close together.
  • adopting the dual-trigger shooting condition can effectively avoid the above problems, so as to generate an ideal panoramic image.
  • the processor 530 of the mobile device 500 may be further configured to mark the first pointing vector as the direction vector of the first image and mark the second pointing vector as the direction vector of the second image.
  • the processor 530 can perform this step each time an image is captured; it adds orientation information to the captured image, and that orientation information is used for subsequent processing of the captured images, such as determining the relative position of each image in order to generate the panoramic image.
  • the mobile device 500 may further include a display screen 540 configured to display the currently captured images on the display screen 540 when the shooting stop condition is not satisfied, specifically within the finder frame on the display screen 540.
  • for example, when the first image is captured, the mobile device displays it full screen in the finder frame on the display screen 540.
  • as the user keeps adjusting the camera pointing, the mobile device automatically captures the next image whenever the amount of change of the camera pointing is greater than or equal to the first trigger-shooting threshold, or the shooting time interval is greater than or equal to the second trigger-shooting threshold.
  • each time an image is captured, the currently captured images can be displayed in real time in the finder frame of the display screen 540 of the mobile device.
  • the finder frame can be the full-screen effective display area of the display screen 540.
  • the processor 530 of the mobile device may be further configured to determine a pixel offset of the second image relative to the first image, and to determine the relative position of the first image and the second image according to the pixel offset of the second image relative to the first image.
  • the processor 530 may determine the relative positions of the captured images in the background, and the mobile device 500 may also display the captured images in real time in the finder frame on the display screen 540 according to the determined relative positions of the images.
  • as described above, after the first image and the second image are captured, the processor 530 can determine the pixel offset of the second image relative to the first image, and determine the relative position of the first image and the second image according to that pixel offset.
  • in a first case, when the first image and the second image are the two initially captured images, the processor 530 may be configured to: determine an image overlap region of the first image and the second image; and determine the pixel offset of the second image relative to the first image according to the image overlap region. For example, the image overlap region of the first image and the second image may be detected using an image detection method. As another example, the mobile device may determine the moving direction of the second image relative to the first image according to the direction vector of the first image and the direction vector of the second image, and then select, along that moving direction, a region of the first image that is likely to overlap the second image, which may be of any suitable size, such as 4*4 pixels, 8*8 pixels, or 16*16 pixels.
  • after the region is selected on the first image, the matching image content is found on the second image, and the two identical blocks are overlapped, thus determining the pixel offset of the second image relative to the first image.
  • in a second case, when the first image and the second image are not the two initially captured images but images captured later, the processor 530 may be further configured to: obtain a reference ratio of pixel offset to direction-vector change; and determine the pixel offset of the second image relative to the first image according to the reference ratio and the amount of change of the direction vector of the second image relative to the direction vector of the first image.
  • the reference ratio is obtained based on the pixel offset and the direction-vector change between the two initially captured images among the plurality of images.
  • specifically, the two initially captured images are selected, and, taking the angle change Δθ with the largest absolute value as reference, the relative pixel offset ΔP of the two images along the X and Y axes is detected. Since the ratio ΔP/Δθ is relatively close to a fixed value within the scene, it can be used to quickly determine the pixel offset of each newly captured image relative to the previously captured one, saving most of the computation time when determining the relative pixel offsets of adjacent images.
  • after the relative positions of the captured images have been determined via the reference ratio, since this kind of extrapolation may introduce small pixel deviations in the positions of the images, the relative positions of the images can also be corrected in order to generate the panoramic image more accurately.
  • the processor 530 may be further configured to: determine an edge region of an image overlap region of each adjacent two images, and correct a relative position of the adjacent two images according to the edge region. Specifically, the processor 530 can detect the edge 1-2 pixel rows of the image overlapping area of the adjacent two images, thereby adjusting the relative positions of the adjacent two images in a small range.
  • when displaying the captured images in real time in the finder frame of the display screen 540, the images whose relative positions have been determined can be scaled down proportionally as the number of captured images increases, so that they are displayed within the finder frame of the display screen 540.
  • for example, when the first image is captured, it can be displayed full screen in the finder frame.
  • when the camera moves and the second image is captured, the area displaying the first image is reduced to leave room for the second image, which is displayed at the corresponding position based on the determined relative position of the first image and the second image.
  • when the camera continues to move and the third image is captured, the display positions and sizes of the first and second images are adjusted according to the relative position of the second image and the third image, leaving appropriate space to display the third image at the corresponding position.
  • the same steps apply as the fourth, fifth, ..., Nth images are captured, until shooting is complete.
  • by displaying the captured images in the finder frame in real time, the framing area can be presented to the user while indicating the movement trajectory of the images already captured and the areas of the framing area not yet captured; the user can then adjust the pointing of the camera accordingly, so as to traverse the framing area as completely as possible and capture the images with the fullest coverage.
  • the processor 530 may be further configured to: stitch the plurality of images according to the relative positions between the plurality of images; and select the image within the maximized effective rectangular region as the panoramic image. Specifically, when shooting stops because the user's desired shooting purpose has been achieved, the processor 530 can crop the selected maximized effective rectangular area covered by images, smoothly stitch the multiple images within the cropped area into a panoramic image, and finally present the panoramic image on the display screen 540.
  • as described above, embodiments of the present disclosure provide a mobile device for capturing a panoramic image, which can determine the relative position of each image according to the direction vector added to the captured images, so that while the framing area is shown to the user, the user is shown the movement trajectory of the captured images and the areas of the framing area not yet captured; the user can then adjust the pointing of the camera accordingly, so as to traverse the framing area as completely as possible and capture the images with the fullest coverage, and, based on the determined relative positions of the images, the images within the maximized effective rectangular region are selected, cropped, and smoothly stitched, and are finally displayed as a panoramic image on the display screen.
  • the mobile device may include: an image capturing unit, a sensing unit, a control unit, and a position determining unit.
  • a computer readable storage medium for storing a computer program, the computer program comprising instructions for performing one or more of the above methods.
  • Embodiments of the present disclosure also provide a computer product comprising one or more processors configured to execute computer instructions to perform one or more of the above methods.
  • the computer product further includes a memory coupled to the processor, configured to store the computer instructions.
  • the memory may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), Erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Disk or Optical Disk.
  • the processor may be a central processing unit (CPU) or a field programmable logic array (FPGA) or a single chip microcomputer (MCU) or a digital signal processor (DSP) or an application specific integrated circuit (ASIC) or a graphics processing unit (GPU).
  • the one or more processors may be configured to execute the above method simultaneously as a group of processors computing in parallel, or some of the processors may be configured to perform some steps of the above method while the other processors perform the other steps, and so on.
  • Computer instructions include one or more processor operations defined by an instruction set architecture corresponding to a processor, which may be logically included and represented by one or more computer programs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Studio Devices (AREA)
  • Stereoscopic And Panoramic Photography (AREA)

Abstract

A method for a mobile device to capture a panoramic image, and a mobile device. The mobile device includes a sensor and a camera, and the method includes: the camera captures a first image at a first moment, and the sensor senses a first pointing vector of the camera when the first image is captured (S310); the sensor senses a measured pointing vector of the camera at a second moment (S320); in a case where the amount of change of the measured pointing vector relative to the first pointing vector is greater than or equal to a first trigger-shooting threshold, or the time interval between the second moment and the first moment is greater than or equal to a second trigger-shooting threshold, the camera captures a second image and the sensor senses a second pointing vector of the camera when the second image is captured (S330); and in a case where it is determined that a shooting stop condition is satisfied, a panoramic image is generated based on the plurality of captured images (S350).

Description

Method for a mobile device to capture a panoramic image, mobile device, computer-readable storage medium, and computer product
Cross-Reference to Related Application
This application claims priority to Chinese Application No. 201710938414.2, filed on September 30, 2017 and entitled "Method for a Mobile Device to Capture a Panoramic Image and Mobile Device", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image capture, and more particularly, to a method for a mobile device to capture a panoramic image, a mobile device, a computer-readable storage medium, and a computer product.
Background
At present, smart mobile devices can support multiple camera modes. The panoramic mode requires the user to remain relatively fixed at the shooting point and rotate the lens pointing along a single spatial direction (usually horizontal or vertical) while photos are taken continuously to obtain the final photo. When capturing a panoramic image, a mobile device is usually set to capture multiple images continuously at fixed time intervals; however, the moving speed of the human hand while adjusting the camera cannot be kept constant, which results in poor quality of the captured images.
Summary
To solve the above problem, the present disclosure aims to provide a method that uses orientation information provided by a sensor within the mobile device to automatically and continuously capture images and output a panoramic image.
According to a first aspect of the present disclosure, there is provided a method for a mobile device to capture a panoramic image, the mobile device including a sensor and a camera, the method including: the camera captures a first image at a first moment, and the sensor senses a first pointing vector of the camera when the first image is captured; the sensor senses a measured pointing vector of the camera at a second moment; in a case where the amount of change of the measured pointing vector relative to the first pointing vector is greater than or equal to a first trigger-shooting threshold, or the time interval between the second moment and the first moment is greater than or equal to a second trigger-shooting threshold, the camera captures a second image, and the sensor senses a second pointing vector of the camera when the second image is captured; and in a case where it is determined that a shooting stop condition is satisfied, a panoramic image is generated based on the plurality of captured images.
In one embodiment, the method may further include: marking the first pointing vector as a direction vector of the first image, and marking the second pointing vector as a direction vector of the second image.
In one embodiment, the mobile device may further include a display screen, and the method may further include: displaying the currently captured images on the display screen when the shooting stop condition is not satisfied.
In one embodiment, the method may further include: determining a pixel offset of the second image relative to the first image; and determining a relative position of the first image and the second image according to the pixel offset of the second image relative to the first image.
In one embodiment, determining the pixel offset of the second image relative to the first image may further include: determining an image overlap region of the first image and the second image; and determining the pixel offset of the second image relative to the first image according to the image overlap region.
In one embodiment, determining the pixel offset of the second image relative to the first image may further include: obtaining a reference ratio of pixel offset to direction-vector change; and determining the pixel offset of the second image relative to the first image according to the reference ratio and the amount of change of the direction vector of the second image relative to the direction vector of the first image.
In one embodiment, the reference ratio is obtained based on the pixel offset and the direction-vector change between two initially captured images among the plurality of images.
In one embodiment, the method may further include: determining an edge region of the image overlap region of the first image and the second image; and correcting the relative position of the first image and the second image according to the edge region.
In one embodiment, generating the panoramic image may further include: stitching the plurality of images according to the relative positions between the plurality of images; and selecting the image within the maximized effective rectangular region as the panoramic image.
According to a second aspect of the present disclosure, there is provided a mobile device including: a camera configured to capture a first image at a first moment; a sensor configured to sense a first pointing vector of the camera when the first image is captured, and to sense a measured pointing vector of the camera at a second moment; and a processor configured to, in a case where the amount of change of the measured pointing vector relative to the first pointing vector is greater than or equal to a first trigger-shooting threshold, or the time interval between the second moment and the first moment is greater than or equal to a second trigger-shooting threshold, control the camera to capture a second image and control the sensor to sense a second pointing vector of the camera when the second image is captured, and, in a case where it is determined that a shooting stop condition is satisfied, generate a panoramic image based on the plurality of captured images.
In one embodiment, the processor may be further configured to mark the first pointing vector as a direction vector of the first image and mark the second pointing vector as a direction vector of the second image.
In one embodiment, the mobile device may further include a display screen configured to display the currently captured images on the display screen when the shooting stop condition is not satisfied.
In one embodiment, the processor may be further configured to determine a pixel offset of the second image relative to the first image, and to determine a relative position of the first image and the second image according to the pixel offset of the second image relative to the first image.
In one embodiment, the processor may be further configured to: determine an image overlap region of the first image and the second image; and determine the pixel offset of the second image relative to the first image according to the image overlap region.
In one embodiment, the processor may be further configured to: obtain a reference ratio of pixel offset to direction-vector change; and determine the pixel offset of the second image relative to the first image according to the reference ratio and the amount of change of the direction vector of the second image relative to the direction vector of the first image.
In one embodiment, the reference ratio is obtained based on the pixel offset and the direction-vector change between two initially captured images among the plurality of images.
In one embodiment, the processor may be further configured to: determine an edge region of the image overlap region of the first image and the second image; and correct the relative position of the first image and the second image according to the edge region.
In one embodiment, the processor may be further configured to: stitch the plurality of images according to the relative positions between the plurality of images; and select the image within the maximized effective rectangular region as the panoramic image.
According to a third aspect of the present disclosure, there is provided a mobile device for performing the method of the first aspect of the present disclosure, and the mobile device may include: an image capturing unit, a sensing unit, a control unit, and a position determining unit.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium for storing a computer program, the computer program including instructions for performing one or more steps of the method of the first aspect of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a computer product including one or more processors configured to execute computer instructions to perform one or more steps of the method of the first aspect of the present disclosure.
According to embodiments of the present disclosure, by performing the above method for capturing a panoramic image with a sensor-equipped mobile device, a dual-trigger shooting condition combining the change in camera pointing sensed by the sensor and the change in shooting time can be used to control continuous automatic image capture, and the captured images can be output as a panoramic image. Because the method adopts the dual-trigger shooting condition of camera pointing change and shooting time interval change, it can adapt more flexibly to manual adjustment of the camera and thereby capture high-quality images.
Brief Description of the Drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings of the embodiments are briefly introduced below. Obviously, the drawings described below relate only to some embodiments of the present invention and are not a limitation of the present invention.
FIG. 1 is a schematic diagram showing a coordinate system for identifying the space in which a mobile device is located;
FIG. 2 is a diagram showing the correspondence of direction parameters in the coordinate system of the space in which the mobile device shown in FIG. 1 is located;
FIG. 3 is a flowchart illustrating a method for a mobile device to capture a panoramic image according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram showing the captured images displayed on the display screen of a mobile device according to an embodiment of the present disclosure; and
FIG. 5 is a schematic diagram illustrating a mobile device for capturing a panoramic image according to an embodiment of the present disclosure.
Detailed Description
Various embodiments according to the present invention will be described in detail with reference to the drawings. Note that in the drawings, the same reference numerals are given to components having substantially the same or similar structures and functions, and repeated descriptions of them will be omitted.
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are some rather than all of the embodiments of the present invention. Based on the described embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Unless otherwise defined, the technical or scientific terms used herein shall have the ordinary meaning understood by a person of ordinary skill in the art to which the present invention belongs. The words "first", "second", and the like used in the present disclosure do not denote any order, quantity, or importance, but are merely used to distinguish different components. Likewise, words such as "include" or "comprise" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connected" or "coupled" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Up", "down", "left", "right", and the like are used only to indicate relative positional relationships, which may change accordingly when the absolute position of the described object changes.
FIG. 1 is a schematic diagram showing a coordinate system for identifying the space in which a mobile device is located, the coordinate system being used to identify the pointing of the mobile device's camera; and FIG. 2 is a diagram showing the correspondence of direction parameters in that coordinate system.
In the X-Y-Z coordinate system shown in FIG. 1, when the mobile device is placed in landscape orientation, the X axis corresponds to the up-down direction, the Y axis corresponds to the left-right direction, and the Z axis corresponds to the front-rear direction. As shown in FIG. 2, Pitch denotes rotation of the device about the Z axis from its horizontal position, i.e. the change in the relative height of the user's left and right hands while holding the mobile device for shooting; Roll denotes the angle of rotation about the Y axis, i.e. the tilt angle of the mobile device, which corresponds to the vertical field-of-view extension angle of the camera; and Yaw (azimuth) denotes rotation about the X axis, which corresponds to the horizontal field-of-view extension angle of the camera. The direction vector {Pitch, Roll, Yaw} can therefore uniquely determine the spatial attitude angle of the mobile device, i.e. the direction in which its camera points. This vector can serve as the direction vector identifying the orientation parameters of an image captured by the mobile device.
During actual shooting, the user can adjust the pointing of the camera within the X-Y two-dimensional plane, and while the camera moves, the sensor provided in the mobile device can sense the pointing vector of the camera in real time. The direction vector {Pitch, Roll, Yaw} can be obtained in real time by programmatically calling the relevant existing API of the mobile device.
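As an illustration only, the minimal Python sketch below shows one way the pointing vector might be represented and read in code. The `read_orientation()` function is a hypothetical placeholder for whatever orientation API the platform actually exposes (for example, a fused magnetic-field/accelerometer orientation sensor); the disclosure refers to it only as "the relevant existing API" and does not define a specific interface.

```python
from dataclasses import dataclass

@dataclass
class DirectionVector:
    """Spatial attitude of the camera, in degrees: {Pitch, Roll, Yaw}."""
    pitch: float  # rotation about the Z axis (left/right hand height change)
    roll: float   # rotation about the Y axis (vertical field-of-view extension)
    yaw: float    # rotation about the X axis (horizontal field-of-view extension)

def read_orientation() -> "DirectionVector":
    """Hypothetical placeholder for the platform's orientation API call.

    A real implementation would query the device's orientation sensor
    (e.g. a magnetic-field + accelerometer combination) and convert the
    reading into pitch/roll/yaw angles.
    """
    raise NotImplementedError("replace with the platform-specific sensor call")
```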
The mobile device here may be, for example, a smart phone, a tablet, a camera, or another device that supports photographing. The sensor provided in the mobile device for sensing the pointing vector of the camera may be, for example, an orientation sensor, which may be a combination of a magnetic-field sensor and an acceleration sensor, or any other sensor or combination of sensors capable of sensing orientation; the present disclosure does not limit this.
FIG. 3 is a flowchart illustrating a method for a mobile device to capture a panoramic photo according to an embodiment of the present disclosure.
When the user selects the panorama shooting mode, aims the camera at the subject, and presses the shutter button, the panorama shooting process starts. As shown in FIG. 3, the camera captures a first image at a first moment, and the sensor senses a first pointing vector of the camera when the first image is captured (S310). The mobile device may store the first image and the first pointing vector. Then, the sensor senses a measured pointing vector of the camera at a second moment (S320). As the camera is adjusted, the measured pointing vector changes dynamically with the camera pointing. In a case where the amount of change of the measured pointing vector relative to the first pointing vector is greater than or equal to the first trigger-shooting threshold, or the time interval between the second moment and the first moment is greater than or equal to the second trigger-shooting threshold, the camera captures a second image, and the sensor senses a second pointing vector of the camera when the second image is captured (S330). The mobile device may store the second image and the second pointing vector sensed when the second image was captured. It is then determined whether the shooting stop condition is satisfied (S340). When the shooting stop condition is satisfied, a panoramic image is generated based on the plurality of images already captured (S350). When the shooting stop condition is not satisfied, the process returns to step S320.
It should be understood that, as long as the shooting stop condition is not satisfied, the above shooting process runs in a loop, and the mobile device automatically and continuously captures multiple images according to its evaluation of the trigger-shooting conditions. That is, when the amount of change of the camera's current angle relative to its angle when the previous image was captured is greater than or equal to the first trigger-shooting threshold, or the change of the current time relative to the time when the previous image was captured is greater than or equal to the second trigger-shooting threshold, the camera of the mobile device automatically captures the next image. The sensor of the mobile device then continues to sense changes in the camera pointing, and whenever either the change in pointing or the change in shooting time interval satisfies the condition again, the camera captures another image. The loop therefore continues until the shooting stop condition is satisfied.
It should also be understood that the first image and the second image mentioned here merely denote two adjacent, consecutively captured images and are not limited to the two initially captured images. In fact, they may be the first and second captured images, the third and fourth captured images, or the nth and (n+1)th captured images, where n is an integer greater than or equal to 1.
As an example, the amount of change of the second pointing vector relative to the first pointing vector may be the angular change of any component of the {Pitch, Roll, Yaw} vector shown in FIG. 2, for example the angular change of the fastest-changing component of the {Pitch, Roll, Yaw} vector.
As an example, the first trigger-shooting threshold may be 2° and the second trigger-shooting threshold may be 50 ms. When the change in the camera's pointing direction is greater than or equal to 2°, or the shooting time interval is greater than or equal to 50 ms, the next image is captured. It should be understood that the first and second trigger-shooting thresholds may be fixed, or may be changed to other suitable values according to user needs and actual shooting conditions.
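The loop below is a hedged sketch of how the dual-trigger condition could drive continuous capture, using the 2° and 50 ms example values above. It is not the disclosure's implementation: `capture_image` and `read_orientation` are hypothetical callables standing in for the platform camera and sensor APIs (see the earlier sketch), `stop_condition` stands for any of the shooting stop conditions described in this disclosure, and the pointing change is measured here as the fastest-changing {Pitch, Roll, Yaw} component.

```python
import time

ANGLE_THRESHOLD_DEG = 2.0     # first trigger-shooting threshold (example value)
INTERVAL_THRESHOLD_S = 0.050  # second trigger-shooting threshold, 50 ms (example value)

def pointing_change(prev, cur):
    """Amount of change between two pointing vectors: the angular change of the
    fastest-changing {Pitch, Roll, Yaw} component."""
    return max(abs(cur.pitch - prev.pitch),
               abs(cur.roll - prev.roll),
               abs(cur.yaw - prev.yaw))

def panorama_capture_loop(capture_image, read_orientation, stop_condition):
    """Capture images until stop_condition() is True.

    Returns a list of (image, direction_vector) pairs; the direction vector
    recorded with each image is used later to position and stitch the images.
    """
    image = capture_image()             # S310: first image ...
    vector = read_orientation()         # ... and its first pointing vector
    last_time = time.monotonic()
    frames = [(image, vector)]

    while not stop_condition():         # S340: keep going until a stop condition holds
        measured = read_orientation()   # S320: measured pointing vector at this moment
        now = time.monotonic()
        if (pointing_change(vector, measured) >= ANGLE_THRESHOLD_DEG
                or now - last_time >= INTERVAL_THRESHOLD_S):
            image = capture_image()     # S330: capture the next image ...
            vector = read_orientation() # ... and record its direction vector
            last_time = now
            frames.append((image, vector))

    return frames                       # S350 (stitching) happens elsewhere
```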
As an example, the shooting stop condition may be satisfied in any of the following cases: the area swept by the camera has traversed all areas within the current viewing range (e.g., the finder frame presented on the display screen); the user presses the shooting stop button to actively terminate shooting, in which case, if an untraversed area remains, the shooting system automatically selects the largest possible area as the final imaged area; a predetermined number of images have been captured; shooting has lasted for a predetermined length of time; the running of another application on the mobile device interrupts shooting; or, in another case, the limit of the shooting system's storage capacity may be reached. The above cases are only examples of shooting stop conditions, not limitations.
As described above, embodiments of the present disclosure provide a method for a mobile device to capture a panoramic image, in which the camera is triggered to automatically and continuously capture images whenever either the change in camera pointing sensed by the sensor of the mobile device or the shooting time interval satisfies a predetermined condition. With this method, when the camera pointing changes quickly, the amount of pointing change triggers the next image; when the pointing changes slowly, the shooting time interval triggers it. The mobile device can thus capture multiple images at appropriate spacing: if adjacent images are too far apart they share no common region and cannot be stitched, while images that are too close differ little and waste shooting resources, so capturing images at an appropriate distance is necessary. Since the user's manual adjustment of the camera pointing is irregular, the dual-trigger condition of pointing change and time change effectively avoids these problems and produces an ideal panoramic image. The method is therefore more flexible than a single trigger condition such as a fixed angle change or a fixed time interval, and in particular adapts better to the moving speed of the camera when capturing images.
It should be understood that, although in the embodiments of the present disclosure the pointing of the camera is adjusted by a user holding the mobile device, the embodiments are not limited thereto; the mobile device may also be placed on a movable apparatus that controls the movement of the camera of the mobile device.
Considering that a hand-held mobile device is prone to shake or drift during actual shooting, the captured panoramic image may be distorted or misaligned, affecting the user experience. Therefore, the method according to embodiments of the present disclosure may further determine the positions of the captured images in order to generate a high-quality panoramic image. This operation is described in detail below.
In an embodiment of the present disclosure, the method for a mobile device to capture a panoramic image may further include the steps of: marking the first pointing vector as the direction vector of the first image, and marking the second pointing vector as the direction vector of the second image. This step can be performed each time an image is captured; that is, every image has a corresponding direction vector, which is the pointing vector of the camera sensed by the sensor when that image was captured. The step adds orientation information to the captured images, and that information can be used to determine the relative positions of the captured images so that an ideal panoramic image can be generated in subsequent steps.
FIG. 4 is a schematic diagram showing the captured images displayed within the finder frame on the display screen according to an embodiment of the present disclosure. From the start of shooting until it ends, i.e. while the shooting stop condition is not satisfied, the captured images can be displayed in real time on the display screen of the mobile device, specifically within the finder frame on the display screen. For example, when the first image is captured, it is displayed full screen in the finder frame. As the user keeps adjusting the camera pointing, the mobile device automatically captures the next image whenever, as described above, the change of the camera pointing is greater than or equal to the first trigger-shooting threshold or the shooting time interval is greater than or equal to the second trigger-shooting threshold. Each time an image is captured, the mobile device can add the newly captured image to the finder frame on the display in real time. Typically, the finder frame can be the full-screen effective display area of the display. Although FIG. 4 shows three captured images, more images may be captured and the number is not limited to three, and the relative positions at which the captured images are displayed in FIG. 4 are only illustrative; the present disclosure does not limit this.
According to embodiments of the present disclosure, the mobile device can determine the relative positions of the captured images in the background, and it can also display the captured images on the display screen in real time according to the determined relative positions.
According to embodiments of the present disclosure, after the first image and the second image are captured, the mobile device can determine the pixel offset of the second image relative to the first image, and determine the relative position of the first image and the second image according to that pixel offset.
When determining the pixel offset of the second image relative to the first image, one of the following two schemes can be chosen as appropriate. Both schemes are described in detail below.
In the first case, when the first image and the second image are the two initially captured images, the mobile device can determine an image overlap region of the first image and the second image, and determine the pixel offset of the second image relative to the first image according to the image overlap region. For example, the image overlap region of the two images may be detected using an image detection method. As another example, the mobile device may determine the moving direction of the second image relative to the first image from the direction vectors of the two images, and then select, along that moving direction, a region of the first image that is likely to overlap the second image. This region may be of any suitable size, such as 4*4, 8*8, or 16*16 pixels. After a region is selected on the first image, the position of the matching image content within the second image is quickly detected, and the two identical blocks are overlapped; this determines the pixel offset of the second image relative to the first image.
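To make the block-matching idea concrete, here is an illustrative Python/NumPy sketch under simplifying assumptions: both images are same-sized grayscale arrays, the reference block is simply taken from the centre of the first image rather than being chosen along the inferred moving direction, and the match is found by an exhaustive sum-of-absolute-differences search. The disclosure does not prescribe a particular matching algorithm; this is just one plausible realization.

```python
import numpy as np

def pixel_offset_by_block_match(img1, img2, block=16):
    """Estimate the (dy, dx) pixel offset of img2 relative to img1.

    img1, img2: 2-D grayscale numpy arrays of the same shape (both larger
    than `block` in each dimension). `block` is the side length of the
    reference block, e.g. 4, 8 or 16 pixels.
    """
    h, w = img1.shape
    y0, x0 = (h - block) // 2, (w - block) // 2          # reference block from img1's centre
    ref = img1[y0:y0 + block, x0:x0 + block].astype(np.float32)

    best_score, best_pos = None, (y0, x0)
    for y in range(h - block + 1):                       # exhaustive search in img2
        for x in range(w - block + 1):
            cand = img2[y:y + block, x:x + block].astype(np.float32)
            score = float(np.abs(cand - ref).sum())      # sum of absolute differences
            if best_score is None or score < best_score:
                best_score, best_pos = score, (y, x)

    dy, dx = best_pos[0] - y0, best_pos[1] - x0          # how far the content moved
    return dy, dx
```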
In the second case, when the first image and the second image are not the two initially captured images but any two adjacent images captured later (for example, the third and fourth images), the pixel offset between them need not be determined by detecting their overlap region; a more convenient method can be considered. Since, within the same shooting scene, the pixel offset between two captured images and the change in camera angle when the two images were captured follow a fixed relationship, that relationship can be used to extrapolate the positions of subsequently captured images. Specifically, for two images captured in the scene, the ratio of their relative pixel offset ΔP along the X (or Y) axis to their direction-vector change Δθ (i.e., ΔP/Δθ) should be a constant value.
Therefore, when determining the relative position of a first image and a second image captured later, the following steps can be performed: the mobile device obtains a reference ratio of pixel offset to direction-vector change, and determines the pixel offset of the second image relative to the first image according to the reference ratio and the change of the direction vector of the second image relative to the direction vector of the first image. The reference ratio is obtained based on the pixel offset and the direction-vector change between the two initially captured images among the plurality of images.
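The reference-ratio extrapolation amounts to two one-line formulas, sketched below. The function names are illustrative only; the initial pixel offset is assumed to have been measured from the two initially captured images (e.g. by block matching as above), and the angle changes are assumed to be expressed in degrees.

```python
def reference_ratio(initial_pixel_offset, initial_angle_change):
    """Reference ratio ΔP/Δθ, computed once from the two initially captured images."""
    return initial_pixel_offset / initial_angle_change

def estimate_pixel_offset(ratio, angle_change):
    """Pixel offset of a later image relative to the previous one: ΔP ≈ ratio * Δθ."""
    return ratio * angle_change

# Illustrative numbers: if the first two images were offset by 120 px for a 3° change,
# a later 2.5° change is estimated to shift the new image by about 100 px.
ratio = reference_ratio(120.0, 3.0)       # 40 px per degree (stored for this scene)
print(estimate_pixel_offset(ratio, 2.5))  # -> 100.0
```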
By using the fixed relationship between pixel offset and angular change, the pixel offset of a newly captured image relative to the previously captured image can be determined quickly, which saves most of the computation time when determining the relative positions of adjacent images.
It should be understood that the fixed relationship between pixel offset and angular change applies to any two images captured in the same scene, whether or not the two images are adjacent and whether or not they are the two initially captured images.
Since the pixel offset between the two initially captured images has already been determined in the first case, a convenient approach is to also compute the change of the direction vectors of those two images, take the ratio of the pixel offset to the direction-vector change, and store it in the mobile device as the reference ratio for the scene, for use in the second case.
After the relative positions of the captured images have been determined via the above reference ratio, this kind of extrapolation by mathematical equations may leave a slight deviation between the determined relative positions of the images and their actual relative positions; the determined relative positions can therefore be corrected to generate the panoramic image more accurately.
The relative positions of the images in the second case above can be corrected by the following steps. For example, the mobile device can determine an edge region of the image overlap region of each pair of adjacent images, and correct the relative position of the two adjacent images according to that edge region. Since the relative positions of the captured images were already determined in the previous steps, i.e. the deviation between the currently determined relative positions and the actual relative positions is small, the subsequent steps only fine-tune the image positions. The mobile device can correct the positions of each pair of adjacent images in turn; specifically, it can detect a few (for example, 1-2) edge pixel rows of the image overlap region of each pair of adjacent images and then fine-tune the relative position of the two images within a small range. Because only a fine adjustment of relative position is performed, it is sufficient to examine the edge of the overlap region of the two images: for example, one edge pixel row of the overlap region in the first image can be detected and compared against several edge pixel rows of the overlap region in the second image. When the edge pixel row of the first image's overlap region matches an edge pixel row of the second image's overlap region, the relative position of the first image and the second image is updated based on the matched edge pixel rows.
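The edge-row correction can be sketched as follows, assuming for simplicity that the two images are vertically adjacent grayscale arrays of equal width, that the first image lies above the second, and that the extrapolated overlap is expressed as a number of rows accurate to within one or two rows. The bottom row of the first image (an edge row of the overlap region) is compared against a few candidate rows of the second image around the predicted position, and the overlap estimate is nudged to the best match; this is an illustrative sketch, not the disclosure's exact procedure.

```python
import numpy as np

def refine_overlap(img1, img2, overlap_est, search=2):
    """Fine-tune the estimated row overlap between two vertically adjacent images.

    img1 sits above img2 (same width); overlap_est is the number of overlapping
    rows predicted by the reference-ratio extrapolation, assumed accurate to
    within `search` rows (a small value such as 1 or 2, with overlap_est > search).
    Returns the corrected number of overlapping rows.
    """
    edge = img1[-1, :].astype(np.float32)   # bottom row of img1 = last row of the overlap
    best_d, best_score = 0, None
    for d in range(-search, search + 1):    # candidate rows of img2 near the prediction
        row = img2[overlap_est - 1 + d, :].astype(np.float32)
        score = float(np.abs(row - edge).sum())   # sum of absolute differences
        if best_score is None or score < best_score:
            best_d, best_score = d, score
    return overlap_est + best_d             # match d rows lower => overlap is d rows larger
```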
It should be understood that, in embodiments of the present disclosure, determining the image overlap region and determining only the edge region of the overlap region require different amounts of computation and time: the latter is smaller than the former, since it obviously only needs to examine a few pixel rows at the edge of the overlap region. Therefore, in embodiments of the present disclosure, the extrapolation-plus-correction approach requires less computation and is faster than directly determining the overlap region for every pair of adjacent images, and, more importantly, it does so without sacrificing the accuracy of the determined pixel offsets.
When the captured images are displayed in real time in the finder frame on the display screen, the images whose relative positions have been determined can be scaled down proportionally as the number of captured images increases, so that they all fit within the finder frame. For example, as shown in FIG. 4, when the first image is captured, it can be displayed full screen in the finder frame. When the camera moves and the second image is captured, the area displaying the first image is reduced to leave room for the second image, which is displayed at the position corresponding to the determined relative position of the first and second images. When the camera continues to move and the third image is captured, the display positions and sizes of the first and second images are adjusted according to the relative position of the second and third images, leaving appropriate space to display the third image at the corresponding position. The same steps apply as the fourth, fifth, ..., Nth images are captured, until shooting is complete.
By displaying the captured images in the finder frame on the display screen in real time, the device can show the user the framing area while indicating the movement trajectory of the images already captured and the areas of the framing area not yet captured; the user can then adjust the camera pointing accordingly so as to traverse the framing area as completely as possible and capture the images with the fullest coverage.
In an embodiment of the present disclosure, generating the panoramic image may include: stitching the plurality of images according to the relative positions between them; and selecting the image within the maximized effective rectangular region as the panoramic image. For example, when shooting stops because the user's desired shooting purpose has been achieved, the maximized effective rectangular area covered by images can be selected and cropped, the multiple images within the cropped area smoothly stitched into a panoramic image, and the panoramic image finally presented on the display screen.
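Finally, a sketch of how the captured images might be composed and cropped to the maximized effective rectangle. It assumes grayscale NumPy arrays and integer (dy, dx) offsets relative to the first image, resolves overlaps by simple overwrite instead of the smooth blending mentioned above, and finds the largest fully covered axis-aligned rectangle with the classic largest-rectangle-in-a-histogram technique; none of these specific choices are mandated by the disclosure.

```python
import numpy as np

def largest_covered_rect(mask):
    """Largest axis-aligned rectangle of True cells (histogram-stack method).
    Returns (top, left, bottom, right) with half-open bounds."""
    h, w = mask.shape
    heights = np.zeros(w, dtype=int)
    best, best_area = (0, 0, 0, 0), 0
    for row in range(h):
        heights = np.where(mask[row], heights + 1, 0)
        stack = []                                 # column indices with increasing heights
        for col in range(w + 1):
            cur = heights[col] if col < w else 0   # sentinel flushes the stack
            while stack and heights[stack[-1]] >= cur:
                height = int(heights[stack.pop()])
                left = stack[-1] + 1 if stack else 0
                area = height * (col - left)
                if area > best_area:
                    best_area = area
                    best = (row - height + 1, left, row + 1, col)
            stack.append(col)
    return best

def compose_panorama(images, offsets):
    """Paste images at their (dy, dx) offsets (relative to the first image) onto a
    canvas and crop the largest rectangle fully covered by image content."""
    min_y = min(dy for dy, _ in offsets)
    min_x = min(dx for _, dx in offsets)
    h = max(dy - min_y + img.shape[0] for img, (dy, _) in zip(images, offsets))
    w = max(dx - min_x + img.shape[1] for img, (_, dx) in zip(images, offsets))
    canvas = np.zeros((h, w), dtype=np.float32)
    covered = np.zeros((h, w), dtype=bool)
    for img, (dy, dx) in zip(images, offsets):
        y, x = dy - min_y, dx - min_x
        canvas[y:y + img.shape[0], x:x + img.shape[1]] = img   # overwrite, no blending
        covered[y:y + img.shape[0], x:x + img.shape[1]] = True
    top, left, bottom, right = largest_covered_rect(covered)
    return canvas[top:bottom, left:right]
```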
As described above, embodiments of the present disclosure provide a method for a mobile device to capture a panoramic image, by which the relative position of each image can be determined from the direction vector attached to the captured images, and each image can be displayed at the corresponding position on the display screen according to the determined relative positions. While the framing area is presented to the user, the user is shown the movement trajectory of the captured images and the areas of the framing area not yet captured, and can adjust the camera pointing accordingly so as to traverse the framing area as completely as possible and capture the images with the fullest coverage. Based on the determined relative positions of the images, the images within the maximized effective rectangular region are selected, cropped, and smoothly stitched, and are finally displayed as a panoramic image on the display screen.
FIG. 5 is a schematic diagram showing a mobile device according to an embodiment of the present disclosure. As shown in FIG. 5, the mobile device 500 may include: a camera 510 configured to capture a first image at a first moment; a sensor 520 configured to sense a first pointing vector of the camera 510 when the first image is captured, and to sense a measured pointing vector of the camera 510 at a second moment; and a processor 530 configured to, in a case where the amount of change of the measured pointing vector relative to the first pointing vector is greater than or equal to a first trigger-shooting threshold, or the time interval between the second moment and the first moment is greater than or equal to a second trigger-shooting threshold, control the camera 510 to capture a second image and control the sensor 520 to sense a second pointing vector of the camera 510 when the second image is captured, and, in a case where it is determined that a shooting stop condition is satisfied, generate a panoramic image based on the plurality of captured images.
It should be understood that, as long as the shooting stop condition is not satisfied, the above shooting process runs in a loop, and the mobile device automatically and continuously captures multiple images according to its evaluation of the trigger-shooting conditions. That is, when the amount of change of the camera's current angle relative to its angle when the previous image was captured is greater than or equal to the first trigger-shooting threshold, or the change of the current time relative to the time when the previous image was captured is greater than or equal to the second trigger-shooting threshold, the camera of the mobile device captures the next image. The sensor of the mobile device then continues to sense changes in the camera pointing, and whenever either the change in pointing or the change in shooting time interval satisfies the condition again, the camera captures another image. The loop therefore continues until the shooting stop condition is satisfied.
It should also be understood that the first image and the second image mentioned here merely denote two adjacent, consecutively captured images and are not limited to the first and second images actually captured. In fact, they may be the first and second captured images, the second and third captured images, or the nth and (n+1)th captured images, where n is an integer greater than or equal to 1.
The mobile device 500 may be a device that supports photographing, such as a smart phone, a tablet, or a camera.
In embodiments of the present disclosure, the sensor 520 may be, for example, an orientation sensor, which may be a combination of a magnetic-field sensor and an acceleration sensor, or any other sensor or combination of sensors capable of sensing orientation.
The mobile device may also store the first image and the first pointing vector, as well as the second image and the second pointing vector sensed when the second image was captured.
As an example, the amount of change of the second pointing vector relative to the first pointing vector may be the angular change of any component of the {Pitch, Roll, Yaw} vector shown in FIG. 2, or the angular change of the fastest-changing component of the {Pitch, Roll, Yaw} vector.
As an example, the first trigger-shooting threshold may be 2° and the second trigger-shooting threshold may be 50 ms. When the change in the camera's pointing direction is greater than or equal to 2°, or the shooting time interval is greater than or equal to 50 ms, the next image is captured. It should be understood that the first and second trigger-shooting thresholds may be fixed, or may be changed to other suitable values according to user needs and actual shooting conditions.
As an example, the shooting stop condition may include any of the following cases: the user presses the shooting stop button, a predetermined number of images have been captured, shooting has lasted for a predetermined length of time, or the running of another application on the mobile device interrupts shooting. It should be understood that the above cases are only examples of shooting stop conditions, not limitations.
As described above, embodiments of the present disclosure provide a mobile device for capturing a panoramic image, which triggers its camera to automatically and continuously capture images whenever either the change in camera pointing sensed by its sensor or the shooting time interval satisfies a predetermined condition. With this mobile device, when the camera pointing changes quickly, reaching the first trigger-shooting threshold is the condition that triggers capture of the next image; when the pointing changes slowly, the shooting interval reaching the second trigger-shooting threshold triggers it. The mobile device can thus capture multiple images at appropriate spacing, so that the captured images are neither too far apart nor too close together. Since the user's manual adjustment of the camera pointing is irregular, the dual-trigger shooting condition effectively avoids the above problems and produces an ideal panoramic image.
在本公开的实施例中,该移动设备500的处理器530还可以被配 置为将该第一指向向量标记为该第一图像的方向矢量,并且将该第二指向向量标记为该第二图像的方向矢量。处理器530可以在每当拍摄一幅图像时执行该步骤,意在向所拍摄的图像中添加方位信息,该方位信息用于对拍摄的多幅图像的后续处理,例如对各幅图像的相对位置的确定,以便生成全景图像。
在本公开的实施例中,该移动设备500还可以包括:显示屏540,被配置为在不满足该拍摄停止条件的情况下,将当前已拍摄的各幅图像显示在显示屏540内,具体地,显示在显示屏540上的取景标示框内。例如,当拍摄了第一幅图像时,移动设备将该第一幅图像全屏显示在显示屏540上的取景标示框内。随着用户不断调整摄像头的指向,如前所述每当摄像头指向的变化量大于或等于第一触发拍摄阈值、或者拍摄的时间间隔大于或等于第二触发拍摄阈值时,移动设备可以自动拍摄下一幅图像。每拍摄一幅图像,可以在移动设备的显示屏540的取景标示框内实时地显示当前所拍摄的各幅图像。通常,取景标示框可以是显示屏540的全屏有效显示区域。
在本公开的实施例中,该移动设备的处理器530还可以被配置为确定该第二图像相对于该第一图像的像素偏移;根据该第二图像相对于该第一图像的像素偏移,确定该第一图像和该第二图像的相对位置。
根据本公开的实施例,处理器530可以在后台对已拍摄的各幅图像的相对位置进行确定,并且移动设备500也可以根据所确定的各幅图像的相对位置而在显示屏540的取景标示框内实时显示已拍摄的各幅图像。如前所述,在拍摄了第一图像和第二图像后,处理器530可以确定该第二图像相对于该第一图像的像素偏移;并且根据该第二图像相对于该第一图像的像素偏移,确定该第一图像和该第二图像的相对位置。
在确定第二图像相对于第一图像的像素偏移时,可以视情况选择以下两种方案中的一种。将在下文详细描述这两种方案。
In the first case, when the first image and the second image are the two initially captured images, the processor 530 may be configured to determine the image overlap region of the first image and the second image, and determine the pixel offset of the second image relative to the first image according to the image overlap region. For example, the image overlap region of the first image and the second image may be detected according to an image detection method. As another example, the mobile device may judge the moving direction of the second image relative to the first image according to the direction vector of the first image and the direction vector of the second image, and then select, in that moving direction, a region of the first image that is likely to overlap with the second image; the region may be of any appropriate size, such as 4*4, 8*8, or 16*16 pixels. After the region of the first image has been selected, the portion of the second image that is identical to the image within that region, or part of it, is found. Based on the positions of the identical image content in the two images, the two identical blocks can be overlapped, thereby determining the pixel offset of the second image relative to the first image.
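The block search described here is essentially template matching; the sketch below uses a brute-force sum-of-absolute-differences search, which is one straightforward (not necessarily disclosed) way to implement it, with hypothetical names and a 16*16 block.

```python
import numpy as np

def find_pixel_offset(img1, img2, block_xy, block_size=16):
    """Locate a small block of img1 inside img2 by exhaustive SAD template
    matching and return the pixel offset (dx, dy) of img2 relative to img1."""
    bx, by = block_xy                                   # top-left of the block in img1
    block = img1[by:by + block_size, bx:bx + block_size].astype(int)
    h, w = img2.shape[:2]
    best_cost, best_pos = np.inf, (0, 0)
    for y in range(h - block_size + 1):
        for x in range(w - block_size + 1):
            cand = img2[y:y + block_size, x:x + block_size].astype(int)
            cost = np.abs(block - cand).sum()
            if cost < best_cost:
                best_cost, best_pos = cost, (x, y)
    # The block sits at (bx, by) in img1 and at best_pos in img2, so img2's
    # content is shifted by (bx - x, by - y) relative to img1.
    return bx - best_pos[0], by - best_pos[1]
```

In practice the search would be restricted to a window along the moving direction inferred from the direction vectors, rather than scanning the whole second image.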
In the second case, when the first image and the second image are not the two initially captured images but two subsequently captured images (for example, the third and fourth images), the processor 530 may be further configured to obtain the reference ratio of pixel offset to direction vector change, and determine the pixel offset of the second image relative to the first image according to the reference ratio and the change of the direction vector of the second image relative to the direction vector of the first image. The reference ratio is obtained based on the pixel offset and the direction vector change between the two initially captured images among the plurality of images. Specifically, the two initially captured images are selected, the angle change Δθ with the largest absolute value is used as the reference, and the relative pixel offset ΔP of the two images along the X and Y axes is further detected. Since the ratio ΔP/Δθ is relatively close to a fixed value in the scene, it can be used to quickly determine the pixel offset of a later captured image relative to the previously captured image, which saves most of the computation time when determining the relative pixel offsets of adjacent images.
After the relative positions of the captured images have been determined by means of the above reference ratio, this kind of estimation may introduce a small pixel deviation in the positions of the images; therefore, the relative positions of the images may further be corrected so as to generate the panoramic image more accurately.
In the embodiments of the present disclosure, the processor 530 may be further configured to determine the edge region of the image overlap region of every two adjacent images, and correct the relative position of the two adjacent images according to the edge region. Specifically, the processor 530 may examine 1-2 pixel rows at the edge of the image overlap region of the two adjacent images, and then adjust the relative position of the two adjacent images within a small range.
When the captured images are displayed in real time in the viewfinder indication frame of the display screen 540, as the number of captured images increases, the images whose relative positions have been determined may be scaled proportionally so that they are displayed within the viewfinder indication frame of the display screen 540. For example, when the first image is captured, it may be displayed full-screen in the viewfinder indication frame. When the camera moves and the second image is captured, the region displaying the first image is reduced to leave room for the second image, and the second image is displayed at the corresponding position according to the determined relative position of the first and second images. When the camera continues to move and the third image is captured, the display positions and sizes of the first and second images are adjusted according to the relative position of the second and third images, so as to leave appropriate room to display the third image at the corresponding position. When the fourth, fifth, ..., N-th images continue to be captured, the steps proceed by analogy until the shooting is completed. By displaying the captured images in real time in the viewfinder indication frame, the viewfinder area can be presented to the user while the user is prompted with the movement trajectory of the already captured images and with the portions of the viewfinder area that have not yet been photographed. The user can then adjust the pointing of the camera accordingly, so as to traverse the viewfinder area as fully as possible and capture multiple images with the most complete coverage.
In the embodiments of the present disclosure, the processor 530 may be further configured to stitch the plurality of images according to the relative positions among the plurality of images, and select the image within a maximized effective rectangular region as the panoramic image. Specifically, when shooting stops because the user's desired shooting purpose has been achieved, the processor 530 may crop the selected maximized effective rectangular region covered by images, smoothly stitch the multiple images within the cropped region into a panoramic image, and finally present the panoramic image on the display screen 540.
As described above, the embodiments of the present disclosure provide a mobile device for capturing a panoramic image. The mobile device can determine the relative positions of the images according to the direction vectors added to the captured images, so that the viewfinder area is presented to the user while the user is prompted with the movement trajectory of the already captured images and with the portions of the viewfinder area that have not yet been photographed. The user can then adjust the pointing of the camera accordingly, so as to traverse the viewfinder area as fully as possible and capture multiple images with the most complete coverage. Based on the determined relative positions of the images, the image within a maximized effective rectangular region is selected, cropped, and smoothly stitched, and is finally displayed on the display screen as the panoramic image.
In the embodiments of the present disclosure, a mobile device for executing the above method is also provided. The mobile device may include an image capturing unit, a sensing unit, a control unit, and a position determining unit.
In the embodiments of the present disclosure, a computer readable storage medium is also provided for storing a computer program, the computer program including instructions for executing one or more steps of the above method.
The embodiments of the present disclosure also provide a computer product including one or more processors, the processors being configured to run computer instructions to execute one or more steps of the above method.
In one example, the computer product further includes a memory connected to the processor and configured to store the computer instructions.
The memory may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
The processor may be a logic operation device having data processing capability and/or program execution capability, such as a central processing unit (CPU), a field programmable gate array (FPGA), a microcontroller unit (MCU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), or a graphics processing unit (GPU). One or more processors may be configured to execute the above method simultaneously as a group of processors computing in parallel, or may be configured such that some of the processors execute some steps of the above method while other processors execute the remaining steps.
The computer instructions include one or more processor operations defined by the instruction set architecture corresponding to the processor; these computer instructions may be logically contained in and represented by one or more computer programs.
Although example embodiments have been described herein with reference to the accompanying drawings, it should be understood that the above example embodiments are merely exemplary and are not intended to limit the scope of the present invention thereto. A person of ordinary skill in the art may make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as claimed in the appended claims.

Claims (20)

  1. A method for a mobile device to capture a panoramic image, the mobile device comprising a sensor and a camera, the method comprising:
    the camera capturing a first image at a first moment, and the sensor sensing a first pointing vector of the camera when capturing the first image;
    the sensor sensing a measured pointing vector of the camera at a second moment;
    when a change of the measured pointing vector relative to the first pointing vector is greater than or equal to a first capture trigger threshold or a time interval between the second moment and the first moment is greater than or equal to a second capture trigger threshold, the camera capturing a second image, and the sensor sensing a second pointing vector of the camera when capturing the second image; and
    when it is determined that a capture stop condition is satisfied, generating a panoramic image based on a plurality of captured images.
  2. The method according to claim 1, further comprising: marking the first pointing vector as a direction vector of the first image, and marking the second pointing vector as a direction vector of the second image.
  3. The method according to claim 1, wherein the mobile device further comprises a display screen, and the method further comprises: displaying the currently captured images on the display screen when the capture stop condition is not satisfied.
  4. The method according to any one of claims 1 to 3, further comprising:
    determining a pixel offset of the second image relative to the first image; and
    determining a relative position of the first image and the second image according to the pixel offset of the second image relative to the first image.
  5. The method according to claim 4, wherein determining the pixel offset of the second image relative to the first image comprises:
    determining an image overlap region of the first image and the second image; and
    determining the pixel offset of the second image relative to the first image according to the image overlap region.
  6. The method according to claim 4, wherein determining the pixel offset of the second image relative to the first image comprises:
    obtaining a reference ratio of pixel offset to direction vector change; and
    determining the pixel offset of the second image relative to the first image according to the reference ratio and a change of the direction vector of the second image relative to the direction vector of the first image.
  7. The method according to claim 6, wherein the reference ratio is obtained based on a pixel offset and a direction vector change between two initially captured images among the plurality of images.
  8. The method according to claim 6, further comprising:
    determining an edge region of an image overlap region of the first image and the second image; and
    correcting the relative position of the first image and the second image according to the edge region.
  9. The method according to claim 1, wherein generating the panoramic image comprises:
    stitching the plurality of images according to relative positions among the plurality of images; and
    selecting an image within a maximized effective rectangular region as the panoramic image.
  10. A mobile device, comprising:
    a camera configured to capture a first image at a first moment;
    a sensor configured to sense a first pointing vector of the camera when capturing the first image, and to sense a measured pointing vector of the camera at a second moment; and
    a processor configured to, when a change of the measured pointing vector relative to the first pointing vector is greater than or equal to a first capture trigger threshold or a time interval between the second moment and the first moment is greater than or equal to a second capture trigger threshold, control the camera to capture a second image and control the sensor to sense a second pointing vector of the camera when capturing the second image, and, when it is determined that a capture stop condition is satisfied, generate a panoramic image based on a plurality of captured images.
  11. The mobile device according to claim 10, wherein the processor is further configured to mark the first pointing vector as a direction vector of the first image and mark the second pointing vector as a direction vector of the second image.
  12. The mobile device according to claim 10, further comprising:
    a display screen configured to display the currently captured images on the display screen when the capture stop condition is not satisfied.
  13. The mobile device according to any one of claims 10 to 12, wherein the processor is further configured to determine a pixel offset of the second image relative to the first image, and determine a relative position of the first image and the second image according to the pixel offset of the second image relative to the first image.
  14. The mobile device according to claim 13, wherein the processor is further configured to:
    determine an image overlap region of the first image and the second image; and
    determine the pixel offset of the second image relative to the first image according to the image overlap region.
  15. The mobile device according to claim 13, wherein the processor is further configured to:
    obtain a reference ratio of pixel offset to direction vector change; and
    determine the pixel offset of the second image relative to the first image according to the reference ratio and a change of the direction vector of the second image relative to the direction vector of the first image.
  16. The mobile device according to claim 15, wherein the reference ratio is obtained based on a pixel offset and a direction vector change between two initially captured images among the plurality of images.
  17. The mobile device according to claim 15, wherein the processor is further configured to:
    determine an edge region of an image overlap region of the first image and the second image; and
    correct the relative position of the first image and the second image according to the edge region.
  18. The mobile device according to claim 10, wherein the processor is further configured to:
    stitch the plurality of images according to relative positions among the plurality of images; and
    select an image within a maximized effective rectangular region as the panoramic image.
  19. A computer readable storage medium for storing a computer program, the computer program comprising instructions for executing one or more steps of any one of the methods according to claims 1-9.
  20. A computer product comprising one or more processors, the processors being configured to run computer instructions to execute one or more steps of any one of the methods according to claims 1-9.