CN111091498B - Image processing method, device, electronic equipment and medium - Google Patents


Info

Publication number: CN111091498B (application number CN201911423652.5A)
Authority: CN (China)
Prior art keywords: images, image, group, panoramic, frame
Legal status: Active (the legal status is an assumption and not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the statuses, assignees, or dates listed)
Other languages: Chinese (zh)
Other versions: CN111091498A
Inventor: 辛佳慧
Current and original assignee: Lenovo Beijing Ltd
Events: application filed by Lenovo Beijing Ltd; priority to CN201911423652.5A; publication of CN111091498A; application granted; publication of CN111091498B; status active; anticipated expiration

Classifications

    All classifications fall under G06T (G Physics › G06 Computing; calculating or counting › G06T Image data processing or generation, in general):
    • G06T3/4038: Geometric image transformation; scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/181: Image analysis; segmentation; edge detection involving edge growing; involving edge linking
    • G06T7/194: Image analysis; segmentation; edge detection involving foreground-background segmentation
    • G06T7/55: Image analysis; depth or shape recovery from multiple images
    • G06T2200/32: Indexing scheme for image data processing or generation, in general, involving image mosaicing
    • G06T2207/10004: Image acquisition modality; still image; photographic image
    • G06T2207/20221: Special algorithmic details; image combination; image fusion; image merging

Abstract

The present disclosure provides an image processing method. The method includes: obtaining a photographing instruction in a panoramic mode; in response to the photographing instruction, obtaining a first group of images based on a first position and a second group of images based on a second position, where the first position and the second position are spatially different and are different positions on a motion trajectory that satisfies a guiding condition; and processing the first group of images and the second group of images to generate and save a panoramic picture. When the panoramic picture is displayed in a dynamic manner, a first area and a second area of the panoramic picture simultaneously display dynamic output effects, where the first area displays a first dynamic output effect using the first group of images and the second area displays a second dynamic output effect using the second group of images. The present disclosure also provides an image processing apparatus, an electronic device, and a computer-readable storage medium.

Description

Image processing method, device, electronic equipment and medium
Technical Field
The present disclosure relates to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of electronic technology, people use more and more electronic devices. These devices have photographing functions, but the photographing functions of related-art electronic devices cannot meet users' varied photographing requirements. For example, a related-art electronic device may offer a panorama-based photographing function, yet the resulting panoramic picture is a still picture and therefore cannot satisfy a user's demand for a moving picture.
Disclosure of Invention
One aspect of the present disclosure provides an image processing method. The method includes: obtaining a photographing instruction in a panoramic mode; in response to the photographing instruction, obtaining a first group of images based on a first position and a second group of images based on a second position, where the first position and the second position are spatially different and are different positions on a motion trajectory that satisfies a guiding condition; and processing the first group of images and the second group of images to generate and save a panoramic picture. When the panoramic picture is displayed in a dynamic manner, a first area and a second area in the panoramic picture simultaneously display dynamic output effects, where the first area displays a first dynamic output effect using the first group of images and the second area displays a second dynamic output effect using the second group of images.
Optionally, processing the first group of images and the second group of images to generate and save a panoramic picture includes: determining a first stitched image from the first group of images and a second stitched image from the second group of images, where the edge region of the first stitched image and the edge region of the second stitched image satisfy a similarity condition; and fusing the first group of images and the second group of images into the panoramic picture based on the edge region of the first stitched image and the edge region of the second stitched image.
Optionally, fusing the first group of images and the second group of images into the panoramic picture based on the edge region of the first stitched image and the edge region of the second stitched image includes: stitching the first stitched image with the second stitched image based on their edge regions, thereby determining a reference for the first stitched image and a reference for the second stitched image; adjusting the frames of the first group other than the first stitched image based on the first stitched image, so that the reference of every frame in the first group is consistent; adjusting the frames of the second group other than the second stitched image based on the second stitched image, so that the reference of every frame in the second group is consistent; and storing the adjusted first group of images and the adjusted second group of images in association.
Optionally, each frame in the first group of images has depth information, and each frame in the second group of images has depth information. Processing the first group of images and the second group of images to generate and save a panoramic picture then includes: obtaining a foreground image and a background image of each frame of the first group based on that frame's depth information; obtaining a foreground image and a background image of each frame of the second group based on that frame's depth information; fusing the background images of the first group into a first stitched image and the background images of the second group into a second stitched image; generating the background of the panoramic picture based on the edge region of the first stitched image and the edge region of the second stitched image; and storing, in association, the background of the panoramic picture, a first foreground image group formed from the foreground images of the first group, and a second foreground image group formed from the foreground images of the second group.
Optionally, the method further comprises: acquiring an image at a third position, where the acquired image is used for image stitching in the panoramic mode and the third position is located between the first position and the second position; the first, second, and third positions are different positions on the motion trajectory that satisfies the guiding condition.
Optionally, the method further comprises: in the preview state of the panoramic mode, setting calibrated acquisition positions at different points along the motion trajectory of the guiding condition; and, in the photographing state responding to the photographing instruction in the panoramic mode, acquiring a group of images at each calibrated acquisition position along the motion trajectory as the device moves through space.
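The calibrated-acquisition idea above can be sketched as follows. This is a minimal illustration, not the patent's implementation: positions are reduced to 1-D coordinates along the guide trajectory, and the tolerance value is an assumption.

```python
# Sketch: in preview, positions are marked along the guide trajectory;
# during shooting, a group of frames is captured whenever the moving
# device comes close enough to one of the marked positions.

CAPTURE_TOLERANCE = 0.05  # assumed closeness threshold (illustrative units)

def positions_to_capture(marked, trajectory, tol=CAPTURE_TOLERANCE):
    """Return the marked positions that the moving device actually reaches."""
    captured = []
    for m in marked:
        # The device "reaches" a marked position if any sampled position
        # along its trajectory falls within the tolerance.
        if any(abs(p - m) <= tol for p in trajectory):
            captured.append(m)
    return captured

marked = [0.0, 0.5, 1.0]                     # calibrated positions set in preview
trajectory = [0.0, 0.24, 0.49, 0.77, 1.01]   # sampled device positions while shooting
hit = positions_to_capture(marked, trajectory)
```

In this toy run the device passes near all three calibrated positions, so a group of images would be acquired at each.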
Another aspect of the present disclosure provides an image processing apparatus. The apparatus includes a first obtaining module, a second obtaining module, and a processing module. The first obtaining module is configured to obtain a photographing instruction in a panoramic mode. The second obtaining module is configured to, in response to the photographing instruction, obtain a first group of images based on a first position and a second group of images based on a second position, where the first position and the second position are spatially different and are different positions on a motion trajectory that satisfies a guiding condition. The processing module is configured to process the first group of images and the second group of images to generate and save a panoramic picture; when the panoramic picture is displayed in a dynamic manner, a first area and a second area in the panoramic picture simultaneously display dynamic output effects, where the first area displays a first dynamic output effect using the first group of images and the second area displays a second dynamic output effect using the second group of images.
Optionally, processing the first group of images and the second group of images to generate and save a panoramic picture includes: determining a first stitched image from the first group of images and a second stitched image from the second group of images, where the edge region of the first stitched image and the edge region of the second stitched image satisfy a similarity condition; and fusing the first group of images and the second group of images into the panoramic picture based on the edge region of the first stitched image and the edge region of the second stitched image.
Optionally, fusing the first group of images and the second group of images into the panoramic picture based on the edge region of the first stitched image and the edge region of the second stitched image includes: stitching the first stitched image with the second stitched image based on their edge regions, thereby determining a reference for the first stitched image and a reference for the second stitched image; adjusting the frames of the first group other than the first stitched image based on the first stitched image, so that the reference of every frame in the first group is consistent; adjusting the frames of the second group other than the second stitched image based on the second stitched image, so that the reference of every frame in the second group is consistent; and storing the adjusted first group of images and the adjusted second group of images in association.
Optionally, each frame in the first group of images has depth information, and each frame in the second group of images has depth information. Processing the first group of images and the second group of images to generate and save a panoramic picture then includes: obtaining a foreground image and a background image of each frame of the first group based on that frame's depth information; obtaining a foreground image and a background image of each frame of the second group based on that frame's depth information; fusing the background images of the first group into a first stitched image and the background images of the second group into a second stitched image; generating the background of the panoramic picture based on the edge region of the first stitched image and the edge region of the second stitched image; and storing, in association, the background of the panoramic picture, a first foreground image group formed from the foreground images of the first group, and a second foreground image group formed from the foreground images of the second group.
Optionally, the apparatus further includes a third acquisition module configured to acquire an image at a third position, where the acquired image is used for image stitching in the panoramic mode and the third position is located between the first position and the second position; the first, second, and third positions are different positions on the motion trajectory that satisfies the guiding condition.
Optionally, the apparatus further includes a setting module configured to: in the preview state of the panoramic mode, set calibrated acquisition positions at different points along the motion trajectory of the guiding condition; and, in the photographing state responding to the photographing instruction in the panoramic mode, acquire a group of images at each calibrated acquisition position along the motion trajectory as the device moves through space.
Another aspect of the present disclosure provides an electronic device, including a camera and a processor. The processor is configured to: obtain a photographing instruction in a panoramic mode; in response to the photographing instruction, obtain a first group of images based on a first position and a second group of images based on a second position, where the first position and the second position are spatially different and are different positions on a motion trajectory that satisfies a guiding condition; and process the first group of images and the second group of images to generate and save a panoramic picture. When the panoramic picture is displayed in a dynamic manner, a first area and a second area in the panoramic picture simultaneously display dynamic output effects, where the first area displays a first dynamic output effect using the first group of images and the second area displays a second dynamic output effect using the second group of images.
Another aspect of the present disclosure provides a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed, implement the method described above.
Another aspect of the present disclosure provides a computer program comprising computer executable instructions which, when executed, are adapted to carry out the method as described above.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically illustrates a flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 2 schematically illustrates a schematic diagram of an image processing method according to a first embodiment of the present disclosure;
fig. 3 schematically illustrates a schematic diagram of an image processing method according to a second embodiment of the present disclosure;
fig. 4 schematically illustrates a schematic diagram of an image processing method according to a third embodiment of the present disclosure;
fig. 5 schematically illustrates a schematic diagram of an image processing method according to a fourth embodiment of the present disclosure;
fig. 6 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure; and
fig. 7 schematically illustrates a block diagram of a computer system for implementing image processing in accordance with an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where an expression like "at least one of A, B and C" is used, it should generally be interpreted according to its ordinary meaning as understood by those skilled in the art (e.g., "a system having at least one of A, B and C" includes, but is not limited to, systems having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together). The same interpretation applies to expressions like "at least one of A, B or C".
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable control apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart.
Thus, the techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). Additionally, the techniques of this disclosure may take the form of a computer program product on a computer-readable medium having instructions stored thereon, the computer program product being usable by or in connection with an instruction execution system. In the context of this disclosure, a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a computer-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the computer readable medium include: magnetic storage devices such as magnetic tape or hard disk (HDD); optical storage devices such as compact discs (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or a wired/wireless communication link.
Embodiments of the present disclosure provide an image processing method, including: obtaining a photographing instruction in a panoramic mode; in response to the photographing instruction, obtaining a first group of images based on a first position and a second group of images based on a second position, where the first position and the second position are spatially different and are different positions on a motion trajectory that satisfies a guiding condition; and processing the first group of images and the second group of images to generate and save a panoramic picture. When the panoramic picture is displayed in a dynamic manner, a first area and a second area in the panoramic picture simultaneously display dynamic output effects, where the first area displays a first dynamic output effect using the first group of images and the second area displays a second dynamic output effect using the second group of images.
Fig. 1 schematically shows a flowchart of an image processing method according to an embodiment of the present disclosure. Fig. 2 schematically illustrates a schematic diagram of an image processing method according to a first embodiment of the present disclosure.
As shown in fig. 1, the image processing method includes operations S110 to S130, for example.
Operations S110 to S130 are described below in conjunction with fig. 1 and 2.
In operation S110, a photographing instruction in a panorama mode is obtained.
The image processing method of the embodiments of the disclosure can be used in an electronic device such as a mobile phone or a computer. For ease of understanding, the present disclosure takes a mobile phone as the example electronic device.
In the embodiment of the disclosure, the electronic device has a photographing function, for example. Specifically, the electronic device has, for example, a photographing function based on a panoramic mode. The panoramic mode is, for example, a mode in which photographed multi-frame images are combined into one panoramic picture.
In the panoramic mode, after a photographing instruction in the panoramic mode is obtained, the electronic device can obtain an image based on the photographing instruction.
In operation S120, in response to a photographing instruction in the panorama mode, a first set of images 210 is obtained based on a first location and a second set of images 220 is obtained based on a second location, wherein the first location is different from the second location in spatial location. The first position and the second position are different positions on the movement track meeting the guiding condition.
According to the embodiment of the disclosure, in the panoramic mode the electronic device generates a guide that directs the user to move the device while shooting. As the device is moved along the guide, it captures images at different positions and traces out a motion trajectory; the different positions include at least the first position and the second position, both of which lie on that trajectory.
According to an embodiment of the present disclosure, the first set of images 210 and the second set of images 220 each include, for example, a plurality of frames of images, and the number of frames of the first set of images 210 may be the same as or different from the number of frames of the second set of images 220.
In operation S130, the first and second sets of images 210 and 220 are processed, and a panoramic picture 230 is generated and saved. Wherein, the panoramic image 230 is displayed in a dynamic manner, for example, and the first area 231 and the second area 232 in the panoramic image 230 simultaneously display dynamic output effects. The first region 231 shows a first dynamic output effect with the first set of images 210 and the second region 232 shows a second dynamic output effect with the second set of images 220.
According to an embodiment of the present disclosure, the generated panoramic picture 230 may be, for example, a moving picture. The panoramic image 230 includes, for example, a plurality of regions, each region for dynamically displaying, for example, a set of images. Specifically, for example, the first region 231 is used to dynamically display the first set of images 210 and the second region 232 is used to dynamically display the second set of images 220.
For example, the first group of images 210 includes, for example, three frames of images that are displayed in the first area 231 in a cyclic manner, for example, in a certain order, that is, the first area 231 displays one frame of image in the first group of images 210 at a time, thereby achieving the first dynamic output effect. Similarly, the second set of images 220 includes, for example, two frames of images that are displayed in the second region 232 in a cyclic manner, for example, in a certain order, i.e., the second region 232 displays one frame of images in the second set of images 220 at a time, thereby achieving a second dynamic output effect.
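The per-region cycling described above can be sketched as follows. This is a minimal illustration under assumptions of my own (region objects, string placeholders for frames); the patent does not specify the display mechanism.

```python
# Sketch: each region of the panoramic picture cycles through its own group
# of frames independently, wrapping around when the group is exhausted.

class DynamicRegion:
    """Cycles through its own frame list independently of other regions."""
    def __init__(self, frames):
        self.frames = frames
        self.index = 0

    def next_frame(self):
        frame = self.frames[self.index]
        self.index = (self.index + 1) % len(self.frames)  # wrap around
        return frame

region_a = DynamicRegion(["a1", "a2", "a3"])  # three frames, like the first area
region_b = DynamicRegion(["b1", "b2"])        # two frames, like the second area

# Each display tick advances both regions simultaneously, so the two areas
# show dynamic output effects at the same time despite different frame counts.
ticks = [(region_a.next_frame(), region_b.next_frame()) for _ in range(6)]
```

After six ticks the three-frame region has completed two cycles and the two-frame region three cycles, mirroring the example of areas 231 and 232.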
It can be appreciated that the technical solution of the embodiments of the present disclosure obtains multiple sets of images in a panoramic mode, and processes the multiple sets of images to generate a panoramic view, where different regions in the panoramic view may exhibit different dynamic effects. According to the technical scheme of the embodiment of the disclosure, different requirements of a user under a shooting scene are met, and a panoramic picture generating function is provided for the user. And the panoramic picture can show different dynamic effects in different areas, so that the panoramic picture is more vivid.
Fig. 3 schematically illustrates a schematic diagram of an image processing method according to a second embodiment of the present disclosure.
Referring to fig. 3, the operation S130 may include, for example, the following steps (1) to (2).
(1) A first stitched image is determined from the first set of images and a second stitched image is determined from the second set of images. The edge area of the first spliced image and the edge area of the second spliced image meet the similarity condition.
According to an embodiment of the present disclosure, the first set of images includes, for example, image frames 311, 312, and 313, and the second set of images includes, for example, image frames 321 and 322. The first stitched image and the second stitched image may be determined from the edge regions of the image frames in the two groups: for example, the first stitched image is image frame 311 and the second stitched image is image frame 321, because the edge region of image frame 311 is highly similar to the edge region of image frame 321.
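The selection of the stitch pair by edge-region similarity can be sketched as follows. This is a hedged toy version: edges are 1-D intensity columns and the similarity condition is a minimum sum of absolute differences, whereas a real system would compare 2-D pixel strips (e.g. with normalized cross-correlation).

```python
# Sketch: pick, across the two groups, the pair of frames whose touching
# edge regions are most similar; those become the stitched images.

def edge_difference(edge_a, edge_b):
    """Sum of absolute differences between two edge columns (lower = more similar)."""
    return sum(abs(a - b) for a, b in zip(edge_a, edge_b))

def pick_stitch_pair(group1_right_edges, group2_left_edges):
    """Return indices (i, j) of the most similar edge pair across the groups."""
    best = None
    for i, ea in enumerate(group1_right_edges):
        for j, eb in enumerate(group2_left_edges):
            d = edge_difference(ea, eb)
            if best is None or d < best[0]:
                best = (d, i, j)
    return best[1], best[2]

# Frame 0 of group 1 and frame 0 of group 2 share a near-identical edge,
# like image frames 311 and 321 in the example above.
g1 = [[10, 12, 14], [50, 52, 54], [90, 92, 94]]
g2 = [[11, 12, 13], [70, 71, 72]]
pair = pick_stitch_pair(g1, g2)
```

Here `pair` selects the first frame of each group, the analogue of choosing frames 311 and 321.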
(2) And fusing the first group of images and the second group of images into a panoramic picture based on the edge region of the first stitched image and the edge region of the second stitched image. The specific procedure is described below.
First, the first stitched image and the second stitched image are stitched based on the edge regions of the first stitched image and the second stitched image to determine a reference of the first stitched image and a reference of the second stitched image.
For example, the stitching result obtained by stitching the first stitched image and the second stitched image is the image 330. The image 330 includes, for example, an adjusted first stitched image and an adjusted second stitched image. The adjusted first stitched image, for example, determines a reference for the first set of images, and the adjusted second stitched image, for example, determines a reference for the second set of images.
Next, a multi-frame image of the first group of images, excluding the first stitched image, is adjusted based on the first stitched image so that each frame of image in the first group of images is consistent in reference.
Then, the multi-frame images of the second group of images, except for the second stitched image, are adjusted based on the second stitched image so that the references of each frame of images in the second group of images are consistent.
For example, the references of the image frames 312 and 313 in the first group of images are adjusted to coincide with the references of the adjusted first stitched image. And adjusting the reference of the image frame 322 in the second set of images to be consistent with the adjusted reference of the second stitched image.
Finally, the adjusted first set of images and the adjusted second set of images are stored in association to obtain the panoramic picture 340. After the references are adjusted, the overlap of identical content within the panoramic picture 340 improves: the identical content across the frames of the first set dynamically displayed in panoramic picture 340 overlaps closely, as does the identical content across the frames of the second set. Thus, when the panoramic picture 340 is displayed dynamically, the moving parts the user observes correspond to parts that actually move in the real scene, while the static parts of the scene remain nearly static in the picture. This effect is achieved by adjusting the reference of each group of images so that identical content in each frame is displayed in superposition in the panoramic picture 340.
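The reference adjustment above can be sketched as follows. This is an illustrative assumption on my part: the "reference" is reduced to a 2-D translation fixed by the stitching step, while the patent's reference could also cover rotation or other alignment parameters.

```python
# Sketch: once stitching fixes the placement (offset) of each group's
# stitched frame, every other frame in that group is shifted by the same
# offset, so static content overlaps across the cycled frames.

def align_group(frames, offset):
    """Shift every frame's coordinates by the group's stitching offset."""
    dx, dy = offset
    return [[(x + dx, y + dy) for (x, y) in frame] for frame in frames]

# Two toy frames given as lists of feature-point coordinates.
group1 = [[(0, 0), (1, 1)], [(0, 1), (1, 2)]]

# Suppose stitching placed group 1's stitched frame 100 pixels to the right;
# the whole group inherits that reference.
aligned = align_group(group1, offset=(100, 0))
```

After alignment, a static point such as `(0, 0)` appears at the same panorama coordinate `(100, 0)` in every frame of the group, which is what makes the static background appear still during dynamic display.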
Fig. 4 schematically illustrates a schematic diagram of an image processing method according to a third embodiment of the present disclosure.
Referring to fig. 4, the operation S130 may, for example, further include the following steps (1)-(5).
(1) Based on the depth information of each frame of the first set of images, a foreground image and a background image of each frame of the first set of images are obtained.
(2) Based on the depth information of each frame of the second set of images, a foreground image and a background image of each frame of the second set of images are obtained.
According to an embodiment of the present disclosure, each frame in the first set of images has depth information, as does each frame in the second set of images. The depth information may be used, for example, to distinguish the foreground image from the background image of each frame. Accordingly, the foreground image and the background image of each frame can be obtained based on that frame's depth information.
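A minimal sketch of depth-based separation, under the simplifying assumption (not stated in the patent) that the foreground is everything nearer than a fixed depth threshold:

```python
# Hedged sketch: per-pixel depth split at a threshold. The patent only
# says depth "may be used to distinguish" foreground from background;
# the threshold rule here is an illustrative assumption.

def split_by_depth(image, depth, threshold):
    """Split one frame into (foreground, background); pixels on the far
    side of the threshold are masked out with None."""
    fg = [[px if d < threshold else None for px, d in zip(ir, dr)]
          for ir, dr in zip(image, depth)]
    bg = [[px if d >= threshold else None for px, d in zip(ir, dr)]
          for ir, dr in zip(image, depth)]
    return fg, bg

image = [[10, 20], [30, 40]]
depth = [[1.0, 5.0], [0.5, 9.0]]   # metres; near pixels are foreground
fg, bg = split_by_depth(image, depth, threshold=2.0)
print(fg)  # [[10, None], [30, None]]
print(bg)  # [[None, 20], [None, 40]]
```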
(3) The background image of each frame of the first set of images is fused into a first stitched image 431 and the background image of each frame of the second set of images is fused into a second stitched image 432.
For example, as shown in fig. 4, the background image of each frame image of the first group of images includes, for example, background images 411, 412, 413. For example, the background images 411, 412, 413 are fused into a first stitched image 431. The fusion process may be, for example, based on similar information of the background images 411, 412, 413, such that similar information in the background images 411, 412, 413 is superimposed in the first stitched image 431 as much as possible. Similarly, the background image of each frame image of the second set of images includes, for example, the background images 421, 422. For example, the background images 421, 422 are fused into a second stitched image 432.
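The fusion step above can be sketched as follows. This is an illustrative assumption, not the patent's method: the frames are taken as already aligned, overlapping (similar) information is combined by averaging, and the `None` mask from the foreground/background split marks pixels a frame does not contribute.

```python
def fuse_backgrounds(backgrounds):
    """Fuse aligned background frames into one stitched image by
    averaging wherever at least one frame has a valid (non-None) pixel."""
    h, w = len(backgrounds[0]), len(backgrounds[0][0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [bg[y][x] for bg in backgrounds if bg[y][x] is not None]
            row.append(sum(vals) / len(vals) if vals else None)
        out.append(row)
    return out

# e.g. fusing background frames such as 411/412 into a stitched image 431
b1 = [[10, None], [30, 40]]
b2 = [[20, 50], [None, 40]]
print(fuse_backgrounds([b1, b2]))  # [[15.0, 50.0], [30.0, 40.0]]
```

Because the foreground has already been removed, a moving subject cannot leave ghosting artifacts in the averaged result, which is the advantage the disclosure claims for separating foreground before fusing.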
(4) A background of the panoramic picture is generated based on the edge region of the first stitched image 431 and the edge region of the second stitched image 432.
For example, the similar information in the edge area of the first stitched image 431 and in the edge area of the second stitched image 432 is displayed in a superimposed manner, so as to obtain the background of the panoramic picture generated from the first stitched image 431 and the second stitched image 432; this background is, for example, the picture 440.
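A toy sketch of superimposing the edge regions, under the assumption (purely illustrative; the patent does not fix the geometry) that the two stitched images are side by side, equally tall, and share a known overlap width that is blended by averaging:

```python
def stitch_by_edge(left, right, overlap):
    """Stitch two equally tall images side by side, blending the last
    `overlap` columns of `left` with the first `overlap` columns of
    `right` by simple averaging (overlap must be >= 1)."""
    out = []
    for lr, rr in zip(left, right):
        blended = [(a + b) / 2 for a, b in zip(lr[-overlap:], rr[:overlap])]
        out.append(lr[:-overlap] + blended + rr[overlap:])
    return out

left  = [[1, 2, 3]]   # e.g. stitched background 431
right = [[3, 8, 9]]   # e.g. stitched background 432, 1-column overlap
print(stitch_by_edge(left, right, overlap=1))  # [[1, 2, 3.0, 8, 9]]
```

A production stitcher would first estimate the overlap from feature matches in the edge regions rather than take it as given.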
(5) The background of the panoramic picture, a first foreground image group formed by the foreground images of each frame of the first group of images, and a second foreground image group formed by the foreground images of each frame of the second group of images are stored in association to obtain the panoramic picture 450.
The panoramic picture 450 includes, for example, the first foreground image group, which is dynamic in the real environment and is therefore presented dynamically in the panoramic picture 450. Similarly, the panoramic picture 450 includes, for example, the second foreground image group, which is likewise dynamic in the real environment and presented dynamically in the panoramic picture 450.
It will be appreciated that embodiments of the present disclosure form a dynamically displayed panoramic picture by separating the foreground and background of each group of images, synthesizing the backgrounds of the groups into the background of the panoramic picture, and then processing the foregrounds of the groups together with that synthesized background. In a real environment the foreground is dynamic and the background is static, so synthesizing the panoramic background from the separated backgrounds keeps it free of dynamic information (foreground); the synthesized background is therefore more accurate and the result is better.
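The final compositing step can be sketched as follows. This is a hypothetical rendering loop, not taken from the patent: each displayed frame pastes both foreground groups over the one shared static background, with each group cycling through its own frames independently (`None` marks transparent foreground pixels).

```python
def render_panorama_frames(background, fg_group1, fg_group2):
    """For each time step, composite both foreground groups over the one
    shared static background; the groups cycle independently."""
    steps = max(len(fg_group1), len(fg_group2))
    frames = []
    for t in range(steps):
        frame = [row[:] for row in background]           # copy static bg
        for fg in (fg_group1[t % len(fg_group1)],
                   fg_group2[t % len(fg_group2)]):
            for y, row in enumerate(fg):
                for x, px in enumerate(row):
                    if px is not None:                   # paste foreground
                        frame[y][x] = px
        frames.append(frame)
    return frames

bg = [[0, 0, 0]]
g1 = [[[9, None, None]], [[None, 9, None]]]   # moving left-side subject
g2 = [[[None, None, 7]]]                      # static right-side subject
print(render_panorama_frames(bg, g1, g2))
# [[[9, 0, 7]], [[0, 9, 7]]]
```

Only the foreground pixels change between frames, so the background stays perfectly static during dynamic display, matching the effect described above.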
Fig. 5 schematically shows a schematic diagram of an image processing method according to a fourth embodiment of the present disclosure.
As shown in fig. 5, embodiments of the present disclosure may, for example, obtain an acquired image 530 at a third position. The acquired image is used for image stitching in the panoramic mode. The third position is located, for example, between the first position and the second position, and the first position, the second position, and the third position are different positions on the motion track that satisfies the guiding condition.
In the disclosed embodiment, the acquired image 530 may be, for example, a single frame, i.e., a still image. When multiple groups of images need to be acquired at multiple positions, one frame of acquired image 530 may be captured between every two adjacent positions and used to stitch the two adjacent groups of images; thus the third position may comprise multiple positions, each yielding one frame of acquired image 530. For example, if the two adjacent groups are the first group of images 510 and the second group of images 520, the acquired image 530 may be stitched with the first group of images 510 and the second group of images 520 to obtain the panoramic picture 540.
In one case, for example, a frame of image may be determined from the first set of images 510 as a first stitched image and a frame of image may be determined from the second set of images 520 as a second stitched image. And then stitching the first stitched image, the second stitched image and the acquired image 530 according to the edge regions of the images to obtain a stitching result. In the stitching result, for example, a reference of the first stitched image and a reference of the second stitched image are determined. Then, the references of the image frames in the first group of images 510 other than the first stitched image are adjusted to be consistent with the references of the first stitched image in the stitching result, and the references of the image frames in the second group of images 520 other than the second stitched image are adjusted to be consistent with the references of the second stitched image in the stitching result. Finally, the image frames after the reference adjustment and the stitching result may be combined into a final panoramic image 540.
In another case, the background and the foreground of each frame of image in the first set of images 510 may be separated, and the background and the foreground of each frame of image in the second set of images 520 may be separated, and all the background and the acquired image 530 may be stitched to obtain the background of the panoramic image. The foreground of the first set of images 510, the foreground of the second set of images 520, and the background of the panoramic picture are then combined to obtain the final panoramic picture 540.
The embodiment of the disclosure captures an acquired image at a position between two adjacent image groups and stitches it with those groups. Because the acquired image is a static frame, stitching the adjacent image groups against it improves the stitching result: the static image contributes additional static information that serves as reference information during stitching, so the stitching process has more static content to align against and the quality of the stitch is better ensured.
According to the embodiment of the disclosure, in the preview state of the panoramic mode, calibrated acquisition positions are set at different points of the motion track that satisfies the guiding condition, so that in the photographing state, in response to the photographing instruction in the panoramic mode and to changes in spatial motion, a group of images is acquired at each calibrated acquisition position of the motion track.
For example, the electronic device may calibrate acquisition at different positions of the motion track. When the electronic device moves to a calibrated position, it may automatically trigger acquisition of a group of images at that position. For example, acquisition may be calibrated at both the first position and the second position of the motion track: when the electronic device moves along the track to the first position, acquisition of the first group of images is triggered automatically, and when it moves to the second position, acquisition of the second group of images is triggered automatically.
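The automatic-trigger logic above can be sketched as a proximity check against the calibrated positions. The tolerance value and 2D coordinates are illustrative assumptions; a device would use its actual pose estimate.

```python
import math

def reached_calibrated_position(position, calibrated_positions, tolerance=0.05):
    """Return the calibrated position that `position` has reached, or
    None. A group of images would be captured when this returns a
    position (and that position would then be marked as consumed)."""
    for cal in calibrated_positions:
        if math.dist(position, cal) <= tolerance:
            return cal
    return None

calibrated = [(0.0, 0.0), (1.0, 0.0)]   # e.g. the first and second position
print(reached_calibrated_position((0.01, 0.0), calibrated))  # (0.0, 0.0)
print(reached_calibrated_position((0.5, 0.0), calibrated))   # None
```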
It can be appreciated that, in the embodiment of the disclosure, position calibration lets the electronic device automatically trigger acquisition of each group of images at the calibrated positions, so the user does not need to capture each group manually. This improves the flexibility of image acquisition, reduces its complexity, and improves the user experience.
Alternatively, each group of images may be acquired at the user's request. For example, when the electronic device moves along the motion track to the first position, the user may touch or press a photographing key, and the electronic device acquires the first group of images at the first position in response. Similarly, the electronic device may acquire the second group of images in response to a touch or press at the second position. Acquiring each group of images on demand ensures that every acquired group matches the user's intention and improves user satisfaction.
In one application scenario, the target user is, for example, the photographed subject. First, the panoramic mode of the electronic device is started and a first group of images of the target user reading a book in a study is obtained; the first group includes, for example, multiple frames of the target user reading. Then, with the panoramic mode maintained, the target user walks to a tea table to drink tea, the electronic device moves accordingly, and a second group of images is acquired, including, for example, multiple frames of the target user drinking tea. The first group and the second group are then combined into a panoramic picture. The panoramic picture contains two dynamics of the target user, one of reading and one of drinking tea, displayed, for example, in different areas of the panoramic picture, so that both can be viewed simultaneously. It is to be understood that embodiments of the present disclosure are not limited to two groups of images; when multiple groups are acquired, the generated panoramic picture includes multiple dynamics of the target user, such as reading, drinking tea, and watering flowers.
The present disclosure also provides an electronic device, including: a camera; and a processor for performing: obtaining a photographing instruction in a panoramic mode, and responding to the photographing instruction in the panoramic mode, obtaining a first group of images based on a first position and obtaining a second group of images based on a second position, wherein the spatial positions of the first position and the second position are different; the first position and the second position are different positions on the motion trail meeting the guiding condition, and the first group of images and the second group of images are processed to generate and store panoramic pictures; if the panoramic picture is displayed in a dynamic mode, a first area and a second area in the panoramic picture simultaneously display dynamic output effects, wherein the first area displays the first dynamic output effects through a first group of images, and the second area displays the second dynamic output effects through a second group of images. The processor is for example used to perform the methods described in fig. 1-5.
Fig. 6 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the image processing apparatus 600 includes, for example, a first obtaining module 610, a second obtaining module 620, and a processing module 630.
The first obtaining module 610 may be configured to obtain a photographing instruction in a panoramic mode. According to an embodiment of the present disclosure, the first obtaining module 610 may perform, for example, operation S110 described above with reference to fig. 1, which is not described herein.
The second obtaining module 620 may be configured to obtain, in response to a photographing instruction in the panoramic mode, a first set of images based on a first location and a second set of images based on a second location, where the first location is different from the second location in space; the first position and the second position are different positions on the movement track meeting the guiding condition. The second obtaining module 620 may, for example, perform operation S120 described above with reference to fig. 1 according to an embodiment of the present disclosure, which is not described herein.
The processing module 630 may be configured to process the first set of images and the second set of images to generate and store a panoramic picture; if the panoramic picture is displayed in a dynamic mode, a first area and a second area in the panoramic picture simultaneously display dynamic output effects, wherein the first area displays the first dynamic output effects through a first group of images, and the second area displays the second dynamic output effects through a second group of images. According to an embodiment of the present disclosure, the processing module 630 may perform, for example, operation S130 described above with reference to fig. 1, which is not described herein.
According to an embodiment of the present disclosure, processing a first set of images and a second set of images, generating and saving a panoramic picture includes: determining a first stitched image from the first set of images and a second stitched image from the second set of images; the edge area of the first spliced image and the edge area of the second spliced image meet the similarity condition, and the first group of images and the second group of images are fused into a panoramic picture based on the edge area of the first spliced image and the edge area of the second spliced image.
According to an embodiment of the present disclosure, fusing the first set of images and the second set of images into a panoramic picture based on the edge region of the first stitched image and the edge region of the second stitched image includes: and based on the edge area of the first spliced image and the edge area of the second spliced image, splicing the first spliced image and the second spliced image to determine the reference of the first spliced image and the reference of the second spliced image, adjusting the multi-frame images except the first spliced image in the first group of images based on the first spliced image so that the reference of each frame of image in the first group of images is consistent, adjusting the multi-frame images except the second spliced image in the second group of images based on the second spliced image so that the reference of each frame of image in the second group of images is consistent, and storing the adjusted first group of images and the adjusted second group of images in a correlated mode.
According to an embodiment of the disclosure, each frame image in the first set of images has depth information; each frame of image in the second set of images has depth information. Processing the first set of images and the second set of images to generate and save a panoramic picture includes: obtaining a foreground image and a background image of each frame of image of the first group of images based on the depth information of each frame of image of the first group of images, obtaining a foreground image and a background image of each frame of image of the second group of images based on the depth information of each frame of image of the second group of images, fusing the background image of each frame of image of the first group of images into a first stitched image and the background image of each frame of image of the second group of images into a second stitched image, generating a background of the panoramic picture based on an edge area of the first stitched image and an edge area of the second stitched image, and associating and storing the background of the panoramic picture, a first foreground image group formed by the foreground images of each frame of image of the first group of images, and a second foreground image group formed by the foreground images of each frame of image of the second group of images.
According to an embodiment of the present disclosure, the apparatus 600 further comprises: the third acquisition module is used for acquiring an acquired image at a third position, wherein the acquired image is used for image stitching in a panoramic mode, and the third position is located between the first position and the second position; the first position, the second position and the third position are different positions on the movement track meeting the guiding condition.
According to an embodiment of the present disclosure, the apparatus 600 further comprises: the setting module is used for setting calibration motion acquisition at different positions based on the motion trail of the guiding condition in the preview state of the panoramic mode; and the image acquisition device is used for acquiring a group of images at the calibrated motion acquisition position of the motion trail under the guiding condition in the image acquisition process under the space motion change in response to the photographing state of the photographing instruction in the panoramic mode.
Any number of modules, sub-modules, units, sub-units, or at least some of the functionality of any number of the sub-units according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented as split into multiple modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or in any other reasonable manner of hardware or firmware that integrates or encapsulates the circuit, or in any one of or a suitable combination of three of software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules, which when executed, may perform the corresponding functions.
For example, any of the first obtaining module 610, the second obtaining module 620, and the processing module 630 may be combined and implemented in one module, or any of the modules may be split into a plurality of modules. Alternatively, at least some of the functionality of one or more of the modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the first obtaining module 610, the second obtaining module 620, and the processing module 630 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable way of integrating or packaging the circuits, or in any one of or a suitable combination of three of software, hardware, and firmware. Alternatively, at least one of the first obtaining module 610, the second obtaining module 620, and the processing module 630 may be at least partially implemented as a computer program module, which when executed, may perform the corresponding functions.
Fig. 7 schematically illustrates a block diagram of a computer system for implementing image processing in accordance with an embodiment of the present disclosure. The computer system illustrated in fig. 7 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 7, a computer system 700 implementing image processing includes a processor 701, a computer readable storage medium 702. The system 700 may perform a method according to an embodiment of the present disclosure.
In particular, the processor 701 may comprise, for example, a general purpose microprocessor, an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 701 may also include on-board memory for caching purposes. The processor 701 may be a single processing unit or a plurality of processing units for performing different actions of the method flow according to embodiments of the present disclosure.
The computer-readable storage medium 702 may be, for example, any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices such as magnetic tape or hard disk (HDD); optical storage devices such as compact discs (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or a wired/wireless communication link.
The computer-readable storage medium 702 may comprise a computer program 703, which computer program 703 may comprise code/computer-executable instructions, which when executed by the processor 701, cause the processor 701 to perform a method according to an embodiment of the present disclosure or any variant thereof.
The computer program 703 may be configured with computer program code comprising, for example, computer program modules. For example, in an example embodiment, code in the computer program 703 may include one or more program modules, including, for example, modules 703A, 703B, ……. It should be noted that the division and number of modules is not fixed; a person skilled in the art may use suitable program modules or combinations of program modules according to the actual situation. When these program modules are executed by the processor 701, they enable the processor 701 to perform the method according to an embodiment of the disclosure or any variation thereof.
According to embodiments of the present disclosure, any of the above-described modules, sub-modules, units, at least part of the functionality of any of the sub-units may be implemented as computer program modules described with reference to fig. 7, which, when executed by the processor 701, may implement the respective operations described above.
The present disclosure also provides a computer-readable medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer readable medium carries one or more programs which, when executed, implement the above image processing method.
According to embodiments of the present disclosure, the computer readable medium may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, fiber optic cable, radio frequency signals, or the like, or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be combined in various combinations and/or combinations, even if such combinations or combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be variously combined and/or combined without departing from the spirit and teachings of the present disclosure. All such combinations and/or combinations fall within the scope of the present disclosure.
While the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. The scope of the disclosure should, therefore, not be limited to the above-described embodiments, but should be determined not only by the following claims, but also by the equivalents of the following claims.

Claims (9)

1. An image processing method, the method comprising:
obtaining a photographing instruction in a panoramic mode;
responding to the photographing instruction in the panoramic mode, obtaining a first group of images based on a first position and obtaining a second group of images based on a second position, wherein the first position is different from the second position in space; the first position and the second position are different positions on a motion trail meeting a guiding condition; and
processing the first set of images and the second set of images, generating and saving a panoramic picture, determining a first stitched image from the first set of images and determining a second stitched image from the second set of images; the edge area of the first spliced image and the edge area of the second spliced image meet a similarity condition; and
fusing the first set of images and the second set of images into the panoramic picture based on edge regions of the first stitched image and edge regions of the second stitched image; and if the panoramic picture is displayed in a dynamic mode, a first area and a second area in the panoramic picture simultaneously display dynamic output effects, wherein the first area displays first dynamic output effects by the first group of images, and the second area displays second dynamic output effects by the second group of images.
2. The method of claim 1, wherein the fusing the first set of images and the second set of images into the panoramic picture based on edge regions of the first stitched image and edge regions of the second stitched image comprises:
stitching the first stitched image with the second stitched image based on the edge region of the first stitched image and the edge region of the second stitched image to determine a reference of the first stitched image and a reference of the second stitched image;
adjusting a multi-frame image of the first group of images except for the first spliced image based on the first spliced image so that the reference of each frame of image in the first group of images is consistent;
adjusting a plurality of frames of images of the second group of images except the second spliced image based on the second spliced image so that the reference of each frame of image in the second group of images is consistent; and
the adjusted first set of images and the adjusted second set of images are stored in association.
3. The method of claim 1, wherein each frame image in the first set of images has depth information; each frame of image in the second set of images has depth information;
the processing the first set of images and the second set of images to generate and save a panoramic picture includes:
obtaining a foreground image and a background image of each frame image of the first group of images based on the depth information of each frame image of the first group of images;
obtaining a foreground image and a background image of each frame image of the second group of images based on the depth information of each frame image of the second group of images;
fusing the background image of each frame of image of the first group of images into a first spliced image and fusing the background image of each frame of image of the second group of images into a second spliced image;
generating a background of the panoramic picture based on the edge region of the first stitched image and the edge region of the second stitched image; and
And storing a background of the panoramic picture, a first foreground image group formed by foreground images of each frame of image of the first group of images and a second foreground image group formed by foreground images of each frame of image of the second group of images in an associated mode.
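The depth-based foreground/background separation in claim 3 can be illustrated with a minimal sketch. The fixed depth threshold and the zero-filled holes are assumptions; the claim only requires that a foreground image and a background image be obtained from each frame's per-pixel depth information:

```python
import numpy as np

def split_by_depth(frame, depth, near_limit=2.0):
    """Split a frame into foreground and background layers using a
    per-pixel depth map: pixels nearer than near_limit go to the
    foreground layer, the rest to the background layer. Holes in each
    layer are zero-filled; frame is (H, W, C), depth is (H, W)."""
    fg_mask = depth < near_limit
    foreground = np.where(fg_mask[..., None], frame, 0)
    background = np.where(fg_mask[..., None], 0, frame)
    return foreground, background
```

The background layers would then be stitched into the panorama background, while the foreground layers are stored as the first and second foreground image groups for the dynamic display.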
4. The method according to claim 1 or 3, further comprising:
acquiring a captured image at a third position, wherein the captured image is used for image stitching in the panoramic mode; and
wherein the third position is located between the first position and the second position, and the first position, the second position and the third position are different positions on the motion track meeting the guiding condition.
5. The method of claim 4, further comprising:
in a preview state of the panoramic mode, setting calibration acquisition positions at different points along the motion track meeting the guiding condition; and
in a photographing state entered in response to the photographing instruction in the panoramic mode, acquiring a group of images at each calibration acquisition position along the motion track as the spatial position changes during image acquisition.
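The calibration positions of claim 5 can be sketched by placing acquisition points along the guiding track. A straight track with evenly spaced points is an assumption for illustration; the claim does not constrain the track shape or spacing:

```python
def calibration_positions(track_start, track_end, count):
    """Evenly place `count` calibration acquisition positions along a
    straight guiding track between two spatial points (linear
    interpolation between track_start and track_end)."""
    positions = []
    for k in range(count):
        t = k / (count - 1) if count > 1 else 0.0
        positions.append(tuple(s + t * (e - s)
                               for s, e in zip(track_start, track_end)))
    return positions
```

During capture, a group of images would be acquired each time the device passes one of these positions.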
6. An image processing apparatus, the apparatus comprising:
a first obtaining module configured to obtain a photographing instruction in a panoramic mode;
a second obtaining module configured to, in response to the photographing instruction in the panoramic mode, obtain a first group of images based on a first position and a second group of images based on a second position, wherein the first position and the second position are different spatial positions and are different positions on a motion track meeting a guiding condition; and
a processing module configured to process the first group of images and the second group of images to generate and store a panoramic picture, by determining a first stitched image from the first group of images and a second stitched image from the second group of images, wherein the edge region of the first stitched image and the edge region of the second stitched image meet a similarity condition; and
fusing the first group of images and the second group of images into the panoramic picture based on the edge region of the first stitched image and the edge region of the second stitched image; wherein, when the panoramic picture is displayed in a dynamic mode, a first area and a second area in the panoramic picture simultaneously display dynamic output effects, the first area displaying a first dynamic output effect from the first group of images and the second area displaying a second dynamic output effect from the second group of images.
7. The apparatus of claim 6, wherein processing the first group of images and the second group of images to generate and store the panoramic picture comprises:
determining a first stitched image from the first group of images and a second stitched image from the second group of images, wherein the edge region of the first stitched image and the edge region of the second stitched image meet a similarity condition; and
fusing the first group of images and the second group of images into the panoramic picture based on the edge region of the first stitched image and the edge region of the second stitched image.
8. An electronic device, comprising:
a camera; and
a processor configured to perform:
obtaining a photographing instruction in a panoramic mode;
in response to the photographing instruction in the panoramic mode, obtaining a first group of images based on a first position and a second group of images based on a second position, wherein the first position and the second position are different spatial positions and are different positions on a motion track meeting a guiding condition; and
processing the first group of images and the second group of images to generate and store a panoramic picture; wherein, when the panoramic picture is displayed in a dynamic mode, a first area and a second area in the panoramic picture simultaneously display dynamic output effects, the first area displaying a first dynamic output effect from the first group of images and the second area displaying a second dynamic output effect from the second group of images.
9. A computer-readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method according to any one of claims 1 to 5.
CN201911423652.5A 2019-12-31 2019-12-31 Image processing method, device, electronic equipment and medium Active CN111091498B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911423652.5A CN111091498B (en) 2019-12-31 2019-12-31 Image processing method, device, electronic equipment and medium


Publications (2)

Publication Number Publication Date
CN111091498A 2020-05-01
CN111091498B 2023-06-23

Family

ID=70398737


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114780004A (en) * 2022-04-11 2022-07-22 北京达佳互联信息技术有限公司 Image display method and device, electronic equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN108933899A (en) * 2018-08-22 2018-12-04 Oppo广东移动通信有限公司 Panorama shooting method, device, terminal and computer readable storage medium
WO2019071613A1 (en) * 2017-10-13 2019-04-18 华为技术有限公司 Image processing method and device

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
EP2408193A3 (en) * 2004-04-16 2014-01-15 James A. Aman Visible and non-visible light sensing camera for videoing and object tracking
CN101072332A (en) * 2007-06-04 2007-11-14 深圳市融合视讯科技有限公司 Automatic mobile target tracking and shooting method
KR20170025058A (en) * 2015-08-27 2017-03-08 삼성전자주식회사 Image processing apparatus and electronic system including the same
CN105827946B (en) * 2015-11-26 2019-02-22 东莞市步步高通信软件有限公司 A kind of generation of panoramic picture and playback method and mobile terminal
KR102423175B1 (en) * 2017-08-18 2022-07-21 삼성전자주식회사 An apparatus for editing images using depth map and a method thereof
CN109982036A (en) * 2019-02-20 2019-07-05 华为技术有限公司 A kind of method, terminal and the storage medium of panoramic video data processing




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant