CN108391050A - Image processing method and mobile terminal - Google Patents

Image processing method and mobile terminal

Info

Publication number
CN108391050A
Authority
CN
China
Prior art keywords
image
target object
initial image
initial
target
Prior art date
Legal status
Granted
Application number
CN201810145669.8A
Other languages
Chinese (zh)
Other versions
CN108391050B (en)
Inventor
Sheng Hongwei (盛宏伟)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201810145669.8A (granted as CN108391050B)
Publication of CN108391050A
Application granted
Publication of CN108391050B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method and a mobile terminal. The method includes: acquiring a first initial image and a second initial image that are continuously shot at a first focal length while a camera of the mobile terminal moves in a first direction; when the offset, in a second direction, of a first position of a target object in the first initial image relative to a second position of the target object in the second initial image exceeds a predetermined threshold, acquiring a reference image containing the target object shot by the camera at a second focal length; and replacing, according to the target object in the reference image, the target object in an intermediate image obtained by synthesizing the first initial image and the second initial image, to obtain a target image; wherein the angle between the first direction and the second direction is within a predetermined angle range, and the first focal length is greater than the second focal length. The invention thereby avoids distortion of the panoramic image finally synthesized from at least two frames of initial images in the panoramic photographing mode.

Description

Image processing method and mobile terminal
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an image processing method and a mobile terminal.
Background
With the development of mobile terminal technology, a camera has become one of the basic components of a mobile terminal. On top of this camera hardware, the camera of a mobile terminal offers multiple functions such as beautifying, video recording, blurring, slow motion, and panoramic photography. In typical panoramic photography, the camera is rotated horizontally around a point, or translated, to shoot a plurality of local images, and the captured images are then stitched in sequence, with the edges of the scenery adjacent, into a panoramic photo.
This photographing mode requires the mobile terminal to stay at the same horizontal height as far as possible while it moves. In actual shooting, however, the user often moves the mobile terminal by hand to perform panoramic shooting, and if the user's hand shakes while the mobile terminal is moving, it is difficult to keep the terminal at a constant horizontal height. As a result, the finally synthesized panoramic photo is discontinuous, seams appear, and the synthesized image is distorted.
Disclosure of Invention
The invention provides an image processing method and a mobile terminal, and aims to solve the prior-art problems that a panoramic photo obtained by panoramic shooting is discontinuous, shows seams, and may even be distorted.
In a first aspect, an embodiment of the present invention provides an image processing method, which is applied to a mobile terminal, where the method includes:
the method comprises the steps of obtaining a first initial image and a second initial image which are obtained by moving a camera of the mobile terminal along a first direction and continuously shooting according to a first focal length;
when the offset, in a second direction, of a first position of a target object in the first initial image relative to a second position of the target object in the second initial image exceeds a predetermined threshold, acquiring a reference image containing the target object shot by the camera at a second focal length;
replacing, according to the target object in the reference image, the target object in an intermediate image obtained by synthesizing the first initial image and the second initial image, to obtain a target image;
wherein an angle between the first direction and the second direction is within a predetermined angle range;
the first focal length is greater than the second focal length.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes:
the first acquisition module is used for acquiring a first initial image and a second initial image which are obtained by continuously shooting according to a first focal length when a camera of the mobile terminal moves along a first direction;
the second acquisition module is used for acquiring a reference image containing the target object, shot by the camera at a second focal length, when the offset, in a second direction, of a first position of the target object in the first initial image relative to a second position of the target object in the second initial image exceeds a predetermined threshold;
the processing module is used for replacing, according to the target object in the reference image, the target object in an intermediate image obtained by synthesizing the first initial image and the second initial image, to obtain a target image;
wherein an angle between the first direction and the second direction is within a predetermined angle range;
the first focal length is greater than the second focal length.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method described above.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the image processing method as described above.
In the embodiment of the invention, a first initial image and a second initial image that are continuously shot at a first focal length while the camera of the mobile terminal moves in a first direction are acquired; when the offset, in a second direction, of a first position of the target object in the first initial image relative to a second position in the second initial image exceeds a predetermined threshold, a reference image containing the target object, shot by the camera at a second focal length, is acquired; and the target object in the intermediate image obtained by synthesizing the first initial image and the second initial image is replaced according to the target object in the reference image, so that the distortion of the intermediate image is eliminated and the photographing effect of panoramic photographing is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 shows one of the flow charts of an image processing method of an embodiment of the present invention;
FIG. 2 is a schematic diagram of an Nth initial image according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an (N+1)th initial image according to an embodiment of the present invention;
FIG. 4 shows a schematic diagram of an intermediate image of an embodiment of the invention;
FIG. 5 is a schematic diagram of a reference image of an embodiment of the present invention;
FIG. 6 shows a schematic view of a target image of an embodiment of the invention;
FIG. 7 is a second flowchart of an image processing method according to an embodiment of the invention;
FIG. 8 is a third flowchart of an image processing method according to an embodiment of the present invention;
FIG. 9 shows the steps of obtaining a target image from the synthesis of a reference image and an intermediate image according to an embodiment of the invention;
FIG. 10 shows a block diagram of a mobile terminal of an embodiment of the invention;
FIG. 11 is a schematic diagram of a hardware configuration of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides an image processing method applied to a mobile terminal, where the method includes:
and step 11, acquiring a first initial image and a second initial image which are obtained by moving a camera of the mobile terminal along a first direction and continuously shooting according to a first focal length.
The image processing method in this embodiment is applicable to a panoramic shooting scene. Specifically, when a photographing instruction of panoramic photographing input by a user is acquired, the mobile terminal starts a panoramic photographing mode and photographs an nth initial image (a first initial image) according to a first focal length, wherein N is a positive integer. And when the camera is detected to rotate by a preset angle along the first direction at a preset center or the camera is detected to translate (such as horizontally translate) for a preset distance along the first direction, shooting an N +1 th initial image (a second initial image) according to the first focal length.
It should be noted that the setting of the preset angle or the preset distance should satisfy that the N +1 th initial image and the nth initial image have the same image information, so as to avoid a cross-over in the panoramic image synthesized according to the nth initial image and the N +1 th initial image.
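As an illustrative sketch only, the capture flow described above might look roughly as follows in Python; the camera and motion_sensor objects and their methods are hypothetical placeholders, not an interface defined by this disclosure.

```python
# Hypothetical sketch of the capture flow: shoot the Nth, (N+1)th, ... initial
# images at the first focal length whenever the camera has rotated by a preset
# angle or translated by a preset distance along the first direction.
def capture_initial_images(camera, motion_sensor, first_focal_length,
                           preset_angle_deg=15.0, preset_distance_m=0.05):
    camera.set_focal_length(first_focal_length)
    initial_images = [camera.shoot()]                  # Nth initial image
    while not motion_sensor.panorama_finished():
        angle, distance = motion_sensor.delta_since_last_shot()
        # The interval is chosen so that consecutive frames share overlapping
        # image information, which avoids a discontinuity when stitching.
        if angle >= preset_angle_deg or distance >= preset_distance_m:
            initial_images.append(camera.shoot())      # (N+1)th initial image
            motion_sensor.reset_delta()
    return initial_images
```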
Step 12, when the offset of the first position of the target object in the first initial image relative to the second position in the second initial image in the second direction exceeds a predetermined threshold, acquiring a reference image containing the target object, shot by the camera at the second focal length.
Wherein an angle between the first direction and the second direction is within a predetermined angle range; the first focal length is greater than the second focal length.
As a preferred implementation, in the case where the two continuously shot initial images (the first initial image and the second initial image) are shot while the camera rotates by a preset angle in the first direction about a predetermined center, the first direction is the arc direction whose rotation radii all lie in the same plane, and the second direction is the radial direction perpendicular to that arc direction.
As another preferred implementation, in the case where the two continuously shot initial images are shot while the camera translates by a preset distance in the first direction, the first direction may be a horizontal direction, and the second direction may be the vertical direction perpendicular to it.
In this embodiment, as shown in fig. 2, an Nth initial image 210 (the first initial image) shot by the camera at the first focal length contains a target object 201; as shown in fig. 3, an (N+1)th initial image 310 (the second initial image) shot by the camera at the first focal length contains the same target object 201. When the offset, in the second direction, of the first position of the target object 201 in the first initial image relative to the second position in the second initial image exceeds the predetermined threshold, it is determined that splicing these two consecutive initial images would produce the distorted intermediate image shown in fig. 4. In this case, the camera is controlled to adjust its focal length and shoot at a second focal length smaller than the first focal length, so as to obtain the reference image 510 shown in fig. 5, where the reference image 510 contains the target object 201 and the image information within a predetermined range around the target object 201.
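A minimal sketch of this decision, assuming the target object is located in each frame by plain template matching (one possible detector among many; the disclosure does not prescribe one) and the images are given as OpenCV arrays:

```python
import cv2

def offset_exceeds_threshold(first_img, second_img, target_patch, threshold_px):
    """Return True if the target object's position along the second (vertical)
    direction differs between the two consecutive initial images by more than
    the predetermined threshold, i.e. stitching them would look distorted."""
    def locate_y(img):
        scores = cv2.matchTemplate(img, target_patch, cv2.TM_CCOEFF_NORMED)
        _, _, _, best_xy = cv2.minMaxLoc(scores)       # (x, y) of best match
        return best_xy[1]
    return abs(locate_y(first_img) - locate_y(second_img)) > threshold_px

# When the check succeeds, the camera would be switched to the shorter second
# focal length to shoot the reference image containing the target object, e.g.:
#     camera.set_focal_length(second_focal_length)    # second < first
#     reference_image = camera.shoot()
```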
Step 13, replacing, according to the target object in the reference image, the target object in the intermediate image obtained by synthesizing the first initial image and the second initial image, to obtain the target image.
In this embodiment, the two continuously shot frames of initial images are synthesized so that the edges of the captured scenery are adjacent, and an intermediate image is obtained. Since the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position in the second initial image exceeds the predetermined threshold, the intermediate image is distorted in the image region corresponding to the target object. Therefore, the image area of the target object in the intermediate image is replaced with the image area of the target object in the reference image to eliminate the distortion of the intermediate image, so as to obtain the target image 610 shown in fig. 6. This avoids distortion of the panoramic image finally synthesized from at least two frames of initial images, and improves the shooting effect of panoramic shooting.
Referring to fig. 7, an embodiment of the present invention further provides an image processing method applied to a mobile terminal, where the method includes:
and step 71, acquiring a first initial image and a second initial image which are obtained by continuously shooting the camera of the mobile terminal along the first direction according to the first focal length.
The image processing method in this embodiment is applicable to a panoramic shooting scene. Specifically, when a photographing instruction of panoramic photographing input by a user is acquired, the mobile terminal starts a panoramic photographing mode and photographs an nth initial image (a first initial image) according to a first focal length, wherein N is a positive integer. And when the camera is detected to rotate by a preset angle along the first direction at a preset center or the camera is detected to translate (such as horizontally translate) for a preset distance along the first direction, shooting an N +1 th initial image (a second initial image) according to the first focal length.
It should be noted that the setting of the preset angle or the preset distance should satisfy that the N +1 th initial image and the nth initial image have the same image information, so as to avoid a cross-over in the panoramic image synthesized according to the nth initial image and the N +1 th initial image.
Step 72, detecting whether the deflection angle of the first direction, in which the camera moves, relative to a predetermined moving direction exceeds a predetermined angle threshold.
The predetermined moving direction is a reference moving direction set when photographing starts.
In this embodiment, the predetermined moving direction may be a rotating direction along a circular arc about a predetermined center, or a horizontal, vertical, or other translation direction. Specifically, the deflection angle of the camera of the mobile terminal relative to the predetermined moving direction while the camera moves in the first direction can be acquired through an angle sensor.
Step 73, if the predetermined angle threshold is exceeded, determining that the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position in the second initial image exceeds the predetermined threshold.
In this embodiment, when it is detected that the deflection angle of the camera of the mobile terminal relative to the predetermined moving direction exceeds the predetermined angle threshold while the camera moves in the first direction, it is determined that the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position in the second initial image exceeds the predetermined threshold. This avoids a large amount of data computation and thus improves the processing efficiency of the mobile terminal.
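A minimal sketch of this angle-based shortcut, assuming the angle sensor reports the camera's current moving direction in degrees:

```python
def offset_exceeds_by_deflection(moving_direction_deg, reference_direction_deg,
                                 angle_threshold_deg):
    """Steps 72-73 in sketch form: if the deflection of the actual moving
    direction from the reference moving direction set at the start of
    photographing exceeds the angle threshold, the offset of the target
    object in the second direction is deemed to exceed the predetermined
    threshold, with no per-pixel computation."""
    deflection = abs(moving_direction_deg - reference_direction_deg) % 360.0
    deflection = min(deflection, 360.0 - deflection)   # handle wrap-around
    return deflection > angle_threshold_deg
```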
Step 74, when the offset of the first position of the target object in the first initial image relative to the second position in the second initial image in the second direction exceeds a predetermined threshold, acquiring a reference image containing the target object, shot by the camera at the second focal length.
Wherein an angle between the first direction and the second direction is within a predetermined angle range; the first focal length is greater than the second focal length.
As a preferred implementation, in the case where the two continuously shot initial images (the first initial image and the second initial image) are shot while the camera rotates by a preset angle in the first direction about a predetermined center, the first direction is the arc direction whose rotation radii all lie in the same plane, and the second direction is the radial direction perpendicular to that arc direction.
As another preferred implementation, in the case where the two continuously shot initial images are shot while the camera translates by a preset distance in the first direction, the first direction may be a horizontal direction, and the second direction may be the vertical direction perpendicular to it.
In this embodiment, as shown in fig. 2, an Nth initial image 210 (the first initial image) shot by the camera at the first focal length contains a target object 201; as shown in fig. 3, an (N+1)th initial image 310 (the second initial image) shot by the camera at the first focal length contains the same target object 201. When the offset, in the second direction, of the first position of the target object 201 in the first initial image relative to the second position in the second initial image exceeds the predetermined threshold, it is determined that splicing these two consecutive initial images would produce the distorted intermediate image shown in fig. 4. In this case, the camera is controlled to adjust its focal length and shoot at a second focal length smaller than the first focal length, so as to obtain the reference image 510 shown in fig. 5, where the reference image 510 contains the target object 201 and the image information within a predetermined range around the target object 201.
Step 75, replacing, according to the target object in the reference image, the target object in the intermediate image obtained by synthesizing the first initial image and the second initial image, to obtain the target image.
In this embodiment, the two continuously shot frames of initial images are synthesized so that the edges of the captured scenery are adjacent, and an intermediate image is obtained. Since the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position in the second initial image exceeds the predetermined threshold, the intermediate image is distorted in the image region corresponding to the target object. Therefore, the image area of the target object in the intermediate image is replaced with the image area of the target object in the reference image to eliminate the distortion of the intermediate image, so as to obtain the target image 610 shown in fig. 6. This avoids distortion of the panoramic image finally synthesized from at least two frames of initial images, and improves the shooting effect of panoramic shooting.
Referring to fig. 8, an embodiment of the present invention further provides an image processing method applied to a mobile terminal, where the method includes:
Step 81, acquiring a first initial image and a second initial image that are continuously shot at a first focal length while the camera of the mobile terminal moves in a first direction.
The image processing method in this embodiment is applicable to a panoramic shooting scene. Specifically, when a photographing instruction for panoramic photographing input by a user is acquired, the mobile terminal starts the panoramic photographing mode and shoots an Nth initial image (the first initial image) at a first focal length, where N is a positive integer. When it is detected that the camera has rotated by a preset angle in the first direction about a predetermined center, or has translated (for example, horizontally) by a preset distance in the first direction, an (N+1)th initial image (the second initial image) is shot at the first focal length.
It should be noted that the preset angle or preset distance should be set so that the (N+1)th initial image and the Nth initial image share some of the same image information, so as to avoid a discontinuity in the panoramic image synthesized from the Nth and (N+1)th initial images.
Step 82, acquiring a first distance from a target reference point in the target object to a predetermined position in the first initial image, and a second distance from the target reference point to the predetermined position in the second initial image.
Wherein the predetermined position is an image edge extending in the second direction or an image center line in the first direction in the first initial image and the second initial image.
Before the step 82, the method further includes: determining a target reference point in the target object.
Specifically, a geometric center point of the same target object in the first initial image and the second initial image is obtained as a target reference point; or acquiring a key pixel point in the same target object in the first initial image and the second initial image as a target reference point; or acquiring a part of the contour line of the same target object in the first initial image and the second initial image as a target reference point.
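A minimal sketch of the first option (the geometric center), assuming a binary mask of the target object has already been obtained in each initial image:

```python
import numpy as np

def geometric_center(target_mask):
    """Use the geometric center of the target object's mask (non-zero pixels)
    as the target reference point; a key pixel or a point on the contour
    could be chosen instead."""
    ys, xs = np.nonzero(target_mask)
    return float(xs.mean()), float(ys.mean())          # (x, y) coordinates
```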
Specifically, the image edges extending along the second direction may include a first image edge (e.g., an upper edge of the initial image region in fig. 2) and a second image edge (e.g., a lower edge of the initial image region in fig. 2); the image center line (shown as a dashed line in fig. 2) is located between the first image edge and the second image edge and is parallel to the first image edge and the second image edge, respectively.
It should be noted here that, when an image edge extending along the second direction or an image center line extending along the first direction is selected as the predetermined position, the predetermined position selected in the second initial image should be the same as the predetermined position selected in the first initial image, for example: if the first image edge in the first initial image is selected as the preset position, the second initial image also selects the first image edge as the preset position; if the second image edge in the first initial image is selected as the preset position, the second initial image should also select the second image edge as the preset position; if the image center line in the first initial image is selected as the predetermined position, the image center line in the second initial image is also selected as the predetermined position.
Step 83, detecting whether the difference between the first distance and the second distance exceeds a predetermined threshold.
In this embodiment, the predetermined threshold is a positive number, and may be determined according to the critical value at which a discontinuity in the synthesized intermediate image becomes visible to the human eye; this critical value may be obtained by experiment.
Step 84, if the predetermined threshold is exceeded, determining that the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position in the second initial image exceeds the predetermined threshold.
In this embodiment, the absolute value of the difference between the first distance and the second distance is calculated; if this absolute value is greater than the predetermined threshold, it is determined that the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position of the target object in the second initial image exceeds the predetermined threshold. In this way, it is directly determined whether a discontinuity would appear in the panoramic image synthesized from the first initial image and the second initial image, which helps improve the accuracy of the detection result.
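A minimal sketch of steps 82-84, assuming the predetermined position is a horizontal line (an image edge or the image center line) given by its y coordinate, so each distance reduces to a difference of y coordinates:

```python
def offset_exceeds_by_distance(ref_point_first, ref_point_second,
                               predetermined_y, threshold):
    """Compare the target reference point's distance to the same predetermined
    position in the first and second initial images; if the difference exceeds
    the threshold, the offset in the second direction is deemed excessive."""
    first_distance = abs(ref_point_first[1] - predetermined_y)
    second_distance = abs(ref_point_second[1] - predetermined_y)
    return abs(first_distance - second_distance) > threshold
```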
Step 85, when the offset of the first position of the target object in the first initial image relative to the second position in the second initial image in the second direction exceeds a predetermined threshold, acquiring a reference image containing the target object, shot by the camera at the second focal length.
Wherein an angle between the first direction and the second direction is within a predetermined angle range; the first focal length is greater than the second focal length.
As a preferred implementation, in the case where the two continuously shot initial images (the first initial image and the second initial image) are shot while the camera rotates by a preset angle in the first direction about a predetermined center, the first direction is the arc direction whose rotation radii all lie in the same plane, and the second direction is the radial direction perpendicular to that arc direction.
As another preferred implementation, in the case where the two continuously shot initial images are shot while the camera translates by a preset distance in the first direction, the first direction may be a horizontal direction, and the second direction may be the vertical direction perpendicular to it.
In this embodiment, as shown in fig. 2, an Nth initial image 210 (the first initial image) shot by the camera at the first focal length contains a target object 201; as shown in fig. 3, an (N+1)th initial image 310 (the second initial image) shot by the camera at the first focal length contains the same target object 201. When the offset, in the second direction, of the first position of the target object 201 in the first initial image relative to the second position in the second initial image exceeds the predetermined threshold, it is determined that splicing these two consecutive initial images would produce the distorted intermediate image shown in fig. 4. In this case, the camera is controlled to adjust its focal length and shoot at a second focal length smaller than the first focal length, so as to obtain the reference image 510 shown in fig. 5, where the reference image 510 contains the target object 201 and the image information within a predetermined range around the target object 201.
Step 86, replacing, according to the target object in the reference image, the target object in the intermediate image obtained by synthesizing the first initial image and the second initial image, to obtain the target image.
In this embodiment, the two continuously shot frames of initial images are synthesized so that the edges of the captured scenery are adjacent, and an intermediate image is obtained. Since the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position in the second initial image exceeds the predetermined threshold, the intermediate image is distorted in the image region corresponding to the target object. Therefore, the image area of the target object in the intermediate image is replaced with the image area of the target object in the reference image to eliminate the distortion of the intermediate image, so as to obtain the target image 610 shown in fig. 6. This avoids distortion of the panoramic image finally synthesized from at least two frames of initial images, and improves the shooting effect of panoramic shooting.
Specifically, referring to fig. 9, in each of the above embodiments, the step of replacing, according to the target object in the reference image, the target object in the intermediate image obtained by synthesizing the first initial image and the second initial image to obtain the target image specifically includes:
Step 91, synthesizing the first initial image and the second initial image to obtain an intermediate image.
As an implementation manner, the same image area in the first initial image and the second initial image is overlapped to obtain an intermediate image obtained by splicing the first initial image and the second initial image.
Specifically, the intermediate image is obtained by overlapping and splicing the edges of the same image area in the first initial image and the second initial image; wherein the first initial image is one of two initial images taken continuously, and the second initial image is the other of the two initial images taken continuously.
In another implementation, the image area of the first initial image that is the same as an area of the second initial image is cut off, and the cropped first initial image and the second initial image are spliced to obtain an intermediate image.
Specifically, the image area that the first initial image shares with the second initial image is cut off to obtain a to-be-processed intermediate image, and the cut edge of the to-be-processed intermediate image is spliced to the edge of the second initial image whose image information is adjacent to that cut edge, so as to obtain the intermediate image.
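A minimal sketch of this second implementation of step 91, assuming the width of the shared (overlapping) image area is already known, for example from feature matching or from the capture geometry:

```python
import numpy as np

def synthesize_intermediate(first_img, second_img, overlap_width_px):
    """Cut the shared image area off the trailing edge of the first initial
    image, then splice the cut edge to the adjacent edge of the second
    initial image to form the intermediate image."""
    cropped_first = first_img[:, :first_img.shape[1] - overlap_width_px]
    return np.hstack((cropped_first, second_img))
```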
Step 92, matting out a first image area containing the target object in the reference image to obtain an initial sub-image.
In this embodiment, when the first image region corresponding to the target object in the reference image is matted out, the image region enclosed by the edge contour of the target object may be used as the first image region, and this region is matted out to obtain the initial sub-image.
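A minimal sketch of step 92, assuming a binary mask of the target object in the reference image is available (from segmentation, user selection, or any other means):

```python
import cv2
import numpy as np

def matte_initial_sub_image(reference_image, target_mask):
    """Take the region enclosed by the target object's edge contour as the
    first image region and matte it out of the reference image."""
    contours, _ = cv2.findContours(target_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    initial_sub_image = reference_image[y:y + h, x:x + w].copy()
    return initial_sub_image, (x, y, w, h)
```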
Step 93, determining the magnification ratio for magnifying the initial sub-image according to the ratio of the first focal length to the second focal length.
In this embodiment, since the second focal length is smaller than the first focal length, shooting at the second focal length is equivalent to obtaining a wide-angle reference image containing the target object. Therefore, before the target object in the reference image is used to cover the target object in the intermediate image obtained by synthesizing the first initial image and the second initial image, the initial sub-image needs to be enlarged.
Step 94, magnifying the initial sub-image according to the magnification ratio to obtain a target sub-image.
Of course, in another implementation, the magnification ratio for magnifying the reference image may instead be determined according to the ratio between the first focal length and the second focal length; the reference image is magnified according to that ratio to obtain a target reference image, and the first image area containing the target object is matted out of the target reference image to obtain the target sub-image.
Step 95, covering the second image area corresponding to the target object in the intermediate image with the target sub-image to obtain the target image.
In this embodiment, the second image region may be determined according to an image range surrounded by an edge contour of the target object in the intermediate image.
The reference image is shot by the camera at the second focal length, whereas the intermediate image is synthesized from two frames of initial images continuously shot by the camera at the first focal length; because the second focal length is smaller than the first focal length, the target object in the reference image is smaller than the target object in the intermediate image. Therefore, after the initial sub-image is matted out, it is magnified according to the ratio between the first focal length and the second focal length, so as to obtain a target sub-image whose size equals that of the second image area, or exceeds it by a preset amount, so that the target sub-image completely covers the second image area when placed over it.
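A minimal sketch of steps 93-95 under the same assumptions, with any residual size mismatch absorbed by resizing the enlarged sub-image onto the second image region:

```python
import cv2

def replace_target_region(intermediate_image, initial_sub_image,
                          second_region_xywh, first_focal, second_focal):
    """Enlarge the initial sub-image by the ratio of the first focal length to
    the second focal length, then cover the second image region (the distorted
    target region) in the intermediate image with it."""
    scale = first_focal / second_focal                  # > 1 since f1 > f2
    enlarged = cv2.resize(initial_sub_image, None, fx=scale, fy=scale,
                          interpolation=cv2.INTER_LINEAR)
    x, y, w, h = second_region_xywh
    target_sub_image = cv2.resize(enlarged, (w, h),     # fit the region exactly
                                  interpolation=cv2.INTER_LINEAR)
    result = intermediate_image.copy()
    result[y:y + h, x:x + w] = target_sub_image
    return result
```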
Referring to fig. 10, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal 1000 includes:
the first obtaining module 1010 is configured to obtain a first initial image and a second initial image, which are obtained by moving a camera of the mobile terminal along a first direction and continuously shooting according to a first focal length.
A second obtaining module 1020, configured to obtain a reference image containing the target object, shot by the camera at a second focal length, when the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position of the target object in the second initial image exceeds a predetermined threshold.
And a processing module 1030, configured to replace, according to the target object in the reference image, the target object in the intermediate image obtained by performing synthesis processing on the first initial image and the second initial image, so as to obtain a target image.
Wherein an angle between the first direction and the second direction is within a predetermined angle range; the first focal length is greater than the second focal length.
Wherein the mobile terminal 1000 further comprises:
the first detection module is used for detecting whether a deflection angle of the first direction relative to the camera in a preset moving direction exceeds a preset angle threshold, wherein the preset moving direction is a reference moving direction set when photographing starts.
And the first determining module is used for determining that the offset of the first position of the target object in the first initial image relative to the second position in the second initial image in the second direction exceeds the preset threshold if the first position exceeds the preset angle threshold.
Wherein the mobile terminal 1000 further comprises:
and the third acquisition module is used for acquiring a first distance from a target reference point in the target object to a preset position in the first initial image and a second distance from the target reference point to a preset position in the second initial image.
A second detection module for detecting whether a difference between the first distance and the second distance exceeds a predetermined threshold.
And the second determining module is used for determining that the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position in the second initial image exceeds the predetermined threshold if the difference exceeds the predetermined threshold.
Wherein the predetermined position is an image edge extending in the second direction or an image center line in the first direction in the first initial image and the second initial image.
Wherein the mobile terminal 1000 further comprises:
and the fourth acquisition module is used for acquiring the geometric center point of the same target object in the first initial image and the second initial image as a target reference point.
Or, a fifth obtaining module, configured to obtain a key pixel point in the same target object in the first initial image and the second initial image as a target reference point.
Or, the sixth obtaining module is configured to obtain a part of a contour line of the same target object in the first initial image and the second initial image as a target reference point.
Wherein the processing module 1030 comprises:
and the first synthesizing unit is used for synthesizing the first initial image and the second initial image to obtain an intermediate image.
And the matting unit is used for matting the first image area containing the target object in the reference image to obtain an initial sub-image.
And the processing unit is used for determining the amplification ratio for amplifying the initial sub-image according to the ratio of the first focal length to the second focal length.
And the amplifying unit is used for amplifying the initial sub-image according to the amplifying proportion to obtain a target sub-image.
And the second synthesis unit is used for covering the target sub-image in a second image area corresponding to the target object in the intermediate image to obtain a target image.
The mobile terminal provided by the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 9, and is not described herein again to avoid repetition.
In the mobile terminal 1000 in the above scheme, a first initial image and a second initial image that are continuously shot at the first focal length while the camera of the mobile terminal moves in the first direction are acquired; when the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position in the second initial image exceeds a predetermined threshold, a reference image containing the target object, shot by the camera at a second focal length, is acquired; and the target object in the intermediate image obtained by synthesizing the first initial image and the second initial image is replaced according to the target object in the reference image, so that the distortion of the intermediate image is eliminated and the photographing effect of panoramic photographing is improved.
Fig. 11 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 1100 includes, but is not limited to: radio frequency unit 1101, network module 1102, audio output unit 1103, input unit 1104, sensor 1105, display unit 1106, user input unit 1107, interface unit 1108, memory 1109, processor 1110, and power supply 1111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 11 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 1110 is configured to: acquire a first initial image and a second initial image that are continuously shot at a first focal length while the camera of the mobile terminal moves in a first direction; when the offset, in a second direction, of a first position of a target object in the first initial image relative to a second position in the second initial image exceeds a predetermined threshold, acquire a reference image containing the target object shot by the camera at a second focal length; and replace, according to the target object in the reference image, the target object in the intermediate image obtained by synthesizing the first initial image and the second initial image, to obtain a target image; wherein the angle between the first direction and the second direction is within a predetermined angle range, and the first focal length is greater than the second focal length.
In the mobile terminal 1100 in the above scheme, a first initial image and a second initial image that are continuously shot at the first focal length while the camera of the mobile terminal moves in the first direction are acquired; when the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position in the second initial image exceeds a predetermined threshold, a reference image containing the target object, shot by the camera at a second focal length, is acquired; and the target object in the intermediate image obtained by synthesizing the first initial image and the second initial image is replaced according to the target object in the reference image, so that the distortion of the intermediate image is eliminated and the photographing effect of panoramic photographing is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 1101 may be configured to receive and transmit signals during message transmission or a call; specifically, it receives downlink data from a base station and forwards it to the processor 1110 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 1101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1101 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 1102, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 1103 may convert audio data received by the radio frequency unit 1101 or the network module 1102 or stored in the memory 1109 into an audio signal and output as sound. Also, the audio output unit 1103 may also provide audio output related to a specific function performed by the mobile terminal 1100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 1103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1104 is used to receive audio or video signals. The input unit 1104 may include a Graphics Processing Unit (GPU) 11041 and a microphone 11042. The graphics processor 11041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 1106. The image frames processed by the graphics processor 11041 may also be stored in the memory 1109 (or another storage medium) or transmitted via the radio frequency unit 1101 or the network module 1102. The microphone 11042 may receive sound and process it into audio data; in the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 1101.
The mobile terminal 1100 also includes at least one sensor 1105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 11061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 11061 and/or a backlight when the mobile terminal 1100 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 1105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., and will not be described in detail herein.
The display unit 1106 is used to display information input by a user or information provided to the user. The Display unit 1106 may include a Display panel 11061, and the Display panel 11061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 1107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 1107 includes a touch panel 11071 and other input devices 11072. The touch panel 11071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 11071 (e.g., operations by a user on or near the touch panel 11071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 11071 may include two portions of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 1110, and receives and executes commands sent from the processor 1110. In addition, the touch panel 11071 may be implemented by various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 1107 may include other input devices 11072 in addition to the touch panel 11071. In particular, the other input devices 11072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 11071 can be overlaid on the display panel 11061, and when the touch panel 11071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1110 to determine the type of the touch event, and then the processor 1110 provides a corresponding visual output on the display panel 11061 according to the type of the touch event. Although the touch panel 11071 and the display panel 11061 are shown in fig. 11 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 11071 and the display panel 11061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 1108 is an interface through which an external device is connected to the mobile terminal 1100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. Interface unit 1108 may be used to receive input from external devices (e.g., data information, power, etc.) and transmit the received input to one or more elements within mobile terminal 1100 or may be used to transmit data between mobile terminal 1100 and external devices.
The memory 1109 may be used to store software programs as well as various data. The memory 1109 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. In addition, the memory 1109 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 1110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 1109 and calling data stored in the memory 1109, thereby integrally monitoring the mobile terminal. Processor 1110 may include one or more processing units; preferably, the processor 1110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1110.
The mobile terminal 1100 may also include a power supply 1111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 1111 may be logically connected to the processor 1110 via a power management system such that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 1100 includes some functional modules that are not shown, and thus will not be described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 1110, a memory 1109, and a computer program stored in the memory 1109 and capable of running on the processor 1110, where the computer program, when executed by the processor 1110, implements each process of the above-described image processing method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. An image processing method applied to a mobile terminal is characterized by comprising the following steps:
obtaining a first initial image and a second initial image that are captured by continuous shooting at a first focal length while a camera of the mobile terminal is moved along a first direction;
when an offset, in a second direction, of a first position of a target object in the first initial image relative to a second position of the target object in the second initial image exceeds a predetermined threshold, acquiring a reference image that is captured by the camera at a second focal length and contains the target object;
replacing, according to the target object in the reference image, the target object in an intermediate image obtained by synthesizing the first initial image and the second initial image, to obtain a target image;
wherein an angle between the first direction and the second direction is within a predetermined angle range;
the first focal length is greater than the second focal length.
2. The image processing method according to claim 1, wherein before the step of acquiring the reference image that is captured by the camera at the second focal length and contains the target object when the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position of the target object in the second initial image exceeds the predetermined threshold, the method further comprises:
detecting whether a deflection angle of the first direction relative to a preset moving direction of the camera exceeds a preset angle threshold, wherein the preset moving direction is a reference moving direction set when photographing starts;
if the deflection angle exceeds the preset angle threshold, determining that the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position of the target object in the second initial image exceeds the predetermined threshold.
3. The image processing method according to claim 1, wherein before the step of acquiring the reference image that is captured by the camera at the second focal length and contains the target object when the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position of the target object in the second initial image exceeds the predetermined threshold, the method further comprises:
acquiring a first distance from a target reference point in the target object to a predetermined position in the first initial image and a second distance from the target reference point to a predetermined position in the second initial image;
detecting whether a difference between the first distance and the second distance exceeds a predetermined threshold;
if the difference exceeds the predetermined threshold, determining that the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position of the target object in the second initial image exceeds the predetermined threshold;
wherein the predetermined position is, in each of the first initial image and the second initial image, an image edge extending in the second direction or an image center line extending in the first direction.
4. The image processing method according to claim 3, wherein before the step of acquiring the first distance from the target reference point in the target object to the predetermined position in the first initial image and the second distance from the target reference point to the predetermined position in the second initial image, the method further comprises:
acquiring a geometric central point of the same target object in the first initial image and the second initial image as a target reference point; or,
acquiring a key pixel point in the same target object in the first initial image and the second initial image as a target reference point; or,
acquiring a part of a contour line of the same target object in the first initial image and the second initial image as a target reference point.
5. The image processing method according to claim 1, wherein the step of replacing, according to the target object in the reference image, the target object in the intermediate image obtained by combining the first initial image and the second initial image to obtain the target image comprises:
synthesizing the first initial image and the second initial image to obtain an intermediate image;
matting a first image area containing the target object in the reference image to obtain an initial sub-image;
determining the amplification ratio of the amplification processing of the initial sub-image according to the ratio of the first focal length to the second focal length;
amplifying the initial sub-image according to the amplification ratio to obtain a target sub-image;
and covering the target sub-image in a second image area corresponding to the target object in the intermediate image to obtain a target image.
6. A mobile terminal, characterized in that the mobile terminal comprises:
the first acquisition module is used for acquiring a first initial image and a second initial image that are captured by continuous shooting at a first focal length while a camera of the mobile terminal moves along a first direction;
the second acquisition module is used for acquiring, when an offset, in a second direction, of a first position of a target object in the first initial image relative to a second position of the target object in the second initial image exceeds a predetermined threshold, a reference image that is captured by the camera at a second focal length and contains the target object;
the processing module is used for replacing, according to the target object in the reference image, the target object in an intermediate image obtained by synthesizing the first initial image and the second initial image, to obtain a target image;
wherein an angle between the first direction and the second direction is within a predetermined angle range;
the first focal length is greater than the second focal length.
7. The mobile terminal of claim 6, wherein the mobile terminal further comprises:
the first detection module is used for detecting whether a deflection angle of the first direction relative to a preset moving direction of the camera exceeds a preset angle threshold, wherein the preset moving direction is a reference moving direction set when photographing starts;
and the first determining module is used for determining that the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position of the target object in the second initial image exceeds the predetermined threshold if the deflection angle exceeds the preset angle threshold.
8. The mobile terminal of claim 7, wherein the mobile terminal further comprises:
a third obtaining module, configured to obtain a first distance from a target reference point in the target object to a predetermined position in the first initial image, and a second distance from the target reference point to a predetermined position in the second initial image;
a second detection module for detecting whether a difference between the first distance and the second distance exceeds a predetermined threshold;
the second determining module is used for determining that the offset, in the second direction, of the first position of the target object in the first initial image relative to the second position of the target object in the second initial image exceeds the predetermined threshold if the difference between the first distance and the second distance exceeds the predetermined threshold;
wherein the predetermined position is, in each of the first initial image and the second initial image, an image edge extending in the second direction or an image center line extending in the first direction.
9. The mobile terminal of claim 8, wherein the mobile terminal further comprises:
a fourth obtaining module, configured to obtain a geometric center point of the same target object in the first initial image and the second initial image as a target reference point; or,
a fifth obtaining module, configured to obtain a key pixel point in the same target object in the first initial image and the second initial image as a target reference point; or,
a sixth obtaining module, configured to obtain a part of a contour line of the same target object in the first initial image and the second initial image as a target reference point.
10. The mobile terminal of claim 6, wherein the processing module comprises:
a first synthesizing unit, configured to perform synthesis processing on the first initial image and the second initial image to obtain an intermediate image;
the matting unit is used for matting a first image area containing the target object in the reference image to obtain an initial sub-image;
the processing unit is used for determining the amplification ratio of the amplification processing on the initial sub-image according to the ratio between the first focal length and the second focal length;
the amplifying unit is used for amplifying the initial sub-image according to the amplifying proportion to obtain a target sub-image;
and the second synthesis unit is used for covering the target sub-image in a second image area corresponding to the target object in the intermediate image to obtain a target image.
11. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 5.
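
The deflection-angle test of claim 2 can be pictured with a short sketch. The following Python fragment is purely illustrative and is not taken from the specification: it assumes the actual movement direction and the preset moving direction are available as 2-D vectors (for example from gyroscope or optical-flow data), and the function name and the 10-degree threshold are hypothetical.

```python
import math

def deflection_exceeds_threshold(move_dir, preset_dir, angle_threshold_deg=10.0):
    """Claim 2's test: does the angle between the actual movement direction
    (the first direction) and the preset moving direction exceed the threshold?"""
    mx, my = move_dir
    px, py = preset_dir
    norm = math.hypot(mx, my) * math.hypot(px, py)
    if norm == 0.0:
        raise ValueError("direction vectors must be non-zero")
    # Clamp the cosine to [-1, 1] to guard against floating-point drift before acos.
    cos_angle = max(-1.0, min(1.0, (mx * px + my * py) / norm))
    deflection_deg = math.degrees(math.acos(cos_angle))
    return deflection_deg > angle_threshold_deg
```

When this test returns True, claim 2 treats the target object's offset in the second direction as exceeding the predetermined threshold and the method falls back to the reference image captured at the second focal length.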
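
Claims 3 and 4 describe an alternative, distance-based test: pick a target reference point (for instance the object's geometric center), measure its distance to a predetermined position in each initial image, and compare the difference against the threshold. The sketch below is a rough illustration under the assumption that binary masks of the target object are already available (for example from an object detector); it uses the image center line running along the first direction as the predetermined position, and the helper names and the 30-pixel threshold are invented for illustration.

```python
import numpy as np

def geometric_center(mask):
    """Geometric center of the target object given a binary mask
    (the first of the three reference-point options in claim 4)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("mask contains no target pixels")
    return float(xs.mean()), float(ys.mean())

def offset_exceeds_threshold(mask_first, mask_second, threshold_px=30.0):
    """Claim 3's test, using the image center line along the first (sweep)
    direction as the predetermined position in both initial images."""
    _, y_first = geometric_center(mask_first)
    _, y_second = geometric_center(mask_second)
    first_distance = abs(y_first - mask_first.shape[0] / 2.0)     # distance in the first initial image
    second_distance = abs(y_second - mask_second.shape[0] / 2.0)  # distance in the second initial image
    return abs(first_distance - second_distance) > threshold_px
```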
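
Claim 5 spells out the replacement step: synthesize the two initial images into an intermediate image, matte the target object out of the reference image, enlarge that sub-image by the ratio of the first focal length to the second focal length, and cover the corresponding region of the intermediate image with the result. The fragment below is a simplified sketch assuming OpenCV and NumPy, with the stitching already done and the bounding boxes supplied by earlier steps; the function signature, the rectangular matte, and the hard overlay without blending are illustrative choices, not requirements of the patent.

```python
import cv2

def replace_target(intermediate, reference, ref_box, target_top_left,
                   first_focal_length, second_focal_length):
    """Claim 5's replacement step.

    intermediate    : intermediate image synthesized from the two initial images
    reference       : image captured at the (shorter) second focal length
    ref_box         : (x, y, w, h) of the target object in the reference image
    target_top_left : (x, y) where the target object starts in the intermediate image
    """
    x, y, w, h = ref_box
    # Matte the first image area containing the target object: the initial sub-image.
    initial_sub = reference[y:y + h, x:x + w]

    # The magnification ratio is the ratio of the first focal length to the second focal length.
    ratio = first_focal_length / second_focal_length
    target_sub = cv2.resize(initial_sub, None, fx=ratio, fy=ratio,
                            interpolation=cv2.INTER_LINEAR)

    # Cover the second image area in the intermediate image with the target sub-image,
    # clipping at the image border so the overlay stays in bounds.
    tx, ty = target_top_left
    th = min(target_sub.shape[0], intermediate.shape[0] - ty)
    tw = min(target_sub.shape[1], intermediate.shape[1] - tx)
    result = intermediate.copy()
    result[ty:ty + th, tx:tx + tw] = target_sub[:th, :tw]
    return result
```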
CN201810145669.8A 2018-02-12 2018-02-12 Image processing method and mobile terminal Active CN108391050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810145669.8A CN108391050B (en) 2018-02-12 2018-02-12 Image processing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN108391050A true CN108391050A (en) 2018-08-10
CN108391050B CN108391050B (en) 2020-04-14

Family

ID=63069441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810145669.8A Active CN108391050B (en) 2018-02-12 2018-02-12 Image processing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN108391050B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120075409A1 (en) * 2010-09-27 2012-03-29 Hon Hai Precision Industry Co., Ltd. Image segmentation system and method thereof
CN104756479A (en) * 2012-10-29 2015-07-01 谷歌公司 Smart targets facilitating the capture of contiguous images
CN104184961A (en) * 2013-05-22 2014-12-03 辉达公司 Mobile device and system used for generating panoramic video
CN104243819A (en) * 2014-08-29 2014-12-24 小米科技有限责任公司 Photo acquiring method and device
CN104735356A (en) * 2015-03-23 2015-06-24 深圳市欧珀通信软件有限公司 Panorama picture shooting method and device
CN105959549A (en) * 2016-05-26 2016-09-21 努比亚技术有限公司 Panorama picture shooting device and method
CN106791455A (en) * 2017-03-31 2017-05-31 努比亚技术有限公司 Panorama shooting method and device
CN106981048A (en) * 2017-03-31 2017-07-25 联想(北京)有限公司 A kind of image processing method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538252A (en) * 2020-04-17 2021-10-22 嘉楠明芯(北京)科技有限公司 Image correction method and device
CN113538252B (en) * 2020-04-17 2024-03-26 嘉楠明芯(北京)科技有限公司 Image correction method and device
CN113014871A (en) * 2021-02-20 2021-06-22 青岛小鸟看看科技有限公司 Endoscope image display method, device and endoscope operation auxiliary system
CN113014871B (en) * 2021-02-20 2023-11-10 青岛小鸟看看科技有限公司 Endoscopic image display method and device and endoscopic surgery auxiliary system

Also Published As

Publication number Publication date
CN108391050B (en) 2020-04-14

Similar Documents

Publication Publication Date Title
CN108513070B (en) Image processing method, mobile terminal and computer readable storage medium
CN107592466B (en) Photographing method and mobile terminal
CN110784651B (en) Anti-shake method and electronic equipment
CN108495029B (en) Photographing method and mobile terminal
CN109660723B (en) Panoramic shooting method and device
CN110445984B (en) Shooting prompting method and electronic equipment
CN110602401A (en) Photographing method and terminal
CN107948505B (en) Panoramic shooting method and mobile terminal
CN110262737A (en) A kind of processing method and terminal of video data
CN109905603B (en) Shooting processing method and mobile terminal
CN108763998B (en) Bar code identification method and terminal equipment
CN111064895B (en) Virtual shooting method and electronic equipment
CN109474787B (en) Photographing method, terminal device and storage medium
CN108449546B (en) Photographing method and mobile terminal
CN111010511B (en) Panoramic body-separating image shooting method and electronic equipment
CN108924422B (en) Panoramic photographing method and mobile terminal
CN110798621A (en) Image processing method and electronic equipment
CN110769156A (en) Picture display method and electronic equipment
CN108174110B (en) Photographing method and flexible screen terminal
CN110769154B (en) Shooting method and electronic equipment
CN108156386B (en) Panoramic photographing method and mobile terminal
CN108234978B (en) A kind of image processing method and mobile terminal
CN108391050B (en) Image processing method and mobile terminal
CN111770275B (en) Shooting method and device, electronic equipment and readable storage medium
CN111182206B (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant