CN108234888B - Image processing method and mobile terminal - Google Patents

Info

Publication number
CN108234888B
Authority
CN
China
Prior art keywords
image, target object, area, mobile terminal, user
Legal status
Active
Application number
CN201810209159.2A
Other languages
Chinese (zh)
Other versions
CN108234888A
Inventor
李丹丹
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201810209159.2A priority Critical patent/CN108234888B/en
Publication of CN108234888A publication Critical patent/CN108234888A/en
Application granted granted Critical
Publication of CN108234888B publication Critical patent/CN108234888B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides an image processing method and a mobile terminal. The image processing method comprises the following steps: acquiring a first image and at least one second image captured by a user, wherein the position of the target object displayed in the first image differs from its position in the second image; performing image segmentation on the first image and the second image according to the identified positions of the target object in the two images; and synthesizing the first image and the second image according to the segmented images to obtain a composite shot image in which the target object does not appear. The scheme can solve the problem that a redundant target object appears in images obtained by prior-art photographing schemes.

Description

Image processing method and mobile terminal
Technical Field
The invention relates to the technical field of terminals, in particular to an image processing method and a mobile terminal.
Background
With continuous innovation in camera hardware and software, mobile phones now photograph well enough to match or exceed early digital cameras. Beauty-camera applications of all kinds likewise satisfy users' desire for retouching and composition. Many users take self-portraits facing a mirror, where they can watch their own pose during the shot and try to look their best. Inevitably, however, the phone itself appears in the picture, covering part of the scene, and the pose looks unnatural; the attention paid to where to hold the phone interferes with the final effect or the convenience of the self-portrait, degrading the user experience.
Disclosure of Invention
The invention aims to provide an image processing method and a mobile terminal, and aims to solve the problem that redundant objects exist in an image obtained by a photographing scheme in the prior art.
In order to solve the technical problem, the invention is realized as follows: an image processing method is applied to a mobile terminal, and comprises the following steps:
acquiring a first image and at least one second image which are obtained by photographing of a user, wherein the positions of target objects displayed in the first image and the second image are different;
performing image segmentation on the first image and the second image according to the positions of the identified target objects in the first image and the second image;
and synthesizing the first image and the second image according to the segmented image to obtain a synthesized shot image, wherein the target object does not exist in the synthesized shot image.
In a first aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes:
the device comprises a first acquisition module, a second acquisition module and a display module, wherein the first acquisition module is used for acquiring a first image and at least one second image which are obtained by photographing by a user, and the positions of target objects displayed in the first image and the second image are different;
the first processing module is used for carrying out image segmentation on the first image and the second image according to the positions of the identified target objects in the first image and the second image;
and the first synthesis module is used for synthesizing the first image and the second image according to the segmented image to obtain a synthesized shot image, wherein the target object does not exist in the synthesized shot image.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method described above.
In a third aspect, the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps of the image processing method described above.
In the embodiment of the invention, a first image and at least one second image obtained by photographing by a user are obtained, wherein the positions of target objects displayed in the first image and the second image are different; performing image segmentation on the first image and the second image according to the positions of the identified target objects in the first image and the second image; synthesizing the first image and the second image according to the segmented image to obtain a synthesized shot image, wherein the target object does not exist in the synthesized shot image; the problem that redundant objects exist in images obtained by a photographing scheme in the prior art can be solved.
Drawings
FIG. 1 is a flowchart illustrating an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of self-portrait photo 1 according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of self-portrait photo 2 according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of self-portrait photo 3 according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of self-portrait photo 2 with the mirror border removed according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of self-portrait photo 3 with the mirror border removed according to an embodiment of the present invention;
FIG. 7 is a first composite diagram of self-portrait photos 2 and 3 according to an embodiment of the present invention;
FIG. 8 is a second composite diagram of self-portrait photos 2 and 3 according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a composite captured image according to an embodiment of the present invention;
FIG. 10 is a flowchart illustrating an exemplary application of the image processing method according to the embodiment of the present invention;
fig. 11 is a first schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 12 is a second schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
To address the problem that redundant objects appear in images obtained by prior-art photographing schemes, the invention provides an image processing method applied to a mobile terminal, which comprises the following steps:
step 11: the method comprises the steps of obtaining a first image and at least one second image obtained by photographing of a user, wherein the positions of target objects displayed in the first image and the second image are different.
The target object may be a person or a thing, for example a mobile terminal. The scheme can be specifically applied to the scenario of a user taking a self-portrait facing a mirror; correspondingly, step 11 may specifically be: acquiring at least two shot pictures of the image in the mirror taken while the user faces the mirror, wherein the positions of the photographed mobile terminal differ across the at least two shot pictures.
Step 12: and performing image segmentation on the first image and the second image according to the positions of the identified target objects in the first image and the second image.
Corresponding to the application scenario, step 12 may specifically be: identifying the mobile terminal according to the at least two shot pictures; and carrying out image segmentation on the at least two shot pictures according to the mobile terminal identified in the at least two shot pictures.
Step 13: and synthesizing the first image and the second image according to the segmented image to obtain a synthesized shot image, wherein the target object does not exist in the synthesized shot image.
Corresponding to the application scenario, step 13 may specifically be: and synthesizing at least two divided shot pictures to obtain a synthesized shot image, wherein the mobile terminal does not exist in the synthesized shot image.
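Purely for illustration, the flow of steps 11 to 13 can be sketched in Python as follows (a minimal sketch, not the patented implementation; the helpers find_target and split_masks and the numpy-array image representation are assumptions for this example — one possible split_masks is sketched later in this description):

```python
import numpy as np

def remove_target(first_img, second_img, find_target, split_masks):
    """Sketch of steps 11-13: locate the target object in both shots,
    split the image plane in two, and compose a target-free result."""
    c1 = find_target(first_img)    # step 12: target position in the first image
    c2 = find_target(second_img)   # ... and in the second image (must differ)
    # divide the plane along a line separating the two positions;
    # side1 is the half-plane containing c1
    side1, side2 = split_masks(first_img.shape, c1, c2)
    out = np.empty_like(first_img)
    out[side1] = second_img[side1]  # the first image's target lies here -> use image 2
    out[side2] = first_img[side2]   # the second image's target lies here -> use image 1
    return out                      # step 13: composite without the target object
```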
The image processing method provided by the embodiment of the invention acquires a first image and at least one second image captured by a user, wherein the positions of the target object displayed in the first image and the second image are different; performs image segmentation on the first image and the second image according to the identified positions of the target object in the two images; and synthesizes the first image and the second image according to the segmented images to obtain a composite shot image in which the target object does not exist. This solves the problem that redundant target objects appear in images obtained by prior-art photographing schemes, and improves the user's experience of taking self-portraits facing a mirror.
Wherein the step of image segmenting the first image and the second image according to the positions of the identified objects in the first image and the second image comprises: according to the positions of the identified target objects in the first image and the second image, performing image segmentation on the first image and the second image to obtain at least two area images without the target objects;
correspondingly, the step of synthesizing the first image and the second image according to the segmented image to obtain a synthesized captured image includes: and synthesizing the area images to obtain a synthesized shot image.
It is explained here that at least two area images in which the object does not exist include an area image on the first image and an area image on the second image, which can ensure that the first image and the second image are synthesized to obtain a synthesized captured image that does not include the object.
Specifically, the number of the second images is one, and the step of performing image segmentation on the first image and the second image according to the positions of the objects identified in the first image and the second image to obtain at least two area images where the objects do not exist includes: dividing the first image into a first area image with a target object and a second area image without the target object according to a first straight line, and dividing the second image into a third area image with the target object and a fourth area image without the target object; the first straight line is a straight line which is intersected with a preset point on a second straight line and forms a preset included angle with the second straight line, and the second straight line is a connecting line of a central point of a target object in the first image and a central point of the target object in the second image.
Thus, the image is segmented to obtain the area image without the target object more easily and quickly.
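As an illustrative sketch of this segmentation geometry (assumed helper names and numpy-array images; the default angle of 90 degrees gives the perpendicular case described later):

```python
import numpy as np

def split_masks(shape, s1, s2, t=0.5, angle_deg=90.0):
    """Divide an image plane into two half-plane masks along the first
    straight line: it crosses the point P = s1 + t*(s2 - s1) on the line
    connecting the target centers s1 and s2 (each an (x, y) pair) and
    forms `angle_deg` degrees with that connecting line."""
    h, w = shape[:2]
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    p = s1 + t * (s2 - s1)                 # preset point on the second straight line
    d = s2 - s1                            # direction of the connecting line
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    line_dir = rot @ d                     # direction of the dividing line
    normal = np.array([-line_dir[1], line_dir[0]])
    ys, xs = np.mgrid[0:h, 0:w]
    side = (xs - p[0]) * normal[0] + (ys - p[1]) * normal[1]
    if (s1[0] - p[0]) * normal[0] + (s1[1] - p[1]) * normal[1] > 0:
        side = -side                       # orient so s1 falls on the first mask
    return side <= 0, side > 0             # first mask contains s1's half-plane
```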
Wherein the step of synthesizing the region images to obtain a synthesized captured image includes: acquiring the similarity between the cutting edge of the second area image and the cutting edge of the third area image; and when the similarity is greater than or equal to a preset threshold value, synthesizing the cut edge of the second area image and the cut edge of the third area image to obtain a synthesized shot image.
Therefore, seamless joint of the area images can be realized, the complete image which looks like direct shooting is obtained, and the user experience is improved.
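A toy version of this check might look as follows (a sketch under assumptions: the description fixes neither the similarity measure nor the threshold, so the mean-absolute-difference score, the 0.9 default, and all helper names here are illustrative):

```python
import numpy as np

def seam_similarity(edge_a, edge_b):
    """Similarity in [0, 1] between two pixel strips sampled along the
    cutting edges of the two region images (1.0 means identical strips)."""
    a = np.asarray(edge_a, dtype=np.float64).ravel()
    b = np.asarray(edge_b, dtype=np.float64).ravel()
    return 1.0 - np.abs(a - b).mean() / 255.0   # assumes 8-bit pixel values

def compose_if_similar(second_region, third_region, edge_a, edge_b,
                       mask, threshold=0.9):
    """Join the two target-free regions only when the seam similarity
    reaches the preset threshold; otherwise signal that a reshoot is needed."""
    if seam_similarity(edge_a, edge_b) >= threshold:
        out = third_region.copy()
        out[mask] = second_region[mask]   # fill the second region's half-plane
        return out
    return None                           # below threshold: prompt a reshoot
```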
For the scenario of a user taking self-portraits facing a mirror, more specifically, the step of performing image segmentation on the first image and the second image according to the positions of the target objects identified in them to obtain at least two area images without the target object may be: connecting the center of the mobile terminal in the first image with the center of the mobile terminal in the second image, and drawing a perpendicular to the connecting line; and dividing the first image according to the perpendicular into a first area photo containing the mobile terminal and a second area photo without it, and dividing the second image into a third area photo containing the mobile terminal and a fourth area photo without it.
In this way, the perpendicular to the line connecting the mobile terminal centers is used directly for image segmentation, which accounts for the terminal's outer shape, and a single cut yields area images that do not contain the target object.
Further, before dividing the first image according to the perpendicular into the first area photo containing the mobile terminal and the second area photo without it, and dividing the second image into the third area photo containing the mobile terminal and the fourth area photo without it, the image processing method further includes: adjusting the position of the perpendicular on the connecting line, or adjusting the included angle between the perpendicular and the connecting line.
Therefore, diversified segmentation modes can be realized, different requirements of users are met, and user experience is improved.
Further, the image processing method further includes: and when the similarity is smaller than a preset threshold value, reminding the user to shoot other images again according to the position of the target object in the first image or the second image with the earlier shooting time.
That is, when the first image and the second image do not meet the synthesis requirement, the other images are re-photographed according to the image with the earlier photographing time, and the image with the earlier photographing time and the other photographed images can be used for subsequent synthesis without re-photographing, so that the synthesis time can be saved, and the user experience can be improved.
Specifically, the step of reminding the user to shoot another image again according to the position of the target object in the first image or the second image with the shooting time being earlier includes: prompting a user to move the mobile terminal to a preset position according to the position of the target object in the first image or the second image with the shooting time being earlier; reminding a user to shoot other images after the mobile terminal is moved to a preset position; wherein there is no overlap between the position of the target object in the first image or the second image that is earlier in the shooting time and the predetermined position.
This makes it possible to obtain another image satisfying the condition as quickly as possible to perform image synthesis.
In the embodiment of the present invention, when the user is reminded to take other images, the image processing method further includes: and shooting the image in the camera at the current moment as a candidate image.
This makes it possible to increase the speed of obtaining an image satisfying the synthesis condition, and to try to synthesize an image using an alternative image when another image captured by the user does not satisfy the requirement.
Further, after the user retakes another image, the image processing method further includes: performing image segmentation on the earlier-shot first image or second image, the retaken other image, and the candidate image, to correspondingly obtain a fifth area image, a sixth area image, and a seventh area image without the target object; acquiring a first similarity between the segmentation edge of the fifth area image and the segmentation edge of the sixth area image, and a second similarity between the segmentation edge of the fifth area image and the segmentation edge of the seventh area image; and synthesizing the fifth area image and the sixth area image according to the first similarity and the second similarity to obtain a synthesized shot image, or synthesizing the fifth area image and the seventh area image to obtain a synthesized shot image.
That is, the other photographed images and the candidate images are respectively tried to be synthesized with the image (the image in the first image and the image in the second image) in the front photographing time, and the image with the best synthesis degree is reserved as the synthesized photographed image, so that the synthesized image with better appearance can be obtained, and the user experience is improved.
Specifically, the fifth area image and the sixth area image are synthesized according to the first similarity and the second similarity to obtain a synthesized shot image; or the step of synthesizing the fifth area image and the seventh area image to obtain a synthesized captured image includes: if the first similarity is larger than the second similarity, synthesizing the fifth area image and the sixth area image to obtain a synthesized shot image; or if the first similarity is equal to the second similarity, synthesizing the fifth area image and the sixth area image or the seventh area image to obtain a synthesized shot image; or if the first similarity is smaller than the second similarity, synthesizing the fifth area image and the seventh area image to obtain a synthesized shot image.
Namely, two regional images with larger similarity are selected for synthesis to obtain a better synthetic image.
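In sketch form, the selection rule reduces to the following (illustrative helper; the labels refer to the "sixth" and "seventh" area images above):

```python
def choose_partner(first_similarity, second_similarity):
    """Pick the region image to merge with the fifth (earlier) region:
    the retaken image's region when its seam matches at least as well,
    else the candidate image's region (ties may use either, per the text)."""
    return "sixth" if first_similarity >= second_similarity else "seventh"
```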
Further, before capturing an image in the camera at the current time as the candidate image, the image processing method further includes: setting the number of alternative images;
correspondingly, the step of taking the image in the camera at the current moment as the alternative image comprises the following steps: and shooting the image in the camera at the current moment as an alternative image by adopting a circular covering mode according to the number.
This prevents an excessive number of candidate images from being captured, and reduces the processing speed of the synthesis.
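The circular covering mode is essentially a fixed-size ring buffer; a minimal sketch in Python (illustrative names; max_count is the user-set number of candidate images):

```python
from collections import deque

class CandidateStore:
    """Fixed-capacity store for automatically captured candidate frames;
    when full, the oldest frame is overwritten (the circular covering mode)."""
    def __init__(self, max_count: int):
        self._frames = deque(maxlen=max_count)  # deque silently drops the oldest

    def capture(self, frame):
        self._frames.append(frame)

    def frames(self):
        return list(self._frames)
```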
In the embodiment of the present invention, the step of acquiring the first image and the at least one second image captured by the user includes: when shooting the first image or the second image, detecting whether the position of the target object in the currently acquired image overlaps the position of the target object in the earlier-shot second image or first image; if there is overlap, prompting the user to move the mobile terminal until the position of the target object in the currently acquired image no longer overlaps the position of the target object in the earlier-shot second image or first image, and then prompting the user to shoot; and if there is no overlap, shooting the currently acquired image to obtain the first image or the second image.
Namely, when the first image and the second image are shot, the condition limitation can be directly carried out on the later shot images, so that the shot images meeting the synthesis condition can be directly obtained, and the subsequent processes are further reduced.
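For illustration, if the target object's position is encoded as an axis-aligned bounding box (an assumption; the description does not fix the encoding), the overlap test is straightforward:

```python
def boxes_overlap(box_a, box_b):
    """Overlap test between two (x, y, w, h) boxes marking the target
    object's position in the earlier shot and in the current preview."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    return not (ax + aw <= bx or bx + bw <= ax or
                ay + ah <= by or by + bh <= ay)
```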
Further, before acquiring a first image and at least one second image photographed by a user, the image processing method further includes: acquiring a self-photographing test image and appearance information of a target; identifying a target object in the self-timer test image according to the target object shape information; and if the identification is accurate, executing the step of acquiring the first image and the at least one second image obtained by photographing by the user.
Thus, the recognition rate of the target object can be improved, and the accuracy of the final composite image can be improved.
Further, the image processing method further includes: and if the target object in the self-shooting test image is identified inaccurately, entering a target object identification training process.
Therefore, the recognition rate of the target object can be ensured, and the processing precision of the scheme on the target object related flow is further improved.
The following further describes the image processing method provided by the embodiment of the present invention, taking as an example the scenario in which a user holds a mobile phone and takes a self-portrait facing a mirror.
In view of the above technical problems, an embodiment of the present invention provides an image processing method that takes at least two mirror self-portraits with slightly different phone positions, recognizes the mobile phone through software processing, intelligently prompts the user on where to hold the phone, and removes the phone from the mirror self-portrait through post-processing synthesis, improving the quality of mirror self-portraits.
The scheme provided by the embodiment of the invention mainly comprises the following parts:
Part one: when the mobile phone first enters this mode (i.e., the first time the image processing method of this scheme is used), the user holds the phone and takes a self-portrait of it facing the mirror, namely photo 1 (shown in FIG. 2). With the phone's color specified (selected through a menu option or picked manually from the self-portrait), the phone's outline is recognized from depth of field and color difference, and the phone is framed with lines that contrast clearly with both the phone and the background. The user may adjust the wire frame to ensure the phone is correctly recognized; image recognition learning (appearance, color distribution, and the like) is then performed on the phone inside the frame. Once this is complete, the phone's angle may be changed slightly and its position changed freely, and mirror self-portraits can proceed.
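A rough sketch of the color-based part of this recognition (illustrative only: the depth-of-field cue is omitted, and this OpenCV-based helper is an assumption, not the patented recognizer; lo_hsv/hi_hsv bound the user-specified phone color in HSV space):

```python
import cv2
import numpy as np

def find_phone_box(img_bgr, lo_hsv, hi_hsv):
    """Locate the phone by its specified color and return an (x, y, w, h)
    wire frame that the user can then adjust."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lo_hsv, hi_hsv)        # pixels matching the color
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                                # nothing matched the color
    phone = max(contours, key=cv2.contourArea)     # assume the largest blob
    return cv2.boundingRect(phone)
```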
Part two: the user selects a self-photographing pose and, holding the phone in one hand, takes the first mirror self-portrait, obtaining photo 2 (shown in FIG. 3);
the user changes the left hand and the right hand under the condition of keeping the integral self-photographing posture unchanged (specifically, the posture of only changing the handheld mobile phone), the other hand is adopted for holding the posture for self-photographing, meanwhile, the replaced hand can put out the favorite shape, at the moment, whether the position of the mobile phone is overlapped with the position of the mobile phone in the picture 3 is judged by detecting the position of the mobile phone, if the position is overlapped, the mobile phone enters a third part, and if the position is not overlapped, the mobile phone enters a fourth part;
and part three: prompting the user that the overlapping occurs through unlimited modes such as voice, vibration, flash lamp and the like, asking the user to move the position of the mobile phone until the overlapping does not occur, and entering a part four;
and part four: by comparing the similarity between the non-mobile phone area (area a in fig. 5) in fig. 3 and the adjacent area of the non-mobile phone area (area B in fig. 6) in the current preview interface, if the similarity basically meets the synthesis requirement (i.e., the image looks like after the local position is finely adjusted, the boundary is stretched and blurred), the process enters part five, otherwise, the user is prompted to move in a possible direction (up, down, left and right) in a voice mode according to the relative position of the mobile phone in fig. 2 until the synthesis requirement is met, or the user gives up the self-shooting, and the process ends.
Part five: the user is prompted to take self-portrait photo 3 (shown in FIG. 4) by voice, vibration, flash, or other means distinct from those of part three; at the same time, the picture at that moment is automatically captured as a candidate picture (the user can set the number of automatic candidate pictures to prevent capturing too many; once the limit is exceeded, capture continues cyclically by overwriting the earliest candidates). After the user actively takes the photo, the flow enters part six;
Taking candidate pictures guards against the user failing to capture photo 3, or shaking while capturing it so that the resulting image is unusable, which would make synthesis impossible.
Part six: self-portrait photos 2 and 3 are synthesized, as shown in FIGS. 7 and 8. Let the center point of the phone in FIG. 4 be S1 and that in FIG. 3 be S2, and let O be the midpoint of the line connecting S1 and S2; a perpendicular to line S1S2 is drawn through O. This straight line divides the composite into two areas, A and B. After synthesis, area A is filled with the corresponding phone-free portion of photo 2 (specifically, area A of the image shown in FIG. 5), and area B is likewise filled with the corresponding phone-free portion of photo 3 (specifically, area B of the image shown in FIG. 6), finally yielding the composite image shown in FIG. 9, and the flow ends.
The perpendicular through point O can also be adjusted manually: once drawn, it can be moved up and down along the S1S2 line or rotated around the intersection point O, and a thumbnail can provide a real-time preview of the effect.
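In symbols, with $S_1=(x_1,y_1)$ and $S_2=(x_2,y_2)$ the phone centers, the construction above is:

$$O=\left(\frac{x_1+x_2}{2},\ \frac{y_1+y_2}{2}\right),\qquad (x_2-x_1)\,(x-O_x)+(y_2-y_1)\,(y-O_y)=0,$$

where the second equation defines the dividing line: it passes through $O$ and is perpendicular to $S_1S_2$, because its direction is orthogonal to $(x_2-x_1,\ y_2-y_1)$. Moving the line along $S_1S_2$ replaces $O$ by $O'=S_1+t\,(S_2-S_1)$ for some $t\in(0,1)$, and rotating it about the intersection point changes the line's direction while keeping it through $O$.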
Specifically, the scheme provided by the embodiment of the present invention can be as shown in fig. 10, and includes:
step 101: starting;
step 102: reminding a user to shoot the appearance of the mobile phone by facing the mirror in a standard mode;
the standard manner may be customized and is not limited thereto.
Step 103: judging whether the color of the specified mobile phone is correctly identified and the mobile phone is marked, if not, entering a step 104, and if so, entering a step 105;
step 104: entering a mobile phone self-timer identification training process, and entering a step 1013;
step 105: reminding a user to select a self-photographing gesture to photograph a first self-photographing picture facing a mirror;
step 106: reminding a user of replacing the left hand and the right hand of the handheld mobile phone to prepare for shooting a second self-photo, and identifying the position of the mobile phone at the moment;
step 107: judging whether the position of the mobile phone at the moment is superposed with the position of the mobile phone in the first self-photographing or not according to the recognized position of the mobile phone, if so, entering step 108; if not, go to step 110;
step 108: prompting the user to move the position of the mobile phone in a voice mode;
and other modes such as vibration and the like can also be adopted for prompting.
Step 109: detecting the mobile phone position of the user in real time, and returning to the step 107;
step 1010: prompting the user to take a picture in a voice mode;
and other modes such as vibration and the like can also be adopted for prompting.
Step 1011: shooting a second self-photograph;
step 1012: starting software synthesis and displaying effects;
step 1013: and (6) ending.
In the embodiment of the present invention, when the user takes the first self-portrait, the user may be prompted about where to hold the phone, and likewise when the user subsequently takes the second self-portrait, for example: holding the phone low for the first self-portrait and, after switching hands, holding it near the eyes for the second.
With such specific prompts, the synthesis effect can be displayed in real time as soon as possible. If satisfied, the user can confirm outputting the composite image through interaction such as voice or keys (without limitation), or choose to give up and reshoot for a new synthesis, further improving the experience.
In the embodiment of the present invention, the synthesis may be performed by more than two self-photographs, so as to further improve the synthesis effect, which is not limited herein.
An embodiment of the present invention further provides a mobile terminal, as shown in fig. 11, where the mobile terminal includes:
the first acquiring module 111 is configured to acquire a first image and at least one second image captured by a user, where positions of target objects displayed in the first image and the second image are different;
a first processing module 112, configured to perform image segmentation on the first image and the second image according to positions of the identified target objects in the first image and the second image;
a first synthesizing module 113, configured to synthesize the first image and the second image according to the segmented image to obtain a synthesized captured image, where the target does not exist in the synthesized captured image.
The mobile terminal provided by the embodiment of the invention acquires a first image and at least one second image obtained by photographing by a user, wherein the positions of target objects displayed in the first image and the second image are different; performing image segmentation on the first image and the second image according to the positions of the identified target objects in the first image and the second image; synthesizing the first image and the second image according to the segmented image to obtain a synthesized shot image, wherein the target object does not exist in the synthesized shot image; the problem that redundant target objects exist in images obtained by a photographing scheme in the prior art can be solved, and the user self-photographing experience of the opposite mirror is improved.
Wherein the first processing module comprises: the first processing submodule is used for carrying out image segmentation on the first image and the second image according to the positions of the identified target objects in the first image and the second image to obtain at least two area images without the target objects;
correspondingly, the first synthesis module comprises: and the first synthesis submodule is used for synthesizing the area images to obtain a synthesized shot image.
It is explained here that at least two area images in which the object does not exist include an area image on the first image and an area image on the second image, which can ensure that the first image and the second image are synthesized to obtain a synthesized captured image that does not include the object.
Specifically, the number of the second images is one, and the first processing sub-module includes: the first processing unit is used for dividing the first image into a first area image with a target object and a second area image without the target object according to a first straight line, and dividing the second image into a third area image with the target object and a fourth area image without the target object; the first straight line is a straight line which is intersected with a preset point on a second straight line and forms a preset included angle with the second straight line, and the second straight line is a connecting line of a central point of a target object in the first image and a central point of the target object in the second image.
Thus, the image is segmented to obtain the area image without the target object more easily and quickly.
Wherein the first synthesis submodule comprises: a first acquisition unit configured to acquire a similarity between a cut edge of the second region image and a cut edge of the third region image; and the first synthesizing unit is used for synthesizing the cut edge of the second area image and the cut edge of the third area image when the similarity is greater than or equal to a preset threshold value to obtain a synthesized shot image.
Therefore, seamless joint of the area images can be realized, the complete image which looks like direct shooting is obtained, and the user experience is improved.
Further, the mobile terminal further includes: and the first reminding module is used for reminding the user to shoot other images again according to the position of the target object in the first image or the second image with the shooting time before when the similarity is smaller than the preset threshold value.
That is, when the first image and the second image do not meet the synthesis requirement, the other images are re-photographed according to the image with the earlier photographing time, and the image with the earlier photographing time and the other photographed images can be used for subsequent synthesis without re-photographing, so that the synthesis time can be saved, and the user experience can be improved.
Specifically, the first reminding module comprises: the first prompting sub-module is used for prompting a user to move the mobile terminal to a preset position according to the position of the target object in the first image or the second image with the shooting time being earlier; the first reminding sub-module is used for reminding a user to shoot other images after the mobile terminal is moved to a preset position; wherein there is no overlap between the position of the target object in the first image or the second image that is earlier in the shooting time and the predetermined position.
This makes it possible to obtain another image satisfying the condition as quickly as possible to perform image synthesis.
In the embodiment of the present invention, the mobile terminal further includes: the first shooting module is used for shooting the image in the camera at the current moment as a candidate image when reminding the user to shoot other images.
This makes it possible to increase the speed of obtaining an image satisfying the synthesis condition, and to try to synthesize an image using an alternative image when another image captured by the user does not satisfy the requirement.
Further, the mobile terminal further includes: the second processing module is used for performing image segmentation on the earlier-shot first image or second image, the retaken other image, and the candidate image after the user retakes another image, correspondingly obtaining a fifth area image, a sixth area image, and a seventh area image without the target object; the second acquisition module is used for acquiring a first similarity between the segmentation edge of the fifth area image and the segmentation edge of the sixth area image, and a second similarity between the segmentation edge of the fifth area image and the segmentation edge of the seventh area image; the second synthesis module is used for synthesizing the fifth area image and the sixth area image according to the first similarity and the second similarity to obtain a synthesized shot image, or synthesizing the fifth area image and the seventh area image to obtain a synthesized shot image.
That is, the other photographed images and the candidate images are respectively tried to be synthesized with the image (the image in the first image and the image in the second image) in the front photographing time, and the image with the best synthesis degree is reserved as the synthesized photographed image, so that the synthesized image with better appearance can be obtained, and the user experience is improved.
Specifically, the second synthesis module includes: the second synthesis submodule is used for synthesizing the fifth area image and the sixth area image to obtain a synthesized shot image if the first similarity is greater than the second similarity; or if the first similarity is equal to the second similarity, synthesizing the fifth area image and the sixth area image or the seventh area image to obtain a synthesized shot image; or if the first similarity is smaller than the second similarity, synthesizing the fifth area image and the seventh area image to obtain a synthesized shot image.
Namely, two regional images with larger similarity are selected for synthesis to obtain a better synthetic image.
Further, the mobile terminal further includes: the first setting module is used for setting the number of the alternative images before the images in the camera at the current moment are shot as the alternative images;
correspondingly, the first shooting module comprises: and the first shooting submodule is used for shooting the image in the camera at the current moment as a candidate image in a circulating coverage mode according to the number.
This prevents an excessive number of candidate images from being captured, and reduces the processing speed of the synthesis.
In an embodiment of the present invention, the first obtaining module includes: the first detection submodule is used for detecting, when the first image or the second image is shot, whether the position of the target object in the currently acquired image overlaps the position of the target object in the earlier-shot second image or first image; the second processing sub-module is used for prompting the user to move the mobile terminal if overlap exists, until the position of the target object in the currently acquired image no longer overlaps the position of the target object in the earlier-shot second image or first image, and then prompting the user to shoot; and the third processing submodule is used for shooting the currently acquired image to obtain the first image or the second image if no overlap exists.
Namely, when the first image and the second image are shot, the condition limitation can be directly carried out on the later shot images, so that the shot images meeting the synthesis condition can be directly obtained, and the subsequent processes are further reduced.
Further, the mobile terminal further includes: the third acquisition module is used for acquiring a self-photographing test image and the appearance information of the target before acquiring the first image and the at least one second image which are obtained by photographing by the user; the first identification module is used for identifying a target object in the self-timer test image according to the appearance information of the target object; and the first execution module is used for executing the operation of acquiring the first image and the at least one second image obtained by photographing by the user if the identification is accurate.
Thus, the recognition rate of the target object can be improved, and the accuracy of the final composite image can be improved.
Further, the mobile terminal further includes: and the third processing module is used for entering a target object identification training process if the target object in the self-shooting test image is identified inaccurately.
Therefore, the recognition rate of the target object can be ensured, and the processing precision of the scheme on the target object related flow is further improved.
The mobile terminal provided in the embodiment of the present invention can implement each process implemented by the terminal in the method embodiments of fig. 1 to fig. 10, and is not described herein again to avoid repetition.
Fig. 12 is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, where the mobile terminal 120 includes, but is not limited to: a radio frequency unit 121, a network module 122, an audio output unit 123, an input unit 124, a sensor 125, a display unit 126, a user input unit 127, an interface unit 128, a memory 129, a processor 1210, and a power source 1211. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 12 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 1210 is configured to acquire a first image and at least one second image captured by a user, where positions of target objects displayed in the first image and the second image are different; perform image segmentation on the first image and the second image according to the positions of the identified target objects in the first image and the second image; and synthesize the first image and the second image according to the segmented image to obtain a synthesized shot image, wherein the target object does not exist in the synthesized shot image.
In the embodiment of the invention, a first image and at least one second image obtained by photographing by a user are obtained, wherein the positions of target objects displayed in the first image and the second image are different; performing image segmentation on the first image and the second image according to the positions of the identified target objects in the first image and the second image; synthesizing the first image and the second image according to the segmented image to obtain a synthesized shot image, wherein the target object does not exist in the synthesized shot image; the problem that redundant objects exist in images obtained by a photographing scheme in the prior art can be solved.
Optionally, the processor 1210 is specifically configured to perform image segmentation on the first image and the second image according to positions of the identified target objects in the first image and the second image, so as to obtain at least two area images where the target object does not exist; and synthesizing the area images to obtain a synthesized shot image.
Optionally, the processor 1210 is specifically configured to divide the first image into a first area image with a target object and a second area image without the target object according to a first straight line, and divide the second image into a third area image with the target object and a fourth area image without the target object; the first straight line is a straight line which is intersected with a preset point on a second straight line and forms a preset included angle with the second straight line, and the second straight line is a connecting line of a central point of a target object in the first image and a central point of the target object in the second image.
Optionally, the processor 1210 is specifically configured to obtain a similarity between a cut edge of the second area image and a cut edge of the third area image; and when the similarity is greater than or equal to a preset threshold value, synthesizing the cut edge of the second area image and the cut edge of the third area image to obtain a synthesized shot image.
Optionally, the processor 1210 is further configured to, when the similarity is smaller than a preset threshold, remind the user to shoot another image again according to a position of the target object in the first image or the second image which is shot earlier.
Optionally, the processor 1210 is specifically configured to prompt the user to move the mobile terminal to the predetermined position according to the position of the target object in the first image or the second image with the earlier shooting time; reminding a user to shoot other images after the mobile terminal is moved to a preset position; wherein there is no overlap between the position of the target object in the first image or the second image that is earlier in the shooting time and the predetermined position.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 121 may be used for receiving and sending signals during a message sending and receiving process or a call process, and specifically, receives downlink data from a base station and then processes the received downlink data to the processor 1210; in addition, the uplink data is transmitted to the base station. Generally, the radio frequency unit 121 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 121 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 122, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 123 may convert audio data received by the radio frequency unit 121 or the network module 122 or stored in the memory 129 into an audio signal and output as sound. Also, the audio output unit 123 may also provide audio output related to a specific function performed by the mobile terminal 120 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 123 includes a speaker, a buzzer, a receiver, and the like.
The input unit 124 is used to receive an audio or video signal. The input Unit 124 may include a Graphics Processing Unit (GPU) 1241 and a microphone 1242, and the graphics processor 1241 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 126. The image frames processed by the graphics processor 1241 may be stored in the memory 129 (or other storage medium) or transmitted via the radio frequency unit 121 or the network module 122. The microphone 1242 may receive sounds and may be capable of processing such sounds into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 121.
The mobile terminal 120 also includes at least one sensor 125, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 1261 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 1261 and/or backlight when the mobile terminal 120 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 125 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 126 is used to display information input by the user or information provided to the user. The Display unit 126 may include a Display panel 1261, and the Display panel 1261 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 127 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 127 includes a touch panel 1271 and other input devices 1272. Touch panel 1271, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., user operations on touch panel 1271 or near touch panel 1271 using a finger, stylus, or any other suitable object or attachment). Touch panel 1271 may include two portions, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1210, receives a command from the processor 1210, and executes the command. In addition, the touch panel 1271 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to touch panel 1271, user input unit 127 may include other input devices 1272. In particular, other input devices 1272 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, touch panel 1271 can be overlaid on display panel 1261, and when touch panel 1271 detects a touch operation thereon or nearby, it can be transmitted to processor 1210 to determine the type of touch event, and then processor 1210 can provide corresponding visual output on display panel 1261 according to the type of touch event. Although in fig. 12, the touch panel 1271 and the display panel 1261 are implemented as two independent components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1271 and the display panel 1261 may be integrated to implement the input and output functions of the mobile terminal, and are not limited herein.
The interface unit 128 is an interface through which an external device is connected to the mobile terminal 120. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 128 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 120 or may be used to transmit data between the mobile terminal 120 and external devices.
The memory 129 may be used to store software programs as well as various data. The memory 129 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 129 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 1210 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 129 and calling data stored in the memory 129, thereby performing overall monitoring of the mobile terminal. Processor 1210 may include one or more processing units; preferably, the processor 1210 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1210.
The mobile terminal 120 may also include a power source 1211 (e.g., a battery) for powering the various components, and the power source 1211 may be logically coupled to the processor 1210 via a power management system that may be configured to manage charging, discharging, and power consumption.
In addition, the mobile terminal 120 includes some functional modules that are not shown, and thus, the detailed description thereof is omitted.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 1210, a memory 129, and a computer program stored in the memory 129 and capable of running on the processor 1210, where the computer program is executed by the processor 1210 to implement each process of the above-mentioned image processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above image processing method embodiment and achieves the same technical effect; to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. An image processing method applied to a mobile terminal, characterized by comprising the following steps:
acquiring a first image and at least one second image captured through photographing by a user, wherein a target object is displayed at different positions in the first image and the second image;
performing image segmentation on the first image and the second image according to the positions of the identified target objects in the first image and the second image;
synthesizing the first image and the second image according to the segmented image to obtain a synthesized shot image, wherein the target object does not exist in the synthesized shot image;
the step of performing image segmentation on the first image and the second image according to the positions of the identified target object in the first image and the second image comprises:
according to the positions of the identified target object in the first image and the second image, performing image segmentation on the first image and the second image to obtain at least two area images without the target object;
the step of synthesizing the first image and the second image according to the divided image to obtain a synthesized captured image includes:
synthesizing the area images to obtain a synthesized shot image;
the number of the second images is one, and the step of performing image segmentation on the first image and the second image according to the positions of the identified target object in the first image and the second image to obtain at least two area images without the target object comprises:
dividing, according to a first straight line, the first image into a first area image containing the target object and a second area image without the target object, and dividing the second image into a third area image containing the target object and a fourth area image without the target object;
wherein the first straight line is a straight line that passes through a preset point on a second straight line and forms a preset included angle with the second straight line, and the second straight line is the line connecting the central point of the target object in the first image with the central point of the target object in the second image.
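To make the geometry of claim 1 concrete, the sketch below (Python/NumPy) splits an image along the first straight line. It is a minimal illustration, not the patented method: it assumes the target-object centers c1 and c2 have already been located by some upstream detector, and it fixes the preset point at the midpoint of the second straight line and the preset included angle at 90 degrees (a perpendicular bisector); those defaults and every name below are assumptions of this example.

```python
import numpy as np

def first_line_mask(shape, c1, c2, angle_deg=90.0, t=0.5):
    """Boolean mask that is True on the c1 side of the 'first straight line'.

    shape     -- (H, W, ...) of the images to split
    c1, c2    -- (x, y) target-object centers in the first and second image;
                 the 'second straight line' is the segment joining them
    t         -- the 'preset point' as a fraction along c1 -> c2 (assumed 0.5)
    angle_deg -- the 'preset included angle' with the second line (assumed 90)
    """
    c1 = np.asarray(c1, dtype=float)
    c2 = np.asarray(c2, dtype=float)
    p = c1 + t * (c2 - c1)                      # preset point on the second line
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    line_dir = rot @ (c2 - c1)                  # direction of the first line
    normal = np.array([-line_dir[1], line_dir[0]])

    h, w = shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    side = (xs - p[0]) * normal[0] + (ys - p[1]) * normal[1]
    c1_sign = float((c1 - p) @ normal)          # orient the mask toward c1
    return side * c1_sign > 0

# Usage sketch with HxWx3 images img1/img2 and detected centers c1/c2:
#   mask = first_line_mask(img1.shape, c1, c2)
#   second_area = img1 * ~mask[..., None]   # first image, target object removed
#   fourth_area = img2 *  mask[..., None]   # second image, target object removed
```

With the perpendicular-bisector defaults, the line is guaranteed to separate c1 from c2, so each image contributes one area image that does not contain the target object.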
2. The image processing method according to claim 1, wherein the step of synthesizing the area images to obtain a synthesized captured image includes:
acquiring the similarity between the cut edge of the second area image and the cut edge of the third area image;
and when the similarity is greater than or equal to a preset threshold value, synthesizing the cut edge of the second area image and the cut edge of the third area image to obtain a synthesized shot image.
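Claims 2 and 3 gate the synthesis on how similar the two cut edges are, but fix neither the similarity measure nor the threshold. The sketch below substitutes a simple stand-in (one minus the mean absolute difference over a thin pixel strip along the cut, with an assumed threshold of 0.9) and reuses the mask from the previous sketch; none of this is the claimed computation itself.

```python
import numpy as np

def cut_edge_similarity(img1, img2, mask, strip=3):
    """Similarity in [0, 1] between the two images' pixel strips along the
    cut, where the cut is the boundary of the half-plane mask."""
    edge = np.zeros(mask.shape, dtype=bool)
    edge[:, 1:] |= mask[:, 1:] ^ mask[:, :-1]   # mask flips between columns
    edge[1:, :] |= mask[1:, :] ^ mask[:-1, :]   # mask flips between rows
    for _ in range(strip - 1):                  # thicken the boundary a little
        grown = edge.copy()
        grown[:, 1:] |= edge[:, :-1]
        grown[:, :-1] |= edge[:, 1:]
        grown[1:, :] |= edge[:-1, :]
        grown[:-1, :] |= edge[1:, :]
        edge = grown
    if not edge.any():                          # degenerate mask: no cut
        return 0.0
    a = img1[edge].astype(float)
    b = img2[edge].astype(float)
    return 1.0 - np.abs(a - b).mean() / 255.0   # 1.0 = identical strips

def synthesize(img1, img2, mask, threshold=0.9):
    """Stitch the two object-free areas when the cut edges agree; return
    None otherwise so the caller can remind the user to re-shoot."""
    if cut_edge_similarity(img1, img2, mask) < threshold:
        return None
    return np.where(mask[..., None], img2, img1)  # img2 fills the c1 side
```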
3. The image processing method according to claim 2, characterized in that the image processing method further comprises:
and when the similarity is smaller than the preset threshold value, reminding the user to re-shoot another image according to the position of the target object in whichever of the first image and the second image was shot earlier.
4. The image processing method according to claim 3, wherein the step of reminding the user to re-shoot another image according to the position of the target object in whichever of the first image and the second image was shot earlier comprises:
prompting the user, according to the position of the target object in whichever of the first image and the second image was shot earlier, to move the mobile terminal to a predetermined position;
reminding the user to shoot another image after the mobile terminal has been moved to the predetermined position;
wherein the position of the target object in whichever of the first image and the second image was shot earlier does not overlap the predetermined position.
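Claims 3 and 4 (mirrored by claims 7 and 8 below) constrain the predetermined position only in that it must not overlap the target object's position in the earlier shot. One plausible reminder flow is sketched here; the half-frame heuristic, the bounding-box input, and the prompt are all assumptions of this example rather than anything the claims prescribe.

```python
def remind_reshoot(obj_box, frame_w, frame_h, margin=20):
    """Pick a predetermined position that does not overlap the target
    object's bounding box in the earlier shot, then prompt the user.

    obj_box -- (x0, y0, x1, y1) of the target object in whichever of the
               first and second image was shot earlier (assumed detected).
    Returns the suggested region as (x0, y0, x1, y1).
    """
    x0, _, x1, _ = obj_box
    if (x0 + x1) / 2 > frame_w / 2:          # object toward the right
        right = min(frame_w // 2, int(x0)) - margin   # stay clear of the box
        region = (margin, margin, right, frame_h - margin)
    else:                                    # object toward the left
        left = max(frame_w // 2, int(x1)) + margin
        region = (left, margin, frame_w - margin, frame_h - margin)
    # A real terminal would highlight `region` in the viewfinder; the print
    # below merely stands in for that hypothetical UI call.
    print(f"Cut edges too dissimilar: move the phone into {region} and shoot again.")
    return region
```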
5. A mobile terminal, characterized in that the mobile terminal comprises:
a first acquisition module, used for acquiring a first image and at least one second image captured through photographing by a user, wherein a target object is displayed at different positions in the first image and the second image;
the first processing module is used for carrying out image segmentation on the first image and the second image according to the positions of the identified target object in the first image and the second image;
a first synthesis module, configured to synthesize the first image and the second image according to the segmented image to obtain a synthesized captured image, where the target object does not exist in the synthesized captured image;
the first processing module comprises:
the first processing submodule is used for carrying out image segmentation on the first image and the second image according to the positions of the identified target object in the first image and the second image to obtain at least two area images without the target object;
the first synthesis module comprises:
the first synthesis submodule is used for synthesizing the area images to obtain a synthesized shot image;
the number of the second images is one, and the first processing sub-module includes:
the first processing unit is used for dividing, according to a first straight line, the first image into a first area image containing the target object and a second area image without the target object, and for dividing the second image into a third area image containing the target object and a fourth area image without the target object;
wherein the first straight line is a straight line that passes through a preset point on a second straight line and forms a preset included angle with the second straight line, and the second straight line is the line connecting the central point of the target object in the first image with the central point of the target object in the second image.
6. The mobile terminal of claim 5, wherein the first synthesis submodule comprises:
a first acquisition unit, configured to acquire the similarity between the cut edge of the second area image and the cut edge of the third area image;
and the first synthesizing unit is used for synthesizing, when the similarity is greater than or equal to a preset threshold value, the cut edge of the second area image and the cut edge of the third area image to obtain a synthesized shot image.
7. The mobile terminal of claim 6, wherein the mobile terminal further comprises:
and the first reminding module is used for reminding the user, when the similarity is smaller than the preset threshold value, to re-shoot another image according to the position of the target object in whichever of the first image and the second image was shot earlier.
8. The mobile terminal of claim 7, wherein the first reminding module comprises:
the first prompting sub-module is used for prompting the user, according to the position of the target object in whichever of the first image and the second image was shot earlier, to move the mobile terminal to a predetermined position;
the first reminding sub-module is used for reminding the user to shoot another image after the mobile terminal has been moved to the predetermined position;
wherein the position of the target object in whichever of the first image and the second image was shot earlier does not overlap the predetermined position.
CN201810209159.2A 2018-03-14 2018-03-14 Image processing method and mobile terminal Active CN108234888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810209159.2A CN108234888B (en) 2018-03-14 2018-03-14 Image processing method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810209159.2A CN108234888B (en) 2018-03-14 2018-03-14 Image processing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN108234888A CN108234888A (en) 2018-06-29
CN108234888B true CN108234888B (en) 2020-06-09

Family

ID=62658546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810209159.2A Active CN108234888B (en) 2018-03-14 2018-03-14 Image processing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN108234888B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415318B (en) * 2019-07-26 2023-05-05 上海掌门科技有限公司 Image processing method and device
CN115423752B (en) * 2022-08-03 2023-07-07 荣耀终端有限公司 Image processing method, electronic equipment and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102812490A (en) * 2010-01-20 2012-12-05 三洋电机株式会社 Image processing device and electronic apparatus
CN103685957A (en) * 2013-12-13 2014-03-26 苏州市峰之火数码科技有限公司 Processing method for mobile phone self-timer system
CN103826065A (en) * 2013-12-12 2014-05-28 小米科技有限责任公司 Image processing method and apparatus
CN104580882A (en) * 2014-11-03 2015-04-29 宇龙计算机通信科技(深圳)有限公司 Photographing method and device
CN107423409A * 2017-07-28 2017-12-01 维沃移动通信有限公司 Image processing method, image processing apparatus and electronic equipment
CN107734260A * 2017-10-26 2018-02-23 维沃移动通信有限公司 Image processing method and mobile terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007032082A1 (en) * 2005-09-16 2007-03-22 Fujitsu Limited Image processing method, and image processing device

Also Published As

Publication number Publication date
CN108234888A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
CN109361865B (en) Shooting method and terminal
CN111355889B (en) Shooting method, shooting device, electronic equipment and storage medium
CN109639970B (en) Shooting method and terminal equipment
CN109600550B (en) Shooting prompting method and terminal equipment
CN108881733B (en) Panoramic shooting method and mobile terminal
CN110809115B (en) Shooting method and electronic equipment
CN110602401A (en) Photographing method and terminal
CN111541845A (en) Image processing method and device and electronic equipment
CN110365907B (en) Photographing method and device and electronic equipment
CN108495045B (en) Image capturing method, image capturing apparatus, electronic apparatus, and storage medium
CN108924412B (en) Shooting method and terminal equipment
CN109660723B (en) Panoramic shooting method and device
CN108848313B (en) Multi-person photographing method, terminal and storage medium
CN111182205A (en) Photographing method, electronic device, and medium
CN109905603B (en) Shooting processing method and mobile terminal
CN109361874B (en) Photographing method and terminal
CN107948498B Method for eliminating camera moiré fringes and mobile terminal
CN109474787B (en) Photographing method, terminal device and storage medium
CN109102555B (en) Image editing method and terminal
CN108881544B (en) Photographing method and mobile terminal
CN109885368A Interface display anti-shake method and mobile terminal
CN111432195A (en) Image shooting method and electronic equipment
CN111083371A (en) Shooting method and electronic equipment
CN111246102A (en) Shooting method, shooting device, electronic equipment and storage medium
CN109688253A Image pickup method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant