CN108769521B - Photographing method, mobile terminal and computer readable storage medium - Google Patents

Photographing method, mobile terminal and computer readable storage medium

Info

Publication number
CN108769521B
CN108769521B
Authority
CN
China
Prior art keywords
image
foreground target
preview
foreground
original image
Prior art date: 2018-06-05
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810570646.1A
Other languages
Chinese (zh)
Other versions
CN108769521A (en)
Inventor
王宇鹭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2018-06-05
Filing date: 2018-06-05
Publication date: 2021-02-02
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810570646.1A
Publication of CN108769521A
Application granted
Publication of CN108769521B
Legal status: Active (Current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides a photographing method, a mobile terminal and a computer-readable storage medium. The photographing method includes: acquiring a preview picture collected by a camera; identifying feature information of the preview picture, the feature information including a background label of the preview picture; performing image processing on the preview picture based on the feature information; acquiring, after a photographing instruction is received, an original image collected by the camera; and performing image processing on the original image based on the feature information of a preview picture a preset number of frames before the original image, to obtain a photo.

Description

Photographing method, mobile terminal and computer readable storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to a photographing method, a mobile terminal, and a computer-readable storage medium.
Background
With the development of intelligent mobile terminals, people take pictures with mobile terminals such as mobile phones more and more frequently. Most existing photographing functions of mobile terminals support image processing, such as filter, skin-smoothing and whitening functions for human faces.
At present, both during preview and at the moment of capture, the pictures collected by the camera are detected and processed correspondingly, so that the preview pictures and the captured photos show the same effect. However, such a photographing method consumes a large amount of memory.
Disclosure of Invention
In view of this, embodiments of the present application provide a photographing method, a mobile terminal, and a computer-readable storage medium, so as to solve the problem that the current photographing method consumes a relatively large amount of memory.
A first aspect of an embodiment of the present application provides a photographing method, including:
acquiring a preview picture acquired by the camera;
identifying characteristic information of the preview picture, and carrying out image processing on the preview picture based on the characteristic information, wherein the characteristic information of the preview picture comprises a background label of the preview picture;
after receiving a photographing instruction, acquiring an original image acquired by the camera;
and performing image processing on the original image based on the feature information of a preview picture a preset number of frames before the original image, to obtain a photo.
A second aspect of an embodiment of the present application provides a mobile terminal, including:
the preview image acquisition module is used for acquiring a preview image acquired by the camera;
the preview image processing module is used for identifying the characteristic information of the preview image and processing the preview image based on the characteristic information, wherein the characteristic information of the preview image comprises a background label of the preview image;
the characteristic information acquisition module is used for acquiring an original image acquired by the camera after receiving a photographing instruction;
and the photographing processing module is used for performing image processing on the original image based on the feature information of a preview picture a preset number of frames before the original image, to obtain a photo.
A third aspect of an embodiment of the present application provides a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method provided in the first aspect of the embodiment of the present application when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by one or more processors, performs the steps of the method provided by the first aspect of embodiments of the present application.
A fifth aspect of embodiments of the present application provides a computer program product comprising a computer program that, when executed by one or more processors, performs the steps of the method provided by the first aspect of embodiments of the present application.
In the embodiments of the application, a preview picture collected by the camera is first acquired, feature information of the preview picture is identified, and image processing is performed on the preview picture based on the feature information, where the feature information of the preview picture includes a background label of the preview picture. After a photographing instruction is received, an original image collected by the camera is acquired, and image processing is performed on the original image based on the feature information of a preview picture a preset number of frames before the original image, to obtain a photo. Because the background label of the captured original image does not need to be identified at the moment of photographing, and the background label of a preview picture a preset number of frames before the original image is used instead to process the original image, the memory consumed during photographing can be reduced, while it is still ensured that the captured photo and the preview picture seen by the user during preview show the same effect.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments or the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a photographing method according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating an implementation of another photographing method provided in the embodiment of the present application;
fig. 3 is a schematic flow chart illustrating an implementation of another photographing method provided in the embodiment of the present application;
fig. 4 is a schematic block diagram of a mobile terminal according to an embodiment of the present application;
fig. 5 is a schematic block diagram of another mobile terminal provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 is a schematic flow chart of an implementation of a photographing method provided in an embodiment of the present application, which is applied to a mobile terminal. As shown in the figure, the method may include the following steps:
and step S101, acquiring a preview picture acquired by the camera.
In the embodiment of the application, when a user needs to take a picture, the camera of the mobile terminal is usually started. The camera collects preview pictures in real time, and the preview pictures collected in real time can be displayed on the screen of the mobile terminal.
And step S102, identifying the characteristic information of the preview picture, and carrying out image processing on the preview picture based on the characteristic information, wherein the characteristic information of the preview picture comprises a background label of the preview picture.
In this embodiment of the application, the feature information of the preview picture includes a background label of the preview picture. Identifying the feature information of the preview picture means identifying the scene of the preview picture through a scene recognition model and deriving the background label of the preview picture from the identified scene, where the scene recognition model may be a convolutional neural network model. The background label of the preview picture is then used to perform the corresponding image processing on the image to be processed.
For example, if the background label is a night-scene label, noise reduction processing may be applied to the preview picture, and after the noise reduction, the denoised picture may be subjected to image enhancement processing; if the background label is a sunset label, processing that increases image brightness may be applied to the preview picture.
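By way of illustration only, the label-driven global processing described above might be sketched as follows in Python with OpenCV; the label strings, the choice of denoising/enhancement operators and all parameter values are assumptions, since the patent does not prescribe concrete algorithms:

    import cv2

    def process_by_background_label(img, label):
        # Apply label-specific global processing to a BGR frame.
        if label == "night":
            # Noise reduction first, then enhancement of the denoised frame.
            denoised = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
            return cv2.convertScaleAbs(denoised, alpha=1.2, beta=10)
        if label == "sunset":
            # Raise the overall image brightness.
            return cv2.convertScaleAbs(img, alpha=1.0, beta=30)
        return img  # unknown label: leave the frame unchanged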
As another embodiment of the present application, before identifying the feature information of the preview screen, the method further includes:
and compressing the preview picture acquired by the camera.
Correspondingly, the characteristic information for identifying the preview picture comprises:
and identifying the characteristic information of the preview picture after the compression processing.
In the embodiment of the application, if scene recognition were performed directly on the preview picture collected by the camera, the relatively large size of that picture would increase the time needed for scene recognition and, at the same time, the memory occupancy. The preview picture is therefore compressed before its feature information is identified.
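A minimal sketch of this compression step, assuming OpenCV and an arbitrary recognition input size (the patent does not fix one); the recognized label is still applied to the full-resolution frame:

    import cv2

    def compress_for_recognition(preview, target=(224, 224)):
        # Down-scale the preview frame before scene recognition to cut
        # recognition time and memory occupancy.
        return cv2.resize(preview, target, interpolation=cv2.INTER_AREA)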
In practical application, the camera collects preview pictures at preset time intervals. To further reduce memory occupancy, scene recognition need not be performed on every collected preview picture; instead, it can be performed only once every few frames. For example, scene recognition is performed on the 1st frame to obtain the 1st-frame background label, and the 1st to 5th frames are processed based on that label (the 2nd to 5th frames can be regarded as carrying the background label of the 1st frame); scene recognition is performed on the 6th frame to obtain the 6th-frame background label, and the 6th to 10th frames are processed based on that label (the 7th to 10th frames can be regarded as carrying the background label of the 6th frame); scene recognition is performed on the 11th frame to obtain the 11th-frame background label, and so on. It should be noted that, as long as no photographing instruction is received, the above steps continue: every time the camera collects a frame, image processing is performed on the current frame based on its feature information, only for some frames this feature information is obtained by recognizing the frame itself, while for the other frames it is inherited from an earlier frame.
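The every-few-frames caching described above could be sketched as follows; recognize_scene and process_by_background_label stand in for the scene recognition model and the label-driven processing (both assumptions carried over from the sketches above), and the interval of 5 mirrors the example in the text:

    import cv2

    RECOGNITION_INTERVAL = 5  # recognize once every 5 frames, as in the example

    def preview_loop(frames, recognize_scene, process_by_background_label):
        cached_label = None
        for i, frame in enumerate(frames):
            if i % RECOGNITION_INTERVAL == 0:
                # Frames 1, 6, 11, ... are recognized themselves, on a down-scaled copy.
                small = cv2.resize(frame, (224, 224), interpolation=cv2.INTER_AREA)
                cached_label = recognize_scene(small)
            # Every frame is processed; intermediate frames inherit the cached label.
            yield process_by_background_label(frame, cached_label)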
And step S103, after receiving a photographing instruction, acquiring an original image acquired by the camera.
And step S104, performing image processing on the original image based on the feature information of a preview picture a preset number of frames before the original image, to obtain a photo.
In the embodiment of the application, after a photographing instruction is received, an original image collected by the camera is acquired, rather than a screenshot of the current preview picture. Although capturing a screenshot of the current preview picture would also reduce memory occupancy and yield a photo with the same effect as the preview, the resolution of such a screenshot is too low to meet people's everyday demand for high-resolution photos. After the original image collected by the camera is acquired, its background label does not need to be identified; instead, the photo is obtained by processing the original image based on the background label of a preview picture a preset number of frames before it. For example, if the original image is the 120th frame collected by the camera, the photo may be obtained by processing it based on the background label of the 116th-frame preview picture, or based on the background label of the 119th-frame preview picture.
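Continuing the sketch, the capture path simply reuses the label cached during preview instead of running recognition on the full-resolution original image (function names remain the assumptions introduced above):

    def take_photo(original_image, cached_label, process_by_background_label):
        # No scene recognition is run on the original image itself; the label
        # of a preview frame a preset number of frames earlier drives the
        # processing, which is what saves memory at capture time.
        return process_by_background_label(original_image, cached_label)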
In the embodiment of the application, the background label of the original image collected by photographing is not required to be identified during photographing, but the background label of the preview picture of the preset frame before the original image is adopted to perform image processing on the original image, so that the memory during photographing can be reduced, and the photo obtained by photographing and the preview picture seen by a user during previewing can be ensured to have the same effect.
Fig. 2 is a schematic flowchart of another photographing method provided in an embodiment of the present application, and as shown in the drawing, the method may include the following steps:
step S201, acquiring a preview picture acquired by the camera.
Step S202, carrying out scene recognition on the preview picture to obtain a background label of the preview picture.
The contents of step S201 to step S202 are the same as the contents of step S101 to step S102, and the descriptions of step S101 to step S102 may be specifically referred to, and are not repeated herein.
Step S203, detecting whether a foreground target exists in the preview picture, and if a foreground target exists in the preview picture, acquiring a foreground tag of the foreground target.
In the embodiment of the present application, in order to obtain a better photographing effect, in addition to processing the original image globally based on the background label of a preview picture a preset number of frames before it, the foreground target in the original image may be processed locally based on the foreground label of that target. So that the user can already see this effect at the preview stage, the foreground target in the preview picture is likewise processed based on its foreground label. This is only possible when a foreground target exists in the preview picture, so it is necessary to detect whether one exists and, if so, to acquire its foreground label. Whether a foreground target exists in the preview picture can be identified through a target recognition model; when one exists, a detection frame can be displayed in the preview picture and a foreground label is generated at the same time, where the detection frame marks the identified foreground target. In practical application, the detection frame does not need to be displayed, and only the foreground label is output.
And if the foreground target exists in the preview picture, the characteristic information of the preview picture comprises a background label of the preview picture and a foreground label of the foreground target in the preview picture.
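A hedged sketch of how the feature information might be assembled; detect_foreground stands in for the target recognition model, whose concrete form the patent leaves open, and the dictionary layout is an assumption:

    def extract_feature_info(preview, recognize_scene, detect_foreground):
        # Always a background label; plus a foreground label and detection
        # frame when a foreground target is found.
        info = {"background_label": recognize_scene(preview)}
        detection = detect_foreground(preview)  # assumed to return (label, box) or None
        if detection is not None:
            label, box = detection
            info["foreground_label"] = label
            info["detection_box"] = box  # (x, y, w, h); need not be displayed
        return info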
And step S204, performing global processing on the preview picture based on the background label of the preview picture.
This step can refer to the description in step S102, and is not described herein again.
Step S205, performing local processing on the foreground object in the preview screen based on the foreground tag of the foreground object in the preview screen.
In this embodiment of the application, when the foreground label is a face label, beautification processing can be applied to the face; when the foreground label is a food label, processing that enhances color vividness can be applied to the food. Other foreground labels are not enumerated here; in practical application, different image processing modes can be configured for different foreground labels.
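Unlike the global pass, local processing touches only the region inside the detection frame. A sketch under the same assumptions, with cv2.bilateralFilter standing in for face beautification and a contrast lift standing in for food-color enhancement (neither algorithm is prescribed by the patent):

    import cv2

    def process_foreground(img, label, box):
        x, y, w, h = box
        roi = img[y:y + h, x:x + w]  # only the detection-frame region is modified
        if label == "face":
            roi = cv2.bilateralFilter(roi, 9, 75, 75)   # illustrative skin smoothing
        elif label == "food":
            roi = cv2.convertScaleAbs(roi, alpha=1.3, beta=0)  # illustrative color/contrast lift
        img[y:y + h, x:x + w] = roi
        return img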
And step S206, after receiving the photographing instruction, acquiring the original image acquired by the camera.
Step S207, performing global processing on the original image based on the background label of the preview screen of the frame before the original image.
Step S208, detecting whether a foreground target exists in the original image; and if the foreground target exists in the original image, acquiring a foreground label of the foreground target in the original image and position information of the foreground target.
Step S209, based on the foreground label of the foreground object in the original image and the position information of the foreground object, perform local processing on the foreground object in the original image.
In the embodiment of the application, after a photographing instruction is received, the original image collected by the camera is acquired. The image processing applied to the original image is consistent with that applied to the preview picture of the frame before it, except that the global processing of the original image is based on the background label of that previous preview frame, while the local processing of the foreground target is based on the foreground label and position information of the foreground target in the original image itself.
In the embodiment of the application, when the preview picture is processed, it is processed based on its own background label and the foreground label of its foreground target. At photographing time, in order to reduce memory usage while obtaining the same image effect as the preview, the original image is processed using the background label of the preview frame preceding it together with the foreground label of the foreground target detected in the original image itself.
Fig. 3 is a schematic flowchart of another photographing method provided in an embodiment of the present application, and as shown in the diagram, on the basis of the embodiment shown in fig. 2, the method describes how to segment a foreground object from an original image when performing local processing on the foreground object in the original image, and may include the following steps:
step S301, after obtaining the position information of the foreground object in the original image, obtaining the image in the detection frame in the original image.
In the embodiment of the present application, if the foreground target in the original image is to be processed locally, it first needs to be segmented from the original image. On the basis of the embodiment shown in fig. 2, the detection frame of the foreground target is available, and since most of the image inside the detection frame belongs to the foreground target, the segmentation can proceed from the image inside the detection frame.
Step S302, based on the gray gradient of the image in the detection frame, the boundary of the foreground object in the image in the detection frame is identified, and a foreground object contour line is obtained.
In the embodiment of the application, the boundary of the foreground target is the key basis for distinguishing it from the background image, and the gray value usually changes quickly around boundary points, so the boundary can be identified through the gray gradient of the image, yielding a foreground target contour line. This contour line has two failure modes, however: a real boundary may produce no contour line because the gradient change there is not obvious; conversely, contour lines may be generated inside the foreground target, where there is no boundary, because the gray value changes significantly there. The contour line generated at this stage is therefore not yet the true contour of the foreground target.
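A sketch of the gradient step, using Sobel gradients with a magnitude threshold as one plausible realization (OpenCV 4 assumed; the operator choice and threshold value are assumptions, as the patent does not name a specific gradient method):

    import cv2
    import numpy as np

    def contour_from_gradient(roi_gray, mag_thresh=50):
        # Gray gradient in x and y, then its magnitude.
        gx = cv2.Sobel(roi_gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(roi_gray, cv2.CV_64F, 0, 1, ksize=3)
        magnitude = np.sqrt(gx ** 2 + gy ** 2)
        # High-gradient points are taken as boundary points; as noted above,
        # this misses weak boundaries and fires on interior texture.
        edges = (magnitude > mag_thresh).astype(np.uint8) * 255
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return contours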
Step S303, acquiring a gray threshold sequence, and performing binarization processing on the image in the detection frame through each gray threshold in the gray threshold sequence to obtain a gray foreground target image sequence.
In the embodiment of the application, gray processing is performed on the image in the detection frame to obtain a gray image; a gray threshold sequence is then acquired, and binarization is performed on the gray image with each gray threshold in the sequence to obtain a gray-scale foreground target image sequence. Within this sequence there exists a gray-scale foreground target image that can roughly represent the foreground target area.
If a threshold in the gray threshold sequence is set properly, the foreground target area can be obtained. In practical applications, however, it is difficult to accurately select the proper threshold that segments the foreground target from the background image, and even if a proper threshold is selected, the background image inevitably contains pixels whose gray values are the same as those of pixels in the foreground target.
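The threshold sweep maps directly onto cv2.threshold; the particular sequence used here (every 16 gray levels) is an assumption, since the patent only requires "a gray threshold sequence":

    import cv2

    def binarize_sequence(roi_gray, thresholds=range(16, 256, 16)):
        masks = []
        for t in thresholds:
            # Each threshold yields one candidate gray-scale foreground target image.
            _, mask = cv2.threshold(roi_gray, t, 255, cv2.THRESH_BINARY)
            masks.append(mask)
        return masks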
As can be seen from the analysis of step S302 and step S303, both the binarization method and the gray gradient method have certain defects, and neither result is very accurate on its own. To obtain an accurate result, the embodiment of the present application combines the binarization method and the gray gradient method to obtain the foreground target, as described in step S304 to step S305.
And step S304, acquiring a gray level foreground target image with the highest matching degree with the foreground target contour line from the gray level foreground target image sequence.
In the embodiment of the application, within the gray-scale foreground target image sequence there is one image whose foreground region is closest to the real extent of the foreground target. To find it, the gray-scale foreground target image with the highest matching degree with the foreground target contour line is selected from the sequence. The matching degree is defined as the overlap between the foreground regions obtained by the two methods: the target area in each gray-scale foreground target image is provisionally regarded as a foreground target area, the area enclosed by the contour line from the gray gradient method is likewise provisionally regarded as a foreground target area, and the gray-scale image whose area coincides most with the gradient-derived area is chosen. The foreground region of this chosen image represents the foreground target relatively faithfully, but it is still not a completely accurate foreground target area.
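One way to score the coincidence degree is intersection-over-union between each binary mask and the filled contour region; IoU is an assumption here, as the patent speaks only of the overlap of the two provisional foreground areas:

    import cv2
    import numpy as np

    def best_matching_mask(masks, contours, shape):
        # Rasterize the gradient-derived contour line into a filled region.
        contour_region = np.zeros(shape, dtype=np.uint8)
        cv2.drawContours(contour_region, contours, -1, 255, thickness=cv2.FILLED)

        def iou(mask):
            inter = np.logical_and(mask > 0, contour_region > 0).sum()
            union = np.logical_or(mask > 0, contour_region > 0).sum()
            return inter / union if union else 0.0

        # The gray-scale foreground image that coincides most with the
        # contour region has the highest matching degree.
        return max(masks, key=iou)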
Step S305, fusing the gray level foreground target image with the highest matching degree with the foreground target contour line to generate a continuous foreground target area, wherein the image in the foreground target area in the original image is the foreground target.
In the embodiment of the present application, neither the foreground target area in the best-matching gray-scale foreground target image nor the area enclosed by the gradient-derived contour line describes the foreground target accurately on its own. However, the two can be fused to generate a continuous foreground target area: the binarized image discards the inaccurate parts of the gradient-derived contour line, and the contour line in turn discards the inaccurate parts of the gray-scale foreground target image. Because the fusion combines a binarized gray-scale image with a contour line, the fused region is not itself the real foreground image; rather, it represents the coordinates of the foreground target within the original image. The image inside this foreground target area in the original image is the foreground target.
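A hedged sketch of the fusion: intersecting the two provisional regions keeps only the parts on which both methods agree, and a morphological closing joins them into a continuous area. The patent describes the fusion only at this level, so intersection-plus-closing is an assumed realization:

    import cv2
    import numpy as np

    def fuse_regions(best_mask, contours, shape):
        contour_region = np.zeros(shape, dtype=np.uint8)
        cv2.drawContours(contour_region, contours, -1, 255, thickness=cv2.FILLED)
        # Each method vetoes the other's inaccurate parts: keep the agreement.
        fused = cv2.bitwise_and(best_mask, contour_region)
        # Close small gaps so the foreground target area is continuous; the
        # result is a coordinate mask, not the foreground image itself.
        kernel = np.ones((5, 5), np.uint8)
        return cv2.morphologyEx(fused, cv2.MORPH_CLOSE, kernel)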
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 4 is a schematic block diagram of a mobile terminal according to an embodiment of the present application, and only a portion related to the embodiment of the present application is shown for convenience of description.
The mobile terminal 4 may be a software unit, a hardware unit, or a unit combining software and hardware built into a mobile terminal such as a mobile phone, a tablet computer or a notebook computer, or may be integrated into such a mobile terminal as an independent component.
The mobile terminal 4 includes:
a preview image obtaining module 41, configured to obtain a preview image acquired by the camera;
a preview screen processing module 42, configured to identify feature information of the preview screen, and perform image processing on the preview screen based on the feature information, where the feature information of the preview screen includes a background tag of the preview screen;
the characteristic information obtaining module 43 is configured to obtain an original image collected by the camera after receiving a photographing instruction;
and the photographing processing module 44 is configured to perform image processing on the original image, based on the feature information of a preview picture a preset number of frames before the original image, to obtain a photo.
Optionally, the preview screen processing module 42 includes:
a first background tag identification unit 421, configured to perform scene identification on the preview screen to obtain a background tag of the preview screen;
a first foreground tag identifying unit 422, configured to detect whether a foreground target exists in the preview picture, and if a foreground target exists in the preview picture, obtain a foreground tag of the foreground target;
taking a background label of the preview picture and a foreground label of a foreground target in the preview picture as characteristic information of the preview picture;
correspondingly, the characteristic information of the preview screen further includes: and the foreground label of the foreground target in the preview picture.
Optionally, the preview screen processing module 42 further includes:
a first global processing unit 423, configured to perform global processing on the preview screen based on a background tag of the preview screen;
a first partial processing unit 424, configured to perform partial processing on a foreground object in the preview screen based on a foreground tag of the foreground object in the preview screen.
Optionally, the photographing processing module 44 includes:
a second global processing unit 441, configured to perform global processing on the original image based on a background tag of a preview screen of a frame previous to the original image;
a second foreground tag identification unit 442, configured to detect whether a foreground object exists in the original image; if the foreground target exists in the original image, acquiring a foreground label of the foreground target in the original image and position information of the foreground target;
a second local processing unit 443, configured to perform local processing on a foreground target in the original image based on a foreground tag of the foreground target in the original image and the position information of the foreground target.
Optionally, the position information of the foreground object is position information of a detection frame corresponding to the foreground object,
the photographing processing module 44 further includes:
a detection frame image acquisition unit, configured to acquire an image in the detection frame in the original image after acquiring position information of a foreground target in the original image;
the target contour line acquisition unit is used for identifying the boundary of a foreground target in the image in the detection frame based on the gray gradient of the image in the detection frame to obtain a foreground target contour line;
the target image sequence acquisition unit is used for acquiring a gray threshold sequence and carrying out binarization processing on the image in the detection frame through each gray threshold in the gray threshold sequence to obtain a gray foreground target image sequence;
and the fusion unit is used for determining a foreground target area based on the gray level foreground target image sequence and the foreground target contour line, and an image in the foreground target area in the original image is a foreground target.
Optionally, the fusion unit includes:
the target image acquisition subunit is used for acquiring a gray level foreground target image with the highest matching degree with the foreground target contour line from the gray level foreground target image sequence;
and the fusion subunit is used for fusing the gray level foreground target image with the highest matching degree with the foreground target contour line to generate a continuous foreground target area.
Optionally, the mobile terminal 4 further includes:
the compression module 45 is configured to compress the preview image acquired by the camera before identifying the feature information of the preview image;
correspondingly, the preview screen processing module is further configured to:
and identifying the characteristic information of the preview picture after the compression processing.
It will be apparent to those skilled in the art that, for convenience and simplicity of description, the foregoing division of functional units and modules is merely illustrative; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the mobile terminal may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above-mentioned apparatus, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described again here.
Fig. 5 is a schematic block diagram of a mobile terminal according to another embodiment of the present application. As shown in fig. 5, the mobile terminal 5 of this embodiment includes: one or more processors 50, a memory 51 and a computer program 52 stored in said memory 51 and executable on said processors 50. The processor 50 executes the computer program 52 to implement the steps in the above-mentioned various photographing method embodiments, such as the steps S101 to S104 shown in fig. 1. Alternatively, the processor 50, when executing the computer program 52, implements the functions of the modules/units in the above-described mobile terminal embodiments, such as the functions of the modules 41 to 44 shown in fig. 4.
Illustratively, the computer program 52 may be partitioned into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 52 in the mobile terminal 5. For example, the computer program 52 may be divided into a preview screen acquisition module, a preview screen processing module, a feature information acquisition module, and a photographing processing module.
The preview image acquisition module is used for acquiring a preview image acquired by the camera;
the preview image processing module is used for identifying the characteristic information of the preview image and processing the preview image based on the characteristic information, wherein the characteristic information of the preview image comprises a background label of the preview image;
the characteristic information acquisition module is used for acquiring an original image acquired by the camera after receiving a photographing instruction;
and the photographing processing module is used for performing image processing on the original image based on the feature information of a preview picture a preset number of frames before the original image, to obtain a photo.
Other modules or units can refer to the description of the embodiment shown in fig. 4, and are not described again here.
The mobile terminal includes, but is not limited to, a processor 50, a memory 51. Those skilled in the art will appreciate that fig. 5 is only one example of a mobile terminal 5 and is not intended to limit the mobile terminal 5 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the mobile terminal may also include input devices, output devices, network access devices, buses, etc.
The Processor 50 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 51 may be an internal storage unit of the mobile terminal 5, such as a hard disk or a memory of the mobile terminal 5. The memory 51 may also be an external storage device of the mobile terminal 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the mobile terminal 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the mobile terminal 5. The memory 51 is used for storing the computer program and other programs and data required by the mobile terminal. The memory 51 may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed mobile terminal and method may be implemented in other ways. For example, the above-described embodiments of the mobile terminal are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that what a computer-readable medium may contain can be suitably increased or decreased as required by legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (5)

1. A photographing method is applied to a mobile terminal and comprises the following steps:
acquiring a preview picture acquired by a camera at a preset time interval;
identifying the characteristic information of the preview picture, and performing image processing on the preview picture based on the characteristic information, wherein the characteristic information of the preview picture comprises a background label of the preview picture, and specifically: performing scene recognition once every several frames of preview pictures to obtain the characteristic information of the preview pictures; every time the camera collects a frame of preview picture, performing image processing on the current frame of preview picture based on the characteristic information of that frame, wherein the characteristic information of a part of the frames is obtained by recognizing those frames themselves, and the characteristic information of the other frames is the characteristic information of a previous frame of preview picture;
after receiving a photographing instruction, acquiring an original image acquired by the camera;
performing image processing on the original image, based on the characteristic information of a preview picture a preset number of frames before the original image, to obtain a photo, comprising: globally processing the original image based on a background label of the preview picture of the frame before the original image; detecting whether a foreground target exists in the original image; if the foreground target exists in the original image, acquiring a foreground label of the foreground target in the original image and position information of the foreground target; and locally processing the foreground target in the original image based on the foreground label of the foreground target in the original image and the position information of the foreground target;
the position information of the foreground target is the position information of a detection frame corresponding to the foreground target;
after the position information of the foreground object in the original image is acquired, the method further comprises the following steps:
acquiring an image in the detection frame in the original image;
based on the gray gradient of the image in the detection frame, identifying the boundary of a foreground target in the image in the detection frame to obtain a foreground target contour line;
acquiring a gray threshold sequence, and performing binarization processing on the image in the detection frame through each gray threshold in the gray threshold sequence to obtain a gray foreground target image sequence;
determining a foreground target area based on the gray level foreground target image sequence and the foreground target contour line, wherein an image in the foreground target area in the original image is a foreground target;
the determining a foreground object region based on the grayscale foreground object image sequence and the foreground object contour line comprises:
acquiring a gray level foreground target image with the highest matching degree with the foreground target contour line from the gray level foreground target image sequence;
and fusing the gray level foreground target image with the highest matching degree with the foreground target contour line to generate a continuous foreground target area.
2. The photographing method of claim 1, before identifying the feature information of the preview screen, further comprising:
compressing the preview picture acquired by the camera;
correspondingly, the characteristic information for identifying the preview picture comprises:
and identifying the characteristic information of the preview picture after the compression processing.
3. A mobile terminal, comprising:
the preview image acquisition module is used for acquiring preview images acquired by the camera at preset time intervals;
the preview image processing module is configured to identify feature information of the preview image, and perform image processing on the preview image based on the feature information, where the feature information of the preview image includes a background tag of the preview image, and specifically: performing scene recognition once every several frames of preview pictures to obtain the feature information of the preview pictures; every time the camera collects a frame of preview picture, performing image processing on the current frame of preview picture based on the feature information of that frame, where the feature information of a part of the frames is obtained by recognizing those frames themselves, and the feature information of the other frames is the feature information of a previous frame of preview picture;
the characteristic information acquisition module is used for acquiring an original image acquired by the camera after receiving a photographing instruction;
the photographing processing module is used for performing image processing on the original image based on the characteristic information of a preview picture a preset number of frames before the original image, to obtain a photo;
the photographing processing module comprises:
the second global processing unit is used for carrying out global processing on the original image based on a background label of a preview picture of a previous frame of the original image;
the second foreground label identification unit is used for detecting whether a foreground target exists in the original image; if the foreground target exists in the original image, acquiring a foreground label of the foreground target in the original image and position information of the foreground target;
the second local processing unit is used for carrying out local processing on the foreground target in the original image based on the foreground label of the foreground target in the original image and the position information of the foreground target;
the position information of the foreground target is the position information of a detection frame corresponding to the foreground target;
the photographing processing module further comprises:
a detection frame image acquisition unit, configured to acquire an image in the detection frame in the original image after acquiring position information of a foreground target in the original image;
the target contour line acquisition unit is used for identifying the boundary of a foreground target in the image in the detection frame based on the gray gradient of the image in the detection frame to obtain a foreground target contour line;
the target image sequence acquisition unit is used for acquiring a gray threshold sequence and carrying out binarization processing on the image in the detection frame through each gray threshold in the gray threshold sequence to obtain a gray foreground target image sequence;
a fusion unit, configured to determine a foreground target region based on the grayscale foreground target image sequence and the foreground target contour line, where an image in the foreground target region in the original image is a foreground target;
the fusion unit includes:
the target image acquisition subunit is used for acquiring a gray level foreground target image with the highest matching degree with the foreground target contour line from the gray level foreground target image sequence;
and the fusion subunit is used for fusing the gray level foreground target image with the highest matching degree with the foreground target contour line to generate a continuous foreground target area.
4. A mobile terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to claim 1 or 2 are implemented when the processor executes the computer program.
5. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by one or more processors, implements the steps of the method according to claim 1 or 2.
CN201810570646.1A 2018-06-05 2018-06-05 Photographing method, mobile terminal and computer readable storage medium Active CN108769521B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810570646.1A CN108769521B (en) 2018-06-05 2018-06-05 Photographing method, mobile terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810570646.1A CN108769521B (en) 2018-06-05 2018-06-05 Photographing method, mobile terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108769521A CN108769521A (en) 2018-11-06
CN108769521B true CN108769521B (en) 2021-02-02

Family

ID=63999004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810570646.1A Active CN108769521B (en) 2018-06-05 2018-06-05 Photographing method, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108769521B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113572980B (en) * 2020-04-28 2022-10-11 华为技术有限公司 Photographing method and device, terminal equipment and storage medium
CN117479000B (en) * 2022-08-08 2024-08-27 荣耀终端有限公司 Video recording method and related device
CN118175238B (en) * 2024-05-14 2024-09-03 威海凯思信息科技有限公司 Image generation method and device based on AIGC

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514446A (en) * 2013-10-16 2014-01-15 北京理工大学 Outdoor scene recognition method fused with sensor information
CN106101547A (en) * 2016-07-06 2016-11-09 北京奇虎科技有限公司 The processing method of a kind of view data, device and mobile terminal
CN107948617A (en) * 2017-12-06 2018-04-20 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and computer equipment
CN108024107A (en) * 2017-12-06 2018-05-11 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3870124B2 (en) * 2002-06-14 2007-01-17 キヤノン株式会社 Image processing apparatus and method, computer program, and computer-readable storage medium


Also Published As

Publication number Publication date
CN108769521A (en) 2018-11-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant