CN109474786B - Preview image generation method and terminal - Google Patents
- Publication number: CN109474786B
- Application number: CN201811581777.6A
- Authority: CN (China)
- Prior art keywords: image, camera, target, sub, acquired
- Legal status: Active
Classifications
- H04N23/62—Control of parameters via user interfaces
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
- H04N23/80—Camera processing pipelines; Components thereof
- H04N5/265—Mixing
Abstract
The embodiment of the invention provides a preview image generation method and a terminal. The method comprises the following steps: acquiring a first position of a target object in a first image acquired by a first camera; acquiring a second position corresponding to a target composition mode; determining a target image area based on the first position and the second position; acquiring a first sub-image located in the target image area in the first image and a second sub-image located in the target image area in a second image acquired by a second camera; and performing image fusion processing on the first sub-image and the second sub-image to generate a third image, and outputting and displaying the third image as a target preview image. In the embodiment of the invention, when photographing, the picture is composed intelligently: the images collected by two cameras with different viewing angles are fused, and a preview image is synthesized in which the position of the target object makes the overall composition of the image more harmonious and natural, so that a user who photographs with this preview image can take a picture with a good composition effect.
Description
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a preview image generation method and a terminal.
Background
With the development and popularization of intelligent terminals, their functions have become increasingly powerful, and photographing has become one of their indispensable functions. When a photo is taken, the quality of the composition has a great influence on the final result. A good composition can make a picture outstanding, while a poor composition can make it look uncomfortable. The quality of a composition generally depends on the photographer's personal experience; without a good understanding of photography, an ordinary person can hardly achieve a good composition, so the photographs taken are often imperfect.
Disclosure of Invention
The embodiment of the invention provides a preview image generation method and a terminal, to solve the problem in the prior art that it is difficult for a user to shoot a picture with a good composition effect.
In order to solve the technical problems, the invention adopts the following technical scheme:
In a first aspect, a preview image generation method is provided, which is applied to a terminal, where the terminal includes: a first camera and a second camera arranged on the same side of the terminal, and the viewing angle of the first camera is smaller than that of the second camera.
Wherein the method comprises the following steps:
acquiring a first position of a target object in a first image acquired by the first camera;
acquiring a second position corresponding to the target composition mode; wherein the second position is a desired position of the target object in the first image;
determining a target image area based on the first location and the second location; the target image area is an image area within the range of the shooting field of view of the first camera, and the position of the target object in the target image area is the same as the second position;
acquiring a first sub-image positioned in the target image area in the first image and a second sub-image positioned in the target image area in a second image acquired by the second camera;
performing image fusion processing on the first sub-image and the second sub-image to generate a third image, and outputting and displaying the third image as a target preview image; wherein the position of the target object in the target preview image is the same as the second position.
In a second aspect, a terminal is provided, which includes: a first camera and a second camera arranged on the same side of the terminal, where the viewing angle of the first camera is smaller than that of the second camera.
Wherein, the terminal further includes:
the first acquisition module is used for acquiring a first position of a target object in a first image acquired by a first camera;
the second acquisition module is used for acquiring a second position corresponding to the target composition mode; wherein the second position is a desired position of the target object in the first image;
the first determining module is used for determining a target image area based on the first position acquired by the first acquiring module and the second position acquired by the second acquiring module; the target image area is an image area within the range of the shooting field of view of the first camera, and the position of the target object in the target image area is the same as the second position;
the third acquisition module is used for acquiring a first sub-image positioned in the target image area determined by the first determination module in the first image and a second sub-image positioned in the target image area in a second image acquired by the second camera;
the processing module is used for carrying out image fusion processing on the first sub-image and the second sub-image acquired by the third acquisition module to generate a third image, and outputting and displaying the third image as a target preview image; wherein the position of the target object in the target preview image is the same as the second position.
In a third aspect, a terminal is provided, including: a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the preview image generating method as described above.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which, when executed by a processor, implements the steps of the preview image generating method as described above.
In the embodiment of the invention, in order to enable a user to shoot a picture with a better composition effect, the picture is composed intelligently during shooting: the images collected by two cameras with different viewing angles are fused, and a preview image is synthesized in which the position of the target object makes the overall composition of the image more harmonious and natural. According to this technical scheme, by fusing two sets of image data with different viewing angles, the content of the preview image can be adjusted automatically and the preview image with the best composition is presented to the user; the user neither needs to move the terminal to find the most appropriate composition nor needs rich shooting experience, which helps the user take excellent photos.
Drawings
FIG. 1 is a flow chart of a preview image generation method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a first image and a target image area according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a first image, a target image area and a second image in a second coordinate system according to an embodiment of the invention;
FIG. 4 is a first block diagram of a terminal according to an embodiment of the present invention;
FIG. 5 is a second block diagram of the terminal according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
According to an aspect of an embodiment of the present invention, a preview image generation method is provided. The method is applied to a terminal, which may be a mobile terminal (such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, etc.), a desktop computer, a smart television, and the like. The terminal includes a first camera and a second camera arranged on the same side of the terminal. The viewing angle of the first camera is smaller than that of the second camera. Preferably, the first camera is a standard camera and the second camera is a wide-angle camera; specifically, the wide-angle camera may be an ordinary wide-angle camera or a super wide-angle camera. The first camera and the second camera may both be front-facing cameras or both be rear-facing cameras.
As shown in fig. 1, the preview image generating method includes:
step 101: a first position of a target object in a first image acquired by a first camera is acquired.
The target object is the image of the subject shot by the first camera. The subject may be determined automatically by the first camera, for example by performing specific-object recognition on the first image and using a recognized specific object (e.g., a person, an animal, etc.) as the subject. Alternatively, the subject may be specified by the user, for example by receiving an input operation performed by the user on the first image and determining the subject according to that operation. Specifically, if the user taps the first image, the object at the tapped position is determined as the subject. After the subject is determined, the image corresponding to the subject in the first image (i.e., the target object) and the relative position of the target object in the first image (hereinafter referred to as the first position) can be determined.
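As a rough sketch of this step, the code below assumes a hypothetical detect_subject helper (any face or object detector could stand in for it) and takes the center of the subject's bounding box as the first position; the patent does not prescribe a specific detector.

```python
from typing import Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height), origin at the lower-left corner

def detect_subject(first_image) -> Optional[Box]:
    """Hypothetical detector: return the bounding box of a specific object
    (a person, an animal, ...) in the first image, or None if nothing is found."""
    raise NotImplementedError

def first_position(first_image, tap: Optional[Tuple[int, int]] = None) -> Optional[Tuple[int, int]]:
    """Return the first position (x, y) of the target object.

    If the user tapped the preview, the tap point selects the subject;
    otherwise the subject comes from automatic detection."""
    if tap is not None:
        return tap  # subject specified by the user's input operation
    box = detect_subject(first_image)
    if box is None:
        return None  # no subject, so smart composition is not triggered
    bx, by, bw, bh = box
    return (bx + bw // 2, by + bh // 2)  # use the box center as the position
```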
Step 102: and acquiring a second position corresponding to the target composition mode.
Wherein the second position is a desired position of the target object in the first image.
In the embodiment of the present invention, after the target object is determined, a second position (i.e., a desired or ideal position) of the target object in the first image is determined. When the target object is located at the second position in the first image, the overall composition effect of the first image is better. The second position can be determined according to an existing composition method (such as the nine-square-grid method, the diagonal method, the triangular method, or the spiral method).
After the second position of the target object in the first image is determined, it is compared with the first position. When the second position does not coincide with the first position, the method proceeds to step 103; when the second position coincides with the first position, the subsequent steps need not be performed.
Step 103: based on the first location and the second location, a target image region is determined.
The target image area is an image area within the shooting field of view of the first camera and has the same size as the first image. The target image area can be understood as the image range, at a shooting angle, that yields a better composition. Therefore, when the target image area is determined, the following should be satisfied: the position of the target object in the target image area is the same as the second position of the target object in the first image, so that in a picture shot over the target image area the position of the target object meets the composition requirement and the overall composition effect of the picture is better.
For example, as shown in fig. 2, a solid-line box in the figure represents a first image 201, a dashed-line box represents a target image area 202, a solid-line circle represents a first position 203 of a target object in the first image 201, and a dashed-line circle represents a second position 204 of the target object in the first image 201. Within the target image area 202, the position of the target object is the same as the second position 204 of the target object in the first image 201.
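In the {x, y, width, height} notation used later in this description, the target image area follows directly from the two positions; a minimal Python sketch (names are illustrative, not from the patent):

```python
from typing import Tuple

Rect = Tuple[int, int, int, int]  # {x, y, w, h} in the first coordinate system

def target_image_area(first_pos: Tuple[int, int], second_pos: Tuple[int, int],
                      w_n: int, h_n: int) -> Rect:
    """Shift the first image's area {0, 0, w_n, h_n} by (x - x', y - y') so that
    the target object ends up at the desired (second) position within it."""
    x, y = first_pos     # actual position of the target object in the first image
    xp, yp = second_pos  # desired position under the target composition mode
    return (x - xp, y - yp, w_n, h_n)
```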
Step 104: and acquiring a first sub-image positioned in the target image area in the first image and a second sub-image positioned in the target image area in the second image acquired by the second camera.
In the embodiment of the present invention, after the target image area is determined, an image (i.e., a first sub-image) in the target image area in the first image and an image (i.e., a second sub-image) in the target image area in the second image acquired by the second camera are acquired.
Step 105: and carrying out image fusion processing on the first sub-image and the second sub-image to generate a third image, and outputting and displaying the third image as a target preview image.
The third image is located in the target image area and has the same size as the target image area. The position of the target object in the third image is the same as the second position of the target object in the first image.
As can be seen from fig. 2, only part of the first image 201 lies in the target image area 202. In order to obtain the complete image of the target image area, in the embodiment of the present invention, when the first camera is started to capture images, the second camera is also started to acquire the second image. Since the second camera has a larger viewing-angle range than the first camera and can acquire more image content, the first image and the second image can be fused to generate a third image that lies in the target image area and has the same size as the target image area.
The position of the target object in the third image is the same as the second position of the target object in the first image; that is, the third image is an image with a better composition effect. Therefore, the third image is output and displayed as a preview image, and if the user takes a picture with this preview image, a picture with a good composition effect can be taken.
According to this technical scheme, by fusing two sets of image data with different viewing angles, the content of the preview image can be adjusted automatically and the preview image with the best composition is presented to the user; the user neither needs to move the terminal to find the most appropriate composition nor needs rich shooting experience, which helps the user take excellent photos.
Specifically, step 102 of acquiring the second position corresponding to the target composition mode includes: determining, among at least two preset composition modes, a target composition mode whose second position is closest to the first position; and acquiring the second position of the target object in the first image according to the target composition mode.
In the embodiment of the invention, a plurality of composition modes are stored in the terminal in advance, and when the second position of the target object in the first image is determined, one target composition mode can be selected from them. Preferably, the selected target composition mode is the one whose second position is closest to the first position of the target object in the first image, compared with the second positions determined by the other composition modes. This minimizes the offset distance of the target image area relative to the first image, so that the first image lies within the target image area as much as possible.
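As an illustration, the sketch below treats the four rule-of-thirds intersections as one preset composition mode and picks, over all modes, the candidate position closest to the first position; the candidate points are an assumption for the example, not prescribed by the patent.

```python
import math
from typing import Iterable, List, Tuple

Point = Tuple[float, float]

def rule_of_thirds_points(w_n: int, h_n: int) -> List[Point]:
    """Candidate desired positions for one preset mode (nine-square grid):
    the four intersections of the third lines."""
    return [(fx * w_n, fy * h_n) for fx in (1 / 3, 2 / 3) for fy in (1 / 3, 2 / 3)]

def closest_second_position(first_pos: Point, modes: Iterable[List[Point]]) -> Point:
    """Among all preset composition modes, pick the desired position closest to
    the first position, minimizing the shift of the target image area."""
    candidates = [p for mode in modes for p in mode]
    return min(candidates, key=lambda p: math.dist(p, first_pos))

# e.g. second = closest_second_position((400, 300), [rule_of_thirds_points(1280, 720)])
```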
Further, acquiring the first sub-image in the first image in the target image area includes: and determining an image area in the first image, which is located in the target image area, as a first sub-image according to the area ranges of the first image and the target image area in the first coordinate system.
The first coordinate system is a coordinate system established according to the image collected by the first camera.
In the embodiment of the invention, a first coordinate system is established according to the image acquired by the first camera. When a first sub-image in the first image, which is in the target image area, is acquired, the image of the first image, which is entirely in the target image area, is determined as the first sub-image according to the first image and the position of the target image area in the first coordinate system. Determining the image of the first image within the target image area in accordance with the coordinate system may make the determination more accurate.
For example, assume that the field of view (FOV) of the first camera is fov_n, with width w_n and height h_n, where the n after the underscore is the first letter of normal. Taking the lower left corner of the image acquired by the first camera as the coordinate origin (0, 0), an image area is represented by {x-axis coordinate of the lower left corner of the image, y-axis coordinate of the lower left corner of the image, width of the image in the x-axis direction, height of the image in the y-axis direction}. According to this rule, the image area of the first image is represented as {0, 0, w_n, h_n}. Further, assuming that, in the first coordinate system, the first position of the target object in the first image is (x, y) and the second position is (x', y'), the image area corresponding to the target image area is represented as: {x-x', y-y', w_n, h_n}.
A. If x-x' > 0 and y-y' > 0, the image area where the first image overlaps the target image area (i.e., the first sub-image) is: {x-x', y-y', w_n-|x-x'|, h_n-|y-y'|}.
B. If x-x' > 0 and y-y' < 0, the first sub-image is: {x-x', 0, w_n-|x-x'|, h_n-|y-y'|}.
C. If x-x' < 0 and y-y' > 0, the first sub-image is: {0, y-y', w_n-|x-x'|, h_n-|y-y'|}.
D. If x-x' < 0 and y-y' < 0, the first sub-image is: {0, 0, w_n-|x-x'|, h_n-|y-y'|}.
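The four cases A to D are the branches of a generic axis-aligned rectangle intersection. A short Python sketch that reproduces them in one computation (names are illustrative, not from the patent):

```python
from typing import Optional, Tuple

Rect = Tuple[float, float, float, float]  # {x, y, w, h}, origin at the lower-left corner

def intersect(a: Rect, b: Rect) -> Optional[Rect]:
    """Intersection of two axis-aligned rectangles, or None if they are disjoint."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x0, y0 = max(ax, bx), max(ay, by)
    x1, y1 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    if x1 <= x0 or y1 <= y0:
        return None
    return (x0, y0, x1 - x0, y1 - y0)

# First sub-image = overlap of the first image with the target image area:
#   intersect((0, 0, w_n, h_n), (x - xp, y - yp, w_n, h_n))
# reproduces cases A-D; e.g. case A (x > x', y > y') gives
#   (x - xp, y - yp, w_n - |x - xp|, h_n - |y - yp|).
```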
Further, acquiring a second sub-image in the second image within the target image area includes: mapping the first image and the target image area into a second coordinate system; determining a third sub-image in the second image, wherein the third sub-image is located in the target image area according to the first image, the second image and the area range of the target image area in the second coordinate system; an image of the third sub-image that is non-overlapping with the first image is determined as the second sub-image.
The second coordinate system is established according to the image collected by the second camera.
In the embodiment of the invention, a second coordinate system is established according to the image acquired by the second camera. When a second sub-image in the second image within the target image area is acquired, the first image and the target image area are mapped into a second coordinate system, and then the second sub-image is determined according to the area ranges of the first image, the second image and the target image area in the second coordinate system. Determining the image of the second image within the target image area in accordance with the coordinate system may make the determination more accurate.
The result of mapping the first image and the target image area into the second coordinate system is shown in fig. 3. In fig. 3, a large solid-line box represents the second image 205, a small solid-line box represents the first image 201 mapped into the second coordinate system, a dashed-line box represents the target image area 202 mapped into the second coordinate system, a solid-line circle represents the first position 203 of the target object in the first image 201, and a dashed-line circle represents the second position 204 of the target object in the first image 201.
After the first image and the target image area are mapped into the second coordinate system, the first image generally coincides with the central image area of the second image.
In order to better understand the above technical solution, the following examples are given.
Assume that the field angle of the second camera is fov_w, with width w_w and height h_w, where the w after the underscore is the first letter of wide. Taking the lower left corner of the image collected by the second camera as the coordinate origin (0, 0), an image area is represented by {x-axis coordinate of the lower left corner, y-axis coordinate of the lower left corner, width in the x-axis direction, height in the y-axis direction}. According to this rule, the image area of the second image is expressed as: {0, 0, w_w, h_w}. The first image is mapped to the central region of the second image in the second coordinate system, and its image area is expressed as: {(w_w - w_w*fov_n/fov_w)/2, (h_w - h_w*fov_n/fov_w)/2, w_w*fov_n/fov_w, h_w*fov_n/fov_w}. The target image area is expressed as: {(w_w - w_w*fov_n/fov_w)/2 + (x-x')*(w_w/w_n)*(fov_n/fov_w), (h_w - h_w*fov_n/fov_w)/2 + (y-y')*(h_w/h_n)*(fov_n/fov_w), w_w*(fov_n/fov_w), h_w*(fov_n/fov_w)}.
A. If x-x' > 0 and y-y' > 0, the second sub-image is:
{
(w_w - w_w*fov_n/fov_w)/2 + |x-x'|*(w_w/w_n)*(fov_n/fov_w), (h_w - h_w*fov_n/fov_w)/2 + h_w*(fov_n/fov_w), w_w*(fov_n/fov_w), |y-y'|*(h_w/h_n)*(fov_n/fov_w)
}
and
{
(w_w - w_w*fov_n/fov_w)/2 + w_w*(fov_n/fov_w), (h_w - h_w*fov_n/fov_w)/2 + |y-y'|*(h_w/h_n)*(fov_n/fov_w), |x-x'|*(w_w/w_n)*(fov_n/fov_w), h_w*(fov_n/fov_w) - |y'-y|*(h_w/h_n)*(fov_n/fov_w)
}.
B. If x-x' > 0 and y-y' < 0, the second sub-image is:
{
(w_w - w_w*fov_n/fov_w)/2 + |x'-x|*(w_w/w_n)*(fov_n/fov_w), (h_w - h_w*fov_n/fov_w)/2 - |y'-y|*(h_w/h_n)*(fov_n/fov_w), w_w*(fov_n/fov_w), |y-y'|*(h_w/h_n)*(fov_n/fov_w)
}
and
{
(w_w - w_w*fov_n/fov_w)/2 + w_w*fov_n/fov_w, (h_w - h_w*fov_n/fov_w)/2, |x-x'|*(w_w/w_n)*(fov_n/fov_w), h_w*(fov_n/fov_w) - |y'-y|*(h_w/h_n)*(fov_n/fov_w)
}.
C. If x-x' < 0 and y-y' > 0, the second sub-image is:
{
(w_w - w_w*fov_n/fov_w)/2 - |x'-x|*(w_w/w_n)*(fov_n/fov_w), (h_w - h_w*fov_n/fov_w)/2 + h_w*(fov_n/fov_w), w_w*(fov_n/fov_w), |y-y'|*(h_w/h_n)*(fov_n/fov_w)
}
and
{
(w_w - w_w*fov_n/fov_w)/2 - |x'-x|*(w_w/w_n)*(fov_n/fov_w), (h_w - h_w*fov_n/fov_w)/2 + |y'-y|*(h_w/h_n)*(fov_n/fov_w), |x-x'|*(w_w/w_n)*(fov_n/fov_w), h_n*(h_w/h_n)*(fov_n/fov_w) - |y'-y|*(h_w/h_n)*(fov_n/fov_w)
}.
D. If x-x' < 0 and y-y' < 0, the second sub-image is:
{
(w_w - w_w*fov_n/fov_w)/2 - |x'-x|*(w_w/w_n)*(fov_n/fov_w), (h_w - h_w*fov_n/fov_w)/2 - |y'-y|*(h_w/h_n)*(fov_n/fov_w), w_w*(fov_n/fov_w), |y-y'|*(h_w/h_n)*(fov_n/fov_w)
}
and
{
(w_w - w_w*fov_n/fov_w)/2 - |x'-x|*(w_w/w_n)*(fov_n/fov_w), (h_w - h_w*fov_n/fov_w)/2, |x-x'|*(w_w/w_n)*(fov_n/fov_w), h_n*(h_w/h_n)*(fov_n/fov_w) - |y'-y|*(h_w/h_n)*(fov_n/fov_w)
}.
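All four cases reduce to two generic operations: map rectangles from the first coordinate system into the second, then subtract the mapped first image from the mapped target area. A minimal Python sketch under the conventions above (function names are illustrative; case A, for example, yields exactly the two rectangles listed there):

```python
from typing import List, Tuple

Rect = Tuple[float, float, float, float]  # {x, y, w, h}, origin at the lower-left corner

def map_to_second(r: Rect, w_n: float, h_n: float,
                  w_w: float, h_w: float, fov_n: float, fov_w: float) -> Rect:
    """Map a rectangle from the first coordinate system into the second one;
    the first image lands on the central region of the second image,
    scaled by fov_n/fov_w."""
    sx = (w_w / w_n) * (fov_n / fov_w)    # horizontal scale, first -> second coords
    sy = (h_w / h_n) * (fov_n / fov_w)    # vertical scale
    ox = (w_w - w_w * fov_n / fov_w) / 2  # lower-left corner of the mapped first image
    oy = (h_w - h_w * fov_n / fov_w) / 2
    x, y, w, h = r
    return (ox + x * sx, oy + y * sy, w * sx, h * sy)

def second_sub_image(target: Rect, first: Rect) -> List[Rect]:
    """Parts of the mapped target area not covered by the mapped first image:
    one full-width horizontal strip plus one vertical strip over the remaining
    height, matching the rectangle pairs of cases A-D."""
    tx, ty, tw, th = target
    fx, fy, fw, fh = first
    strips: List[Rect] = []
    top_h = (ty + th) - (fy + fh)  # target sticking out above the first image
    bot_h = fy - ty                # target sticking out below
    if top_h > 0:
        strips.append((tx, fy + fh, tw, top_h))
    if bot_h > 0:
        strips.append((tx, ty, tw, bot_h))
    rem_y = max(ty, fy)            # vertical extent left after the strip above
    rem_h = min(ty + th, fy + fh) - rem_y
    right_w = (tx + tw) - (fx + fw)  # target sticking out to the right
    left_w = fx - tx                 # target sticking out to the left
    if right_w > 0 and rem_h > 0:
        strips.append((fx + fw, rem_y, right_w, rem_h))
    if left_w > 0 and rem_h > 0:
        strips.append((tx, rem_y, left_w, rem_h))
    return strips
```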
It should be noted that, after the target image area is mapped to the second coordinate system, it may be determined whether the target image area lies completely within the image area of the second image. If it does, the subsequent process can continue; if it does not, which indicates that a complete image of the target image area cannot be obtained, the image acquired by the first camera is directly output and displayed as the preview image without intelligent composition.
Specifically, fusing the first sub-image and the second sub-image to generate the third image includes:
scaling the second sub-image according to the pixel ratio of the images acquired by the first camera and the second camera; and fusing the first sub-image and the scaled second sub-image to generate the third image.
Because the first sub-image and the second sub-image are selected based on the field angles of the first camera and the second camera, while the numbers of pixels contained in the images acquired by the two cameras differ for the same field angle, the second sub-image needs to be scaled based on the pixel ratio of the first camera and the second camera after the image areas are selected and before they are fused. After the scaling is completed, the first sub-image and the second sub-image are fused, so that the synthesized image looks more natural and harmonious.
Specifically, the second sub-image is scaled according to a preset scaling formula determined by the correspondence between the field angles of the first camera and the second camera, where fov_n denotes the field angle value of the first camera, w_n denotes the width value of the first image (i.e., the width of the field of view of the first camera), and h_n denotes the height value of the first image (i.e., the height of the field of view of the first camera); fov_w denotes the field angle value of the second camera, w_w denotes the width value of the second image (i.e., the width of the field of view of the second camera), and h_w denotes the height value of the second image (i.e., the height of the field of view of the second camera). The preset scaling formula is determined according to the pixel ratio of the images acquired by the first camera and the second camera; scaling the second sub-image by this formula makes the processing more accurate.
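The preset formula itself appears in the original only as an image, but from the coordinate mapping above the factor that equalizes pixel density between the two cameras can be inferred. A sketch under that assumption (OpenCV's resize stands in for any resampling routine):

```python
import cv2  # assumed available; any image-resampling routine would do

def scale_second_sub_image(sub_img, w_n, h_n, w_w, h_w, fov_n, fov_w):
    """Rescale the second sub-image so that its pixel density matches the
    first camera's. Assumed factor (not reproduced verbatim from the patent):
    (w_n / w_w) * (fov_w / fov_n) horizontally and the analogous factor
    vertically, i.e. the inverse of the first-to-second coordinate mapping."""
    kx = (w_n / w_w) * (fov_w / fov_n)
    ky = (h_n / h_w) * (fov_w / fov_n)
    h, w = sub_img.shape[:2]
    return cv2.resize(sub_img, (round(w * kx), round(h * ky)),
                      interpolation=cv2.INTER_LINEAR)
```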
Further, before acquiring the first position of the target object in the first image in step 101, the method further comprises:
detecting whether a target object exists in an image acquired by a first camera; controlling a second camera to be in an activated state under the condition that a target object exists in a first image acquired by a first camera; and controlling the second camera to be in a dormant state under the condition that the target object does not exist in the first image acquired by the first camera.
In the embodiment of the invention, after receiving a camera start instruction, the terminal can start the preview normally, start the first camera to collect images, and detect whether the image of a subject exists in the images collected by the first camera, i.e., whether the first camera has captured a subject.
When the first camera has not captured the subject, the second camera is controlled to be in the dormant state, i.e., powered on but not collecting images. Since intelligent composition is generally not triggered when the first camera has not captured the subject, and the second camera is only useful once intelligent composition has been triggered, the second camera can be kept in the dormant state in this case. This both reduces the power consumption of keeping the camera always on and still meets the need for a quick start whenever it is used. If the second camera is in the activated state while the first camera has not captured the subject, the second camera is controlled to switch from the activated state to the dormant state.
When the first camera has captured the subject, intelligent composition is considered to need triggering; at this moment, the second camera is controlled to be in the activated state and to acquire image data.
Preferably, the terminal may control the second camera to be in the dormant state immediately after receiving the camera start instruction, and adjust the second camera from the dormant state to the activated state only when the first camera has captured the subject, so that power consumption is reduced.
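A minimal sketch of this control policy (the state names and the update function are illustrative, not part of the patent):

```python
from enum import Enum, auto

class CamState(Enum):
    DORMANT = auto()    # powered on, not collecting images
    ACTIVATED = auto()  # collecting images

def update_second_camera(subject_detected: bool, state: CamState) -> CamState:
    """Keep the wide camera dormant until the first camera has a subject, so
    smart composition can start quickly without an always-on power cost."""
    if subject_detected and state is CamState.DORMANT:
        return CamState.ACTIVATED  # subject found: start acquiring the second image
    if not subject_detected and state is CamState.ACTIVATED:
        return CamState.DORMANT    # no subject: smart composition cannot trigger
    return state
```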
In summary, in the embodiment of the present invention, in order to enable a user to take a picture with a better composition effect, the images collected by two cameras with different viewing angles are fused, and a preview image is synthesized in which the position of the target object makes the overall composition of the image more harmonious and natural. According to this technical scheme, by fusing two sets of image data with different viewing angles, the content of the preview image can be adjusted automatically and the preview image with the best composition is presented to the user; the user neither needs to move the terminal to find the most appropriate composition nor needs rich shooting experience, which helps the user take excellent photos.
According to another aspect of the embodiments of the present invention, a terminal is provided, which can implement details in the above preview image generating method and achieve the same effect.
Wherein, the terminal includes: a first camera and a second camera arranged on the same side of the terminal, where the viewing angle of the first camera is smaller than that of the second camera.
As shown in fig. 4, the terminal 400 further includes:
the first obtaining module 401 is configured to obtain a first position of the target object in a first image acquired by the first camera.
A second obtaining module 402, configured to obtain a second position corresponding to the target composition mode.
Wherein the second position is a desired position of the target object in the first image.
A first determining module 403, configured to determine a target image area based on the first position acquired by the first acquiring module 401 and the second position acquired by the second acquiring module 402.
The target image area is an image area within the range of the shooting field of view of the first camera, and the position of the target object in the target image area is the same as the second position.
A third obtaining module 404, configured to obtain a first sub-image in the first image, which is located in the target image area determined by the first determining module 403, and a second sub-image in the target image area in the second image acquired by the second camera.
And the processing module 405 is configured to perform image fusion processing on the first sub-image and the second sub-image acquired by the third acquiring module 404, generate a third image, and output and display the third image as a target preview image.
Wherein the position of the target object in the target preview image is the same as the second position.
Further, the second obtaining module 402 includes:
The first determining unit is used for determining, among at least two preset composition modes, a target composition mode whose second position is closest to the first position.
The first acquiring unit is used for acquiring the second position of the target object in the first image according to the target composition mode determined by the first determining unit.
Further, the third obtaining module 404 includes:
The second determining unit is used for determining the image in the first image that is located in the target image area as the first sub-image, according to the area ranges of the first image and the target image area in the first coordinate system.
The first coordinate system is a coordinate system established according to the image collected by the first camera.
Further, the third obtaining module 404 includes:
a mapping unit for mapping the first image and the target image area into a second coordinate system.
A third determining unit, configured to determine a third sub-image in the second image, where the third sub-image is located in the target image area, after the mapping unit maps the first image and the target image area into a second coordinate system.
A fourth determining unit, configured to determine, as the second sub-image, an image in which the third sub-image determined by the third determining unit is not overlapped with the first image.
The second coordinate system is a coordinate system established according to the image acquired by the second camera.
Further, the processing module 405 includes:
and the zooming processing unit is used for zooming the second sub-image according to the pixel ratio of the images acquired by the first camera and the second camera.
And the processing unit is used for fusing the first sub-image and the second sub-image which is scaled by the scaling processing unit to generate the third image.
Specifically, the scaling processing unit is configured to scale the second sub-image according to a preset scaling formula, where the preset scaling formula is determined according to the pixel ratio of the images acquired by the first camera and the second camera.
Wherein fov _ n represents a field angle value of the first camera, w _ n represents a width value of the first image, and h _ n represents a height value of the first image; fov _ w represents the angle of field value of the second camera, w _ w represents the width value of the second image, h _ w represents the height value of the second image.
Specifically, the terminal further includes:
the detection module is used for detecting whether a target object exists in the image acquired by the first camera;
the first control module is used for controlling the second camera to be in an activated state under the condition that the detection module detects that a target object exists in the image acquired by the first camera;
and the second control module is used for controlling the second camera to be in a dormant state under the condition that the detection module detects that the target object does not exist in the image acquired by the first camera.
In the embodiment of the invention, in order to enable a user to shoot a picture with a better composition effect, the picture is composed intelligently during shooting: the images collected by two cameras with different viewing angles are fused, and a preview image is synthesized in which the position of the target object makes the overall composition of the image more harmonious and natural. According to this technical scheme, by fusing two sets of image data with different viewing angles, the content of the preview image can be adjusted automatically and the preview image with the best composition is presented to the user; the user neither needs to move the terminal to find the most appropriate composition nor needs rich shooting experience, which helps the user take excellent photos.
Fig. 5 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
The terminal 500 includes but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the terminal configuration shown in fig. 5 is not intended to be limiting, and that the terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 510, configured to obtain a first position of a target object in a first image acquired by the first camera and obtain a second position corresponding to a target composition mode; determining a target image area based on the first location and the second location; acquiring a first sub-image positioned in the target image area in the first image and a second sub-image positioned in the target image area in a second image acquired by the second camera; and performing image fusion processing on the first sub-image and the second sub-image to generate a third image, and outputting and displaying the third image as a target preview image through a display unit 506.
Wherein the second position is a desired position of the target object in the first image; the target image area is an image area within the range of the shooting field of view of the first camera, and the position of the target object in the target image area is the same as the second position; the position of the target object in the target preview image is the same as the second position.
In the embodiment of the invention, in order to enable a user to shoot a picture with a better composition effect, the picture is composed intelligently during shooting: the images collected by two cameras with different viewing angles are fused, and a preview image is synthesized in which the position of the target object makes the overall composition of the image more harmonious and natural. According to this technical scheme, by fusing two sets of image data with different viewing angles, the content of the preview image can be adjusted automatically and the preview image with the best composition is presented to the user; the user neither needs to move the terminal to find the most appropriate composition nor needs rich shooting experience, which helps the user take excellent photos.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during a message transceiving process or a call; specifically, it receives downlink data from a base station and forwards it to the processor 510 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The terminal provides wireless broadband internet access to the user through the network module 502, such as helping the user send and receive e-mails, browse web pages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the terminal 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sound and may be capable of processing such sound into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 501.
The terminal 500 also includes at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 5061 and/or a backlight when the terminal 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 5, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the terminal, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the terminal, and is not limited herein.
The interface unit 508 is an interface for connecting an external device to the terminal 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the terminal 500 or may be used to transmit data between the terminal 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the terminal, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the terminal. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The terminal 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 through a power management system, so that functions of managing charging, discharging, and power consumption are performed through the power management system.
In addition, the terminal 500 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a terminal, including a processor 510, a memory 509, and a computer program stored in the memory 509 and capable of running on the processor 510, where the computer program, when executed by the processor 510, implements each process of the above-mentioned preview image generation method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned preview image generating method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (12)
1. A preview image generation method, applied to a terminal, the terminal comprising: a first camera and a second camera arranged on the same side of the terminal, the viewing angle of the first camera being smaller than that of the second camera, characterized in that the method comprises the following steps:
acquiring a first position of a target object in a first image acquired by the first camera;
acquiring a second position corresponding to the target composition mode; wherein the second position is a desired position of the target object in the first image;
determining a target image area based on the first location and the second location; the target image area is an image area within the range of the shooting field of view of the first camera, and the position of the target object in the target image area is the same as the second position;
acquiring a first sub-image positioned in the target image area in the first image and a second sub-image positioned in the target image area in a second image acquired by the second camera;
performing image fusion processing on the first sub-image and the second sub-image to generate a third image, and outputting and displaying the third image as a target preview image; wherein the position of the target object in the target preview image is the same as the second position;
the acquiring a first sub-image of the first image in the target image area includes:
determining an image in the first image, which is located in the target image area, as a first sub-image according to the area ranges of the first image and the target image area in a first coordinate system;
the first coordinate system is a coordinate system established according to the image collected by the first camera;
the acquiring of the second sub-image in the second image acquired by the second camera and located in the target image area includes:
mapping the first image and the target image area into a second coordinate system;
determining a third sub-image in the second image, wherein the third sub-image is located in the target image area, according to the first image, the second image and the area range of the target image area in the second coordinate system;
determining an image of the third sub-image that is non-overlapping with the first image as a second sub-image;
and the second coordinate system is a coordinate system established according to the image acquired by the second camera.
2. The method according to claim 1, wherein the obtaining the second position corresponding to the target composition mode comprises:
determining a target composition mode of the second position closest to the first position in at least two preset composition modes;
and acquiring a second position of the target object in the first image according to the target composition mode.
3. The method according to claim 1, wherein the image fusion processing of the first sub-image and the second sub-image to generate a third image comprises:
scaling the second sub-image according to the pixel ratio of the images acquired by the first camera and the second camera;
and performing fusion processing on the first sub-image and the scaled second sub-image to generate the third image.
4. The method according to claim 3, wherein the scaling the second sub-image according to the pixel ratio of the images acquired by the first camera and the second camera comprises:
according to a preset scaling formula, scaling the second sub-image; the preset scaling formula is determined according to the pixel ratio of the images acquired by the first camera and the second camera;
wherein fov _ n represents a field angle value of the first camera, w _ n represents a width value of the first image, and h _ n represents a height value of the first image; fov _ w represents the angle of field value of the second camera, w _ w represents the width value of the second image, h _ w represents the height value of the second image.
5. The method of claim 1, wherein prior to acquiring the first position of the target object in the first image acquired by the first camera, the method further comprises:
detecting whether a target object exists in a first image acquired by the first camera;
controlling the second camera to be in an activated state under the condition that a target object exists in a first image acquired by the first camera;
and controlling the second camera to be in a dormant state under the condition that the target object does not exist in the first image acquired by the first camera.
6. A terminal, the terminal comprising: a first camera and a second camera arranged on the same side of the terminal, the viewing angle of the first camera being smaller than that of the second camera, characterized in that the terminal further comprises:
the first acquisition module is used for acquiring a first position of a target object in a first image acquired by a first camera;
the second acquisition module is used for acquiring a second position corresponding to the target composition mode; wherein the second position is a desired position of the target object in the first image;
the first determining module is used for determining a target image area based on the first position acquired by the first acquiring module and the second position acquired by the second acquiring module; the target image area is an image area within the range of the shooting field of view of the first camera, and the position of the target object in the target image area is the same as the second position;
the third acquisition module is used for acquiring a first sub-image positioned in the target image area determined by the first determination module in the first image and a second sub-image positioned in the target image area in a second image acquired by the second camera;
the processing module is used for carrying out image fusion processing on the first sub-image and the second sub-image acquired by the third acquisition module to generate a third image, and outputting and displaying the third image as a target preview image; wherein the position of the target object in the target preview image is the same as the second position;
the third acquisition module includes:
a second determining unit, configured to determine, according to the area ranges of the first image and the target image area in a first coordinate system, the image within the target image area in the first image as the first sub-image;
the first coordinate system is a coordinate system established according to the image collected by the first camera;
the third acquisition module further includes:
a mapping unit for mapping the first image and the target image area into a second coordinate system;
a third determining unit, configured to determine, after the mapping unit maps the first image and the target image area into the second coordinate system, a third sub-image located within the target image area in the second image;
a fourth determining unit, configured to determine, as the second sub-image, the part of the third sub-image determined by the third determining unit that does not overlap the first image;
and the second coordinate system is a coordinate system established according to the image acquired by the second camera.
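Taken together, these units describe a two-camera crop-and-fuse pipeline. The sketch below strings them together under strong simplifying assumptions (a plain scale-and-offset mapping between the two coordinate systems instead of real calibration, and a target area that lies fully inside the second image); it is illustrative, not the patent's implementation:

```python
import numpy as np
import cv2  # OpenCV, used here only for resizing

def compose_preview(img1, img2, area1, offset2, scale2):
    """img1/img2: first/second camera frames (H x W x 3 arrays);
    area1: target area (x0, y0, x1, y1) in first-camera coordinates;
    a first-camera point p is assumed to map into the second camera as
    p * scale2 + offset2 (pre-rectified cameras, an assumption)."""
    x0, y0, x1, y1 = area1
    # Map the target area into the second camera's coordinate system.
    X0, Y0 = int(x0 * scale2 + offset2[0]), int(y0 * scale2 + offset2[1])
    X1, Y1 = int(x1 * scale2 + offset2[0]), int(y1 * scale2 + offset2[1])

    # Third sub-image: the target area as seen by the wide camera, rescaled
    # back to the first camera's pixel density (cf. claims 3 and 4).
    canvas = cv2.resize(img2[Y0:Y1, X0:X1], (x1 - x0, y1 - y0))

    # First sub-image: the part of img1 inside the target area. Pasting it
    # over the canvas leaves visible exactly the wide-camera pixels that do
    # not overlap the first image, i.e. the second sub-image.
    h1, w1 = img1.shape[:2]
    ix0, iy0, ix1, iy1 = max(x0, 0), max(y0, 0), min(x1, w1), min(y1, h1)
    if ix0 < ix1 and iy0 < iy1:
        canvas[iy0 - y0:iy1 - y0, ix0 - x0:ix1 - x0] = img1[iy0:iy1, ix0:ix1]
    return canvas  # the fused third image, shown as the target preview
```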
7. The terminal of claim 6, wherein the second obtaining module comprises:
the first determining unit is used for determining, from among at least two preset composition modes, a target composition mode whose second position is closest to the first position;
and the first acquisition unit is used for acquiring the second position of the target object in the first image according to the target composition mode determined by the first determining unit.
8. The terminal of claim 6, wherein the processing module comprises:
the scaling processing unit is used for scaling the second sub-image according to the pixel ratio of the images acquired by the first camera and the second camera;
and the processing unit is used for fusing the first sub-image and the second sub-image which is scaled by the scaling processing unit to generate the third image.
9. The terminal according to claim 8, wherein the scaling processing unit is specifically configured to scale the second sub-image according to a preset scaling formula, wherein the preset scaling formula is determined according to the pixel ratio of the images acquired by the first camera and the second camera;
wherein fov_n represents the field angle value of the first camera, w_n represents the width value of the first image, and h_n represents the height value of the first image; fov_w represents the field angle value of the second camera, w_w represents the width value of the second image, and h_w represents the height value of the second image.
10. The terminal of claim 6, further comprising:
the detection module is used for detecting whether a target object exists in the image acquired by the first camera;
the first control module is used for controlling the second camera to be in an activated state under the condition that the detection module detects that a target object exists in the image acquired by the first camera;
and the second control module is used for controlling the second camera to be in a dormant state under the condition that the detection module detects that the target object does not exist in the image acquired by the first camera.
11. A terminal, comprising: a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the preview image generating method of any of claims 1 to 5.
12. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the steps of the preview image generating method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811581777.6A CN109474786B (en) | 2018-12-24 | 2018-12-24 | Preview image generation method and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109474786A CN109474786A (en) | 2019-03-15 |
CN109474786B (en) | 2021-07-23
Family
ID=65676497
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811581777.6A Active CN109474786B (en) | 2018-12-24 | 2018-12-24 | Preview image generation method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109474786B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112215048B (en) * | 2019-07-12 | 2024-03-22 | 中国移动通信有限公司研究院 | 3D target detection method, device and computer readable storage medium |
CN110830715B (en) * | 2019-10-31 | 2021-06-25 | 维沃移动通信(杭州)有限公司 | Photographing method and electronic equipment |
CN111277760B (en) * | 2020-02-28 | 2022-02-01 | Oppo广东移动通信有限公司 | Shooting composition method, terminal and storage medium |
EP4106315A4 (en) * | 2020-03-20 | 2023-08-16 | Huawei Technologies Co., Ltd. | Photographing method and device |
CN114065881B (en) * | 2020-07-29 | 2024-04-02 | 浙江莲荷科技有限公司 | Relationship establishment, object identification and prompting methods, devices, equipment and storage medium |
CN111770277A (en) * | 2020-07-31 | 2020-10-13 | RealMe重庆移动通信有限公司 | Auxiliary shooting method, terminal and storage medium |
CN112637515B (en) * | 2020-12-22 | 2023-02-03 | 维沃软件技术有限公司 | Shooting method and device and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101843092A (en) * | 2008-09-08 | 2010-09-22 | 索尼公司 | Image pickup apparatus, image pickup method, and program |
CN103905713A (en) * | 2012-12-26 | 2014-07-02 | 联想(北京)有限公司 | Control method, control device and electronic equipment |
CN104754228A (en) * | 2015-03-27 | 2015-07-01 | 广东欧珀移动通信有限公司 | Mobile terminal and method for taking photos by using cameras of mobile terminal |
US9369612B2 (en) * | 2013-08-20 | 2016-06-14 | Hanwha Techwin Co., Ltd. | Image fusion system and method |
CN107395971A (en) * | 2017-07-31 | 2017-11-24 | 努比亚技术有限公司 | A kind of image-pickup method, equipment and computer-readable recording medium |
CN109068055A (en) * | 2018-08-10 | 2018-12-21 | 维沃移动通信有限公司 | A kind of patterning process, terminal and storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6915432B1 (en) * | 1999-01-29 | 2005-07-05 | International Business Machines Corporation | Composing a realigned image |
EP2765769A4 (en) * | 2012-09-10 | 2014-12-24 | Huawei Tech Co Ltd | Image processing method and image processing device |
CN106888349A (en) * | 2017-03-30 | 2017-06-23 | 努比亚技术有限公司 | A kind of image pickup method and device |
CN107592467A (en) * | 2017-10-20 | 2018-01-16 | 维沃移动通信有限公司 | A kind of image pickup method and mobile terminal |
CN108513070B (en) * | 2018-04-04 | 2020-09-04 | 维沃移动通信有限公司 | Image processing method, mobile terminal and computer readable storage medium |
CN108377339A (en) * | 2018-05-07 | 2018-08-07 | 维沃移动通信有限公司 | A kind of photographic method and camera arrangement |
CN111541845B (en) * | 2020-04-30 | 2022-06-24 | 维沃移动通信(杭州)有限公司 | Image processing method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN109474786A (en) | 2019-03-15 |
Similar Documents
Publication | Title
---|---
CN108513070B (en) | Image processing method, mobile terminal and computer readable storage medium
CN109474786B (en) | Preview image generation method and terminal
CN111541845B (en) | Image processing method and device and electronic equipment
CN108881733B (en) | Panoramic shooting method and mobile terminal
CN109246360B (en) | Prompting method and mobile terminal
WO2019174628A1 (en) | Photographing method and mobile terminal
CN109348020B (en) | Photographing method and mobile terminal
CN110505400B (en) | Preview image display adjustment method and terminal
CN110602401A (en) | Photographing method and terminal
CN111010512A (en) | Display control method and electronic equipment
CN109194839B (en) | Display control method, terminal and computer readable storage medium
CN108881544B (en) | Photographing method and mobile terminal
CN107730460B (en) | Image processing method and mobile terminal
CN107749046B (en) | Image processing method and mobile terminal
CN109241832B (en) | Face living body detection method and terminal equipment
CN111031234B (en) | Image processing method and electronic equipment
CN111145192A (en) | Image processing method and electronic device
CN109819166B (en) | Image processing method and electronic equipment
CN108881721B (en) | Display method and terminal
CN108924422B (en) | Panoramic photographing method and mobile terminal
CN110798621A (en) | Image processing method and electronic equipment
CN108174110B (en) | Photographing method and flexible screen terminal
CN109104564B (en) | Shooting prompting method and terminal equipment
CN110290263B (en) | Image display method and mobile terminal
CN110602390B (en) | Image processing method and electronic equipment
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant