CN113873159A - Image processing method and device and electronic equipment - Google Patents

Info

Publication number
CN113873159A
Authority
CN
China
Prior art keywords: image, area, input, target, camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111163373.7A
Other languages
Chinese (zh)
Inventor
陈明杨 (Chen Mingyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111163373.7A priority Critical patent/CN113873159A/en
Publication of CN113873159A publication Critical patent/CN113873159A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/67: Focus control based on electronic image sensor signals

Abstract

The application discloses an image processing method and device, and electronic equipment. The method comprises the following steps: acquiring a first target image, where the first target image is an image of a first area in a first image, and the first image is an image acquired by a first camera; and displaying the first target image in a second area of a second image displayed in the shooting preview interface, where the second image is an image acquired by a second camera, the focal length of the first camera is greater than or less than that of the second camera, and the second area is an area corresponding to the first area.

Description

Image processing method and device and electronic equipment
Technical Field
The application belongs to the field of image processing, and particularly relates to an image processing method and device and electronic equipment.
Background
At present, camera and image processing technologies in mobile terminals are developing rapidly, and most mobile terminals can be equipped with camera modules covering multiple focal segments, which greatly improves their zooming capability. Such cameras can be divided into long-focus cameras and short-focus cameras: a long-focus camera has a large focal length and is suitable for shooting distant scenes, while a short-focus camera has a small focal length and is suitable for shooting nearby scenes. During shooting, if the short-focus camera is switched to the long-focus camera, the target object is likely to be lost from the shot picture because the field angle of the long-focus camera is small.
Disclosure of Invention
The embodiment of the application aims to provide an image processing method, an image processing device and electronic equipment, which can solve the problem that a target object in a shooting picture is lost when a user carries out zoom shooting.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring a first target image, wherein the first target image is an image of a first area in the first image, and the first image is an image acquired by a first camera;
and displaying the first target image in a second area of a second image displayed in a shooting preview interface, wherein the second image is an image collected by a second camera, the focal length of the first camera is greater than or less than that of the second camera, and the second area is an area corresponding to the first area.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
the first acquisition module is used for acquiring a first target image, wherein the first target image is an image of a first area in the first image, and the first image is an image acquired by a first camera;
the display module is used for displaying the first target image in a second area of a second image displayed in a shooting preview interface, wherein the second image is an image collected by a second camera, the focal length of the first camera is larger than or smaller than that of the second camera, and the second area is an area corresponding to the first area.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory electrically connected to the processor, where the memory stores a computer program, and the processor is configured to invoke and execute the computer program from the memory to implement the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium for storing a computer program, where the computer program is executable by a processor to implement the method of the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the application, a first target image of a first area in a first image is acquired first, and the first target image is then displayed in a second area of a second image displayed in the shooting preview interface. The first image is an image acquired by the first camera, the second image is an image acquired by the second camera, the focal length of the first camera is greater than or less than that of the second camera, and the second area is an area corresponding to the first area. Since the first area and the second area are local areas of the first image and the second image respectively, the user can check the zoom effect of a local area directly in the shooting preview image (i.e., the second image) and then decide whether to zoom based on that effect, which avoids losing the target object from the shot picture during zoom shooting. In addition, because the target image captured by the first camera is displayed within a local area of the second image captured by the second camera, part of the un-zoomed image and part of the zoomed image are shown simultaneously, making it easy for the user to compare the picture before and after zooming and making zoom shooting more engaging.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
Fig. 2 is a schematic interface diagram of a shooting preview interface according to an embodiment of the present application;
Fig. 3 is a schematic interface diagram of an image processing method according to an embodiment of the present application;
Fig. 4 is a first schematic interface diagram of an image processing method according to another embodiment of the present application;
Fig. 5 is a second schematic interface diagram of an image processing method according to another embodiment of the present application;
Fig. 6 is a first schematic flowchart of an image processing method according to another embodiment of the present application;
Fig. 7 is a second schematic flowchart of an image processing method according to another embodiment of the present application;
Fig. 8 is a schematic block diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 9 is a schematic block diagram of an electronic device according to an embodiment of the present application;
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiment of the application aims to provide an image processing method, an image processing device and electronic equipment, which can solve the problem that a target object in a shooting picture is lost when a user carries out zoom shooting.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects, and not necessarily to describe a particular order or sequence. It should be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one class, and their number is not limited; for example, the first object may be one object or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application, as shown in fig. 1, including the following steps S102-S104:
s102, a first target image is obtained. The first target image is an image of a first area in the first image, and the first image is an image acquired by the first camera.
Optionally, the shape of the first region in the first image may be regular or irregular. Regular shapes include, but are not limited to, a rectangle, a circle, or a square; an irregular shape may be any closed figure with an irregular contour.
And S104, displaying the first target image in a second area of the second image displayed in the shooting preview interface. The second image is an image collected by the second camera, the focal length of the first camera is greater than or smaller than that of the second camera, and the second area is an area corresponding to the first area.
In this embodiment, the focal length of the first camera is greater than or less than the focal length of the second camera; that is, the zoom magnifications of the first image and the second image are different. In the embodiments of the present application, "long-focus camera" and "short-focus camera" are relative terms. Specifically, if the focal length of the first camera is greater than that of the second camera, the first camera may be considered a long-focus camera relative to the second camera, and the second camera a short-focus camera relative to the first camera. If the focal length of the first camera is less than that of the second camera, the first camera may be considered a short-focus camera relative to the second camera, and the second camera a long-focus camera relative to the first camera.
When the first camera is a long-focus camera and the second camera is a short-focus camera, a first image acquired by the first camera is a long-focus picture, and a second image acquired by the second camera is a short-focus picture; when the first camera is a short-focus camera and the second camera is a long-focus camera, the first image collected by the first camera is a short-focus picture, and the second image collected by the second camera is a long-focus picture.
Fig. 2 is an interface schematic diagram of a shooting preview interface according to an embodiment of the present application, as shown in fig. 2, in a left diagram (a), a second image 22 is displayed on a shooting preview interface 21; in the right-hand diagram (b), the first target image 23 is displayed in the second region of the second image 22. Wherein the second area is illustrated as a rectangle.
In the embodiment of the application, a first target image of a first area in a first image is acquired first, and the first target image is then displayed in a second area of a second image displayed in the shooting preview interface. The first image is an image acquired by the first camera, the second image is an image acquired by the second camera, the focal length of the first camera is greater than or less than that of the second camera, and the second area is an area corresponding to the first area. Since the first area and the second area are local areas of the first image and the second image respectively, the user can check the zoom effect of a local area directly in the shooting preview image (i.e., the second image) and then decide whether to zoom based on that effect, which avoids losing the target object from the shot picture during zoom shooting. In addition, because the target image captured by the first camera is displayed within a local area of the second image captured by the second camera, part of the un-zoomed image and part of the zoomed image are shown simultaneously, making it easy for the user to compare the picture before and after zooming and making zoom shooting more engaging.
The method of the embodiments of the present application will be further described with reference to specific embodiments. In one embodiment, the second region in the second image may be determined as follows: before the first target image is acquired, object recognition is performed on the second image displayed in the shooting preview interface based on preset features to obtain a target object, and the image area where the target object is located is determined as the second area.
In this embodiment, the preset feature is a feature corresponding to a preset target object. For example, when the target object is a face image, the preset features may include face features; when the target object is an object, the preset features may include object contour features.
Taking the second image shown in fig. 2 as an example, as can be seen from the diagram (a) in fig. 2, a face is displayed in the second image 22, and if the face is used as a target feature, the region where the face is located can be determined by identifying the face feature in the second image 22, and then the region where the face is located is determined as the second region.
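As a non-limiting illustration of this face-based region determination, the sketch below uses OpenCV's stock Haar-cascade face detector in Python; the detector choice and all function names are assumptions for illustration, not part of the application.

```python
import cv2

# Assumed stand-in for "object recognition based on preset features":
# OpenCV's bundled frontal-face Haar cascade.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_second_region(second_image):
    """Return (x, y, w, h) of the area where the target face is located,
    or None if no face is recognized in the second image."""
    gray = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1,
                                           minNeighbors=5)
    if len(faces) == 0:
        return None
    # Treat the largest detection as the target object's image area.
    return max(faces, key=lambda f: f[2] * f[3])
```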
In the drawings of the following embodiments, the first camera is a long-focus camera, and the second camera is a short-focus camera. For the case that the first camera is a short-focus camera and the second camera is a long-focus camera, the implementation method is similar to that when the first camera is a long-focus camera and the second camera is a short-focus camera, and therefore repeated description is omitted.
In this embodiment, the target object (e.g., the face image and the object image) is obtained by performing object recognition on the second image displayed in the shooting preview interface based on the preset features (e.g., the face feature and the object contour feature), and then the target object can be subjected to zoom processing, so that the zoom processing effect of the target object is shown for the user, the situation that the target object is lost in the image after the image zoom processing is avoided, and the target object can be automatically recognized, thereby improving the convenience of zoom shooting.
In one embodiment, the second region in the second image may be determined as follows: before the first target image is acquired, a first input of the user on the second image displayed in the shooting preview interface is received, and the second area is then obtained in response to the first input.
In this embodiment, there are various input modes of the first input, which may be determined according to actual use requirements, and this is not limited in this embodiment of the application. The present embodiment is exemplified by the following three ways:
in the first mode, the first input is click input, and the click input may include click input, double click input, or click input of any number of touch points. For example, the first input is a click input, and when a click input of a user for a second image displayed in the shooting preview interface is received, the second region is obtained in response to the click input.
In the second mode, the first input is a voice command input by the user, and the content of the voice command corresponding to the first input can be preset. For example, when the voice instruction content corresponding to the first input is received and recognized, the second area is obtained in response to the voice instruction content.
In the third mode, the first input is a specific gesture input by the user, such as any one of a single-tap gesture, a slide gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-tap gesture. For example, if the first input is a slide gesture, then when a slide gesture of the user on the second image displayed in the shooting preview interface is received, the second area is obtained in response to the slide gesture.
In this embodiment, the size information of the second region may be preset, and after the first input is responded to, the region in the second image that contains the first input position and conforms to the preset size may be determined as the second region.
Optionally, the second area is centered on the first input position. For example, suppose the second area is rectangular, the coordinates of the first input position are C2(a, b), and the preset size information of the second region includes a length L and a width W. When the first input of the user is received, the region in the second image centered on C2(a, b) and matching the length L and the width W can be determined as the second region.
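A minimal sketch of this centered-region computation (clamping to the image bounds is an added assumption so the region stays inside the second image):

```python
def region_around_touch(touch, preset_size, image_size):
    """Second area: a rectangle of preset length L and width W centered
    on the first input position C2(a, b), kept inside the second image."""
    (a, b) = touch
    (length, width) = preset_size      # preset L and W
    (img_w, img_h) = image_size
    left = min(max(a - length // 2, 0), img_w - length)
    top = min(max(b - width // 2, 0), img_h - width)
    return (left, top, length, width)
```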
Optionally, the second area includes the first input position but is not centered on the first input position, and for this case, a first ratio of the target object (such as a face image, an object image, etc.) to the second image may be determined first, and then the second area may be determined based on the first ratio.
In this embodiment, the first ratio may be the ratio of the area occupied by the target object to the area of the second image. The size of the second region is positively correlated with the first ratio: the larger the first ratio of the target object in the second image, the larger the second area; the smaller the first ratio, the smaller the second area.
For example, the features of the face image in the second image are identified through a face detection algorithm, the contour information of the face image is obtained, and the area of the face image is calculated according to the identified features and contour information of the face image; then, a first proportion of the face image in the second image is obtained by calculating a ratio of the area of the face image to the total area of the second image; and determining a second area in the second image according to the determined first proportion.
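The ratio computation and the positive correlation between the first ratio and the region size could look like the following sketch (the contour-area route mirrors the face-detection example above; base size and gain are illustrative constants, not values from the application):

```python
import cv2

def first_ratio(face_contour, second_image):
    """First proportion: area of the face region over the total area
    of the second image."""
    face_area = cv2.contourArea(face_contour)
    total_area = second_image.shape[0] * second_image.shape[1]
    return face_area / total_area

def region_size_from_ratio(ratio, base_size=(200, 200), gain=4.0):
    """Second-area size grows with the first ratio (positive correlation).
    base_size and gain are assumed tuning constants."""
    scale = 1.0 + gain * ratio
    return (int(base_size[0] * scale), int(base_size[1] * scale))
```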
In this embodiment, when the second image includes a target object such as a face image, the second area may be determined according to the first ratio of the face image in the second image, so that the face image subjected to zoom processing is displayed as completely as possible in the second area in the second image, thereby facilitating a user to obtain a complete zoomed face image.
In one embodiment, after the second region is obtained according to the first input, the center point of the first region in the first image may be determined through the following steps A1 and A2:
Step A1, acquire a second center point of the second area.
Step A2, determine a first center point according to the camera parameters of the first camera and the second center point. The first center point is the center point of the first area in the first image.
In this step, the camera parameter may be a focal length corresponding to the camera, and since the sizes of the image sensors corresponding to the cameras with different focal lengths are different, a first central point corresponding to the second central point on the first image needs to be determined according to a proportional relationship between the focal lengths corresponding to the first camera and the second camera.
The first center point and the second center point can be represented by coordinates. Since the first image and the second image are not necessarily the same size, the conversion between coordinate values in the two coordinate systems corresponding to the first image and the second image is related to the ratio of the focal lengths of the first camera and the second camera and to the image sizes. For example, suppose the first image is a W × H rectangle (length × width) and the second image is a w × h rectangle (length × width). A point on the second image has coordinates (a, b), and the corresponding point on the first image has coordinates (x, y). Two coordinate systems are established based on the size information of the first image and the second image, where W and w are the horizontal extents and H and h the vertical extents of the respective coordinate systems. The coordinate values of the first image and the second image then satisfy the proportional relationship x/W = a/w and y/H = b/h. Based on this relationship, the coordinates (x, y) of the point on the first image corresponding to the point (a, b) can be determined, so the corresponding point coordinates on the first image can be computed from any point coordinates on the second image. On this principle, the first center point on the first image corresponding to the second center point can be determined.
After determining the first center point, the image of the first region in the first image may be cropped based on the first center point and the size of the second region, thereby obtaining a first target image.
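Putting the mapping and the crop together, a sketch under the stated proportional relationship (the final resize, which scales the crop back to the second area's pixel size, is an implementation assumption):

```python
import cv2

def map_center(second_center, first_size, second_size):
    """Map the second center point (a, b) onto the first image using
    x / W = a / w and y / H = b / h."""
    a, b = second_center
    W, H = first_size                  # first image: length W, width H
    w, h = second_size                 # second image: length w, width h
    return (a * W / w, b * H / h)

def crop_first_target(first_image, first_center, region_size,
                      first_size, second_size):
    """Cut the first area out of the first image around the first center
    point and fit it to the second area's size."""
    W, H = first_size
    w, h = second_size
    rw, rh = region_size               # size of the second area
    cw, ch = int(rw * W / w), int(rh * H / h)   # same relative extent
    x, y = int(first_center[0]), int(first_center[1])
    left = min(max(x - cw // 2, 0), W - cw)
    top = min(max(y - ch // 2, 0), H - ch)
    crop = first_image[top:top + ch, left:left + cw]
    return cv2.resize(crop, (rw, rh))  # dsize is (width, height)
```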
In this embodiment, the first central point corresponding to the second central point on the first image is determined through the camera parameters of the first camera and the second central point, and then the target image corresponding to the first area in the first image is cut out according to the size information of the second area, so that the image effect that the second image displayed in the second area is covered by the target image can be presented, and a user can compare the zooming effect of the target image after zooming processing.
In one embodiment, after the first target image is displayed in S104, the current first target image may be switched, according to a second input, to a second target image taken from another area of the first image. The switching may be performed as follows: a second input of the user on a third area in the second image is received; in response to the second input, a second target image is acquired, the second target image is displayed in the third area, and the display of the first target image is cancelled. The second target image is an image of a fourth area in the first image, and the fourth area is the area corresponding to the third area.
In this embodiment, the second input may be: the click input of the user on the second image, or the voice instruction input by the user, or the specific gesture input by the user may be specifically determined according to the actual use requirement, which is not limited in the embodiment of the present application.
In addition, a click input in the embodiments of the application can be a single-click input, a double-click input, or a click input with any number of touch points, and can also be a long-press or short-press input; the specific gesture may be any one of a single-tap gesture, a slide gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-tap gesture.
For example, the second input may be a double-click input. After the first target image is displayed, a double-click input of the user on the third area in the second image is received, and the second target image of the fourth area in the first image is acquired in response to the double-click input.
As another example, the second input may be a drag gesture. After the first target image is displayed, a drag gesture of the user on the second area in the second image is received, the input position of the drag gesture is acquired in real time, the fourth area in the first image is obtained according to the final input position, and the second target image of the fourth area is then acquired.
Fig. 3 is an interface schematic diagram of an image processing method according to an embodiment of the present application. As shown in fig. 3, in the left diagram (a), the second area of the second image 32 in the shooting preview interface 31 displays the first target image 33; in the right diagram (b), the third area of the second image 32 in the shooting preview interface 31 displays the second target image 34. Before a second input of the user on the second image 32 is received, the shooting preview interface 31 displays the first target image 33 in the second area of the second image 32, as shown in the left diagram (a); after the second target image 34 of the fourth area in the first image (not shown in the figure) is acquired in response to the second input of the user on the second image 32, the shooting preview interface 31 displays the second target image 34 in the third area of the second image 32, as shown in the right diagram (b).
In this embodiment, by receiving a second input to the second image from the user, the zoom area in the second image can be smoothly switched, so that the user can change the zoom object quickly and smoothly during the zoom photographing process.
In one embodiment, after the first target image is displayed in S104, the user may trigger display of the first image on the shooting preview interface through an input, i.e., display the complete zoom picture. Optionally, the user performs a third input within the second area, and when the third input on the second area is received, the first image is displayed in the shooting preview interface in response to the third input. In this way, after previewing the local zoom picture, the user can trigger display of the complete zoom picture, which ensures the integrity of the target object in the zoom picture and makes the display of the zoom picture smoother.
In this embodiment, the third input may be: the click input of the user on the second image, or the voice instruction input by the user, or the specific gesture input by the user may be specifically determined according to the actual use requirement, which is not limited in the embodiment of the present application.
The click input can be a single-click input, a double-click input, or a click input with any number of touch points, and can also be a long-press or short-press input; the specific gesture may be any one of a single-tap gesture, a slide gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-tap gesture.
Take the image of the shooting preview interface shown in fig. 4 as an example. In the left diagram (a), the first target image 43 is displayed in the second area of the second image 42; in the right diagram (b), the complete first image 44 is displayed on the shooting preview interface 41. Assuming the third input is a double-click input: after the first target image 43 is displayed in the second area of the second image 42, the shooting preview interface 41 is as shown in the left diagram (a); if the complete first image 44 is to be displayed, the user can perform a double-click input on the second area (i.e., the area corresponding to the first target image 43), and when the double-click input on the second area is received, the complete first image 44 is displayed as shown in diagram (b).
In this embodiment, the effect of quickly switching the second image currently covered by the target image to the complete first image can be achieved by receiving a third input of the user in the second area, that is, the user presents a complete zoom screen of the second image after zoom processing, so as to achieve a smoother and complete zoom effect.
In one embodiment, after the first target image is displayed, if the user wants to restore the original image, i.e., to exit the partially zoomed display, a fourth input may be performed on the second image. When the fourth input of the user on an area of the second image other than the second area is received, the display of the first target image in the shooting preview interface is cancelled in response to the fourth input.
In this embodiment, the fourth input may be a click input of the second image by the user, or a voice instruction input by the user, or a specific gesture input by the user, which may be specifically determined according to actual use requirements, and this is not limited in this embodiment of the present application.
The click input in the embodiments of the application can be a single-click input, a double-click input, or a click input with any number of touch points, and can also be a long-press or short-press input; the specific gesture may be any one of a single-tap gesture, a slide gesture, a drag gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-tap gesture.
Take the image on the shooting preview interface shown in fig. 5 as an example. In the left diagram (a), the second area of the second image 52 displays the first target image 53; in the right diagram (b), the second image 52 is displayed in its entirety on the shooting preview interface 51. Assuming the fourth input is a click input: after the first target image 53 is displayed in the second region of the second image 52, if the complete second image 52 is to be displayed, the user may click an area of the second image 52 other than the second region; when that click input is received, the display of the first target image 53 is cancelled and the complete second image 52 is displayed, as shown in the right diagram (b) of fig. 5.
In this embodiment, by receiving the fourth input of the user on the second image, the partially covered second image can be quickly restored to the complete second image, making the cancellation of zooming smoother.
In one embodiment, after a first input of the user on the second image displayed in the shooting preview interface is received, a target zoom magnification can be determined according to the input parameters of the first input, and the first camera is then determined according to the target zoom magnification.
In this embodiment, for different first input modes, input parameters corresponding to the input modes are determined, and then a target zoom magnification is determined according to the input parameters.
Suppose the first input is a click input on the second image and the available zoom magnifications include 1.2 times, 1.5 times, and 1.75 times; the input parameters may then include the number of clicks and/or the number of touch points. For example, a click input with one touch point represents zooming in, and a click input with two touch points represents zooming out. With one touch point, a single-click input corresponds to zooming in by 1.2 times, a double-click input to zooming in by 1.5 times, and a triple-click input to zooming in by 1.75 times; with two touch points, a single-click input corresponds to zooming out by 1.2 times, a double-click input to zooming out by 1.5 times, and a triple-click input to zooming out by 1.75 times. Thus, when a single-click input with one touch point on the second image is received, the target zoom magnification is determined to be zooming in by 1.2 times; when a double-click input with two touch points is received, the target zoom magnification is determined to be zooming out by 1.5 times.
That is, if the click input has one touch point, the target zoom magnification is a zoom-in magnification, and the first camera is a long-focus camera relative to the second camera; if the click input has two touch points, the target zoom magnification is a zoom-out magnification, and the first camera is a short-focus camera relative to the second camera.
If the first input is a voice instruction input by the user, the voice instruction content corresponding to each input parameter can be preset. For example, the voice instruction contents corresponding to the respective target zoom magnifications are set in advance, such as "zoom out by 1.2 times", "zoom out by 1.5 times", and "zoom out by 1.75 times", or "zoom in by 1.2 times", "zoom in by 1.5 times", and "zoom in by 1.75 times". That is, if the voice instruction content includes "zoom in", the first camera is a long-focus camera relative to the second camera; if it includes "zoom out", the first camera is a short-focus camera relative to the second camera.
The first input may also be a specific gesture input by the user, such as a long-press gesture or a swipe gesture. For example, when the specific gesture is a long-press gesture, the input parameter may be the press duration of the gesture, and the target zoom magnification is then determined according to that duration, with which it is either positively or negatively correlated. That is, if the target zoom magnification is positively correlated with the press duration of the long-press gesture, the first camera is the long-focus camera; if negatively correlated, the first camera is the short-focus camera.
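The click-based branch of this mapping can be sketched as follows; the table of magnifications comes from the example above, while the function shape and names are assumptions:

```python
# One touch point means zoom in (first camera is the long-focus camera);
# two touch points mean zoom out. The click count picks the magnification.
CLICKS_TO_MAGNIFICATION = {1: 1.2, 2: 1.5, 3: 1.75}

def target_zoom_from_click(num_touch_points, num_clicks):
    """Return (magnification, direction) for a click-type first input."""
    magnification = CLICKS_TO_MAGNIFICATION.get(num_clicks)
    if magnification is None:
        raise ValueError("unsupported click count")
    direction = "in" if num_touch_points == 1 else "out"
    return magnification, direction
```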
In this embodiment, the target zoom magnification is determined according to the first input parameter, and the target zoom magnification can be determined while the second area is determined, so that smoothness and fluency of zoom processing are improved, the operation is simple, and convenience is brought to a user.
In one embodiment, the target zoom magnification may be determined based on the input parameters of the first input; take as an example the case where the first input is a specific gesture, namely a long-press gesture. Optionally, the target zoom magnification is determined according to the press duration of the long-press gesture. If the press duration exceeds a preset duration, the first image, obtained by zooming the second image at the target zoom magnification, can be acquired, and the first target image is then obtained based on the position of the long-press gesture and the target zoom magnification. On this basis, if the target zoom magnification is positively correlated with the press duration, it increases as the press duration increases; if negatively correlated, it decreases as the press duration increases.
For example, the available zoom magnifications may include 1.2 times, 1.5 times, 1.75 times, and 2 times, with a default of 1.5 times. If the zoom magnification is positively correlated with the press duration, the magnification can increase one step for every additional second the long-press gesture is held: starting from the default 1.5 times, one more second of pressing raises it to 1.75 times. Conversely, if the zoom magnification is negatively correlated with the press duration, it can decrease one step per additional second: starting from the default 1.5 times, one more second of pressing lowers it to 1.2 times.
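The duration stepping in this example might look like the sketch below (the level list and one-step-per-second rule come from the example; the clamping at the ends of the list is an assumption):

```python
LEVELS = [1.2, 1.5, 1.75, 2.0]         # available zoom magnifications

def magnification_from_press(duration_s, positive=True, default_idx=1):
    """Step up or down from the default 1.5x, one level per second held."""
    steps = int(duration_s)
    idx = default_idx + steps if positive else default_idx - steps
    idx = max(0, min(idx, len(LEVELS) - 1))
    return LEVELS[idx]
```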
In this embodiment, the zoom magnification is switched according to the press duration of the long-press gesture, so that the user can quickly adjust the zoom magnification to the target value during zoom shooting, which improves convenience.
In one embodiment, when the target zoom magnification is determined according to the input parameters of the first input, if the second image includes a target object, the first ratio of the target object in the second image and the position of the area where the target object is located in the second image both serve as input parameters of the first input. Taking the target object as a face image as an example, the target zoom magnification may be determined through the following steps B1 and B2:
step B1, if the position of the first input is in the area of the face image, determining a first proportion of the face image in the second image.
Optionally, identifying the characteristics of the face image in the second image through a face detection algorithm, acquiring the contour information of the face image, and calculating the area of the face image according to the identified characteristics and contour information of the face image; and then, calculating the ratio of the area of the face image to the area of the second image to obtain the first proportion of the face image to the second image.
Step B2, determine the target zoom magnification according to at least one of the first proportion of the face image in the second image and the position of the area where the face image is located in the second image.
In this step, in order to keep the face image complete after zoom processing, the target zoom magnification may be determined in real time according to the first ratio of the face image in the second image: if the first proportion of the face image in the second image is large, the target zoom magnification can be set smaller; if the first proportion is small, the target zoom magnification can be set larger.
Optionally, the target zoom magnification may also be determined according to the position of the region where the face image is located in the second image. For example, if the region where the face image is located is at the center of the second image, the target zoom magnification can be appropriately increased; if it is at an edge of the second image, the target zoom magnification can be appropriately decreased, preventing the zoomed face image at the edge from being displayed incompletely or lost from the picture because the target zoom magnification is too large.
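One hedged way to realize steps B1-B2 is to cap the magnification when the face is large in the frame or near an edge; the thresholds below are purely illustrative assumptions:

```python
def clamp_zoom_for_face(zoom, face_ratio, face_center, image_size,
                        edge_margin=0.15, large_face_ratio=0.25):
    """Lower the target zoom magnification when the face occupies a large
    first proportion of the second image or sits near an edge, so that
    the zoomed face image stays complete."""
    w, h = image_size
    cx, cy = face_center
    near_edge = (cx < w * edge_margin or cx > w * (1 - edge_margin) or
                 cy < h * edge_margin or cy > h * (1 - edge_margin))
    if face_ratio > large_face_ratio or near_edge:
        return min(zoom, 1.2)          # fall back to the smallest step
    return zoom
```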
In this embodiment, the target zoom magnification is adjusted in real time by considering the proportion of the face image in the second image and the position of the region where the face image is located in the second image, so that the face image subjected to zoom processing is displayed in the target image as completely as possible, and the situation that the target image including the complete face image cannot be obtained is avoided.
Fig. 6 is a schematic flowchart of an image processing method according to another embodiment of the present application, and as shown in fig. 6, the method includes the following steps S601 to S612:
s601, receiving a first input of a user to the second image displayed in the shooting preview interface.
S602, responding to the first input, and obtaining a second area.
Here, object recognition may be performed on the second image displayed in the shooting preview interface based on preset features to obtain a target object, and the image area where the target object is located is then determined as the second area; alternatively, the size information of the second area may be preset, and the region in the second image that contains the first input position and conforms to the preset size may be determined as the second area.
And S603, determining the target zoom magnification according to the input parameters of the first input. And then determining the first camera according to the target zoom magnification.
The input parameters corresponding to the input mode are determined according to the specific form of the first input, and the target zoom magnification is then determined from those parameters. If the second image includes a target object, the first proportion of the target object in the second image and the area where the target object is located serve as input parameters of the first input.
S604, acquiring a second central point of the second area.
And S605, determining a first central point according to the camera parameters of the first camera and the second central point.
The first central point is a central point of a first area in the first image. The camera parameters include a focal length of the camera.
S606, based on the first center point and the size of the second area, crop the image of the first area from the first image to obtain the first target image.
The first target image is an image of a first area in the first image, and the first image is an image acquired by the first camera.
S607, the first target image is displayed in the second area of the second image displayed in the shooting preview interface.
The focal length of the first camera is larger than or smaller than that of the second camera, and the second area is an area corresponding to the first area.
S608, receiving a second input of the user to the third area in the second image.
The third area is a local area in the second image except the second area.
And S609, responding to the second input, and acquiring a second target image.
The second target image is an image of a fourth area in the first image, and the fourth area is an area corresponding to the third area on the first image.
S610, displaying the second target image in the third area, and canceling the displaying of the first target image.
S611, receiving a third input to the second area from the user.
And S612, responding to a third input, and displaying the first image in the shooting preview interface.
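Tying S601-S607 together with the illustrative helpers defined earlier (region_around_touch, target_zoom_from_click, map_center, crop_first_target), a hedged end-to-end sketch:

```python
def preview_zoom_flow(touch, num_clicks, second_image, first_image):
    """Run S601-S607 once and return the composited preview frame."""
    h2, w2 = second_image.shape[:2]
    h1, w1 = first_image.shape[:2]
    left, top, rw, rh = region_around_touch(       # S602
        touch, (200, 200), (w2, h2))
    # S603: magnification/direction would select which physical camera
    # supplies first_image; that selection is assumed done upstream.
    magnification, direction = target_zoom_from_click(1, num_clicks)
    second_center = (left + rw / 2, top + rh / 2)  # S604
    first_center = map_center(second_center,       # S605
                              (w1, h1), (w2, h2))
    target = crop_first_target(first_image, first_center,  # S606
                               (rw, rh), (w1, h1), (w2, h2))
    preview = second_image.copy()                  # S607: overlay the
    preview[top:top + rh, left:left + rw] = target # first target image
    return preview
```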
In this embodiment, the input modes corresponding to the first input, the second input, and the third input have been described in detail in the above embodiments, and are not described herein again.
In the embodiment of the application, by receiving the user's input on a local area of the second image other than the second area, the second area is switched to the third area (that is, the first area is switched to the fourth area) while the second image remains displayed. The target image can thus be switched in real time to confirm that it contains the intended zoom object, which improves the accuracy and operability of zoom processing and lets the user track the zoom effect of different local areas of the second image in real time.
In one embodiment, when the second image includes at least one target object, if the first input position is located within the area where one of the target objects is located on the second image, local zoom processing may be performed on that target object. In this embodiment, taking the target object as a face image as an example, fig. 7 is a flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 7, the method includes:
s701, receiving a first input of a user on a second image, wherein the second image comprises a face image.
S702, if the first input position is located in the area where the face image is located, determining a first proportion of the face image in the second image.
In this step, the features of the face image in the second image (the short-focus picture) can be identified through a face detection algorithm, the contour information of the face image is acquired, and the area of the face image is calculated, thereby obtaining the first proportion of the face image in the second image.
And S703, determining the target zoom magnification according to the first ratio and the position of the area where the face image is located in the second image.
In this step, in order to avoid the incomplete face image after zoom processing, a target zoom magnification may be determined according to a first ratio of the face image to the second image. Or determining the target zoom magnification according to the position of the area where the face image is located in the second image.
And S704, determining a first camera according to a first input position of the user on the second image and the target zoom magnification.
S705, a first target image is obtained through the first camera.
The first target image is a face image of a first area in the first image, and the first image is an image acquired by the first camera. If the first camera is a long-focus camera relative to the second camera, determining that the first target image is a long-focus picture; and if the first camera is a short-focus camera relative to the second camera, determining that the first target image is a short-focus picture.
S706, the first target image is displayed in the second area of the second image displayed in the shooting preview interface.
The second image is an image collected by the second camera, wherein the focal length of the first camera is greater than or less than that of the second camera, and the second area is an area corresponding to the first area.
After this step, if a third input of the user on the second image is received, S707 is executed; if a fourth input of the user on an area other than the second area in the second image is received, S708 is executed.
And S707, canceling the display of the first target image, and acquiring and displaying the first image.
S708, the first target image is canceled from being displayed, and the complete second image is displayed at the same time.
In this embodiment, the input modes corresponding to the first input, the third input and the fourth input have been described in detail in the above embodiments, and are not described herein again.
By adopting the embodiment of the application, when the target object in the second image is a face image, the target zoom magnification is adjusted according to the proportion of the face image and the position of the area where the face image is located in the second image, and the second area is determined based on the first proportion of the face image in the second image, so that the zoomed face image is displayed as completely as possible in the target image. This avoids the situation where a target image containing the complete face image cannot be obtained, and enhances both the appeal of zoom processing and image integrity. In addition, input operations of the user in different areas (the second area and non-second areas) can be received, achieving quick switching to the complete first image or second image.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiments of the present application, the image processing apparatus is described by taking the case where the image processing apparatus executes the image processing method as an example.
Fig. 8 is a schematic block diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 8, the apparatus includes:
a first obtaining module 810, configured to obtain a first target image, where the first target image is an image of a first area in a first image, and the first image is an image acquired by a first camera;
the display module 820 is configured to display the first target image in a second area of a second image displayed in a shooting preview interface, where the second image is an image collected by a second camera, a focal length of the first camera is greater than or less than a focal length of the second camera, and the second area is an area corresponding to the first area.
In the embodiment of the application, a first target image of a first area in a first image is acquired first, and the first target image is then displayed in a second area of a second image displayed in the shooting preview interface. The first image is an image acquired by the first camera, the second image is an image acquired by the second camera, the focal length of the first camera is greater than or less than that of the second camera, and the second area is an area corresponding to the first area. Since the first area and the second area are local areas of the first image and the second image respectively, the user can check the zoom effect of a local area directly in the shooting preview image (i.e., the second image) and then decide whether to zoom based on that effect, which avoids losing the target object from the shot picture during zoom shooting. In addition, because the target image captured by the first camera is displayed within a local area of the second image captured by the second camera, part of the un-zoomed image and part of the zoomed image are shown simultaneously, making it easy for the user to compare the picture before and after zooming and making zoom shooting more engaging.
In one embodiment, the apparatus further comprises:
the recognition module is used for carrying out object recognition on a second image displayed in the shooting preview interface based on preset characteristics before the first target image is obtained, so as to obtain a target object;
and the first determining module is used for determining the image area where the target object is located as a second area.
In the embodiment, the target object is obtained by performing object recognition on the second image displayed in the shooting preview interface based on the preset features, so that zooming processing can be performed on the target object, the zooming processing effect of the target object is displayed for a user, the situation that the target object is lost in a picture after the image zooming processing is avoided, the effect of automatically recognizing the second area is realized, and the convenience of zooming shooting is improved.
In one embodiment, the apparatus further comprises:
the first receiving module is used for receiving first input of a user on a second image displayed in the shooting preview interface before the first target image is acquired;
and the response module is used for responding to the first input to obtain a second area.
In the embodiment, the second area is obtained in response to the first input, so that the target image is conveniently displayed in the second area of the second image, and a user can see the zooming effect of the local area in the picture.
In one embodiment, the apparatus further comprises:
the second acquisition module is used for acquiring a second central point of the second area after the second area is obtained;
the second determining module is used for determining a first central point according to the camera parameters of the first camera and the second central point, wherein the first central point is the central point of a first area in the first image;
the first obtaining module 810 includes:
and the cropping unit is used for cropping the image of the first area from the first image based on the first center point and the size of the second area to obtain the first target image.
In this embodiment, the first central point corresponding to the second central point on the first image is determined through the camera parameters of the first camera and the second central point, and then the target image corresponding to the first area in the first image is cut out according to the size information of the second area, so that the image effect that the second image displayed in the second area is covered by the target image can be presented, and a user can compare the zooming effect of the target image after zooming processing.
In one embodiment, the apparatus further comprises:
the second receiving module is used for receiving a second input of a user to a third area in the second image after the first target image is displayed;
a third obtaining module, configured to obtain, in response to the second input, a second target image, where the second target image is an image of a fourth region in the first image, and the fourth region is a region corresponding to the third region;
and the display module is used for displaying the second target image in the third area and canceling the display of the first target image.
In this embodiment, by receiving the user's second input on the second image, the zoom area in the second image can be switched smoothly, so that the user can change the zoom object quickly during zoom shooting.
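A hypothetical handler for such a second input, reusing the two helpers sketched above, would simply recompute the crop for the newly chosen third area and re-composite the preview:

def on_second_input(first_image, second_image, third_area,
                    focal_first, focal_second):
    # third_area: (x, y, w, h) rectangle selected by the user's second input.
    x, y, w, h = third_area
    center = (x + w / 2, y + h / 2)
    # The second target image is the image of the fourth area in the first
    # image, the fourth area being the area corresponding to the third area.
    second_target = crop_first_target_image(
        first_image, second_image, center, (w, h), focal_first, focal_second)
    # Re-compositing displays the second target image in the third area and
    # implicitly cancels the display of the first target image.
    return compose_zoom_preview(second_image, second_target, third_area)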
In one embodiment, the apparatus further comprises:
a third receiving module, configured to receive a third input to the second area by the user after the first target image is displayed;
and the response and display module is used for responding to the third input and displaying the first image in the shooting preview interface.
In this embodiment, by receiving the user's third input on the second image, the second image currently covered by the target image can be quickly switched to the complete first image; that is, a complete zoomed picture is presented to the user, achieving a smoother and more complete zoom effect.
In one embodiment, the apparatus further comprises:
the third determining module is used for responding to a first input after the first input of a user for a second image displayed in a shooting preview interface is received, and determining a target zoom magnification according to an input parameter of the first input;
and the fourth determining module is used for determining the first camera according to the target zoom magnification.
In this embodiment, the target zoom magnification is determined according to the input parameter of the first input, so that the target zoom magnification can be determined at the same time as the second area, which improves the smoothness and fluency of zoom processing, keeps the operation simple, and brings convenience to the user.
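As a final illustrative sketch (the focal-segment table and the pinch-gesture interpretation are both assumptions, not taken from the patent), the input parameter of the first input could be a pinch scale that fixes the target zoom magnification, after which the first camera is picked as the module whose native magnification is closest:

def determine_camera(pinch_scale, current_zoom=1.0):
    # Target zoom magnification derived from the first input's parameter.
    target_zoom = current_zoom * pinch_scale
    # Invented focal segments: camera name -> equivalent zoom magnification.
    cameras = {"ultrawide": 0.6, "wide": 1.0, "tele": 3.0, "periscope": 10.0}
    # Choose the first camera as the one whose magnification best matches.
    first_camera = min(cameras, key=lambda name: abs(cameras[name] - target_zoom))
    return target_zoom, first_camera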
An image processing apparatus in the embodiments of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine, which is not specifically limited in the embodiments of the present application.
An image processing apparatus in an embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 7, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 9, an electronic device 900 is further provided in this embodiment of the present application, which includes a processor 901, a memory 902, and a program or an instruction stored in the memory 902 and executable on the processor 901, where the program or the instruction, when executed by the processor 901, implements each process of the foregoing image processing method embodiment and can achieve the same technical effect, which is not repeated here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and a power supply 1011. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 10 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, some components may be combined, or a different arrangement of components may be used. In the embodiment of the present application, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 1010 is configured to acquire a first target image, where the first target image is an image of a first area in a first image, and the first image is an image acquired by a first camera;
the display unit 1006 is configured to display the first target image in a second area of a second image displayed in a shooting preview interface, where the second image is an image acquired by a second camera, a focal length of the first camera is greater than or less than a focal length of the second camera, and the second area is an area corresponding to the first area.
In the embodiment of the application, a first target image of a first area in a first image is acquired first, and the first target image is then displayed in a second area of a second image displayed in the shooting preview interface. The first image is an image collected by the first camera, the second image is an image collected by the second camera, the focal length of the first camera is greater than or less than that of the second camera, and the second area is an area corresponding to the first area. Since the first area and the second area are local areas in the first image and the second image respectively, the user can check the zoom effect of a local area in the shooting preview image (i.e., the second image) and then decide whether to zoom according to that effect, which avoids the situation that a target object is lost from the shooting picture during zoom shooting. In addition, because the first target image collected by the first camera is displayed in a local area of the second image collected by the second camera, part of the un-zoomed preview picture and part of the zoomed picture are displayed simultaneously, which makes it convenient for the user to compare the local area before and after zoom processing and makes zoom shooting more engaging.
In an embodiment, before the acquiring of the first target image, the processor 1010 is further configured to perform object recognition on the second image displayed in the shooting preview interface based on preset features, so as to obtain a target object, and to determine the image area where the target object is located as a second area.
In this embodiment, object recognition is performed, based on preset features, on the second image displayed in the shooting preview interface to obtain a target object, so that zoom processing can be applied to the target object and its zoom effect displayed to the user. This avoids the situation that the target object is lost from the picture after image zoom processing, realizes automatic recognition of the second area, and improves the convenience of zoom shooting.
In one embodiment, before acquiring the first target image, the user input unit 1007 is configured to receive a first input of a user on a second image displayed in the shooting preview interface;
and the processor 1010 is further configured to obtain a second area in response to the first input.
In this embodiment, the second area is obtained in response to the first input, which makes it convenient to display the target image in the second area of the second image so that the user can see the zoom effect of a local area of the picture.
In one embodiment, after the second area is obtained, the processor 1010 is further configured to acquire a second central point of the second area; determine a first central point according to the camera parameters of the first camera and the second central point, where the first central point is the central point of the first area in the first image; and cut the image of the first area out of the first image based on the first central point and the size of the second area to obtain the first target image.
In this embodiment, the first central point corresponding to the second central point in the first image is determined from the camera parameters of the first camera and the second central point, and the target image corresponding to the first area is then cut out of the first image according to the size of the second area. This presents the effect that the second image displayed in the second area is covered by the target image, so that the user can see the zoom effect of the target image after zoom processing.
In one embodiment, after the displaying of the first target image, the user input unit 1007 is further configured to receive a second input of the user on a third area in the second image;
the processor 1010 is further configured to, in response to the second input, acquire a second target image, where the second target image is an image of a fourth area in the first image, and the fourth area is an area corresponding to the third area;
the display unit 1006 is further configured to display the second target image in the third area, and cancel displaying the first target image.
In this embodiment, by receiving the user's second input on the second image, the zoom area in the second image can be switched smoothly, so that the user can change the zoom object quickly during zoom shooting.
In one embodiment, after the displaying of the first target image, the user input unit 1007 is further configured to receive a third input of the user on the second area;
a display unit 1006, further configured to display the first image in the shooting preview interface in response to the third input.
In this embodiment, by receiving the user's third input on the second image, the second image currently covered by the target image can be quickly switched to the complete first image; that is, a complete zoomed picture is presented to the user, achieving a smoother and more complete zoom effect.
In one embodiment, after the first input of the user on the second image displayed in the shooting preview interface is received, the processor 1010 is further configured to determine, in response to the first input, a target zoom magnification according to an input parameter of the first input, and to determine the first camera according to the target zoom magnification.
In this embodiment, the target zoom magnification is determined according to the input parameter of the first input, so that the target zoom magnification can be determined at the same time as the second area, which improves the smoothness and fluency of zoom processing, keeps the operation simple, and brings convenience to the user.
It should be understood that, in the embodiment of the present application, the radio frequency unit 1001 may be used for receiving and sending signals during message transmission or a call; specifically, it receives downlink data from a base station and delivers the downlink data to the processor 1010 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 1001 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user through the network module 1002, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 1003 may convert audio data received by the radio frequency unit 1001 or the network module 1002, or stored in the memory 1009, into an audio signal and output it as sound. The audio output unit 1003 may also provide audio output related to a specific function performed by the electronic device 1000 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 1003 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1004 is used to receive an audio or video signal. The input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042; the graphics processor 10041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 1006. The image frames processed by the graphics processor 10041 may be stored in the memory 1009 (or other storage medium) or transmitted via the radio frequency unit 1001 or the network module 1002. The microphone 10042 can receive sound and process it into audio data; in a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 1001 and output.
The electronic device 1000 also includes at least one sensor 1005, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 10061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 10061 and/or the backlight when the electronic device 1000 moves to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as horizontal and vertical screen switching, related games, and magnetometer posture calibration) and for vibration-identification-related functions (such as a pedometer and tapping). The sensor 1005 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which will not be described in detail herein.
The display unit 1006 is used to display information input by the user or information provided to the user. The Display unit 1006 may include a Display panel 10061, and the Display panel 10061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 1007 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 10071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 10071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1010, and receives and executes commands sent by the processor 1010. In addition, the touch panel 10071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 10071, the user input unit 1007 may include other input devices 10072. Specifically, the other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 10071 can be overlaid on the display panel 10061, and when the touch panel 10071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1010 to determine the type of the touch event, and then the processor 1010 provides a corresponding visual output on the display panel 10061 according to the type of the touch event. Although in fig. 10, the touch panel 10071 and the display panel 10061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments, the touch panel 10071 and the display panel 10061 may be integrated to implement the input and output functions of the electronic device, and the implementation is not limited herein.
The interface unit 1008 is an interface for connecting an external device to the electronic apparatus 1000. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1008 may be used to receive input from external devices (e.g., data information, power, etc.) and transmit the received input to one or more elements within the electronic device 1000 or may be used to transmit data between the electronic device 1000 and the external devices.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, and the like), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 1009 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 1010 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 1009 and calling data stored in the memory 1009, thereby integrally monitoring the electronic device. Processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
In addition, the electronic device 1000 includes some functional modules that are not shown, and are not described in detail herein.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order, depending on the functions involved; e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. An image processing method, comprising:
acquiring a first target image, wherein the first target image is an image of a first area in a first image, and the first image is an image acquired by a first camera;
and displaying the first target image in a second area of a second image displayed in a shooting preview interface, wherein the second image is an image collected by a second camera, the focal length of the first camera is greater than or less than that of the second camera, and the second area is an area corresponding to the first area.
2. The image processing method according to claim 1, wherein before the acquiring the first target image, further comprising:
performing object recognition on the second image displayed in the shooting preview interface based on preset features to obtain a target object;
and determining the image area where the target object is located as a second area.
3. The image processing method according to claim 1, wherein before the acquiring the first target image, further comprising:
receiving a first input of a user to a second image displayed in the shooting preview interface;
in response to the first input, obtaining a second area.
4. The image processing method according to claim 2 or 3, wherein after obtaining the second region, the method further comprises:
acquiring a second central point of the second area;
determining a first central point according to the camera parameters of the first camera and the second central point, wherein the first central point is the central point of a first area in the first image;
the acquiring of the first target image includes:
and cutting the image of the first area out of the first image based on the first central point and the size of the second area to obtain the first target image.
5. The image processing method according to claim 1, further comprising, after the displaying the first target image:
receiving a second input of a user to a third area in the second image;
in response to the second input, acquiring a second target image, wherein the second target image is an image of a fourth area in the first image, and the fourth area is an area corresponding to the third area;
and displaying the second target image in the third area, and canceling the display of the first target image.
6. The image processing method according to claim 1, further comprising, after the displaying the first target image:
receiving a third input of the user to the second area;
in response to the third input, displaying the first image in the shooting preview interface.
7. The image processing method according to claim 3, wherein after the receiving a first input of a user to a second image displayed in the shooting preview interface, the method further comprises:
in response to the first input, determining a target zoom magnification according to an input parameter of the first input;
and determining the first camera according to the target zoom magnification.
8. An image processing apparatus, comprising:
the first acquisition module is used for acquiring a first target image, wherein the first target image is an image of a first area in a first image, and the first image is an image acquired by a first camera;
the display module is used for displaying the first target image in a second area of a second image displayed in a shooting preview interface, wherein the second image is an image collected by a second camera, the focal length of the first camera is larger than or smaller than that of the second camera, and the second area is an area corresponding to the first area.
9. The apparatus of claim 8, further comprising:
the recognition module is used for carrying out object recognition on a second image displayed in the shooting preview interface based on preset characteristics before the first target image is obtained, so as to obtain a target object;
and the first determining module is used for determining the image area where the target object is located as a second area.
10. The apparatus of claim 8, further comprising:
the first receiving module is used for receiving first input of a user on a second image displayed in the shooting preview interface before the first target image is acquired;
and the response module is used for responding to the first input to obtain a second area.
11. The apparatus of claim 9 or 10, further comprising:
the second acquisition module is used for acquiring a second central point of the second area after the second area is obtained;
the second determining module is used for determining a first central point according to the camera parameters of the first camera and the second central point, wherein the first central point is the central point of a first area in the first image;
the first obtaining module comprises:
and the cutting unit is used for cutting the image of the first area out of the first image based on the first central point and the size of the second area, so as to obtain the first target image.
12. The apparatus of claim 8, further comprising:
the second receiving module is used for receiving a second input of a user to a third area in the second image after the first target image is displayed;
a third obtaining module, configured to obtain, in response to the second input, a second target image, where the second target image is an image of a fourth area in the first image, and the fourth area is an area corresponding to the third area;
and the display module is used for displaying the second target image in the third area and canceling the display of the first target image.
13. The apparatus of claim 8, further comprising:
a third receiving module, configured to receive a third input to the second area by the user after the first target image is displayed;
and the response and display module is used for responding to the third input and displaying the first image in the shooting preview interface.
14. The apparatus of claim 10, further comprising:
the third determining module is used for responding to a first input after the first input of a user for a second image displayed in a shooting preview interface is received, and determining a target zoom magnification according to an input parameter of the first input;
and the fourth determining module is used for determining the first camera according to the target zoom magnification.
15. An electronic device comprising a processor and a memory electrically connected to the processor, the memory storing a computer program, the processor being configured to invoke and execute the computer program from the memory to implement the image processing method of any one of claims 1 to 7.
CN202111163373.7A 2021-09-30 2021-09-30 Image processing method and device and electronic equipment Pending CN113873159A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111163373.7A CN113873159A (en) 2021-09-30 2021-09-30 Image processing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113873159A true CN113873159A (en) 2021-12-31

Family

ID=79001230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111163373.7A Pending CN113873159A (en) 2021-09-30 2021-09-30 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113873159A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114885098A (en) * 2022-04-27 2022-08-09 广东美的厨房电器制造有限公司 Video shooting method, video shooting device, readable storage medium and cooking utensil

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106993131A (en) * 2017-03-13 2017-07-28 联想(北京)有限公司 Information processing method and electronic equipment
CN108307111A (en) * 2018-01-22 2018-07-20 努比亚技术有限公司 A kind of zoom photographic method, mobile terminal and storage medium
CN112911130A (en) * 2019-12-03 2021-06-04 深圳市万普拉斯科技有限公司 Auxiliary view finding method, device, terminal and storage medium


Similar Documents

Publication Publication Date Title
CN108513070B (en) Image processing method, mobile terminal and computer readable storage medium
CN108495029B (en) Photographing method and mobile terminal
CN111541845B (en) Image processing method and device and electronic equipment
CN109495711B (en) Video call processing method, sending terminal, receiving terminal and electronic equipment
CN110557566B (en) Video shooting method and electronic equipment
CN110913132B (en) Object tracking method and electronic equipment
CN109005286B (en) Display control method and folding screen terminal
CN111031398A (en) Video control method and electronic equipment
CN110602389B (en) Display method and electronic equipment
CN110198413B (en) Video shooting method, video shooting device and electronic equipment
CN107741814B (en) Display control method and mobile terminal
CN111147752B (en) Zoom factor adjusting method, electronic device, and medium
CN109408171B (en) Display control method and terminal
CN110913139A (en) Photographing method and electronic equipment
CN109413333B (en) Display control method and terminal
CN110830713A (en) Zooming method and electronic equipment
CN111405181B (en) Focusing method and electronic equipment
CN108174110B (en) Photographing method and flexible screen terminal
CN110944113B (en) Object display method and electronic equipment
CN110944114B (en) Photographing method and electronic equipment
CN110908517A (en) Image editing method, image editing device, electronic equipment and medium
CN108737731B (en) Focusing method and terminal equipment
CN111246105B (en) Photographing method, electronic device, and computer-readable storage medium
CN111131706B (en) Video picture processing method and electronic equipment
CN110913133B (en) Shooting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination