CN111541845B - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN111541845B
Authority
CN
China
Prior art keywords
image
camera
area
preview image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010367204.4A
Other languages
Chinese (zh)
Other versions
CN111541845A (en)
Inventor
曾柏泉
陈露兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202010367204.4A priority Critical patent/CN111541845B/en
Publication of CN111541845A publication Critical patent/CN111541845A/en
Application granted granted Critical
Publication of CN111541845B publication Critical patent/CN111541845B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Abstract

Embodiments of the invention provide an image processing method, an image processing apparatus, and an electronic device, applied to the field of communications, which can solve the problem that a locally enlarged image in a photo has low definition. The method includes: while displaying a preview image acquired by a first camera, receiving a first input from a user on a first area of the preview image; and, in response to the first input, displaying a first image in a second area of the preview image, where the first image is an image of the first area acquired by a second camera. The second area at least partially overlaps the first area, the display size of the second area is larger than that of the first area, and the focal length of the second camera is greater than that of the first camera. The method applies in particular to scenes of local image enlargement.

Description

Image processing method and device and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image processing method and device and electronic equipment.
Background
At present, taking selfies with friends on electronic devices such as mobile phones and tablet computers has become a trend among many young people, but ordinary portrait photos (such as selfies) are no longer enough to satisfy users' appetite for fun, so many novel shooting modes have appeared, such as AR cute-shot effects and various filters and stickers.
Many popular shooting applications (APPs) provide a fun shooting mode that magnifies the whole or a part (such as the face) of a person in a photo, realizing local image enlargement in the photo for comic effect. Specifically, in a first existing scheme, the electronic device performs matting and cropping on the image of a certain part of a person in the photo through face recognition, then directly enlarges the cropped image and composites it back into the original photo, thereby achieving local image enlargement. In a second scheme, the electronic device stretches the whole or a part of the person in the photo using a spherize-style image processing technique to achieve local image enlargement.
However, the first scheme reduces the pixel count of the cropped part of the photo, and the second scheme distorts and stretches the locally enlarged image; both result in low picture definition of the locally enlarged image in the photo.
Disclosure of Invention
The embodiment of the invention provides an image processing method and device and electronic equipment, which can solve the problem of low definition of a local image in a photo after the local image is amplified.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides an image processing method, including: while displaying a preview image acquired by a first camera, receiving a first input from a user on a first area of the preview image; and, in response to the first input, displaying a first image in a second area of the preview image, where the first image is an image of the first area acquired by a second camera; the second area at least partially overlaps the first area, the display size of the second area is larger than that of the first area, and the focal length of the second camera is greater than that of the first camera.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including: a receiving module, configured to receive a first input from a user on a first area of a preview image while the preview image acquired by a first camera is displayed; and a display control module, configured to display, in response to the first input received by the receiving module, a first image in a second area of the preview image, where the first image is an image of the first area acquired by a second camera; the second area at least partially overlaps the first area, the display size of the second area is larger than that of the first area, and the focal length of the second camera is greater than that of the first camera.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and when executed by the processor, the computer program implements the steps of the image processing method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the image processing method according to the first aspect.
In the embodiment of the present invention, when the preview image captured by the first camera is displayed, a first input by the user on a first area of the preview image can trigger display of a first image in a second area of the preview image, where the first image is an image of the first area acquired by the second camera. Because the second area at least partially overlaps the first area and the display size of the second area is larger than that of the first area, the first image displayed in the second area of the preview image can satisfy the user's interest in local image enlargement, such as enlarging the face of a person in the preview image. In particular, the focal length of the second camera is greater than that of the first camera; that is, the second camera is a telephoto camera relative to the first camera, so the image of the target object in the first image contains more pixels than the image of that object in the preview image. As such, the first image can present more image detail than the image of the first area in the preview image. Moreover, the first image has high definition and no unnatural stretching, which avoids the low definition and the distortion and stretching that would result from directly enlarging the image of the first area in the preview image.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an electronic device according to an embodiment of the present invention;
fig. 4 is a schematic flowchart illustrating an electronic device executing an image processing method according to an embodiment of the present invention;
fig. 5 is a second schematic flowchart illustrating an electronic device executing an image processing method according to an embodiment of the invention;
fig. 6 is a second schematic diagram of the display content of the electronic device according to the embodiment of the invention;
fig. 7 is a third schematic diagram of the display content of the electronic device according to the embodiment of the invention;
FIG. 8 is a fourth schematic diagram illustrating the display content of the electronic device according to the embodiment of the present invention;
FIG. 9 is a fifth diagram illustrating display contents of an electronic device according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a possible image processing apparatus according to an embodiment of the present invention;
fig. 11 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "/" herein means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. "Plurality" means two or more.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described as "exemplary" or "such as" in an embodiment of the present invention is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present relevant concepts in a concrete fashion.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first camera, the second camera, etc. are used to distinguish between different cameras, rather than to describe a particular order of the cameras.
It should be noted that the image processing method provided by the embodiment of the present invention may be applied to a scene in which a local image of a photo is enlarged when the electronic device takes the photo, specifically, a scene in which an image is taken by a telephoto camera.
According to the image processing method, the image processing apparatus, and the electronic device provided by the embodiment of the invention, when the preview image acquired by the first camera is displayed, a first input by the user on a first area of the preview image can trigger display of a first image in a second area of the preview image, where the first image is an image of the first area acquired by the second camera. Because the second area at least partially overlaps the first area and the display size of the second area is larger than that of the first area, the first image displayed in the second area of the preview image can satisfy the user's interest in local image enlargement, such as enlarging the face of a person in the preview image. In particular, the focal length of the second camera is greater than that of the first camera; that is, the second camera is a telephoto camera relative to the first camera, so the image of the target object in the first image contains more pixels than the image of that object in the preview image. As such, the first image can present more image detail than the image of the first area in the preview image. Moreover, the first image has high definition and no unnatural stretching, which avoids the low definition and the distortion and stretching that would result from directly enlarging the image of the first area in the preview image.
It should be noted that, for the image processing method provided in the embodiment of the present invention, the execution subject may be an image processing apparatus, an electronic device, a central processing unit (CPU) of the electronic device, or a control module in the electronic device for executing the image processing method. Alternatively, the image processing apparatus may be implemented by an electronic device, for example, by the control module in the electronic device for executing the image processing method.
In the following, an image processing method performed by an electronic device is taken as an example to describe the image processing method provided by the embodiment of the present invention.
The electronic device in the embodiment of the invention can be a mobile electronic device or a non-mobile electronic device. The mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile electronic device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present invention.
The following describes a software environment to which the image processing method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application. For example, applications such as a system setup application, a system chat application, and a system camera application. And the third-party setting application, the third-party camera application, the third-party chatting application and other application programs.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system operating environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image processing method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image processing method may operate based on the android operating system shown in fig. 1. Namely, the processor or the electronic device can implement the image processing method provided by the embodiment of the invention by running the software program in the android operating system.
The following describes the image processing method provided by the embodiment of the present invention in detail with reference to the flowchart of the image processing method shown in fig. 2. Wherein, although the logical order of the image processing methods provided by the embodiments of the present invention is illustrated in the method flow diagrams, in some cases, the steps shown or described may be performed in an order different from that presented herein. For example, the image processing method shown in fig. 2 may include step 201 and step 202:
step 201, the electronic device receives a first input of a user to a first area in a preview image under the condition that the preview image acquired by the first camera is displayed.
Optionally, the first input may include a sub-input (denoted as sub-input 1) of the user selecting the first area, that is, the first input may select the first area from the preview image.
It can be understood that, when the user triggers the electronic device to select the first area, the electronic device may determine that the photographic subject in the first area (such as the target object described below) is the target object. At this time, the user is requesting the electronic device to enlarge the image of the first area, that is, to perform an enlargement operation on the photographic subject in the first area.
Alternatively, the first region may be determined based on a frame selection control provided by the electronic device. For example, in a case where the electronic device displays a frame selection control on the preview image, the user controls the frame selection control to select the first area through the sub-input 1 (e.g., a drag input).
Alternatively, the first area may be determined based on an input trajectory of the user. For example, the user may trigger the electronic device to determine the area within a circle as the first area by executing the sub-input 1 with an input track of the circle on the preview interface.
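The circle-trajectory selection described above can be sketched in code. The helper below is hypothetical (the embodiment only states that the area enclosed by the input track becomes the first area); an axis-aligned bounding box of the sampled gesture points is one simple way to realize that region:

```python
def region_from_trajectory(points):
    """Bounding rectangle (x, y, w, h) of a closed gesture trajectory.

    Illustrative helper, not from the patent text: the first area is
    approximated by the axis-aligned bounding box of the input track.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x, y = min(xs), min(ys)
    return (x, y, max(xs) - x, max(ys) - y)
```

A real implementation might instead rasterize the closed track and use the enclosed mask, but a bounding box is usually sufficient to seed the telephoto crop.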
Further, the first region may be determined based on a photographic subject in the preview image. The electronic device can automatically recognize the shooting object in the preview image or manually select the shooting object in the preview image by a user. Further, when the electronic device recognizes a photographic subject in the preview image, the photographic subject can be marked by the mark frame. At this time, the user inputs a sub-input 1 (e.g., a click input) to a mark box on the preview image, which may trigger the electronic device to select the first area, and thus select the photographic subject in the first area.
Optionally, the shape of one mark frame may be a circle, a rectangle, an ellipse, etc., and may be determined according to the actual requirement of the user, which is not specifically limited herein.
Alternatively, in the case of displaying the preview image, the electronic device may automatically recognize and mark one or more photographed objects in the preview image. For example, the electronic device may identify and mark one or more people, or one or more faces, in the preview image. Further, the first input may include a selection input (e.g., a click input or a long-press input) of a certain subject that has been marked by the user, such as a selection input of a certain face.
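A selection input on a marked subject can be resolved to a first area by a simple hit test. The function below is an illustrative sketch rather than the patent's implementation; each mark frame is assumed to be an (x, y, w, h) rectangle:

```python
def select_marked_region(tap, boxes):
    """Return the first mark frame (x, y, w, h) containing the tap
    point, or None if the tap hits no marked subject.

    Hypothetical helper modeling the click input on a mark box.
    """
    tx, ty = tap
    for box in boxes:
        x, y, w, h = box
        if x <= tx <= x + w and y <= ty <= y + h:
            return box
    return None
```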
In the embodiment of the present invention, a camera application in the electronic device may provide a fun shooting function that enables the electronic device to take a photo (or image) with a locally enlarged area by using two or more cameras with different focal lengths, that is, to execute the image processing method provided in the embodiment of the present invention. The shooting preview interface displayed when the electronic device takes a photo through the fun shooting function may be called a fun shooting interface, which may be the same as or different from a conventional shooting preview interface.
It can be understood that, in the embodiment of the present invention, a control for turning on the fun shooting function may be provided in the camera application, for triggering the electronic device to enable the fun shooting function and display the fun shooting interface.
Specifically, in the embodiment of the present invention, when the electronic device opens the camera application and enters the fun shooting interface, it may enable the first camera and display the preview image acquired by the first camera in the fun shooting interface, so as to execute step 201 above. Optionally, when the electronic device activates the first camera, it may also activate another camera (for example, the second camera described below), which is not specifically limited in the embodiment of the present invention.
Step 202, the electronic device responds to the first input, and displays a first image in a second area of the preview image, wherein the first image is an image of the first area acquired by the second camera.
And the focal length of the second camera is greater than that of the first camera.
It should be noted that, in the embodiment of the present invention, the electronic device may include two or more cameras, each with a different focal length, so that each camera serves a different purpose.
It can be understood that, when the electronic device takes a picture, the electronic device may cooperate with itself or invoke different cameras in the camera to complete a picture taking task according to a picture taking mode selected by a user.
Illustratively, the cameras of the electronic device include a main camera, a 2x telephoto camera, and a 5x periscope telephoto camera. The main camera may be a standard camera. Specifically, the focal lengths of the standard camera, the 2x telephoto camera, and the 5x periscope telephoto camera increase in that order. For example, the focal length of the 2x telephoto camera is 2 times that of the standard camera, and the focal length of the 5x periscope telephoto camera is 5 times that of the standard camera.
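Under an idealized thin-lens model (an assumption for illustration; the embodiment states no optics formulas), the number of sensor pixels a subject spans at a fixed distance grows in proportion to focal length, which is why the telephoto capture of the first area carries more detail than the standard camera's preview. All numeric figures below are assumed, not taken from the patent:

```python
def subject_pixel_width(subject_width_mm, distance_mm, focal_mm,
                        sensor_width_mm, sensor_pixels):
    """Approximate pixel width of a subject under a thin-lens model:
    image width on the sensor = subject_width * focal / distance,
    then converted to pixels via the sensor's pixel pitch."""
    image_mm = subject_width_mm * focal_mm / distance_mm
    return image_mm * sensor_pixels / sensor_width_mm

# Assumed example: 200 mm wide face at 2 m, identical 6 mm / 4000 px sensors.
main = subject_pixel_width(200, 2000, 5, 6, 4000)    # standard camera, 5 mm focal
tele = subject_pixel_width(200, 2000, 25, 6, 4000)   # 5x periscope telephoto, 25 mm
```

With identical sensors, the 5x telephoto puts five times as many pixels across the subject (twenty-five times by area), so the cropped first-area image keeps its definition after enlargement.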
A periscope telephoto camera is implemented by arranging the telephoto lens group parallel to the body of the electronic device and directing incoming light onto the image sensor by prism refraction.
It can be understood that a telephoto camera has a small field of view (i.e., viewing angle) and covers a smaller spatial range of the scene, and at the same shooting distance the subject occupies more pixels in its image than in the image of a standard camera (such as the above main camera), making it suitable for capturing details of distant scenes and subjects that are hard to approach. In addition, the depth of field of a telephoto camera is shallow, so a subject in a cluttered environment can be made to stand out. A telephoto camera also has weak perspective, noticeably compressing spatial depth and exaggerating the background. In the embodiments of the present invention, a camera with these characteristics, relative to a standard camera, may be regarded as the telephoto camera described herein.
For example, in the embodiment of the present invention, the first camera may be a main camera in a camera of the electronic device, that is, a standard camera.
Alternatively, the photographic subject in the first region may be a foreground in the image of the first region. At this time, the image of the photographic subject may be included in the image of the first area, and an image of a background in which the photographic subject is located may also be included.
Alternatively, the image of the first region is an image of the photographic subject in the first region, and does not include an image of the background.
Optionally, in this embodiment of the present invention, the object to be photographed in the first area may be a whole human, a part of a human (e.g., a face, an ear, an arm, etc.), a whole animal, a part of an animal, a whole scene, or a part of a scene (e.g., a petal part of a flower), which may be determined according to an actual requirement of a user, and this is not specifically limited in this embodiment of the present invention. The image processing method provided by the embodiment of the present invention is described below by taking a photographic subject in the first region as an example of a human face of a person in a preview image.
Alternatively, in a case where the electronic device determines the photographic subject in the first area, the electronic device may recognize and record feature information of the photographic subject from the preview image. Specifically, the feature information of the photographic subject in the first area may be used to distinguish the photographic subject from other subjects (e.g., subjects in other areas than the first area in the preview image). For example, in the case where the object in the first area is a face of a person, the electronic device may recognize the face by a face recognition technique and record feature information of the face. In addition, the electronic device may mark the object in the first area through the mark frame.
The shooting preview interface of the electronic device may include a region to be shot and a shooting control region, the region to be shot is used for displaying a preview image, and the shooting control region may be used for displaying functional controls such as a shooting control. The shooting control can trigger the electronic equipment to shoot images through the camera by a user, such as shooting images through the first camera and shooting images through the second camera.
Optionally, the first input may further include a sub-input (denoted as sub-input 2, such as a click input or a long-press input) on the shooting control, which triggers the electronic device to obtain the preview image captured by the first camera and to obtain the first image captured by the second camera described below. For example, when performing the first input, the user may perform sub-input 1 first and then sub-input 2.
Illustratively, the second camera is a tele camera, such as the 2-fold tele camera or the 5-fold periscopic tele camera described above, compared to the first camera.
It can be understood that, because the focal length of the second camera is greater than that of the first camera, the first image contains more pixels than the original (un-enlarged) image of the first area in the preview image and can show more details of the photographic subject in the first area, so the first image has higher definition.
It should be noted that, in the process that the electronic device acquires the first image of the first area through the second camera, the first area or the photographic subject in the first area is within the view range of the second camera.
The second area is at least partially overlapped with the first area, and the display size of the second area is larger than that of the first area.
It can be understood that the electronic device may display the enlarged image of the first area (i.e., the first image) over the second area of the preview image based on the first area, so that the positional relationship between the enlarged image of the first area and the rest of the preview image remains unchanged.
For example, when the photographic subject in the first area is a person's face, the first area is the area of the preview image occupied by the original image of the face, and the second area is the area occupied by the enlarged image of the face (i.e., the first image displayed in the second area). Because the second area at least partially overlaps the first area, the positional relationship between the enlarged face and the other parts of the person (such as the body or neck) in the preview image remains unchanged; for example, the enlarged face stays above the person's body in the preview image. The display effect of the enlarged image (i.e., the first image) of the photographic subject in the first area is therefore natural, the preview image with the first image displayed in the second area looks comfortable to the user, and the user experience of the fun shooting function is improved.
Specifically, the case where the photographic subject in the first area is a person's face illustrates the fun shooting function performing an enlargement operation on that person's face image in the preview image.
Optionally, when displaying the first image in the second area of the preview image, the electronic device may display the first image over the second area as a separate layer, or fuse the first image into the second area of the preview image. Specifically, the electronic device may cover the original image of the second area in the preview image with the first image, or it may blend the edge of the first image with the original image of the second area, so that the first image fuses naturally into the preview image.
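The edge-fusion option described above can be sketched as a feathered paste: the outer pixels of the first image are linearly blended with the underlying preview pixels so the seam is less abrupt. Grayscale 2-D lists stand in for images here; the function and the feather width are illustrative assumptions, not the patent's algorithm:

```python
def blend_into_region(preview, patch, x0, y0, feather=2):
    """Paste `patch` into `preview` at (x0, y0), linearly blending the
    outer `feather` pixels of the patch with the underlying preview.
    Both images are 2-D lists of grayscale values."""
    ph, pw = len(patch), len(patch[0])
    for j in range(ph):
        for i in range(pw):
            d = min(i, j, pw - 1 - i, ph - 1 - j)   # distance to patch border
            a = min(1.0, (d + 1) / (feather + 1))   # patch opacity: 1 in the interior
            base = preview[y0 + j][x0 + i]
            preview[y0 + j][x0 + i] = round(a * patch[j][i] + (1 - a) * base)
    return preview
```

Production systems would more likely use alpha compositing or gradient-domain blending on full-color buffers, but the principle, partial opacity near the border, full opacity inside, is the same.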
According to the image processing method provided by the embodiment of the present invention, in the case where the preview image captured by the first camera is displayed, a first input by the user to the first area of the preview image can trigger the display of the first image in the second area of the preview image, wherein the first image is an image of the first area captured by the second camera. The second area at least partially overlaps the first area, and the display size of the second area is larger than that of the first area, so that the first image displayed on the second area of the preview image can meet the user's playful demand for local image enlargement, such as locally enlarging a person's face in the preview image. Specifically, the focal length of the second camera is greater than that of the first camera, that is, the second camera is a telephoto camera compared with the first camera, so that the first image of the target object has more pixels than the image of the target object in the preview image. As such, the first image can present more image detail than the image of the first area in the preview image. Moreover, the first image has high definition without the unnatural look of image stretching, which solves the problems of low definition and distorted, stretched images that arise when the image of the first area in the preview image is directly enlarged.
Optionally, a center position of the second region is the same as a center position of the first region, or at least one edge position of the second region is the same as at least one edge position of the first region.
It is to be understood that, when at least one edge position of the second area is the same as at least one edge position of the first area, the first area may be located within the second area, with one side edge of the second area coinciding with the corresponding side edge of the first area. For example, the lower edge position of the first area is the same as the lower edge position of the second area.
Optionally, in this embodiment of the present invention, an edge position of an area may be a coordinate position on a preview image displayed by an electronic device, or a display position on a screen.
For example, in a scene in which the photographic subject in the first area is a person's face, if the center position of the second area is the same as the center position of the first area, the enlarged image of the face is located above the image of the person's body in the preview image and may cover the image of the person's neck. Alternatively, if at least one edge position of the second area is the same as the corresponding edge position of the first area, for example if the lower edge position of the second area is the same as the lower edge position of the first area, the face is enlarged upward from the person's neck or chin in the preview image. In this case, the enlarged image of the face is located above the image of the person's body in the preview image and does not cover the image of the person's neck. Thus, the positional relationship between the enlarged image of the face and the images of the person's other parts (such as the body or the neck) in the preview image remains unchanged, that is, the positions of the images of the person's parts conform to the structure of the human body, the display effect of displaying the first image on the second area of the preview image is natural, and the user's expectation for the appearance of the fun photographing image is met.
For another example, in a scene in which the photographic subject in the first area is a person's left ear, if at least one edge position of the second area is the same as the corresponding edge position of the first area, for example if the right edge position of the second area is the same as the right edge position of the first area, the left ear is enlarged leftward from the person's face in the preview image. At this time, the enlarged image of the left ear is located to the left of the image of the face in the preview image and does not cover the image of the person's face.
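The two anchoring choices above (same center position, or one coinciding edge) can be expressed as a small geometric computation. The sketch below is illustrative only; the function and parameter names are assumptions, and rectangles are `(x, y, w, h)` tuples in preview-image coordinates:

```python
def second_region(first, magnification, anchor="center"):
    """Compute the second area (x, y, w, h) from the first area, scaled
    by `magnification`.  anchor="center" keeps the center positions the
    same; anchor="bottom" keeps the lower edge in place (e.g. enlarging
    a face upward from the chin so the neck stays uncovered)."""
    x, y, w, h = first
    nw, nh = w * magnification, h * magnification
    nx = x + (w - nw) / 2            # centered horizontally in both modes
    if anchor == "center":
        ny = y + (h - nh) / 2        # same center position
    elif anchor == "bottom":
        ny = y + h - nh              # same lower edge position
    else:
        raise ValueError(anchor)
    return (nx, ny, nw, nh)
```

Note that the product of the magnification and the display size of the first area is the display size of the second area, matching the relationship stated later for the target magnification.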
Since the center position of the second area may be the same as the center position of the first area, or at least one edge position of the second area may be the same as the corresponding edge position of the first area, the electronic device can display the first image on the second area of the preview image with a variety of display effects. Therefore, even if one display effect does not meet the user's needs, the electronic device can display the first image on the second area of the preview image with another display effect, which helps the display effect of the first image on the preview image meet the user's needs.
Optionally, the first image is an image of a photographic object identified and extracted from the target initial image, the target initial image is an image of a third area in the preview image acquired by the second camera, and the third area includes the first area. And the display size of the third area is larger than or equal to that of the first area.
It is understood that, when acquiring the target initial image of the third area, the electronic device captures an image that includes the photographic subject in the third area.
And the shooting object in the third area is in the shooting range of the second camera. At this time, the photographic subject in the third region is the same as the photographic subject in the first region, i.e., both are target subjects.
The electronic equipment can identify a target object from a target initial image, and perform a matting operation and a cropping operation on an image of the target object from the target initial image to obtain a first image.
It is understood that the target object is a photographic subject in the first area. In this case, the first area may include one or more photographic subjects, of which the target object is one.
For example, in the case where the target object is a person's face, the electronic device may recognize the target object from the target initial image through a face recognition technology, and perform matting and cropping operations on the image of the target object in the target initial image to crop out the first image. Specifically, if the electronic device has determined the target object from the preview image and recorded feature information of the target object, then after acquiring the target initial image, the electronic device may identify the target object from the target initial image by using the face recognition technology together with the recorded feature information, so as to obtain the first image.
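The recognize-then-crop step can be sketched as below. The face detector itself is outside the scope of this sketch (in practice it would be a library such as a cascade classifier or a neural detector); here it is assumed to have already reported a bounding box, and only the cropping of the first image out of the target initial image is shown. All names are illustrative assumptions:

```python
def extract_target(image, bbox):
    """Crop the target object's image (the first image) out of the
    target initial image.  `image` is a list of pixel rows and `bbox`
    is an (x, y, w, h) bounding box reported by a face detector."""
    x, y, w, h = bbox
    return [row[x:x + w] for row in image[y:y + h]]
```

A matting step, not shown, would additionally mask out pixels inside the box that do not belong to the target object.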
It can be understood that, during the process of acquiring the first image by the second camera, the electronic device may continuously display the preview image on the shooting preview interface (i.e. the interesting shooting interface), without separately displaying the target initial image and the first image in a preview manner.
Illustratively, fig. 3 is a schematic diagram of the content displayed while the electronic device provided by the embodiment of the present invention performs the above image processing and enlargement. As shown in (a) of fig. 3, the upper half of the fun photographing interface displayed by the electronic device shows a preview image, and the first area P1 of the preview image includes the object 41 (i.e., the face image represented by the object 41, namely the above-mentioned target object). At this time, the preview image shown in (a) of fig. 3 may be an image captured by the electronic device through the first camera. The sub input 1 may be a click input by the user on the object 41, and the sub input 2 may be a click input by the user on the photographing control 42. Subsequently, after the user clicks the photographing control 42, the electronic device may be triggered to capture the preview image a1 shown in (a) of fig. 3 through the first camera, and to capture a target initial image corresponding to the first area through the second camera (not shown in fig. 3). Further, the electronic device may identify and crop out a first image of the object 41 from the target initial image. Finally, the electronic device may integrate the first image of the object 41 into the second area P2 of the preview image shown in (b) of fig. 3, thereby triggering the electronic device to display the image a2 shown in (b) of fig. 3. Here, the image in the second area P2 of the image a2 shown in (b) of fig. 3 may be the first image of the object 41 (i.e., the enlarged image of the object 41).
Alternatively, in practice, the electronic apparatus may not display two dashed boxes representing the first region P1 and the second region P2 on the fun photographing interface, and in fig. 3, two dashed boxes representing the first region P1 and the second region P2 are displayed for convenience of describing the positional relationship of the first region P1 and the second region P2.
Referring to fig. 3, as shown in fig. 4, a flowchart of an image processing method performed by an electronic device is shown. After the user makes the sub input 2 to the photographing control 42 shown in (a) of fig. 3, the electronic apparatus may acquire a preview image a1 photographed by the first camera as shown in (a) of fig. 4, and a target initial image A3 of the target object 41 photographed by the second camera as shown in (b) of fig. 4. Wherein the image of the object 41 in the preview image a1 is in the first region P1. Then, the electronic device may recognize and cut out the first image a4 of the object 41 as shown in (c) in fig. 4 from the target initial image A3. Subsequently, the electronic device integrates the first image a4 of the object 41 into the second region P2 in the preview image a1, resulting in the image a2 as shown in (d) in fig. 4 (i.e., the resulting image of the first image a4 is displayed on the second region P2 of the preview image a 1).
In this way, in the embodiment of the present invention, since the electronic device may obtain the target initial image of the third area in the preview image through the second camera, where the third area includes the first area, the electronic device may further identify and extract the first image of the photographic object from the target initial image. Therefore, the first image acquired by the electronic equipment is an image meeting the requirements of the user.
Optionally, in the image processing method provided in the embodiment of the present invention, the step 202 is implemented by the steps 202a to 202 c:
step 202a, the electronic device acquires a first initial image of the first area through the second camera.
Step 202b, the electronic device amplifies the first initial image according to the first amplification factor to obtain a first image.
Step 202c, the electronic device displays the first image in the second area of the preview image.
Wherein, the ratio of the first angle of view to the second angle of view is a first value; the first field angle is the field angle of the first camera, and the second field angle is the field angle of the second camera; the first magnification is the ratio of the target magnification to the first value; the product of the target magnification and the display size of the first area is the display size of the second area; the target magnification is a default value or a user-entered value.
In the embodiment of the present invention, the first focal length of the first camera is smaller than the second focal length of the second camera, and accordingly the first field angle of the first camera is larger than the second field angle of the second camera. For example, in connection with the example in the above-described embodiment, the field angles of the standard camera, the 2-time telephoto camera, and the 5-time periscopic telephoto camera decrease in that order. For example, the field angle of the standard camera is 2 times that of the 2-time telephoto camera, and 5 times that of the 5-time periscopic telephoto camera.
Optionally, in this embodiment of the present invention, the electronic device may enlarge the target object by zoom shooting. The zoom shooting process may include physical zooming, or physical zooming combined with electronic zooming (also called digital zooming).
It will be appreciated that the first value may be the magnification of the physical zooming of the target object. Specifically, the electronic device switches the focal length for shooting the target object from the first focal length of the first camera to the second focal length of the second camera to realize physical zooming; that is, the first value is the ratio of the focal length of the second camera to the focal length of the first camera. Obviously, the first image obtained by physically zooming the target object with the second camera has more pixels and higher definition.
The first magnification may be the magnification of the electronic zooming of the target object. Specifically, the electronic device first obtains the first initial image of the target object through physical zooming based on the first value, and then obtains the first image through electronic zooming based on the first magnification. Obviously, the first image enlarged by the first magnification has more pixels than the image of the target object in the original preview image.
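The electronic-zoom step is, in essence, resampling the first initial image to a larger size. A minimal sketch using nearest-neighbor replication for an integer factor is shown below (real devices would use a better interpolator such as bilinear or bicubic; the function name is an assumption):

```python
def electronic_zoom(image, factor):
    """Digitally enlarge `image` (a list of pixel rows) by an integer
    `factor` by replicating each pixel into a factor-by-factor block --
    the simplest form of the electronic zoom applied after the
    physical zoom has already been performed by the telephoto camera."""
    out = []
    for row in image:
        wide = [p for p in row for _ in range(factor)]   # widen the row
        out.extend([wide[:] for _ in range(factor)])      # repeat it down
    return out
```

Unlike the physical zoom, this step adds no new detail, which is why the method prefers to obtain as much magnification as possible optically (the first value) and leaves only the remainder (the first magnification) to electronic zooming.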
It should be noted that the value of the target magnification is a value greater than 1, the first magnification may be a value greater than or equal to 1, and the first value is a value greater than 1.
In conjunction with fig. 4 described above, the target magnification is the ratio of the display size of the image of the object 41 in the second region P2 of the image a2 to the display size of the image of the object 41 in the first region P1 of the preview image a 1.
For example, in the case that the first camera is a standard camera and the second camera is a 2-time long-focus camera, the focal length of the second camera may be twice that of the first camera, and the first value may be 2. At this time, if the target magnification is 2, the first magnification is 1. If the target magnification is 4, the first magnification is 2.
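The arithmetic above can be sketched as a small helper that splits the target magnification into its physical and electronic parts. This is an illustrative sketch; the function name is an assumption, and the focal lengths in the test below (26 mm and 52 mm equivalents) are hypothetical values chosen so the first value is 2:

```python
def split_magnification(target_mag, focal_first, focal_second):
    """Split a target magnification into the physical part (the first
    value, i.e. the ratio of the second camera's focal length to the
    first camera's) and the remaining electronic part (the first
    magnification = target magnification / first value)."""
    first_value = focal_second / focal_first
    first_magnification = target_mag / first_value
    return first_value, first_magnification
```

For a target magnification of 2 the electronic part is 1 (no digital enlargement needed), and for a target magnification of 4 the electronic part is 2, matching the example in the text.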
It is emphasized that the focal length range of each camera (e.g., the first camera and the second camera) is fixed, so the range of the first value is fixed accordingly.
Referring to fig. 4, as shown in fig. 5, a flow chart for performing image processing enlargement for the electronic device is shown. After the user makes a second input to the photographing control 42 illustrated in (a) of fig. 3, the electronic device may acquire a preview image a1 photographed by the first camera as illustrated in (a) of fig. 5, and a target initial image A3 of the target object 41 photographed by the second camera as illustrated in (b) of fig. 5. Wherein the image of the object 41 in the preview image a1 is in the first region P1. Then, the electronic device may recognize and cut out the first initial image a5 of the object 41 as shown in (c) in fig. 5 from the target initial image A3. Further, the electronic device may obtain a first magnification according to a default or user-input target magnification and a first numerical value, and magnify the first initial image a5 of the object 41 according to the first magnification, resulting in a magnified first image a4 of the object 41 as shown in (e) of fig. 5. Subsequently, the electronic device integrates the enlarged first image a4 of the object 41 into the second region P2 in the preview image a1, resulting in an image a2 as shown in fig. 5 (d).
After acquiring the first initial image through the second camera, the electronic device may further enlarge it according to the first magnification to obtain the first image. In this way, even if the target magnification is larger than the first value, the electronic device can enlarge the image of the target object by the target magnification relative to the preview image through the cooperation of the first value and the first magnification.
Optionally, the image processing method provided in the embodiment of the present invention may further include step 206; for example, step 206 may be performed before step 202:
step 206, the electronic device displays target content on the preview image, wherein the target content includes at least one of the following items: the camera comprises at least one mark frame, a view-finding frame of a second camera, a first control, a first mark and a second control.
Each mark frame is used for marking one photographic subject in the preview image; the at least one mark frame includes a target mark frame, and the target mark frame is used for marking the target object. The first control is used for inputting the target magnification, and the first identifier is used for indicating the target magnification; the second control is used for triggering the display of the preview image itself. The target magnification is: the ratio of the display size of the first image to the display size of the image of the target object in the preview image.
Optionally, in implementation mode 1 provided in the embodiment of the present invention, the target content includes at least one mark frame and a view frame of the second camera. Wherein the target magnification is a default value in the electronic device.
Optionally, in implementation mode 2 provided in the embodiment of the present invention, the target magnification is a numerical value input by the user in real time.
It can be understood that the area where the preview image displayed by the electronic device is located is within the view frame of the first camera, i.e. within the view range of the first camera.
In implementation mode 1, when the electronic device starts the fun photographing function and displays the fun photographing interface, if one or more faces appear in the interface, the electronic device may obtain the positions (e.g., coordinates) of the face images in the preview image by using a face recognition technology, and mark the position of each face on the preview image with a mark frame (i.e., a face frame). Meanwhile, the electronic device may turn on a telephoto camera with a longer focal length, such as the 2-time telephoto camera, and mark out the shooting range of the telephoto camera (i.e., the view frame of that camera) with a square frame on the fun photographing interface. Further, the user performs a click input on any one of the face frames on the screen to select that face as the object to be enlarged (i.e., the target object), and keeps the face within the shooting range of the second camera (i.e., keeps the image of the face within the view frame of the second camera).
It is understood that in the case where an image of a photographic subject is in a view frame of a camera, it indicates that the photographic subject is in a view range of the camera, and the camera can successfully capture the image of the photographic subject. For example, in a case where the image of the target object is within the view frame of the second camera, the electronic device may successfully acquire the target initial image of the target object as well as the first image through the second camera.
Illustratively, based on the above implementation 1, in conjunction with fig. 3, the preview image a1 shown in fig. 6 has a mark frame 43 and a mark frame 44 displayed thereon, where the mark frame 43 is used to represent the object 41 (i.e., the face represented by the object 41), and the mark frame 44 is used to mark another face in the preview image a 1. The mark frames 43 and 44 are rectangular. In addition, fig. 6 also shows a view box B1 of the second camera. At this time, the mark frame 43 of the object 41 is within the view frame B1 of the second camera, indicating that the object 41 is within the view range of the second camera.
Further, user input to the capture control 42 shown in fig. 6 (e.g., sub-input 2) may trigger the electronic device to display an image a2 as shown in fig. 3 (b).
Alternatively, when the mark frame 43 of the object 41 exceeds the view frame B1 of the second camera, the electronic device may prompt the user so that the user can move the electronic device to make the mark frame 43 of the object 41 re-enter the view frame B1 of the second camera, and further make the object 41 re-enter the view range of the second camera.
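The check that decides whether to prompt the user — whether the target mark frame lies entirely inside the second camera's view frame — is a rectangle-containment test. A minimal sketch, with rectangles as `(x, y, w, h)` tuples in preview coordinates and all names assumed for illustration:

```python
def inside(viewfinder, mark_frame):
    """True if the target mark frame lies entirely within the second
    camera's view frame, i.e. the target object is within that
    camera's viewing range and its image can be captured; False means
    the user should be prompted to move the device."""
    vx, vy, vw, vh = viewfinder
    mx, my, mw, mh = mark_frame
    return (vx <= mx and vy <= my
            and mx + mw <= vx + vw and my + mh <= vy + vh)
```

When this returns False, the device would show the prompt described above until the user re-frames the target.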
In implementation mode 2, the electronic device may display a conventional preview interface, and when one or more faces appear in the conventional interface, the electronic device may obtain face coordinates through a face recognition technology, and mark the position of each face with a mark frame (i.e., a face frame) on the preview image. Then, a user clicks any one face frame on the shooting preview interface, the electronic equipment can be triggered to start the interesting shooting function, the conventional preview interface is used as an interesting shooting interface, and the person or the face identified by the face frame is selected as a target object.
Optionally, in implementation mode 2 provided in the embodiment of the present invention, the first input includes a first sub-input and a second sub-input. The above step 201 and step 202 can be realized by the steps 202-1 to 202-6:
step 202-1, in the case of a target mark frame displayed on the preview image, the electronic device receives a first sub-input of the user to the target mark frame.
For example, the first sub-input may be a touch screen input such as a click input.
Step 202-2, in response to the first sub-input, the electronic device determines the area marked by the target marking box, where the target object is located, as the first area.
Step 202-3, in the case that the first control is displayed on the preview image, the electronic device receives a second sub-input of the user to the first control displayed on the preview image.
For example, the second sub-input may be a touch screen input such as a click input.
And step 202-4, in response to the second sub-input, the electronic equipment determines the numerical value input through the first control as the target magnification.
And 202-5, under the condition that the target mark frame is positioned in a view frame of the second camera displayed on the preview image, the electronic equipment acquires the first image through the second camera.
And step 202-6, the electronic equipment displays the first image in the second area of the preview image according to the target magnification.
The target mark frame is one of the at least one mark frame displayed on the preview image, and each mark frame is used for marking one photographic subject in the preview image. The first control is used for inputting the target magnification; the product of the target magnification and the display size of the first area is the display size of the second area. The target mark frame being located within the view frame of the second camera indicates that the target object is within the viewing range of the second camera.
Optionally, the electronic device may further display a first identifier on the preview image, where the first identifier is used to indicate a numerical value of the target magnification.
Optionally, the electronic device may further display a second control on the preview image, where the second control is used to cancel the enlargement operation on the image of the first area. That is, the second control is used to cancel the display of the first image in the second area on the preview image and instead resume the display of the preview image itself.
Optionally, the electronic device may first display the first control and the second control on the fun photographing interface, that is, display the first control and the second control on the preview image. Subsequently, after the user operates the first control through the second sub-input, the first identifier indicating the target magnification may be displayed.
Illustratively, in conjunction with fig. 3 and 6, as shown in (a) of fig. 7, the electronic device may display the mark frame 43 and the mark frame 44 on the preview image a1. Subsequently, after the user inputs on the mark frame 43, the electronic device may display three buttons as shown in (b) of fig. 7: a button "+" (i.e., plus sign), a button "-" (i.e., minus sign), and a button "x". The button "+" and the button "-" are both first controls, used respectively for increasing and decreasing the target magnification for enlarging the selected person's face. The button "x" is the second control described above, used to restore the operation on the person to the default, i.e., to trigger the electronic device to display the preview image a1 itself. Specifically, after the user clicks the button "+", a first identifier "2" is displayed among the buttons on the preview image, as shown in (c) of fig. 7; at this time, the target magnification is 2.
Further, user input to the capture control 42 shown in fig. 7 (c) may trigger the electronic device to display an image a2 as shown in fig. 3 (b).
Further, if the user clicks the "+" button again, the target magnification is continuously increased and the first identifier is updated and displayed, or if the user clicks the "-" button again, the target magnification is continuously decreased and the first identifier is updated and displayed.
Therefore, the electronic equipment can provide various controls, user operation is supported to trigger the electronic equipment to execute the amplification operation on the first area in the preview image, and the man-machine interaction performance in the process is improved.
Optionally, based on the foregoing implementation mode 2, the image processing method provided in the embodiment of the present invention may further include, before the foregoing step 202-5, step 203 and step 204:
and step 203, the electronic equipment displays a view frame of the second camera on the preview image with a first display effect under the condition that the target magnification is larger than the first numerical value.
It can be understood that the target magnification being greater than the first value indicates that the target magnification is large, and the electronic device needs to perform electronic zooming on the image acquired by the second camera to obtain the enlarged first image.
Optionally, the first display effect is a blinking display; that is, the electronic device alternately shows and hides the view frame of the second camera to prompt the user that the target magnification is large.
Illustratively, assuming the first value is 2, if the target magnification is 2.2, the target magnification is greater than the first value. Specifically, in conjunction with FIG. 7 above, after the user clicks the "+" button again, triggering the electronic device to continue to increase the target magnification by 2.2, the electronic device may update the viewfinder B1 that displays the first indicia of "2.2" and the second camera on the preview image A1, as shown in FIG. 8.
And 204, under the condition that the target mark frame is positioned outside the view frame of the second camera, the electronic equipment displays the view frame of the second camera with a second display effect.
And the second effect is used for indicating that the target object is out of the framing range of the second camera.
Optionally, the second display effect is a continuous display with a red frame; that is, the electronic device continuously displays the view frame of the second camera in red to prompt the user that the target object is outside the viewing range of the second camera and that the electronic device cannot successfully acquire the first image of the target object through the second camera.
Illustratively, assuming the first value is 2, if the target magnification is 2.2, the target magnification is greater than the first value. If the mark frame 43 (i.e., the target mark frame) of the object 41 exceeds the view frame B1 of the second camera, i.e., the object 41 exceeds the view range of the second camera, the electronic device continues to display the view frame B1 of the second camera continuously on the preview image a1 and displays it in red. Therefore, the human-computer interaction performance of the electronic equipment in the interesting shooting process through the plurality of cameras is improved.
Optionally, based on the implementation mode 2, before the step 202-5, the image processing method provided in the embodiment of the present invention may further include the step 205 and the step 206:
and step 205, the electronic equipment displays the view frame of the third camera on the preview image under the condition that the coordinates of the target mark frame are positioned in the view range of the third camera.
Wherein the electronic device may display a finder frame of the third camera on the preview image in a case where the target object is within a finder range of the third camera.
It can be understood that the coordinates of the target mark frame are located within the viewing range of the third camera, and it can be said that the target mark frame is located within the viewing range of the third camera, which indicates that the target object is located within the viewing range of the third camera.
For example, it is assumed that the third camera may be the 2-time telephoto camera, and the second camera is the 5-time periscopic telephoto camera.
And step 206, when the target object is located outside the viewing range of the third camera and within the viewing range of the second camera, the electronic device closes the third camera, starts the second camera, and displays the viewing frame of the second camera on the preview image.
The focal length of the third camera is larger than that of the first camera and smaller than that of the second camera.
Specifically, in step 206, the target mark frame may be located outside the view frame of the third camera and inside the view frame of the second camera; that is, the target object is outside the viewing range of the third camera and within the viewing range of the second camera. For example, the coordinates of the target mark frame are located outside the viewing range of the 2-time telephoto camera and within the viewing range of the 5-time periscopic telephoto camera.
For example, in step 205 and step 206, the third camera may be the 2-time telephoto camera, and the second camera may be the 5-time periscopic telephoto camera.
Similarly, in conjunction with FIG. 8 above, after the user clicks the "+" button again, triggering the electronic device to continue to increase the target magnification by 5, the electronic device may update the viewing frame B2 that displays the first marker "5" and displays the 5-fold periscopic tele camera on the preview image A1, as shown in FIG. 9.
It should be noted that, in the embodiment of the present invention, the electronic device may start the corresponding camera according to the position of the mark frame of the target object (i.e., the target mark frame): if the coordinates of the target mark frame are located outside the viewing ranges of both the 2x telephoto camera and the 5x periscopic telephoto camera, the electronic device starts only the standard camera; if the coordinates of the target mark frame are located within the viewing range of the 2x telephoto camera but outside the viewing range of the 5x periscopic telephoto camera, the electronic device turns on the 2x telephoto camera; and if the coordinates of the target mark frame are located within the viewing range of the 5x periscopic telephoto camera, the electronic device turns on the 5x periscopic telephoto camera.
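As an illustration only (the patent gives no code), the camera-selection rule above can be sketched as follows. The `Rect` containment convention and all names here are assumptions introduced for the sketch, not part of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, other: "Rect") -> bool:
        # True if `other` lies entirely inside this rectangle.
        return (self.left <= other.left and self.top <= other.top
                and self.right >= other.right and self.bottom >= other.bottom)

def choose_camera(mark_frame: Rect, tele_2x_range: Rect, tele_5x_range: Rect) -> str:
    """Pick the camera whose viewing range still contains the target mark frame,
    preferring the longest focal length, per the rule described above."""
    if tele_5x_range.contains(mark_frame):
        return "5x periscopic telephoto"
    if tele_2x_range.contains(mark_frame):
        return "2x telephoto"
    return "standard"
```

The preference order mirrors the text: the longest usable focal length wins, and the standard camera is the fallback when the mark frame leaves every telephoto viewing range.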
In this way, with the image processing method provided by the embodiment of the present invention, the first area where the target object is located can be shot by switching among the plurality of telephoto cameras, so that the image of the first area is captured by the telephoto camera with the larger focal length; the higher pixel count of the enlarged image of the first area further improves its definition.
Optionally, step 207 may be further included after step 202:
step 207, the electronic device saves at least one of the following items: a preview image, a first image, a second image.
The second image is obtained from the preview image and the first image in the second area. That is, the second image is the preview image with the first image displayed in the second area.
The electronic device may save the preview image, the first image, and the second image to an album of the electronic device. Subsequently, after the user triggers the electronic device to enter the album, the user may click the second image (i.e., the composited picture) to view and re-edit it. Specifically, the user may perform operations similar to those in step 202 on the person in the first area of the second image, i.e., the person represented by the target object, to zoom the image of the target object in or out (i.e., adjust the target magnification, specifically by adjusting the first magnification), and then save the result as a new image. Therefore, the human-computer interaction performance in the process of enlarging the image of the target object is further improved, and the user can adjust the target magnification of the enlarged image of the target object as required.
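A minimal sketch of how the second image described above could be composed, assuming images are represented as nested lists of pixel values; the representation and all names are illustrative, not taken from the patent:

```python
def compose_second_image(preview, first_image, top, left):
    """Overlay `first_image` (the telephoto crop, already enlarged) onto a
    copy of `preview` with its top-left corner at (top, left), i.e. at the
    second area. Rows are lists of pixel values; `preview` is not modified."""
    out = [row[:] for row in preview]          # copy so the preview survives
    for dy, row in enumerate(first_image):
        for dx, px in enumerate(row):
            out[top + dy][left + dx] = px
    return out
```

Saving all three of the preview image, the first image, and this composed result matches the "at least one of" behavior in step 207.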
Fig. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. As shown in Fig. 10, the image processing apparatus 10 includes: a receiving module 11, configured to receive a first input of a user to a first area in a preview image under the condition that the preview image acquired by the first camera is displayed; and a display control module 12, configured to display, in response to the first input received by the receiving module 11, a first image in a second area of the preview image, where the first image is an image of the first area acquired by the second camera; the second area at least partially overlaps the first area, the display size of the second area is larger than that of the first area, and the focal length of the second camera is greater than that of the first camera.
Optionally, the display control module 12 is specifically configured to obtain a first initial image of the first area through the second camera; amplifying the first initial image according to a first amplification factor to obtain a first image; displaying the first image in a second area of the preview image; wherein, the ratio of the first angle of view to the second angle of view is a first value; the first field angle is the field angle of the first camera, and the second field angle is the field angle of the second camera; the first magnification is the ratio of the target magnification to the first value; the product of the target magnification and the display size of the first area is the display size of the second area; the target magnification is a default value or a user-entered value.
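For concreteness, the magnification relationships in the paragraph above can be illustrated numerically. The 80-degree and 40-degree field angles below are assumed example values, not taken from the patent:

```python
def first_magnification(first_fov_deg: float, second_fov_deg: float,
                        target_magnification: float) -> float:
    """Residual magnification to apply to the second camera's initial image
    so that the overall enlargement equals the target magnification."""
    first_value = first_fov_deg / second_fov_deg  # ratio of the two field angles
    return target_magnification / first_value

# Example: an 80-degree first camera and a 40-degree second (telephoto)
# camera give a first value of 2; a target magnification of 6 therefore
# leaves a first magnification of 3 for the telephoto's initial image.
```

The display size of the second area is then the display size of the first area multiplied by the target magnification, as stated above.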
Optionally, a center position of the second region is the same as a center position of the first region, or at least one edge position of the second region is the same as at least one edge position of the first region.
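The two placement options just described can be sketched as follows; the `(left, top, width, height)` rectangle convention and the function name are assumptions made for the sketch:

```python
def second_region(first_region, target_magnification, anchor="center"):
    """Return (left, top, width, height) of the enlarged second region,
    either centered on the first region or sharing its top-left edges.
    `first_region` is also (left, top, width, height)."""
    l, t, w, h = first_region
    w2, h2 = w * target_magnification, h * target_magnification
    if anchor == "center":
        cx, cy = l + w / 2, t + h / 2            # center of the first region
        return (cx - w2 / 2, cy - h2 / 2, w2, h2)
    # "edge": keep at least one edge position (here the top-left) identical
    return (l, t, w2, h2)
```

Either choice keeps the second area at least partially overlapping the first area, as the embodiment requires.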
Optionally, the first image is an image of a target object recognized and extracted from a target initial image, the target initial image is an image of a third area in a preview image acquired by a second camera, and the third area includes the first area.
Optionally, the first input includes a first sub-input and a second sub-input; the display control module 12 is specifically configured to determine, in response to a first sub-input of a target mark frame displayed on the preview image by a user, an area where a target object marked by the target mark frame is located as a first area; in response to a second sub-input of the user to the first control displayed on the preview image, determining the numerical value input through the first control as a target magnification; acquiring a first image through a second camera under the condition that a target mark frame is positioned in a viewing frame of the second camera displayed on the preview image; displaying the first image in a second area of the preview image according to the target magnification; the target mark frame is marked as one mark frame in at least one mark frame displayed on the preview image, and each mark frame is used for marking a shooting object in the preview image; the first control is used for inputting a target magnification; the product of the target magnification and the display size of the first area is the display size of the second area; the target mark frame is positioned in the view finding frame of the second camera and used for indicating that the target object is positioned in the view finding range of the second camera.
Optionally, the display control module 12 is further configured to, when the target mark frame is located in a view frame of a second camera displayed on the preview image, before the first image is acquired by the second camera, display the view frame of the second camera on the preview image with a first display effect when the target magnification is greater than a first numerical value; under the condition that the target mark frame is positioned outside the view frame of the second camera, displaying the view frame of the second camera with a second display effect; wherein, the ratio of the first angle of view to the second angle of view is a first value; the first angle of view is the angle of view of the first camera, and the second angle of view is the angle of view of the second camera.
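One possible reading of the display-effect rule above, sketched under the assumption that the "first numerical value" is the first value (the field-angle ratio) and that the effect names are placeholders:

```python
def viewfinder_display_effect(target_magnification, first_value, frame_inside):
    """Decide how the second camera's viewfinder frame is rendered on the
    preview image. Returns None when the magnification does not yet exceed
    the first value, i.e. no telephoto viewfinder frame is shown."""
    if target_magnification <= first_value:
        return None
    # Distinguish whether the target mark frame lies inside the viewfinder frame.
    return "first_effect" if frame_inside else "second_effect"
```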
Optionally, the display control module 12 is specifically configured to display a finder frame of the third camera on the preview image when the target object is within a finder range of the third camera; when the target object is positioned outside the framing range of the third camera and in the framing range of the second camera, closing the third camera, starting the second camera, and displaying a framing frame of the second camera on the preview image; the focal length of the third camera is larger than that of the first camera and smaller than that of the second camera.
Optionally, the apparatus 10 further comprises: a saving module 13, configured to, after the display control module 12 displays the first image in the second area of the preview image, save at least one of the following: preview images, first images, second images; and obtaining the second image according to the preview image and the first image in the second area.
The image processing apparatus 10 provided in the embodiment of the present invention can implement each process implemented by the electronic device in the foregoing method embodiment, and for avoiding repetition, details are not described here again.
In the image processing apparatus provided in the embodiment of the present invention, when the preview image acquired by the first camera is displayed, a first input by a user to a first area in the preview image may trigger the display of a first image in a second area of the preview image, where the first image is an image of the first area acquired by the second camera. The second area at least partially overlaps the first area, and the display size of the second area is larger than that of the first area, so that the first image displayed in the second area of the preview image can meet the user's need to enlarge a local region of interest, such as the face of a person in the preview image. Specifically, the focal length of the second camera is greater than that of the first camera, that is, the second camera is a telephoto camera compared with the first camera, so that the pixel count of the first image of the target object is higher than that of the target object's image in the preview picture. As such, the first image can present more image detail than the image of the first area in the preview image. Moreover, the first image has high definition and avoids the jarring effect of image stretching, which solves the problems of low definition and of distorted, stretched images caused by directly enlarging the image of the first area in the preview image.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention, where the electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 11 does not constitute a limitation of electronic devices, which may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
Alternatively, the image processing apparatus 10 may be implemented by the electronic device 100.
Alternatively, the receiving module 11 in the image processing apparatus 10 may be implemented by the user input unit 107 in the electronic device 100; the display control module 12 may be implemented by the processor 110 and the display unit 106 in the electronic device 100.
The user input unit 107 is configured to receive a first input of a user to a first area in a preview image while the preview image captured by the first camera is displayed; a display unit 106 for displaying a first image in a second area of the preview image in response to a first input received by the user input unit 107, the first image being an image of the first area acquired by the second camera; the second area is at least partially overlapped with the first area, the display size of the second area is larger than that of the first area, and the focal length of the second camera is larger than that of the first camera.
In the electronic device provided by the embodiment of the present invention, when the preview image acquired by the first camera is displayed, a first input by a user to a first area in the preview image may trigger the display of a first image in a second area of the preview image, where the first image is an image of the first area acquired by the second camera. The second area at least partially overlaps the first area, and the display size of the second area is larger than that of the first area, so that the first image displayed in the second area of the preview image can meet the user's need to enlarge a local region of interest, such as the face of a person in the preview image. Specifically, the focal length of the second camera is greater than that of the first camera, that is, the second camera is a telephoto camera compared with the first camera, so that the pixel count of the first image of the target object is higher than that of the target object's image in the preview picture. As such, the first image can present more image detail than the image of the first area in the preview image. Moreover, the first image has high definition and avoids the jarring effect of image stretching, which solves the problems of low definition and of distorted, stretched images caused by directly enlarging the image of the first area in the preview image.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 101.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 11, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and an external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 110 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the electronic device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The electronic device 100 may further include a power source 111 (such as a battery) for supplying power to each component, and preferably, the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 110, a memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110, where the computer program, when executed by the processor 110, implements each process of the foregoing method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
The embodiments of the present invention further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the processes of the foregoing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. An image processing method, characterized in that the method comprises:
receiving a first input of a user to a first area in a preview image under the condition of displaying the preview image acquired by a first camera;
responding to the first input, and displaying a first image in a second area of the preview image, wherein the first image is an image of the first area acquired by a second camera;
the second area is at least partially overlapped with the first area, the display size of the second area is larger than that of the first area, and the focal length of the second camera is larger than that of the first camera; the first image is an image of an object in the first area, which is obtained by identifying and executing a matting operation and a cropping operation from an initial target image, the initial target image is an image of a third area in the preview image acquired by the second camera, and the third area comprises the first area;
the first input comprises a first sub-input and a second sub-input;
the displaying a first image in a second area of the preview image in response to the first input, comprising:
responding to a first sub-input of a user to a target marking frame displayed on the preview image, and determining an area where a target object marked by the target marking frame is located as the first area;
in response to a second sub-input of a user to a first control displayed on the preview image, determining a numerical value input through the first control as a target magnification;
under the condition that the target magnification is larger than a first numerical value, displaying a view-finding frame of the second camera on the preview image in a first display effect;
under the condition that the target mark frame is positioned outside the view frame of the second camera, displaying the view frame of the second camera with a second display effect;
wherein, the ratio of the first angle of view to the second angle of view is a first value; the first field angle is the field angle of the first camera, and the second field angle is the field angle of the second camera;
displaying a finder frame of a third camera on the preview image if the target object is within a viewing range of the third camera;
when the target object is located outside the framing range of the third camera and within the framing range of the second camera, closing the third camera, starting the second camera, and displaying a framing frame of the second camera on the preview image;
the focal length of the third camera is greater than that of the first camera and less than that of the second camera;
and displaying the first image in the second area of the preview image according to the target magnification.
2. The method of claim 1, wherein displaying the first image in the second area of the preview image comprises:
acquiring a first initial image of the first area through the second camera;
amplifying the first initial image according to a first amplification factor to obtain a first image;
displaying the first image in the second area of the preview image;
wherein the first magnification is a ratio of a target magnification to a first value; the product of the target magnification and the display size of the first area is the display size of the second area; the target magnification is a default value or a value input by a user.
3. The method of claim 1, wherein a center position of the second region is the same as a center position of the first region, or wherein at least one edge position of the second region is the same as at least one edge position of the first region.
4. The method of claim 1,
wherein the target mark frame is marked as one mark frame of at least one mark frame displayed on the preview image, and each mark frame is used for marking a shooting object in the preview image; the first control is used for inputting the target magnification; the product of the target magnification and the display size of the first area is the display size of the second area;
the target mark frame is positioned in a view frame of the second camera and used for indicating that the target object is positioned in a view range of the second camera.
5. The method of any of claims 1-4, wherein after displaying the first image in the second region of the preview image, the method further comprises:
saving at least one of: the preview image, the first image, the second image;
wherein the second image is derived from the preview image and the first image in the second region.
6. An image processing apparatus, characterized in that the apparatus comprises:
the device comprises a receiving module, a display module and a display module, wherein the receiving module is used for receiving first input of a user to a first area in a preview image under the condition of displaying the preview image acquired by a first camera;
the display control module is used for responding to the first input received by the receiving module, and displaying a first image in a second area of the preview image, wherein the first image is an image of the first area acquired by a second camera;
the second area is at least partially overlapped with the first area, the display size of the second area is larger than that of the first area, and the focal length of the second camera is larger than that of the first camera; the first image is an image of an object in the first area, which is obtained by identifying and executing a matting operation and a cropping operation from an initial target image, the initial target image is an image of a third area in the preview image, which is acquired by the second camera, and the third area contains the first area;
the first input comprises a first sub-input and a second sub-input;
the display control module is specifically configured to determine, in response to a first sub-input of a target mark frame displayed on the preview image by a user, an area where a target object marked by the target mark frame is located as the first area; in response to a second sub-input of a user to a first control displayed on the preview image, determining a numerical value input through the first control as a target magnification; under the condition that the target magnification is larger than a first numerical value, displaying a view-finding frame of the second camera on the preview image in a first display effect; displaying the viewfinder frame of the second camera with a second display effect under the condition that the target mark frame is positioned outside the viewfinder frame of the second camera;
wherein, the ratio of the first angle of view to the second angle of view is a first value; the first field angle is the field angle of the first camera, and the second field angle is the field angle of the second camera;
the display control module is specifically configured to display a finder frame of a third camera on the preview image when the target object is within a finder range of the third camera; when the target object is located outside the framing range of the third camera and within the framing range of the second camera, closing the third camera, starting the second camera, and displaying a framing frame of the second camera on the preview image;
the focal length of the third camera is greater than that of the first camera and less than that of the second camera;
the display control module is further configured to display the first image in the second area of the preview image according to the target magnification.
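The viewfinder-frame behavior described in the claims above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the telephoto (second) camera's viewfinder frame is shown once the requested magnification exceeds the wide/tele field-angle ratio (the "first value"), and its display effect changes when the target mark frame falls outside that frame. All names, the rectangle geometry, the example field angles (80°/20°), and the "display effect" strings are assumptions for illustration.

```python
# Sketch of the claim's display-control logic (assumed names and values).
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, other: "Rect") -> bool:
        # True if `other` lies entirely inside this rectangle.
        return (self.x <= other.x and self.y <= other.y
                and self.x + self.w >= other.x + other.w
                and self.y + self.h >= other.y + other.h)

def tele_viewfinder(preview: Rect, fov_ratio: float) -> Rect:
    """Approximate the second camera's viewfinder frame as the centered
    sub-rectangle of the wide preview covering 1/fov_ratio of each side."""
    w, h = preview.w / fov_ratio, preview.h / fov_ratio
    return Rect(preview.x + (preview.w - w) / 2,
                preview.y + (preview.h - h) / 2, w, h)

def viewfinder_display(target_box: Rect, preview: Rect,
                       magnification: float,
                       fov_wide: float, fov_tele: float) -> str:
    first_value = fov_wide / fov_tele  # ratio of first to second field angle
    if magnification <= first_value:
        return "hidden"                # wide camera alone covers the zoom
    frame = tele_viewfinder(preview, first_value)
    if frame.contains(target_box):
        return "first_effect"          # e.g. a plain outline
    return "second_effect"             # e.g. highlighted, target outside frame

# Example: 10x requested on a wide/tele pair whose field-angle ratio is 4.
preview = Rect(0, 0, 4000, 3000)
target = Rect(1900, 1400, 200, 150)    # small mark frame near the center
print(viewfinder_display(target, preview, 10.0, 80.0, 20.0))  # first_effect
```

A real implementation would derive the frame from camera calibration rather than a centered crop, but the branch structure mirrors the conditions the claim recites.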
7. An electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 5.
8. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 5.
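The "first image" the claims describe is produced by recognizing the target object in the initial target image (the second camera's capture of the third area), then matting and cropping it. A minimal sketch of that pipeline, assuming NumPy arrays and a simple rectangular mask standing in for a real segmentation/matting model; the function name and box convention are illustrative, not from the patent:

```python
# Hypothetical matting-and-cropping step (rectangular mask as a stand-in
# for real object segmentation/matting).
import numpy as np

def extract_first_image(initial_target_image: np.ndarray,
                        object_box: tuple) -> np.ndarray:
    """object_box = (top, left, height, width) of the recognized object,
    in the initial target image's pixel coordinates."""
    top, left, h, w = object_box
    mask = np.zeros(initial_target_image.shape[:2], dtype=bool)
    mask[top:top + h, left:left + w] = True          # matting operation
    matted = np.where(mask[..., None], initial_target_image, 0)
    return matted[top:top + h, left:left + w]        # cropping operation

# Tiny synthetic "initial target image" for demonstration.
img = (np.arange(10 * 10 * 3) % 256).astype(np.uint8).reshape(10, 10, 3)
first_image = extract_first_image(img, (2, 3, 4, 5))
print(first_image.shape)   # (4, 5, 3)
```

The cropped result is what the claims then display in the second area at the target magnification; a production system would replace the rectangular mask with an alpha matte from a segmentation network.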
CN202010367204.4A 2020-04-30 2020-04-30 Image processing method and device and electronic equipment Active CN111541845B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010367204.4A CN111541845B (en) 2020-04-30 2020-04-30 Image processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111541845A CN111541845A (en) 2020-08-14
CN111541845B true CN111541845B (en) 2022-06-24

Family

ID=71975297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010367204.4A Active CN111541845B (en) 2020-04-30 2020-04-30 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111541845B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109474786B (en) * 2018-12-24 2021-07-23 维沃移动通信有限公司 Preview image generation method and terminal
CN114422687B (en) * 2020-10-28 2024-01-19 北京小米移动软件有限公司 Preview image switching method and device, electronic equipment and storage medium
CN112584040B (en) * 2020-12-02 2022-05-17 维沃移动通信有限公司 Image display method and device and electronic equipment
CN112637495B (en) * 2020-12-21 2022-06-17 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and readable storage medium
CN112702524B (en) * 2020-12-25 2022-10-11 维沃移动通信(杭州)有限公司 Image generation method and device and electronic equipment
CN112995500B (en) * 2020-12-30 2023-08-08 维沃移动通信(杭州)有限公司 Shooting method, shooting device, electronic equipment and medium
CN112804451B (en) * 2021-01-04 2023-04-25 三星电子(中国)研发中心 Method and system for photographing by utilizing multiple cameras and mobile device
CN112954195A (en) * 2021-01-27 2021-06-11 维沃移动通信有限公司 Focusing method, focusing device, electronic equipment and medium
CN113038008A (en) * 2021-03-08 2021-06-25 维沃移动通信有限公司 Imaging method, imaging device, electronic equipment and storage medium
CN113364976B (en) * 2021-05-10 2022-07-15 荣耀终端有限公司 Image display method and electronic equipment
CN115473996B (en) * 2021-06-11 2024-04-05 荣耀终端有限公司 Video shooting method and electronic equipment
CN113835815A (en) * 2021-09-28 2021-12-24 维沃移动通信有限公司 Image previewing method and device
CN116208846A (en) * 2021-11-29 2023-06-02 中兴通讯股份有限公司 Shooting preview method, image fusion method, electronic device and storage medium
CN115037879A (en) * 2022-06-29 2022-09-09 维沃移动通信有限公司 Shooting method and device thereof

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103761757A (en) * 2013-12-31 2014-04-30 上海莱凯数码科技有限公司 Producing method for locally amplifying picture in digital animation
CN104052931A (en) * 2014-06-27 2014-09-17 宇龙计算机通信科技(深圳)有限公司 Image shooting device, method and terminal
CN106576143A (en) * 2014-07-25 2017-04-19 三星电子株式会社 Image photographing apparatus and image photographing method
CN106888349A (en) * 2017-03-30 2017-06-23 努比亚技术有限公司 A kind of image pickup method and device
CN106909274A (en) * 2017-02-27 2017-06-30 努比亚技术有限公司 A kind of method for displaying image and device
WO2018106310A1 (en) * 2016-12-06 2018-06-14 Qualcomm Incorporated Depth-based zoom function using multiple cameras
CN110941375A (en) * 2019-11-26 2020-03-31 腾讯科技(深圳)有限公司 Method and device for locally amplifying image and storage medium


Similar Documents

Publication Publication Date Title
CN111541845B (en) Image processing method and device and electronic equipment
CN108513070B (en) Image processing method, mobile terminal and computer readable storage medium
CN108668083B (en) Photographing method and terminal
WO2021104197A1 (en) Object tracking method and electronic device
US11451706B2 (en) Photographing method and mobile terminal
CN111182205B (en) Photographing method, electronic device, and medium
WO2021051995A1 (en) Photographing method and terminal
CN109474786B (en) Preview image generation method and terminal
CN111031398A (en) Video control method and electronic equipment
WO2021104227A1 (en) Photographing method and electronic device
CN109905603B (en) Shooting processing method and mobile terminal
CN111010512A (en) Display control method and electronic equipment
WO2021036623A1 (en) Display method and electronic device
CN111010511B (en) Panoramic body-separating image shooting method and electronic equipment
WO2021082744A1 (en) Video viewing method and electronic apparatus
WO2021104226A1 (en) Photographing method and electronic device
WO2021190390A1 (en) Focusing method, electronic device, storage medium and program product
CN108924422B (en) Panoramic photographing method and mobile terminal
CN110830713A (en) Zooming method and electronic equipment
WO2021104266A1 (en) Object display method and electronic device
CN110798621A (en) Image processing method and electronic equipment
CN110769156A (en) Picture display method and electronic equipment
WO2021017713A1 (en) Photographing method and mobile terminal
CN111464746B (en) Photographing method and electronic equipment
CN111083374B (en) Filter adding method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant