CN114025100A - Shooting method, shooting device, electronic equipment and readable storage medium - Google Patents

Shooting method, shooting device, electronic equipment and readable storage medium

Info

Publication number
CN114025100A
CN114025100A
Authority
CN
China
Prior art keywords
image
target
input
background image
images
Prior art date
Legal status
Granted
Application number
CN202111449935.4A
Other languages
Chinese (zh)
Other versions
CN114025100B (en)
Inventor
胡孔明 (Hu Kongming)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111449935.4A priority Critical patent/CN114025100B/en
Publication of CN114025100A publication Critical patent/CN114025100A/en
Application granted granted Critical
Publication of CN114025100B publication Critical patent/CN114025100B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting apparatus, an electronic device, and a readable storage medium, belonging to the field of photography. An embodiment of the method comprises: controlling a camera to capture N groups of images of N photographic subjects, where each group of images comprises an object image and a background image; receiving a first input from a user; and, in response to the first input, performing image synthesis on a target object image and a target background image to generate a target blurred image, where the target object image and the target background image are determined from the first input and N is a positive integer. The embodiment reduces the cost of blurred-background shooting and enriches the blurring effects available for an image.

Description

Shooting method, shooting device, electronic equipment and readable storage medium
Technical Field
The embodiments of the present application relate to the field of photography, and in particular to a shooting method, a shooting apparatus, an electronic device, and a readable storage medium.
Background
Blurring is an image processing method in which the depth of field is reduced by digital image processing so that focus falls on the subject while image content outside the focal plane is gradually blurred. Compared with an ordinary image, a blurred image highlights the photographed subject more effectively.
In the related art, a subject is typically focused during shooting, depth of field is obtained through a dual-camera module, and a blurred image of the subject is generated based on that depth of field. Because a dual-camera module is required, this approach is costly; moreover, the dual-camera module can set only one combined focal plane, and areas outside that plane cannot be displayed clearly, so the blurring effect is limited to a single style.
Disclosure of Invention
The embodiments of the present application aim to provide a shooting method, a shooting apparatus, an electronic device, and a readable storage medium that can solve the technical problems of high blurred-shooting cost and a single, fixed blurring effect.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a shooting method, where the method includes: controlling a camera to capture N groups of images of N photographic subjects, where each group of images includes an object image and a background image; receiving a first input from a user; and, in response to the first input, performing image synthesis on a target object image and a target background image to generate a target blurred image; where the target object image and the target background image are determined from the first input, and N is a positive integer.
In a second aspect, an embodiment of the present application provides a shooting apparatus, including: a control unit for controlling the camera to capture N groups of images of N photographic subjects, where each group of images includes an object image and a background image; a first receiving unit for receiving a first input from a user; and a generating unit configured to perform image synthesis on a target object image and a target background image in response to the first input, generating a target blurred image; where the target object image and the target background image are determined from the first input, and N is a positive integer.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method as described in the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the method as described in the first aspect above.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method described in the first aspect.
In the embodiments of the present application, the camera is controlled to capture N groups of images of N photographic subjects, each group comprising an object image and a background image; a first input from the user is then received, and in response to that input a target object image and a target background image are synthesized to generate a target blurred image. On the one hand, without increasing the number of cameras, a single camera is controlled to capture multiple images at different focus positions, and blurred images are generated through the user's interactive operations, reducing the cost of blurred-background shooting. On the other hand, the user can flexibly select the target subject to be displayed sharply and the desired background blurring effect; the selected subject is not limited by a focal plane, a target blurred image matching the blurring effect the user wants can be obtained, and the blurring effects available for an image are enriched.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 is a flowchart of a shooting method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a marking process of a photographic subject of the photographic method provided by the embodiment of the application;
fig. 3 is a first schematic diagram of a shooting result display interface of the shooting method provided in an embodiment of the present application;
fig. 4 is a schematic process diagram of receiving a first input in the shooting method provided by the embodiment of the present application;
fig. 5 is a second schematic diagram of a shooting result display interface of the shooting method provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a shooting device provided in an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware structure of an electronic device suitable for implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects it joins.
The shooting method, the shooting device, the electronic device and the readable storage medium provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings by specific embodiments and application scenarios thereof.
Please refer to fig. 1, which shows a flowchart of the shooting method provided in an embodiment of the present application. The shooting method may be applied to an electronic device. In practice, the electronic device may be a smartphone, a tablet computer, a laptop, or the like. The electronic device may have a camera application installed, and the camera application may provide a photographing function.
The process of the shooting method provided by the embodiment of the application comprises the following steps:
Step 101, controlling a camera to capture N groups of images of N photographic subjects, where each group of images includes an object image and a background image.
In the present embodiment, the execution subject of the shooting method (the electronic device described above) may be equipped with a camera. The execution subject may control the camera to capture N sets of images of N photographic subjects, where N is a positive integer. The N photographic subjects may be any N subjects in the shooting preview interface, such as people, animals, still lifes, or scenery. As an example, if the shooting preview interface displays a child, a woman, and a tree, at least one of the child, the woman, and the tree may serve as a photographic subject. For each subject, the camera may focus on the subject before capturing it.
In the present embodiment, N sets of images can be obtained by shooting the N subjects. Each set may include an object image and a background image. The object image may be the image area enclosed by the subject's contour, and the background image may be the image of the area outside that contour in the captured image. Alternatively, the object image may be an image containing that area and the background image an image containing the area outside it; for example, both may cover all the image content in the shooting preview interface.
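The split into an object image and a background image described above can be sketched in code. The sketch below is illustrative, not from the patent: images are modeled as 2-D lists of grayscale values, and the function name `split_object_background` and the binary `mask` marking the subject's contour are assumptions.

```python
def split_object_background(frame, mask, fill=0):
    """Split one captured frame into (object_image, background_image).

    frame: 2-D list of pixel values from one focused capture.
    mask:  2-D list of 0/1 flags; 1 marks pixels inside the subject's contour.
    fill:  placeholder value for pixels outside each image's own region.
    """
    # Object image: keep pixels inside the contour, blank out the rest.
    object_image = [[px if m else fill for px, m in zip(frow, mrow)]
                    for frow, mrow in zip(frame, mask)]
    # Background image: the complementary region.
    background_image = [[fill if m else px for px, m in zip(frow, mrow)]
                        for frow, mrow in zip(frame, mask)]
    return object_image, background_image

# Toy 2x2 frame; the subject occupies the main diagonal.
frame = [[10, 20], [30, 40]]
mask = [[1, 0], [0, 1]]
obj, bg = split_object_background(frame, mask)
# obj == [[10, 0], [0, 40]], bg == [[0, 20], [30, 0]]
```

A real implementation would derive `mask` from subject segmentation; here it is supplied directly to keep the sketch self-contained.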
In some optional implementations of this embodiment, before controlling the camera to capture the N groups of images of the N subjects, the execution subject may further receive a second input from the user on N preview areas of the shooting preview interface, and, in response to the second input, determine the object in each of the N preview areas as a photographic subject. The second input may include, but is not limited to, sequential click, long-press, circle-selection, or frame-selection inputs on each of the N preview areas. The user can thus flexibly select photographic subjects in the shooting preview interface as needed.
In some optional implementations of this embodiment, after determining the object in a preview area as a photographic subject, the execution subject may further mark the photographic subject in the shooting preview interface. The marking may include, but is not limited to, at least one of: displaying a marker frame on the subject, displaying a label beside the subject, or changing the style of the preview area where the subject is located. As an example, fig. 2 shows a schematic diagram of the marking process. The shooting preview interface contains three photographic subjects: a woman in the near view, a child in the middle view, and a tree and a bench in the far view. The user may first click on the woman in the near view, and the numeral 1 appears beside her contour, identifying her as the first photographic subject. The user may then click on the child in the middle view, and the numeral 2 appears beside the child's contour, indicating the second subject. Each time the user selects a subject, the corresponding selection result is displayed, making it easy to distinguish selected subjects from unselected ones, preventing the user from repeatedly selecting or mistakenly deselecting the same subject, and improving the accuracy of user operations.
It should be noted that, during this process, if the user wishes to deselect a photographic subject, the user may click that subject again and then click the "change setting" control to cancel the selection. The selection process may end when the user clicks the "finish setting" control.
In some optional implementations of this embodiment, after the camera is controlled to capture the N groups of images of the N photographic subjects, the N object images may be displayed in a first area and the N background images in a second area. The first and second areas can be any two areas of the display interface. As an example, the shooting preview interface contains three photographic subjects: a woman in the near view, a child in the middle view, and a tree in the far view. After the camera captures the three sets of images, the shooting result display interface may appear as in fig. 3. The upper part of the interface is the first area, displaying the object image of the near-view woman (reference numeral 301), the object image of the far-view tree (302), and the object image of the middle-view child (303). The lower part is the second area, displaying the background image of the near-view woman (304), the background image of the far-view tree (305), and the background image of the middle-view child (306). Displaying object images and background images in different areas helps the user distinguish them and makes it convenient to select the target object image and the target background image from the candidates.
In some optional implementations of this embodiment, when the camera is controlled to capture the N groups of images of the N subjects, each photographic subject may first be focused and photographed to obtain a first intermediate image. The image of a first region in the first intermediate image may then be extracted to obtain the object image; the first region may include the area enclosed by the subject's contour. Finally, a background image for each subject may be obtained based on the first intermediate image or on a second intermediate image. The second intermediate image may be obtained by focusing on and photographing a second region, which is the image area of the first intermediate image outside the first region. Object and background images acquired this way clearly show the different subjects and different degrees of blurring, making it easy for the user to distinguish and select the subjects to display sharply and the desired blurring effect. As an example, as shown in fig. 2, the shooting preview interface contains three photographic subjects: a woman in the near view, a child in the middle view, and a tree in the far view. The near-view woman may be focused and photographed first, yielding her first intermediate image; the area where she is located may then be cut out as the first region, producing her object image (reference numeral 301 in fig. 3) and background image (304 in fig. 3). Next, the far-view tree may be focused and photographed, yielding its first intermediate image; cutting out the area where the tree is located as the first region produces the tree's object image (302 in fig. 3) and background image (305 in fig. 3). Finally, the middle-view child may be focused and photographed, yielding the child's first intermediate image; cutting out the area where the child is located as the first region produces the child's object image (303 in fig. 3) and background image (306 in fig. 3).
In some optional implementations of this embodiment, the background image of each photographic subject may be obtained by matting out the image of the second region from the first intermediate image. For example, as shown in fig. 2, the shooting preview interface contains three photographic subjects: a woman in the near view, a child in the middle view, and a tree in the far view. For the first intermediate image of the near-view woman, the first region may be the area where she is located and the second region the image area outside her; the background image can be obtained by cutting out that outside area. The same applies to the middle-view child and the far-view tree and is not repeated here. In this way, background images with different degrees of blurring are obtained for different subjects, letting the user flexibly pick the background image with the desired blurring degree.
In some optional implementations of this embodiment, the background image of each photographic subject may instead be obtained by first focusing on and photographing the second region to obtain a second intermediate image, and then extracting the image of the second region from that second intermediate image. For example, as shown in fig. 2, the shooting preview interface contains three photographic subjects: a woman in the near view, a child in the middle view, and a tree in the far view. For the first intermediate image of the near-view woman, the first region may be the area where she is located and the second region the image area outside her. The second intermediate image may be obtained by focusing on the area outside the woman (e.g., focusing on the far view), and the background image by cutting out the image area outside her from the second intermediate image. The same applies to the middle-view child and the far-view tree and is not repeated here. A sharp background image can thus be obtained, and its blurring degree can be further adjusted, so the user can obtain both a sharp background image and background images with different degrees of blurring, further enriching the blurring effects.
Step 102, a first input of a user is received.
In this embodiment, the execution subject may receive a first input from the user. The first input is used to select the target object image and the target background image from the N sets of images, and may include an input on an object image and an input on a background image among the N sets. The input may include, but is not limited to, a click input, a long-press input, a circle-selection input, or a frame-selection input. After detecting the user's input on a certain object image, the execution subject may take that object image as the target object image; similarly, after detecting the user's input on a certain background image, it may take that background image as the target background image.
Step 103, in response to the first input, performing image synthesis on the target object image and the target background image to generate a target blurred image.
In this embodiment, in response to the first input, the execution subject may synthesize the target object image and the target background image to generate the target blurred image, in which the target object image selected by the user is displayed sharply while the other areas are displayed with the blurring effect. During synthesis, the image area enclosed by the subject's contour may first be extracted from the target object image; the extracted region is then aligned with the area enclosed by the corresponding subject's contour in the target background image; finally, the extracted region is overlaid onto the aligned area to obtain the target blurred image.
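The extract, align, and overlay steps above can be sketched as follows. This is a minimal, hypothetical illustration (names like `synthesize` are not from the patent); it assumes the object and background images are already aligned and share a contour mask.

```python
def synthesize(target_object, target_background, mask):
    """Overlay the masked subject pixels onto the target background.

    All three arguments are same-sized 2-D lists; mask is 1 inside the
    subject's contour (taken from the target object image) and 0 elsewhere.
    """
    return [[opx if m else bpx
             for opx, bpx, m in zip(orow, brow, mrow)]
            for orow, brow, mrow in zip(target_object, target_background, mask)]

# Subject pixels (on the diagonal) pasted over a uniformly blurred background of 9s.
result = synthesize([[1, 2], [3, 4]], [[9, 9], [9, 9]], [[1, 0], [0, 1]])
# result == [[1, 9], [9, 4]]
```

In practice the alignment step would register the two captures first; the sketch skips that to focus on the overlay itself.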
In some optional implementations of this embodiment, the execution subject may receive the user's input on at least one of the N object images and at least one of the N background images. In response to that input, the at least one object image may be determined as the target object image and the at least one background image as the target background image.
As an example, one shooting scene contains a woman in the near view, a child in the middle view, and a tree in the far view. After the camera captures the three groups of images of these three subjects, the following can be obtained: object images of the near-view woman, the far-view tree, and the middle-view child, and background images of the same three subjects. If the user wants an image in which the near-view woman and the middle-view child are shown sharply while the remaining areas are blurred, the user can, as shown in fig. 4, click two photos: the object image of the near-view woman (reference numeral 401) and the object image of the middle-view child (402). The border of each selected photo darkens to indicate selection.
Next, a background picture with the desired blurring may be selected. The degree of background blurring varies with the focus position: by optical principle, the closer the subject in focus is to the lens, the higher the degree of background blurring. Therefore, if the user wants the far view heavily blurred, the background image of the near-view woman can be selected; if the user wants little or no background blurring, the background image of the far-view tree can be selected. Here the user wants a moderate degree of background blurring and may select the background image of the middle-view child (reference numeral 403). The execution subject can then synthesize the images selected by the user into a target blurred image with the desired effect: the near-view woman and the middle-view child are displayed sharply, while the other areas are blurred to a medium degree. The user can thus freely choose which subjects remain sharp and which areas are blurred; the selected subjects need not lie in the same focal plane, which enriches the image blurring effects.
As another example, in the same shooting scene, if the user wants an image in which only the near-view woman is shown sharply and the remaining area is blurred, the user may click the object image of the near-view woman and then select a background picture. If the user wants the far view heavily blurred, the background image of the near-view woman may be selected; for little or no background blurring, the background image of the far-view tree; for moderate blurring, the background image of the middle-view child. Furthermore, the user may select two or more background images. For example, if the user selects both the background image of the near-view woman and that of the middle-view child, the resulting blurring degree may lie between the high and medium degrees, i.e., at a medium-high degree. The execution subject can then synthesize the selected images into a target blurred image with the desired effect: the near-view woman is displayed sharply, while the other areas are blurred to a medium-high degree. The user can thus freely choose which subjects remain sharp and which areas are blurred; the selected subjects need not lie in the same focal plane, which enriches the image blurring effects.
In some optional implementations of this embodiment, receiving the first input of the user may further include receiving the user's input on a target background image among the N background images, where that input sets the target blurring degree of the target background image. The input may include, but is not limited to, a click, double-click, long-press, swipe, or custom gesture input. From the first input, the execution subject can thus also determine the target blurring degree. During image synthesis, the target background image may be blurred according to the target blurring degree, and the target object image may then be synthesized with the blurred target background image to generate the target blurred image.
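When the user selects two or more background images, the examples above state that the resulting blurring degree falls between those of the selected images. The patent does not specify the operation; a pixel-wise average is one plausible sketch (the name `combine_backgrounds` is an assumption):

```python
def combine_backgrounds(backgrounds):
    """Average several same-sized 2-D background images pixel by pixel."""
    n = len(backgrounds)
    height = len(backgrounds[0])
    width = len(backgrounds[0][0])
    return [[sum(img[y][x] for img in backgrounds) / n
             for x in range(width)]
            for y in range(height)]

# Averaging a strongly blurred and a weakly blurred 1x2 background.
mixed = combine_backgrounds([[[0, 2]], [[4, 6]]])
# mixed == [[2.0, 4.0]]
```

Each output pixel lies between the corresponding pixels of the inputs, matching the "between high and medium" behavior described above.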
As an example, suppose the user has photographed the near-view woman as a photographic subject and selects the object image in that set as the target object image, then photographs the background area again and selects the sharp background image as the target background image. At this point, as shown in fig. 5, the target object image and the target background image (reference numeral 502) may be displayed in the shooting result display interface (501). After the user long-presses the target background image, a blurring-degree customization interface (503) may be displayed, containing a "background clear, no blurring" control (504) and a "blurring strength" control (505). The "blurring strength" control may display a slider for setting the degree of blurring. If no background blurring is needed, the user can click the "background clear, no blurring" control; if background blurring is required, the user can drag the slider in the "blurring strength" control.
If the user selects the "background clear, no blurring" control, the blurring-degree configuration indicates a degree of zero; blurring with degree zero applied to the sharp target background image simply returns the original sharp image. If the user drags the slider in the "blurring strength" control to the center of the slide bar, the configuration indicates a degree of 50%, and the sharp target background image is blurred at 50% strength to obtain the blurred target background image. Letting the user interactively control the blurring degree of the target background image means both its blurring and its sharpness can be tuned, further enriching the blurring effects and improving the flexibility of blurred-background shooting.
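The slider behavior just described (0% leaves the background untouched, 50% applies medium blurring) can be sketched with a simple box blur standing in for whatever blur the device actually applies. All names and the strength-to-radius mapping are assumptions:

```python
def blur_background(image, strength):
    """Blur a 2-D grayscale image with a box filter; strength is 0-100."""
    radius = round(strength / 25)         # 0% -> radius 0, i.e. no blurring
    if radius == 0:
        return [row[:] for row in image]  # "background clear, no blurring"
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Average all pixels in the (clipped) box around (x, y).
            vals = [image[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

sharp = [[0, 100], [100, 0]]
assert blur_background(sharp, 0) == sharp  # degree zero returns the original image
```

A production implementation would use a Gaussian or lens-shaped kernel; the box blur keeps the sketch short while preserving the strength-zero identity the passage requires.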
The user's input on the target background image among the N background images may also be implemented by sliding the image up and down or left and right, with the blurring degree determined by the sliding direction and/or distance, among other gestures; it is not limited to settings on the controls described above.
In the method provided by the above embodiment of the present application, the camera is controlled to capture N groups of images of N photographic subjects, each group comprising an object image and a background image; a first input from the user is then received, and in response to it a target object image and a target background image are synthesized to generate a target blurred image. On the one hand, without increasing the number of cameras, a single camera is controlled to capture multiple images at different focus positions, and blurred images are generated through the user's interactive operations, reducing the cost of blurred-background shooting. On the other hand, the user can flexibly select the target subject to be displayed sharply and the desired background blurring effect; the selected subject is not limited by a focal plane, a target blurred image matching the desired blurring effect can be obtained, and the blurring effects available for an image are enriched.
It should be noted that, in the shooting method provided in the embodiments of the present application, the execution subject may be a shooting device, or a control module in the shooting device for executing the shooting method. In the embodiments of the present application, the shooting method is described by taking a shooting device executing the method as an example.
As shown in fig. 6, the above-mentioned photographing apparatus 600 of the present embodiment includes: a control unit 601 for controlling the camera to capture N sets of images of N photographic subjects, each set of images including a subject image and a background image; a first receiving unit 602, configured to receive a first input of a user; a generating unit 603 configured to generate a target blurred image by image-synthesizing the target object image and the target background image in response to the first input; wherein the target object image and the target background image are determined according to the first input, and N is a positive integer.
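The division of apparatus 600 into units 601–603 can be illustrated structurally as below; the class, method names, and the callable stand-ins for the camera and the synthesis step are assumptions made for this sketch, not the patent's implementation:

```python
class PhotographingApparatus:
    """Structural sketch of apparatus 600; names and signatures are assumed."""

    def __init__(self, capture_fn, composite_fn):
        self._capture = capture_fn      # camera driven by control unit 601
        self._composite = composite_fn  # image synthesis used by generating unit 603
        self.groups = []                # N groups of (object image, background image)

    def control_capture(self, subjects):
        # control unit 601: one (object image, background image) group per subject
        self.groups = [self._capture(s) for s in subjects]

    def on_first_input(self, object_idx, background_idx):
        # first receiving unit 602 + generating unit 603: the first input picks
        # the target object image and target background image to synthesize
        object_img, _ = self.groups[object_idx]
        _, background_img = self.groups[background_idx]
        return self._composite(object_img, background_img)
```

The first input thus selects one object image and one background image from the N groups, and the generating unit produces the target blurred image from that pair.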
In some optional implementations of this embodiment, the apparatus further includes: a second receiving unit configured to receive a second input of the user on N preview areas of the shooting preview interface; and a first determining unit configured to determine, in response to the second input, the object in each of the N preview areas as a photographic subject. The user can thus flexibly select photographic subjects in the shooting preview interface as needed.
In some optional implementations of this embodiment, the apparatus further includes: a display unit configured to display the N object images in a first area and the N background images in a second area. Displaying the object images and background images in different areas helps the user tell them apart and makes it convenient to select the target object image and the target background image from the candidates.
In some optional implementations of this embodiment, the first receiving unit 602 is further configured to receive the user's input on at least one object image of the N object images and at least one background image of the N background images; the apparatus further includes a second determining unit configured to determine the at least one object image as the target object image and the at least one background image as the target background image. The user can thus freely select the photographed subjects to keep sharp and the areas to blur; the selected subjects need not lie in the same focusing plane, which enriches the image blurring effects.
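Because the user may select more than one object image, the synthesis generalizes to merging every selected object onto the chosen background. The sketch below assumes each selected object carries its own outline mask and that later selections overdraw earlier ones where outlines overlap:

```python
def composite_selection(selected_objects, target_background):
    """Merge every user-selected object image onto the chosen background.

    selected_objects: list of (image, mask) pairs — the "at least one object
    image" of the text; mask is True inside each object's outline.
    """
    out = [row[:] for row in target_background]  # leave the original untouched
    for img, mask in selected_objects:
        for y, (irow, mrow) in enumerate(zip(img, mask)):
            for x, (px, m) in enumerate(zip(irow, mrow)):
                if m:
                    out[y][x] = px
    return out
```

Each selected subject stays sharp regardless of its focusing plane, since its pixels come from the image that was focused on it.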
In some optional implementations of this embodiment, the first receiving unit 602 is further configured to receive the user's input on a target background image among the N background images; the apparatus further includes a third determining unit configured to determine a target blurring degree of the target background image according to the first input. The generating unit is further configured to blur the target background image according to the target blurring degree, and to synthesize the target object image with the blurred target background image to generate the target blurred image. Because the blurring degree of the target background image is controlled interactively by the user, the background can either be blurred to a chosen degree or kept sharp, which enriches the blurring effects and makes blurred shooting more flexible.
In some optional implementation manners of this embodiment, the control unit 601 is further configured to focus and shoot each shooting object to obtain a first intermediate image; matting an image of a first region in the first intermediate image to obtain an object image, wherein the first region comprises a region surrounded by an object outline of the shooting object; and obtaining a background image of each shooting object based on the first intermediate image or the second intermediate image. The object image and the background image acquired by the method can clearly display different shot objects and different degrees of blurring effects, and provide convenience for a user to distinguish and select the shot objects required to be clearly displayed and the required blurring effects.
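The matting of the first region — the area enclosed by the subject's outline in the first intermediate image — can be sketched as follows; using None for transparency is an assumption of this sketch (a real implementation would produce an alpha channel):

```python
def matte_first_region(first_intermediate, outline_mask):
    """Cut out the first region (inside the subject's outline) as the object image.

    outline_mask is True inside the region surrounded by the object outline;
    pixels outside the outline become None, i.e. transparent.
    """
    return [[px if inside else None
             for px, inside in zip(prow, mrow)]
            for prow, mrow in zip(first_intermediate, outline_mask)]
```

Because the first intermediate image was captured with focus on that subject, the matted object image is sharp.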
In some optional implementations of this embodiment, the control unit 601 is further configured to matte an image of a second region in the first intermediate image to obtain a background image, where the second region is the image region of the first intermediate image other than the first region. Background images with different blurring degrees can thus be obtained for different photographic subjects, making it convenient for the user to select a background image with the desired blurring degree. Alternatively, the control unit focuses on and shoots the second region to obtain a second intermediate image, and mattes the image of the second region in the second intermediate image to obtain a background image, where the second region is the image region of the first intermediate image other than the first region. On this basis, the blurring degree of the background image can be further adjusted, so that the user can obtain both a clear background image and background images with different blurring degrees, further enriching the blurring effects.
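The second region is simply the complement of the first region, so its matte is the inverse of the object matte; the sketch below makes the two source options explicit (function name and None-as-transparency are assumptions):

```python
def matte_second_region(intermediate, outline_mask):
    """Cut out the second region — everything outside the first region.

    Passing the first intermediate image (focused on the subject) yields a
    naturally defocused background; passing the second intermediate image
    (refocused on the background area) yields a clear background instead.
    """
    return [[None if inside else px
             for px, inside in zip(prow, mrow)]
            for prow, mrow in zip(intermediate, outline_mask)]
```

Choosing which intermediate image to matte is what gives the user both clear and blurred background candidates for the same scene.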
The shooting device in the embodiments of the present application may be a standalone device, or a component, integrated circuit, or chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, tablet computer, notebook computer, palmtop computer, vehicle-mounted electronic device, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA); the non-mobile electronic device may be a server, network attached storage (NAS), personal computer (PC), television (TV), teller machine, or self-service machine. The embodiments of the present application are not specifically limited in this respect.
The photographing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The shooting device provided in the embodiment of the present application can implement each process implemented by the shooting device in the method embodiments of fig. 1 and fig. 5, and is not described here again to avoid repetition.
The device provided by the above embodiment of the present application receives a first input from the user; in response to the first input, determines at least one photographic subject in the shooting preview interface and focuses on and shoots each photographic subject in turn to obtain a preview image of each subject; receives a second input from the user; and, in response to the second input, obtains a blurred background image and synthesizes the preview images with the blurred background image to obtain the blurred image. On the one hand, without increasing the number of cameras, multiple images are captured by continuous refocusing, and blurred shooting is achieved in combination with the user's interactive operation, which reduces the cost of blurred shooting. On the other hand, the user can flexibly select both the target object to be displayed clearly and the background blurring effect; the selected target object is not limited to one focusing plane, a target blurred image matching the blurring effect the user wants can be obtained, and the blurring effects of the image are enriched.
Optionally, an electronic device is further provided in the embodiments of the present application, including a processor 710, a memory 709, and a program or instruction stored in the memory 709 and executable on the processor 710; when executed by the processor 710, the program or instruction implements each process of the foregoing shooting method embodiments and achieves the same technical effect, which is not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components; the power supply may be logically coupled to the processor 710 via a power management system, so that charging, discharging, and power-consumption management are performed through the power management system. The structure shown in fig. 7 does not constitute a limitation of the electronic device: the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which is not described further here.
The processor 710 is configured to control the camera to capture N groups of images of N objects, where each group of images includes an object image and a background image; a user input unit 707 for receiving a first input by a user; processor 710 is further configured to image combine the target object image and the target background image in response to the first input, generating a target blurred image; wherein the target object image and target background image are determined from the first input, N being a positive integer.
In the embodiment of the present application, the processor 710 controls the camera to capture N sets of images of N subjects, each set including a subject image and a background image, and the user input unit 707 receives a first input from the user, so that the processor 710, in response to the first input, synthesizes the target subject image and the target background image to generate a target blurred image. On the one hand, without increasing the number of cameras, a single camera is controlled to refocus continuously and capture multiple images, and blurred images are generated in combination with the user's interactive operation, which reduces the cost of blurred shooting. On the other hand, the user can flexibly select both the target object to be displayed clearly and the background blurring effect; the selected target object is not limited to one focusing plane, a target blurred image matching the blurring effect the user wants can be obtained, and the blurring effects of the image are enriched.
Optionally, the user input unit 707 is further configured to receive a second input of the user to the N preview areas on the shooting preview interface; the processor 710 is further configured to determine the object in each of the N preview areas as the photographic object in response to the second input. Therefore, the user can flexibly select the shooting object in the shooting preview interface according to the requirement.
Optionally, the display unit 706 is configured to display the N object images in a first area and the N background images in a second area. Displaying the object images and background images in different areas helps the user tell them apart and makes it convenient to select the target object image and the target background image from the candidates.
Optionally, the user input unit 707 is further configured to receive user input on at least one object image of the N object images and at least one background image of the N background images; the processor 710 is further configured to determine the at least one object image as a target object image and the at least one background image as a target background image. Therefore, the user can freely select the shot object which is expected to keep clear and the area which is expected to be blurred, the selected shot object does not need to be in the same focusing plane, and the image blurring effect is enriched.
Optionally, the user input unit 707 is further configured to receive the user's input on a target background image among the N background images; the processor 710 is further configured to determine a target blurring degree of the target background image according to the first input, blur the target background image according to the target blurring degree, and synthesize the target object image with the blurred target background image to generate the target blurred image. Because the blurring degree of the target background image is controlled interactively by the user, the background can either be blurred to a chosen degree or kept sharp, which enriches the blurring effects and makes blurred shooting more flexible.
Optionally, the processor 710 is further configured to focus and shoot each shot object to obtain a first intermediate image; matting an image of a first region in the first intermediate image to obtain an object image, wherein the first region comprises a region surrounded by an object outline of the shooting object; and obtaining a background image of each shooting object based on the first intermediate image or the second intermediate image. The object image and the background image acquired by the method can clearly display different shot objects and different degrees of blurring effects, and provide convenience for a user to distinguish and select the shot objects required to be clearly displayed and the required blurring effects.
Optionally, the processor 710 is further configured to matte an image of a second region in the first intermediate image to obtain a background image, where the second region is the image region of the first intermediate image other than the first region. Background images with different blurring degrees can thus be obtained for different photographic subjects, making it convenient for the user to select a background image with the desired blurring degree. Alternatively, the processor 710 focuses on and shoots the second region to obtain a second intermediate image, and mattes the image of the second region in the second intermediate image to obtain a background image, where the second region is the image region of the first intermediate image other than the first region. On this basis, the blurring degree of the background image can be further adjusted, so that the user can obtain both a clear background image and background images with different blurring degrees, further enriching the blurring effects.
It should be understood that in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics Processing Unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data, including but not limited to applications and operating systems. Processor 710 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A photographing method, characterized in that the method comprises:
controlling a camera to shoot N groups of images of N shot objects, wherein each group of images comprises an object image and a background image;
receiving a first input of a user;
responding to the first input, carrying out image synthesis on a target object image and a target background image, and generating a target blurring image;
wherein the target object image and target background image are determined from the first input, N being a positive integer.
2. The method according to claim 1, wherein before the controlling the camera to capture N sets of images of N subjects, further comprising:
receiving second input of the user to the N preview areas on the shooting preview interface;
determining an object in each of the N preview areas as a photographic object in response to the second input.
3. The method according to claim 1, wherein after the controlling the camera to capture N groups of images of N subjects, further comprising:
in the first area, N object images are displayed, and in the second area, N background images are displayed.
4. The method of claim 3, wherein receiving a first input from a user comprises:
receiving user input of at least one object image of the N object images and at least one background image of the N background images;
before the image synthesis of the target object image and the target background image to generate the target blurred image, the method further includes:
and determining the at least one object image as a target object image and determining the at least one background image as a target background image.
5. The method of claim 3, wherein receiving a first input from a user comprises:
receiving input of a user on a target background image in the N background images;
before the image synthesis of the target object image and the target background image to generate the target blurred image, the method further includes:
determining a target blurring degree of the target background image according to the first input;
the image synthesis of the target object image and the target background image to generate the target blurred image includes:
performing blurring processing on the target background image according to the target blurring degree;
and carrying out image synthesis on the target object image and the target background image subjected to blurring processing to generate a target blurring image.
6. The method according to claim 1, wherein the controlling the camera to take N sets of images of N photographic subjects comprises:
focusing and shooting each shot object to obtain a first intermediate image;
matting an image of a first region in the first intermediate image to obtain an object image, wherein the first region comprises a region surrounded by an object outline of the shooting object;
and obtaining a background image of each shooting object based on the first intermediate image or the second intermediate image.
7. The method according to claim 6, wherein the obtaining a background image of each photographic subject based on the first intermediate image or the second intermediate image comprises:
matting an image of a second region in the first intermediate image to obtain a background image, wherein the second region is an image region except the first region in the first intermediate image;
or focusing and shooting a second area to obtain a second intermediate image, and matting the image of the second area in the second intermediate image to obtain a background image, wherein the second area is an image area except the first area in the first intermediate image.
8. A camera, characterized in that the camera comprises:
the control unit is used for controlling the camera to shoot N groups of images of N shot objects, wherein each group of images comprises an object image and a background image;
a first receiving unit for receiving a first input of a user;
a generating unit, configured to perform image synthesis on a target object image and a target background image in response to the first input, and generate a target blurred image;
wherein the target object image and target background image are determined from the first input, N being a positive integer.
9. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the photographing method according to any one of claims 1-7.
10. A readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the photographing method according to any one of claims 1-7.
CN202111449935.4A 2021-11-30 2021-11-30 Shooting method, shooting device, electronic equipment and readable storage medium Active CN114025100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111449935.4A CN114025100B (en) 2021-11-30 2021-11-30 Shooting method, shooting device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN114025100A true CN114025100A (en) 2022-02-08
CN114025100B CN114025100B (en) 2024-04-05

Family

ID=80067410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111449935.4A Active CN114025100B (en) 2021-11-30 2021-11-30 Shooting method, shooting device, electronic equipment and readable storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114710624A (en) * 2022-04-24 2022-07-05 维沃移动通信有限公司 Photographing method and photographing apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016165488A1 (en) * 2015-09-18 2016-10-20 中兴通讯股份有限公司 Photo processing method and device
CN107613203A (en) * 2017-09-22 2018-01-19 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN110139033A (en) * 2019-05-13 2019-08-16 Oppo广东移动通信有限公司 Camera control method and Related product
CN111246106A (en) * 2020-01-22 2020-06-05 维沃移动通信有限公司 Image processing method, electronic device, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN112954210B (en) Photographing method and device, electronic equipment and medium
CN112714253B (en) Video recording method and device, electronic equipment and readable storage medium
WO2022161260A1 (en) Focusing method and apparatus, electronic device, and medium
CN113766129A (en) Video recording method, video recording device, electronic equipment and medium
CN112714257B (en) Display control method, display control device, electronic device, and medium
CN112532881B (en) Image processing method and device and electronic equipment
CN112637500B (en) Image processing method and device
CN112492212A (en) Photographing method and device, electronic equipment and storage medium
CN111866392A (en) Shooting prompting method and device, storage medium and electronic equipment
CN112492215B (en) Shooting control method and device and electronic equipment
CN114390201A (en) Focusing method and device thereof
CN112437232A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113194256B (en) Shooting method, shooting device, electronic equipment and storage medium
CN114390197A (en) Shooting method and device, electronic equipment and readable storage medium
CN112449110B (en) Image processing method and device and electronic equipment
CN113866782A (en) Image processing method and device and electronic equipment
CN114025100B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN112330728A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN113794831B (en) Video shooting method, device, electronic equipment and medium
CN112653841B (en) Shooting method and device and electronic equipment
CN112383708B (en) Shooting method and device, electronic equipment and readable storage medium
CN113989387A (en) Camera shooting parameter adjusting method and device and electronic equipment
CN114339051A (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN113873168A (en) Shooting method, shooting device, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant