CN111263071B - Shooting method and electronic equipment
- Publication number: CN111263071B (application CN202010119183.4A)
- Authority: CN (China)
- Prior art keywords: image, person, person image, determining, preset
- Legal status: Active
Classifications
- H04N23/80 — Camera processing pipelines; Components thereof
- H04N23/611 — Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
Abstract
The invention provides a shooting method and an electronic device. The shooting method includes: acquiring a first image in real time through a camera, and determining a first person image and a second person image in the first image; deleting the second person image from the first image, performing background filling on the area corresponding to the second person image to obtain a target image, and displaying the target image in real time. The technical solution provided by the invention solves the problem that existing shooting modes increase the post-processing difficulty of captured pictures.
Description
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a shooting method and an electronic device.
Background
With the increasing photographing capabilities of electronic devices, people have ever higher expectations for the pictures they take. When shooting, users usually want a clean background that highlights the main person in the captured image. In actual shooting, however, and particularly in public spaces (e.g., scenic spots), passers-by frequently appear in the shooting preview background. At present, the captured picture is generally post-processed to highlight its main person; but post-processing places high demands on the hardware of the electronic device and on the skill of the person doing the editing, which increases the difficulty of handling the captured picture.
Therefore, existing shooting modes increase the post-processing difficulty of captured pictures.
Disclosure of Invention
The embodiment of the invention provides a shooting method and electronic equipment, and aims to solve the problem that the post-processing difficulty of a shot picture is increased in the existing shooting mode.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a shooting method, including:
acquiring a first image in real time through a camera, and determining a first person image and a second person image in the first image;
deleting the second person image from the first image, and performing background filling on an area corresponding to the second person image to obtain a target image;
and displaying the target image in real time.
In a second aspect, an embodiment of the present invention further provides an electronic device, including:
the determining module is used for acquiring a first image in real time through a camera and determining a first person image and a second person image in the first image;
the processing module is used for deleting the second person image from the first image and carrying out background filling on an area corresponding to the second person image to obtain a target image;
and the display module is used for displaying the target image in real time.
In a third aspect, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the shooting method according to the first aspect.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the shooting method as described in the first aspect.
According to the technical solution provided by the embodiment of the present invention, the electronic device acquires a first image in real time through a camera and determines a first person image and a second person image in the first image; it deletes the second person image from the first image, performs background filling on the area corresponding to the second person image to obtain a target image, and displays the target image in real time. In this way, before the captured picture is formed, the electronic device automatically processes the person images to obtain the target image, so the captured picture does not need post-processing; this reduces the processing difficulty of the captured picture, simplifies the shooting processing flow, and improves the user experience of the electronic device.
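For orientation only, the following is a minimal Python/OpenCV sketch of this preview flow. The three callables (detect_persons, classify_persons, fill_background) are illustrative placeholders supplied by the caller; they stand in for the detection, classification, and background-filling steps of this disclosure and are not defined by it.

```python
# Minimal sketch of the claimed preview flow (the three callables are
# illustrative assumptions, not part of the patent text).
import cv2

def preview_loop(detect_persons, classify_persons, fill_background, camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, first_image = cap.read()          # acquire the first image in real time
            if not ok:
                break
            persons = detect_persons(first_image)  # all person images in the frame
            first, second = classify_persons(first_image, persons)
            target_image = fill_background(first_image, second)  # delete + background fill
            cv2.imshow("preview", target_image)    # display the target image in real time
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```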
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart of a photographing method according to an embodiment of the present invention;
FIG. 2 is a schematic view of a scene of a display interface of an electronic device to which the photographing method provided in FIG. 1 is applied;
FIG. 3 is a schematic view of another scene of a display interface of an electronic device to which the photographing method provided in FIG. 1 is applied;
FIG. 4 is a block diagram of an electronic device according to an embodiment of the present invention;
fig. 5 is a block diagram of another electronic device provided in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention provides a shooting method which is applied to electronic equipment with a camera, such as a mobile phone, a tablet personal computer, wearable equipment and the like.
Referring to fig. 1, fig. 1 is a flowchart of a shooting method according to an embodiment of the present invention. As shown in fig. 1, the shooting method includes the following steps:
Step 101: acquiring a first image in real time through a camera, and determining a first person image and a second person image in the first image.
It should be noted that the first image is an image acquired by the camera in real time after the electronic device starts the shooting application program and before the shooting operation is performed; it is not a captured picture. Because the camera acquires the first image in real time, the first image does not refer to one specific image or one specific frame; it may be any one of the images or frames continuously acquired by the camera. Accordingly, if the first image contains a person image and the corresponding person main body moves, the position area of that person image changes across successive first images.
In this step, with the shooting application program started, the electronic device acquires the first image in real time through the camera and determines the first person image and the second person image in the first image. The first person image may correspond to one or more person main bodies; likewise, the second person image may correspond to one or more person main bodies.
For example, if the first image contains only two person images, one of them is the first person image and the other is the second person image. As another example, if the first image contains a plurality of person images, one specific person image is the first person image and the remaining person images are second person images; as shown in fig. 2, the person image A in the first image is determined to be the first person image, the remaining person images B are second person images, and the number of second person images is 5.
Optionally, the step 101 may include:
the method comprises the steps of acquiring a first image in real time through a camera, and determining a first person image and a second person image in the first image if target operation is received.
That is, the electronic device will obtain the first person image and the second person image in the first image only when receiving the target operation; if the electronic device does not receive the target operation when the camera is started, the operation of determining the first person image and the second person image in the first image cannot be executed, further the person images in the first image cannot be distinguished, and the subsequent steps cannot be executed, so that the electronic device can be in a common shooting mode.
Illustratively, the target operation may be in a variety of operational forms. For example, the target operation may refer to acting on a specific sliding track on the display screen, or the target operation may refer to a user triggering a specific physical key on the electronic device, or the target operation may refer to a user triggering a specific virtual key on the shooting interface (i.e., the interface displaying the first image), and so on.
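As a rough illustration of this mode switch, the sketch below toggles a person-elimination flag when any such target operation arrives; the class name and flag are assumptions for illustration only, since the disclosure leaves the concrete operation form open.

```python
# Illustrative sketch only: on_target_operation() stands in for receiving any
# of the target operations above (sliding track, physical key, virtual key).
class ShootingMode:
    def __init__(self):
        self.person_elimination = False   # normal shooting mode by default

    def on_target_operation(self):
        # toggle between the normal mode and the person-elimination mode
        self.person_elimination = not self.person_elimination
        return self.person_elimination
```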
In the embodiment of the present invention, after the electronic device acquires the first image in real time through the camera, if the first image contains person images, they need to be distinguished into the first person image and the second person image. Specifically, the determining of the first person image and the second person image in the first image includes:
acquiring the distance between the person main body corresponding to each person image in the first image and the camera, determining the person image corresponding to the person main body with the distance less than or equal to a preset distance as a first person image, and determining the person image corresponding to the person main body with the distance greater than the preset distance as a second person image;
or,
acquiring the image size of each person image in the first image, determining the person image with the image size larger than or equal to a preset size as a first person image, and determining the person image with the image size smaller than the preset size as a second person image;
or,
acquiring face information of each person image in the first image, determining the person image of which the face information accords with preset face information as a first person image, and determining the person image of which the face information does not accord with the preset face information as a second person image;
or,
and acquiring the moving speed of the person main body corresponding to each person image in the first image, determining the person image corresponding to the person main body with the moving speed less than or equal to a preset speed as a first person image, and determining the person image corresponding to the person main body with the moving speed greater than the preset speed as a second person image.
Several embodiments provided above will be specifically described below.
In the first embodiment, the distinction is made based on the distance between the person main body and the camera. After acquiring the first image, the electronic device acquires, for each person image, the distance between the corresponding person main body and the camera. The person image corresponding to the person main body closest to the camera may be determined as the first person image (in this case, there is one first person image) and the remaining person images as second person images; alternatively, person images whose corresponding person main bodies are within the preset distance of the camera may be determined as first person images (in this case, there may be one or more first person images), and person images whose distance is greater than the preset distance as second person images. The electronic device may first locate the person main bodies through face recognition, human body recognition, multi-point focusing, laser focusing, and the like, and then detect the distance between each person main body and the camera through infrared ranging, laser ranging, and the like.
In the second embodiment, the first person image and the second person image are distinguished by the image size of each person image. When displaying the first image, the electronic device obtains the image size of each person image. The person image with the largest image size may be determined as the first person image (in this case, there is one first person image) and the remaining person images as second person images; as shown in fig. 2, the person image A with the largest image size is determined as the first person image and the remaining person images B as second person images. Alternatively, person images whose image size is greater than or equal to a preset size are determined as first person images (in this case, there may be one or more first person images), and person images whose image size is smaller than the preset size are determined as second person images.
In the third embodiment, the first person image and the second person image are distinguished by face recognition. It is to be understood that the electronic device may be a device in which face information (i.e. preset face information) of one or more users is stored in advance, and in the case where the electronic device displays the first image, the first person image and the second person image are distinguished by identifying the face information of the person image in the first image. That is, when the face information of the person image matches the preset face information, such person image is determined as the first person image, and the person image whose face information does not match the preset face information is determined as the second person image.
In the fourth embodiment, the first person image and the second person image are distinguished by acquiring the moving speed of the person main body corresponding to each person image in the first image. For example, a person image in which the moving speed is the smallest or which is not moving may be determined as the first person image, and the rest may be determined as the second person image; or, a person image with a moving speed less than or equal to a preset speed is determined as the first person image, and a person image with a moving speed greater than the preset speed is determined as the second person image, where the preset speed may be a preset stored moving speed, for example, the preset speed may be 0, that is, a person image corresponding to a still person main body is determined as the first person image.
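The four criteria above can be summarized in one small classification routine. The sketch below is illustrative only: the Person fields (distance_m, area_px, face_match, speed_px_s) and the criterion names are assumptions introduced for the example, not terms from the disclosure.

```python
# Sketch of the four ways of splitting person images into "first" and "second".
from dataclasses import dataclass

@dataclass
class Person:
    distance_m: float    # distance of the person main body from the camera
    area_px: int         # image size (pixel area) of the person image
    face_match: bool     # whether the face matches preset face information
    speed_px_s: float    # moving speed of the person main body

def classify(persons, criterion, preset):
    """Return (first_person_images, second_person_images) for one criterion."""
    first, second = [], []
    for p in persons:
        if criterion == "distance":
            is_first = p.distance_m <= preset     # at or within the preset distance
        elif criterion == "size":
            is_first = p.area_px >= preset        # at least the preset size
        elif criterion == "face":
            is_first = p.face_match               # matches preset face information
        elif criterion == "speed":
            is_first = p.speed_px_s <= preset     # at or below the preset speed
        else:
            raise ValueError(f"unknown criterion: {criterion}")
        (first if is_first else second).append(p)
    return first, second
```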
Step 102: deleting the second person image from the first image, and performing background filling on the area corresponding to the second person image to obtain a target image.
That is, when the first person image and the second person image in the first image are determined, the second person image is deleted and the first person image is retained. When the first image contains person images, they are divided into the first person image and the second person image, so after the second person image is deleted, the person image remaining in the first image is the first person image. As shown in fig. 3, the second person image has been deleted from the first image, and only the first person image A is displayed. In this way, the electronic device automatically processes the person images in the first image before the captured picture is formed, so the captured picture no longer needs post-processing; this reduces the processing difficulty of the captured picture, simplifies the shooting processing flow, and improves the user experience of the electronic device.
In a specific embodiment, when the user is in a public space (e.g., a scenic spot), there may be many person images in the first image. The electronic device can automatically identify the first person image and the second person image in the ways described above. For example, the person subject to be photographed is usually still, while passers-by are usually moving at a relatively fast speed, so the electronic device can automatically determine the moving passers-by as second person images based on moving speed, delete them from the first image, and perform background processing on the first image after the deletion. In this way, only the person subject to be photographed (i.e., the first person image) is retained in the first image, a good shooting effect is obtained quickly, and the user can shoot directly based on this first image without post-processing the captured picture, which improves the user's shooting experience.
Specifically, deleting the second person image and performing background filling on the first image from which the second person image has been deleted to obtain the target image may be implemented in the following different manners.
In one embodiment, the step 102 may include:
acquiring a background image corresponding to the area of the second person image in a preset frame image before the first image;
and deleting the second person image in the first image, and filling the background image into the first image to obtain a target image.
In this embodiment, after acquiring the first image in real time through the camera and determining the first person image and the second person image therein, the electronic device needs to delete the second person image from the first image. It acquires, from one of the frames preceding the current first image (for example, the previous frame), the background image corresponding to the area of the second person image, deletes the second person image from the current first image, and fills the background image into the deleted area, thereby completing the first image from which the second person image has been deleted and obtaining the target image. The target image thus no longer contains the second person image; that area is replaced by the background image, which preserves the integrity of the first image. As a result, the only person image in the target image is the first person image, and post-processing of the captured picture is simplified.
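A minimal sketch of this variant follows, assuming a binary mask of the second person image and an earlier frame that is aligned with the current one; camera-motion compensation is outside the scope of the sketch.

```python
# Fill the deleted second-person region with the same region of an earlier
# frame in which the background was still visible (frames assumed aligned).
import numpy as np

def fill_from_previous_frame(first_image: np.ndarray,
                             previous_frame: np.ndarray,
                             second_mask: np.ndarray) -> np.ndarray:
    target = first_image.copy()
    region = second_mask > 0                 # area corresponding to the second person image
    target[region] = previous_frame[region]  # delete person, paste background
    return target
```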
In another embodiment, the step 102 may include:
acquiring a first background image in a preset area around the second person image, and copying the first background image to obtain a copied image;
performing fusion processing on the copied image and a second background image in the peripheral area of the second person image to obtain a processed image;
and deleting the second person image in the first image, and filling the processed image into the first image to obtain a target image.
In this embodiment, after acquiring the first image in real time through the camera and determining the first person image and the second person image therein, the electronic device acquires a first background image in a preset area around the second person image and copies it to obtain a copied image, and then fuses the copied image with a second background image in the peripheral area of the second person image to obtain a processed image. The first background image and the second background image may be the same image, or the image range of the second background image may be larger than that of the first background image. Fusing the copied image with the second background image yields a processed image that blends naturally with the background around the second person image. The second person image in the first image is then deleted, and the processed image is filled into the area from which it was deleted, thereby completing the first image and obtaining the target image while preserving its integrity. The process of fusing the copied image and the second background image may refer to related image processing technologies and is not described in detail herein.
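As a rough sketch of this copy-and-fuse variant, OpenCV's Telea inpainting is used below as a stand-in for copying the peripheral background and fusing it into the deleted region; the disclosure does not prescribe a particular fusion algorithm, so this is only one plausible realization.

```python
# Fill the deleted region from its surrounding background (stand-in for the
# copy + fusion processing described above).
import cv2
import numpy as np

def fill_from_surrounding(first_image: np.ndarray,
                          second_mask: np.ndarray,
                          radius: int = 5) -> np.ndarray:
    mask = (second_mask > 0).astype(np.uint8) * 255
    # grow the mask slightly so the person's outline is fully covered
    mask = cv2.dilate(mask, np.ones((7, 7), np.uint8), iterations=1)
    # fill the masked area from its peripheral background pixels
    return cv2.inpaint(first_image, mask, radius, cv2.INPAINT_TELEA)
```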
Optionally, in the shooting method provided in the embodiment of the present invention, after the step 101, the method may further include:
and if the character features corresponding to the first character image are not detected in the new first image acquired within the preset time length, displaying the new first image acquired after the preset time length in real time.
It is understood that, in the case that the first image includes the person image, the person body corresponding to the person image may be in a moving state, and due to the movement of the person body, for example, the person body moves out of the view frame of the camera, or due to the shake of the mobile phone of the photographer, the person body moves out of the view frame, that is, the person body corresponding to the first person image disappears within the capturing range of the camera, so the first person image may not be included in the first image.
In the embodiment of the invention, if the character features corresponding to the first character image are not detected in the first image acquired within the preset time length, a new first image acquired after the preset time length is displayed in real time. In this case, the electronic device may not distinguish the person image in the new first image, and the operation of step 102 is not performed, so that the second person image is not deleted, but remains in the new first image. Or after the new first image obtained after the preset time is displayed in real time, the person image in the new first image may be distinguished, and the first person image and the second person image in the new first image are determined, where the specific person image distinguishing method may refer to the specific description in the above embodiment, and is not described herein again. The first image acquired within the preset time length refers to any one or one of a plurality of or multiple frames of new first images continuously acquired by the camera of the electronic equipment within the preset time length.
It should be noted that, if the person feature corresponding to the first person image is detected in the first image acquired within the preset time period, the person image corresponding to the person feature is maintained as the first person image. For example, if the first person image only disappears for a short period of time within a preset time period, for example, the person main body corresponding to the first person image moves out of the view finder of the camera and returns, or the person main body corresponding to the first person image is blocked by a passerby passing by, the first person image in the first image is kept unchanged, the determined second person image is still deleted, and only the first person image is retained in the target image.
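A small sketch of this preset-duration fallback follows; the detector callable and the three-second default are illustrative assumptions, not values from the disclosure.

```python
# If the first person's features stay undetected for longer than the preset
# duration, stop eliminating persons and show new frames unprocessed.
import time

class FirstPersonWatchdog:
    def __init__(self, detect_first_person, preset_seconds: float = 3.0):
        self.detect_first_person = detect_first_person  # callable(frame) -> bool
        self.preset_seconds = preset_seconds
        self.last_seen = time.monotonic()

    def should_process(self, frame) -> bool:
        if self.detect_first_person(frame):
            self.last_seen = time.monotonic()
            return True   # keep deleting second person images
        # first person missing: keep processing only within the preset duration
        return (time.monotonic() - self.last_seen) < self.preset_seconds
```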
Optionally, in the embodiment of the present invention, the deletion of the second person image can also be cancelled through a specific operation. For example, after the electronic device displays the first image and determines the first person image and the second person image in it, the electronic device enters a person-elimination shooting mode; if a preset sliding track on the display screen, a trigger operation on a preset physical key of the electronic device, or a trigger operation on a preset virtual key is received, the electronic device exits the person-elimination shooting mode and returns to the normal shooting mode, and step 102 is no longer executed, i.e., person images in the first image are not deleted. The shooting mode of the electronic device can thus be switched through a specific operation, which makes shooting more convenient for the user and improves the shooting experience.
Step 103: displaying the target image in real time.
Therefore, the user does not need to perform post processing on the shot image, the processing difficulty of the shot image is reduced, and better shooting experience is brought to the user.
In addition, after the electronic device displays the first image and determines the first person image and the second person image in the first image, the electronic device can lock the first person image. In this embodiment of the present invention, after the step 101, the method may further include:
determining a newly added person image in a new first image acquired in real time as a second person image;
or,
if the person features corresponding to the first person image are detected in a new first image acquired in real time, keeping the person image corresponding to the person features as the first person image in the new first image, and determining other person images as second person images;
deleting the second person image in the new first image, and performing background filling on an area corresponding to the second person image to obtain a new target image;
and displaying the new target image.
It can be understood that, as people move, the person images captured by the camera change, and a new person may enter the capturing range of the camera, so that a new person image appears in the first image. In the embodiment of the present invention, the newly added person images in the new first image are all determined to be second person images and are therefore deleted, while the first person image determined in the original first image is still retained in the new first image.
For example, when the shooting scene is a scenic spot, a road, or another scene with heavy foot traffic, many passers-by may come and go, and new person images may keep appearing in the first image acquired by the camera in real time. After the first person image and the second person images (e.g., passers-by) are determined, any newly added person image in a new first image acquired in real time is determined to be a second person image and is likewise deleted, which ensures that only the first person image is displayed in the new target image. This makes the shooting processing of the electronic device more flexible and gives the user a better shooting experience.
In addition, if the person features corresponding to the first person image of the previous first image are detected in a new first image acquired in real time, the person image corresponding to those features is kept as the first person image in the new first image, and the other person images are determined as second person images. It can be understood that the person main bodies corresponding to the first person image and the second person images may also move, so their positions change across several consecutive first images; the electronic device still treats the originally determined first person image as the first person image and the other person images as second person images. That is, the first person image and the second person images are not re-determined because of the movement of the person main bodies; for example, even if the person main body corresponding to a second person image moves closest to the camera, it remains a second person image. In this way, the electronic device can continuously track the first person image and the second person images; for example, as shown in fig. 2 and fig. 3, borders with different marks can always be displayed around the first person image and the second person images, so that the user can intuitively see where the first person image is, further improving the shooting experience.
It should be noted that the determination of the person images in a new first image acquired in real time may be performed before or after step 103. For example, when the electronic device acquires a new first image, it may need to distinguish and delete the person images in it; in this case, the electronic device may keep displaying the previous target image, and display the new target image once the deletion and other processing of the new first image is complete. Note that the new target image still refers to the preview image displayed before the shooting operation is performed.
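The tracking behaviour described above can be sketched as locking the originally chosen subject across frames, with every non-matching or newly appearing person image defaulting to a second person image. The match_fn callable (feature matching between frames) is an assumed placeholder.

```python
# Keep the locked subject as the first person image across frames; anything
# that does not match it (including newly added persons) is a second person image.
def track_locked_subject(locked_first, persons_in_new_frame, match_fn):
    first, second = [], []
    for p in persons_in_new_frame:
        if locked_first is not None and match_fn(locked_first, p):
            first.append(p)    # same subject as before stays "first"
        else:
            second.append(p)   # newly added or other persons are "second"
    return first, second
```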
According to the technical solution provided by the embodiment of the present invention, the electronic device acquires a first image in real time through a camera and determines a first person image and a second person image in the first image; it deletes the second person image from the first image, performs background filling on the area corresponding to the second person image to obtain a target image, and then displays the target image in real time. In this way, before the captured picture is formed, the electronic device automatically processes the person images to obtain the target image, so the captured picture does not need post-processing; this reduces the processing difficulty of the captured picture, simplifies the shooting processing flow, and improves the user experience of the electronic device.
Referring to fig. 4, fig. 4 is a structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 4, the electronic device 400 includes:
the determining module 401 is configured to obtain a first image in real time through a camera, and determine a first person image and a second person image in the first image;
a processing module 402, configured to delete the second person image from the first image, and perform background filling on an area corresponding to the second person image to obtain a target image;
and a display module 403, configured to display the target image in real time.
Optionally, the determining module 401 is further configured to:
acquiring the distance between the person main body corresponding to each person image in the first image and the camera, determining the person image corresponding to the person main body with the distance less than or equal to a preset distance as a first person image, and determining the person image corresponding to the person main body with the distance greater than the preset distance as a second person image;
or,
acquiring the image size of each person image in the first image, determining the person image with the image size larger than or equal to a preset size as a first person image, and determining the person image with the image size smaller than the preset size as a second person image;
or,
acquiring face information of each person image in the first image, determining the person image of which the face information accords with preset face information as a first person image, and determining the person image of which the face information does not accord with the preset face information as a second person image;
or,
and acquiring the moving speed of the person main body corresponding to each person image in the first image, determining the person image corresponding to the person main body with the moving speed less than or equal to a preset speed as a first person image, and determining the person image corresponding to the person main body with the moving speed greater than the preset speed as a second person image.
Optionally, the processing module 402 is further configured to:
acquiring a background image corresponding to the area of the second person image in a preset frame image before the first image;
and deleting the second person image in the first image, and filling the background image into the first image to obtain a target image.
Optionally, the processing module 402 is further configured to:
acquiring a first background image in a preset area around the second person image, and copying the first background image to obtain a copied image;
performing fusion processing on the copied image and a second background image in the peripheral area of the second person image to obtain a processed image;
and deleting the second person image in the first image, and filling the processed image into the first image to obtain a target image.
Optionally, the determining module 401 is further configured to:
and if the person features corresponding to the first person image are not detected in the first image acquired within the preset time length, displaying a new first image acquired after the preset time length in real time.
Optionally, the determining module 401 is further configured to:
determining a newly added person image in a new first image acquired in real time as a second person image;
or,
if the person features corresponding to the first person image are detected in a new first image acquired in real time, keeping the person image corresponding to the person features as the first person image in the new first image, and determining other person images as second person images;
the processing module is further configured to: deleting the second person image in the new first image, and performing background filling on an area corresponding to the second person image to obtain a new target image;
the display module is further configured to: and displaying the new target image.
It should be noted that the electronic device 400 can implement each process of the shooting method embodiment described in fig. 1, and can achieve the same technical effect, and for avoiding repetition, details are not described here again.
In the embodiment of the present invention, the electronic device 400 acquires a first image in real time through a camera, and determines a first person image and a second person image in the first image; deleting the second person image from the first image, and performing background filling on an area corresponding to the second person image to obtain a target image; and displaying the target image in real time. In this way, the electronic device 400 can automatically process the person image in the target image before the shot image is formed, and the shot image does not need to be post-processed, so that the processing difficulty of the shot image is reduced, the shooting processing flow is simplified, and the user experience is improved.
Referring to fig. 5, fig. 5 is a structural diagram of another electronic device for implementing the embodiment of the invention, and the electronic device 500 can implement each process of the embodiment of the shooting method described in fig. 1 and achieve the same technical effect. As shown in fig. 5, the electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 5 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 510 is configured to:
acquiring a first image in real time through a camera, and determining a first person image and a second person image in the first image;
deleting the second person image from the first image, and performing background filling on an area corresponding to the second person image to obtain a target image;
and displaying the target image in real time.
Optionally, processor 510 may be further configured to:
acquiring the distance between the person main body corresponding to each person image in the first image and the camera, determining the person image corresponding to the person main body with the distance less than or equal to a preset distance as a first person image, and determining the person image corresponding to the person main body with the distance greater than the preset distance as a second person image;
or,
acquiring the image size of each person image in the first image, determining the person image with the image size larger than or equal to a preset size as a first person image, and determining the person image with the image size smaller than the preset size as a second person image;
or,
acquiring face information of each person image in the first image, determining the person image of which the face information accords with preset face information as a first person image, and determining the person image of which the face information does not accord with the preset face information as a second person image;
or,
and acquiring the moving speed of the person main body corresponding to each person image in the first image, determining the person image corresponding to the person main body with the moving speed less than or equal to a preset speed as a first person image, and determining the person image corresponding to the person main body with the moving speed greater than the preset speed as a second person image.
Optionally, the processor 510 may be further configured to:
acquiring a background image corresponding to the area of the second person image in a preset frame image before the first image;
and deleting the second person image in the first image, and filling the background image into the first image to obtain a target image.
Optionally, the processor 510 may be further configured to:
acquiring a first background image in a preset area around the second person image, and copying the first background image to obtain a copied image;
performing fusion processing on the copied image and a second background image in the peripheral area of the second person image to obtain a processed image;
and deleting the second person image in the first image, and filling the processed image into the first image to obtain a target image.
Optionally, the processor 510 may be further configured to:
and if the person features corresponding to the first person image are not detected in the first image acquired within the preset time length, displaying a new first image acquired after the preset time length in real time.
Optionally, the processor 510 may be further configured to:
determining a newly added person image in a new first image acquired in real time as a second person image;
or,
if the person features corresponding to the first person image are detected in a new first image acquired in real time, keeping the person image corresponding to the person features as the first person image in the new first image, and determining other person images as second person images;
deleting the second person image in the new first image, and performing background filling on an area corresponding to the second person image to obtain a new target image;
and displaying the new target image.
In the embodiment of the present invention, the electronic device 500 obtains a first image in real time through a camera, and determines a first person image and a second person image in the first image; and deleting the second person image in the first image, filling the background of the area corresponding to the second person image to obtain a target image, and displaying the target image in real time. In this way, the electronic device 500 can automatically process the person image in the target image before the shot image is formed, and the shot image does not need to be post-processed, so that the processing difficulty of the shot image is reduced, the shooting processing flow is simplified, and the user experience is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and forwards it to the processor 510 for processing, and it sends uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The electronic device 500 provides the user with wireless broadband internet access via the network module 502, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the electronic apparatus 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive audio or video signals. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042. The graphics processor 5041 processes image data of still images or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or another computer-readable storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sounds and process them into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 501 and then output.
The electronic device 500 also includes at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 5061 and/or the backlight when the electronic device 500 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as horizontal/vertical screen switching, related games, magnetometer posture calibration) and for vibration-recognition-related functions (such as pedometer and tapping); the sensors 505 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device 500. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061; when the touch panel 5071 detects a touch operation on or near it, the operation is transmitted to the processor 510 to determine the type of touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of touch event. Although in fig. 5 the touch panel 5071 and the display panel 5061 are implemented as two separate components to implement the input and output functions of the electronic device 500, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device 500, which is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic apparatus 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic apparatus 500 or may be used to transmit data between the electronic apparatus 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the electronic device 500, connects various parts of the whole electronic device 500 by using various interfaces and lines, and performs various functions of the electronic device 500 and processes data by running or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the electronic device 500. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The electronic device 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system.
In addition, the electronic device 500 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of the foregoing shooting method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A photographing method, characterized by comprising:
acquiring a first image in real time through a camera, and determining a first person image and a second person image in the first image;
deleting the second person image from the first image, and performing background filling on an area corresponding to the second person image to obtain a target image;
displaying the target image in real time;
after the first image is obtained in real time through the camera and the first person image and the second person image in the first image are determined, the method further comprises the following steps:
determining a newly added person image in a new first image acquired in real time as a second person image;
or,
if the person features corresponding to the first person image are detected in a new first image acquired in real time, keeping the person image corresponding to the person features as the first person image in the new first image, and determining other person images as second person images; deleting the second person image in the new first image, and performing background filling on an area corresponding to the second person image to obtain a new target image;
displaying the new target image;
wherein after the first image is acquired in real time through the camera and the first person image and the second person image in the first image are determined, the method further comprises:
if the person features corresponding to the first person image are not detected in first images acquired within a preset time length, displaying, in real time, a new first image acquired after the preset time length.
2. The method of claim 1, wherein determining the first person image and the second person image in the first image comprises:
acquiring the distance between the camera and the person subject corresponding to each person image in the first image, determining the person image corresponding to a person subject whose distance is less than or equal to a preset distance as a first person image, and determining the person image corresponding to a person subject whose distance is greater than the preset distance as a second person image;
or,
acquiring the image size of each person image in the first image, determining the person image with the image size larger than or equal to a preset size as a first person image, and determining the person image with the image size smaller than the preset size as a second person image;
or,
acquiring face information of each person image in the first image, determining a person image whose face information matches preset face information as a first person image, and determining a person image whose face information does not match the preset face information as a second person image;
or,
acquiring the moving speed of the person subject corresponding to each person image in the first image, determining the person image corresponding to a person subject whose moving speed is less than or equal to a preset speed as a first person image, and determining the person image corresponding to a person subject whose moving speed is greater than the preset speed as a second person image.
3. The method of claim 1, wherein deleting the second person image and performing background filling on the area corresponding to the second person image to obtain the target image comprises:
acquiring a background image corresponding to the area of the second person image from a preset frame image preceding the first image;
deleting the second person image from the first image, and filling the background image into the first image to obtain the target image.
4. The method of claim 1, wherein deleting the second person image and performing background filling on the area corresponding to the second person image to obtain the target image comprises:
acquiring a first background image in a preset area around the second person image, and copying the first background image to obtain a copied image;
performing fusion processing on the copied image and a second background image in the peripheral area of the second person image to obtain a processed image;
deleting the second person image from the first image, and filling the processed image into the first image to obtain the target image.
5. An electronic device, comprising:
the determining module is used for acquiring a first image in real time through a camera and determining a first person image and a second person image in the first image;
the processing module is used for deleting the second person image from the first image and carrying out background filling on an area corresponding to the second person image to obtain a target image;
the display module is used for displaying the target image in real time;
the determining module is further configured to:
determining a newly added person image in a new first image acquired in real time as a second person image;
or,
if person features corresponding to the first person image are detected in a new first image acquired in real time, keeping the person image corresponding to the person features as the first person image in the new first image, and determining other person images as second person images;
the processing module is further configured to: deleting the second person image in the new first image, and performing background filling on an area corresponding to the second person image to obtain a new target image;
the display module is further configured to: displaying the new target image;
the determining module is further configured to:
if the person features corresponding to the first person image are not detected in first images acquired within a preset time length, displaying, in real time, a new first image acquired after the preset time length.
6. The electronic device of claim 5, wherein the determining module is further configured to:
acquiring the distance between the camera and the person subject corresponding to each person image in the first image, determining the person image corresponding to a person subject whose distance is less than or equal to a preset distance as a first person image, and determining the person image corresponding to a person subject whose distance is greater than the preset distance as a second person image;
or,
acquiring the image size of each person image in the first image, determining the person image with the image size larger than or equal to a preset size as a first person image, and determining the person image with the image size smaller than the preset size as a second person image;
or,
acquiring face information of each person image in the first image, determining a person image whose face information matches preset face information as a first person image, and determining a person image whose face information does not match the preset face information as a second person image;
or,
acquiring the moving speed of the person subject corresponding to each person image in the first image, determining the person image corresponding to a person subject whose moving speed is less than or equal to a preset speed as a first person image, and determining the person image corresponding to a person subject whose moving speed is greater than the preset speed as a second person image.
7. The electronic device of claim 5, wherein the processing module is further configured to:
acquiring a background image corresponding to the area of the second person image from a preset frame image preceding the first image;
deleting the second person image from the first image, and filling the background image into the first image to obtain the target image.
8. The electronic device of claim 5, wherein the processing module is further configured to:
acquiring a first background image in a preset area around the second person image, and copying the first background image to obtain a copied image;
performing fusion processing on the copied image and a second background image in the peripheral area of the second person image to obtain a processed image;
deleting the second person image from the first image, and filling the processed image into the first image to obtain the target image.
9. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the photographing method according to any one of claims 1 to 4.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the photographing method according to any one of claims 1 to 4.
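To make the claimed pipeline more concrete, the sketch below (Python with NumPy, chosen only for illustration) shows one simplified way the steps of claims 1 to 4 could fit together: each detected person region is classified as a first person image (kept) or a second person image (removed) using one of the criteria listed in claim 2, and the removed region is filled either from an earlier frame (as in claim 3) or from nearby background pixels, a rough stand-in for the copy-and-fuse filling of claim 4. The PersonRegion class, the helper names, and every threshold are assumptions introduced here for illustration, not taken from the patent; an actual device would rely on person segmentation, depth measurement, and face recognition to produce the inputs.

```python
import numpy as np


class PersonRegion:
    """Hypothetical per-person detection result (illustrative, not from the patent).

    A real device would obtain the mask from a person-segmentation model, the
    distance from depth measurement, and the face identity from face recognition."""

    def __init__(self, mask, distance_m=0.0, face_id=None, speed_px_s=0.0):
        self.mask = mask              # HxW boolean array, True where the person appears
        self.distance_m = distance_m  # estimated distance between the person subject and the camera
        self.face_id = face_id        # recognized identity, or None if no match
        self.speed_px_s = speed_px_s  # apparent moving speed of the person subject


def is_first_person(p, criterion="distance", max_distance=2.0, min_area=5000,
                    known_faces=(), max_speed=50.0):
    """Apply one of the alternative criteria of claim 2 to decide whether a
    region is a first person image (kept). Thresholds are placeholders."""
    if criterion == "distance":
        return p.distance_m <= max_distance
    if criterion == "size":
        return p.mask.sum() >= min_area
    if criterion == "face":
        return p.face_id is not None and p.face_id in known_faces
    if criterion == "speed":
        return p.speed_px_s <= max_speed
    raise ValueError(f"unknown criterion: {criterion}")


def remove_second_persons(frame, persons, prev_frame=None, **criteria):
    """Delete every second person image and fill the hole with background.

    If an earlier frame is available, the hole is filled from it (claim 3 style);
    otherwise the nearest background pixel in the same row is copied in, a crude
    stand-in for the copy-and-fuse filling of claim 4."""
    out = frame.copy()
    for p in persons:
        if is_first_person(p, **criteria):
            continue                                  # first person images are retained
        mask = p.mask
        if prev_frame is not None:
            out[mask] = prev_frame[mask]              # background taken from the earlier frame
        else:
            for y in np.unique(np.where(mask)[0]):    # fill row by row from nearby background
                bg_xs = np.where(~mask[y])[0]
                if bg_xs.size == 0:
                    continue                          # whole row is covered; nothing to copy
                for x in np.where(mask[y])[0]:
                    out[y, x] = frame[y, bg_xs[np.argmin(np.abs(bg_xs - x))]]
    return out


# Usage sketch: classify by distance, so a person subject 6 m away becomes a
# second person image and is erased, with the hole filled from the previous frame.
h, w = 480, 640
frame = np.full((h, w, 3), 128, dtype=np.uint8)
prev = frame.copy()
stranger = PersonRegion(mask=np.zeros((h, w), dtype=bool), distance_m=6.0)
stranger.mask[100:300, 200:260] = True
target = remove_second_persons(frame, [stranger], prev_frame=prev, criterion="distance")
```

Note that this sketch applies a single selected criterion per call, mirroring the way claim 2 presents the distance, size, face, and speed tests as alternatives rather than a combined rule.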
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010119183.4A CN111263071B (en) | 2020-02-26 | 2020-02-26 | Shooting method and electronic equipment |
PCT/CN2021/076870 WO2021169851A1 (en) | 2020-02-26 | 2021-02-19 | Photographing method and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010119183.4A CN111263071B (en) | 2020-02-26 | 2020-02-26 | Shooting method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111263071A (en) | 2020-06-09 |
CN111263071B (en) | 2021-12-10 |
Family
ID=70952710
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010119183.4A Active CN111263071B (en) | 2020-02-26 | 2020-02-26 | Shooting method and electronic equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111263071B (en) |
WO (1) | WO2021169851A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111263071B (en) * | 2020-02-26 | 2021-12-10 | 维沃移动通信有限公司 | Shooting method and electronic equipment |
CN112422828B (en) * | 2020-11-17 | 2023-04-28 | 维沃移动通信有限公司 | Image processing method, image processing apparatus, electronic device, and readable storage medium |
CN112468722B (en) * | 2020-11-19 | 2022-05-06 | 惠州Tcl移动通信有限公司 | Shooting method, device, equipment and storage medium |
CN112887611A (en) * | 2021-01-27 | 2021-06-01 | 维沃移动通信有限公司 | Image processing method, device, equipment and storage medium |
CN113873135A (en) * | 2021-11-03 | 2021-12-31 | 乐美科技股份私人有限公司 | Image obtaining method and device, electronic equipment and storage medium |
CN114040129B (en) * | 2021-11-30 | 2023-12-05 | 北京字节跳动网络技术有限公司 | Video generation method, device, equipment and storage medium |
CN116708995B (en) * | 2023-08-01 | 2023-09-29 | 世优(北京)科技有限公司 | Photographic composition method, photographic composition device and photographic equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104349045A (en) * | 2013-08-09 | 2015-02-11 | 联想(北京)有限公司 | Image collecting method and electronic equipment |
CN105827952A (en) * | 2016-02-01 | 2016-08-03 | 维沃移动通信有限公司 | Photographing method for removing specified object and mobile terminal |
CN106331460A (en) * | 2015-06-19 | 2017-01-11 | 宇龙计算机通信科技(深圳)有限公司 | Image processing method and device, and terminal |
CN109040604A (en) * | 2018-10-23 | 2018-12-18 | Oppo广东移动通信有限公司 | Shoot processing method, device, storage medium and the mobile terminal of image |
CN109993688A (en) * | 2017-12-29 | 2019-07-09 | 深圳市优必选科技有限公司 | Robot, photo shooting and processing method thereof and storage device |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2494498B1 (en) * | 2009-10-30 | 2018-05-23 | QUALCOMM Incorporated | Method and apparatus for image detection with undesired object removal |
JP2016042661A (en) * | 2014-08-18 | 2016-03-31 | キヤノン株式会社 | Information processing unit, system, information processing method, and program |
JP6324879B2 (en) * | 2014-11-18 | 2018-05-16 | 富士フイルム株式会社 | Imaging apparatus and control method thereof |
JP6640460B2 (en) * | 2015-03-30 | 2020-02-05 | 富士フイルム株式会社 | Image capturing apparatus, image capturing method, program, and recording medium |
CN107360366B (en) * | 2017-06-30 | 2020-05-12 | Oppo广东移动通信有限公司 | Photographing method and device, storage medium and electronic equipment |
CN108924418A (en) * | 2018-07-02 | 2018-11-30 | 珠海市魅族科技有限公司 | A kind for the treatment of method and apparatus of preview image, terminal, readable storage medium storing program for executing |
CN111263071B (en) * | 2020-02-26 | 2021-12-10 | 维沃移动通信有限公司 | Shooting method and electronic equipment |
- 2020-02-26: CN application CN202010119183.4A filed (granted as CN111263071B, status: Active)
- 2021-02-19: PCT application PCT/CN2021/076870 filed (published as WO2021169851A1, status: Application Filing)
Also Published As
Publication number | Publication date |
---|---|
CN111263071A (en) | 2020-06-09 |
WO2021169851A1 (en) | 2021-09-02 |
Similar Documents
Publication | Title |
---|---|
CN111263071B (en) | Shooting method and electronic equipment |
CN108668083B (en) | Photographing method and terminal |
CN108513070B (en) | Image processing method, mobile terminal and computer readable storage medium |
CN110913132B (en) | Object tracking method and electronic equipment |
CN108471498B (en) | Shooting preview method and terminal |
CN108495029B (en) | Photographing method and mobile terminal |
CN109639969B (en) | Image processing method, terminal and server |
CN110213440B (en) | Image sharing method and terminal |
CN108307106B (en) | Image processing method and device and mobile terminal |
CN107730460B (en) | Image processing method and mobile terminal |
CN110881105B (en) | Shooting method and electronic equipment |
CN111050069B (en) | Shooting method and electronic equipment |
CN108174110B (en) | Photographing method and flexible screen terminal |
CN109246351B (en) | Composition method and terminal equipment |
CN111405181B (en) | Focusing method and electronic equipment |
CN111401463A (en) | Method for outputting detection result, electronic device, and medium |
CN110855897B (en) | Image shooting method and device, electronic equipment and storage medium |
CN110602387B (en) | Shooting method and electronic equipment |
CN108881721A (en) | A kind of display methods and terminal |
CN111064888A (en) | Prompting method and electronic equipment |
CN108924413B (en) | Shooting method and mobile terminal |
CN108345657B (en) | Picture screening method and mobile terminal |
CN108243489B (en) | Photographing control method and mobile terminal |
CN108156386B (en) | Panoramic photographing method and mobile terminal |
CN110913133B (en) | Shooting method and electronic equipment |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |