CN111654620B - Shooting method and device - Google Patents


Info

Publication number
CN111654620B
CN111654620B
Authority
CN
China
Prior art keywords
preview image
image
target
shooting
determining
Prior art date
Legal status
Active
Application number
CN202010454009.5A
Other languages
Chinese (zh)
Other versions
CN111654620A (en)
Inventor
马若洲
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010454009.5A priority Critical patent/CN111654620B/en
Publication of CN111654620A publication Critical patent/CN111654620A/en
Application granted granted Critical
Publication of CN111654620B publication Critical patent/CN111654620B/en
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method and a shooting device, belonging to the technical field of communication. The method mainly comprises: receiving a first input to a first preview image while the first preview image is displayed; in response to the first input, determining a first shooting time period according to a first influence value of a first object in a second preview image on a first target image, where the first target image is the image obtained by imaging the first preview image; and shooting the first preview image in the first shooting time period and displaying the first target image. A first view range of the first preview image is smaller than a second view range of the second preview image, and the view picture of the second view range includes the view picture of the first view range. This can solve the problems of poor image shooting effect and low efficiency in the prior art.

Description

Shooting method and device
Technical Field
The application belongs to the technical field of communication, and particularly relates to a shooting method and device.
Background
With the development of science and technology, various electronic devices have rapidly become popular and are now essential tools in people's daily lives. More and more scenes involve taking photos or videos with an electronic device, for example, photographing a scenic spot, taking group photos with friends, or asking others to take one's picture while traveling. When shooting, the user tends to focus on the screen and can hardly notice passers-by, who are discovered only after they already appear in the captured image; as a result, the image shooting effect is poor and the shooting efficiency is low.
Disclosure of Invention
The embodiments of the present application aim to provide a shooting method and a shooting device, which can solve the current problems of poor image shooting effect and low shooting efficiency.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a shooting method, where the method includes:
receiving a first input to the first preview image in a case where the first preview image is displayed;
in response to the first input, determining a first shooting time period according to a first influence value of a first object in a second preview image on a first target image, wherein the first target image is an image obtained by imaging the first preview image;
shooting a first preview image in a first shooting time period, and displaying a first target image;
wherein a first view range of the first preview image is smaller than a second view range of the second preview image, and the view picture of the second view range includes the view picture of the first view range.
In a second aspect, an embodiment of the present application provides a shooting device, including:
the receiving module is used for receiving a first input to the first preview image in a case where the first preview image is displayed;
the determining module is used for, in response to the first input, determining a first shooting time period according to a first influence value of a first object in a second preview image on a first target image, wherein the first target image is an image obtained by imaging the first preview image;
the display module is used for shooting the first preview image in the first shooting time period and displaying the first target image;
wherein a first view range of the first preview image is smaller than a second view range of the second preview image, and the view picture of the second view range includes the view picture of the first view range.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, where the program or instructions, when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the method according to the first aspect.
In the embodiments of the present application, the larger view range of the second preview image can be used to identify a first object, such as a non-subject object, outside the first preview image; an optimal shooting time can be predicted according to the first influence value of the first object in the second preview image on the target image obtained by imaging the first preview image; and the user can shoot the first preview image at the optimal shooting time to obtain the target image. This reduces how often a non-subject object (such as a passer-by) appears in the captured image, improving the shooting effect of the captured image; in addition, shooting in combination with the optimal shooting time improves shooting efficiency.
Drawings
Fig. 1 is a schematic view of an application scenario of a shooting method provided in an embodiment of the present application;
fig. 2 is a flowchart of a shooting method according to an embodiment of the present disclosure;
fig. 3 is a flowchart of a shooting method based on a combination of a focal length camera and a wide-angle camera according to an embodiment of the present disclosure;
FIG. 4 is a schematic interface diagram of a target object and a first object according to an embodiment of the present disclosure;
fig. 5 is a schematic interface diagram for displaying a target shooting time according to an embodiment of the present disclosure;
fig. 6 is a flowchart of another shooting method based on a combination of a focal-length camera and a wide-angle camera according to an embodiment of the present disclosure;
FIG. 7 is a schematic interface diagram of a target object and a plurality of first objects according to an embodiment of the present disclosure;
fig. 8 is a schematic view of an interface for displaying prompt information according to an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a shooting device according to an embodiment of the present application;
fig. 10 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The shooting method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, the shooting method can be applied to an electronic device that includes at least two cameras. Here, one camera of each of two types is taken as an example: a first camera, i.e. the camera 10, and a second camera, i.e. the camera 11. The cameras 10 and 11 have different view ranges; that is, the shooting range of the camera 10 is larger than that of the camera 11, and the view picture of the view range of the camera 10 includes the view picture of the view range of the camera 11. The combination of the cameras 10 and 11 may be, for example, a focal-length camera and a wide-angle camera, a focal-length camera and an ultra-wide-angle camera, or a wide-angle camera and an ultra-wide-angle camera.
Based on this, taking the combination of a focal-length camera and a wide-angle camera as an example: because the wide-angle camera has a larger shooting range and a larger view range than the focal-length camera with an ordinary viewing angle, it obtains a larger view picture. During shooting, the second preview image, i.e. the view picture acquired by the wide-angle camera, can therefore be used to identify passers-by outside the view picture of the focal-length camera, i.e. the first preview image, and the first shooting time period for shooting the first preview image can be predicted according to the influence value of a passer-by on the first target image, i.e. the image obtained by imaging the view picture of the focal-length camera. For example, it is determined when a passer-by will enter the shooting range of the focal-length camera and whether the passer-by will affect the shooting effect of the image obtained after the view picture of the focal-length camera is imaged. In addition, a time progress bar can be displayed on the display screen according to the first shooting time period, with the predicted first shooting time period marked on it, so that the user can select the best shooting opportunity as indicated by the progress bar.
In this way, a first object, such as a non-subject passer-by outside the first preview image, can be identified using the larger view range of the second preview image; the optimal shooting time can be predicted according to the first influence value of the passer-by in the second preview image on the target image obtained after the first preview image is imaged; and the user can shoot the first preview image at the optimal shooting time to obtain the target image. This reduces how often a non-subject object (such as a passer-by) appears in the captured image, improving the shooting effect; in addition, shooting in combination with the optimal shooting time improves shooting efficiency.
It should be noted that the first object in the embodiments of the present application may be a moving object, a stationary object, or, of course, an object that first moves and then becomes stationary; the state of the first object is not specifically limited here.
According to the application scenario, the following describes in detail the shooting method provided by the embodiment of the present application with reference to fig. 2 to 8.
Fig. 2 is a flowchart of a shooting method according to an embodiment of the present disclosure.
As shown in fig. 2, the shooting method may specifically include the following steps:
first, in a case where a first preview image is displayed, a first input to the first preview image is received in step 210.
Next, in step 220, in response to the first input, a first shooting time period is determined according to a first influence value of the first object in the second preview image on the first target image, where the first target image is the imaged image of the first preview image.
Then, in step 230, a first preview image is captured during a first capture period, and a first target image is displayed.
Wherein a first view range of the first preview image is smaller than a second view range of the second preview image, and the view picture of the second view range includes the view picture of the first view range.
In this way, a first object, such as a non-subject object outside the first preview image, can be identified using the larger view range of the second preview image; the optimal shooting time can be predicted according to the first influence value of the first object in the second preview image on the target image obtained by imaging the first preview image; and the user can shoot the first preview image at the optimal shooting time to obtain the target image. This reduces how often a non-subject object (such as a passer-by) appears in the captured image, improving the shooting effect; in addition, shooting in combination with the optimal shooting time improves shooting efficiency.
The above steps are described in detail below, specifically as follows:
Referring first to step 220: in one possible embodiment, it is determined whether the first influence value satisfies a first preset condition, that is, whether the first influence value is smaller than a first preset threshold; when the first influence value is smaller than the first preset threshold, the first shooting time period is determined. Conversely, when the first influence value is greater than or equal to the first preset threshold, a second shooting time period needs to be determined once an influence value smaller than the first preset threshold (or a second preset threshold) is found. The specific steps are as follows:
firstly, determining a first influence value of a first object on a first target image according to the movement track information of the first object in a second preview image; in a case where the first influence value satisfies a first preset condition, a first photographing time period is determined.
Next, in a possible embodiment, this step is described in detail in this application, wherein the following provides a specific step of determining the first influence value according to the movement trajectory information.
Acquiring first imaging information of a first object in a second preview image; determining the moving track information of the first object in the second preview image according to the first imaging information; according to the moving track information, predicting an influence parameter when the first object enters a view picture corresponding to the first preview image; based on the impact parameter, a first impact value of the first object on the first target image is determined.
Here, the above-mentioned influence parameter includes at least one of the following parameters:
the first time when the first object enters the view scene corresponding to the first preview image, and the second imaging information when the first object enters the view scene corresponding to the first preview image. Here, the first imaging information (or the second imaging information) may include at least one of: imaging size, imaging position, occlusion size.
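As an illustration of predicting the "first time" influence parameter above, the following is a minimal sketch, not taken from the patent: it assumes the first object moves at constant horizontal speed in coordinates normalized to the second preview image, with `[inner_left, inner_right]` the horizontal extent of the first preview's view picture; all names and the constant-velocity model are assumptions.

```python
def predict_entry_time(x0: float, vx: float,
                       inner_left: float, inner_right: float):
    """Earliest t >= 0 with inner_left <= x0 + vx*t <= inner_right,
    or None if the object never enters the narrower frame."""
    if inner_left <= x0 <= inner_right:
        return 0.0          # already inside the first preview's view picture
    if vx == 0.0:
        return None         # stationary outside the frame
    # the object must first cross the boundary nearer to it
    boundary = inner_left if x0 < inner_left else inner_right
    t = (boundary - x0) / vx
    return t if t >= 0.0 else None  # negative t means it is moving away
```

A real implementation would extend this to two dimensions and to non-linear motion, but the principle of extrapolating the trajectory to the boundary of the first preview is the same.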
Then, based on the foregoing in one possible embodiment, the present application provides two ways of determining a first influence value of the first object on the first target image based on the influence parameter, which are specifically as follows:
(1) First way:
When the depth of field of the first object is the same as that of the target object in the first preview image, a first influence value of the first object on the first target image is determined according to the first imaging information and the second imaging information.
(2) Second way:
When the depth of field of the first object is different from that of the target object in the first preview image, a first influence value of the first object on the first target image is determined according to the occlusion range of the first object over the target object.
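The two ways above can be sketched as a single scoring function. This is a hypothetical illustration: the weights, the `[0, 1]` normalization, and all parameter names are assumptions, not values from the patent; a higher score means a greater influence of the first object on the first target image.

```python
def influence_value(same_depth: bool, distance: float = 0.0, size: float = 0.0,
                    occlusion_ratio: float = 0.0,
                    w_dist: float = 0.7, w_size: float = 0.3) -> float:
    """same_depth=True : lateral passer-by; distance to the target object
    (normalized to [0, 1]) is the main parameter, imaging size (fraction
    of the first preview) the auxiliary one.
    same_depth=False: passer-by in front of/behind the target object; the
    occlusion ratio of the target object is the main parameter."""
    if same_depth:
        proximity = max(0.0, 1.0 - distance)  # nearer -> larger impact
        return w_dist * proximity + w_size * size
    return occlusion_ratio
```

The first preset condition then reduces to comparing this score against the first preset threshold.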
In addition, as mentioned above, when the first influence value is greater than or equal to the first preset threshold, that is, when the first influence value of the first object on the first target image does not satisfy the first preset condition, the following steps are performed:
determining a third preview image in the second preview image under the condition that the first influence value of the first object on the first target image does not meet a first preset condition, wherein the second influence value of the first object on the second target image after the third preview image is imaged meets a second preset condition;
determining a second shooting time period for shooting the third preview image according to the second target image;
displaying prompt information, wherein the prompt information is used for prompting the user to move the target object in the first preview image to a target position in the third preview image, and/or prompting the user to shoot the third preview image in the second shooting time period to obtain the second target image;
the target position comprises a position where the influence value of the first object on the second target image is lower than a preset threshold value.
It should be noted that, in some scenarios, the step of displaying the prompt information may correspond to step 230; for example, when the prompt information is only used to prompt the user to shoot the third preview image in the second shooting time period, the step of displaying the prompt information may be regarded as step 230.
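The branch between the main flow and the fallback flow above can be sketched as follows; the function and action names are hypothetical, and `find_third_preview` stands in for the search over the second preview image described in the steps above.

```python
def plan_shot(first_influence: float, threshold: float, find_third_preview):
    """Return (action, payload) describing what the device should do next."""
    if first_influence < threshold:          # first preset condition met
        return ("shoot_first_preview", None)
    third = find_third_preview()             # search the second preview image
    if third is not None:
        # prompt the user to move the target object, then shoot
        return ("prompt_move_and_shoot", third)
    return ("wait", None)                    # no suitable composition yet
```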
Then, referring to step 230, in a possible embodiment, the target shooting time may be displayed in a manner of a preset identifier to prompt the user to shoot the first preview image and display the first target image in the first shooting time period; wherein, the preset mark comprises at least one of the following: progress bar identification and time prompt box identification.
Based on the shooting method, the embodiment of the present application provides two embodiments to describe the above manner in detail, which are specifically shown as follows.
Fig. 3 is a flowchart of a shooting method based on a combination of a focal length camera and a wide-angle camera according to an embodiment of the present application.
As shown in fig. 3, the shooting method may specifically include steps 310 to 360, which are specifically as follows:
in step 310, a second input of the user to open a shooting application, such as application a, is received.
Step 320, in response to the second input, displaying the first preview image.
Here, the first preview image acquired by the focal-length camera is displayed. Alternatively, the first preview image and the second preview image acquired by the wide-angle camera are displayed simultaneously. Alternatively, the first preview image is displayed first, and when a preset operation of the user is received, the first preview image and the second preview image are displayed simultaneously (or one after the other).
And 330, determining a first influence value of the first object in the second preview image on the first target image according to the first preview image and the second preview image, wherein the first target image is an imaged image of the first preview image.
As shown in fig. 4, the electronic device identifies, in the second preview image and the first preview image, the main photographed object, i.e. the target object in the first preview image, and non-subject objects such as passers-by. Here, the wide-angle camera can capture passers-by who are outside the shooting range of the focal-length camera but within the shooting range of the wide-angle camera, that is, passers-by who can be recognized by the wide-angle camera but do not yet appear in the first preview image.
Then, according to the moving track information of the first object in the second preview image, a first influence value of the first object on the first target image is determined.
Specifically, the position of each identified passer-by is tracked over a certain period of time, and the moving direction and moving speed of the passer-by are calculated from the changes of the passer-by's imaging size and imaging position in the second preview image. From the identified moving direction and moving speed, the moving trajectory of the passer-by, the time at which the passer-by will enter the first preview image, and the passer-by's imaging size and imaging position in the second preview image over a subsequent time period are calculated. Then, for each time point in the subsequent time period, a first influence value of the passer-by on the shooting effect of the first target image obtained by imaging the first preview image at that time is calculated from the passer-by's imaging size and imaging position in the second preview image.
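The tracking step above can be illustrated with a minimal sketch. It assumes a fixed frame rate and purely linear motion in image coordinates; the function names are hypothetical and a production tracker would be considerably more robust.

```python
def estimate_velocity(track):
    """Average (dx, dy) per frame from a list of (x, y) imaging positions."""
    if len(track) < 2:
        return (0.0, 0.0)
    n = len(track) - 1
    return ((track[-1][0] - track[0][0]) / n,
            (track[-1][1] - track[0][1]) / n)

def extrapolate(track, frames_ahead):
    """Predicted imaging position `frames_ahead` frames after the last sample."""
    dx, dy = estimate_velocity(track)
    x, y = track[-1]
    return (x + dx * frames_ahead, y + dy * frames_ahead)
```

Feeding the extrapolated positions into the influence-value calculation gives the per-time-point scores the method needs.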
Here, the first influence value is calculated centered on the upper body of the main photographed target object. For a passer-by lateral to the target object (i.e. on the left or right side of the target object), the distance between the passer-by and the target object is the main parameter and the imaging size of the passer-by in the first preview image is an auxiliary parameter; for a passer-by in front of or behind the target object, the degree of influence of the passer-by on the first target image is calculated with the size of the passer-by's occlusion of the target object as the main parameter.
In step 340, a first input to the first preview image is received while the first preview image is displayed.
Step 350, responding to the first input, and determining a first shooting time period according to the first influence value.
When the first influence value satisfies the first preset condition, a first shooting time period suitable for shooting is determined. Here, the first shooting time period is determined such that the first influence values at all time points within the period satisfy the first preset condition; that is, such a period is considered suitable for shooting.
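A sketch of this selection step, under stated assumptions: influence values are predicted at discrete time points, and the suitable periods are the maximal runs in which every value stays below the threshold. The `min_len` parameter, which filters out windows too short to shoot in, is an illustrative addition, not part of the patent.

```python
def shooting_windows(influence, threshold, min_len=2):
    """Return (start, end) index pairs of suitable shooting periods."""
    windows, start = [], None
    for i, v in enumerate(influence):
        if v < threshold:
            if start is None:
                start = i                     # a candidate window opens
        elif start is not None:
            if i - start >= min_len:
                windows.append((start, i - 1))
            start = None                      # the window closes
    if start is not None and len(influence) - start >= min_len:
        windows.append((start, len(influence) - 1))
    return windows
```

These windows are what the progress bar in the next step marks out for the user.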
And 360, displaying the target shooting time in a preset identification mode to prompt a user to shoot the first preview image in the first shooting time period and display the first target image.
Wherein the preset identifier comprises at least one of the following: a progress bar identifier and a time prompt box identifier. As shown in fig. 5, when the preset identifier is a progress bar identifier, a progress bar advancing over time may be displayed on the screen, with the time periods suitable for shooting indicated on it in different colors (or different fill patterns, as shown in fig. 5), to prompt the user to select a suitable first shooting time period according to the progress bar.
It should be noted that in the embodiment of the present application, the first influence value is determined before the first input to the first preview image is received, and in another embodiment, the first influence value may be determined after the first input is received, so as to determine the first shooting time period according to the first influence value.
The shooting method can be applied to scenes in which a user takes pictures at tourist attractions, where tourists often block the lens. In such scenes, the photographer and the person being photographed can only hold their poses, ready to shoot at any moment, waiting for a shooting opportunity and wasting time. With this shooting method, the photographer only needs to get ready when the analyzed optimal shooting time arrives, and the person being photographed can likewise wait before assuming the pose, which saves time.
Thus, the embodiment of the present application can predict the influence of a passer-by on the lens from the passer-by's movements, work out a suitable shooting time, and prompt the photographer in the form of a progress bar, so that the lens is not blocked by a passer-by who suddenly appears just as the shot is about to be taken. The photographer can simply wait for the optimal shooting moment, which saves waiting time for both the photographer and the person being photographed, improves shooting efficiency, and improves user experience.
In addition, the embodiment of the application also provides another flow chart of a shooting method based on the combination of the focal length camera and the wide-angle camera.
Unlike the example above, which considers the influence of passers-by on the first target image while they are walking, this second embodiment considers the influence on the first target image of passers-by who have stopped and are stationary within the first preview image.
Fig. 6 is a flowchart of another shooting method based on a combination of a focal-length camera and a wide-angle camera according to an embodiment of the present disclosure.
As shown in fig. 6, the shooting method may specifically include steps 610 to 660, which are specifically as follows:
step 610, receiving a second input of the user to start the shooting application.
Step 620, in response to the second input, displaying the first preview image.
Step 630, according to the first preview image and the second preview image, determining a first influence value of the first object in the second preview image on the first target image, where the first target image is an imaged image of the first preview image.
Through the second preview image and the first preview image, the electronic device identifies the main photographed object, i.e. the target object, and non-subject first objects, such as passers-by, in the view picture of the first preview image.
The main photographed target object and the passers-by can be distinguished according to the face orientation and/or the standing position of each object in the preview images.
Then, the first influence value is calculated centered on the upper body of the main photographed target object. For a passer-by lateral to the target object (i.e. on the left or right side of the target object), the distance between the passer-by and the target object is the main parameter and the imaging size of the passer-by in the first preview image is an auxiliary parameter; for a passer-by in front of or behind the target object, the degree of influence of the passer-by on the first target image is calculated with the size of the passer-by's occlusion of the target object as the main parameter.
In the case of displaying the first preview image, a first input to the first preview image is received, step 640.
Step 650, in response to the first input, determines that the first impact value does not satisfy the first preset condition.
And 660, determining a third preview image in the second preview image under the condition that the first influence value is determined not to meet the first preset condition, wherein the second influence value of the first object on the second target image formed by the third preview image meets the second preset condition.
As shown in fig. 7, the wide-angle camera captures at least one passer-by who is in the second preview image but not in the first preview image, and the position and distance of each passer-by relative to the electronic device are calculated from the passer-by's imaging size and imaging position in the second preview image, giving each passer-by's position in space.
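One common way to turn imaging size and position into a position in space is the pinhole camera model; the following sketch assumes a known focal length in pixels and an assumed real-world height for a passer-by, both of which are illustrative assumptions rather than values from the patent.

```python
def estimate_distance(real_height_m, image_height_px, focal_px):
    """Distance along the optical axis: Z = f * H / h."""
    return focal_px * real_height_m / image_height_px

def estimate_lateral_offset(x_px, cx_px, distance_m, focal_px):
    """Lateral offset from the optical axis: X = (x - cx) * Z / f."""
    return (x_px - cx_px) * distance_m / focal_px
```

Together the two values give each passer-by's (X, Z) position relative to the camera, which the simulation in the next step can reason about.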
Then, based on each passer-by's position in space, it is calculated by simulation whether, with the photographed person placed in a third preview image, the second influence value of the first object on the second target image obtained by imaging that third preview image satisfies a second preset condition. Here, the second preset condition may be the same as or different from the first preset condition, which is not limited here.
And when the second influence value meets a second preset condition, determining a target position in the third preview image, wherein the target position comprises a position at which the influence value of the first object on the second target image is lower than a preset threshold value, namely the influence on the effect of shooting the third preview image to obtain the second target image is minimum when the shot person stands at the position for shooting.
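A target-position search of this kind might look like the sketch below, where `influence_at` stands in for the simulation over passerby positions; the function names, the candidate list, and the 0.3 threshold are assumptions, not taken from the patent:

```python
def best_target_position(candidates, influence_at, threshold=0.3):
    """Return the candidate position with the lowest simulated influence value,
    provided that value is below the preset threshold; otherwise None.

    candidates: iterable of (x, y) positions the photographed subject could move to.
    influence_at: maps a position to the simulated influence of the first object
    on the second target image when the subject stands there.
    """
    scored = [(influence_at(pos), pos) for pos in candidates]
    best_score, best_pos = min(scored)
    return best_pos if best_score < threshold else None
```

Returning `None` when no candidate clears the threshold mirrors the patent's branching: only when the second influence value satisfies the condition is a target position reported to the user.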
Step 670, determining a second shooting time period for shooting the third preview image according to the second target image.
Step 680, displaying prompt information, where the prompt information is used to prompt moving the target object in the first preview image to the target position in the third preview image, and to prompt the user to shoot the third preview image within the second shooting time period to obtain the second target image.
The target position comprises a position where the influence value of the first object on the second target image is lower than a preset threshold value.
Here, as shown in fig. 8, according to the analyzed optimal target position, the direction toward the target position is prompted to the user on the screen. For example, if the simulation finds that moving 30 cm to the left minimizes the passersby's influence on the shot, the user is prompted on screen to move left; once the target position is reached, the screen indicates that the optimal position has been reached and the picture can be taken. A progress bar that advances with time is also displayed on the screen, and the time periods suitable for shooting are marked on it in a different color, so that the user can select a suitable second shooting time period according to the progress bar and shoot to obtain the second target image.
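Marking the suitable periods on such a progress bar reduces to finding the spans where the predicted influence stays below a threshold. A minimal sketch, in which the per-frame sampling, the threshold, and the names are assumptions:

```python
def suitable_intervals(predicted_influence, threshold=0.3):
    """Return (start, end) index pairs (end exclusive) over a per-frame
    influence forecast where the influence stays below the threshold --
    the spans the progress bar would colour as suitable for shooting."""
    intervals, start = [], None
    for t, value in enumerate(predicted_influence):
        if value < threshold and start is None:
            start = t  # a suitable span begins
        elif value >= threshold and start is not None:
            intervals.append((start, t))  # the span just ended
            start = None
    if start is not None:
        intervals.append((start, len(predicted_influence)))
    return intervals
```

For a forecast `[0.5, 0.1, 0.1, 0.6, 0.2]` this yields `[(1, 3), (4, 5)]`: two colour-marked windows the user could choose from.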
Crowds are common when taking pictures at popular scenic spots, and a photographer otherwise has to keep moving to find a workable position. With this shooting method, the photographer can easily see which position keeps passersby out of the lens, quickly find a suitable shooting position, and be prompted with a reasonable shooting time, saving shooting time overall.
In addition, based on the shooting method, the embodiment of the present application further provides a shooting device, which is described in detail with reference to fig. 9.
Fig. 9 is a schematic structural diagram of a shooting device according to an embodiment of the present application.
As shown in fig. 9, the shooting device 90 may specifically include:
a receiving module 901, configured to receive a first input to a first preview image when the first preview image is displayed;
a determining module 902, configured to determine, in response to a first input, a first shooting time period according to a first influence value of a first object in a second preview image on a first target image, where the first target image is an imaged image of a first preview image;
a display module 903, configured to capture a first preview image in a first capture time period, and display a first target image;
and the first view range of the first preview image is smaller than the second view range of the second preview image, and the view picture of the second view range comprises the view picture of the first view range.
In a possible embodiment, the determining module 902 may be specifically configured to determine, according to the movement trajectory information of the first object in the second preview image, a first influence value of the first object on the first target image;
in a case where the first influence value satisfies a first preset condition, a first photographing time period is determined.
Further, the determining module 902 may be specifically configured to obtain first imaging information of the first object in the second preview image;
determining the moving track information of the first object in the second preview image according to the first imaging information;
according to the moving track information, predicting an influence parameter when the first object enters a view picture corresponding to the first preview image;
based on the impact parameter, a first impact value of the first object on the first target image is determined.
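The trajectory-based prediction just described could be sketched as follows under a constant-velocity assumption; the patent leaves the motion model unspecified, so the sampling scheme and all names here are assumptions:

```python
def predict_entry_frame(x_history, frame_left, frame_right):
    """Estimate how many frames until an object, tracked by its horizontal
    position in the second (wide) preview image, enters the first preview's
    narrower field of view [frame_left, frame_right].
    Returns 0.0 if already inside, None if it is not approaching."""
    if len(x_history) < 2:
        return None  # not enough trajectory information yet
    x, v = x_history[-1], x_history[-1] - x_history[-2]  # position, per-frame velocity
    if frame_left <= x <= frame_right:
        return 0.0
    if v == 0:
        return None  # stationary outside the narrow frame
    # Frames until the nearer boundary of the narrow frame is crossed.
    boundary = frame_left if x < frame_left else frame_right
    frames = (boundary - x) / v
    return frames if frames > 0 else None  # negative means moving away
```

The returned frame count corresponds to the "first time when the first object enters the view picture corresponding to the first preview image" used as an influence parameter below.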
The influence parameter in the embodiment of the present application may include at least one of the following parameters:
the first time when the first object enters the view scene corresponding to the first preview image, and the second imaging information when the first object enters the view scene corresponding to the first preview image.
In another possible embodiment, the determining module 902 may be specifically configured to determine, according to the first imaging information and the second imaging information, a first influence value of the first object on capturing the first preview image when the depth of field of the first object is the same as that of the target object in the first preview image; or, when the depth of field of the first object is different from that of the target object in the first preview image, determine a first influence value of the first object on the first target image according to the occlusion range of the first object on the target object.
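The different-depth branch needs a concrete occlusion measure. Over bounding boxes in the preview image it could be the fraction of the target object covered by the first object; the box format and names below are assumptions:

```python
def occlusion_ratio(target_box, object_box):
    """Fraction of the target object's bounding box covered by the first
    object's box; boxes are (x1, y1, x2, y2) in preview-image coordinates."""
    tx1, ty1, tx2, ty2 = target_box
    ox1, oy1, ox2, oy2 = object_box
    # Width and height of the rectangle where the two boxes overlap.
    overlap_w = max(0.0, min(tx2, ox2) - max(tx1, ox1))
    overlap_h = max(0.0, min(ty2, oy2) - max(ty1, oy1))
    target_area = (tx2 - tx1) * (ty2 - ty1)
    return (overlap_w * overlap_h) / target_area if target_area > 0 else 0.0
```

A box covering a quarter of the target yields 0.25, which the determining module could compare against the first preset condition.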
The display module 903 in this embodiment is specifically configured to display the target shooting time in a preset identifier manner, so as to prompt the user to shoot the first preview image within the first shooting time period, and to display the first target image; wherein
the preset identification comprises at least one of the following: progress bar identification and time prompt box identification.
In addition, in a further possible embodiment, the determining module 902 is further configured to determine, in the second preview image, a third preview image in a case that the first influence value of the first object on the first target image does not satisfy the first preset condition, and the second influence value of the first object on the second target image after the third preview image is imaged satisfies the second preset condition; and determining a second photographing time period for photographing the third preview image according to the second target image. The display module 903 is further configured to display a prompt message, where the prompt message is used to prompt that the target object in the first preview image is moved to a target position in the third preview image, and prompt the user to shoot the third preview image within a second shooting time period to obtain a second target image; the target position comprises a position where the influence value of the first object on the second target image is lower than a preset threshold value.
The shooting device in the embodiment of the present application may be a standalone device, or may be a component, an integrated circuit, or a chip in an electronic apparatus. The device may be a mobile or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not particularly limited.
The shooting device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
The shooting device provided in the embodiment of the present application can implement each process implemented by the shooting device in the method embodiments of fig. 1 to 8, and is not described herein again to avoid repetition.
In summary, in the embodiment of the present application, a first object other than the photographic subject (for example, a passerby) outside the first preview image can be identified by using the larger viewing range of the second preview image; an optimal shooting time is predicted according to the first influence value of the first object in the second preview image on the target image imaged from the first preview image; and the user can shoot the first preview image at the optimal shooting time to obtain the target image. This reduces how often non-subjects such as passersby appear in the captured image, improving the shooting effect, and shooting in combination with the optimal shooting time also improves shooting efficiency.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1010 through a power management system, thereby implementing charging, discharging, and power-consumption management. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange components differently, and the description is not repeated here.
The user input unit 1007 is configured to receive a first input for the first preview image when the first preview image is displayed.
The processor 1010 is configured to determine, in response to a first input, a first shooting time period according to a first influence value of a first object in the second preview image on a first target image, where the first target image is an imaged image of the first preview image.
A display unit 1006 is configured to capture a first preview image in a first capturing period, and display a first target image; and the first view range of the first preview image is smaller than the second view range of the second preview image, and the view picture of the second view range comprises the view picture of the first view range.
In this way, a first object other than the photographic subject (such as a passerby) outside the first preview image can be identified by using the larger viewing range of the second preview image, the optimal shooting time is predicted according to the first influence value of the first object in the second preview image on the target image imaged from the first preview image, and the user can shoot the first preview image at the optimal shooting time to obtain the target image. This reduces how often non-subjects such as passersby appear in the shot, improves the shooting effect, and, by shooting in combination with the optimal shooting time, also improves shooting efficiency.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the foregoing shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In addition, an embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the foregoing shooting method embodiment, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A photographing method, characterized by comprising:
receiving a first input to a first preview image in a case where the first preview image is displayed;
responding to the first input, and determining a first shooting time period according to a first influence value of a first object in a second preview image on a first target image, wherein the first target image is an imaged image of the first preview image;
shooting the first preview image in the first shooting time period, and displaying the first target image;
wherein a first viewing range of the first preview image is smaller than a second viewing range of the second preview image, a viewing picture of the second viewing range includes a viewing picture of the first viewing range, and the first target image imaged in the first photographing time period does not include the first object;
wherein a first influence value of the first object on the first target image is determined based on an influence parameter, the influence parameter comprising at least one of:
the first time when the first object enters the view scene corresponding to the first preview image and the second imaging information when the first object enters the view scene corresponding to the first preview image.
2. The method of claim 1, wherein determining the first shooting time period according to the first influence value of the first object in the second preview image on the first target image comprises:
determining a first influence value of the first object on a first target image according to the movement track information of the first object in the second preview image;
determining the first photographing time period when the first influence value satisfies a first preset condition.
3. The method of claim 2, wherein the determining the first influence value of the first object on the first target image according to the movement track information of the first object in the second preview image comprises:
acquiring first imaging information of the first object in the second preview image;
determining the movement track information of the first object in the second preview image according to the first imaging information;
according to the movement track information, predicting an influence parameter when the first object enters a view picture corresponding to the first preview image;
based on the influence parameter, a first influence value of the first object on a first target image is determined.
4. The method of claim 3, wherein determining the first influence value of the first object on the first target image based on the influence parameter comprises:
when the depth of field of the first object is the same as that of a target object in the first preview image, determining the first influence value of the first object on shooting the first preview image according to the first imaging information and the second imaging information.
5. The method of claim 3, wherein determining the first influence value of the first object on the first target image based on the influence parameter comprises:
when the depth of field of the first object is different from that of a target object in the first preview image, determining a first influence value of the first object on a first target image according to the shielding range of the first object on the target object.
6. A camera, comprising:
the device comprises a receiving module, a display module and a display module, wherein the receiving module is used for receiving a first input of a first preview image under the condition of displaying the first preview image;
the determining module is used for responding to the first input and determining a first shooting time period according to a first influence value of a first object in a second preview image on a first target image, wherein the first target image is an imaged image of the first preview image;
the display module is used for shooting the first preview image in the first shooting time period and displaying the first target image;
wherein a first viewing range of the first preview image is smaller than a second viewing range of the second preview image, a viewing picture of the second viewing range includes a viewing picture of the first viewing range, and the first target image imaged in the first photographing time period does not include the first object;
wherein a first influence value of the first object on the first target image is determined based on an influence parameter, the influence parameter comprising at least one of:
the first time when the first object enters the view scene corresponding to the first preview image and the second imaging information when the first object enters the view scene corresponding to the first preview image.
7. The apparatus according to claim 6, wherein the determining module is specifically configured to determine a first influence value of the first object on the first target image according to the movement trajectory information of the first object in the second preview image;
determining the first photographing time period when the first influence value satisfies a first preset condition.
8. The apparatus according to claim 7, wherein the determining module is specifically configured to obtain first imaging information of the first object in the second preview image;
determining the movement track information of the first object in the second preview image according to the first imaging information;
according to the movement track information, predicting an influence parameter when the first object enters a view picture corresponding to the first preview image;
based on the influence parameter, a first influence value of the first object on a first target image is determined.
9. The apparatus of claim 8, wherein the determining module is specifically configured to determine a first impact value of the first object on capturing the first preview image according to the first imaging information and the second imaging information when the first object and a target object in the first preview image have the same depth of field.
10. The apparatus of claim 8, wherein the determining module is specifically configured to determine the first impact value of the first object on the first target image according to an occlusion range of the first object on the target object when the depth of field of the first object is different from that of the target object in the first preview image.
CN202010454009.5A 2020-05-26 2020-05-26 Shooting method and device Active CN111654620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010454009.5A CN111654620B (en) 2020-05-26 2020-05-26 Shooting method and device


Publications (2)

Publication Number Publication Date
CN111654620A CN111654620A (en) 2020-09-11
CN111654620B true CN111654620B (en) 2021-09-17

Family

ID=72349561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010454009.5A Active CN111654620B (en) 2020-05-26 2020-05-26 Shooting method and device

Country Status (1)

Country Link
CN (1) CN111654620B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112399076B * 2020-10-27 2022-08-02 Vivo Mobile Communication Co Ltd Video shooting method and device

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108449546A (en) * 2018-04-04 2018-08-24 Vivo Mobile Communication Co Ltd Photographing method and mobile terminal
CN110178167A (en) * 2018-06-27 2019-08-27 Weifang University Intersection traffic-violation video identification method based on camera cooperative relay
CN110210276A (en) * 2018-05-15 2019-09-06 Tencent Technology (Shenzhen) Co Ltd Motion track acquisition method and device, storage medium, and terminal
CN110933303A (en) * 2019-11-27 2020-03-27 Vivo Mobile Communication (Hangzhou) Co Ltd Photographing method and electronic equipment

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10116945B2 (en) * 2016-02-26 2018-10-30 Panasonic Intellectual Property Management Co., Ltd. Moving picture encoding apparatus and moving picture encoding method for encoding a moving picture having an interlaced structure


Also Published As

Publication number Publication date
CN111654620A (en) 2020-09-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant