CN111279684A - Shooting control method and electronic device - Google Patents

Shooting control method and electronic device

Info

Publication number: CN111279684A
Authority: CN (China)
Prior art keywords: shooting, model, target object, photographing, preview picture
Legal status: Pending
Application number: CN201880070282.9A
Other languages: Chinese (zh)
Inventor: 王星泽 (Wang Xingze)
Current Assignee: Heren Technology Wuhan Co ltd
Original Assignee: Heren Technology Wuhan Co ltd
Application filed by Heren Technology Wuhan Co ltd
Publication of CN111279684A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Abstract

The application provides a shooting control method comprising: acquiring a shooting preview picture; and analyzing the shooting preview picture with a preset model to obtain shooting parameters for shooting. The application also provides an electronic device applying the shooting control method. Because the current shooting parameters are determined by analyzing the shooting preview picture, they can be determined in time from the picture itself, so that when a shot is needed, the current highlight moment can be captured promptly and at higher shooting quality.

Description

Shooting control method and electronic device

Technical Field
The present disclosure relates to the field of electronic devices, and in particular to a shooting control method for an electronic device and to the electronic device itself.
Background
Nowadays, with the improvement of people's living standards, photographing has become an indispensable everyday function. Whether on dedicated cameras or on electronic devices with a camera function such as mobile phones and tablet computers, pixel counts keep rising and photographing quality keeps improving. However, current electronic devices such as cameras and mobile phones usually require the user to set photographing parameters manually, or to set them according to the current environment, when controlling photographing. Both approaches lag behind the scene: by the time the parameters are set, a highlight moment has often already flashed by and been missed.
Disclosure of Invention
The application provides a shooting control method and an electronic device that can set shooting parameters in time, so that highlight moments can be captured promptly at high shooting quality.
In one aspect, a photographing control method is provided, the photographing control method including: acquiring a shooting preview picture; and analyzing the shooting preview picture by adopting a preset model to obtain shooting parameters for shooting.
In another aspect, an electronic device is provided that includes a camera, a memory, and a processor. The memory is configured to store program instructions. The processor is configured to call the program instructions to execute a shooting control method comprising: acquiring a shooting preview picture through the camera; and analyzing the shooting preview picture with a preset model to obtain shooting parameters for shooting.
In still another aspect, a computer-readable storage medium is provided that stores program instructions which, when called by a computer, cause the computer to execute a shooting control method comprising: acquiring a shooting preview picture; and analyzing the shooting preview picture with a preset model to obtain shooting parameters for shooting.
According to the shooting control method and the electronic device, the current shooting parameters are determined by analyzing the shooting preview picture and can therefore be determined in time, so that when a shot is needed, the current highlight moment can be captured promptly and at higher shooting quality according to those parameters.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments are briefly described below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive further variations from them without inventive effort.
Fig. 1 is a flowchart of a shooting control method in a first embodiment of the present application.
Fig. 2 is a flowchart of a photographing control method in a second embodiment of the present application.
Fig. 3 is a flowchart of a shooting control method in a third embodiment of the present application.
Fig. 4 is a flowchart of a photographing control method in a fourth embodiment of the present application.
Fig. 5 is a flowchart of a photographing control method in a fifth embodiment of the present application.
Fig. 6 is a flowchart of a photographing control method in a sixth embodiment of the present application.
Fig. 7 is a flowchart of a photographing control method in a seventh embodiment of the present application.
Fig. 8 is a flowchart of a photographing control method in an eighth embodiment of the present application.
Fig. 9 is a flowchart of a photographing control method in a ninth embodiment of the present application.
Fig. 10 is a block diagram illustrating a partial structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The shooting control method of the application can be applied to an electronic device that comprises a camera. The electronic device can acquire a shooting preview picture through the camera and display it, and can perform operations such as photographing, continuous shooting and video shooting through the camera. The camera may comprise a front camera and a rear camera; photographing, continuous shooting and video shooting can be performed through the rear camera, or as self-portraits through the front camera.
Please refer to fig. 1, which is a flowchart illustrating a photographing control method according to a first embodiment of the present application. The shooting control method is applied to an electronic device. In the first embodiment, the photographing control method includes the steps of:
S11: acquiring a shooting preview picture.
In some embodiments, the shooting preview picture is acquired by the camera in response to an operation of turning on the camera; the shooting preview picture is the viewfinder picture shown while the camera is framing.
In some embodiments, the operation of turning on the camera is a tap on the photographing application icon; when the camera is opened in response to that tap, the camera begins acquiring the shooting preview picture.
Alternatively, in other embodiments, the operation of turning on the camera is a specific operation on a physical key of the electronic device, for example, the electronic device includes a volume up key and a volume down key, and the operation of turning on the camera is an operation of simultaneously pressing the volume up key and the volume down key. Further, the operation of starting the photographing application is an operation of successively pressing the volume up key and the volume down key within a preset time (for example, 2 seconds).
In other embodiments, the operation of turning on the camera may also be an operation of a preset touch gesture input in any display interface of the electronic device, for example, on a main interface of the electronic device, a user may input a touch gesture with an annular touch track to turn on the camera.
In other embodiments, the operation of turning on the camera may also be an operation of a preset touch gesture input on the touch screen when the electronic device is in a black screen state.
In some embodiments, when the electronic device is a camera, the operation of turning on the camera is an operation of pressing a shutter button/power button of the camera to trigger the camera to be in an on state.
Optionally, in the present application, the shooting preview picture is obtained in real time through the camera.
S13: analyzing the shooting preview picture with a preset model to obtain shooting parameters for shooting.
The preset model may be a trained model or an untrained model.
Optionally, in some embodiments, the preset model is a trained neural network model, and analyzing the shooting preview picture with the preset model to obtain shooting parameters for shooting comprises: taking all pixels of the shooting preview picture as input and, after calculation by the neural network model, outputting shooting parameters for shooting. A sketch of this structure follows.
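By way of illustration only, the following is a minimal sketch of such a model, assuming a PyTorch-style framework and a three-value output (shutter time, aperture, sensitivity); the patent does not specify a framework or a network structure:

```python
import torch
import torch.nn as nn

class ShootingParamNet(nn.Module):
    """Hypothetical network: all preview pixels in, shooting parameters out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3)  # shutter time (s), aperture (f-number), ISO

    def forward(self, preview):  # preview: (N, 3, H, W) tensor of pixel values
        x = self.features(preview).flatten(1)
        return self.head(x)

model = ShootingParamNet().eval()
with torch.no_grad():
    params = model(torch.rand(1, 3, 480, 640))  # one full preview frame as input
shutter, aperture, iso = params[0].tolist()
```

In practice such a network would be trained on preview frames paired with the parameters of well-exposed shots; here it only illustrates the pixels-in, parameters-out structure described above.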
Optionally, in another embodiment, the preset model is a trained image processing algorithm model that includes a trained target object feature model, and analyzing the shooting preview picture with the preset model to obtain shooting parameters for shooting comprises: analyzing the target object in the shooting preview picture by image recognition to generate a corresponding target object feature vector; obtaining an analysis result from the trained target object feature model and the target object feature vector corresponding to the shooting preview picture; and determining the shooting parameters for shooting according to the analysis result.
Optionally, in one implementation, obtaining the analysis result from the trained target object feature model and the target object feature vector corresponding to the shooting preview picture comprises: calculating, through the trained target object feature model and that feature vector, the target object feature corresponding to the shooting preview picture as the analysis result.
In one implementation, determining shooting parameters for shooting according to the analysis result comprises: determining the shooting parameters corresponding to the obtained target object feature according to a preset correspondence between target object features and shooting parameters.
Optionally, in another implementation, when the trained image processing algorithm model includes a trained target object feature model, analyzing the shooting preview picture with the preset model to obtain shooting parameters for shooting comprises: taking the target object feature vector corresponding to the shooting preview picture as input to the trained target object feature model and obtaining the shooting parameters directly from that model.
In some embodiments, the analysis result is obtained by analyzing the shooting preview picture acquired in real time. In other embodiments, the currently acquired shooting preview picture is analyzed at preset intervals (for example, every 0.2 seconds) to obtain the shooting parameters, as sketched below.
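A sketch of the interval-based variant, where `grab_preview_frame` and `analyze` are hypothetical stand-ins for the camera pipeline and the preset model (the names are illustrative, not from the patent):

```python
import time

ANALYSIS_INTERVAL = 0.2  # seconds, the preset interval mentioned above

def analysis_loop(grab_preview_frame, analyze):
    """Re-analyze the current preview frame once per interval, not on every frame."""
    next_run = time.monotonic()
    while True:
        frame = grab_preview_frame()          # the currently acquired preview picture
        if time.monotonic() >= next_run:
            latest_params = analyze(frame)    # preset model yields shooting parameters
            next_run += ANALYSIS_INTERVAL
```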
In the application, the target object may be a hand, a face, a specific scene, etc.; the target object feature model correspondingly comprises a gesture feature model, an expression feature model, a scene feature model and the like, the analyzed target object feature vector can comprise a gesture feature vector, an expression feature vector, a scene feature vector and the like, and the analyzed target object feature can also comprise a gesture feature, an expression feature, a scene feature and the like.
Wherein the shooting parameters include but are not limited to: at least one of shutter time, aperture size, sensitivity, and the like.
Therefore, in the application, the current shooting parameters are determined by analyzing the shooting preview picture and can be determined in time from it, so that when a shot is needed, the current highlight moment can be captured promptly and at higher shooting quality according to those parameters.
Please refer to fig. 2, which is a flowchart illustrating a photographing control method according to a second embodiment of the present application. In a second embodiment, the preset model is a trained image processing algorithm model, the trained image processing algorithm model includes a trained target object feature model, and the shooting control method includes the following steps:
S21: acquiring a shooting preview picture.
S23: analyzing the target object in the shooting preview picture by image recognition to generate a corresponding target object feature vector, and obtaining an analysis result from the trained target object feature model and the target object feature vector corresponding to the shooting preview picture.
Optionally, in one implementation, obtaining the analysis result from the trained target object feature model and the target object feature vector corresponding to the shooting preview picture comprises: calculating, through the trained target object feature model and that feature vector, the target object feature corresponding to the shooting preview picture as the analysis result.
The target object may include objects such as a hand, a face or a specific scene, and the target object feature model may correspondingly include a gesture feature model, an expression feature model, a scene feature model and the like. The analyzed target object feature vector may include expression feature vectors such as laughing, anger or yawning, gesture feature vectors such as an "OK" gesture or a "V" gesture, or scene feature vectors such as flowers, birds or mountains; the analyzed target object feature may likewise be an expression feature, a gesture feature or a scene feature of these kinds.
Taking the target object being a hand and the target object feature model being a gesture feature model as an example, analyzing the shooting preview picture with the preset model to obtain an analysis result may specifically include: analyzing the gesture in the shooting preview picture by image recognition to generate a corresponding gesture feature vector; and obtaining the analysis result from the gesture feature model and the gesture feature vector corresponding to the shooting preview picture.
Further, in one implementation, obtaining the analysis result from the gesture feature model and the gesture feature vector corresponding to the shooting preview picture comprises: calculating, through the trained gesture feature model and that gesture feature vector, the gesture feature corresponding to the shooting preview picture as the analysis result.
In some embodiments, the trained target object feature model includes a plurality of reference images and the reference target object feature vectors of those images, and calculating the target object feature corresponding to the shooting preview picture comprises: comparing, through the trained target object feature model, the target object feature vector corresponding to the shooting preview picture with the reference target object feature vectors of the reference images, determining the reference image whose reference feature vector has the highest similarity, and deriving the target object feature from that reference image. For example, after the reference image is determined, the target object feature is determined according to the label of the reference image. A sketch of this comparison follows.
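A minimal sketch of this comparison step, assuming the feature vectors are plain numeric arrays and each reference image carries a feature label; cosine similarity is one possible measure, the patent does not fix the metric:

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_target_feature(preview_vec, reference_db):
    """reference_db: list of (reference_vector, label) pairs, e.g. label='OK gesture'."""
    best_label, best_sim = None, -1.0
    for ref_vec, label in reference_db:
        sim = cosine_similarity(preview_vec, ref_vec)
        if sim > best_sim:
            best_sim, best_label = sim, label
    return best_label, best_sim  # feature label of the most similar reference image
```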
S25: and determining shooting parameters for shooting according to the analysis result.
In some embodiments, when the analysis result is a target object feature derived from the trained target object feature model, determining a shooting parameter for shooting according to the analysis result includes: and determining to obtain the shooting parameters corresponding to the target object characteristics according to the corresponding relation between the preset target object characteristics and the shooting parameters.
Wherein, the shooting parameters include but are not limited to: at least one of shutter time, aperture size, sensitivity, and the like.
Optionally, a correspondence for determining the shooting parameters from target object features may be preset, for example a correspondence between shutter time and/or aperture size and the target object features. When the shooting parameters include shutter time and/or aperture size and the target object features may be expression features, gesture features, scene features and the like, a correspondence may be preset between shutter time and/or aperture size and expression features, gesture features, or specific scene features.
In particular, when the target object feature is a gesture feature, a correspondence can be preset between the shutter time and/or aperture size and the distance between the thumb and the index finger, where the shutter time and that distance may be in a logarithmic relationship. Thus, when the analysis yields the thumb-to-index-finger distance, shooting parameters such as shutter time and/or aperture size can be determined from the preset correspondence, realizing control of the exposure intensity and/or the depth of field. A sketch of such a mapping is given below.
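One way such a preset correspondence could be realized, sketched under the assumption that the thumb-to-index-finger distance has been normalized to [0, 1]; the constants are illustrative, not taken from the patent:

```python
def shutter_from_pinch(distance, t_min=1 / 1000, t_max=1.0):
    """Map a normalized thumb-to-index-finger distance to a shutter time in seconds.

    Linear in log-space, i.e. shutter time and distance are logarithmically related.
    """
    distance = min(max(distance, 0.0), 1.0)
    return t_min * (t_max / t_min) ** distance

shutter = shutter_from_pinch(0.5)  # roughly 1/30 s for a half-open pinch
```

An expression-based correspondence can similarly be held in a simple lookup table keyed by the recognized expression feature, as the next paragraph describes.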
For another example, when the target object feature is an expression feature, a correspondence may be preset between shooting parameters such as shutter time and/or aperture size and different expression features such as smiling, laughing, anger, crying or yawning; when a particular expression feature is obtained from the analysis result, the corresponding shooting parameters are determined from that correspondence.
In other embodiments, when the analysis result from the trained target object feature model already includes shooting parameters, determining shooting parameters for shooting according to the analysis result comprises: using the shooting parameters contained in that analysis result directly.
In some embodiments, the target object feature model may be trained as follows: analyze the target object in each picture of an initial training set using image or face recognition to generate the corresponding target object feature vectors; build a training sample set from the generated feature vectors and similarity labels between each photo and the reference picture; and then train on that sample set to obtain the trained target object feature model.
Therefore, in the application, the shooting parameters can be determined from the target object in the shooting preview picture, so parameters matching the target object can be set promptly and more reasonably, which improves shooting quality while ensuring that parameter setting is completed in time.
The correspondence relationship may be a correspondence relationship table stored in a memory of the electronic device.
Step S21 corresponds to step S11 in fig. 1; for details, refer to the related description of fig. 1, which is not repeated herein. Steps S23-S25 correspond to step S13 in fig. 1, and the related descriptions can be referred to each other.
Please refer to fig. 3, which is a flowchart illustrating a photographing control method according to a third embodiment of the present application. In a third embodiment, the preset model is a trained image processing algorithm model, the trained image processing algorithm model includes a trained target object feature model, and the shooting control method includes the following steps:
S301: acquiring a shooting preview picture.
S303: analyzing the target object in the shooting preview picture by image recognition to generate a corresponding target object feature vector.
S305: taking the target object feature vector corresponding to the shooting preview picture as input to the trained target object feature model and obtaining shooting parameters for shooting from that model.
The target object may include objects such as a hand, a face or a specific scene, and the target object feature model may correspondingly include a gesture feature model, an expression feature model, a scene feature model and the like. The analyzed target object feature vector may include expression feature vectors such as laughing, anger or yawning, gesture feature vectors such as an "OK" gesture or a "V" gesture, or scene feature vectors such as flowers, birds or mountains; the analyzed target object feature may likewise be an expression feature, a gesture feature or a scene feature of these kinds.
Taking the target object being a hand and the target object feature model being a gesture feature model as an example, this may include: taking the gesture feature vector corresponding to the shooting preview picture as input to the trained gesture feature model and obtaining the corresponding shooting parameters for shooting from that model.
Step S301 corresponds to step S11 in fig. 1; for details, refer to the related description of fig. 1, which is not repeated herein. Steps S303 to S305 correspond to step S13 in fig. 1, and the related descriptions can be referred to each other. Steps S303 and S305 also correspond to some extent to steps S23 and S25 in fig. 2, and the related features can be referred to each other.
Please refer to fig. 4, which is a flowchart illustrating a photographing control method according to a fourth embodiment of the present application. In a fourth embodiment, the preset model is a trained neural network algorithm model, and the shooting control method includes the following steps:
S31: acquiring a shooting preview picture.
S33: taking all pixels of the shooting preview picture as input and, after calculation by the trained neural network model, outputting shooting parameters for shooting.
That is, in some embodiments, the shooting parameters may be directly calculated by using all pixels of the shooting preview image as input information of a trained neural network model.
Steps S31 to S33 correspond to steps S11 to S13 in fig. 1 as a more specific implementation of those steps, and the related descriptions can be referred to each other.
Please refer to fig. 5, which is a flowchart illustrating a photographing control method according to a fifth embodiment of the present application. The difference from the first embodiment is that in the fifth embodiment, it is further determined whether or not the shooting condition is currently satisfied. In a fifth embodiment, a photographing control method includes:
S41: acquiring a shooting preview picture.
S43: analyzing the shooting preview picture with a preset model to determine whether the shooting condition is currently met. If so, step S45 is executed; otherwise, the flow returns to step S43 or ends.
In some embodiments, the preset model is a trained neural network model, and analyzing the shooting preview picture to determine whether the shooting condition is currently met comprises: analyzing the shooting preview picture through the trained neural network model to obtain an analysis result that includes a satisfaction degree, and determining from that result whether the shooting condition is currently met. Specifically, in some embodiments, when the satisfaction degree exceeds a preset satisfaction threshold, it is determined that the shooting condition is currently met. The satisfaction degree is output by the neural network model, trained in advance, which takes all pixels of the shooting preview picture as input. The preset satisfaction threshold may be 80%, 90% and so on.
That is, in some embodiments, when the shooting preview image is analyzed using the preset model, not only the shooting parameters but also the analysis result including the satisfaction degree is analyzed to determine whether the shooting condition is satisfied.
Optionally, in other embodiments, when the preset model is a trained image processing algorithm model, analyzing the shooting preview picture to determine whether the shooting condition is currently met comprises: comparing the shooting preview picture with a reference picture using the trained image processing algorithm model to obtain an analysis result that includes the similarity between the two, and determining from that result whether the shooting condition is currently met. Specifically, when the similarity exceeds a preset similarity threshold, it is determined that the shooting condition is currently met. The preset similarity threshold may be 80%, 90% and so on.
The reference picture may be a standard picture preset by the user showing a specific expression such as smiling, laughing, sadness, anger or yawning, a standard picture showing a gesture such as an "OK" gesture or a "V" gesture, or a standard picture showing a scene such as flowers, birds or mountains.
That is, in some embodiments, when the preview picture is analyzed by using the preset model, not only the shooting parameters but also the similarity with the reference picture are analyzed to determine whether the shooting condition is satisfied.
Further, when the trained image processing algorithm model includes a trained target object feature model, comparing the shooting preview picture with the reference picture and obtaining their similarity comprises: analyzing the target object in the shooting preview picture by image recognition to generate a corresponding target object feature vector; and calculating the similarity between the shooting preview picture and the reference picture from the trained target object feature model and that feature vector.
In some embodiments, calculating the similarity between the captured preview picture and the reference picture according to the trained target object feature model and the target object feature vector corresponding to the captured preview picture includes: and taking the target object feature vector corresponding to the shooting preview picture as input information of a trained target object feature model, and calculating the similarity between the shooting preview picture and the reference picture through the target object feature model.
In other embodiments, comparing the shooting preview picture with the reference picture using the trained image processing algorithm model comprises: acquiring the pixel information of the shooting preview picture, comparing it with the pixel information of the reference picture, and deriving the similarity between the two. That is, in these embodiments, the similarity is derived by comparing pixel information such as the pixel gray values of the two images, as sketched below.
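A sketch of this pixel-level path, assuming both pictures are already same-sized grayscale arrays; the measure below (mean absolute gray-value difference) is one illustrative choice, not specified by the patent:

```python
import numpy as np

def grayscale_similarity(preview_gray, reference_gray):
    """Inputs: uint8 arrays of identical shape, gray values 0-255."""
    diff = np.abs(preview_gray.astype(np.int16) - reference_gray.astype(np.int16))
    return 1.0 - diff.mean() / 255.0  # 1.0 = identical, 0.0 = maximally different

def meets_shooting_condition(preview_gray, reference_gray, threshold=0.8):
    # threshold corresponds to the preset similarity threshold (e.g. 80%)
    return grayscale_similarity(preview_gray, reference_gray) > threshold
```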
In other embodiments, analyzing the shooting preview picture with the preset model to determine whether the shooting condition is currently met comprises: analyzing the target object in the shooting preview picture by image recognition to generate a corresponding target object feature vector; taking that feature vector as input to the image processing algorithm model to obtain an analysis result that includes identification information identifying whether the shooting condition is currently met; and determining from that result whether the shooting condition is met. For example, the identification information may be 1 or 0: when it is 1, the shooting condition is met; when it is 0, it is not. Accordingly, when the analysis result includes identification information identifying that the shooting condition is currently met, it is determined that the condition is met; otherwise, it is determined that the condition is not met.
The target object may include objects such as a face, a hand, a specific scene, etc., the target object feature model may include an expression feature model, a gesture feature model, a scene feature model, etc., and the analyzed target object feature vector may include an expression feature vector such as laugh, angry, yawning, etc., or a gesture feature vector such as "OK" gesture, "V-shaped", etc., or a scene feature vector such as a flower, a bird, a mountain, etc., as described above.
In some embodiments, taking the expression feature model as an example, the trained expression feature model is trained as follows: provide an initial training set containing photos of different facial expressions; perform expression analysis on the persons in those photos using face recognition to generate corresponding expression feature vectors Xi, where, for example, X1 represents how wide the eyes are open, X2 the degree to which the mouth corners are raised, and X3 how wide the mouth is open; build a training sample set from the generated expression feature vectors and the similarity labels between each photo and the reference picture; and then train on that sample set to obtain the trained expression feature model. A training sketch follows.
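A training sketch along those lines, assuming the vectors Xi have already been extracted and each photo carries a similarity label against the reference picture; the data values and the choice of a linear regressor are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training set: one row per photo, columns X1, X2, X3 as described above
X = np.array([
    [0.9, 0.8, 0.1],   # eyes wide open, mouth corners raised: close to a laugh
    [0.8, 0.1, 0.7],   # mouth wide open, corners flat: closer to a yawn
    [0.7, 0.2, 0.1],   # neutral face
])
y = np.array([0.95, 0.30, 0.20])  # similarity labels against a "laugh" reference picture

expression_model = LinearRegression().fit(X, y)
similarity = expression_model.predict([[0.85, 0.75, 0.15]])[0]  # a new preview's vector
```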
The similarity between the photographed preview picture and the reference picture can be style similarity, color similarity, element layout similarity, gray scale similarity, etc.
S45: and analyzing the shooting preview picture by adopting a preset model to determine shooting parameters.
S47: and controlling to execute the shooting operation according to the shooting parameters.
In some implementations, the photographing operation is a photographing operation, and the performing of the photographing operation is controlled according to photographing parameters, and includes: and controlling to execute the photographing operation according to the photographing parameters to obtain the picture corresponding to the current photographing preview picture.
In other implementations, the photographing operation is a continuous photographing operation, and the performing of the photographing operation is controlled according to photographing parameters, including: and controlling to execute continuous shooting operation to obtain a plurality of photos including the photo corresponding to the current shooting preview picture. Optionally, after the continuous shooting operation is performed, the method may further include the following steps: analyzing a plurality of photos obtained by continuous shooting operation to determine the best photo; and keeping the best photo, and deleting other photos obtained by the continuous shooting operation.
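In this sketch, `capture_photo` and `score` are hypothetical stand-ins for the camera call and for the best-photo analysis (for example, model satisfaction or similarity to the reference picture):

```python
def burst_and_keep_best(capture_photo, score, n_shots=8):
    """Take n_shots photos, keep the one the model scores highest, drop the rest."""
    photos = [capture_photo() for _ in range(n_shots)]
    scores = [score(photo) for photo in photos]  # e.g. satisfaction per photo
    best = photos[scores.index(max(scores))]
    # every photo except the best is discarded, freeing storage space
    return best
```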
In still other implementations, the shooting operation is a video shooting operation, and controlling its execution according to the shooting parameters comprises: controlling execution of the video shooting operation according to the shooting parameters to obtain a video file whose starting video frame is the current shooting preview picture. Optionally, after the video file is obtained, the method may further comprise: comparing the video frames in the recorded video file to determine the best frame; and capturing the best frame and storing it as a photo.
Therefore, in the application, whether the shooting condition is currently met is determined by analyzing the shooting preview picture, so whether the picture is one the user expects to shoot can be judged from its content and the current highlight moment can be captured in time; meanwhile, the shooting parameters are determined automatically from the content of the shooting preview picture, so the shooting operation is performed with parameters suited to the current picture, ensuring higher shooting quality.
Step S41 corresponds to step S11 in fig. 1; see the related description there. Step S45 corresponds to step S13 in fig. 1 and to steps S23 and S25 in fig. 2; see the related descriptions there.
In some embodiments, step S43 is performed before step S45: the shooting parameters, including shutter time or aperture size, are determined only after the shooting condition is determined to be met. Analyzing the preview picture for parameters only once the condition is met avoids computing the parameters on every frame and thus avoids wasting computing resources.
In other embodiments, steps S43 and S45 are performed simultaneously: a single analysis of the shooting preview picture with the preset model yields both whether the shooting condition is met and the shooting parameters.
In some embodiments, when the preset model is the image processing algorithm model, the target object and the target object feature model for determining whether the shooting condition is satisfied may be the same as the target object and the target object feature model for determining the shooting parameters.
For example, suppose the target object in both cases is the facial expression and the target object feature model is an expression feature model. Analyzing the shooting preview picture with the preset model may then include: analyzing the facial expression in the shooting preview picture by image recognition to generate a corresponding expression feature vector; obtaining from the expression feature model and that vector either the similarity to the reference picture or, directly, identification information identifying whether the shooting condition is met; and obtaining from the same model and vector either the expression feature or, directly, the shooting parameters.
That is, whether the shooting condition is met and the shooting parameters can both be determined from the facial expression in the shooting preview picture.
In other embodiments, when the predetermined model is the image processing algorithm model, the target object and target object feature model for determining whether the photographing condition is satisfied may be different from the target object and target object feature model for determining the photographing parameter. For example, the target object and target object feature models used for determining whether the shooting condition is satisfied are a first target object and a first target object feature model, respectively, and the target object and target object feature models used for determining the shooting parameters are a second target object and a second target object feature model, respectively.
In a more specific example, the target object and target object feature model for determining whether the shooting condition is met are a face and an expression feature model, while those for determining the shooting parameters are a gesture and a gesture feature model. Analyzing the shooting preview picture with the preset model may then include: analyzing the facial expression in the shooting preview picture by image recognition to generate a corresponding expression feature vector, and analyzing the gesture in the shooting preview picture to obtain a gesture feature vector; obtaining from the expression feature model and the expression feature vector either the similarity to the reference picture or, directly, identification information identifying whether the shooting condition is met; and obtaining from the gesture feature model and the gesture feature vector either the gesture feature or, directly, the shooting parameters.
That is, whether the shooting condition is met is determined from the facial expression in the shooting preview picture, while the shooting parameters are determined from the gesture in it.
Please refer to fig. 6, which is a flowchart illustrating a photographing control method according to a sixth embodiment of the present application. In a sixth embodiment, a photographing control method includes:
S51: acquiring a shooting preview picture.
Step S51 corresponds to step S11 in fig. 1, and the detailed description can refer to the related description of step S11 in fig. 1.
S53: and analyzing the shooting preview picture by adopting a preset model to determine whether the shooting condition is met currently and determine shooting parameters for shooting.
In some embodiments, when the trained model is a trained neural network model, step S53 includes: by taking all the pixels of the shooting preview picture as the input of the neural network model, the information including whether the shooting condition is satisfied and the shooting parameters is calculated and output by the neural network model.
In some embodiments, when the trained model is a trained image processing algorithm model, taking the example that the trained image processing algorithm model includes a trained expression feature model and a trained gesture feature model, step S53 includes: determining expression similarity according to the trained expression feature model and an expression feature vector corresponding to the shooting preview picture, and determining a gesture feature corresponding to the shooting preview picture according to the trained gesture feature model and a gesture feature vector corresponding to the shooting preview picture; determining that the shooting condition is met when the expression similarity is determined to be greater than a preset similarity threshold; and determining the obtained gesture features according to the gesture feature model and the gesture feature vector corresponding to the shooting preview picture, for example, obtaining the distance between the index finger and the thumb, and the predefined corresponding relationship between the gesture features and the shooting parameters to determine the shooting parameters corresponding to the obtained gesture features, for example, determining the shutter time according to the corresponding relationship between the gesture features, namely the distance between the index finger and the thumb, and the shutter time.
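Putting the two trained models together, a sketch of this combined decision; every helper passed in is assumed rather than taken from the patent, and `shutter_from_pinch` is the mapping sketched earlier:

```python
SIMILARITY_THRESHOLD = 0.8  # the preset similarity threshold, e.g. 80%

def decide_shot(preview, extract_vectors, expression_similarity, pinch_distance):
    """Return shooting parameters when the shooting condition is met, else None.

    extract_vectors(preview) -> (expression_vector, gesture_vector)
    expression_similarity(vec) -> similarity to the reference picture, in [0, 1]
    pinch_distance(vec) -> normalized thumb-to-index-finger distance
    """
    expr_vec, gest_vec = extract_vectors(preview)
    if expression_similarity(expr_vec) <= SIMILARITY_THRESHOLD:
        return None  # shooting condition not met: do not shoot yet
    return {"shutter_time": shutter_from_pinch(pinch_distance(gest_vec))}
```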
In other embodiments, again taking a trained image processing algorithm model that includes a trained expression feature model and a trained gesture feature model, step S53 may also comprise: taking the expression feature vector corresponding to the shooting preview picture as input to the trained expression feature model to obtain identification information identifying whether the shooting condition is met, and taking the gesture feature vector as input to the trained gesture feature model to obtain the shooting parameters; whether the shooting condition is met and the shooting parameters are then determined from that identification information and those parameters respectively.
S55: when it is determined that the shooting condition is currently met, controlling execution of the shooting operation according to the shooting parameters.
Step S55 corresponds to step S47 in fig. 5; see the related description there. Step S53 also corresponds to some extent to steps S43 and S45 in fig. 5; the method of determining whether the shooting condition is currently met and obtaining the shooting parameters can be further understood with reference to those steps.
Therefore, in the sixth embodiment, a single analysis of the shooting preview picture determines both whether the shooting condition is met and the shooting parameters, and when the condition is met, the shooting operation is performed with those parameters, so shooting can proceed quickly with well-suited parameters and highlight moments can be captured in time at higher shooting quality.
Please refer to fig. 7, which is a flowchart illustrating a photographing control method according to a seventh embodiment of the present application. In a seventh embodiment, a photographing control method includes:
S61: acquiring a shooting preview picture.
Step S61 corresponds to step S11 in fig. 1, and the detailed description can refer to the related description of step S11 in fig. 1.
S63: and analyzing the shooting preview picture by adopting a preset model to determine whether the shooting condition is met currently and determine shooting parameters for shooting.
S65: and when the current shooting condition is determined to be met, controlling to execute continuous shooting operation according to the shooting parameters to obtain a plurality of photos including the photo corresponding to the current shooting preview picture.
In some embodiments, step S65 includes: and when the similarity between the shot preview picture and the reference picture exceeds a preset similarity threshold, determining that the shooting condition is met, and controlling to execute continuous shooting operation.
In other embodiments, when the analysis result of satisfaction is obtained through the trained neural network model, step S65 may also include: and when the satisfaction exceeds a satisfaction preset threshold, determining that the shooting condition is met, and controlling to execute continuous shooting operation.
Specifically, the preset threshold of the similarity or the preset threshold of the satisfaction of the contrast in the continuous shooting operation may be slightly lower than the preset threshold of the similarity or the preset threshold of the satisfaction of the contrast in the continuous shooting operation.
For example, the preset threshold value of the degree of similarity or the preset threshold value of the degree of satisfaction of the contrast when the continuous shooting operation is performed may be 70%, which is lower than the preset threshold value of the degree of similarity or the preset threshold value of the degree of satisfaction of the contrast when the photographing operation is performed, which is 80% or higher.
Therefore, continuous shooting is carried out when the best shooting effect is achieved, and the photos with the best shooting effect can be ensured to be shot in the continuous shooting. For example, the reference picture is a picture with a laughing expression, the compared expression feature vector is an expression feature vector X2 representing the degree of the mouth angle rising, when the mouth angle rising degree is judged to reach 70% of the reference picture, the continuous shooting operation is controlled, the time for the user to continue the smile to reach the maximum rising degree is short, the expression that the user smile reaches the maximum rising degree is shot by the continuous shooting operation, and the continuous shooting operation can be ensured to shoot a picture with the best shooting effect.
Steps S61 to S65 correspond to steps S51 to S55 in fig. 6, respectively, and the following description can refer to steps S51 to S55 in fig. 6.
As shown in fig. 7, the seventh embodiment may further include the following steps:
S67: analyzing the multiple photos obtained by the continuous shooting operation to determine the best photo.
Optionally, in one implementation, step S67 may comprise: analyzing the multiple photos with a trained neural network model to obtain satisfaction degrees and determining the photo with the highest satisfaction as the best photo.
Optionally, in another implementation, step S67 may comprise: comparing the multiple photos obtained by continuous shooting with the reference picture and determining the photo with the highest similarity to the reference picture as the best photo.
Optionally, as shown in fig. 7, the shooting control method may further include:
s69: and keeping the best photo, and deleting other photos acquired by the continuous shooting operation.
In some embodiments, the electronic device includes a memory in which albums are built, and "keep best photos" is to store the best photos in a certain album, for example in a camera album. And deleting other photos to effectively avoid occupying excessive storage space.
Please refer to fig. 8, which is a flowchart illustrating a photographing control method according to an eighth embodiment of the present application. In the eighth embodiment, the photographing control method includes:
S71: acquiring a shooting preview picture.
Step S71 corresponds to step S11 in fig. 1, and the detailed description can refer to the related description of step S11 in fig. 1.
S73: and analyzing the shooting preview picture by adopting a preset model to determine whether the shooting condition is met currently and determine shooting parameters for shooting.
S75: and when the shooting condition is determined to be met currently, controlling to execute the video shooting operation according to the shooting parameters to obtain a video file taking the current shooting preview picture as a starting video picture frame.
Specifically, the preset threshold of the similarity or the preset threshold of the satisfaction of the contrast in the continuous shooting operation may be slightly lower than the preset threshold of the similarity or the preset threshold of the satisfaction of the contrast in the shooting operation.
Thus, video shooting is performed when the best shooting effect is to be achieved, and it is possible to ensure that the video picture frames including the best shooting effect are included in the video file.
Wherein, the steps S71 to S75 correspond to the steps S61 to S65 in the seventh embodiment shown in fig. 7, respectively, and the more detailed description can be found in the related description of fig. 7. For example, step S77 may include: and when the similarity between the shot preview picture and the reference picture exceeds a preset similarity threshold, determining that the shooting condition is met, and controlling to execute video shooting operation. Optionally, step S77 may also include: and when the satisfaction exceeds a satisfaction preset threshold, determining that the shooting condition is met, and controlling to execute video shooting operation.
As shown in fig. 8, the eighth embodiment may further include the following steps:
S77: comparing the video frames in the recorded video file to determine the best frame.
Optionally, in one implementation, step S77 may comprise: analyzing the video frames with a trained neural network model to obtain satisfaction degrees and determining the frame with the highest satisfaction as the best frame.
Optionally, in another implementation, step S77 may comprise: comparing the video frames in the video file with the reference picture and determining the frame with the highest similarity to the reference picture as the best frame.
Optionally, as shown in fig. 8, the shooting control method may further include:
s79: and (5) intercepting the optimal picture frame and storing the picture as a photo.
In some embodiments, the electronic device includes a memory, wherein a plurality of albums are built in the memory, and the "capturing the best frame as a photo" is stored in a photo album, for example, a camera album, in a picture/photo format.
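In this sketch, OpenCV is used only for decoding and writing; the `score` callback again stands in for either satisfaction or similarity scoring:

```python
import cv2

def save_best_frame(video_path, score, out_path="best_frame.jpg"):
    """Scan every frame of the recorded video, save the highest-scoring one as a photo."""
    cap = cv2.VideoCapture(video_path)
    best_frame, best_score = None, float("-inf")
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of the video file
        s = score(frame)
        if s > best_score:
            best_score, best_frame = s, frame.copy()
    cap.release()
    if best_frame is not None:
        cv2.imwrite(out_path, best_frame)  # stored in picture/photo format
```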
Please refer to fig. 9, which is a flowchart illustrating a photographing control method according to a ninth embodiment of the present application. As shown in fig. 9, in the ninth embodiment, the photographing control method may include the steps of:
S81: acquiring a shooting preview picture.
S83: analyzing the shooting preview picture with a preset model to determine whether the shooting condition is currently met and to determine shooting parameters for shooting.
S85: when it is determined that the shooting condition is currently met, controlling execution of the shooting operation according to the shooting parameters.
Steps S81 to S85 correspond to steps S51 to S55 in fig. 6 and to the corresponding steps of the other embodiments such as that of fig. 7; for specific implementations, refer to the descriptions of steps S51 to S55 in fig. 6 and the related steps of the other embodiments.
S87: obtaining the user's satisfaction feedback on the automatic shooting.
Optionally, in one implementation, after the automatic shooting is completed, the user may be prompted to rate it: for example, a prompt box with a "satisfied" option and a "dissatisfied" option is generated for the user to choose from, and the satisfaction feedback is obtained from the user's choice.
Optionally, in another implementation, the satisfaction feedback is obtained by detecting the user's operations on the automatically captured photo or video. For example, if the user is detected deleting the automatically captured photo or video, it is determined that the user is dissatisfied and negative feedback is recorded. Conversely, if the user is detected marking the photo or video as a favorite or sharing it, it is determined that the user is satisfied and positive feedback is recorded.
S89: outputting the user's satisfaction feedback information on the automatic shooting to the currently used model, so that the model performs optimization training with the feedback information.
In this way, by collecting the user's satisfaction feedback on automatic shooting, the training of the model can be optimized continuously, making automatic shooting more accurate in subsequent use.
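The feedback loop of steps S87 and S89 can be sketched as follows; the event names, the buffer, and the fine_tune hook are illustrative assumptions rather than an interface defined in this application.

    SATISFIED_EVENTS = {"favorite", "share"}
    UNSATISFIED_EVENTS = {"delete"}

    feedback_buffer = []  # (photo_id, label) pairs awaiting training

    def on_user_operation(photo_id, event):
        # S87: map detected user operations on an auto-captured photo or
        # video to satisfied (1) / dissatisfied (0) feedback labels.
        if event in SATISFIED_EVENTS:
            feedback_buffer.append((photo_id, 1))
        elif event in UNSATISFIED_EVENTS:
            feedback_buffer.append((photo_id, 0))

    def optimize_model(model, batch_size=32):
        # S89: once enough feedback accumulates, hand it to the currently
        # used model for optimization training; `fine_tune` is assumed.
        if len(feedback_buffer) >= batch_size:
            model.fine_tune(list(feedback_buffer))
            feedback_buffer.clear()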
The currently used model may be a model whose training has been confirmed complete, or a model that has not yet been trained; in the latter case, the feedback information can be used to carry out the training itself.
As described above, the preset model in any of the embodiments of fig. 1 to 9 may be an untrained model: after the user enables the automatic shooting function, or when the function starts automatically, automatic shooting is performed according to the model, and the current model is optimized and trained with the satisfaction feedback information fed back by the user.
In some embodiments, when the preset model is an untrained model, it automatically collects the picture taken each time the user shoots as a positive training sample, optionally together with the shooting parameters used; it may further sample, according to a preset rule, preview picture frames during which the user did not manually trigger shooting as negative samples. The preset model is optimized step by step until the number of training iterations reaches a preset count, or until the proportion of satisfied responses in subsequent user feedback exceeds a preset ratio, at which point training is determined to be complete. Because the user trains the model personally rather than adopting a model trained on others, better personalization can be achieved.
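A hedged sketch of this self-training scheme follows; the thresholds, the sample interface, and the stopping test are assumptions chosen only to make the described logic concrete.

    TARGET_ITERATIONS = 1000        # preset number of training rounds
    TARGET_SATISFIED_RATIO = 0.9    # preset ratio of satisfied feedback

    def on_manual_shot(model, frame, shooting_params):
        # Each manually triggered shot, with its shooting parameters,
        # becomes a positive training sample.
        model.add_sample(frame, shooting_params, label=1)

    def on_idle_preview(model, frame):
        # Preview frames during which the user chose not to shoot are
        # sampled, per a preset rule, as negative samples.
        model.add_sample(frame, None, label=0)

    def training_complete(iterations, feedback_labels):
        # Training is deemed complete once the iteration count or the
        # satisfied-feedback ratio reaches its preset threshold.
        if iterations >= TARGET_ITERATIONS:
            return True
        if feedback_labels:
            return sum(feedback_labels) / len(feedback_labels) >= TARGET_SATISFIED_RATIO
        return False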
Fig. 10 is a block diagram of part of the structure of an electronic device 100 according to an embodiment of the present application. As shown in fig. 10, the electronic device 100 includes a processor 10, a memory 20, and a camera 30. The camera 30 includes at least a rear camera 31 and a front camera 32. The rear camera 31 shoots images behind the electronic device 100 and can be used, for example, to photograph others; the front camera 32 shoots images in front of the electronic device 100 and can be used, for example, for self-portraits.
In some embodiments, the models in fig. 1 to 9 may be programs running in the processor 10, such as specific algorithm functions, for example neural network algorithm functions or image processing algorithm functions. In other embodiments, the electronic device 100 may further include a model processor independent of the processor 10, in which the models of fig. 1 to 9 run; the processor 10 generates corresponding instructions to trigger the model processor to run the required model as needed, and the model processor outputs the model's results to the processor 10 for use, for example to control the shooting operation.
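The division of labour between the processor 10 and an independent model processor might look like the following sketch; the ModelProcessor interface, the model identifier, and the local fallback are assumptions for illustration only.

    from typing import Optional, Protocol

    class ModelProcessor(Protocol):
        # Stand-in for a dedicated model processor (e.g. an NPU).
        def run(self, model_id: str, inputs: bytes) -> bytes: ...

    def run_model_locally(inputs: bytes) -> bytes:
        # Placeholder for the algorithm function running on processor 10.
        return b""

    def analyze_preview(inputs: bytes, npu: Optional[ModelProcessor]) -> bytes:
        if npu is not None:
            # Processor 10 issues an instruction; the model processor runs
            # the model and returns its output for processor 10 to use.
            return npu.run("shooting_params_model", inputs)
        # Otherwise the model runs in-process on processor 10 itself.
        return run_model_locally(inputs)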
The memory 20 has stored therein program instructions.
The processor 10 is used for calling the program instructions stored in the memory 20 to execute the shooting control method in any one of the embodiments shown in fig. 1 to 9.
For example, the processor 10 is configured to call program instructions stored in the memory 20 to execute the following photographing control method:
acquiring a shooting preview picture through the camera 30; and analyzing the shooting preview picture by adopting a preset model to obtain shooting parameters for shooting.
In some embodiments, the operation of acquiring the shooting preview picture is performed in response to the operation of turning on the camera; that is, once the camera is turned on, the shooting preview picture is acquired through the camera.
In some embodiments, the operation of turning on the camera is a click on the photographing application icon; that is, when the camera is turned on in response to a click on the photographing application icon, the shooting preview picture is acquired through the camera.
Alternatively, in other embodiments, the operation of turning on the camera is a specific operation on a physical key of the electronic device. For example, where the electronic device includes a volume up key and a volume down key, the operation of turning on the camera may be pressing the volume up key and the volume down key simultaneously, or, further, pressing the volume up key and the volume down key in succession within a preset time (for example, 2 seconds); see the sketch after these variants.
In other embodiments, the operation of turning on the camera may also be an operation of a preset touch gesture input in any display interface of the electronic device, for example, on a main interface of the electronic device, a user may input a touch gesture with an annular touch track to turn on the camera.
In other embodiments, the operation of turning on the camera may also be an operation of a preset touch gesture input on the touch screen when the electronic device is in a black screen state.
In some embodiments, when the electronic device is a camera, the operation of turning on the camera is an operation of pressing a shutter button/power button of the camera to trigger the camera to be in an on state.
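For the physical-key variant above, a minimal sketch of the timing check might read as follows; the key names, the timestamps (in seconds), and the open_camera callback are hypothetical.

    PRESET_WINDOW = 2.0  # seconds within which both keys must be pressed

    last_press = {}  # key name -> timestamp of most recent press

    def on_key_press(key, timestamp, open_camera):
        # Treat volume-up and volume-down pressed together, or in
        # succession within the preset window, as "turn on the camera".
        last_press[key] = timestamp
        other = "volume_down" if key == "volume_up" else "volume_up"
        if other in last_press and timestamp - last_press[other] <= PRESET_WINDOW:
            open_camera()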
Optionally, in the present application, obtaining the shooting preview picture means obtaining the shooting preview picture in real time through the camera.
In some embodiments, the predetermined model may be a trained model or an untrained model.
Optionally, in some embodiments, the preset model is a trained neural network model, and analyzing the shooting preview picture with the preset model to obtain shooting parameters for shooting includes: taking all pixels of the shooting preview picture as input, performing calculation through the neural network model, and outputting the shooting parameters for shooting.
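As an illustration only, this pixels-in/parameters-out variant can be mimicked with a toy linear model; the parameter set (exposure compensation, ISO, focus) and the random weights are assumptions standing in for a trained network.

    import numpy as np

    H, W = 224, 224
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(H * W * 3, 3)) * 1e-4  # stand-in for trained weights

    def predict_params(preview: np.ndarray) -> dict:
        # All pixels of the preview picture are flattened and fed through
        # the model; the outputs are interpreted as shooting parameters.
        x = preview.astype(np.float32).reshape(-1) / 255.0
        exposure, iso, focus = x @ weights
        return {
            "exposure_comp": float(exposure),
            "iso": float(100 + 100 * abs(iso)),
            "focus": float(focus),
        }

    demo = predict_params(rng.integers(0, 256, size=(H, W, 3)))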
Optionally, in another embodiment, the preset model is a trained image processing algorithm model including a trained target object feature model, and analyzing the shooting preview picture with the preset model to obtain shooting parameters for shooting includes: analyzing the target object in the shooting preview picture with an image recognition technique to generate a corresponding target object feature vector; obtaining an analysis result according to the target object feature model and the target object feature vector corresponding to the shooting preview picture; and determining the shooting parameters for shooting according to the analysis result.
Optionally, in an implementation manner, obtaining an analysis result according to the target object feature model and the target object feature vector corresponding to the captured preview image includes: and calculating to obtain an analysis result of the target object characteristics corresponding to the shooting preview picture through the trained target object characteristic model and the target object characteristic vector corresponding to the shooting preview picture.
In one implementation, determining a photographing parameter for photographing according to the analysis result includes: and determining the shooting parameters corresponding to the obtained target object characteristics according to the corresponding relation between the preset target object characteristics and the shooting parameters.
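The feature-model path above amounts to: extract a feature vector, match it against the trained target object feature model, then look up the preset shooting parameters for the matched feature. A sketch under illustrative assumptions (the gestures, vectors, and parameter table are invented for the example):

    import numpy as np

    GESTURE_MODEL = {  # trained target object feature model (illustrative)
        "v_sign": np.array([1.0, 0.0, 0.0]),
        "open_palm": np.array([0.0, 1.0, 0.0]),
    }

    PARAM_TABLE = {  # preset correspondence: feature -> shooting parameters
        "v_sign": {"mode": "portrait", "exposure_comp": 0.3},
        "open_palm": {"mode": "auto", "exposure_comp": 0.0},
    }

    def match_feature(feature_vec):
        # Analysis result: the model feature nearest the extracted vector.
        return min(GESTURE_MODEL,
                   key=lambda k: np.linalg.norm(GESTURE_MODEL[k] - feature_vec))

    def params_for(feature_vec):
        # Shooting parameters determined from the preset correspondence.
        return PARAM_TABLE[match_feature(feature_vec)]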
Optionally, in another implementation, the preset model is a trained image processing algorithm model including a trained target object feature model, and analyzing the shooting preview picture with the preset model to obtain shooting parameters for shooting includes: taking the target object feature vector corresponding to the shooting preview picture as the input information of the trained target object feature model, and obtaining the shooting parameters through the trained target object feature model.
In some embodiments, the analysis result is obtained by analyzing the shooting preview picture acquired in real time. In other embodiments, analyzing the shooting preview picture with the preset model to obtain shooting parameters includes: analyzing the currently acquired shooting preview picture at preset intervals (for example, every 0.2 seconds) to obtain the shooting parameters.
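The interval-based variant reduces to a simple polling loop; get_preview, analyze, and on_params are hypothetical hooks for the camera feed, the preset model, and the consumer of the parameters.

    import time

    def analysis_loop(get_preview, analyze, on_params,
                      interval=0.2, should_stop=lambda: False):
        # Re-analyze the latest preview picture every `interval` seconds
        # (0.2 s in the example above) instead of on every frame.
        while not should_stop():
            on_params(analyze(get_preview()))
            time.sleep(interval)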
In the present application, the target object may be a hand, a face, a specific scene, and the like; the target object feature model correspondingly includes a gesture feature model, an expression feature model, a scene feature model, and the like; and the analyzed target object feature vector may include a gesture feature vector, an expression feature vector, a scene feature vector, and the like.
Therefore, in the application, the current shooting parameters are determined by analyzing the shooting preview picture, and the shooting parameters can be determined in time according to the shooting preview picture, so that the current wonderful moment can be captured in time with higher shooting quality according to the shooting parameters when shooting is needed.
The processor 10 may be a microcontroller, a microprocessor, a single chip, a digital signal processor, or the like.
The memory 20 may be any memory device capable of storing information, such as a memory card, a solid-state memory, a micro-hard disk, an optical disk, etc.
As shown in fig. 10, the electronic device 100 further includes an input unit 40 and an output unit 50. The input unit 40 may include a touch panel, a mouse, a microphone, physical keys including a power key, a volume key, and the like. The output unit 50 may include a display screen, a speaker, and the like. In some embodiments, the touch panel of the input unit 40 and the display screen of the output unit 50 are integrated to form a touch screen, while providing both touch input and display output functions.
The electronic device 100 may be a portable electronic device having a camera 30, such as a mobile phone, a tablet computer, and a notebook computer, or may be a camera device, such as a camera and a video camera.
In some embodiments, the present application further provides a computer-readable storage medium in which program instructions are stored; after being called and executed by the processor 10, the program instructions perform all or part of the steps of any of the shooting control methods shown in fig. 1 to 9. In some embodiments, the computer storage medium is the memory 20, and may be any storage device capable of storing information, such as a memory card, a solid-state memory, a micro hard disk, an optical disk, and the like.
The shooting control method and the electronic device 100 can automatically judge whether the shooting condition is met according to the shooting preview picture, shoot when the shooting condition is met, and can capture the wonderful moment including the content corresponding to the current shooting preview picture in time.
While the invention has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus (device), or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. A computer program stored/distributed on a suitable medium supplied together with or as part of other hardware, may also take other distributed forms, such as via the Internet or other wired or wireless telecommunication systems.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the present invention has been described with reference to particular embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (13)

  1. A shooting control method, characterized by comprising:
    acquiring a shooting preview picture;
    and analyzing the shooting preview picture by adopting a preset model to obtain shooting parameters for shooting.
  2. The photographing control method of claim 1, wherein the preset model is a trained neural network model, and the analyzing the photographing preview picture using the preset model to obtain photographing parameters for photographing comprises:
    and taking all pixels of the shooting preview picture as input, calculating through the neural network model, and outputting the shooting parameters for shooting.
  3. The photographing control method of claim 1, wherein the preset model is a trained image processing algorithm model, the trained image processing algorithm model includes a trained target object feature model, and the photographing preview picture is analyzed using the preset model to obtain photographing parameters for photographing, including:
    analyzing a target object in a shooting preview picture by utilizing an image recognition technology to generate a corresponding target object characteristic vector;
    obtaining an analysis result according to the trained target object feature model and the target object feature vector corresponding to the shooting preview picture;
    and determining shooting parameters for shooting according to the analysis result.
  4. The shooting control method according to claim 3, wherein the obtaining an analysis result according to the target object feature model and the target object feature vector corresponding to the shooting preview picture comprises: calculating through the trained target object feature model and the target object feature vector corresponding to the shooting preview picture to obtain an analysis result of the target object feature corresponding to the shooting preview picture;
    the determining of the photographing parameters for photographing according to the analysis result includes:
    and determining the shooting parameters corresponding to the obtained target object characteristics according to the corresponding relation between the preset target object characteristics and the shooting parameters.
  5. [Amended 17.07.2018 under Rule 26] The photographing control method according to claim 1, wherein the preset model is a trained image processing algorithm model including a trained target object feature model, and the analyzing the photographing preview picture using the preset model to obtain photographing parameters for photographing comprises: taking the target object feature vector corresponding to the shooting preview picture as input information of the trained target object feature model, and obtaining shooting parameters for shooting through the trained target object feature model.
  6. The shooting control method according to any one of claims 3 to 5, wherein the target object is a hand, the target object feature model is a gesture feature model, and the target object feature vector is a gesture feature vector.
  7. The shooting control method according to claim 1, wherein after the shooting preview picture is acquired, the shooting control method further comprises:
    analyzing the shooting preview picture by adopting a preset model to determine whether the shooting condition is met currently;
    after analyzing the shooting preview picture by using a preset model to obtain shooting parameters, the shooting control method further includes:
    and when the shooting condition is determined to be met, shooting is carried out according to the shooting parameters.
  8. The photographing control method of claim 7, wherein the analyzing the photographing preview screen using a preset model to determine whether a photographing condition is currently satisfied further comprises:
    analyzing the shot preview picture through the trained neural network model to obtain an analysis result of satisfaction;
    and when the satisfaction degree is determined to exceed the satisfaction degree preset threshold, determining that the shooting condition is met currently.
  9. The photographing control method of claim 7, wherein the photographing preview screen is analyzed using a preset model to determine whether a photographing condition is currently satisfied, further comprising:
    comparing the shooting preview picture with a reference picture using a trained image processing algorithm model to obtain an analysis result of the similarity between the shooting preview picture and the reference picture;
    and when the similarity is determined to exceed the preset threshold of the similarity, determining that the shooting condition is currently met.
  10. The photographing control method of claim 7, wherein the analyzing the photographing preview screen using a preset model to determine whether a photographing condition is currently satisfied comprises:
    analyzing a target object in a shooting preview picture by utilizing an image recognition technology to generate a corresponding target object characteristic vector;
    the target object feature vector is used as input information of a trained image processing algorithm model, and an analysis result comprising identification information for identifying whether the current shooting condition is met is obtained;
    and when the identification information identifies that the shooting condition is currently met, determining that the shooting condition is currently met.
  11. The shooting control method according to any one of claims 7 to 10, characterized by further comprising:
    obtaining satisfaction feedback information of a user for feeding back a picture or a video obtained by executing shooting operation;
    and outputting the satisfaction feedback information to the preset model, so that the preset model performs optimization training by using the satisfaction feedback information.
  12. An electronic device, comprising:
    a camera;
    a memory for storing program instructions; and
    a processor for calling the program instructions to execute the photographing control method according to any one of claims 1 to 11.
  13. A computer-readable storage medium, characterized in that a program instruction for causing a computer to execute the photographing control method according to any one of claims 1 to 11 after being called is stored in the computer-readable storage medium.
CN201880070282.9A 2018-05-07 2018-05-07 Shooting control method and electronic device Pending CN111279684A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/085898 WO2019213818A1 (en) 2018-05-07 2018-05-07 Photographing control method, and electronic device

Publications (1)

Publication Number Publication Date
CN111279684A true CN111279684A (en) 2020-06-12

Family

ID=68467664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880070282.9A Pending CN111279684A (en) 2018-05-07 2018-05-07 Shooting control method and electronic device

Country Status (2)

Country Link
CN (1) CN111279684A (en)
WO (1) WO2019213818A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565599A (en) * 2020-11-27 2021-03-26 Oppo广东移动通信有限公司 Image shooting method and device, electronic equipment, server and storage medium
CN112911139A (en) * 2021-01-15 2021-06-04 广州富港生活智能科技有限公司 Article shooting method and device, electronic equipment and storage medium
CN114051095A (en) * 2021-11-12 2022-02-15 苏州臻迪智能科技有限公司 Remote processing method of video stream data and shooting system

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111383224B (en) * 2020-03-19 2024-04-16 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN113489909B (en) * 2021-07-30 2024-01-19 维沃移动通信有限公司 Shooting parameter determining method and device and electronic equipment
CN116320716B (en) * 2023-05-25 2023-10-20 荣耀终端有限公司 Picture acquisition method, model training method and related devices

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120262593A1 (en) * 2011-04-18 2012-10-18 Samsung Electronics Co., Ltd. Apparatus and method for photographing subject in photographing device
CN103716547A (en) * 2014-01-15 2014-04-09 厦门美图之家科技有限公司 Smart mode photographing method
CN106372627A (en) * 2016-11-07 2017-02-01 捷开通讯(深圳)有限公司 Automatic photographing method and device based on face image recognition and electronic device
CN106454071A (en) * 2016-09-09 2017-02-22 捷开通讯(深圳)有限公司 Terminal and automatic shooting method based on gestures
CN107820020A (en) * 2017-12-06 2018-03-20 广东欧珀移动通信有限公司 Method of adjustment, device, storage medium and the mobile terminal of acquisition parameters

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI475882B (en) * 2009-12-30 2015-03-01 Altek Corp Motion detection method using the adjusted digital camera of the shooting conditions
CN104125396B (en) * 2014-06-24 2018-06-08 小米科技有限责任公司 Image capturing method and device
CN104469131A (en) * 2014-09-05 2015-03-25 宇龙计算机通信科技(深圳)有限公司 Method, device and terminal for displaying shooting control
CN106101541A (en) * 2016-06-29 2016-11-09 捷开通讯(深圳)有限公司 A kind of terminal, photographing device and image pickup method based on personage's emotion thereof
CN107566529B (en) * 2017-10-18 2020-08-14 维沃移动通信有限公司 Photographing method, mobile terminal and cloud server
CN107995422B (en) * 2017-11-30 2020-01-10 Oppo广东移动通信有限公司 Image shooting method and device, computer equipment and computer readable storage medium

Also Published As

Publication number Publication date
WO2019213818A1 (en) 2019-11-14

Similar Documents

Publication Publication Date Title
AU2017261537B2 (en) Automated selection of keeper images from a burst photo captured set
CN111279684A (en) Shooting control method and electronic device
CN108229369B (en) Image shooting method and device, storage medium and electronic equipment
CN109257645B (en) Video cover generation method and device
WO2022028184A1 (en) Photography control method and apparatus, electronic device, and storage medium
US10170157B2 (en) Method and apparatus for finding and using video portions that are relevant to adjacent still images
WO2019213819A1 (en) Photographing control method and electronic device
WO2017031901A1 (en) Human-face recognition method and apparatus, and terminal
CN105809174B (en) Identify the method and device of image
EP3975046B1 (en) Method and apparatus for detecting occluded image and medium
CN106815803B (en) Picture processing method and device
CN110399934A (en) A kind of video classification methods, device and electronic equipment
WO2019213820A1 (en) Photographing control method and electronic device
CN110110742B (en) Multi-feature fusion method and device, electronic equipment and storage medium
KR100705177B1 (en) Mobile communication terminal and method for classifying photograph using the same
WO2019205566A1 (en) Method and device for displaying image
KR101431651B1 (en) Apparatus and method for mobile photo shooting for a blind person
WO2018232669A1 (en) Method for controlling camera to photograph and mobile terminal
JP7175061B1 (en) Program, information processing device, and method
CN113079311B (en) Image acquisition method and device, electronic equipment and storage medium
US20220277547A1 (en) Method and electronic device for detecting candid moment in image frame
CN111107259B (en) Image acquisition method and device and electronic equipment
CN111279682A (en) Electronic device and shooting control method
CN116709013A (en) Terminal equipment control method, terminal equipment control device and storage medium
CN117201837A (en) Video generation method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200612)