CN111279683A - Shooting control method and electronic device - Google Patents


Info

Publication number: CN111279683A
Authority: CN (China)
Prior art keywords: shooting, model, training, picture, control method
Legal status: Pending (an assumption, not a legal conclusion)
Application number: CN201880070205.3A
Other languages: Chinese (zh)
Inventor: 王星泽
Current assignee: Heren Technology Wuhan Co ltd
Original assignee: Heren Technology Wuhan Co ltd
Application filed by Heren Technology Wuhan Co ltd


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 — Control of cameras or camera modules

Abstract

The application provides a shooting control method, comprising the following steps: when a user manually controls shooting, the currently shot picture is taken, through the model, as a positive sample that meets the shooting condition, and the parameters of the model are adjusted according to that positive sample; picture frames that were not manually shot are sampled by the model according to a preset rule to serve as negative samples, and the parameters of the model are adjusted according to the negative samples; and when a training completion condition is determined to be reached, training is finished to obtain a trained model for subsequent automatic shooting control. The application also provides an electronic device applying the shooting control method. With the shooting control method and the electronic device, shooting can subsequently be controlled automatically by training the model and then using the trained model, so that a wonderful moment contained in the content of the current shooting preview picture can be captured in time.

Description

Shooting control method and electronic device
Technical Field
The present disclosure relates to the field of electronic devices, and in particular to a shooting control method for an electronic device, and to the electronic device itself.
Background
Nowadays, with improving living standards, photographing has become an indispensable part of daily life. Whether on a dedicated camera or on an electronic device with a camera function such as a mobile phone or tablet computer, pixel counts keep rising and photo quality keeps improving. However, current cameras, mobile phones, and similar electronic devices usually require the user to trigger a shot by pressing a shutter key or tapping a shooting icon. Because this operation has a certain lag, users often cannot capture a wonderful moment in time, so satisfactory photos are frequently missed, or the lag causes unsatisfactory photos to be taken instead. For example, a subject may look good while the shot is being framed, yet at the instant of capture the eyes may be closed or the smile stiff, and the final picture is often unsatisfactory. Likewise, when photographing a baby, a lovely expression often vanishes in an instant, and it is difficult to press the shutter key or shooting icon quickly enough to capture a satisfactory picture in time.
Disclosure of Invention
The application provides a shooting control method and an electronic device that control shooting automatically through a trained model and can capture a wonderful moment in time.
In one aspect, a shooting control method is provided, including: when a user manually controls shooting, taking the currently shot picture, through the model, as a positive sample that meets the shooting condition, and adjusting parameters of the model according to that positive sample; sampling, by the model according to a preset rule, picture frames that were not manually shot to serve as negative samples, and adjusting parameters of the model according to the negative samples; and when a training completion condition is determined to be reached, finishing training to obtain a trained model for subsequent automatic shooting control.
In another aspect, an electronic device is provided that includes a memory and a processor. The memory is configured to store program instructions. The processor is configured to call the program instructions to execute a shooting control method including: when a user manually controls shooting, taking the currently shot picture, through the model, as a positive sample that meets the shooting condition, and adjusting parameters of the model according to that positive sample; sampling, by the model according to a preset rule, picture frames that were not manually shot to serve as negative samples, and adjusting parameters of the model according to the negative samples; and when a training completion condition is determined to be reached, finishing training to obtain a trained model for subsequent automatic shooting control.
In still another aspect, a computer-readable storage medium is provided that stores program instructions which, when called by a computer, execute a shooting control method including: when a user manually controls shooting, taking the currently shot picture, through the model, as a positive sample that meets the shooting condition, and adjusting parameters of the model according to that positive sample; sampling, by the model according to a preset rule, picture frames that were not manually shot to serve as negative samples, and adjusting parameters of the model according to the negative samples; and when a training completion condition is determined to be reached, finishing training to obtain a trained model for subsequent automatic shooting control.
According to the shooting control method and the electronic device, shooting can subsequently be controlled automatically by training the model and then using the trained model, so that a wonderful moment contained in the content of the current shooting preview picture can be captured in time.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a model training process in a shooting control method according to a first embodiment of the present application.
Fig. 2 is a flowchart of a model training process in a shooting control method according to a second embodiment of the present application.
Fig. 3 is a flowchart of a model training process in a shooting control method according to a third embodiment of the present application.
Fig. 4 is a flowchart of a model training process in a shooting control method according to a fourth embodiment of the present application.
Fig. 5 is a flowchart of a model training process in a shooting control method according to a fifth embodiment of the present application.
Fig. 6 is a flowchart of a model training process in a shooting control method according to a sixth embodiment of the present application.
Fig. 7 is a flowchart of a model training process in a shooting control method according to a seventh embodiment of the present application.
Fig. 8 is a flowchart of a model training process in a shooting control method according to an eighth embodiment of the present application.
Fig. 9 is a flowchart of a model training process in a shooting control method in a ninth embodiment of the present application.
Fig. 10 is a flowchart of a model training process in a shooting control method in the tenth embodiment of the present application.
Fig. 11 is a flowchart of a model training process in a shooting control method in the eleventh embodiment of the present application.
Fig. 12 is a flowchart of a model training process in a shooting control method according to a twelfth embodiment of the present application.
Fig. 13 is a flowchart of automatic shooting control using a model in a shooting control method in the thirteenth embodiment of the present application.
Fig. 14 is a block diagram illustrating a partial structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The shooting control method of the application can be applied to an electronic device that includes at least a camera. Through the camera, the electronic device can acquire and display a shooting preview picture, and can perform operations such as taking photos, continuous shooting, and video recording. The camera includes a front camera and a rear camera; photographing, continuous shooting, and video shooting can be performed through the rear camera, or through the front camera for self-portraits.
Fig. 1 is a flowchart of a model training process in a shooting control method according to a first embodiment of the present application. As shown in fig. 1, in the first embodiment, the photographing control method may include the steps of:
S11: When the user manually controls shooting, the currently shot picture is taken, through the model, as a positive sample that meets the shooting condition, and the parameters of the model are adjusted according to that positive sample.
In some embodiments, the model saves the positive sample and establishes or updates the correspondence between the positive sample and the shooting condition it meets, so as to adjust the parameters of the model itself. Meeting the shooting condition can be recorded as the label of the positive sample.
Optionally, in one implementation, the user manually controls the shooting to be completed by pressing a shutter key or a shooting icon.
Alternatively, in another implementation, the user manually controls the photographing to be performed by performing a specific operation on a physical key of the electronic device. For example, the electronic device includes a power key, and manually controls shooting by double-clicking the power key.
S13: and sampling a picture frame which is not manually controlled and shot by the model according to a preset rule to be used as a back sample, and adjusting the parameters of the model according to the back sample.
In some embodiments, the model may store the reverse side sample obtained by sampling, and may also establish a corresponding relationship between the reverse side sample and the image not meeting the shooting condition to adjust the parameters of the model itself. Wherein the reverse side sample is a picture which does not meet the shooting condition; the failure to meet the shooting condition may be marked as a label for the reverse side sample.
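As a concrete illustration of the bookkeeping in steps S11 and S13, the sketch below stores labelled samples. `TrainingBuffer`, the 1/0 label encoding, and the string stand-ins for picture frames are all illustrative assumptions, not part of the patent:

```python
# Hypothetical sketch of sample collection in S11/S13.
class TrainingBuffer:
    """Collects labelled frames for the shooting-control model."""

    def __init__(self):
        self.samples = []  # list of (frame, label) pairs

    def add_positive(self, frame):
        # Label 1 marks "meets the shooting condition" (manual shot).
        self.samples.append((frame, 1))

    def add_negative(self, frame):
        # Label 0 marks "does not meet the shooting condition" (unshot frame).
        self.samples.append((frame, 0))

buf = TrainingBuffer()
buf.add_positive("frame_at_shutter_press")
buf.add_negative("frame_without_shutter")
```

In a real implementation the stored entries would be image data (and possibly shooting parameters), not strings.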
S15: and when the training completion condition is determined to be reached, finishing the training to obtain a trained model for subsequent automatic shooting control.
Optionally, in one implementation, before step S11 the method further includes: entering a model training mode in response to a user operation for entering model training. In that case, determining that the training completion condition is reached includes: determining that the training completion condition is reached in response to a user operation for exiting the model training mode.
Optionally, the operation for entering model training includes a selection of a menu option, a specific operation on a physical key, or a specific touch gesture input on the touch screen of the electronic device; entering the model training mode is controlled in response to any of these user inputs.
Optionally, in another implementation manner, the determining that the training completion condition is met includes: when it is determined that the number of times the user manually controls photographing reaches the preset number of times N1, it is determined that the training completion condition is reached. The preset number of times N1 may be the number of times that the training of the default model of the system needs to be performed, or may be a value defined by the user.
Optionally, in another implementation manner, the determining that the training completion condition is met includes: when a user carries out manual control shooting, inputting parameters of a picture obtained by the manual control shooting into the model to obtain a predicted value; determining whether the predicted value meets the shooting condition; and if the predicted value meets the shooting condition, determining that the training completion condition is reached.
The model can be a neural network model, an image processing algorithm model, or the like.
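Since the patent leaves the model's form open (a neural network, an image-processing algorithm, etc.), a deliberately minimal stand-in can illustrate how positive and negative samples adjust model parameters. Everything here — the single scalar feature, the logistic scorer, and the learning rate — is an illustrative assumption:

```python
import math

class TinyModel:
    """A one-feature logistic scorer; a toy stand-in for the open-ended model."""

    def __init__(self):
        self.w = 0.0  # weight
        self.b = 0.0  # bias

    def predict(self, x):
        # Output in (0, 1); values near 1 mean "the frame meets the shooting condition".
        return 1.0 / (1.0 + math.exp(-(self.w * x + self.b)))

    def update(self, x, label, lr=0.5):
        # One gradient step on the logistic log-loss for a single labelled sample.
        err = self.predict(x) - label
        self.w -= lr * err * x
        self.b -= lr * err

model = TinyModel()
for _ in range(200):
    model.update(1.0, 1)   # positive sample (manual shot), feature value assumed 1.0
    model.update(-1.0, 0)  # negative sample (unshot frame), feature value assumed -1.0
```

After training, the scorer separates the two cases, which is the behaviour steps S11/S13 rely on.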
By pre-training the model in this way, a trained model is obtained, and when the user later opens the camera to shoot, shooting can be controlled automatically according to the trained model so that the satisfying picture the user wants is captured in time.
Fig. 2 is a flowchart of a model training process of a shooting control method according to a second embodiment of the present application. As shown in fig. 2, in the second embodiment, the photographing control method may include the steps of:
S21: Enter the model training mode in response to a user operation for entering model training.
Optionally, the operation for entering model training includes a selection of a menu option, a click on a specific icon, a specific operation on a physical key, or a specific touch gesture input on the touch screen of the electronic device. For example, the operation may be: the user clicking a model-training-mode icon displayed on the electronic device, clicking a volume key, or inputting an upward-sliding touch gesture on the touch screen of the electronic device.
Optionally, entering the model training mode is controlled in response to the user's selection of the menu option, the specific operation on the physical key, or the specific touch gesture input on the touch screen of the electronic device.
S23: when a user carries out manual control shooting, a current shot picture is taken as a front sample meeting shooting conditions through the model, and parameters of the model are adjusted according to the front sample.
S25: and sampling a picture frame which is not manually controlled and shot by the model according to a preset rule to be used as a back sample, and further adjusting the parameters of the model according to the back sample.
S27: and responding to the operation of exiting the model training mode input by the user, determining that a training completion condition is reached, and finishing training to obtain a trained model for subsequent automatic shooting control.
The operation for exiting model training likewise includes a selection of a menu option, a click on a specific icon, a specific operation on a physical key, or a specific touch gesture input on the touch screen of the electronic device. For example, it may include: the user deselecting the function option for entering model training, clicking an exit-model-training icon displayed on the electronic device, long-pressing a volume key, or inputting a downward-sliding touch gesture on the touch screen of the electronic device.
Wherein, steps S23 and S25 in fig. 2 correspond to steps S11 and S13 in fig. 1, and the related descriptions can be referred to each other.
Fig. 3 is a flowchart of a model training process of a shooting control method according to a third embodiment of the present application. As shown in fig. 3, in the third embodiment, the photographing control method may include the steps of:
S31: When the user manually controls shooting, the currently shot picture is taken, through the model, as a positive sample that meets the shooting condition, and the parameters of the model are adjusted according to that positive sample.
S33: Judge whether the number of times the user has manually controlled shooting reaches a preset number N. If so, perform step S37; if not, perform step S35.
S35: Picture frames that were not manually shot are sampled by the model according to a preset rule to serve as negative samples, and the parameters of the model are further adjusted according to the negative samples.
After the step S35 is completed, the process returns to the step S31.
S37: and when the training completion condition is determined to be reached, finishing the training to obtain a trained model for subsequent automatic shooting control.
Steps S31 and S35 correspond to steps S11 and S13 in the first embodiment shown in fig. 1, respectively, and a more detailed description can be found in relation to steps S11 and S13 in fig. 1.
Fig. 4 is a flowchart of a model training process of a shooting control method according to a fourth embodiment of the present application. As shown in fig. 4, in the fourth embodiment, the photographing control method may include the steps of:
S41: When the user manually controls shooting, input the parameters of the picture obtained by that manual shot into the model to obtain a predicted value.
S43: Determine whether the predicted value satisfies the shooting condition. If not, perform step S45; if so, perform step S47.
S45: If the shooting condition is not satisfied, take the picture obtained by the manual shot as a positive sample and adjust the parameters of the model accordingly. After step S45 is completed, return to step S41.
S47: If the shooting condition is satisfied, then when it is determined that the training completion condition is reached, finish training to obtain a trained model for subsequent automatic shooting control.
In some embodiments, step S47 further includes: if the predicted value satisfies the shooting condition, incrementing the count of times the predicted value has satisfied the shooting condition to obtain an updated count; judging whether the current count (i.e., the updated count) exceeds a preset number of times; and if it does, determining that the training completion condition is reached and finishing training to obtain a trained model for subsequent automatic shooting control. If the current count does not exceed the preset number of times, it is determined that the training completion condition has not been reached, and the process may return to step S41.
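The counting logic of step S47 might be sketched as follows; the class name, the `preset` default, and the no-reset behaviour (the text only says the count is incremented, never reset) are assumptions:

```python
# Hypothetical sketch of the S47 completion counter.
class CompletionCounter:
    """Counts how often the model's prediction satisfied the shooting condition."""

    def __init__(self, preset=3):
        self.preset = preset  # preset number of times to exceed
        self.count = 0

    def record(self, prediction_ok):
        # Increment only when the predicted value satisfied the condition.
        if prediction_ok:
            self.count += 1
        # Training is complete once the count exceeds the preset number.
        return self.count > self.preset
```

With `preset=2`, the third satisfying prediction is the first call that reports completion.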
In this way, the training completion condition is only considered reached after the predicted value has satisfied the shooting condition more than the preset number of times, which helps guarantee the accuracy of determining that training is complete.
Fig. 5 is a flowchart of a model training process of a shooting control method according to a fifth embodiment of the present application. As shown in fig. 5, in a fifth embodiment, the model training process may include the following steps:
S51: After the automatic shooting function is turned on, acquire a shooting preview picture by framing in response to an operation that starts the camera.
Alternatively, in some embodiments, the turning on of the auto-shoot function may be done in response to a user operating a setting in a menu option of the camera.
Alternatively, in other embodiments, the starting of the auto-shooting function may be performed in response to a specific touch gesture of the user on the touch screen of the electronic device, for example, in response to a double-click operation performed through a finger joint on the touch screen of the electronic device.
In some embodiments, the operation of turning on the camera may be a click operation on a photographing application icon, a specific operation on a physical key of the electronic device, an operation of a preset touch gesture input in any display interface of the electronic device, or the like.
S53: and when the shooting condition is determined to be met according to the current model, controlling to execute the shooting operation.
In some embodiments, the step S53 specifically includes: and determining whether the current shooting view finding picture meets the shooting condition or not according to the training result in the current model, and controlling to execute the shooting operation when the shooting condition is determined to be met.
The shooting operation comprises a shooting operation, a continuous shooting operation, a video shooting operation and the like.
More specific description about step S53 may refer to the embodiments shown in the following fig. 7 and so on.
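A hedged sketch of how step S53 could look in code: each preview frame is scored by the trained model, and a shooting operation is triggered when the score satisfies the shooting condition. `score_frame`, the 0.5 threshold, and the list used as a stand-in for "control to execute the shooting operation" are all assumptions:

```python
# Hypothetical sketch of S53's automatic shooting loop.
def auto_shoot(frames, score_frame, threshold=0.5):
    """Trigger a (stand-in) shot for every preview frame that meets the condition."""
    shots = []
    for frame in frames:
        if score_frame(frame) >= threshold:
            # Stands in for "control to execute the shooting operation".
            shots.append(frame)
    return shots

# With a stand-in scorer that only rates "f2" highly, only "f2" is captured.
captured = auto_shoot(["f1", "f2", "f3"], lambda f: 0.9 if f == "f2" else 0.1)
```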
S55: Obtain the user's satisfaction feedback on the automatic shooting.
Optionally, in an implementation manner, after the automatic photographing is completed, the user may be prompted to perform satisfaction evaluation on the automatic photographing by generating a prompt message, for example, a prompt box including a "satisfaction" option and a "dissatisfaction" option is generated for the user to select, and the satisfaction feedback information of the automatic photographing is obtained according to the selection of the user.
Optionally, in another implementation, the user's satisfaction feedback on the automatic shooting is obtained by detecting the user's operations on the automatically shot picture or video. For example, if the user is detected deleting the automatically shot photo or video, it is determined that the user is not satisfied with the automatic shooting, and "unsatisfied" feedback is obtained. Conversely, if the user is detected marking the automatically shot photo or video as a favorite or similar, or sharing it, it is determined that the user is satisfied, and "satisfied" feedback is obtained.
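The action-to-feedback mapping this implementation describes could be sketched as below; the action names and return values are illustrative assumptions:

```python
# Hypothetical sketch of S55: inferring satisfaction from detected user actions.
def feedback_from_action(action):
    """Map a detected operation on an auto-shot photo/video to satisfaction feedback."""
    if action == "delete":
        return "unsatisfied"       # deleting implies dissatisfaction
    if action in ("favorite", "share"):
        return "satisfied"         # keeping/sharing implies satisfaction
    return None                    # other actions carry no feedback signal
```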
S57: and outputting the satisfaction feedback information of the user on the automatic shooting to the currently used model so that the currently used model performs optimization training by using the satisfaction feedback information.
In this way, the model's training can be optimized by collecting the user's satisfaction feedback on automatic shooting, and the model is continuously refined so that automatic shooting becomes more accurate in subsequent use.
The currently used model may be a model whose training has already been confirmed complete, for example through the method steps shown in fig. 1 to 4, or a model that has not yet finished training. When the model has not yet finished training, these steps can also serve to train it.
Accordingly, steps S51-S57 in fig. 5 may be performed after step S15 in fig. 1, before step S15 in fig. 1, and even before step S11 in fig. 1. When executed before step S11, the currently used model may be an initial model that is not trained.
In some embodiments, when the preset model is an untrained model, the untrained model automatically collects the user's picture each time the user shoots and trains on it as a positive sample, or further collects the shooting parameters used for that shot and trains on them together with the picture, gradually optimizing the preset model. Training is determined to be complete when the number of training iterations reaches a preset number, or when the proportion of "satisfied" feedback from subsequent users exceeds a preset proportion. Because the user trains the model personally rather than adopting a model trained elsewhere, better personalization can be achieved.
Fig. 6 is a flowchart of a model training process in a shooting control method according to a sixth embodiment of the present application. As shown in fig. 6, in a sixth embodiment, the model training process may include the following steps:
S61: When the user manually controls shooting, the currently shot picture is taken, through the model, as a positive sample corresponding to both the shooting condition and the shooting parameters, and the parameters of the model are adjusted according to that positive sample.
In some embodiments, the model saves the positive sample and establishes or updates the correspondence between the positive sample, the shooting condition it meets, and the shooting parameters used, so as to adjust the parameters of the model itself. Meeting the shooting condition and the shooting parameters can both be recorded as labels of the positive sample.
Optionally, in one implementation, the user manually controls the shooting to be completed by pressing a shutter key or a shooting icon.
Alternatively, in another implementation, the user manually controls the photographing to be performed by performing a specific operation on a physical key of the electronic device. For example, the electronic device includes a power key, and manually controls shooting by double-clicking the power key.
The shooting parameters may include aperture size, shutter time, and the like.
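A minimal sketch of how a positive sample might be stored together with its shooting parameters, as S61 describes; the field names and example values are assumptions:

```python
# Hypothetical sketch of S61: a positive sample bundled with shooting parameters.
def make_positive_sample(frame, aperture, shutter_s):
    """Bundle a frame with label 1 and the parameters in effect at shutter time."""
    return {
        "frame": frame,
        "label": 1,  # meets the shooting condition
        "params": {"aperture": aperture, "shutter_s": shutter_s},
    }

sample = make_positive_sample("frame_001", aperture=2.8, shutter_s=1 / 250)
```

Training on such records lets the model later suggest parameters as well as decide when to shoot.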
S63: and sampling a picture frame which is not manually controlled and shot by the model according to a preset rule to be used as a back sample, and adjusting the parameters of the model according to the back sample.
S65: and when the training completion condition is determined to be reached, finishing the training to obtain the trained model.
The steps S63 and S65 correspond to steps S13 and S15 in fig. 1, and the detailed description can refer to the descriptions of steps S13 and S15 in fig. 1.
Thus, in this further embodiment, training the model establishes not only the correspondence between positive-sample pictures and the shooting condition but also the correspondence between positive-sample pictures and the shooting parameters, so that when automatic shooting is later enabled, the device can both determine automatically whether the shooting condition is met and set the shooting parameters automatically.
Fig. 7 is a flowchart of a model training process in a shooting control method according to a seventh embodiment of the present application. As shown in fig. 7, in the seventh embodiment, the photographing control method may include the steps of:
S71: When the user manually controls shooting, the currently shot picture is taken, through the model, as a positive sample that meets the shooting condition, and the parameters of the model are adjusted according to that positive sample.
S73: Framed picture frames within a period of time after the positive sample are sampled as negative samples, and the parameters of the model are adjusted according to the negative samples.
The period of time after the positive sample may be, for example, 2 seconds or 3 seconds after the positive sample obtained by the user's manual shot.
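The after-shot sampling window of S73 can be sketched with timestamped frames; the 2-second default and the half-open `(shot_time, shot_time + window]` boundary choice are assumptions:

```python
# Hypothetical sketch of S73: frames shortly after the manual shot become negatives.
def negatives_after_shot(frames, shot_time, window_s=2.0):
    """frames: list of (timestamp_s, frame) pairs from the viewfinder stream."""
    return [f for t, f in frames if shot_time < t <= shot_time + window_s]

stream = [(10.0, "shot"), (10.5, "a"), (11.9, "b"), (13.0, "c")]
negs = negatives_after_shot(stream, shot_time=10.0)
```

Here the manual shot at t=10.0 s is excluded, frames at 10.5 s and 11.9 s fall inside the 2-second window, and the frame at 13.0 s falls outside it.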
S75: and when the training completion condition is determined to be reached, finishing the training to obtain a trained model for subsequent automatic shooting control.
Steps S71 and S75 are the same as steps S11 and S15 in fig. 1, and specific descriptions can be found in reference to the related description in fig. 1. Step S73 is a more detailed way of step S13 in fig. 1, and the relevant points can be referred to each other.
Fig. 8 is a flowchart of a model training process in a shooting control method according to an eighth embodiment of the present application. As shown in fig. 8, in the eighth embodiment, the photographing control method may include the steps of:
S81: When the user manually controls shooting, the currently shot picture is taken, through the model, as a positive sample that meets the shooting condition, and the parameters of the model are adjusted according to that positive sample.
S83: Framed picture frames within a period of time before the positive sample are sampled as negative samples, and the parameters of the model are adjusted according to the negative samples.
The period of time before the positive sample may be, for example, 2 seconds or 3 seconds before the positive sample obtained by the user's manual shot.
When sampling picture frames framed within a period of time before the positive sample, the framing picture/shooting preview picture is automatically captured in advance and a certain number of pending samples are stored; after shooting is manually controlled, the pending samples that were not manually shot are determined to be negative samples.
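The pre-capture behaviour described above resembles a small ring buffer of pending samples; the buffer size and the class shape below are assumptions:

```python
from collections import deque

# Hypothetical sketch of S83: a ring buffer of pre-capture "pending samples".
class PreCaptureBuffer:
    """Keeps the most recent preview frames; oldest frames are discarded."""

    def __init__(self, size=3):
        self.frames = deque(maxlen=size)

    def push(self, frame):
        self.frames.append(frame)

    def commit_negatives(self):
        # Called after a manual shot: the buffered, unshot frames become negatives.
        negs = list(self.frames)
        self.frames.clear()
        return negs

buf = PreCaptureBuffer(size=3)
for f in ["f1", "f2", "f3", "f4"]:
    buf.push(f)          # "f1" is evicted once the buffer holds 3 frames
negs = buf.commit_negatives()
```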
S85: and when the training completion condition is determined to be reached, finishing the training to obtain a trained model for subsequent automatic shooting control.
Steps S81 and S85 are the same as steps S11 and S15 in fig. 1, and specific descriptions can be found in reference to the related description in fig. 1. Step S83 is a more detailed way of step S13 in fig. 1, and the relevant points can be referred to each other.
Fig. 9 is a flowchart illustrating a model training process in a photographing control method according to a ninth embodiment of the present application. As shown in fig. 9, in the ninth embodiment, the photographing control method may include the steps of:
S91: when a user performs manually controlled shooting, the currently shot picture is taken by the model as a positive sample satisfying the shooting condition, and parameters of the model are adjusted according to this positive sample.
S93: obtaining picture frames that were not shot under manual control by random sampling as negative samples, and adjusting the parameters of the model according to the negative samples.
In this way, negative samples can be obtained by randomly sampling picture frames that were not shot under manual control.
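A minimal sketch of this random sampling; the function name and the sampling rate are assumptions, since the application does not specify how many frames are kept:

```python
import random

def sample_random_negatives(preview_frames, rate=0.05, rng=None):
    """Keep each preview frame that was not shot under manual control as a
    negative sample with probability `rate` (hypothetical rate)."""
    rng = rng or random.Random()
    return [frame for frame in preview_frames if rng.random() < rate]
```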
S95: when it is determined that the training completion condition is reached, the training is ended to obtain a trained model for subsequent automatic shooting control.
Steps S91 and S95 are the same as steps S11 and S15 in fig. 1; for specific descriptions, refer to the related description of fig. 1. Step S93 is a more specific implementation of step S13 in fig. 1, and the related descriptions may be referred to for each other.
Please refer to fig. 10, which is a flowchart illustrating a model training process in a shooting control method according to a tenth embodiment of the present application. As shown in fig. 10, in the tenth embodiment, the shooting control method may include the following steps:
S101: when a user performs manually controlled shooting, the currently shot picture is taken by the model as a positive sample satisfying the shooting condition, and parameters of the model are adjusted according to this positive sample.
S103: when it is determined, according to a detection result of a sensor, that sampling is needed, sampling a picture frame, taking the sampled picture frame as a negative sample, and adjusting the parameters of the model according to the negative sample.
The sensor may be a light-sensitive sensor or a sound-sensitive sensor, and is used to collect ambient light or sound in order to decide when to sample and to take the sampled picture frame as a negative sample. For example, when the sensor is a sound-sensitive sensor and sound such as "please prepare" is collected, it is considered that the subject has not yet entered a state of readiness for shooting, and the picture frame sampled at this time is a negative sample.
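One way to read the sound-sensor example is a keyword check on recognized speech; the phrase list and function are purely illustrative, since the application does not describe any recognition mechanism:

```python
# Hypothetical phrases suggesting the subject is not yet ready to be shot.
NOT_READY_PHRASES = ("please prepare", "hold on", "wait a moment")

def should_sample_negative(transcript):
    """Return True when collected sound suggests the scene has not entered
    a state of readiness, so the current frame is sampled as a negative."""
    text = transcript.lower()
    return any(phrase in text for phrase in NOT_READY_PHRASES)
```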
S105: when it is determined that the training completion condition is reached, the training is ended to obtain a trained model for subsequent automatic shooting control.
Steps S101 and S105 are the same as steps S11 and S15 in fig. 1; for specific descriptions, refer to the related description of fig. 1. Step S103 is a more specific implementation of step S13 in fig. 1, and the related descriptions may be referred to for each other.
Fig. 11 is a flowchart of a model training process in a shooting control method according to an eleventh embodiment of the present application. As shown in fig. 11, in the eleventh embodiment, the shooting control method may include the following steps:
S111: when a user performs manually controlled shooting, the currently shot picture is taken by the model as a positive sample satisfying the shooting condition, and parameters of the model are adjusted according to this positive sample.
S113: collecting and storing picture frames that were not shot under manual control, performing composition analysis on the stored picture frames to determine the picture frames serving as negative samples, and adjusting the parameters of the model according to the negative samples.
The composition analysis may, for example, determine that a picture frame is a negative sample when the analysis finds a closed-eye expression, an unnatural expression, or the like in the frame.
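The composition analysis could be sketched as thresholding per-face scores from a face analyzer; the feature names and thresholds below are assumptions, not values from the application:

```python
def is_negative_by_composition(features, eye_open_thresh=0.2, natural_thresh=0.5):
    """features: hypothetical per-frame scores in [0, 1] from a face
    analyzer. Closed eyes or an unnatural expression marks the stored
    picture frame as a negative sample."""
    return (features.get("eye_openness", 1.0) < eye_open_thresh
            or features.get("naturalness", 1.0) < natural_thresh)
```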
S115: when it is determined that the training completion condition is reached, the training is ended to obtain a trained model for subsequent automatic shooting control.
Steps S111 and S115 are the same as steps S11 and S15 in fig. 1; for specific descriptions, refer to the related description of fig. 1. Step S113 is a more specific implementation of step S13 in fig. 1, and the related descriptions may be referred to for each other.
Please refer to fig. 12, which is a flowchart illustrating a model training process in a shooting control method according to a twelfth embodiment of the present application. As shown in fig. 12, in the twelfth embodiment, the shooting control method may include the following steps:
S121: when a user performs manually controlled shooting, the currently shot picture is taken by the model as a positive sample satisfying the shooting condition, and parameters of the model are adjusted according to this positive sample.
S123: sampling, at a preset time interval, picture frames obtained by framing between two adjacent manually controlled shots as negative samples, and adjusting the parameters of the model according to the negative samples.
The preset time interval may be, for example, 1 second or 2 seconds.
Optionally, in one implementation, the two adjacent manually controlled shots may be two adjacent shots taken during a single framing session, that is, while the camera remains open.
Optionally, in another implementation, the two adjacent manually controlled shots may also be taken in framing sessions in which the camera is opened at different times. For example, after the user turns on the camera and completes the first manually controlled shot, the camera is turned off; the next manually controlled shot is completed after the camera is turned on again. The framing pictures between the first and second manually controlled shots are saved by the currently used model at the preset time interval and serve as negative samples.
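Sampling the framing pictures between two manual shots at a preset interval might look like the following sketch (timestamps in seconds; the function name and frame representation are assumptions):

```python
def interval_samples(frames, interval_s):
    """frames: (timestamp, frame) pairs framed between two adjacent
    manually controlled shots. Keep one frame per interval_s (the preset
    time interval, e.g. 1 or 2 seconds) as a negative sample."""
    negatives, next_t = [], None
    for t, frame in frames:
        if next_t is None or t >= next_t:
            negatives.append(frame)
            next_t = t + interval_s
    return negatives
```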
S125: when it is determined that the training completion condition is reached, the training is ended to obtain a trained model for subsequent automatic shooting control.
Steps S121 and S125 are the same as steps S11 and S15 in fig. 1; for specific descriptions, refer to the related description of fig. 1. Step S123 is a more specific implementation of step S13 in fig. 1, and the related descriptions may be referred to for each other.
Please refer to fig. 13, which is a flowchart illustrating an automatic shooting control process in a shooting control method according to a thirteenth embodiment of the present application.
S131: acquiring a shooting preview picture.
In some embodiments, the operation of acquiring the shooting preview picture is performed by the camera in response to an operation of turning on the camera; that is, the shooting preview picture is acquired through the camera.
In some embodiments, the operation of opening the camera is a click operation on the photographing application icon, that is, when the camera is opened in response to the click operation on the photographing application icon, the camera is used to obtain the photographing preview picture.
Alternatively, in other embodiments, the operation of turning on the camera is a specific operation on a physical key of the electronic device, for example, the electronic device includes a volume up key and a volume down key, and the operation of turning on the camera is an operation of simultaneously pressing the volume up key and the volume down key. Further, the operation of starting the photographing application is an operation of successively pressing the volume up key and the volume down key within a preset time (for example, 2 seconds).
In other embodiments, the operation of turning on the camera may also be an operation of a preset touch gesture input in any display interface of the electronic device, for example, on a main interface of the electronic device, a user may input a touch gesture with an annular touch track to turn on the camera.
In other embodiments, the operation of turning on the camera may also be an operation of a preset touch gesture input on the touch screen when the electronic device is in a black screen state.
In some embodiments, when the electronic device is a camera, the operation of turning on the camera is an operation of pressing a shutter button/power button of the camera to trigger the camera to be in an on state.
Optionally, in the present application, the obtaining of the shooting preview picture is to obtain the shooting preview picture in real time through a camera.
S133: analyzing the shooting preview picture by using a preset model to obtain an analysis result.
The preset model may be a trained model or an untrained model.
Optionally, in some embodiments, the preset model is a trained neural network model, and analyzing the shooting preview picture with the preset model to obtain an analysis result further includes: analyzing the shooting preview picture through the neural network model to obtain a satisfaction degree as the analysis result. The satisfaction degree is output by the neural network model, trained in advance, which takes all pixels of the shooting preview picture as input.
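As a stand-in for the neural network, a single sigmoid unit over the preview pixels shows the input/output contract (all pixels in, a satisfaction degree in [0, 1] out); the weights and bias stand in for learned parameters, and everything here is a simplified sketch rather than the model of the application:

```python
import math

def satisfaction(pixels, weights, bias):
    """Map all pixels of the shooting preview picture to a satisfaction
    degree in [0, 1]. A real model would be a deep network; weights and
    bias here stand in for parameters learned during training."""
    z = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```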
Optionally, in some embodiments, the preset model is a trained image processing algorithm model, and analyzing the shooting preview picture with the preset model to obtain an analysis result further includes: comparing the shooting preview picture with a reference picture by using the trained image processing algorithm model, and taking the analyzed similarity between the two as the analysis result. The reference picture may be a standard picture, preset by the user, showing a specific expression such as smiling, laughing, sadness, anger, or yawning.
Further, the trained image processing algorithm model includes a trained target object feature model, and comparing the shooting preview picture with the reference picture by using the trained image processing algorithm model and analyzing the similarity between them includes: analyzing a target object in the shooting preview picture by using a face recognition technology to generate a corresponding target object feature vector; and calculating the similarity between the shooting preview picture and the reference picture according to the trained target object feature model and the target object feature vector corresponding to the shooting preview picture.
In some embodiments, calculating the similarity between the shooting preview picture and the reference picture according to the trained target object feature model and the corresponding target object feature vector includes: taking the target object feature vector corresponding to the shooting preview picture as input information of the trained target object feature model (for example, an expression feature model), and calculating the similarity between the shooting preview picture and the reference picture from that feature vector.
In other embodiments, comparing the shooting preview picture with the reference picture by using the trained image processing algorithm model and analyzing the similarity between them includes: acquiring pixel information of the shooting preview picture; and comparing the pixel information of the shooting preview picture with the pixel information of the reference picture to analyze the similarity between the two. That is, in these embodiments, the similarity between the shooting preview picture and the reference picture is derived by comparing pixel information, such as pixel grayscale values, of the two images.
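The pixel-information comparison can be sketched as a mean absolute difference over grayscale values; the normalization to [0, 1] is an assumption, since the application does not fix a similarity scale:

```python
def grayscale_similarity(a, b):
    """Compare two equal-sized grayscale images (flat lists of 0-255
    values); similarity is 1 minus the normalized mean absolute
    difference, so identical images score 1.0."""
    if len(a) != len(b) or not a:
        raise ValueError("images must be the same non-empty size")
    mad = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 - mad / 255.0
```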
In other embodiments, when the preset model is a trained image processing algorithm model, analyzing the shooting preview picture with the preset model to obtain an analysis result includes: analyzing the facial expression in the shooting preview picture by using a face recognition technology to generate a corresponding expression feature vector; and taking the expression feature vector as input information of the image processing algorithm model to obtain an analysis result including identification information identifying whether the shooting condition is currently satisfied. For example, the identification information may be 1 or 0. More specifically, when the identification information is 1, the shooting condition is satisfied; when it is 0, the shooting condition is not satisfied.
In some embodiments, the trained target object feature model is trained as follows: providing a plurality of photos with different facial expressions in an initial training set; performing expression analysis on the target object in the provided photos by using an image recognition technology to generate corresponding target object feature vectors Xi (for example, when the target object feature vector is an expression feature vector, X1 represents the degree of eye openness, X2 represents the degree to which the mouth corners are raised, and X3 represents the degree of mouth openness); establishing a training sample set based on the generated target object feature vectors and the similarity labels between the corresponding photos and the reference picture; and then training and learning on the sample set to obtain the trained target object feature model.
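The training procedure above (feature vectors Xi paired with similarity labels) can be sketched with a single logistic unit trained by gradient descent; the actual model form and optimizer are not specified in the application, so this is a minimal stand-in:

```python
import math

def _sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_feature_model(samples, lr=0.1, epochs=200):
    """samples: (feature_vector, similarity_label) pairs, where the vector
    follows the text's example (X1 eye openness, X2 mouth-corner raise,
    X3 mouth openness) and the label is the similarity to the reference
    picture in [0, 1]. Returns learned weights and bias."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_similarity(model, x):
    """Similarity of a new feature vector under the trained model."""
    w, b = model
    return _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```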
In some embodiments, in the present application, the analysis result is obtained by analyzing according to a shooting preview picture obtained in real time. In other embodiments, the analyzing the shooting preview image by using the preset model to obtain the analysis result is to analyze the currently acquired shooting preview image at preset time intervals (for example, 0.2 second) to obtain the current analysis result.
S135: determining whether the shooting condition is currently satisfied according to the analysis result. If yes, step S137 is executed; otherwise, the process returns to step S133 or ends.
In some embodiments, when the analysis result is derived from the neural network model, determining whether the shooting condition is currently satisfied according to the analysis result includes: determining that the shooting condition is currently satisfied when the satisfaction degree exceeds a preset satisfaction threshold. The preset satisfaction threshold may be, for example, 80% or 90%.
In some embodiments, when the analysis result is derived from the image processing algorithm model, determining whether the photographing condition is currently satisfied according to the analysis result includes: and when the similarity is determined to exceed the preset threshold of the similarity, determining that the shooting condition is currently met. Or, determining whether the shooting condition is currently satisfied according to the analysis result, may further include: when the analysis result includes identification information identifying that the shooting condition is currently satisfied, it is determined that the shooting condition is currently satisfied.
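The decision logic across the three kinds of analysis results (satisfaction degree, similarity, identification flag) might be combined as follows; the dict keys and the 80% default thresholds are illustrative choices, not part of the application:

```python
def meets_shooting_condition(result, satisfaction_thresh=0.8, similarity_thresh=0.8):
    """result: dict holding whichever output the preset model produced:
    a binary 'flag' (1 satisfied / 0 not), a 'satisfaction' degree, or a
    'similarity' to the reference picture."""
    if "flag" in result:
        return result["flag"] == 1
    if "satisfaction" in result:
        return result["satisfaction"] >= satisfaction_thresh
    if "similarity" in result:
        return result["similarity"] >= similarity_thresh
    return False
```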
In the application, the target object may be a hand, a face, a specific scene, etc.; the target object feature model correspondingly comprises a gesture feature model, an expression feature model, a scene feature model and the like, and the analyzed target object feature vector can comprise a gesture feature vector, an expression feature vector, a scene feature vector and the like.
S137: when it is determined that the photographing condition is satisfied, the photographing operation is controlled to be performed.
In some implementations, the shooting operation is a photographing operation, and controlling the shooting operation to be performed includes: controlling execution of the photographing operation to obtain a photo corresponding to the current shooting preview picture.
In other implementations, the photographing operation is a continuous photographing operation, and controlling the photographing operation to be performed includes: and controlling to execute continuous shooting operation to obtain a plurality of photos including the photo corresponding to the current shooting preview picture. Optionally, after the continuous shooting operation is performed, the method may further include the following steps: analyzing a plurality of photos obtained by continuous shooting operation to determine the best photo; and keeping the best photo, and deleting other photos obtained by the continuous shooting operation.
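Keeping only the best photo of a burst reduces to an argmax over a quality score; `score_fn` stands in for whatever analysis the application uses to determine the best photo, which it leaves unspecified:

```python
def keep_best_photo(photos, score_fn):
    """After a continuous-shooting burst, score each photo with the
    model's quality score (score_fn, assumed) and keep only the best;
    the remaining photos are returned for deletion."""
    best = max(photos, key=score_fn)
    return best, [p for p in photos if p is not best]
```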
In other implementations, the shooting operation is a video shooting operation, and controlling the shooting operation to be performed includes: controlling execution of the video shooting operation to obtain a video file whose starting video picture frame is the current shooting preview picture. Optionally, after the video file is obtained, the method may further include the steps of: comparing a plurality of video picture frames in the shot video file to determine the best picture frame; and capturing and saving the best picture frame as a photo.
Therefore, in the present application, whether the shooting condition is currently satisfied is determined by analyzing the shooting preview picture, and whether the picture is one the user expects to shoot can be determined from the content of the shooting preview picture, so that the current highlight moment can be captured in time.
In some embodiments, step S135 further includes: in addition to determining whether the shooting condition is currently satisfied according to the analysis result, determining shooting parameters, including shutter time and aperture size, according to the analysis result.
Controlling to perform a photographing operation when it is determined that the photographing condition is satisfied, further comprising: and when the shooting condition is determined to be met, executing shooting operation according to the determined shooting parameters.
Fig. 14 is a block diagram illustrating a partial structure of an electronic device 100 according to an embodiment of the present application. As shown in fig. 14, the electronic device 100 includes a processor 10, a memory 20, and a camera 30. The camera 30 includes at least a rear camera 31 and a front camera 32. The rear camera 31 is used for shooting images behind the electronic device 100 and can be used by the user to photograph other people or scenes, and the front camera 32 is used for shooting images in front of the electronic device 100 and can be used for shooting operations such as self-portraits.
In some embodiments, the models described in figs. 1 to 13 may be programs, such as specific algorithm functions, running in the processor 10, for example neural network algorithm functions or image processing algorithm functions. In other embodiments, the electronic device 100 may further include a model processor independent of the processor 10, on which the models shown in figs. 1 to 13 run; the processor 10 may generate corresponding instructions to trigger the model processor to run the corresponding model, and the model's output is passed back to the processor 10 for use, for example to control the shooting operation.
The memory 20 has stored therein program instructions.
The processor 10 is configured to call the program instructions stored in the memory 20 to execute a model training process in the shooting control method in any one of the embodiments shown in fig. 1 to 6, and execute a process of performing automatic shooting control using a model in the shooting control method in any one of the embodiments shown in fig. 7 to 14.
For example, the processor 10 is configured to call program instructions stored in the memory 20 to execute the following photographing control method:
when a user performs manually controlled shooting, taking the currently shot picture, through the model, as a positive sample satisfying the shooting condition, and adjusting parameters of the model according to this positive sample;
sampling, through the model and according to a preset rule, picture frames that were not shot under manual control as negative samples, and adjusting the parameters of the model according to the negative samples; and
when it is determined that the training completion condition is reached, ending the training to obtain a trained model for subsequent automatic shooting control.
In some embodiments, the processor 10 calls the program instructions to perform, when the user performs manually controlled shooting, taking the currently shot picture through the model as a positive sample satisfying the shooting condition and adjusting the parameters of the model according to this positive sample, which includes: storing the positive sample through the model, and establishing or updating a correspondence between the positive sample and the shooting condition so as to adjust the parameters of the model. Satisfying the shooting condition may be recorded as the label of the positive sample.
In one implementation, the processor calls the program instructions to further perform: entering the model training mode in response to a user-input operation for entering model training. The processor calls the program instructions to perform the determining that the training completion condition is reached, which includes: determining that the training completion condition is reached in response to a user input for exiting the model training mode.
Optionally, the operation of entering the model training includes a selection operation of a menu option, or a specific operation of a physical key, or a specific touch gesture input on a touch screen of the electronic device. Optionally, the controlling the model training mode in response to the operation of entering the model training input by the user includes: and responding to the selection operation of the user on the menu option, or the specific operation on the physical key or the specific touch gesture input on the touch screen of the electronic device to control to enter the model training mode.
Optionally, in another implementation, the processor calls the program instructions to perform the determining that the training completion condition is reached, which includes: determining that the training completion condition is reached when the number of times the user has manually controlled shooting reaches a preset number of times N1. The preset number of times N1 may be the number of training iterations required by the system's default model, or may be a user-defined value.
Optionally, in another implementation, the processor calls the program instructions to perform the determining that the training completion condition is reached, which includes: testing the model with positive samples, determining whether the test result reaches a preset threshold, and determining that the training completion condition is reached when the test result reaches the preset threshold.
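The count-based variant of this completion test (each manual shot whose model prediction meets the shooting condition increments a count; training completes once the count exceeds a preset number, as in the claims) can be sketched as a small counter; the class and method names are illustrative:

```python
class TrainingCompletionCounter:
    """Increment a count each time the model's prediction for a manually
    shot picture satisfies the shooting condition; training is considered
    complete once the count exceeds a preset number of times."""

    def __init__(self, preset_times):
        self.preset_times = preset_times
        self.count = 0

    def record_prediction(self, meets_condition):
        if meets_condition:
            self.count += 1
        return self.count > self.preset_times
```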
The processor 10 may be a microcontroller, a microprocessor, a single chip, a digital signal processor, or the like.
The memory 20 may be any memory device capable of storing information, such as a memory card, a solid-state memory, a micro hard disk, and an optical disk.
As shown in fig. 14, the electronic device 100 further includes an input unit 40 and an output unit 50. The input unit 40 may include a touch panel, a mouse, a microphone, physical keys including a power key, a volume key, and the like. The output unit 50 may include a display screen, a speaker, and the like. In some embodiments, the touch panel of the input unit 40 and the display screen of the output unit 50 are integrated to form a touch screen, while providing both touch input and display output functions.
In some embodiments, the present application further provides a computer-readable storage medium in which program instructions are stored; the program instructions can be called by the processor 10 to execute all or part of the steps of any of the shooting control methods shown in figs. 1 to 14. In some embodiments, the computer storage medium is the memory 20, and may be any storage device capable of storing information, such as a memory card, a solid-state memory, a micro hard disk, or an optical disk.
With the shooting control method and the electronic device 100, the model can first be trained; after training is completed, the model can be used for subsequent automatic shooting control to capture highlight moments in time. Specifically, whether the shooting condition is satisfied can be automatically judged by the model according to the shooting preview picture, shooting is performed when the shooting condition is satisfied, and the highlight moment including the content corresponding to the current shooting preview picture can be captured in time.
While the invention has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus (device), or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. A computer program stored/distributed on a suitable medium supplied together with or as part of other hardware, may also take other distributed forms, such as via the Internet or other wired or wireless telecommunication systems.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the present invention has been described with reference to particular embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (14)

  1. A shooting control method, characterized by comprising:
    when a user performs manually controlled shooting, taking the currently shot picture, through the model, as a positive sample satisfying the shooting condition, and adjusting parameters of the model according to this positive sample;
    sampling, through the model and according to a preset rule, picture frames that were not shot under manual control as negative samples, and adjusting the parameters of the model according to the negative samples; and
    when it is determined that a training completion condition is reached, ending the training to obtain a trained model for subsequent automatic shooting control.
  2. The shooting control method according to claim 1, characterized by further comprising: responding to the operation of entering model training input by a user, and controlling to enter a model training mode;
    when the training completion condition is determined to be reached, ending the training to obtain a trained model for subsequent automatic shooting control, comprising:
    and responding to the operation of exiting the model training mode input by the user, determining that a training completion condition is reached, and finishing training to obtain a trained model for subsequent automatic shooting control.
  3. The shooting control method according to claim 1, wherein ending the training upon determining that the training completion condition is reached, to obtain a trained model for subsequent automatic shooting control, comprises: determining that the training completion condition is reached when the number of times the user has manually controlled shooting reaches a preset number of times, and ending the training to obtain a trained model for subsequent automatic shooting control.
  4. The shooting control method according to claim 1, wherein ending the training upon determining that the training completion condition is reached, to obtain a trained model for subsequent automatic shooting control, comprises:
    when a user performs manually controlled shooting, inputting parameters of the picture obtained by the manually controlled shooting into the model to obtain a predicted value;
    determining whether the predicted value satisfies the shooting condition; and
    if the predicted value satisfies the shooting condition, determining that the training completion condition is reached, and ending the training to obtain a trained model for subsequent automatic shooting control.
  5. The shooting control method according to claim 4, wherein, if the predicted value satisfies the shooting condition, determining that the training completion condition is reached and ending the training to obtain a trained model for subsequent automatic shooting control comprises:
    if the predicted value satisfies the shooting condition, incrementing by one the count of times the predicted value has satisfied the shooting condition to obtain an updated count;
    determining whether the updated count exceeds a preset number of times; and
    if the updated count exceeds the preset number of times, determining that the training completion condition is reached, and ending the training to obtain a trained model for subsequent automatic shooting control.
  6. The shooting control method according to claim 1, wherein, when the user performs manually controlled shooting, taking the currently shot picture through the model as a positive sample satisfying the shooting condition and adjusting the parameters of the model according to this positive sample comprises:
    when a user performs manually controlled shooting, taking the currently shot picture, through the model, as a positive sample corresponding to the shooting condition and the shooting parameters, and adjusting the parameters of the model according to this positive sample.
  7. The shooting control method according to any one of claims 1 to 6, further comprising:
    after the automatic shooting function is enabled, performing a framing preview in response to a camera start operation to obtain a shooting preview picture;
    when the current model determines that the preview picture information satisfies the shooting condition, controlling execution of an automatic shooting operation;
    obtaining the user's satisfaction feedback on the automatic shot;
    and outputting the user's satisfaction feedback on the automatic shot to the currently used model, so that the currently used model performs optimization training using the satisfaction feedback and the preview picture information.
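The feedback loop of claim 7 can be illustrated with a toy scorer: shoot automatically when the model's score meets the condition, then fold the user's satisfaction back into the model. The logistic scorer, the scalar feature, and the threshold are all assumptions made for this sketch, not the patent's method.

```python
import math

class TinyScorer:
    """Toy stand-in for the model: logistic score over one scalar feature."""
    def __init__(self, w=1.0, lr=0.5):
        self.w, self.lr = w, lr

    def predict(self, x):
        return 1.0 / (1.0 + math.exp(-self.w * x))

    def train_on(self, x, label):
        # one gradient step of the logistic loss, i.e. "optimization training"
        self.w += self.lr * (label - self.predict(x)) * x

def auto_shoot_step(model, preview_feature, get_feedback, threshold=0.6):
    """Shoot automatically when the score meets the condition, then feed the
    user's satisfaction (1.0 satisfied, 0.0 not) back into the model."""
    score = model.predict(preview_feature)
    if score < threshold:
        return False                 # condition not met: keep previewing
    satisfied = get_feedback()       # claim 7: satisfaction feedback
    model.train_on(preview_feature, satisfied)
    return True                      # an automatic shot was taken

scorer = TinyScorer()
shot = auto_shoot_step(scorer, 1.0, get_feedback=lambda: 0.0)
```

Here the shot is taken, and the dissatisfied feedback (0.0) lowers the weight, making the model less eager to shoot similar previews next time.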
  8. The shooting control method according to any one of claims 1 to 6, wherein the sampling, by the model according to a preset rule, of picture frames not shot under manual control as negative samples comprises: sampling, at a preset time interval, picture frames framed between two adjacent manually controlled shots as negative samples.
  9. The shooting control method according to any one of claims 1 to 6, wherein the sampling, by the model according to a preset rule, of picture frames not shot under manual control as negative samples comprises: sampling picture frames framed within a period of time before or after the positive sample as negative samples.
  10. The shooting control method according to any one of claims 1 to 6, wherein the sampling, by the model according to a preset rule, of picture frames not shot under manual control as negative samples comprises: obtaining, by random sampling, picture frames not shot under manual control as negative samples.
  11. The shooting control method according to any one of claims 1 to 6, wherein the sampling, by the model according to a preset rule, of picture frames not shot under manual control as negative samples comprises: sampling a picture frame when the detection result of a sensor indicates that sampling is needed, and taking the sampled picture frame as a negative sample.
  12. The shooting control method according to any one of claims 1 to 6, wherein the sampling, by the model according to a preset rule, of picture frames not shot under manual control as negative samples comprises: collecting and storing picture frames not shot under manual control, and performing composition analysis on the stored picture frames to determine the picture frames to be used as negative samples.
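The first three negative-sampling rules (claims 8 to 10) can be sketched over a timestamped frame stream. The (timestamp, frame) representation and every function name below are assumptions for illustration only.

```python
import random

def between_shots(frames, t_prev, t_next, step):
    """Claim 8: frames framed between two adjacent manual shots, one per `step`."""
    window = [f for t, f in frames if t_prev < t < t_next]
    return window[::step]

def around_positive(frames, t_pos, span):
    """Claim 9: frames viewed within `span` before or after the positive sample."""
    return [f for t, f in frames if 0 < abs(t - t_pos) <= span]

def random_negatives(frames, k, seed=0):
    """Claim 10: randomly sampled frames that were not shot manually."""
    return random.Random(seed).sample([f for _, f in frames], k)

frames = [(t, f"frame{t}") for t in range(10)]
neg8 = between_shots(frames, 2, 8, 2)   # frames between shots at t=2 and t=8
neg9 = around_positive(frames, 5, 1)    # frames adjacent to a positive at t=5
neg10 = random_negatives(frames, 3)
```

Claims 11 and 12 add sensor-triggered and composition-based selection, which would replace the purely temporal filters above with a sensor check or an image-analysis step.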
  13. An electronic device, comprising:
    a memory for storing program instructions; and
    a processor for calling the program instructions to execute the shooting control method according to any one of claims 1 to 12.
  14. A computer-readable storage medium, wherein the computer-readable storage medium stores program instructions which, when called, cause a computer to execute the shooting control method according to any one of claims 1 to 12.
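Putting the claims together, the overall training flow of claim 1 and the abstract reduces to: manual shots yield positive samples, other frames yield negatives, and training stops once the completion condition is reached. The event format and the counting model below are illustrative assumptions, not the patent's implementation.

```python
class CountingModel:
    """Toy stand-in that just records the labels it is trained on."""
    def __init__(self):
        self.labels = []

    def fit(self, frame, label):
        self.labels.append(label)

def train(model, events, required_positives=2):
    """events: iterable of (kind, frame), kind in {'manual_shot', 'preview'}."""
    for kind, frame in events:
        if kind == 'manual_shot':
            model.fit(frame, 1)     # positive sample: user chose to shoot
        else:
            model.fit(frame, 0)     # negative sample by the preset rule
        if model.labels.count(1) >= required_positives:
            break                   # stand-in for the completion condition
    return model

events = [('preview', 'a'), ('manual_shot', 'b'),
          ('preview', 'c'), ('manual_shot', 'd'), ('preview', 'e')]
trained = train(CountingModel(), events)
```

Training ends at the second manual shot, so the trailing preview frame is never used, mirroring how claims 4 and 5 cut training off once the model is good enough.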
CN201880070205.3A 2018-05-07 2018-05-07 Shooting control method and electronic device Pending CN111279683A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/085900 WO2019213820A1 (en) 2018-05-07 2018-05-07 Photographing control method and electronic device

Publications (1)

Publication Number Publication Date
CN111279683A true CN111279683A (en) 2020-06-12

Family

ID=68467670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880070205.3A Pending CN111279683A (en) 2018-05-07 2018-05-07 Shooting control method and electronic device

Country Status (2)

Country Link
CN (1) CN111279683A (en)
WO (1) WO2019213820A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111369585B (en) * 2020-02-28 2023-09-29 上海顺久电子科技有限公司 Image processing method and device
CN116467607B (en) * 2023-03-28 2024-03-01 阿里巴巴(中国)有限公司 Information matching method and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160104284A1 (en) * 2014-10-10 2016-04-14 Facebook, Inc. Post-manufacture camera calibration
CN107025437A (en) * 2017-03-16 2017-08-08 南京邮电大学 Intelligent photographing method and device based on intelligent composition and micro- Expression analysis
CN107124555A (en) * 2017-05-31 2017-09-01 广东欧珀移动通信有限公司 Control method, device, computer equipment and the computer-readable recording medium of focusing
CN107909629A (en) * 2017-11-06 2018-04-13 广东欧珀移动通信有限公司 Recommendation method, apparatus, storage medium and the terminal device of paster

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679455A (en) * 2017-08-29 2018-02-09 平安科技(深圳)有限公司 Target tracker, method and computer-readable recording medium
CN107635095A (en) * 2017-09-20 2018-01-26 广东欧珀移动通信有限公司 Shoot method, apparatus, storage medium and the capture apparatus of photo


Also Published As

Publication number Publication date
WO2019213820A1 (en) 2019-11-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200612