WO2019213818A1 - Photographing control method, and electronic device - Google Patents


Info

Publication number: WO2019213818A1
Authority: WO (WIPO / PCT)
Prior art keywords: shooting, model, target object, photographing, trained
Application number: PCT/CN2018/085898
Other languages: French (fr), Chinese (zh)
Inventor: 王星泽
Original Assignee: 合刃科技(武汉)有限公司
Application filed by: 合刃科技(武汉)有限公司
Priority to: PCT/CN2018/085898 (published as WO2019213818A1)
Priority to: CN201880070282.9A (published as CN111279684A)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • the present application relates to the field of electronic devices, and in particular, to a photographing control method for an electronic device and the electronic device.
  • the present application provides a shooting control method and an electronic device, which can set the shooting parameters in time, so that a beautiful moment can be captured in time with a high shooting quality.
  • a shooting control method comprising: acquiring a shooting preview screen; analyzing the shooting preview image with a preset model to obtain shooting parameters for shooting.
  • an electronic device including a camera, a memory, and a processor.
  • the memory is for storing program instructions.
  • the processor is configured to execute the shooting control method by calling the program instructions, the shooting control method including: acquiring a shooting preview image through the camera; and analyzing the shooting preview image with a preset model to obtain shooting parameters for shooting.
  • a computer readable storage medium storing program instructions which, after being called by a computer, execute a shooting control method, the shooting control method including: acquiring a shooting preview image; and analyzing the shooting preview image with a preset model to obtain shooting parameters for shooting.
  • the shooting control method and the electronic device of the present application determine the current shooting parameters by analyzing the shooting preview image; because the shooting parameters can be determined in time from the preview image, the current moment can be captured in time with high shooting quality when shooting is required.
  • FIG. 1 is a flow chart of a photographing control method in a first embodiment of the present application.
  • FIG. 2 is a flowchart of a photographing control method in a second embodiment of the present application.
  • FIG. 3 is a flowchart of a photographing control method in a third embodiment of the present application.
  • FIG. 4 is a flow chart of a photographing control method in a fourth embodiment of the present application.
  • FIG. 5 is a flowchart of a photographing control method in a fifth embodiment of the present application.
  • FIG. 6 is a flowchart of a photographing control method in a sixth embodiment of the present application.
  • FIG. 7 is a flowchart of a photographing control method in a seventh embodiment of the present application.
  • FIG. 8 is a flowchart of a photographing control method in an eighth embodiment of the present application.
  • FIG. 9 is a flowchart of a photographing control method in a ninth embodiment of the present application.
  • FIG. 10 is a block diagram showing a schematic partial structure of an electronic device according to an embodiment of the present application.
  • the shooting control method of the present application can be applied to an electronic device.
  • the electronic device can include a camera.
  • the electronic device can acquire a shooting preview screen and display a shooting preview image through the camera, and the electronic device can perform photographing, continuous shooting, video shooting, etc. through the camera.
  • the camera includes a front camera and a rear camera; operations such as photographing, continuous shooting, and video shooting can be performed with the rear camera, or self-portraits can be taken with the front camera.
  • FIG. 1 is a flowchart of a shooting control method in a first embodiment of the present application.
  • the shooting control method is applied to an electronic device.
  • the photographing control method includes the following steps:
  • the operation of acquiring the shooting preview screen is performed by the camera in response to the operation of turning on the camera, that is, the shooting preview screen is acquired by the camera.
  • the shooting preview screen is a framing screen when the camera is turned on for framing.
  • the operation of turning on the camera is a click operation on the photographing application icon, that is, when the camera is turned on in response to a click operation on the photographing application icon, the shooting preview screen is acquired by the camera.
  • the operation of turning on the camera is a specific operation of a physical button of the electronic device.
  • for example, the electronic device includes a volume up key and a volume down key, and the operation of turning on the camera is pressing the volume up key and the volume down key simultaneously.
  • for example, the operation of turning on the photographing application is pressing the volume up key and the volume down key together within a preset time (for example, 2 seconds).
  • the operation of turning on the camera may also be an operation of a preset touch gesture input in any display interface of the electronic device.
  • for example, the user may input a circular touch track as the preset touch gesture to turn on the camera.
  • the operation of turning on the camera may also be an operation of a preset touch gesture input on the touch screen when the electronic device is in a black screen state.
  • the operation of turning on the camera is an operation that presses the shutter button/power button of the camera to trigger the camera to be in an activated state.
  • acquiring a shooting preview screen is to obtain a shooting preview screen in real time through a camera.
  • S13 The shooting preview image is analyzed by using a preset model to obtain shooting parameters for shooting.
  • the preset model may be a trained model or an untrained model.
  • the preset model is a trained neural network model
  • the shooting preview image is analyzed with the preset model to obtain shooting parameters for shooting, including: taking all the pixels of the preview image as input, and calculating and outputting the shooting parameters for shooting through the neural network model (a sketch of such a pixel-to-parameter model follows).
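As a rough illustration of this pixel-to-parameter mapping, the sketch below defines a small convolutional network that regresses shutter time, aperture, and ISO directly from a downscaled preview frame. It is a minimal sketch assuming a PyTorch-style implementation; the layer layout, output ranges, and names such as `ShootingParamNet` are illustrative assumptions and are not taken from the patent.

```python
# Minimal sketch (assumption): a small CNN that maps a preview frame to
# shooting parameters (shutter time, aperture, ISO). Architecture and
# value ranges are illustrative, not specified by the patent.
import torch
import torch.nn as nn

class ShootingParamNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3)  # raw outputs for [shutter, aperture, iso]

    def forward(self, preview):          # preview: (N, 3, H, W), values in [0, 1]
        x = self.features(preview).flatten(1)
        raw = self.head(x)
        shutter = torch.sigmoid(raw[:, 0]) * 0.5          # 0..0.5 s
        aperture = 1.4 + torch.sigmoid(raw[:, 1]) * 14.6  # f/1.4..f/16
        iso = 100 + torch.sigmoid(raw[:, 2]) * 3100       # ISO 100..3200
        return shutter, aperture, iso

# usage: feed every preview frame (all pixels) and read back parameters
model = ShootingParamNet().eval()
frame = torch.rand(1, 3, 224, 224)       # stand-in for a real preview frame
with torch.no_grad():
    shutter, aperture, iso = model(frame)
```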
  • the preset model is a trained image processing algorithm model
  • the trained image processing algorithm model includes the trained target object feature model
  • analyzing the shooting preview image with the preset model to obtain shooting parameters for shooting includes: analyzing a target object in the shooting preview image using image recognition technology to generate a corresponding target object feature vector; obtaining an analysis result according to the trained target object feature model and the target object feature vector corresponding to the shooting preview image; and determining the shooting parameters for shooting based on the analysis result.
  • obtaining the analysis result according to the trained target object feature model and the target object feature vector corresponding to the shooting preview image includes: calculating, with the trained target object feature model and that feature vector, the analysis result of the target object feature corresponding to the shooting preview image.
  • determining the shooting parameters for shooting according to the analysis result includes: determining, according to a preset correspondence between target object features and shooting parameters, the shooting parameters corresponding to the obtained target object feature (a lookup sketch is given below).
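One simple way to realize such a preset correspondence is a lookup table keyed by the recognized target object feature. The sketch below is an assumed illustration only; the feature names and parameter values are hypothetical.

```python
# Sketch (assumption): a preset correspondence table between recognized
# target object features and shooting parameters. Values are illustrative.
PARAMS_BY_FEATURE = {
    "smile":     {"shutter_s": 1/200, "aperture_f": 2.2, "iso": 200},
    "v_gesture": {"shutter_s": 1/500, "aperture_f": 2.8, "iso": 400},
    "landscape": {"shutter_s": 1/125, "aperture_f": 8.0, "iso": 100},
}

DEFAULT_PARAMS = {"shutter_s": 1/100, "aperture_f": 4.0, "iso": 200}

def params_for_feature(feature_name: str) -> dict:
    """Return the shooting parameters associated with a recognized feature."""
    return PARAMS_BY_FEATURE.get(feature_name, DEFAULT_PARAMS)

print(params_for_feature("smile"))
```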
  • the preset model is a trained image processing algorithm model
  • the trained image processing algorithm model includes the trained target object feature model
  • the preset model is used to analyze the shooting preview image and obtain the shooting parameters for shooting, including: taking the target object feature vector corresponding to the shooting preview image as the input of the trained target object feature model, and obtaining the shooting parameters through the trained target object feature model.
  • the analysis result is obtained from the shooting preview image acquired in real time.
  • the shooting preview image is analyzed with the preset model to obtain shooting parameters as follows: the currently acquired preview image is analyzed at preset intervals (for example, every 0.2 seconds) to obtain the shooting parameters.
  • the target object may be a hand, a face, a specific scene, or the like; the target object feature model includes a gesture feature model, an expression feature model, a scene feature model, and the like; the analyzed target object feature vector may include a gesture feature vector, an expression feature vector, a scene feature vector, and the like; and the analyzed target object features may include gesture features, expression features, and scene features.
  • the shooting parameters include, but are not limited to, at least one of a shutter time, an aperture size, a sensitivity, and the like.
  • the shooting parameters can be determined in time according to the shooting preview screen, so that the current exciting moment can be captured in time with a higher shooting quality according to the shooting parameters when shooting is required.
  • FIG. 2 is a flowchart of a shooting control method in a second embodiment of the present application.
  • the preset model is a trained image processing algorithm model
  • the trained image processing algorithm model includes the trained target object feature model
  • the shooting control method includes the following steps:
  • S23 analyzing the target object in the shooting preview image by using image recognition technology, generating a corresponding target object feature vector, and obtaining an analysis result according to the trained target object feature model and the target object feature vector corresponding to the captured preview image.
  • obtaining the analysis result according to the trained target object feature model and the target object feature vector corresponding to the shooting preview image includes: calculating, with the trained target object feature model and that feature vector, the analysis result of the target object feature corresponding to the shooting preview image.
  • the target object may include an object such as a hand, a face, and a specific scene.
  • the target object feature model may include a gesture feature model, an expression feature model, a scene feature model, etc.
  • the analyzed target object feature vector may include expression feature vectors such as laughing, angry, or yawning, gesture feature vectors such as an "OK" gesture or a "V" gesture, or scenery feature vectors such as flowers, birds, and mountains; the analyzed target object features may likewise include expression features such as laughter, anger, or yawning, gesture features such as an "OK" gesture or a "V" gesture, or scenery features such as flowers, birds, and mountains.
  • using the preset model to analyze the shooting preview image and obtain the analysis result may further include: analyzing the gesture in the shooting preview image using image recognition technology to generate a corresponding gesture feature vector; and obtaining the analysis result according to the gesture feature model and the gesture feature vector corresponding to the shooting preview image.
  • the analysis result is obtained according to the gesture feature model and the gesture feature vector corresponding to the captured preview image, including: calculating a shooting preview by using the trained gesture feature model and the target object feature vector corresponding to the captured preview image The analysis result of the gesture feature corresponding to the picture.
  • the trained target object feature model includes a plurality of reference images and the reference target object feature vectors of those reference images; calculating the analysis result of the target object feature corresponding to the shooting preview image includes: comparing, through the trained target object feature model, the target object feature vector corresponding to the preview image with the reference target object feature vectors of the plurality of reference images, determining the reference image whose reference feature vector has the highest similarity, and deriving the target object feature from that reference image, for example from the label of the determined reference image (see the matching sketch below).
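A hedged sketch of this comparison step: the preview's feature vector is matched against the reference vectors stored with the model, and the label of the most similar reference image is returned. Cosine similarity is one plausible choice; the patent does not prescribe a particular metric, and the data below is synthetic.

```python
# Sketch (assumption): pick the reference image whose stored feature vector
# is most similar to the preview's feature vector, then reuse its label.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def match_reference(preview_vec, reference_vecs, reference_labels):
    scores = [cosine_similarity(preview_vec, ref) for ref in reference_vecs]
    best = int(np.argmax(scores))
    return reference_labels[best], scores[best]

refs = [np.array([1.0, 0.1, 0.0]), np.array([0.0, 0.9, 0.4])]
labels = ["smile", "yawn"]
print(match_reference(np.array([0.9, 0.2, 0.1]), refs, labels))  # ('smile', ...)
```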
  • S25 The shooting parameters are determined according to the analysis result for shooting.
  • determining a shooting parameter for shooting according to the analysis result includes: determining, according to the preset correspondence between target object features and shooting parameters, the shooting parameters corresponding to the target object feature.
  • the shooting parameters include, but are not limited to, at least one of a shutter time, an aperture size, a sensitivity, and the like.
  • the target object feature is used to determine the shooting parameter, for example, the corresponding relationship between the shutter time and/or the aperture size and the target object feature is preset.
  • the shooting parameters include the shutter time and/or the aperture size
  • the target object feature may be an expression feature, a gesture feature, a scene feature, etc.
  • for example, a correspondence between the shutter time and/or the aperture size and expression features is preset, or a correspondence between the shutter time and/or the aperture size and gesture features is preset, or a correspondence between the shutter time and/or the aperture size and specific scene features is preset.
  • for example, a correspondence between the shutter time and/or the aperture size and the gesture feature of the distance between the thumb and the index finger may be preset, where the shutter time may be logarithmically related to that distance; therefore, when the analysis result yields the gesture feature of the thumb-to-index-finger distance, the shutter time can be determined according to the preset correspondence (a sketch of such a mapping follows).
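The logarithmic relationship mentioned above could be realized as below. The constants and the normalized distance range are assumptions for illustration only.

```python
# Sketch (assumption): map the thumb-index distance (normalized 0..1 within
# the frame) to a shutter time on a logarithmic scale. Constants are illustrative.
import math

MIN_SHUTTER_S = 1 / 1000   # shortest shutter time
MAX_SHUTTER_S = 1 / 10     # longest shutter time

def shutter_from_pinch(distance: float) -> float:
    """Larger thumb-index distance -> longer shutter time (log scale)."""
    d = min(max(distance, 0.0), 1.0)
    log_min, log_max = math.log(MIN_SHUTTER_S), math.log(MAX_SHUTTER_S)
    return math.exp(log_min + d * (log_max - log_min))

for d in (0.0, 0.5, 1.0):
    print(f"distance={d:.1f} -> shutter={shutter_from_pinch(d):.4f}s")
```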
  • by controlling shooting parameters such as the shutter time and/or the aperture size, the exposure intensity and/or the shooting depth of field can be controlled.
  • a correspondence between shooting parameters such as the shutter time and/or the aperture size and different expression features, such as smiling, laughing, angry, crying, or yawning, may also be set; when a corresponding expression feature is obtained from the analysis result, the corresponding shooting parameters can be determined according to the correspondence between the shooting parameters and the different expression features.
  • alternatively, the shooting parameter is determined for shooting according to the analysis result by having the trained target object feature model output an analysis result that already includes the shooting parameters, which are then used for shooting.
  • the target object feature model may be a model completed by the following training steps: analyzing the target object in each photo of an initial training set using image or face recognition technology to generate a corresponding target object feature vector; establishing a training sample set based on the generated target object feature vectors and the similarity labels between the corresponding photos and a reference picture; and using the sample set for training to obtain the trained target object feature model (a training sketch is given below).
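The training procedure described above could look roughly like the following sketch, which fits a simple regressor on (feature vector, similarity label) pairs. scikit-learn and the synthetic data are stand-ins chosen here for illustration; the patent does not name a training framework.

```python
# Sketch (assumption): build a training sample set of (target object feature
# vector, similarity-to-reference label) pairs and fit a simple regressor as
# the "trained target object feature model". Data here is synthetic.
import numpy as np
from sklearn.linear_model import Ridge

def train_feature_model(feature_vectors, similarity_labels):
    """feature_vectors: one vector per training photo (from image/face recognition);
    similarity_labels: similarity of each photo to the reference picture (0..1)."""
    X = np.stack(feature_vectors)
    y = np.asarray(similarity_labels, dtype=float)
    return Ridge(alpha=1.0).fit(X, y)

# toy usage with synthetic feature vectors
rng = np.random.default_rng(0)
vectors = [rng.random(3) for _ in range(20)]        # e.g. (X1, X2, X3) per photo
labels = [float(v.mean()) for v in vectors]          # stand-in similarity labels
model = train_feature_model(vectors, labels)
print(model.predict(np.array([[0.5, 0.8, 0.6]])))
```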
  • in this way, the shooting parameters can be determined from the target object in the shooting preview image, so that parameters suited to the target object are set promptly and more reasonably, improving shooting quality while completing the parameter setting in time.
  • the foregoing correspondence may be a correspondence table stored in a memory of the electronic device.
  • the step S21 corresponds to the step S11 in FIG. 1 .
  • Steps S23 to S25 correspond to step S13 in FIG. 1, and related descriptions can be referred to each other.
  • FIG. 3 is a flowchart of a shooting control method in a third embodiment of the present application.
  • the preset model is a trained image processing algorithm model
  • the trained image processing algorithm model includes the trained target object feature model
  • the shooting control method includes the following steps:
  • S301 Acquire a shooting preview screen.
  • S303 Analyze a target object in the preview image by using an image recognition technology to generate a corresponding target object feature vector.
  • the target object feature vector corresponding to the shooting preview image is taken as the input of the trained target object feature model, and the trained target object feature model is used to obtain the shooting parameters for shooting.
  • the target object may include an object such as a hand, a face, and a specific scene.
  • the target object feature model may include a gesture feature model, an expression feature model, a scene feature model, and the like; the analyzed target object feature vector may include expression feature vectors such as laughing, angry, or yawning, gesture feature vectors such as an "OK" gesture or a "V" gesture, or scenery feature vectors such as flowers, birds, and mountains; and the analyzed target object features may likewise include expression features, gesture features, or scenery features of the same kinds.
  • for example, obtaining the shooting parameters through the trained target object feature model may include: taking the gesture feature vector corresponding to the shooting preview image as the input of the trained gesture feature model, and obtaining the corresponding shooting parameters through the trained gesture feature model for use in shooting.
  • the step S301 corresponds to the step S11 in FIG. 1 .
  • Steps S303 to S305 correspond to step S13 in FIG. 1, and related descriptions can be referred to each other.
  • Steps S303 and S305 also have a certain correspondence with steps S203 and S205 in FIG. 2, and related features may also be referred to each other.
  • FIG. 4 is a flowchart of a shooting control method in a fourth embodiment of the present application.
  • the preset model is a trained neural network algorithm model
  • the shooting control method includes the following steps:
  • the shooting parameters can be directly calculated by the trained neural network model by taking all pixels of the preview image as input information of the trained neural network model.
  • steps S31 to S33 correspond to steps S11 to S13 in FIG. 1, and the related descriptions can be referred to each other.
  • FIG. 5 is a flowchart of a shooting control method in a fifth embodiment of the present application.
  • the difference from the first embodiment is that, in the fifth embodiment, it is further determined whether the shooting condition is currently satisfied.
  • the photographing control method includes:
  • step S43 The shooting preview screen is analyzed by using a preset model to determine whether the shooting condition is currently satisfied. If yes, go to step S45, otherwise go back to step S43 or the process ends.
  • for example, the preset model is a trained neural network model; analyzing the shooting preview image with the preset model to determine whether the shooting condition is currently satisfied includes: analyzing the shooting preview image with the trained neural network model to obtain an analysis result including a satisfaction degree, and determining, based on the analysis result, whether the shooting condition is currently satisfied.
  • the satisfaction degree is obtained by the neural network model taking all the pixels of the preview image, processing them according to the pre-trained model, and outputting an analysis result that includes the satisfaction degree.
  • the satisfaction preset threshold may be 80%, 90%, and the like.
  • that is, when the shooting preview image is analyzed with the preset model, not only are the shooting parameters obtained, but an analysis result including the satisfaction degree is also obtained to determine whether the shooting condition is satisfied.
  • in another embodiment, analyzing the shooting preview image with the preset model to determine whether the shooting condition is currently met includes: comparing the shooting preview image with a reference picture using the trained image processing algorithm model, obtaining an analysis result including the similarity between the preview image and the reference picture, and determining whether the shooting condition is currently satisfied according to that analysis result.
  • the determining whether the current shooting condition is met according to the analysis result includes: determining that the shooting condition is currently satisfied when determining that the similarity exceeds the similarity preset threshold.
  • the similarity preset threshold may be 80%, 90%, and the like.
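A compact sketch of this threshold check; the 0.8 value mirrors the example thresholds mentioned above and is not prescribed by the patent.

```python
# Sketch: decide whether the shooting condition is met by comparing the
# model's satisfaction (or similarity) score against a preset threshold.
SATISFACTION_THRESHOLD = 0.8   # e.g. 80%, per the example thresholds above

def shooting_condition_met(score: float, threshold: float = SATISFACTION_THRESHOLD) -> bool:
    return score >= threshold

print(shooting_condition_met(0.92))  # True
print(shooting_condition_met(0.55))  # False
```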
  • the reference picture may be a standard picture preset by the user with a specific expression such as smiling, laughing, sad, angry, yawning, or a standard picture with gestures such as an "OK” gesture or a "V-shaped” gesture. It can also be a standard picture with flowers, birds, mountains and other scenery.
  • that is, when the preview image is analyzed with the preset model, not only are the shooting parameters obtained, but the similarity to the reference picture is also analyzed to determine whether the shooting condition is satisfied.
  • the trained image processing algorithm model includes the trained target object feature model; comparing the shooting preview image with the reference picture using the trained image processing algorithm model and analyzing their similarity includes: analyzing the target object in the preview image using image recognition technology to generate a corresponding target object feature vector, and calculating the similarity between the shooting preview image and the reference picture according to the trained target object feature model and the target object feature vector corresponding to the preview image.
  • calculating the similarity between the shooting preview image and the reference picture according to the trained target object feature model and the target object feature vector corresponding to the preview image includes: taking that feature vector as the input of the trained target object feature model, and calculating the similarity between the shooting preview image and the reference picture through the target object feature model.
  • in other embodiments, comparing the shooting preview image with the reference picture using the trained image processing algorithm model and analyzing their similarity includes: acquiring the pixel information of the shooting preview image, comparing it with the pixel information of the reference picture, and obtaining the similarity between the two. That is, in other embodiments, the similarity between the preview image and the reference picture is obtained by comparing pixel information such as the pixel grayscale values of the two images (a sketch follows).
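For the pixel-level comparison mentioned here, one assumed realization is to convert both images to grayscale and derive a similarity from the mean absolute difference, as sketched below; the patent does not fix a particular pixel metric.

```python
# Sketch (assumption): similarity from pixel grayscale values, using the
# mean absolute difference between equally sized grayscale images.
import numpy as np

def grayscale(img_rgb: np.ndarray) -> np.ndarray:
    return img_rgb.astype(float).mean(axis=-1)       # simple channel average

def pixel_similarity(preview_rgb: np.ndarray, reference_rgb: np.ndarray) -> float:
    a, b = grayscale(preview_rgb), grayscale(reference_rgb)
    mad = np.abs(a - b).mean()                        # 0..255
    return 1.0 - mad / 255.0                          # 1.0 = identical

a = np.random.randint(0, 256, (120, 160, 3))
print(pixel_similarity(a, a))                         # 1.0
```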
  • in still other embodiments, analyzing the shooting preview image with the preset model to determine whether the shooting condition is currently satisfied includes: analyzing the target object in the shooting preview image using image recognition technology to generate a corresponding target object feature vector; taking that feature vector as the input of the image processing algorithm model to obtain an analysis result including identification information indicating whether the shooting condition is currently satisfied; and determining whether the shooting condition is satisfied according to the analysis result.
  • the identification information may be an identifier such as 1 or 0 indicating whether the shooting condition is currently satisfied; more specifically, an identifier of 1 indicates that the shooting condition is satisfied, and an identifier of 0 indicates that it is not. Determining whether the shooting condition is satisfied according to the analysis result then includes: determining that the shooting condition is currently satisfied when the analysis result contains the identifier indicating that the condition is satisfied, and determining that it is not currently satisfied when the analysis result contains the identifier indicating that the condition is not satisfied.
  • the target object may include an object such as a face, a hand, or a specific scene; the target object feature model may include an expression feature model, a gesture feature model, a scene feature model, and the like; and the analyzed target object feature vector may include expression feature vectors such as smiling, angry, or yawning, gesture feature vectors such as an "OK" gesture or a "V" gesture, or scene feature vectors such as flowers, birds, and mountains.
  • the trained expression feature model may be completed by the following training: providing, in an initial training set, multiple photos of faces with different expressions; and using face recognition technology to perform facial expression analysis on the people in those photos to generate a corresponding expression feature vector Xi.
  • X1 represents the size of the eye opening
  • X2 represents the degree of the mouth angle rising
  • X3 represents the size of the mouth opening
  • a training sample set is established from the expression feature vectors and the similarity labels between the corresponding photos and the reference picture; the sample set is then used for training to obtain the trained expression feature model (a toy sketch of such a vector and its similarity is given below).
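As a toy illustration of the expression feature vector X = (X1, X2, X3) defined above and a similarity score relative to a reference picture, the sketch below uses a simple normalized distance; the weighting and the similarity definition are assumptions, not the patent's formula.

```python
# Sketch (assumption): expression feature vector per the description,
# X1 = eye opening, X2 = mouth-corner rise, X3 = mouth opening (all 0..1),
# with a toy similarity to the reference picture's vector.
from dataclasses import dataclass
import math

@dataclass
class ExpressionVector:
    x1_eye_opening: float
    x2_mouth_corner_rise: float
    x3_mouth_opening: float

    def as_tuple(self):
        return (self.x1_eye_opening, self.x2_mouth_corner_rise, self.x3_mouth_opening)

def expression_similarity(sample: ExpressionVector, reference: ExpressionVector) -> float:
    dist = math.dist(sample.as_tuple(), reference.as_tuple())
    return max(0.0, 1.0 - dist / math.sqrt(3))        # normalize to 0..1

laugh_reference = ExpressionVector(0.6, 0.9, 0.7)
preview = ExpressionVector(0.5, 0.8, 0.6)
print(expression_similarity(preview, laugh_reference))
```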
  • the similarity between the shooting preview screen and the reference picture may be style similarity, color similarity, element layout similarity, grayscale similarity, and the like.
  • the shooting preview image is analyzed with the preset model to determine the shooting parameters.
  • in an embodiment, the shooting operation is a photographing operation, and performing the shooting operation under the control of the shooting parameters includes: performing a photographing operation controlled by the shooting parameters to obtain a photo corresponding to the current shooting preview image.
  • in an embodiment, the shooting operation is a continuous shooting (burst) operation, and performing the shooting operation under the control of the shooting parameters includes: controlling the continuous shooting operation to be performed to obtain a plurality of photos including the photo corresponding to the current shooting preview image. Further steps may be included: analyzing the plurality of photos obtained by the continuous shooting operation to determine the best photo, retaining the best photo, and deleting the other photos obtained by the continuous shooting operation.
  • in an embodiment, the shooting operation is a video shooting operation, and performing the shooting operation under the control of the shooting parameters includes: performing a video shooting operation controlled by the shooting parameters to obtain a video file whose starting frame corresponds to the current shooting preview image. The method may further include: after the video file is captured, comparing the video frames in the captured file to determine the best frame, and extracting the best frame to save it as a photo (a selection sketch follows).
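Both the burst case and the video case reduce to scoring a set of candidate images and keeping the best one. The sketch below assumes a `score` callable (for example a satisfaction model or a similarity-to-reference function); all names and values are illustrative.

```python
# Sketch (assumption): pick the best photo from a burst, or the best frame
# from a captured video, by scoring each candidate (satisfaction or
# similarity to a reference picture) and keeping only the winner.
from typing import Any, Callable, Sequence

def select_best(candidates: Sequence[Any], score: Callable[[Any], float]):
    best = max(candidates, key=score)
    others = [c for c in candidates if c is not best]
    return best, others        # keep `best`; `others` can be deleted

# usage with a stand-in scoring function
frames = ["f0", "f1", "f2"]
scores = {"f0": 0.4, "f1": 0.9, "f2": 0.7}
best, discard = select_best(frames, score=lambda f: scores[f])
print(best, discard)           # f1 ['f0', 'f2']
```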
  • by analyzing the shooting preview image to determine whether the shooting condition is currently satisfied, whether the user wants the picture taken can be inferred from the content of the preview image, so the current moment can be captured in time; the shooting parameters are also determined automatically from the content of the preview image, so the shooting operation is performed with parameters matched to the current preview image, ensuring high shooting quality.
  • the step S41 corresponds to the step S11 in FIG. 1 .
  • Step S45 corresponds to step S13 in FIG. 1 and steps S23 and S25 in FIG. 2, and the related descriptions can be referred to each other.
  • in an embodiment, step S43 is performed before step S45; that is, the shooting parameters, such as the shutter time or the aperture size, are determined by analyzing the shooting preview image only after it has been determined that the shooting condition is satisfied. This avoids determining the shooting parameters for every frame and thus avoids wasting computing resources.
  • in another embodiment, step S43 and step S45 are performed simultaneously; that is, the shooting preview image is analyzed with the preset model to determine at the same time whether the shooting condition is met and what the shooting parameters are.
  • the target object and the target object feature model for determining whether the shooting condition is satisfied may be the same as the target object and the target object feature model for determining the shooting parameters.
  • the target object is a facial expression
  • the target object feature model is an expression feature model.
  • analyzing the shooting preview image with the preset model may include: analyzing the facial expression in the preview image using image recognition technology to generate a corresponding expression feature vector; obtaining, according to the expression feature model and that expression feature vector, either the similarity to the reference picture or identification information directly indicating whether the shooting condition is satisfied; and obtaining, according to the expression feature model and the expression feature vector, either the expression feature or directly the shooting parameters. By analyzing the facial expression in the shooting preview image in this way, it is possible to simultaneously determine whether the shooting condition is satisfied and determine the shooting parameters.
  • the target object and the target object feature model for determining whether the shooting condition is satisfied may be different from the target object and the target object feature model for determining the shooting parameters.
  • the target object and target object feature model used to determine whether the shooting condition is satisfied are a first target object and a first target object feature model, respectively, and the target object and target object feature model used to determine the shooting parameters are a second target object and a second target object feature model, respectively.
  • the target object and the target object feature model for determining whether the shooting condition is satisfied are a face and an expression feature model, respectively
  • the target object and the target object feature model for determining the shooting parameters are a gesture and a gesture feature model, respectively.
  • analyzing the shooting preview image with the preset model to obtain the analysis result may include: analyzing the facial expression in the preview image using image recognition technology to generate a corresponding expression feature vector, and analyzing the gesture in the preview image to obtain a gesture feature vector; obtaining, according to the expression feature model and the expression feature vector, either the similarity to the reference picture or identification information directly indicating whether the shooting condition is satisfied; and obtaining, according to the gesture feature model and the gesture feature vector, either the gesture feature or directly the shooting parameters. Thus, whether the shooting condition is satisfied is determined from the facial expression in the preview image, and the shooting parameters are determined from the gesture in the preview image.
  • FIG. 6 is a flowchart of a shooting control method in a sixth embodiment of the present application.
  • the photographing control method includes:
  • Step S51 corresponds to step S11 in FIG. 1.
  • S53 The shooting preview screen is analyzed by using a preset model to determine whether the shooting condition is currently satisfied and the shooting parameters for shooting are determined.
  • in an embodiment, step S53 includes: taking all the pixels of the preview image as the input of the neural network model, and calculating and outputting, through the neural network model, information such as whether the shooting condition is satisfied and the shooting parameters.
  • in another embodiment, step S53 includes: determining the expression similarity according to the trained expression feature model and the expression feature vector corresponding to the shooting preview image, and determining the gesture feature corresponding to the preview image according to the trained gesture feature model and the gesture feature vector corresponding to the preview image. When the expression similarity is greater than the similarity preset threshold, it is determined that the shooting condition is satisfied; the gesture feature, for example the distance between the index finger and the thumb, is obtained according to the gesture feature model and the gesture feature vector, and the shooting parameter corresponding to the obtained gesture feature is determined from the predefined correspondence between gesture features and shooting parameters, for example the shutter time is determined from the correspondence between the index-finger-to-thumb distance and the shutter time.
  • in still another embodiment, step S53 may include: taking the expression feature vector corresponding to the shooting preview image as the input of the trained expression feature model to obtain identification information indicating whether the shooting condition is satisfied, and taking the gesture feature vector corresponding to the preview image as the input of the trained gesture feature model to obtain the shooting parameters, thereby obtaining information that includes both the identification information and the shooting parameters; whether the shooting condition is met is then determined from the identification information output by the trained expression feature model, and the shooting parameters are determined from the output of the trained gesture feature model (a combined-decision sketch follows).
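A hedged sketch of this combined decision, in which an expression model decides whether to shoot and a gesture model supplies the parameters. The `predict` interface and the stub models are hypothetical stand-ins, not an API defined by the patent.

```python
# Sketch (assumption): combine the two trained models described above -- the
# expression model decides whether the shooting condition holds, the gesture
# model supplies the shooting parameters. `predict` is a hypothetical API.
def decide_shot(expression_model, gesture_model, expr_vec, gesture_vec,
                similarity_threshold=0.8):
    expr_similarity = expression_model.predict(expr_vec)
    if expr_similarity < similarity_threshold:
        return False, None             # shooting condition not met
    return True, gesture_model.predict(gesture_vec)

# toy stand-ins for the trained models
class _Stub:
    def __init__(self, fn):
        self.predict = fn

expr_model = _Stub(lambda v: sum(v) / len(v))                    # fake similarity
gest_model = _Stub(lambda v: {"shutter_s": 0.01 + 0.1 * v[0]})   # fake parameters
print(decide_shot(expr_model, gest_model, [0.9, 0.8, 0.9], [0.3]))
```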
  • Step S55 corresponds to step S47 in FIG. 5.
  • step S53 corresponds to steps S43 and S45 in FIG. 5; for how it is determined whether the shooting condition is currently satisfied and how the shooting parameters are obtained from the analysis result, reference may further be made to the description of steps S43 to S45 in FIG. 5.
  • in this way, analyzing the shooting preview image determines at the same time whether the shooting condition is satisfied and what the shooting parameters are; when the shooting condition is satisfied, the shooting operation is performed with those parameters, so shooting can be carried out quickly with well-suited parameters and the moment can be captured in time with high shooting quality.
  • FIG. 7 is a flowchart of a shooting control method in a seventh embodiment of the present application.
  • the photographing control method includes:
  • Step S61 corresponds to step S11 in FIG. 1.
  • the shooting preview screen is analyzed by using a preset model to determine whether the shooting condition is currently satisfied and the shooting parameters for shooting are determined.
  • step S65 includes: determining that the shooting condition is satisfied when the similarity between the shooting preview screen and the reference picture exceeds the similarity preset threshold, and controlling to perform the continuous shooting operation.
  • in another embodiment, step S65 may further include: when the satisfaction degree exceeds the satisfaction preset threshold, determining that the shooting condition is satisfied, and controlling the continuous shooting operation to be performed.
  • the requirement of the shooting condition when performing the continuous shooting operation may be slightly lower than the requirement when the photographing operation is performed.
  • for example, the similarity preset threshold or the satisfaction preset threshold used when performing the continuous shooting operation may be slightly lower than that used when performing the photographing operation; for example, the threshold for the continuous shooting operation may be 70%, which is lower than the threshold of 80% or more used for the photographing operation.
  • the reference picture is a picture with a laughing expression
  • the expression feature vector used for the comparison is the expression feature vector X2 indicating the degree to which the mouth corners rise
  • the continuous shooting operation is triggered when it is determined that the mouth-corner rise reaches 70% of that in the reference picture.
  • since the user's smile usually lasts only a short time before reaching its maximum level, this expression can be captured by the continuous shooting operation, which helps ensure that a photo with the best shooting effect is obtained.
  • Steps S61 to S65 correspond to steps S51 to S55 in FIG. 6, respectively.
  • the seventh embodiment shown in FIG. 7 may further include the following steps:
  • S67 Analyze a plurality of photos obtained by the continuous shooting operation to determine the best photo.
  • step S67 may include: analyzing the plurality of photos by using the trained neural network model to obtain satisfaction, and determining the photo with the highest satisfaction as the best photo.
  • step S67 may include: comparing a plurality of photos obtained by continuous shooting with a reference picture, and determining a photo with the highest similarity with the reference picture as the best photo.
  • the shooting control method may further include:
  • for example, the electronic device includes a memory in which a plurality of albums are created; "retaining the best photo" means storing the best photo in a certain album, such as the camera album, while deleting the other photos effectively avoids occupying too much storage space.
  • FIG. 8 is a flowchart of a shooting control method in an eighth embodiment of the present application.
  • the photographing control method includes:
  • Step S71 corresponds to step S11 in FIG. 1.
  • the shooting preview screen is analyzed by using a preset model to determine whether the shooting condition is currently satisfied and the shooting parameters for shooting are determined.
  • the requirement of the shooting condition when performing the video shooting operation may also be slightly lower than the requirement when the photographing operation is performed.
  • for example, the similarity preset threshold or the satisfaction preset threshold used when performing the video shooting operation may be slightly lower than that used when performing the photographing operation.
  • step S77 may include: when the similarity between the shooting preview screen and the reference picture exceeds the similarity preset threshold, determining that the shooting condition is satisfied, and controlling to perform the video shooting operation.
  • the method may further include: when the satisfaction exceeds the satisfaction preset threshold, determining that the shooting condition is met, and controlling to perform the video shooting operation.
  • the eighth embodiment shown in FIG. 8 may further include the following steps:
  • in an embodiment, step S77 may include: analyzing the plurality of video frames with the trained neural network model to obtain their satisfaction degrees, and determining the video frame with the highest satisfaction as the best frame.
  • in another embodiment, step S77 may include: comparing the plurality of video frames in the video file with the reference picture, and determining the video frame with the highest similarity to the reference picture as the best frame.
  • the shooting control method may further include:
  • for example, the electronic device includes a memory in which a plurality of albums are created; "extracting the best frame to save as a photo" means storing the best frame in photo format in an album, for example in the camera album.
  • FIG. 9 is a flowchart of a shooting control method in a ninth embodiment of the present application.
  • the photographing control method may include the following steps:
  • S83 analyzing the shooting preview screen by using a preset model to determine whether the shooting condition is currently satisfied and determining a shooting parameter for shooting
  • the steps of this embodiment correspond to steps S51 to S55 in FIG. 6 and to the corresponding steps in the flowcharts of the other embodiments, such as the embodiment corresponding to FIG. 7; reference may be made to the descriptions of the relevant steps in those embodiments.
  • S87 Acquire the user's satisfaction feedback information about the automatic shooting.
  • the user may be prompted to perform satisfaction evaluation on the automatic photographing by generating prompt information, for example, generating a prompt box including “satisfactory” and “unsatisfactory” options.
  • the satisfaction feedback information on the automatic photographing is then obtained according to the user's selection.
  • in other embodiments, the user's satisfaction with the automatic shooting is obtained by detecting the user's operations on the photo or video produced by the automatic shooting. For example, if it is detected that the user deletes the photo or video obtained by the automatic shooting, it is determined that the user is not satisfied, and unsatisfied feedback information is obtained; if it is detected that the user marks the photo or video as a favorite or performs a sharing operation on it, it is determined that the user is satisfied, and satisfied feedback information is obtained (a sketch of this mapping follows).
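A sketch of how such implicit feedback might be collected; the event names are hypothetical and only mirror the examples above (delete, favorite, share).

```python
# Sketch (assumption): derive satisfaction feedback from the user's later
# actions on an auto-captured photo or video. Event names are hypothetical.
def feedback_from_action(action: str):
    """Map a user action on the auto-shot result to a feedback label."""
    if action == "delete":
        return 0           # unsatisfied
    if action in ("favorite", "share"):
        return 1           # satisfied
    return None            # no signal

for act in ("delete", "favorite", "share", "view"):
    print(act, "->", feedback_from_action(act))
```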
  • S89 Output the satisfaction feedback information of the user to the current automatic shooting to the currently used model, so that the currently used model uses the satisfaction feedback information for optimal training.
  • in this way, the training of the model can be continuously optimized, so that automatic shooting in subsequent use becomes more accurate.
  • the currently used model may be a model whose training has been completed or a model that has not yet finished training; when training has been completed, the model can be further optimized, and when the model has not yet been trained, the feedback allows the training to be accomplished better.
  • in some embodiments, the preset model in any of the first to ninth embodiments may also be an untrained model; after the user starts the automatic shooting function, or after the model is started automatically, automatic shooting is performed according to the model, and the current model is optimized and trained based on the satisfaction feedback provided by the user.
  • when the preset model is an untrained model, a picture is automatically acquired each time the user shoots and used as a positive training sample, optionally together with the shooting parameters used at that moment; frames sampled according to a preset rule while the user does not manually shoot are used as negative samples; and the preset model is optimized gradually until the number of training iterations reaches a preset count or the satisfaction in subsequent user feedback exceeds a preset ratio, at which point training is complete. Because the user trains the model personally rather than reusing someone else's model, better personalization can be achieved (an incremental-training sketch follows).
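The sketch below outlines this incremental training loop under stated assumptions: frames captured when the user shoots manually become positive samples, periodically sampled non-shooting frames become negative samples, and training stops once an update count or a feedback-satisfaction ratio is reached. The `partial_fit` interface, the feature representation, and the thresholds are illustrative placeholders.

```python
# Sketch (assumption): incrementally train a personal model from the user's
# own shooting behaviour. `model.partial_fit`, the feature extraction and the
# stopping thresholds are illustrative placeholders, not the patent's API.
class PersonalShootingTrainer:
    def __init__(self, model, max_updates=1000, target_satisfaction=0.9):
        self.model = model
        self.updates = 0
        self.max_updates = max_updates
        self.target_satisfaction = target_satisfaction
        self.feedback = []                   # 1 = satisfied, 0 = unsatisfied

    def on_user_shot(self, frame_features, shooting_params=None):
        # frame captured when the user shot manually -> positive sample
        self.model.partial_fit([frame_features], [1])
        self.updates += 1

    def on_idle_frame(self, frame_features):
        # frame sampled while the user chose not to shoot -> negative sample
        self.model.partial_fit([frame_features], [0])
        self.updates += 1

    def on_feedback(self, satisfied: bool):
        self.feedback.append(1 if satisfied else 0)

    def training_done(self) -> bool:
        enough_updates = self.updates >= self.max_updates
        rate = sum(self.feedback) / len(self.feedback) if self.feedback else 0.0
        return enough_updates or rate >= self.target_satisfaction
```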
  • FIG. 10 is a block diagram showing a schematic partial structure of an electronic device 100 according to an embodiment of the present application.
  • the electronic device 100 includes a processor 10, a memory 20, and a camera 30.
  • the camera 30 includes at least a rear camera 31 and a front camera 32.
  • the rear camera 31 is used to capture images behind the electronic device 100 and can be used by the user to shoot subjects such as other people.
  • the front camera 32 is used to capture an image in front of the electronic device 100, and can be used to perform a self-photographing and the like.
  • the models in FIGS. 1-9 may be programs such as specific algorithm functions running in processor 10, such as neural network algorithm functions, image processing algorithm functions, and the like.
  • the electronic device 100 may further include a model processor that is independent of the processor 10.
  • in that case, the models in FIGS. 1-9 run in the model processor; the processor 10 may generate corresponding instructions as needed to trigger the model processor to run the corresponding model, and the model processor outputs the model's result to the processor 10 for use, for example to control a shooting operation.
  • Program instructions are stored in the memory 20.
  • the processor 10 is configured to call the program instructions stored in the memory 20 to execute the photographing control method in any of the embodiments shown in FIGS. 1 to 9.
  • the processor 10 is configured to call a program instruction stored in the memory 20 to execute the following shooting control method:
  • the shooting preview screen is acquired by the camera 30; the shooting preview screen is analyzed by using a preset model to obtain shooting parameters for shooting.
  • the operation of acquiring the shooting preview screen is performed by the camera in response to the operation of turning on the camera, that is, the shooting preview screen is acquired by the camera.
  • the operation of turning on the camera is a click operation on the photographing application icon, that is, when the camera is turned on in response to a click operation on the photographing application icon, the shooting preview screen is acquired by the camera.
  • the operation of turning on the camera is a specific operation of a physical button of the electronic device.
  • for example, the electronic device includes a volume up key and a volume down key, and the operation of turning on the camera is pressing the volume up key and the volume down key simultaneously.
  • for example, the operation of turning on the photographing application is pressing the volume up key and the volume down key together within a preset time (for example, 2 seconds).
  • the operation of turning on the camera may also be an operation of a preset touch gesture input in any display interface of the electronic device.
  • for example, the user may input a circular touch track as the preset touch gesture to turn on the camera.
  • the operation of turning on the camera may also be an operation of a preset touch gesture input on the touch screen when the electronic device is in a black screen state.
  • the operation of turning on the camera is an operation that presses the shutter button/power button of the camera to trigger the camera to be in an activated state.
  • acquiring a shooting preview screen is to obtain a shooting preview screen in real time through a camera.
  • the preset model may be a trained model or an untrained model.
  • the preset model is a trained neural network model
  • the shooting preview image is analyzed with the preset model to obtain shooting parameters for shooting, including: taking all the pixels of the preview image as input, and calculating and outputting the shooting parameters for shooting through the neural network model.
  • the preset model is a trained image processing algorithm model
  • the trained image processing algorithm model includes the trained target object feature model
  • analyzing the shooting preview image with the preset model to obtain shooting parameters for shooting includes: analyzing the target object in the shooting preview image using image recognition technology to generate a corresponding target object feature vector; obtaining an analysis result according to the target object feature model and the target object feature vector corresponding to the preview image; and determining the shooting parameters for shooting based on the analysis result.
  • obtaining the analysis result according to the target object feature model and the target object feature vector corresponding to the shooting preview image includes: calculating, with the trained target object feature model and that feature vector, the analysis result of the target object feature corresponding to the shooting preview image.
  • determining the shooting parameters for the shooting according to the analysis result includes: determining, according to the correspondence between the preset target object features and the shooting parameters, the shooting parameters corresponding to the obtained target object features.
  • the preset model is a trained image processing algorithm model
  • the trained image processing algorithm model includes the trained target object feature model
  • the preset model is used to analyze the shooting preview image and obtain the shooting parameters for shooting, including: taking the target object feature vector corresponding to the shooting preview image as the input of the trained target object feature model, and obtaining the shooting parameters through the trained target object feature model.
  • the analysis result is obtained from the shooting preview image acquired in real time.
  • the shooting preview image is analyzed with the preset model to obtain shooting parameters as follows: the currently acquired preview image is analyzed at preset intervals (for example, every 0.2 seconds) to obtain the shooting parameters.
  • the target object may be a hand, a face, a specific scene, or the like; the target object feature model includes a gesture feature model, an expression feature model, a scene feature model, and the like; and the analyzed target object feature vector may include a gesture feature vector, an expression feature vector, a scene feature vector, and the like.
  • the shooting parameters can be determined in time according to the shooting preview screen, so that the current exciting moment can be captured in time with high shooting quality according to the shooting parameters when shooting is required.
  • the processor 10 can be a microcontroller, a microprocessor, a single chip microcomputer, a digital signal processor, or the like.
  • the memory 20 can be any storage device that can store information such as a memory card, a solid state memory, a micro hard disk, an optical disk, or the like.
  • the electronic device 100 further includes an input unit 40 and an output unit 50.
  • the input unit 40 may include a touch panel, a mouse, a microphone, a physical button including a power key, a volume key, and the like.
  • the output unit 50 can include a display screen, a speaker, and the like.
  • the touch panel of the input unit 40 and the display screen of the output unit 50 are integrated to form a touch screen while providing the functionality of touch input and display output.
  • the electronic device 100 can be a portable electronic device having a camera 30, such as a mobile phone, a tablet computer, or a notebook computer, and can also be a camera device such as a camera or a video camera.
  • the present application further provides a computer readable storage medium, where a plurality of program instructions are stored in the computer readable storage medium, and the program instructions, when called by the processor, execute all or part of the steps in any of the shooting control methods shown in the figures.
  • the computer storage medium is the memory 20, and may be any storage device that can store information such as a memory card, a solid state memory, a micro hard disk, an optical disk, or the like.
  • the photographing control method and the electronic device 100 of the present application can automatically determine whether the photographing condition is satisfied according to the photographing preview screen, and perform photographing when the photographing condition is satisfied, and can capture the highlight moment including the content corresponding to the current photographing preview screen in time.
  • embodiments of the present invention can be provided as a method, apparatus (device), or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • the computer program is stored/distributed in a suitable medium, provided with other hardware or as part of the hardware, or in other distributed forms, such as over the Internet or other wired or wireless telecommunication systems.

Abstract

Provided is a photographing control method, comprising: acquiring a preview image; and using a preset model to analyze the preview image so as to obtain a photographing parameter to be used in a photographing operation. Also provided is an electronic device implementing the photographing control method. In the photographing control method and the electronic device of the present invention, a preview image is analyzed to determine a current photographing parameter. Since a photographing parameter can be timely determined according to a preview image, the photographing parameter can be used to timely capture a high quality image of a current scene of interest.

Description

拍摄控制方法及电子装置Shooting control method and electronic device 技术领域Technical field
本申请涉及电子设备领域,尤其涉及一种用于电子装置的拍摄控制方法及所述电子装置。The present application relates to the field of electronic devices, and in particular, to a photographing control method for an electronic device and the electronic device.
背景技术Background technique
现在，随着人们生活水平的提高，拍照已经成了为生活中并不可少的常用功能。现在，不论是照相机还是具有相机功能的手机、平板电脑等电子装置，像素都越来越高，拍照质量都越来越好。然而，目前的照相机、手机等电子装置，在进行拍照控制时，往往还需要用户去手动设置拍摄参数，或者根据当前的环境去设置拍摄参数，然而，目前通过手动设置拍摄参数或者根据当前的环境去设置拍摄参数都有一定的滞后性，导致当设置好拍摄参数后，精彩画面已经一闪而过了，往往导致了遗憾地错过了精彩画面。Nowadays, with the improvement of people's living standards, taking pictures has become an indispensable everyday function. Whether it is a camera or an electronic device with a camera function such as a mobile phone or a tablet computer, pixel counts are getting higher and the quality of photographs is getting better. However, when performing photographing control, current electronic devices such as cameras and mobile phones often still require the user to set shooting parameters manually, or to set shooting parameters according to the current environment. Both approaches involve a certain lag, so that by the time the shooting parameters have been set, the wonderful picture has already flashed by, and the highlight moment is regrettably missed.
发明内容Summary of the invention
本申请提供一种拍摄控制方法及电子装置,能够及时进行拍摄参数的设置,从而能够以较高的拍摄质量及时捕捉到精彩的瞬间。The present application provides a shooting control method and an electronic device, which can set the shooting parameters in time, so that a wonderful moment can be captured in time with a high shooting quality.
一方面,提供一种拍摄控制方法,所述拍摄控制方法包括:获取拍摄预览画面;采用预设模型对所述拍摄预览画面进行分析而得到拍摄参数以用于拍摄。In one aspect, a shooting control method is provided, the shooting control method comprising: acquiring a shooting preview screen; analyzing the shooting preview image with a preset model to obtain shooting parameters for shooting.
另一方面,提供一种电子装置,所述电子装置包括摄像头、存储器以及处理器。所述存储器用于存储程序指令。所述处理器用于调用所述程序指令执行一种拍摄控制方法,所述拍摄控制方法包括:通过摄像头获取拍摄预览画面;采用预设模型对所述拍摄预览画面进行分析而得到拍摄参数以用于拍摄。In another aspect, an electronic device is provided, the electronic device including a camera, a memory, and a processor. The memory is for storing program instructions. The processor is configured to execute the shooting control method by calling the program instruction, the shooting control method includes: acquiring a shooting preview image by using a camera; analyzing the shooting preview image by using a preset model to obtain shooting parameters for use in Shooting.
再一方面，还提供一种计算机可读存储介质，所述计算机可读存储介质存储有程序指令，所述程序指令供计算机调用后执行一种拍摄控制方法，所述拍摄控制方法包括：获取拍摄预览画面；采用预设模型对所述拍摄预览画面进行分析而得到拍摄参数以用于拍摄。In still another aspect, a computer readable storage medium is provided. The computer readable storage medium stores program instructions which, when called by a computer, execute a shooting control method. The shooting control method includes: acquiring a shooting preview picture; and analyzing the shooting preview picture with a preset model to obtain shooting parameters for shooting.
本申请的拍摄控制方法及电子装置，通过分析拍摄预览画面确定当前的拍摄参数，能够根据拍摄预览画面来及时确定拍摄参数，从而在需要拍摄时能够根据所述拍摄参数以较高的拍摄质量及时捕捉当前精彩的瞬间。The shooting control method and the electronic device of the present application determine the current shooting parameters by analyzing the shooting preview screen. Since the shooting parameters can be determined in time according to the shooting preview screen, the current highlight moment can be captured in time with high shooting quality according to those parameters when shooting is required.
附图说明DRAWINGS
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还 可以根据这些附图获得其他的明显变形方式。In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings to be used in the embodiments will be briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application, Those skilled in the art can also obtain other obvious modifications according to these drawings without any creative work.
图1为本申请第一实施例中的拍摄控制方法的流程图。1 is a flow chart of a photographing control method in a first embodiment of the present application.
图2为本申请第二实施例中的拍摄控制方法的流程图2 is a flowchart of a photographing control method in a second embodiment of the present application
图3为本申请第三实施例中的拍摄控制方法的流程图。FIG. 3 is a flowchart of a photographing control method in a third embodiment of the present application.
图4为本申请第四实施例中的拍摄控制方法的流程图。4 is a flow chart of a photographing control method in a fourth embodiment of the present application.
图5为本申请第五实施例中的拍摄控制方法的流程图。FIG. 5 is a flowchart of a photographing control method in a fifth embodiment of the present application.
图6为本申请第六实施例中的拍摄控制方法的流程图6 is a flowchart of a photographing control method in a sixth embodiment of the present application
图7为本申请第七实施例中的拍摄控制方法的流程图。FIG. 7 is a flowchart of a photographing control method in a seventh embodiment of the present application.
图8为本申请第八实施例中的拍摄控制方法的流程图。FIG. 8 is a flowchart of a photographing control method in an eighth embodiment of the present application.
图9为本申请第九实施例中的拍摄控制方法的流程图。FIG. 9 is a flowchart of a photographing control method in a ninth embodiment of the present application.
图10为本申请一实施例中的电子装置的示意出部分结构的框图。FIG. 10 is a block diagram showing a schematic partial structure of an electronic device according to an embodiment of the present application.
具体实施方式detailed description
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。The technical solutions in the embodiments of the present application are clearly and completely described in the following with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without departing from the inventive scope are the scope of the present application.
本申请的拍摄控制方法可应用于一电子装置中,电子装置可以包括摄像头,电子装置可通过摄像头获取拍摄预览画面并显示拍摄预览画面,电子装置可通过摄像头进行拍照、连拍、视频拍摄等操作。其中,摄像头包括前置摄像头和后置摄像头,拍照、连拍、视频拍摄等操作可为通过后置摄像头进行的拍摄,也可为通过前置摄像头进行的自拍。The shooting control method of the present application can be applied to an electronic device. The electronic device can include a camera. The electronic device can acquire a shooting preview screen and display a shooting preview image through the camera, and the electronic device can perform photographing, continuous shooting, video shooting, etc. through the camera. . Among them, the camera includes a front camera and a rear camera, and the operations of photographing, continuous shooting, video shooting, etc. can be performed by a rear camera or a self-timer by a front camera.
请参阅图1,为本申请第一实施例中的拍摄控制方法的流程图。拍摄控制方法应用于一电子装置中。在第一实施例中,拍摄控制方法包括如下步骤:Please refer to FIG. 1 , which is a flowchart of a shooting control method in a first embodiment of the present application. The shooting control method is applied to an electronic device. In the first embodiment, the photographing control method includes the following steps:
S11:获取拍摄预览画面。S11: Acquire a shooting preview screen.
在一些实施例中,获取拍摄预览画面的操作为响应开启摄像头的操作后通过摄像头来进行的,即,为通过摄像头来获取拍摄预览画面。其中,所述拍摄预览画面即为开启摄像头进行取景时的取景画面。In some embodiments, the operation of acquiring the shooting preview screen is performed by the camera in response to the operation of turning on the camera, that is, the shooting preview screen is acquired by the camera. The shooting preview screen is a framing screen when the camera is turned on for framing.
在一些实施例中,开启摄像头的操作为对拍照应用图标的点击操作,即,在响应对拍照应用图标的点击操作而开启摄像头时,则通过摄像头去获取拍摄预览画面。In some embodiments, the operation of turning on the camera is a click operation on the photographing application icon, that is, when the camera is turned on in response to a click operation on the photographing application icon, the shooting preview screen is acquired by the camera.
或者，在另一些实施例中，开启摄像头的操作为对电子装置的物理按键的特定操作，例如，电子装置包括音量增加键和音量减小键，开启摄像头的操作为对音量增加键和音量减小键的同时按压的操作。进一步的，开启拍照应用的操作为在预设时间（例如2秒）内先后按压音量增加键及音量减小键的操作。Alternatively, in other embodiments, the operation of turning on the camera is a specific operation on a physical button of the electronic device. For example, the electronic device includes a volume up key and a volume down key, and the operation of turning on the camera is simultaneously pressing the volume up key and the volume down key. Further, the operation of starting the photographing application is pressing the volume up key and the volume down key one after the other within a preset time (for example, 2 seconds).
在另一些实施例中，开启摄像头的操作还可为在电子装置的任一显示界面中输入的预设触摸手势的操作，例如，在电子装置的主界面上，用户可输入一个具有环形触摸轨迹的触摸手势而开启摄像头。In other embodiments, the operation of turning on the camera may also be a preset touch gesture input on any display interface of the electronic device. For example, on the main interface of the electronic device, the user may input a touch gesture with a circular touch trajectory to turn on the camera.
在另一些实施例中,开启摄像头的操作还可为在电子装置处于黑屏状态下在触摸屏上输入的预设触摸手势的操作。In other embodiments, the operation of turning on the camera may also be an operation of a preset touch gesture input on the touch screen when the electronic device is in a black screen state.
在一些实施例中,当电子装置为照相机时,开启摄像头的操作为对照相机的快门按键/电源按键进行按压而触发照相机处于启动状态的操作。In some embodiments, when the electronic device is a camera, the operation of turning on the camera is an operation that presses the shutter button/power button of the camera to trigger the camera to be in an activated state.
可选的,本申请中,获取拍摄预览画面为通过摄像头实时获取拍摄预览画面。Optionally, in the present application, acquiring a shooting preview screen is to obtain a shooting preview screen in real time through a camera.
S13:采用预设模型对拍摄预览画面进行分析而得到拍摄参数以用于拍摄。S13: The shooting preview image is analyzed by using a preset model to obtain shooting parameters for shooting.
其中,预设模型可为已训练完成的模型,也可为未训练完成的模型。The preset model may be a trained model or an untrained model.
可选的，在一些实施例中，预设模型为已训练的神经网络模型，采用预设模型对拍摄预览画面进行分析而得到拍摄参数以用于拍摄，包括：将拍摄预览画面的所有像素作为输入，通过神经网络模型进行计算后输出拍摄参数以用于拍摄。Optionally, in some embodiments, the preset model is a trained neural network model, and analyzing the shooting preview image with the preset model to obtain shooting parameters for shooting includes: taking all the pixels of the preview image as input, performing calculation through the neural network model, and outputting the shooting parameters for shooting.
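A minimal Python sketch of such a pixel-to-parameter mapping (assuming a PyTorch environment; the layer sizes, input resolution, and the choice of three output parameters are hypothetical, not the actual model of this application):

    # Illustrative sketch: a small convolutional network mapping a preview frame
    # (all pixels) directly to three shooting parameters.
    import torch
    import torch.nn as nn

    class ParamNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 3)  # -> [shutter_time_s, aperture_f, iso]

        def forward(self, preview):      # preview: (N, 3, H, W), values in [0, 1]
            x = self.features(preview).flatten(1)
            return self.head(x)

    model = ParamNet()                    # in practice this would be a trained model
    preview = torch.rand(1, 3, 240, 320)  # stand-in for the live preview frame
    shutter_s, aperture_f, iso = model(preview)[0].tolist()
    print(shutter_s, aperture_f, iso)     # untrained placeholder values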
可选的,在另一些实施例中,预设模型为已训练的图像处理算法模型,已训练的图像处理算法模型包括已训练的目标对象特征模型,采用预设模型对拍摄预览画面进行分析而得到拍摄参数以用于拍摄,包括:利用图像识别技术对拍摄预览画面中的目标对象进行分析,生成对应的目标对象特征向量;根据已训练的目标对象特征模型以及拍摄预览画面对应的目标对象特征向量得到分析结果;根据所述分析结果确定出所述拍摄参数以用于拍摄。Optionally, in other embodiments, the preset model is a trained image processing algorithm model, and the trained image processing algorithm model includes the trained target object feature model, and the preset preview model is used to analyze the captured preview image. Obtaining shooting parameters for shooting, comprising: using an image recognition technology to analyze a target object in the captured preview image to generate a corresponding target object feature vector; according to the trained target object feature model and the target object feature corresponding to the captured preview image The vector obtains an analysis result; the shooting parameters are determined for the shooting based on the analysis result.
可选的，在一种实现方式中，根据已训练的目标对象特征模型以及拍摄预览画面对应的目标对象特征向量得到分析结果，包括：通过已训练的目标对象特征模型和拍摄预览画面对应的目标对象特征向量计算得到拍摄预览画面对应的目标对象特征这一分析结果。Optionally, in one implementation, obtaining the analysis result according to the trained target object feature model and the target object feature vector corresponding to the captured preview image includes: calculating, through the trained target object feature model and the target object feature vector corresponding to the captured preview image, the target object feature corresponding to the captured preview image as the analysis result.
在一种实现方式中,根据分析结果确定拍摄参数以用于拍摄,包括:根据预设的目标对象特征与拍摄参数的对应关系,确定得到的目标对象特征所对应的拍摄参数。In an implementation manner, determining the shooting parameters for the shooting according to the analysis result includes: determining, according to the correspondence between the preset target object features and the shooting parameters, the shooting parameters corresponding to the obtained target object features.
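A minimal sketch of the correspondence lookup described above; the feature labels and parameter values below are hypothetical examples of a preset table stored on the device:

    # Illustrative "target object feature -> shooting parameter" table and lookup.
    PARAM_TABLE = {
        "smile":     {"shutter_s": 1 / 250, "aperture_f": 2.8, "iso": 200},
        "laugh":     {"shutter_s": 1 / 500, "aperture_f": 2.8, "iso": 400},
        "v_gesture": {"shutter_s": 1 / 125, "aperture_f": 4.0, "iso": 100},
        "landscape": {"shutter_s": 1 / 60,  "aperture_f": 8.0, "iso": 100},
    }
    DEFAULT_PARAMS = {"shutter_s": 1 / 100, "aperture_f": 4.0, "iso": 200}

    def params_for_feature(feature_label: str) -> dict:
        """Return the preset shooting parameters for an analysed target-object feature."""
        return PARAM_TABLE.get(feature_label, DEFAULT_PARAMS)

    print(params_for_feature("laugh"))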
可选的，在另一种实现方式中，预设模型为已训练的图像处理算法模型，已训练的图像处理算法模型包括已训练的目标对象特征模型时，采用预设模型对拍摄预览画面进行分析而得到拍摄参数以用于拍摄，包括：将拍摄预览画面对应的目标对象特征向量作为已训练的目标对象特征模型的输入信息，而通过已训练的目标对象特征模型得出拍摄参数。Optionally, in another implementation, when the preset model is a trained image processing algorithm model and the trained image processing algorithm model includes a trained target object feature model, analyzing the shooting preview image with the preset model to obtain shooting parameters for shooting includes: taking the target object feature vector corresponding to the captured preview image as the input information of the trained target object feature model, and obtaining the shooting parameters through the trained target object feature model.
在一些实施例中，本申请中，分析结果为根据实时获取的拍摄预览画面进行分析而得到相应的分析结果。在另一些实施例中，采用预设模型对拍摄预览画面进行分析而得到拍摄参数：每间隔预设时间（例如0.2秒）对当前获取到的拍摄预览画面进行分析而得到拍摄参数。In some embodiments, the analysis result is obtained by analyzing the shooting preview screen acquired in real time. In other embodiments, the shooting preview screen is analyzed with the preset model at a preset interval (for example, every 0.2 seconds): each time the interval elapses, the currently acquired shooting preview screen is analyzed to obtain the shooting parameters.
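A minimal sketch of the interval-driven analysis described above; get_preview_frame and analyse_preview are hypothetical placeholders for hooks into the camera pipeline:

    # Illustrative loop that re-analyses the current preview at a preset interval.
    import time

    ANALYSIS_INTERVAL_S = 0.2

    def run_periodic_analysis(get_preview_frame, analyse_preview, stop_after_s=2.0):
        start = time.monotonic()
        while time.monotonic() - start < stop_after_s:
            frame = get_preview_frame()
            shooting_params = analyse_preview(frame)  # e.g. shutter/aperture/ISO
            # ...apply shooting_params to the camera here...
            time.sleep(ANALYSIS_INTERVAL_S)

    run_periodic_analysis(lambda: "frame", lambda f: {"shutter_s": 1 / 125})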
其中，本申请中，目标对象可为手部、脸部、特定的景物等；目标对象特征模型相应包括手势特征模型、表情特征模型以及景物特征模型等，分析出的目标对象特征向量可包括手势特征向量、表情特征向量和景物特征向量等，分析出的目标对象特征也可包括手势特征、表情特征和景物特征等。In the present application, the target object may be a hand, a face, a specific scene, or the like; the target object feature model correspondingly includes a gesture feature model, an expression feature model, a scene feature model, and the like; the analyzed target object feature vector may include a gesture feature vector, an expression feature vector, a scene feature vector, and the like; and the analyzed target object feature may include a gesture feature, an expression feature, a scene feature, and the like.
其中,所述拍摄参数包括但不限于:快门时间、光圈大小、感光度等中的至少一个。The shooting parameters include, but are not limited to, at least one of a shutter time, an aperture size, a sensitivity, and the like.
从而,本申请中,通过分析拍摄预览画面确定当前的拍摄参数,能够根据拍摄预览画面来及时确定拍摄参数,从而在需要拍摄时能够根据拍摄参数以较 高的拍摄质量及时捕捉当前精彩的瞬间。Therefore, in the present application, by determining the current shooting parameters by analyzing the shooting preview screen, the shooting parameters can be determined in time according to the shooting preview screen, so that the current exciting moment can be captured in time with a higher shooting quality according to the shooting parameters when shooting is required.
请参阅图2,为本申请第二实施例中的拍摄控制方法的流程图。在第二实施例中,预设模型为已训练的图像处理算法模型,已训练的图像处理算法模型包括已训练的目标对象特征模型,拍摄控制方法包括如下步骤:Please refer to FIG. 2 , which is a flowchart of a shooting control method in a second embodiment of the present application. In the second embodiment, the preset model is a trained image processing algorithm model, and the trained image processing algorithm model includes the trained target object feature model, and the shooting control method includes the following steps:
S21:获取拍摄预览画面。S21: Acquire a shooting preview screen.
S23:利用图像识别技术对拍摄预览画面中的目标对象进行分析,生成对应的目标对象特征向量,以及根据已训练的目标对象特征模型以及拍摄预览画面对应的目标对象特征向量得到分析结果。S23: analyzing the target object in the shooting preview image by using image recognition technology, generating a corresponding target object feature vector, and obtaining an analysis result according to the trained target object feature model and the target object feature vector corresponding to the captured preview image.
可选的,在一种实现方式中,根据已训练的目标对象特征模型以及拍摄预览画面对应的目标对象特征向量得到分析结果,包括:通过已训练的目标对象特征模型和拍摄预览画面对应的目标对象特征向量计算得到拍摄预览画面对应的目标对象特征这一分析结果。Optionally, in an implementation manner, the analysis result is obtained according to the trained target object feature model and the target object feature vector corresponding to the captured preview image, including: the target object model that has been trained and the target corresponding to the captured preview image The object feature vector calculates the analysis result of the target object feature corresponding to the captured preview screen.
其中,目标对象可包括手部、脸部、特定景物等对象,目标对象特征模型可相应包括手势特征模型、表情特征模型、景物特征模型等,分析出的目标对象特征向量可包括大笑、生气、打哈欠等表情特征向量,或“OK”手势、“V字形”等手势特征向量,或花朵、鸟、山等景物特征向量,分析出的目标特征也可包括大笑、生气、打哈欠等表情特征,或“OK”手势、“V字形”等手势特征,或花朵、鸟、山等景物特征。The target object may include an object such as a hand, a face, and a specific scene. The target object feature model may include a gesture feature model, an expression feature model, a scene feature model, etc., and the analyzed target object feature vector may include laughing and angry. Expression eigenvectors such as yawning, or gesture feature vectors such as "OK" gestures, "V-shaped", or landscape feature vectors such as flowers, birds, and mountains. The analyzed target features may also include laughter, anger, yawning, etc. Emoticon features, or "OK" gestures, "V-shaped" and other gesture features, or landscape features such as flowers, birds, mountains.
以目标对象为手部,目标对象特征模型为手势特征模型为例,采用预设模型对拍摄预览画面进行分析而得到分析结果,可进一步具体包括:利用图像识别技术对拍摄预览画面中的手势进行分析,生成对应的手势特征向量;根据手势特征模型以及拍摄预览画面对应的手势特征向量得到分析结果。Taking the target object as the hand and the target object feature model as the gesture feature model as an example, using the preset model to analyze the captured preview image and obtaining the analysis result may further specifically include: using the image recognition technology to perform the gesture in the captured preview image. The analysis generates a corresponding gesture feature vector; and obtains an analysis result according to the gesture feature model and the gesture feature vector corresponding to the captured preview image.
进一步的,在一种实现方式中,根据手势特征模型以及拍摄预览画面对应的手势特征向量得到分析结果,包括:通过已训练的手势特征模型和拍摄预览画面对应的目标对象特征向量计算得到拍摄预览画面对应的手势特征这一分析结果。Further, in an implementation manner, the analysis result is obtained according to the gesture feature model and the gesture feature vector corresponding to the captured preview image, including: calculating a shooting preview by using the trained gesture feature model and the target object feature vector corresponding to the captured preview image The analysis result of the gesture feature corresponding to the picture.
在一些实施例中，所述已训练的目标对象特征模型包括了多个基准图像以及多个基准图像的基准目标对象特征向量，所述通过已训练的目标对象特征模型和拍摄预览画面对应的目标对象特征向量计算得到拍摄预览画面对应的目标对象特征这一分析结果，包括：通过所述已训练的目标对象特征模型将拍摄预览画面对应的目标对象特征向量与多个基准图像中的基准目标对象特征向量进行比较，确定相似度最高的基准目标对象特征向量对应的基准图像，并根据所述基准图像得出目标对象特征。例如，在确定出基准图像后，根据基准图像的标签来确定目标对象特征。In some embodiments, the trained target object feature model includes a plurality of reference images and the reference target object feature vectors of those reference images. Calculating the target object feature corresponding to the captured preview image through the trained target object feature model and the target object feature vector corresponding to the captured preview image includes: comparing, through the trained target object feature model, the target object feature vector corresponding to the captured preview image with the reference target object feature vectors of the plurality of reference images, determining the reference image whose reference target object feature vector has the highest similarity, and deriving the target object feature from that reference image. For example, after the reference image is determined, the target object feature is determined based on the label of the reference image.
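A minimal sketch of this reference-matching step, taking the reference image whose feature vector is most similar to that of the preview; the labels and feature vectors are hypothetical:

    # Illustrative nearest-reference matching by cosine similarity.
    import numpy as np

    REFERENCE_VECTORS = {              # label -> reference target object feature vector
        "smile": np.array([0.7, 0.9, 0.2]),
        "laugh": np.array([0.9, 1.0, 0.8]),
        "yawn":  np.array([0.3, 0.1, 0.9]),
    }

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def match_feature(preview_vec):
        """Return (best_label, similarity) for the preview's feature vector."""
        return max(
            ((label, cosine(preview_vec, ref)) for label, ref in REFERENCE_VECTORS.items()),
            key=lambda item: item[1],
        )

    print(match_feature(np.array([0.85, 0.95, 0.7])))   # -> ('laugh', ...)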
S25:根据分析结果确定出拍摄参数以用于拍摄。S25: The shooting parameters are determined according to the analysis result for shooting.
在一些实施例中,当分析结果为根据已训练的目标对象特征模型得出的目标对象特征时,根据分析结果确定拍摄参数以用于拍摄,包括:根据预设的目标对象特征与拍摄参数的对应关系,确定得到目标对象特征所对应的拍摄参数。In some embodiments, when the analysis result is a target object feature obtained according to the trained target object feature model, determining a shooting parameter for the shooting according to the analysis result, including: according to the preset target object feature and the shooting parameter Corresponding relationship determines the shooting parameters corresponding to the target object features.
其中,拍摄参数包括但不限于:快门时间、光圈大小、感光度等中的至少一个。The shooting parameters include, but are not limited to, at least one of a shutter time, an aperture size, a sensitivity, and the like.
可选的,预先可以设置:使用目标对象特征来确定拍摄参数,例如预先设置有快门时间和/或光圈大小与目标对象特征的对应关系。例如,设拍摄参数包括快门时间和/或光圈大小,目标对象特征可为表情特征、手势特征、景物特征等时,预先设置有快门时间和/或光圈大小与表情特征的对应关系,或预先设置有快门时间和/或光圈大小与手势特征的对应关系,或者预先设置有快门时间和/或光圈大小与特定景物特征的对应关系。Optionally, it may be set in advance that the target object feature is used to determine the shooting parameter, for example, the corresponding relationship between the shutter time and/or the aperture size and the target object feature is preset. For example, if the shooting parameters include the shutter time and/or the aperture size, and the target object feature may be an expression feature, a gesture feature, a scene feature, etc., the corresponding relationship between the shutter time and/or the aperture size and the expression feature is preset, or preset There is a correspondence between the shutter time and/or the aperture size and the gesture feature, or a correspondence between the shutter time and/or the aperture size and the specific scene feature is preset.
特别的，当目标对象特征为手势特征时，可预先设置有快门时间和/或光圈大小与拇指和食指间距这一手势特征的对应关系，其中，快门时间与拇指和食指间距可呈对数关系。从而，当根据分析结果得出的拇指和食指之间的距离这一手势特征时，可根据预先设置的来快门时间和/或光圈大小与拇指和食指间距这一手势特征的对应关系确定快门时间和/或光圈大小等拍摄参数，而实现对曝光强度和/或拍摄景深的控制。In particular, when the target object feature is a gesture feature, a correspondence between the shutter time and/or the aperture size and the thumb-to-index-finger distance may be preset, where the shutter time may have a logarithmic relationship with the thumb-to-index-finger distance. Thus, when the distance between the thumb and the index finger is obtained from the analysis result, shooting parameters such as the shutter time and/or the aperture size can be determined according to the preset correspondence, thereby controlling the exposure intensity and/or the depth of field.
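A minimal sketch of such a logarithmic mapping; the shutter range and the normalised-distance convention are assumptions for illustration:

    # Illustrative logarithmic mapping from thumb-index distance to shutter time.
    import math

    MIN_SHUTTER_S = 1 / 1000   # distance ~ 0  -> fastest shutter
    MAX_SHUTTER_S = 1 / 30     # distance ~ 1  -> slowest shutter

    def shutter_from_pinch(distance: float) -> float:
        """Map a normalised thumb-index distance in [0, 1] to a shutter time (s)."""
        d = min(max(distance, 0.0), 1.0)
        # interpolate logarithmically between the two shutter extremes
        log_s = math.log(MIN_SHUTTER_S) + d * (math.log(MAX_SHUTTER_S) - math.log(MIN_SHUTTER_S))
        return math.exp(log_s)

    print(round(1 / shutter_from_pinch(0.5)))   # 173, i.e. about 1/173 s for a half-open pinch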
又例如,当目标对象特征为表情特征时,可预先设置的快门时间和/或光圈大小等拍摄参数与不同表情特征的对应关系,例如设置快门时间和/或光圈大小与微笑、大笑、生气、哭泣、打哈欠等表情特征的对应关系,根据分析结果得到对应的表情特征时,则可根据拍摄参数与不同表情特征的对应关系以及得到的表情特征确定得到对应的拍摄参数。For another example, when the target object feature is an expression feature, a corresponding relationship between a shooting parameter such as a shutter time and/or a aperture size and a different expression feature, such as setting a shutter time and/or a aperture size, and smiling, laughing, and angry may be set. The corresponding relationship between the expression features such as crying, yawning, and the like, and corresponding expression features are obtained according to the analysis result, and the corresponding shooting parameters can be determined according to the correspondence between the shooting parameters and the different expression features and the obtained expression features.
在另一些实施例中,当分析结果为已训练的目标对象特征模型得出的包括拍摄参数的分析结果时,根据分析结果确定拍摄参数以用于拍摄,包括:根据已训练的目标对象特征模型得出的包括拍摄参数的分析结果,确定拍摄参数以用于拍摄。In other embodiments, when the analysis result is an analysis result including a shooting parameter obtained by the trained target object feature model, the shooting parameter is determined according to the analysis result for shooting, including: according to the trained target object feature model The resulting analysis results including the shooting parameters, and the shooting parameters are determined for shooting.
在一些实施例中,目标对象特征模型可为通过如下的训练步骤完成的模型:利用图像或人脸识别技术,对初始训练集中每张照片中的目标对象进行分析,生成对应的目标对象特征向量;基于生成的目标对象特征向量和对应的照片与基准图片间的相似度标签建立训练样本集;再用该样本集进行训练学习,得到训练完成的目标对象特征模型。In some embodiments, the target object feature model may be a model completed by the following training steps: using an image or face recognition technique to analyze a target object in each photo in the initial training set to generate a corresponding target object feature vector. And establishing a training sample set based on the generated target object feature vector and the similarity label between the corresponding photo and the reference picture; and using the sample set for training learning, obtaining the trained target object feature model.
从而,本申请中,能够根据拍摄预览画面中的目标对象确定出拍摄参数,能够及时以及更加合理的设置与目标对象对应的拍摄参数,可提高拍摄品质以及确保及时完成拍摄参数的设置。Therefore, in the present application, the shooting parameters can be determined according to the target object in the shooting preview screen, and the shooting parameters corresponding to the target object can be set in time and more reasonably, and the shooting quality can be improved and the setting of the shooting parameters can be completed in time.
其中,前述的对应关系可为存储于电子装置的存储器中的对应关系表。The foregoing correspondence may be a correspondence table stored in a memory of the electronic device.
其中,步骤S21与图1中步骤S11对应,更具体的说明可参照图1的相关描述,在此不再赘述。步骤S23~S25对应图1中步骤S13,相关的描述可相互参照。The step S21 corresponds to the step S11 in FIG. 1 . For more specific description, reference may be made to the related description of FIG. 1 , and details are not described herein again. Steps S23 to S25 correspond to step S13 in FIG. 1, and related descriptions can be referred to each other.
请参阅图3,为本申请第三实施例中的拍摄控制方法的流程图。在第三实施例中,预设模型为已训练的图像处理算法模型,已训练的图像处理算法模型包括已训练的目标对象特征模型,拍摄控制方法包括如下步骤:Please refer to FIG. 3 , which is a flowchart of a shooting control method in a third embodiment of the present application. In the third embodiment, the preset model is a trained image processing algorithm model, and the trained image processing algorithm model includes the trained target object feature model, and the shooting control method includes the following steps:
S301:获取拍摄预览画面。S301: Acquire a shooting preview screen.
S303:利用图像识别技术对拍摄预览画面中的目标对象进行分析,生成对应的目标对象特征向量。S303: Analyze a target object in the preview image by using an image recognition technology to generate a corresponding target object feature vector.
S305：将拍摄预览画面对应的目标对象特征向量作为所述已训练的目标对象特征模型的输入信息，而通过所述已训练的目标对象特征模型得出拍摄参数以用于拍摄。S305: The target object feature vector corresponding to the captured preview image is taken as the input information of the trained target object feature model, and the shooting parameters for shooting are obtained through the trained target object feature model.
其中,所述目标对象可包括手部、脸部、特定景物等对象,目标对象特征模型可相应包括手势特征模型、表情特征模型、景物特征模型等,分析出的目标对象特征向量可包括大笑、生气、打哈欠等表情特征向量,或“OK”手势、“V字形”等手势特征向量,或花朵、鸟、山等景物特征向量,分析出的目标特征也可包括大笑、生气、打哈欠等表情特征,或“OK”手势、“V字形”等手势特征,或花朵、鸟、山等景物特征。The target object may include an object such as a hand, a face, and a specific scene. The target object feature model may include a gesture feature model, an expression feature model, a scene feature model, and the like, and the analyzed target object feature vector may include a laugh. Expression eigenvectors such as anger, yawning, etc., or gesture feature vectors such as "OK" gestures, "V-shaped", or vector features such as flowers, birds, mountains, etc., and the analyzed target features may also include laughing, angry, playing Yawns and other expression features, or "OK" gestures, "V-shaped" and other gesture features, or flowers, birds, mountains and other features.
以目标对象为手部,目标对象特征模型为手势特征模型为例,将拍摄预览画面对应的目标对象特征向量作为所述已训练的目标对象特征模型的输入信息,而通过所述已训练的目标对象特征模型得出拍摄参数以用于拍摄,可包括:将拍摄预览画面对应的手势特征向量作为已训练的手势特征模型的输入信息,而通过已训练的手势特征模型得出对应的拍摄参数以用于拍摄。Taking the target object as the hand and the target object feature model as the gesture feature model, taking the target object feature vector corresponding to the captured preview image as the input information of the trained target object feature model, and passing the trained target The object feature model obtains the shooting parameters for the shooting, and may include: taking the gesture feature vector corresponding to the shooting preview image as the input information of the trained gesture feature model, and obtaining the corresponding shooting parameter by using the trained gesture feature model. Used for shooting.
其中,步骤S301与图1中步骤S11对应,更具体的说明可参照图1的相关描述,在此不再赘述。步骤S303~S305对应图1中步骤S13,相关的描述可相互参照。步骤S303、S305还与图2中的步骤S203、S205具有一定的对应关系,相关的特征也可相互参照。The step S301 corresponds to the step S11 in FIG. 1 . For a more specific description, reference may be made to the related description in FIG. 1 , and details are not described herein again. Steps S303 to S305 correspond to step S13 in FIG. 1, and related descriptions can be referred to each other. Steps S303 and S305 also have a certain correspondence with steps S203 and S205 in FIG. 2, and related features may also be referred to each other.
请参阅图4,为本申请第四实施例中的拍摄控制方法的流程图。在第四实施例中,预设模型为已训练的神经网络算法模型,拍摄控制方法包括如下步骤:Please refer to FIG. 4 , which is a flowchart of a shooting control method in a fourth embodiment of the present application. In the fourth embodiment, the preset model is a trained neural network algorithm model, and the shooting control method includes the following steps:
S31:获取拍摄预览画面。S31: Acquire a shooting preview screen.
S33:将拍摄预览画面的所有像素作为输入,通过已训练的神经网络模型进行计算后输出拍摄参数以用于拍摄。S33: Taking all the pixels of the preview image as input, the calculation is performed by the trained neural network model, and the shooting parameters are output for shooting.
即,在一些实施例中,通过将拍摄预览画面的所有像素作为已训练的神经网络模型的输入信息,通过所述已训练的神经网络模型即可直接计算得出所述拍摄参数。That is, in some embodiments, the shooting parameters can be directly calculated by the trained neural network model by taking all pixels of the preview image as input information of the trained neural network model.
其中,步骤S31~S33与图1中步骤S11~S13对应,相互之间为上位或下位的描述关系,相关的描述可相互参照。The steps S31 to S33 correspond to the steps S11 to S13 in FIG. 1 , and the description relationship between the upper and lower positions is mutually related, and the related descriptions can refer to each other.
请参阅图5,为本申请第五实施例中的拍摄控制方法的流程图。与第一实施例的区别在于,第五实施例中,还进一步去确定当前是否满足拍摄条件。在第五实施例中,拍摄控制方法包括:Please refer to FIG. 5 , which is a flowchart of a shooting control method in a fifth embodiment of the present application. The difference from the first embodiment is that, in the fifth embodiment, it is further determined whether the shooting condition is currently satisfied. In the fifth embodiment, the photographing control method includes:
S41:获取拍摄预览画面。S41: Acquire a shooting preview screen.
S43:采用预设模型对拍摄预览画面进行分析而确定当前是否满足拍摄条件。如果是则执行步骤S45,否则返回步骤S43或流程结束。S43: The shooting preview screen is analyzed by using a preset model to determine whether the shooting condition is currently satisfied. If yes, go to step S45, otherwise go back to step S43 or the process ends.
在一些实施例中，预设模型为已训练的神经网络模型时；采用预设模型对拍摄预览画面进行分析而确定当前是否满足拍摄条件，进一步包括：通过已训练的神经网络模型对拍摄预览画面进行分析，得出包括满意度的分析结果；根据分析结果确定当前是否满足拍摄条件。具体的，在一些实施例中，在确定满意度超过满意度预设阈值时，确定当前满足拍摄条件。其中，满意度为神经网络模型通过将拍摄预览画面的所有像素作为输入，而根据预先训练的模型进行处理而输出包括满意度的分析结果。其中，满意度预设阈值可为80%、90%等值。In some embodiments, when the preset model is a trained neural network model, analyzing the shooting preview screen with the preset model to determine whether the shooting condition is currently satisfied further includes: analyzing the shooting preview screen through the trained neural network model to obtain an analysis result including a satisfaction degree; and determining whether the shooting condition is currently satisfied according to the analysis result. Specifically, in some embodiments, when the satisfaction degree is determined to exceed a satisfaction preset threshold, it is determined that the shooting condition is currently satisfied. The satisfaction degree is produced by the neural network model taking all the pixels of the preview image as input and processing them according to the pre-trained model to output an analysis result including the satisfaction degree. The satisfaction preset threshold may be a value such as 80% or 90%.
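A minimal sketch of the satisfaction check; score_preview stands in for the trained neural network inference, and the 80% threshold follows the example above:

    # Illustrative threshold check on the model's satisfaction score.
    SATISFACTION_THRESHOLD = 0.80

    def shooting_condition_met(preview_frame, score_preview) -> bool:
        satisfaction = score_preview(preview_frame)   # value in [0, 1]
        return satisfaction >= SATISFACTION_THRESHOLD

    print(shooting_condition_met("frame", lambda f: 0.92))   # True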
即,在一些实施例中,采用预设模型对拍摄预览画面进行分析时不但分析得出拍摄参数,还去分析得出包括满意度的分析结果而确定是否满足拍摄条件。That is, in some embodiments, when the shooting preview screen is analyzed by using the preset model, not only the shooting parameters are analyzed, but also the analysis result including the satisfaction is analyzed to determine whether the shooting conditions are satisfied.
可选的,在另一些实施例中,预设模型为已训练的图像处理算法模型时,采用预设模型对拍摄预览画面进行分析而确定当前是否满足拍摄条件,包括:采用已训练的图像处理算法模型将拍摄预览画面与基准图片进行比较,分析出包括预览画面与基准图片的相似度的分析结果;根据分析结果确定当前是否满足拍摄条件。其中,根据分析结果确定当前是否满足拍摄条件,包括:在确定相似度超过相似度预设阈值时,确定当前满足拍摄条件。其中,相似度预设阈值可为80%、90%等值。Optionally, in another embodiment, when the preset model is a trained image processing algorithm model, the preset preview model is used to analyze the captured preview image to determine whether the shooting condition is currently met, including: using the trained image processing. The algorithm model compares the shot preview screen with the reference picture, analyzes the analysis result including the similarity between the preview picture and the reference picture, and determines whether the current shooting condition is satisfied according to the analysis result. The determining whether the current shooting condition is met according to the analysis result includes: determining that the shooting condition is currently satisfied when determining that the similarity exceeds the similarity preset threshold. The similarity preset threshold may be 80%, 90%, and the like.
其中,基准图片可为用户预先设定的具有微笑、大笑、难过、生气、打哈欠等特定表情的标准图片,也可为具有“OK”手势、“V字形”手势等手势的标准图片,也可为具有花朵、鸟、山等景物的标准图片。The reference picture may be a standard picture preset by the user with a specific expression such as smiling, laughing, sad, angry, yawning, or a standard picture with gestures such as an "OK" gesture or a "V-shaped" gesture. It can also be a standard picture with flowers, birds, mountains and other scenery.
即,在一些实施例中,采用预设模型对拍摄预览画面进行分析时不但分析得出拍摄参数,还去分析得出与基准图片的相似度而去确定是否满足拍摄条件。That is, in some embodiments, when the preview image is analyzed by using the preset model, not only the shooting parameters are analyzed, but also the similarity with the reference image is analyzed to determine whether the shooting condition is satisfied.
进一步的,已训练的图像处理算法模型包括已训练的目标对象特征模型,采用已训练的图像处理算法模型将拍摄预览画面与基准图片进行比较,分析出拍摄预览画面与基准图片的相似度,包括:利用图像识别技术对拍摄预览画面中的目标对象进行分析,生成对应的目标对象特征向量;根据已训练的目标对象特征模型和拍摄预览画面对应的目标对象特征向量计算得到拍摄预览画面与基准图片的相似度。Further, the trained image processing algorithm model includes the trained target object feature model, and the trained image processing algorithm model is used to compare the captured preview image with the reference image, and the similarity between the captured preview image and the reference image is analyzed, including The image recognition technology is used to analyze the target object in the preview image, and the corresponding target object feature vector is generated. The captured preview image and the reference image are calculated according to the trained target object feature model and the target object feature vector corresponding to the captured preview image. Similarity.
在一些实施例中,根据已训练的目标对象特征模型和拍摄预览画面对应的目标对象特征向量计算得到拍摄预览画面与基准图片的相似度,包括:将拍摄预览画面对应的目标对象特征向量作为已训练的目标对象特征模型的输入信息,而通过目标对象特征模型计算得出拍摄预览画面与基准图片的相似度。In some embodiments, the similarity between the captured preview image and the reference image is calculated according to the trained target object feature model and the target object feature vector corresponding to the captured preview image, including: taking the target object feature vector corresponding to the captured preview image as The input information of the target object model of the training is calculated, and the similarity between the shooting preview picture and the reference picture is calculated by the target object feature model.
在另一些实施例中，采用已训练的图像处理算法模型将拍摄预览画面与基准图片进行比较，分析出拍摄预览画面与基准图片的相似度，包括：获取拍摄预览画面的像素信息；将拍摄预览画面的像素信息与基准图片的像素信息进行比较，分析出拍摄预览画面与基准图片的相似度。即，在另一些实施例中，通过对比两个图像的像素灰阶值等像素信息来得出拍摄预览画面和基准图片的相似度。In other embodiments, using the trained image processing algorithm model to compare the shooting preview image with the reference picture and analyze their similarity includes: acquiring pixel information of the shooting preview image; and comparing the pixel information of the shooting preview image with the pixel information of the reference picture to obtain the similarity between the shooting preview image and the reference picture. That is, in these embodiments, the similarity between the shooting preview image and the reference picture is obtained by comparing pixel information such as the pixel grayscale values of the two images.
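A minimal sketch of such a pixel-information comparison, assuming same-sized 8-bit grayscale images and using the mean absolute gray-level difference as the similarity measure (an illustrative choice, not the application's required metric):

    # Illustrative pixel-level similarity between preview and reference picture.
    import numpy as np

    def pixel_similarity(preview_gray: np.ndarray, reference_gray: np.ndarray) -> float:
        diff = np.abs(preview_gray.astype(np.float32) - reference_gray.astype(np.float32))
        return 1.0 - float(diff.mean()) / 255.0

    a = np.full((120, 160), 100, dtype=np.uint8)
    b = np.full((120, 160), 110, dtype=np.uint8)
    print(pixel_similarity(a, b))   # about 0.96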
在另一些实施例中,采用预设模型对拍摄预览画面进行分析而确定当前是否满足拍摄条件,包括:利用图像识别技术对拍摄预览画面中的目标对象进行分析,生成对应的目标对象特征向量;将目标对象特征向量作为图像处理算法模型的输入信息,而得出包括标识当前是否满足拍摄条件的标识信息的分析结果;以及根据所述分析结果确定是否满足拍摄条件。例如,标识信息可为1、0等标识当前是否满足拍摄条件的分析结果的标识符。更具体的,当标识信息为1时,标识满足拍摄条件,当标识信息为0时,标识不满足拍摄条件。所述根据所述分析结果确定是否满足拍摄条件,包括:在分析结果包括标识当前满足拍摄条件的标识信息时,确定当前满足拍摄条件;在分析结果包括标识当前不满足拍摄条件的标识信息时,确定当前不满足拍摄条件。In another embodiment, determining the current shooting condition by analyzing the shooting preview image by using the preset model, including: analyzing the target object in the shooting preview image by using an image recognition technology, and generating a corresponding target object feature vector; The target object feature vector is used as input information of the image processing algorithm model, and an analysis result including identification information indicating whether the shooting condition is currently satisfied is obtained; and whether the shooting condition is satisfied is determined according to the analysis result. For example, the identification information may be an identifier of 1, 0, etc., which identifies whether the analysis result of the shooting condition is currently satisfied. More specifically, when the identification information is 1, the identification satisfies the shooting condition, and when the identification information is 0, the identification does not satisfy the shooting condition. Determining whether the photographing condition is satisfied according to the analysis result includes: determining that the photographing condition is currently satisfied when the analysis result includes identifying the identifier information that currently meets the photographing condition; and when the analysis result includes identifying the identifier information that does not satisfy the photographing condition at present, Make sure that the shooting conditions are not currently met.
其中,目标对象可包括脸部、手部、特定景物等对象,目标对象特征模型可包括表情特征模型、手势特征模型、景物特征模型等,分析出的目标对象特征向量可包括如前所述的大笑、生气、打哈欠等表情特征向量,或“OK”手势、“V字形”等手势特征向量,或花朵、鸟、山等景物特征向量。The target object may include an object such as a face, a hand, and a specific scene. The target object feature model may include an expression feature model, a gesture feature model, a scene feature model, and the like, and the analyzed target object feature vector may include the foregoing. Emoticon, angry, yawning and other expression vector, or "OK" gesture, "V-shaped" and other gesture feature vectors, or flower, bird, mountain and other scene feature vectors.
在一些实施例中，以表情特征模型为例，已训练的表情特征模型为通过如下的方式进行训练完成：在初始训练集中提供多张具有不同表情人脸的照片；利用人脸识别技术，对初始训练集中提供的多张照片中的人物进行表情分析，生成对应的表情特征向量Xi，例如，X1表示眼睛睁开的大小，X2表示嘴角上扬的程度，X3表示嘴巴张开的大小；基于生成的表情特征向量和对应的照片与基准图片间的相似度标签建立训练样本集；再用样本集进行训练学习，得到训练完成的表情特征模型。In some embodiments, taking the expression feature model as an example, the trained expression feature model is obtained through the following training process: providing multiple photos of faces with different expressions in an initial training set; using face recognition technology to perform expression analysis on the persons in those photos, generating corresponding expression feature vectors Xi, where, for example, X1 represents how wide the eyes are open, X2 represents how much the corners of the mouth are raised, and X3 represents how wide the mouth is open; establishing a training sample set based on the generated expression feature vectors and the similarity labels between the corresponding photos and the reference picture; and then performing training and learning on the sample set to obtain the trained expression feature model.
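A minimal training sketch along these lines; the sample data and the choice of a simple least-squares fit are illustrative assumptions, not the actual training procedure of this application:

    # Illustrative fit of "similarity to reference picture" from expression features
    # [X1 eye opening, X2 mouth-corner raise, X3 mouth opening].
    import numpy as np

    X = np.array([[0.8, 0.9, 0.7],    # big laugh  -> very similar to reference
                  [0.6, 0.5, 0.2],    # mild smile -> somewhat similar
                  [0.3, 0.1, 0.8],    # yawn       -> dissimilar
                  [0.4, 0.2, 0.1]])   # neutral    -> dissimilar
    y = np.array([0.95, 0.60, 0.10, 0.20])   # similarity labels in [0, 1]

    # least-squares fit of similarity = X @ w + b
    A = np.hstack([X, np.ones((len(X), 1))])
    w = np.linalg.lstsq(A, y, rcond=None)[0]

    def predict_similarity(x1, x2, x3):
        return float(np.array([x1, x2, x3, 1.0]) @ w)

    print(predict_similarity(0.7, 0.85, 0.6))   # similarity estimate for a laugh-like face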
其中,拍摄预览画面与基准图片的相似度可为风格相似度、色彩相似度、元素布局相似度、灰阶相似度等。The similarity between the shooting preview screen and the reference picture may be style similarity, color similarity, element layout similarity, grayscale similarity, and the like.
S45:采用预设模型对拍摄预览画面进行分析而确定拍摄参数。S45: The shooting preview parameter is analyzed by using a preset model to determine shooting parameters.
S47:根据拍摄参数控制执行拍摄操作。S47: Perform a shooting operation according to the shooting parameter control.
在一些实现方式中,拍摄操作为拍照操作,根据拍摄参数控制执行拍摄操作,包括:根据拍摄参数控制执行拍照操作,而得到当前拍摄预览画面对应的照片。In some implementations, the photographing operation is a photographing operation, and the photographing operation is performed according to the photographing parameter control, including: performing a photographing operation according to the photographing parameter control, and obtaining a photograph corresponding to the current photographing preview screen.
在另一些实现方式中，拍摄操作为连拍操作，根据拍摄参数控制执行拍摄操作，包括：控制执行连拍操作，而得到包括当前拍摄预览画面对应照片在内的多张照片。可选的，在执行连拍操作后，还可包括进一步的步骤：对连拍操作获取到的多张照片进行分析，确定出最佳的照片；以及保留最佳的照片，而对连拍操作获取到的其他照片进行删除。In other implementations, the shooting operation is a continuous shooting (burst) operation, and performing the shooting operation according to the shooting parameters includes: controlling the execution of a continuous shooting operation to obtain multiple photos including the photo corresponding to the current shooting preview screen. Optionally, after the continuous shooting operation is performed, further steps may be included: analyzing the multiple photos obtained by the continuous shooting operation to determine the best photo; retaining the best photo; and deleting the other photos obtained by the continuous shooting operation.
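A minimal sketch of the burst-then-select flow; capture_photo and score_photo are hypothetical hooks, and the quality criterion is left open:

    # Illustrative burst capture that keeps only the highest-scoring photo.
    import random

    def burst_and_keep_best(capture_photo, score_photo, count=5):
        photos = [capture_photo() for _ in range(count)]
        best = max(photos, key=score_photo)
        discarded = [p for p in photos if p is not best]   # these would be deleted
        return best, discarded

    best, dropped = burst_and_keep_best(
        capture_photo=lambda: {"score": random.random()},
        score_photo=lambda p: p["score"],
    )
    print(best["score"], len(dropped))   # best score, 4 discarded photos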
在另一些实现方式中，拍摄操作为视频拍摄操作，根据拍摄参数控制执行拍摄操作，包括：根据拍摄参数控制执行拍摄操作，而得到以当前拍摄预览画面作为起始视频画面帧的视频文件。可选的，在执行视频拍摄操作得到视频文件后，还可包括步骤：在拍摄得到视频文件后，还可对拍摄到的视频文件中的多个视频画面帧进行比较，确定出最佳的画面帧；以及将最佳的画面帧截取出来作为照片保存。In still other implementations, the shooting operation is a video shooting operation, and performing the shooting operation according to the shooting parameters includes: controlling the execution of a video shooting operation according to the shooting parameters to obtain a video file whose starting video frame is the current shooting preview screen. Optionally, after the video file is obtained, the method may further include: comparing multiple video frames in the captured video file to determine the best frame; and extracting the best frame and saving it as a photo.
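A minimal sketch of selecting and saving the best frame from the recorded video, assuming OpenCV is available and using a Laplacian sharpness score as an illustrative stand-in for "best":

    # Illustrative best-frame extraction from a recorded video file.
    import cv2

    def save_best_frame(video_path: str, out_path: str) -> bool:
        cap = cv2.VideoCapture(video_path)
        best_frame, best_score = None, -1.0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            score = cv2.Laplacian(gray, cv2.CV_64F).var()   # sharpness proxy
            if score > best_score:
                best_frame, best_score = frame, score
        cap.release()
        if best_frame is None:
            return False
        return cv2.imwrite(out_path, best_frame)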
从而,本申请中,通过分析拍摄预览画面确定当前是否满足拍摄条件,能够根据拍摄预览画面中的内容来确定是否为用户期望拍下的画面,从而能够及时捕捉当前精彩的瞬间,同时还根据拍摄预览画面中的内容来自动确定拍摄参数,能够以符合当前拍摄预览画面的拍摄参数来进行拍摄操作,确保了较高的拍摄品质。Therefore, in the present application, by analyzing the shooting preview screen to determine whether the shooting condition is currently satisfied, whether or not the user desires to take the picture can be determined according to the content in the shooting preview screen, so that the current exciting moment can be captured in time, and also according to the shooting. The content in the preview screen automatically determines the shooting parameters, and the shooting operation can be performed in accordance with the shooting parameters of the current shooting preview screen, ensuring high shooting quality.
其中,步骤S41与图1中的步骤S11对应,具体的介绍可参考图1中的步骤S11的相关描述。步骤S45与图1中的步骤S13、图2中的步骤S23、S25等对应,具体的介绍可参考图1中的步骤S13、图2中的步骤S23、S25等的相关描述。The step S41 corresponds to the step S11 in FIG. 1 . For a specific description, reference may be made to the related description of step S11 in FIG. 1 . Step S45 corresponds to step S13 in FIG. 1 and steps S23, S25 and the like in FIG. 2. For specific introduction, reference may be made to step S13 in FIG. 1, step S23, S25 in FIG. 2, and the like.
其中,在一些实施例中,步骤S43执行于步骤S45之前,即,确定包括快门时间或光圈大小等拍摄参数是在确定满足拍摄条件之后进行的,在满足确定 拍摄条件之后再对拍摄预览画面进行分析确定拍摄参数,从而避免了每次都去确定拍摄参数,避免了计算资源的浪费。Wherein, in some embodiments, step S43 is performed before step S45, that is, determining that the shooting parameters including the shutter time or the aperture size are performed after determining that the shooting condition is satisfied, and the shooting preview screen is performed after the determined shooting condition is satisfied. Analysis determines the shooting parameters, thus avoiding the determination of shooting parameters every time, avoiding the waste of computing resources.
在一些实施例中,步骤S43和步骤S45是同时进行的,即,采用预设模型对拍摄预览画面进行分析确定是否满足拍摄条件和根据分析结果确定拍摄参数是同时进行的,在采用预设模型对拍摄预览画面进行分析时,同时分析得出是否满足拍摄条件以及拍摄参数。In some embodiments, step S43 and step S45 are performed simultaneously, that is, the shooting preview screen is analyzed by using a preset model to determine whether the shooting condition is met and the shooting parameters are determined according to the analysis result, and the preset model is adopted. When analyzing the shooting preview screen, it is analyzed at the same time to determine whether the shooting conditions and shooting parameters are satisfied.
在一些实施例中,当预设模型为图像处理算法模型时,用于确定是否满足拍摄条件的目标对象及目标对象特征模型与用于确定出拍摄参数的目标对象及目标对象特征模型可相同。In some embodiments, when the preset model is an image processing algorithm model, the target object and the target object feature model for determining whether the shooting condition is satisfied may be the same as the target object and the target object feature model for determining the shooting parameters.
例如,目标对象都为脸部表情,目标对象特征模型为表情特征模型,采用预设模型对拍摄预览画面进行分析而得到分析结果可包括:利用图像识别技术对拍摄预览画面中的脸部表情进行分析,生成对应的表情特征向量;根据表情特征模型以及拍摄预览画面对应的表情特征向量得到与基准图片的相似度或者直接得到标识是否满足拍摄条件的标识信息,以及根据表情特征模型以及拍摄预览画面对应的表情特征向量得到表情特征或者直接得到拍摄参数。For example, the target object is a facial expression, and the target object feature model is an expression feature model. The analysis of the captured preview image by using the preset model may include: using the image recognition technology to perform the facial expression in the captured preview image. Analyzing, generating a corresponding expression feature vector; obtaining similarity with the reference image according to the expression feature model and the expression feature vector corresponding to the preview image, or directly obtaining identification information indicating whether the shooting condition is satisfied, and according to the expression feature model and the shooting preview screen The corresponding emoticon vector obtains the emoticon feature or directly obtains the shooting parameters.
即,根据拍摄预览画面中的脸部表情可同时确定是否满足拍摄条件以及确定得出拍摄参数。That is, according to the facial expression in the shooting preview screen, it is possible to simultaneously determine whether the shooting condition is satisfied and determine the shooting parameter.
在另一些实施例中,当预设模型为图像处理算法模型时,用于确定是否满足拍摄条件的目标对象及目标对象特征模型与用于确定拍摄参数的目标对象及目标对象特征模型可不同。例如,用于确定是否满足拍摄条件的目标对象及目标对象特征模型分别为第一目标对象和第一目标对象特征模型,用于确定拍摄参数的目标对象及目标对象特征模型分别为第二目标对象和第二目标对象特征模型。In other embodiments, when the preset model is an image processing algorithm model, the target object and the target object feature model for determining whether the shooting condition is satisfied may be different from the target object and the target object feature model for determining the shooting parameters. For example, the target object and the target object feature model for determining whether the shooting condition is satisfied are the first target object and the first target object feature model, respectively, and the target object and the target object feature model for determining the shooting parameter are respectively the second target object. And a second target object feature model.
在更具体的例子中,用于确定是否满足拍摄条件的目标对象及目标对象特征模型分别为脸部以及表情特征模型,用于确定拍摄参数的目标对象及目标对象特征模型分别为手势和手势特征模型。采用预设模型对拍摄预览画面进行分析而得到分析结果可包括:利用图像识别技术对拍摄预览画面中的脸部表情进行分析,生成对应的表情特征向量,以及利用图像识别技术对拍摄预览画面中的手势进行分析得到手势特征向量;根据表情特征模型以及拍摄预览画面对应的表情特征向量得到与基准图片的相似度或者直接得到标识是否满足拍摄条件的标识信息;以及根据手势特征模型以及拍摄预览画面对应的表情特征向量得到手势特征或者直接得到拍摄参数。In a more specific example, the target object and the target object feature model for determining whether the shooting condition is satisfied are a face and an expression feature model, respectively, and the target object and the target object feature model for determining the shooting parameters are gestures and gesture features, respectively. model. The analysis result obtained by analyzing the shooting preview screen by using the preset model may include: analyzing the facial expression in the shooting preview image by using the image recognition technology, generating a corresponding expression feature vector, and using the image recognition technology to capture the preview image. The gesture is analyzed to obtain a gesture feature vector; the similarity with the reference image is obtained according to the expression feature model and the expression feature vector corresponding to the preview image, or the identification information that directly identifies whether the shooting condition is satisfied; and the gesture feature model and the shooting preview screen are obtained according to the gesture feature model The corresponding emoticon vector obtains a gesture feature or directly obtains a shooting parameter.
即,根据拍摄预览画面中的脸部表情是否满足拍摄条件以及根据拍摄预览画面中的手势确定得出拍摄参数。That is, the shooting parameters are determined according to whether the facial expression in the shooting preview screen satisfies the shooting condition and the gesture in the shooting preview screen.
请参阅图6,为本申请第六实施例中的拍摄控制方法的流程图。在第六实施例中,拍摄控制方法包括:Please refer to FIG. 6 , which is a flowchart of a shooting control method in a sixth embodiment of the present application. In the sixth embodiment, the photographing control method includes:
S51:获取拍摄预览画面。S51: Acquire a shooting preview screen.
步骤S51与图1中的步骤S11对应,具体的介绍可参考图1中的步骤S11的相关描述。Step S51 corresponds to step S11 in FIG. 1. For a specific description, reference may be made to the related description of step S11 in FIG. 1.
S53:采用预设模型对拍摄预览画面进行分析而确定当前是否满足拍摄条件 以及确定出用于拍摄的拍摄参数。S53: The shooting preview screen is analyzed by using a preset model to determine whether the shooting condition is currently satisfied and the shooting parameters for shooting are determined.
在一些实施例中，当已训练的模型为已训练的神经网络模型时，步骤S53包括：通过将拍摄预览画面的所有像素作为神经网络模型的输入，通过神经网络模型计算输出包括是否满足拍摄条件以及拍摄参数在内的信息。In some embodiments, when the trained model is a trained neural network model, step S53 includes: taking all the pixels of the shooting preview screen as the input of the neural network model, and calculating, through the neural network model, output information that includes both whether the shooting condition is satisfied and the shooting parameters.
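A minimal sketch of a network with two outputs for this step, one scoring whether the shooting condition is met and one regressing the shooting parameters; as in the earlier sketch, the framework and layer sizes are hypothetical:

    # Illustrative two-head model: condition score plus shooting parameters.
    import torch
    import torch.nn as nn

    class ConditionAndParamNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.condition_head = nn.Sequential(nn.Linear(16, 1), nn.Sigmoid())
            self.param_head = nn.Linear(16, 3)   # shutter, aperture, ISO

        def forward(self, preview):
            feat = self.backbone(preview)
            return self.condition_head(feat), self.param_head(feat)

    net = ConditionAndParamNet()
    condition_score, params = net(torch.rand(1, 3, 240, 320))
    print(float(condition_score) >= 0.8, params.shape)   # shoot?  torch.Size([1, 3])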
在一些实施例中,当已训练的模型为已训练的图像处理算法模型时,以已训练的图像处理算法模型包括已训练的表情特征模型和已训练的手势特征模型为例,步骤S53包括:根据已训练的表情特征模型和拍摄预览画面对应的表情特征向量确定得出表情相似度,以及根据已训练的手势特征模型和拍摄预览画面对应的手势特征向量确定得出拍摄预览画面对应的手势特征;在确定表情相似度大于相似度预设阈值时确定满足拍摄条件;根据手势特征模型和拍摄预览画面对应的手势特征向量确定得出的手势特征,例如得出食指和拇指的距离,以及预先定义的手势特征与拍摄参数的对应关系来确定得出的手势特征对应的拍摄参数,例如根据食指和拇指的距离这一手势特征与快门时间的对应关系来确定快门时间。In some embodiments, when the trained model is a trained image processing algorithm model, taking the trained image processing algorithm model including the trained expression feature model and the trained gesture feature model as an example, step S53 includes: Determining the expression similarity according to the trained expression feature model and the expression feature vector corresponding to the captured preview image, and determining the gesture feature corresponding to the captured preview image according to the trained gesture feature model and the gesture feature vector corresponding to the captured preview image. Determining that the shooting condition is satisfied when determining that the expression similarity is greater than the similarity preset threshold; determining the gesture feature according to the gesture feature model and the gesture feature vector corresponding to the captured preview image, for example, obtaining the distance between the index finger and the thumb, and predefining The corresponding relationship between the gesture feature and the shooting parameter determines the shooting parameter corresponding to the obtained gesture feature, for example, the shutter time is determined according to the correspondence between the gesture feature of the index finger and the thumb and the shutter time.
在另一些实施例中,以已训练的图像处理算法模型包括已训练的表情特征模型和已训练的手势特征模型为例,步骤S53也可包括:将拍摄预览画面对应的表情特征向量作为表情特征模型的输入信息,而通过已训练的表情特征模型得出标识是否满足拍摄条件的标识信息,以及将拍摄预览画面对应的手势特征向量作为已训练的手势特征模型的输入信息,而通过已训练的手势特征模型得出拍摄参数,从而,得出包括标识是否满足拍摄条件的标识信息以及拍摄参数在内的信息;根据已训练的表情特征模型得出的标识是否满足拍摄条件的标识信息以及根据已训练的手势特征模型得出的拍摄参数来确定是否满足拍摄条件以及确定出拍摄参数。In other embodiments, taking the trained image processing algorithm model including the trained expression feature model and the trained gesture feature model as an example, step S53 may further include: using the expression feature vector corresponding to the captured preview image as an expression feature. The input information of the model, and the identification information identifying whether the shooting condition is satisfied is obtained by the trained expression feature model, and the gesture feature vector corresponding to the shooting preview image is used as the input information of the trained gesture feature model, and the trained The gesture feature model obtains the shooting parameters, thereby obtaining information including the identification information that identifies whether the shooting condition is satisfied and the shooting parameters; and the identification information obtained according to the trained expression feature model to satisfy the shooting condition and the basis The shooting parameters derived from the trained gesture feature model determine whether the shooting conditions are met and the shooting parameters are determined.
S55:在确定当前满足拍摄条件时,根据拍摄参数控制执行拍摄操作。S55: When it is determined that the shooting condition is currently satisfied, the shooting operation is performed according to the shooting parameter control.
步骤S55与图5中的步骤S47对应,具体的介绍可参考图5中的步骤S47的相关描述。步骤S53~与图5中的步骤S43~S45也有一定对应性,具体根据分析结果确定当前是否满足拍摄条件以及得到拍摄参数的方式,可进一步参考图5中的步骤S43~S45的描述。Step S55 corresponds to step S47 in FIG. 5. For a specific description, reference may be made to the related description of step S47 in FIG. 5. Step S53 - has a certain correspondence with steps S43 - S45 in FIG. 5 , and specifically determines whether the current shooting condition and the shooting parameters are obtained according to the analysis result, and may further refer to the description of steps S43 to S45 in FIG. 5 .
Therefore, in the sixth embodiment, the shooting preview image can be analyzed to determine at the same time whether the shooting condition is satisfied and what the shooting parameters are, and when the shooting condition is satisfied, the shooting operation is performed with those shooting parameters. Shooting can thus be performed quickly with good shooting parameters, so that highlight moments are captured in time with high shooting quality.
Please refer to FIG. 7, which is a flowchart of a shooting control method in a seventh embodiment of the present application. In the seventh embodiment, the shooting control method includes:
S61: Acquire a shooting preview image.
Step S61 corresponds to step S11 in FIG. 1; for details, refer to the description of step S11 in FIG. 1.
S63: Analyze the shooting preview image with the preset model to determine whether the shooting condition is currently satisfied and to determine the shooting parameters for shooting.
S65: When it is determined that the shooting condition is currently satisfied, a continuous shooting (burst) operation is performed under the control of the shooting parameters, obtaining multiple photos including the photo corresponding to the current shooting preview image.
In some embodiments, step S65 includes: when the similarity between the shooting preview image and the reference picture exceeds the preset similarity threshold, determining that the shooting condition is satisfied and controlling execution of the burst shooting operation.
In other embodiments, when a satisfaction score is obtained as the analysis result through the trained neural network model, step S65 may also include: when the satisfaction score exceeds the preset satisfaction threshold, determining that the shooting condition is satisfied and controlling execution of the burst shooting operation.
The shooting-condition requirement for the burst shooting operation may be slightly lower than that for a single photographing operation; specifically, the preset similarity threshold or preset satisfaction threshold compared against during the burst shooting operation may be slightly lower than the preset similarity threshold or preset satisfaction threshold compared against during the photographing operation.
For example, the preset similarity threshold or preset satisfaction threshold for the burst shooting operation may be 70%, lower than the preset similarity threshold or preset satisfaction threshold of 80% or higher used for the photographing operation.
Thus, continuous shooting begins when the best shooting effect is about to be reached, which ensures that a photo with the best shooting effect is captured within the burst. For example, suppose the reference picture is a picture with a laughing expression and the compared expression feature vector is an expression feature vector X2 representing the degree to which the corners of the mouth are raised; when it is determined that this degree reaches 70% of that of the reference picture, the burst shooting operation is triggered. The time for which the user keeps smiling until the maximum degree is reached is short, so the expression at which the user's smile reaches its maximum will be captured by the burst, ensuring that the burst shooting operation captures the photo with the best shooting effect.
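As a rough illustration of this two-threshold logic, the Python sketch below starts a burst at a lower similarity threshold than a single shot would require, so that the peak expression falls inside the burst. The threshold values, the stub camera object, and the similarity input are assumptions made for this example, not part of the patent.

```python
class _StubCamera:
    """Minimal stand-in for a real camera API, used only to make the sketch runnable."""
    def __init__(self):
        self.bursting = False

    def start_burst(self, **shooting_params):
        self.bursting = True
        print("burst started with", shooting_params)

BURST_THRESHOLD = 0.70   # assumed; deliberately lower than the 0.80 single-shot threshold

def maybe_start_burst(similarity: float, camera, shooting_params: dict) -> bool:
    """Trigger a burst once the preview is sufficiently similar to the reference picture."""
    if similarity >= BURST_THRESHOLD and not camera.bursting:
        camera.start_burst(**shooting_params)
        return True
    return False

cam = _StubCamera()
maybe_start_burst(0.72, cam, {"shutter_time": 1 / 250})   # starts the burst early
```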
Steps S61 to S65 correspond to steps S51 to S55 in FIG. 6, respectively; for a more detailed description, refer to the description of steps S51 to S55 in FIG. 6.
As shown in FIG. 7, the seventh embodiment shown in FIG. 7 may further include the following steps:
S67: Analyze the multiple photos obtained by the burst shooting operation to determine the best photo.
Optionally, in one implementation, step S67 may include: analyzing the multiple photos with the trained neural network model to obtain satisfaction scores, and determining the photo with the highest satisfaction score as the best photo.
Optionally, in another implementation, step S67 may include: comparing the multiple photos obtained by the burst with the reference picture, and determining the photo with the highest similarity to the reference picture as the best photo.
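A minimal sketch of step S67 could score every photo in the burst with a quality function and keep the highest-scoring one. The scoring function below is only a placeholder for either the trained neural network's satisfaction output or a similarity-to-reference measure; it is an assumption for illustration, not the patent's method.

```python
from typing import Callable, Sequence

def select_best_photo(photos: Sequence, score: Callable[[object], float]):
    """Return the photo with the highest score (satisfaction or similarity)."""
    return max(photos, key=score)

# Toy usage: photos represented by dicts carrying a precomputed score.
burst = [
    {"name": "p1", "score": 0.61},
    {"name": "p2", "score": 0.93},
    {"name": "p3", "score": 0.75},
]
best = select_best_photo(burst, score=lambda p: p["score"])
print(best["name"])   # -> p2; the other photos would then be deleted in step S69
```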
Optionally, as shown in FIG. 7, the shooting control method may further include:
S69: Keep the best photo and delete the other photos obtained by the burst shooting operation.
In some embodiments, the electronic device includes a memory in which several albums are created; keeping the best photo means storing the best photo in a certain album, for example in the camera album. By deleting the other photos, excessive use of storage space is effectively avoided.
Please refer to FIG. 8, which is a flowchart of a shooting control method in an eighth embodiment of the present application. In the eighth embodiment, the shooting control method includes:
S71: Acquire a shooting preview image.
Step S71 corresponds to step S11 in FIG. 1; for details, refer to the description of step S11 in FIG. 1.
S73: Analyze the shooting preview image with the preset model to determine whether the shooting condition is currently satisfied and to determine the shooting parameters for shooting.
S75: When it is determined that the shooting condition is currently satisfied, a video shooting operation is performed under the control of the shooting parameters, obtaining a video file whose starting video frame is the current shooting preview image.
The shooting-condition requirement for the video shooting operation may also be slightly lower than that for the photographing operation; specifically, the preset similarity threshold or preset satisfaction threshold compared against during the video shooting operation may be slightly lower than that compared against during the photographing operation.
Thus, video shooting begins when the best shooting effect is about to be reached, which ensures that the video file contains the video frame with the best shooting effect.
Steps S71 to S75 correspond to steps S61 to S65 in the seventh embodiment shown in FIG. 7, respectively; for more details, refer to the related description of FIG. 7. For example, step S75 may include: when the similarity between the shooting preview image and the reference picture exceeds the preset similarity threshold, determining that the shooting condition is satisfied and controlling execution of the video shooting operation. Optionally, step S75 may also include: when the satisfaction score exceeds the preset satisfaction threshold, determining that the shooting condition is satisfied and controlling execution of the video shooting operation.
As shown in FIG. 8, the eighth embodiment shown in FIG. 8 may further include the following steps:
S77: Compare multiple video frames in the captured video file to determine the best frame.
Optionally, in one implementation, step S77 may include: analyzing the multiple video frames with the trained neural network model to obtain satisfaction scores, and determining the video frame with the highest satisfaction score as the best frame.
Optionally, in another implementation, step S77 may include: comparing the multiple video frames in the video file with the reference picture, and determining the video frame with the highest similarity to the reference picture as the best frame.
Optionally, as shown in FIG. 8, the shooting control method may further include:
S79: Extract the best frame and save it as a photo.
In some embodiments, the electronic device includes a memory in which several albums are created; extracting the best frame and saving it as a photo means storing the best frame in a picture/photo format in a certain album, for example in the camera album.
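As an illustration of steps S77 and S79, the OpenCV-based sketch below walks through the frames of the recorded video, scores each frame, and saves the best one as a photo. The scoring function is a placeholder for the trained model's satisfaction output or a similarity-to-reference measure, and the file names are hypothetical.

```python
import cv2

def save_best_frame(video_path: str, output_path: str, score) -> None:
    """Scan a video, pick the highest-scoring frame, and write it out as an image."""
    cap = cv2.VideoCapture(video_path)
    best_frame, best_score = None, float("-inf")
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        s = score(frame)
        if s > best_score:
            best_frame, best_score = frame, s
    cap.release()
    if best_frame is not None:
        cv2.imwrite(output_path, best_frame)   # save the best frame as a photo

# Placeholder score: mean brightness stands in for the real model output.
save_best_frame("clip.mp4", "best_frame.jpg", score=lambda f: float(f.mean()))
```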
Please refer to FIG. 9, which is a flowchart of a shooting control method in a ninth embodiment of the present application. As shown in FIG. 9, in the ninth embodiment, the shooting control method may include the following steps:
S81: Acquire a shooting preview image.
S83: Analyze the shooting preview image with the preset model to determine whether the shooting condition is currently satisfied and to determine the shooting parameters for shooting.
S85: When it is determined that the shooting condition is currently satisfied, the shooting operation is performed under the control of the shooting parameters.
Steps S81 to S85 correspond to steps S51 to S55 in FIG. 6, respectively, and also correspond to the flows of the other embodiments such as that of FIG. 7; for specific implementations, refer to the description of steps S51 to S55 in FIG. 6 and of the related steps in the other embodiments.
S87: Obtain the user's satisfaction feedback information on this automatic shooting.
Optionally, in one implementation, after the automatic photographing is completed, prompt information may be generated to ask the user to evaluate his or her satisfaction with this automatic photographing, for example a prompt box containing "satisfied" and "unsatisfied" options for the user to select, and the satisfaction feedback information for this automatic photographing is obtained according to the user's selection.
Optionally, in another implementation, the user's satisfaction feedback information on this automatic shooting is obtained by detecting the user's operations on the photo or video obtained by this automatic shooting. For example, if it is detected that the user deletes the photo or video obtained by this automatic shooting, it is determined that the user is not satisfied with this automatic shooting, and unsatisfied satisfaction feedback information is obtained. As another example, if it is detected that the user marks the photo or video obtained by this automatic shooting as a favorite or as liked, or shares it, it is determined that the user is satisfied with this automatic shooting, and satisfied satisfaction feedback information is obtained.
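The sketch below illustrates one way step S87 could map detected user actions to a satisfaction label; the action names and the binary label are assumptions made for this example.

```python
POSITIVE_ACTIONS = {"favorite", "like", "share"}   # assumed "satisfied" signals
NEGATIVE_ACTIONS = {"delete"}                      # assumed "unsatisfied" signal

def satisfaction_from_action(action: str):
    """Return True (satisfied), False (unsatisfied), or None when the action carries no signal."""
    if action in NEGATIVE_ACTIONS:
        return False
    if action in POSITIVE_ACTIONS:
        return True
    return None

print(satisfaction_from_action("share"))    # True  -> satisfied feedback
print(satisfaction_from_action("delete"))   # False -> unsatisfied feedback
```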
S89: Output the user's satisfaction feedback information on this automatic shooting to the currently used model, so that the currently used model performs optimization training using the satisfaction feedback information.
Thus, in the present application, by collecting the user's satisfaction feedback information on automatic shooting, the training of the model can be optimized and the model continuously improved, so that automatic shooting in subsequent use becomes more accurate.
The currently used model may be a model whose training has already been confirmed as complete, or a model that has not yet finished training. A model confirmed as trained can be further optimized, while a model not yet trained can be trained more effectively.
As described above, the preset model in any of the embodiments of FIGS. 1 to 9 may also be a model whose training is not yet complete; after the user enables the automatic shooting function or the model starts automatically at power-on, automatic shooting is performed according to the model, and the current model is optimized and trained according to the satisfaction feedback information fed back by the user.
In some embodiments, when the preset model is a model whose training is not yet complete, the untrained model automatically acquires the image each time the user performs shooting and uses it as a positive sample for training, or further acquires the shooting parameters used at that time and trains on them together as a positive sample; it may also sample, according to a preset rule, frames at which the user did not manually trigger shooting and use them as negative samples, and gradually optimize the preset model until the number of training rounds reaches a preset number or the proportion of satisfied responses in subsequent user satisfaction feedback exceeds a preset proportion, at which point training is determined to be complete. In this way, since users train the model themselves rather than adopting someone else's model, better personalization can be achieved.
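A minimal sketch of this self-training idea is shown below: frames at which the user manually shoots become positive samples (optionally together with the shooting parameters used), while preview frames sampled under a preset rule with no manual shot become negative samples. The data structures, sampling rate, and stopping rule are illustrative assumptions.

```python
import random

class SampleCollector:
    def __init__(self, max_rounds: int = 1000, negative_rate: float = 0.01,
                 satisfied_target: float = 0.9):
        self.positives, self.negatives = [], []
        self.max_rounds = max_rounds              # assumed "preset number" of training rounds
        self.negative_rate = negative_rate        # assumed "preset rule" for sampling negatives
        self.satisfied_target = satisfied_target  # assumed "preset proportion" of satisfied feedback

    def on_manual_shot(self, frame, shooting_params=None):
        self.positives.append((frame, shooting_params))   # positive sample

    def on_preview_frame(self, frame):
        if random.random() < self.negative_rate:
            self.negatives.append(frame)                   # negative sample

    def training_done(self, rounds_trained: int, satisfied_ratio: float) -> bool:
        return rounds_trained >= self.max_rounds or satisfied_ratio > self.satisfied_target
```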
Please refer to FIG. 10, which is a block diagram showing part of the structure of an electronic device 100 in an embodiment of the present application. As shown in FIG. 10, the electronic device 100 includes a processor 10, a memory 20, and a camera 30. The camera 30 includes at least a rear camera 31 and a front camera 32. The rear camera 31 is used to capture images behind the electronic device 100 and can be used for shooting operations such as photographing other people; the front camera 32 is used to capture images in front of the electronic device 100 and can be used for shooting operations such as taking selfies.
In some embodiments, the models in FIGS. 1 to 9 may be programs running in the processor 10, such as specific algorithm functions, for example neural network algorithm functions or image processing algorithm functions. In other embodiments, the electronic device 100 may further include a model processor separate from the processor 10; the models in FIGS. 1 to 9 then run in the model processor, the processor 10 generates corresponding instructions as needed to trigger the model processor to run the corresponding model, and the output of the model is passed by the model processor to the processor 10 for use, for example to control execution of the shooting operation.
Program instructions are stored in the memory 20.
The processor 10 is configured to call the program instructions stored in the memory 20 to execute the shooting control method in any of the embodiments shown in FIGS. 1 to 9.
For example, the processor 10 is configured to call the program instructions stored in the memory 20 to execute the following shooting control method:
acquiring a shooting preview image through the camera 30; and analyzing the shooting preview image with a preset model to obtain shooting parameters for shooting.
In some embodiments, the operation of acquiring the shooting preview image is performed through the camera in response to an operation of turning on the camera, that is, the shooting preview image is acquired through the camera.
In some embodiments, the operation of turning on the camera is a tap on the photographing application icon; that is, when the camera is turned on in response to a tap on the photographing application icon, the shooting preview image is acquired through the camera.
Alternatively, in other embodiments, the operation of turning on the camera is a specific operation on a physical button of the electronic device. For example, the electronic device includes a volume-up key and a volume-down key, and the operation of turning on the camera is pressing the volume-up key and the volume-down key at the same time. Further, the operation of opening the photographing application may be pressing the volume-up key and then the volume-down key within a preset time (for example, 2 seconds).
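As an illustration of the physical-key trigger, the sketch below reports a match when the volume-up and volume-down keys are pressed within a preset window of each other; the 2-second window and the event representation are assumptions made for this example.

```python
import time

PRESET_WINDOW_S = 2.0   # assumed preset time window

class KeyComboDetector:
    def __init__(self):
        self._last_press = {}   # key name -> timestamp of the most recent press

    def on_key_press(self, key: str, now=None) -> bool:
        """Return True when volume-up and volume-down were pressed within the window."""
        now = time.monotonic() if now is None else now
        self._last_press[key] = now
        other = "volume_down" if key == "volume_up" else "volume_up"
        return other in self._last_press and now - self._last_press[other] <= PRESET_WINDOW_S

detector = KeyComboDetector()
detector.on_key_press("volume_up", now=0.0)
print(detector.on_key_press("volume_down", now=1.5))   # True -> open the camera
```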
In other embodiments, the operation of turning on the camera may also be a preset touch gesture input on any display interface of the electronic device; for example, on the home screen of the electronic device, the user may input a touch gesture with a circular touch track to turn on the camera.
In other embodiments, the operation of turning on the camera may also be a preset touch gesture input on the touch screen while the electronic device is in a screen-off state.
In some embodiments, when the electronic device is a camera, the operation of turning on the camera is pressing the shutter button or power button of the camera to bring the camera into the activated state.
Optionally, in the present application, acquiring the shooting preview image means acquiring the shooting preview image in real time through the camera.
In some embodiments, the preset model may be a trained model or a model whose training is not yet complete.
Optionally, in some embodiments, the preset model is a trained neural network model, and analyzing the shooting preview image with the preset model to obtain shooting parameters for shooting includes: taking all pixels of the shooting preview image as input, performing computation through the neural network model, and outputting the shooting parameters for shooting.
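The patent does not describe a network architecture, so the following PyTorch sketch is only an assumed illustration of the "all pixels in, shooting parameters out" idea: a tiny convolutional network maps a preview frame to a small vector of shooting parameters (for example exposure compensation, an ISO scale, and a shutter scale).

```python
import torch
import torch.nn as nn

class ShootingParamNet(nn.Module):
    """Toy regressor from preview pixels to a vector of shooting parameters."""
    def __init__(self, num_params: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_params)

    def forward(self, preview: torch.Tensor) -> torch.Tensor:
        x = self.features(preview).flatten(1)
        return self.head(x)   # one row of predicted shooting parameters per frame

net = ShootingParamNet()
frame = torch.randn(1, 3, 224, 224)   # stand-in for a preview frame
print(net(frame).shape)               # torch.Size([1, 3])
```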
Optionally, in other embodiments, the preset model is a trained image processing algorithm model that includes a trained target object feature model, and analyzing the shooting preview image with the preset model to obtain shooting parameters for shooting includes: analyzing the target object in the shooting preview image using image recognition technology to generate a corresponding target object feature vector; obtaining an analysis result according to the target object feature model and the target object feature vector corresponding to the shooting preview image; and determining the shooting parameters for shooting according to the analysis result.
Optionally, in one implementation, obtaining the analysis result according to the target object feature model and the target object feature vector corresponding to the shooting preview image includes: computing, from the trained target object feature model and the target object feature vector corresponding to the shooting preview image, the target object feature corresponding to the shooting preview image as the analysis result.
In one implementation, determining the shooting parameters for shooting according to the analysis result includes: determining the shooting parameters corresponding to the obtained target object feature according to a preset correspondence between target object features and shooting parameters.
Optionally, in another implementation, when the preset model is a trained image processing algorithm model that includes a trained target object feature model, analyzing the shooting preview image with the preset model to obtain shooting parameters for shooting includes: taking the target object feature vector corresponding to the shooting preview image as the input information of the trained target object feature model, and obtaining the shooting parameters through the trained target object feature model.
In some embodiments, in the present application, the analysis result is obtained by analyzing the shooting preview image acquired in real time. In other embodiments, analyzing the shooting preview image with the preset model to obtain the shooting parameters means analyzing the currently acquired shooting preview image once every preset interval (for example, 0.2 seconds) to obtain the shooting parameters.
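A rough sketch of the interval-based variant is shown below: the most recent preview frame is analyzed once per preset interval instead of on every frame. The frame source, analysis function, and stop condition are placeholders for this illustration.

```python
import time

PRESET_INTERVAL_S = 0.2   # the example interval mentioned in the text

def analysis_loop(get_current_preview, analyze, should_stop, interval=PRESET_INTERVAL_S):
    """Analyze the latest preview frame once per interval."""
    while not should_stop():
        params = analyze(get_current_preview())
        # ...here `params` would be handed to the shooting control logic...
        time.sleep(interval)

# Toy usage with stand-ins that stop the loop after three iterations.
ticks = iter(range(3))
analysis_loop(
    get_current_preview=lambda: "frame",
    analyze=lambda f: {"shutter_time": 1 / 125},
    should_stop=lambda: next(ticks, None) is None,
    interval=0.0,
)
```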
In the present application, the target object may be a hand, a face, a specific scene, and so on; the target object feature model correspondingly includes a gesture feature model, an expression feature model, a scene feature model, and so on; and the target object feature vectors obtained by analysis may include gesture feature vectors, expression feature vectors, scene feature vectors, and so on.
Thus, in the present application, by determining the current shooting parameters through analysis of the shooting preview image, the shooting parameters can be determined in time according to the shooting preview image, so that when shooting is needed, the current highlight moment can be captured in time with high shooting quality according to the shooting parameters.
The processor 10 may be a microcontroller, a microprocessor, a single-chip microcomputer, a digital signal processor, or the like.
The memory 20 may be any storage device capable of storing information, such as a memory card, a solid-state memory, a micro hard disk, or an optical disc.
As shown in FIG. 10, the electronic device 100 further includes an input unit 40 and an output unit 50. The input unit 40 may include a touch panel, a mouse, a microphone, physical buttons including a power key and volume keys, and the like. The output unit 50 may include a display screen, a speaker, and the like. In some embodiments, the touch panel of the input unit 40 and the display screen of the output unit 50 are integrated to form a touch screen, providing touch input and display output at the same time.
The electronic device 100 may be a portable electronic device having the camera 30, such as a mobile phone, a tablet computer, or a notebook computer, or may be a photographing device such as a camera or a video camera.
In some embodiments, the present application further provides a computer-readable storage medium in which a number of program instructions are stored; after the program instructions are called and executed by the processor 10, all or some of the steps of any of the shooting control methods shown in FIGS. 1 to 9 are performed. In some embodiments, the computer storage medium is the memory 20, which may be any storage device capable of storing information, such as a memory card, a solid-state memory, a micro hard disk, or an optical disc.
The shooting control method and the electronic device 100 of the present application can automatically determine, according to the shooting preview image, whether the shooting condition is satisfied, and perform shooting when the shooting condition is satisfied, so that a highlight moment including the content corresponding to the current shooting preview image can be captured in time.
Although the present invention has been described herein in connection with various embodiments, in the course of implementing the claimed invention, those skilled in the art can, by studying the drawings, the disclosure, and the appended claims, understand and implement other variations of the disclosed embodiments. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, an apparatus (device), or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code. The computer program may be stored in or distributed on a suitable medium, provided together with other hardware or as part of the hardware, or distributed in other forms, for example via the Internet or other wired or wireless telecommunication systems.
The present invention is described with reference to flowcharts and/or block diagrams of the method, apparatus (device), and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
What has been disclosed above is merely one embodiment of the present application and certainly does not limit the scope of the rights of the present application. Those of ordinary skill in the art can understand that all or part of the processes for implementing the above embodiments, and equivalent changes made in accordance with the claims of the present application, still fall within the scope covered by the application.

Claims (13)

  1. A shooting control method, characterized in that the shooting control method comprises:
    acquiring a shooting preview image;
    analyzing the shooting preview image with a preset model to obtain shooting parameters for shooting.
  2. The shooting control method according to claim 1, wherein the preset model is a trained neural network model, and analyzing the shooting preview image with the preset model to obtain shooting parameters for shooting comprises:
    taking all pixels of the shooting preview image as input, performing computation through the neural network model, and outputting the shooting parameters for shooting.
  3. The shooting control method according to claim 1, wherein the preset model is a trained image processing algorithm model, the trained image processing algorithm model comprises a trained target object feature model, and analyzing the shooting preview image with the preset model to obtain shooting parameters for shooting comprises:
    analyzing a target object in the shooting preview image using image recognition technology to generate a corresponding target object feature vector;
    obtaining an analysis result according to the trained target object feature model and the target object feature vector corresponding to the shooting preview image;
    determining the shooting parameters for shooting according to the analysis result.
  4. The shooting control method according to claim 3, wherein obtaining the analysis result according to the target object feature model and the target object feature vector corresponding to the shooting preview image comprises: computing, from the trained target object feature model and the target object feature vector corresponding to the shooting preview image, the target object feature corresponding to the shooting preview image as the analysis result;
    and determining the shooting parameters for shooting according to the analysis result comprises:
    determining the shooting parameters corresponding to the obtained target object feature according to a preset correspondence between target object features and shooting parameters.
  5. (Rectified under Rule 26, 17.07.2018) The shooting control method according to claim 1, wherein the preset model is a trained image processing algorithm model, the trained image processing algorithm model comprises a trained target object feature model, and analyzing the shooting preview image with the preset model to obtain shooting parameters for shooting comprises: taking the target object feature vector corresponding to the shooting preview image as input information of the trained target object feature model, and obtaining the shooting parameters for shooting through the trained target object feature model.
  6. The shooting control method according to any one of claims 3 to 5, wherein the target object is a hand, the target object feature model is a gesture feature model, and the target object feature vector is a gesture feature vector.
  7. The shooting control method according to claim 1, wherein, after acquiring the shooting preview image, the shooting control method further comprises:
    analyzing the shooting preview image with the preset model to determine whether the shooting condition is currently satisfied;
    and after analyzing the shooting preview image with the preset model to obtain the shooting parameters, the shooting control method further comprises:
    when it is determined that the shooting condition is satisfied, shooting according to the shooting parameters.
  8. The shooting control method according to claim 7, wherein analyzing the shooting preview image with the preset model to determine whether the shooting condition is currently satisfied further comprises:
    analyzing the shooting preview image through the trained neural network model to obtain a satisfaction score as the analysis result;
    when it is determined that the satisfaction score exceeds a preset satisfaction threshold, determining that the shooting condition is currently satisfied.
  9. The shooting control method according to claim 7, wherein analyzing the shooting preview image with the preset model to determine whether the shooting condition is currently satisfied further comprises:
    comparing the shooting preview image with a reference picture using the trained image processing algorithm model, and obtaining the similarity between the shooting preview image and the reference picture as the analysis result;
    when it is determined that the similarity exceeds a preset similarity threshold, determining that the shooting condition is currently satisfied.
  10. The shooting control method according to claim 7, wherein analyzing the shooting preview image with the preset model to determine whether the shooting condition is currently satisfied comprises:
    analyzing a target object in the shooting preview image using image recognition technology to generate a corresponding target object feature vector;
    taking the target object feature vector as input information of the trained image processing algorithm model to obtain an analysis result comprising identification information indicating whether the shooting condition is currently satisfied;
    when the identification information indicates that the shooting condition is currently satisfied, determining that the shooting condition is currently satisfied.
  11. The shooting control method according to any one of claims 7 to 10, wherein the method further comprises:
    obtaining satisfaction feedback information fed back by the user on a photo or video obtained by performing the shooting operation;
    outputting the satisfaction feedback information to the preset model, so that the preset model performs optimization training using the satisfaction feedback information.
  12. An electronic device, characterized in that the electronic device comprises:
    a camera;
    a memory for storing program instructions; and
    a processor for calling the program instructions to execute the shooting control method according to any one of claims 1 to 11.
  13. A computer-readable storage medium, characterized in that program instructions are stored in the computer-readable storage medium, and the program instructions are configured to, after being called by a computer, execute the shooting control method according to any one of claims 1 to 11.
PCT/CN2018/085898 2018-05-07 2018-05-07 Photographing control method, and electronic device WO2019213818A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/085898 WO2019213818A1 (en) 2018-05-07 2018-05-07 Photographing control method, and electronic device
CN201880070282.9A CN111279684A (en) 2018-05-07 2018-05-07 Shooting control method and electronic device

Publications (1)

Publication Number Publication Date
WO2019213818A1 (en)

Family

ID=68467664

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/085898 WO2019213818A1 (en) 2018-05-07 2018-05-07 Photographing control method, and electronic device

Country Status (2)

Country Link
CN (1) CN111279684A (en)
WO (1) WO2019213818A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565599A (en) * 2020-11-27 2021-03-26 Oppo广东移动通信有限公司 Image shooting method and device, electronic equipment, server and storage medium
CN114051095A (en) * 2021-11-12 2022-02-15 苏州臻迪智能科技有限公司 Remote processing method of video stream data and shooting system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101762769B1 (en) * 2011-04-18 2017-08-07 삼성전자주식회사 Apparatus and method for capturing subject in photographing device
CN106454071A (en) * 2016-09-09 2017-02-22 捷开通讯(深圳)有限公司 Terminal and automatic shooting method based on gestures
CN106372627A (en) * 2016-11-07 2017-02-01 捷开通讯(深圳)有限公司 Automatic photographing method and device based on face image recognition and electronic device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110157392A1 (en) * 2009-12-30 2011-06-30 Altek Corporation Method for adjusting shooting condition of digital camera through motion detection
CN103716547A (en) * 2014-01-15 2014-04-09 厦门美图之家科技有限公司 Smart mode photographing method
CN104125396A (en) * 2014-06-24 2014-10-29 小米科技有限责任公司 Image shooting method and device
CN104469131A (en) * 2014-09-05 2015-03-25 宇龙计算机通信科技(深圳)有限公司 Method, device and terminal for displaying shooting control
CN106101541A (en) * 2016-06-29 2016-11-09 捷开通讯(深圳)有限公司 A kind of terminal, photographing device and image pickup method based on personage's emotion thereof
CN107566529A (en) * 2017-10-18 2018-01-09 维沃移动通信有限公司 A kind of photographic method, mobile terminal and cloud server
CN107995422A (en) * 2017-11-30 2018-05-04 广东欧珀移动通信有限公司 Image capturing method and device, computer equipment, computer-readable recording medium
CN107820020A (en) * 2017-12-06 2018-03-20 广东欧珀移动通信有限公司 Method of adjustment, device, storage medium and the mobile terminal of acquisition parameters

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111383224A (en) * 2020-03-19 2020-07-07 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111383224B (en) * 2020-03-19 2024-04-16 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN112911139A (en) * 2021-01-15 2021-06-04 广州富港生活智能科技有限公司 Article shooting method and device, electronic equipment and storage medium
WO2023006009A1 (en) * 2021-07-30 2023-02-02 维沃移动通信有限公司 Photographing parameter determination method and apparatus, and electronic device
CN116320716A (en) * 2023-05-25 2023-06-23 荣耀终端有限公司 Picture acquisition method, model training method and related devices
CN116320716B (en) * 2023-05-25 2023-10-20 荣耀终端有限公司 Picture acquisition method, model training method and related devices

Also Published As

Publication number Publication date
CN111279684A (en) 2020-06-12

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18918156

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18918156

Country of ref document: EP

Kind code of ref document: A1