WO2019213819A1 - Photographing control method and electronic device - Google Patents

Photographing control method and electronic device

Info

Publication number
WO2019213819A1
Authority
WO
WIPO (PCT)
Prior art keywords
shooting
model
photographing
analysis result
control method
Prior art date
Application number
PCT/CN2018/085899
Other languages
French (fr)
Chinese (zh)
Inventor
王星泽
Original Assignee
合刃科技(武汉)有限公司
Priority date
Filing date
Publication date
Application filed by 合刃科技(武汉)有限公司 filed Critical 合刃科技(武汉)有限公司
Priority to CN201880070402.5A priority Critical patent/CN111295875A/en
Priority to PCT/CN2018/085899 priority patent/WO2019213819A1/en
Publication of WO2019213819A1 publication Critical patent/WO2019213819A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • the present application relates to the field of electronic devices, and in particular, to a photographing control method for an electronic device and the electronic device.
  • the subject may look fine while framing, but at the moment the photo is taken the eyes may be closed or the smile may look stiff, so the final photographs are often unsatisfactory.
  • the cute expression of a baby is often fleeting, and it is difficult to capture a satisfactory photo in time by having the user operate the shutter button or tap a shooting icon.
  • the application provides a shooting control method and an electronic device, which can perform a shooting operation in time to capture a wonderful moment.
  • a shooting control method includes: acquiring a shooting preview image; analyzing the shooting preview image by using a preset model to obtain an analysis result; determining, according to the analysis result, whether the shooting condition is currently met; and controlling to perform the shooting operation when it is determined that the shooting condition is satisfied.
  • an electronic device including a camera, a memory, and a processor.
  • the memory is for storing program instructions.
  • the processor is configured to execute the shooting control method by calling the program instructions, the shooting control method including: acquiring a shooting preview image through the camera; analyzing the shooting preview image by using a preset model to obtain an analysis result; determining, according to the analysis result, whether the shooting condition is currently satisfied; and controlling to perform the shooting operation when it is determined that the shooting condition is satisfied.
  • a computer readable storage medium stores program instructions for executing a shooting control method after being called by a computer, the shooting control method comprising: acquiring a shooting preview image; analyzing the shooting preview image by using a preset model to obtain an analysis result; determining whether the shooting condition is currently satisfied according to the analysis result; and controlling to perform a shooting operation when it is determined that the shooting condition is satisfied.
  • the photographing control method and the electronic device of the present application can automatically determine whether the photographing condition is satisfied according to the photographing preview screen, and perform photographing when the photographing condition is satisfied, and can capture a wonderful moment including the content corresponding to the current photographing preview screen in time.
  • FIG. 1 is a flow chart of a photographing control method in a first embodiment of the present application.
  • FIG. 2 is a flow chart of a photographing control method in a second embodiment of the present application.
  • FIG. 3 is a flowchart of a photographing control method in a third embodiment of the present application.
  • FIG. 4 is a flow chart of a photographing control method in a fourth embodiment of the present application.
  • FIG. 5 is a flowchart of a photographing control method in a fifth embodiment of the present application.
  • FIG. 6 is a flowchart of a photographing control method in a sixth embodiment of the present application.
  • FIG. 7 is a flowchart of a photographing control method in a seventh embodiment of the present application.
  • FIG. 8 is a flowchart of a photographing control method in an eighth embodiment of the present application.
  • FIG. 9 is a flowchart of a photographing control method in a ninth embodiment of the present application.
  • FIG. 10 is a flowchart of a photographing control method in a tenth embodiment of the present application.
  • FIG. 11 is a flowchart of a model training process in a shooting control method according to an embodiment of the present application.
  • FIG. 12 is a flowchart of a model training process in a shooting control method according to another embodiment of the present application.
  • FIG. 13 is a flowchart of a model training process in a photographing control method in still another embodiment of the present application.
  • FIG. 14 is a block diagram showing a schematic partial structure of an electronic device according to an embodiment of the present application.
  • the shooting control method of the present application can be applied to an electronic device.
  • the electronic device can include a camera.
  • the electronic device can acquire and display a shooting preview image through the camera, and the electronic device can perform photographing, continuous shooting, video shooting, and the like through the camera.
  • the camera includes a front camera and a rear camera; operations such as photographing, continuous shooting, and video shooting can be performed through the rear camera, or a self-portrait can be taken through the front camera.
  • FIG. 1 is a flowchart of a shooting control method in a first embodiment of the present application.
  • the shooting control method is applied to an electronic device.
  • the method includes the following steps:
  • the operation of acquiring the shooting preview screen is performed by the camera in response to the operation of turning on the camera, that is, the shooting preview screen is acquired by the camera.
  • the operation of turning on the camera is a click operation on the photographing application icon, that is, when the camera is turned on in response to a click operation on the photographing application icon, the shooting preview screen is acquired by the camera.
  • the operation of turning on the camera is a specific operation of a physical button of the electronic device.
  • the electronic device includes a volume up button and a volume down button, and the operation of turning on the camera is a simultaneous press of the volume up button and the volume down button.
  • the operation of turning on the photographing application is an operation of pressing the volume up key and the volume down key within a preset time (for example, 2 seconds).
  • the operation of turning on the camera may also be an operation of a preset touch gesture input in any display interface of the electronic device.
  • the user may input a touch gesture with a circular touch track to turn on the camera.
  • the operation of turning on the camera may also be an operation of a preset touch gesture input on the touch screen when the electronic device is in a black screen state.
  • the operation of turning on the camera is an operation that presses the shutter button/power button of the camera to trigger the camera to be in an activated state.
  • acquiring a shooting preview screen is to obtain a shooting preview screen in real time through a camera.
  • the preset model may be a trained model or an untrained model.
  • the preset model is a trained neural network model; analyzing the shooting preview image by using the preset model to obtain an analysis result further includes: analyzing the shooting preview image by using the neural network model to obtain an analysis result of satisfaction.
  • the satisfaction is the result output by the pre-trained neural network model, which takes all the pixels of the shooting preview image as input.
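  • For illustration only, the following Python sketch shows how a pre-trained network of this kind could map all preview pixels to a satisfaction score; the tiny fully connected architecture, the 32x32 input size, and the random placeholder weights are assumptions for the sketch, not the patent's actual model.

```python
import numpy as np

# Illustrative only: a tiny fully connected network standing in for the
# pre-trained neural network model described above. Real weights would come
# from training; here they are random placeholders.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 32 * 32 * 3))   # hidden layer weights (assumed sizes)
W2 = rng.normal(size=64)                  # output layer weights

def satisfaction_score(preview_rgb: np.ndarray) -> float:
    """Map all pixels of the shooting preview image to a satisfaction in [0, 1]."""
    x = preview_rgb.astype(np.float32).reshape(-1) / 255.0  # flatten all pixels
    h = np.maximum(W1 @ x, 0.0)                              # ReLU hidden layer
    logit = float(W2 @ h)
    return 1.0 / (1.0 + np.exp(-logit))                      # sigmoid -> satisfaction

# Example: a 32x32 RGB preview frame (downscaled for the sketch).
frame = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
print(satisfaction_score(frame))
```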
  • the preset model is a trained image processing algorithm model
  • analyzing the shooting preview image by using the preset model to obtain an analysis result then includes:
  • comparing the shooting preview image with a reference picture to obtain an analysis result of the similarity between the preview image and the reference picture.
  • the reference picture may be a standard picture preset by the user with a specific expression such as smile, laughter, sadness, anger, yawning, and the like.
  • the trained image processing algorithm model includes a trained target object feature model; comparing the shooting preview image with the reference picture by using the trained image processing algorithm model and analyzing the similarity between them includes: analyzing the target object in the preview image by using image recognition technology to generate a corresponding target object feature vector, and calculating the similarity between the shooting preview image and the reference picture according to the trained target object feature model and the target object feature vector corresponding to the shooting preview image.
  • the calculating of the similarity between the shooting preview image and the reference picture according to the trained target object feature model and the target object feature vector corresponding to the shooting preview image comprises: taking the target object feature vector corresponding to the shooting preview image as the input information of the trained target object feature model (for example, a trained expression feature model), and calculating the similarity between the shooting preview image and the reference picture through that model.
  • comparing the shooting preview image with the reference picture by using the trained image processing algorithm model and analyzing the similarity between them comprises: acquiring pixel information of the shooting preview image; and comparing the pixel information of the shooting preview image with the pixel information of the reference picture to analyze the similarity between the two. That is, in other embodiments, the similarity between the shooting preview image and the reference picture is obtained by comparing pixel information of the two images, such as pixel grayscale values.
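  • A minimal sketch of the pixel-information comparison described above, assuming a grayscale conversion and a mean-absolute-difference measure (both illustrative choices; the description does not fix a specific formula):

```python
import numpy as np

def grayscale_similarity(preview_rgb: np.ndarray, reference_rgb: np.ndarray) -> float:
    """Similarity in [0, 1] from the mean absolute difference of pixel grayscale values."""
    def to_gray(img):
        # Standard luma weights for RGB -> grayscale conversion.
        return img.astype(np.float32) @ np.array([0.299, 0.587, 0.114], np.float32)
    g1, g2 = to_gray(preview_rgb), to_gray(reference_rgb)
    mad = np.abs(g1 - g2).mean()          # 0 (identical) .. 255 (maximally different)
    return 1.0 - mad / 255.0

rng = np.random.default_rng(1)
preview = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
reference = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
print(grayscale_similarity(preview, reference))
```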
  • the analyzing of the shooting preview image by using the preset model to obtain the analysis result includes: analyzing the facial expression in the shooting preview image by using face recognition technology to generate a corresponding expression feature vector; and taking the expression feature vector as the input information of the image processing algorithm model to obtain an analysis result containing identification information that indicates whether the shooting condition is currently satisfied.
  • the identification information may include an identifier such as 1 or 0 that indicates whether the shooting condition is currently satisfied. More specifically, when the identification information is the identifier 1, it indicates that the shooting condition is satisfied, and when the identification information is the identifier 0, it indicates that the shooting condition is not satisfied.
  • the trained target object feature model is obtained by training on a plurality of photos with faces of different expressions provided in an initial training set; image recognition technology is used to perform expression analysis on the target objects in the plurality of photos provided in the initial training set, generating corresponding target object feature vectors Xi.
  • X1 represents the degree of eye opening
  • X2 represents the degree to which the mouth corners rise
  • X3 represents the degree of mouth opening; a training sample set is established based on the generated target object feature vectors and the similarity labels between the corresponding photos and the reference picture; and the sample set is used for training and learning to obtain the trained target object feature model.
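  • The following sketch illustrates such a training sample set and a simple fit over the (X1, X2, X3) feature vectors; the sample values, the similarity labels, and the least-squares learner are all assumptions standing in for the unspecified training-and-learning step:

```python
import numpy as np

# Hypothetical training photos described by the feature vector (X1, X2, X3):
# X1 = eye opening, X2 = degree the mouth corners rise, X3 = mouth opening,
# each normalised to [0, 1]. Labels are the similarity to the reference picture.
X = np.array([
    [0.9, 0.8, 0.2],   # open eyes, clear smile
    [0.8, 0.1, 0.0],   # neutral face
    [0.1, 0.7, 0.3],   # eyes closed mid-smile
    [0.9, 0.9, 0.6],   # laughing
])
y = np.array([0.85, 0.30, 0.40, 0.95])   # similarity labels (assumed)

# Least-squares fit standing in for the "training and learning" on the sample set.
X_aug = np.hstack([X, np.ones((len(X), 1))])          # add bias term
weights, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

def predict_similarity(feature_vec):
    """Predicted similarity between a preview frame's features and the reference."""
    return float(np.append(feature_vec, 1.0) @ weights)

print(predict_similarity([0.95, 0.85, 0.4]))
```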
  • the analysis result is obtained according to the shooting preview image acquired in real time.
  • the step of analyzing the shooting preview image by using the preset model to obtain an analysis result means that the currently acquired shooting preview image is analyzed once every preset time (for example, 0.2 seconds) to obtain a current analysis result.
  • step S15 Determine whether the shooting condition is currently satisfied according to the analysis result. If yes, step S17 is performed, otherwise, it returns to step S13 or the process ends.
  • determining whether the shooting condition is currently satisfied according to the analysis result includes: determining that the shooting condition is currently satisfied when determining that the satisfaction exceeds the satisfaction preset threshold.
  • the satisfaction preset threshold may be 80%, 90%, and the like.
  • determining whether the shooting condition is currently satisfied according to the analysis result includes: determining that the shooting condition is currently satisfied when determining that the similarity exceeds the similarity preset threshold.
  • determining whether the shooting condition is currently satisfied according to the analysis result may further include: determining that the shooting condition is currently satisfied when the analysis result includes identifying the identification information that currently meets the shooting condition.
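  • For illustration, a small decision helper covering the three kinds of analysis result mentioned above (satisfaction, similarity, identifier); the 80% thresholds are the example values from this description, and the dictionary keys are hypothetical names:

```python
# Illustrative decision logic for steps S15/S17; thresholds are example values
# from the description (e.g. 80%), not fixed requirements.
SATISFACTION_THRESHOLD = 0.8
SIMILARITY_THRESHOLD = 0.8

def shooting_condition_met(analysis: dict) -> bool:
    """analysis may carry a satisfaction, a similarity, or an identifier flag."""
    if "identifier" in analysis:                 # e.g. 1 = condition met, 0 = not met
        return analysis["identifier"] == 1
    if "satisfaction" in analysis:
        return analysis["satisfaction"] > SATISFACTION_THRESHOLD
    if "similarity" in analysis:
        return analysis["similarity"] > SIMILARITY_THRESHOLD
    return False

print(shooting_condition_met({"similarity": 0.86}))   # True -> trigger shooting
print(shooting_condition_met({"identifier": 0}))      # False -> keep previewing
```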
  • the target object may be a hand, a face, a specific scene, etc.
  • the target object feature model includes a gesture feature model, an expression feature model, and a scene feature model, etc.
  • the analyzed target object feature vector may include a gesture feature vector, an expression feature vector, a scene feature vector, or the like.
  • Control to perform a shooting operation when it is determined that the shooting condition is satisfied.
  • the photographing operation is a photographing operation
  • controlling to perform the photographing operation includes: controlling to perform a photographing operation to obtain a photograph corresponding to the current photograph preview screen.
  • the shooting operation is a continuous shooting operation
  • controlling to perform a shooting operation includes: controlling to perform a continuous shooting operation, and obtaining a plurality of photos including a photo corresponding to the current photo preview screen.
  • further steps may be included: analyzing the plurality of photos obtained by the continuous shooting operation to determine the best photo; and retaining the best photo while deleting the other photos obtained by the continuous shooting operation.
  • the photographing operation is a video photographing operation
  • controlling to perform a photographing operation includes: controlling to perform a video shooting operation to obtain a video file that uses the current shooting preview image as the starting video frame.
  • the method may further include: after the video file is captured, comparing the plurality of video frames in the captured video file to determine the best picture frame; and extracting the best picture frame to save it as a photo.
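  • A sketch of this best-frame selection applied to burst photos or video frames; the brightness-based score function is only a stand-in for the satisfaction or similarity analysis described above:

```python
import numpy as np

def pick_best_frame(frames, score_fn):
    """Keep only the frame with the highest score (satisfaction or similarity)."""
    scores = [score_fn(f) for f in frames]
    best_index = int(np.argmax(scores))
    return frames[best_index], scores[best_index]

# Example: score each burst photo / video frame by its mean brightness
# (a stand-in for the satisfaction or similarity analysis described above).
rng = np.random.default_rng(2)
burst = [rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8) for _ in range(5)]
best, score = pick_best_frame(burst, lambda f: float(f.mean()))
print(score)
```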
  • FIG. 2 is a flowchart of a shooting control method in a second embodiment of the present application.
  • the method comprises the steps of:
  • the reference picture may be a standard picture preset by the user with a specific expression such as smiling, laughing, sad, angry, yawning, or a standard picture with gestures such as an "OK" gesture or a "V-shaped” gesture. It can also be a standard picture with flowers, birds, mountains and other scenery.
  • the trained image processing algorithm model includes a target object feature model
  • step S23 specifically includes: analyzing, by using image recognition technology, a target object in the shooting preview image to generate a corresponding target object feature vector; and calculating the similarity between the shooting preview image and the reference picture according to the target object feature model and the target object feature vector corresponding to the preview image.
  • the trained image processing algorithm model includes a trained target object feature model; comparing the shooting preview image with the reference picture by using the trained image processing algorithm model and analyzing the similarity between them includes: analyzing the target object in the preview image by using image recognition technology to generate a corresponding target object feature vector, and calculating the similarity between the shooting preview image and the reference picture according to the trained target object feature model and the target object feature vector corresponding to the shooting preview image.
  • the similarity between the shooting preview image and the reference picture is calculated according to the trained target object feature model and the target object feature vector corresponding to the shooting preview image, including: taking the target object feature vector corresponding to the shooting preview image as the input information of the trained target object feature model, and calculating the similarity between the shooting preview image and the reference picture through the target object feature model.
  • the trained image processing algorithm model is used to compare the shooting preview image with the reference picture and analyze the similarity between them, including: acquiring pixel information of the shooting preview image; comparing the pixel information of the shooting preview image with the pixel information of the reference picture, and analyzing the similarity between the two. That is, in other embodiments, the similarity between the shooting preview image and the reference picture is obtained by comparing pixel information of the two images, such as pixel grayscale values.
  • the target object may include an object such as a face, a hand, and a specific scene.
  • the target object feature model may include an expression feature model, a gesture feature model, a scene feature model, and the like, and the analyzed target object feature vector may correspondingly include expression feature vectors for smiling, angry, yawning and other expressions, gesture feature vectors for the "OK" gesture, the "V-shaped" gesture and the like, or scene feature vectors for flowers, birds, mountains and other scenery.
  • the trained expression feature model can be obtained through the following training: a plurality of photos with faces of different expressions are provided in an initial training set; face recognition technology is used to perform expression analysis on the characters in the plurality of photos provided in the initial training set to generate corresponding expression feature vectors Xi, where, for example, X1 represents the degree of eye opening, X2 represents the degree to which the mouth corners rise, and X3 represents the degree of mouth opening;
  • a training sample set is established from the generated expression feature vectors and the similarity labels between the corresponding photos and the reference picture; training and learning are then performed by using the sample set to obtain the trained expression feature model.
  • the similarity preset threshold may be 80%, 90%, or the like.
  • the reference picture includes a reference picture having a laughing expression, and it is determined that the shooting condition is satisfied when the similarity between the shooting preview picture and the reference picture having the laughing expression reaches 80%, and the automatic shooting is triggered.
  • the similarity between the shooting preview screen and the reference picture may include, but is not limited to, the similarity of the picture style, the similarity of the colors, the similarity of the content layout, the similarity of the pixel gray levels, and the like.
  • the user may first set a reference picture that is considered satisfactory, and perform model training.
  • when the user wants to take a picture, the user first obtains a shooting preview image, and then uses the trained model to analyze the similarity between the preview image and the reference picture.
  • when the similarity reaches the similarity preset threshold, automatic shooting is triggered, so that a satisfactory photo similar to the reference picture can be obtained; when the target object is a face and the image processing algorithm model is an expression feature model, this reduces photos taken with an unnatural expression and allows a photo with a satisfactory expression to be taken in time.
  • the step S21 corresponds to the step S11 in FIG. 1 .
  • Step S23 may be a more specific step of step S13 in FIG. 1, and related descriptions may be referred to each other.
  • Step S25 corresponds to steps S15 and S17 in FIG. 1, and related descriptions may also be referred to each other.
  • FIG. 3 is a flowchart of a shooting control method in a third embodiment of the present application.
  • the preset model is a trained image processing algorithm model
  • the trained image processing algorithm model includes a trained target object feature model.
  • the shooting control method includes the following steps:
  • S301 Acquire a shooting preview screen.
  • S303 Analyze a target object in the preview image by using an image recognition technology to generate a corresponding target object feature vector.
  • the identifier information may be an identifier that identifies whether the photographing condition is currently satisfied, such as 1, 0, and the like. More specifically, when the identification information is 1, it indicates that the shooting condition is satisfied, and when the identification information is 0, it indicates that the shooting condition is not satisfied. Obviously, the identification information can also be information such as "yes" or "no".
  • the step S307 includes: when the analysis result includes identification information indicating that the shooting condition is currently met, determining that the shooting condition is currently satisfied, and controlling to perform the shooting operation.
  • when the analysis result includes identification information indicating that the shooting condition is satisfied, for example the identifier "1", it is determined that the shooting condition is currently satisfied; and when the analysis result includes identification information indicating that the shooting condition is not satisfied, for example the identifier "0", it is determined that the shooting condition is not met.
  • the step S301 corresponds to the step S11 in FIG. 1 .
  • the step S303 and the step S305 may be steps that correspond to, and further specify, the step S13 in FIG. 1 , and the related descriptions may refer to each other.
  • Step S307 corresponds to steps S15, S17 in Fig. 1, and the related description can also be referred to each other.
  • FIG. 4 is a flowchart of a shooting control method in a fourth embodiment of the present application.
  • the photographing control method includes the following steps:
  • step S33 includes: comparing the captured preview image with the reference image by using the trained image processing algorithm model to obtain a similarity between the captured preview image and the reference image.
  • the trained image processing algorithm model includes a target object feature model.
  • the trained model is used to compare the shooting preview image with the reference picture to obtain the similarity between them, including: analyzing the target object in the shooting preview image through image recognition technology to generate a corresponding target object feature vector; and calculating the similarity between the preview image and the reference picture according to the target object feature model and the target object feature vector corresponding to the preview image.
  • analyzing the captured preview image by using the preset model to obtain the analysis result may further include: analyzing the target object in the captured preview image by using an image recognition technology, and generating a corresponding target object feature vector; The vector is used as input information of the target object feature model, and an analysis result including identification information indicating whether the shooting condition is currently satisfied is derived.
  • the analysis result of satisfaction can also be obtained by the trained neural network model.
  • when the analysis result is obtained by the trained image processing algorithm model, step S35 includes: when the similarity between the shooting preview image and the reference picture exceeds the similarity preset threshold, or when the analysis result includes identification information indicating that the shooting condition is currently satisfied, determining that the shooting condition is satisfied, and controlling to perform the continuous shooting operation.
  • step S35 may further include: when the satisfaction exceeds the satisfaction preset threshold, determining that the shooting condition is satisfied, and controlling to perform the continuous shooting operation.
  • the requirement of the shooting condition when performing the continuous shooting operation may be slightly lower than the requirement when the photographing operation is performed.
  • specifically, the similarity preset threshold or the satisfaction preset threshold compared against when performing the continuous shooting operation may be slightly lower than that compared against when performing the photographing operation.
  • for example, the similarity preset threshold or the satisfaction preset threshold when performing the continuous shooting operation may be 70%, which is lower than the similarity preset threshold or satisfaction preset threshold of 80% or higher compared against when performing the photographing operation.
  • the reference picture is a picture with a laughing expression
  • the compared expression feature vector is the expression feature vector X2, which indicates the degree to which the mouth corners rise
  • the continuous shooting operation is controlled to be performed when it is determined that the rise of the mouth corners reaches 70% of that in the reference picture.
  • the user usually continues to smile for a short period of time, during which the user's smile reaches its maximum.
  • this expression will be captured by the continuous shooting operation, so the continuous shooting operation helps ensure that the photo with the best shooting effect is obtained.
  • the step S31 corresponds to the step S11 in FIG. 1 .
  • the step S33 may be a step corresponding to the step S13 in FIG. 1 , the step S23 in FIG. 2 and the steps S303 and S305 in FIG. 3 , these being generic and specific descriptions of one another, and the related descriptions may refer to each other.
  • Step S35 corresponds to steps S15 and S17 in FIG. 1, step S25 in FIG. 2, and step S307 in FIG. 3, likewise as generic and specific descriptions of one another, and the related descriptions may also refer to each other.
  • FIG. 5 is a flowchart of a shooting control method in a fifth embodiment of the present application.
  • the fifth embodiment differs from the fourth embodiment in that the photos obtained by the continuous shooting operation are also screened.
  • the photographing control method in the fifth embodiment includes the following steps:
  • step S47 may include: analyzing the plurality of photos by using the trained neural network model to obtain satisfaction, and determining the photo with the highest satisfaction as the best photo.
  • step S47 may include: comparing a plurality of photos obtained by the continuous shooting with the reference image, and determining a photo with the highest similarity with the reference image as the best photo.
  • the shooting control method may further include:
  • the electronic device includes a memory in which a plurality of albums are created; "retaining the best photo" means storing the best photo in a certain album, for example in the camera album. By deleting the other photos, occupying too much storage space can be effectively avoided.
  • the steps S41 to S45 are respectively the same as the steps S31 to S35 in the fourth embodiment shown in FIG. 4, and reference may be made to the descriptions of the steps S31 to S35 in FIG. 4 for more detail.
  • FIG. 6 is a flowchart of a shooting control method in a sixth embodiment of the present application.
  • the sixth embodiment differs from the fourth embodiment in that the photographing operation is a video shooting and not a continuous shooting operation.
  • the photographing control method in the sixth embodiment includes the following steps:
  • S53 analyzing the shooting preview image by using a preset model to obtain an analysis result.
  • the requirement of the shooting condition when performing the video shooting operation may also be slightly lower than the requirement when the photographing operation is performed.
  • specifically, the similarity preset threshold or the satisfaction preset threshold compared against when performing the video shooting operation may be slightly lower than that compared against when performing the photographing operation.
  • step S55 may include: when the similarity between the shooting preview screen and the reference picture exceeds the similarity preset threshold, determining that the shooting condition is satisfied, and controlling to perform the video shooting operation.
  • the method may further include: when the satisfaction exceeds the satisfaction preset threshold, determining that the shooting condition is met, and controlling to perform a video shooting operation.
  • the shooting control method may further include:
  • step S57 may include: analyzing the plurality of video picture frames by using the trained neural network model to obtain their satisfaction, and determining the video picture frame with the highest satisfaction as the best picture frame.
  • step S57 may alternatively include: comparing the plurality of video picture frames in the video file with the reference picture, and determining the video picture frame with the highest similarity to the reference picture as the best picture frame.
  • the shooting control method may further include:
  • the electronic device includes a memory in which a plurality of albums are created, and "extracting the best picture frame to save it as a photo" means storing the best picture frame in photo format in an album, for example in the camera album.
  • FIG. 7 is a flowchart of a shooting control method in a seventh embodiment of the present application.
  • the photographing control method in the seventh embodiment includes the following steps:
  • S63 The shooting preview image is analyzed by using a preset model to obtain an analysis result.
  • S65 Determine whether the shooting condition is currently satisfied and determine the shutter time according to the analysis result.
  • step S65 may include:
  • determining, according to the target object features obtained from the analysis result, whether the shooting condition is satisfied and determining the shutter time.
  • for example, a correspondence between an expression feature and the shooting condition, and a correspondence between a gesture feature and the shutter time, may be preset.
  • for example, a correspondence between the smile expression and the photographing condition, and a correspondence between the shutter time and the gesture feature of the thumb-to-index-finger distance, may be preset, wherein the shutter time can be logarithmically related to the distance between the thumb and the index finger.
  • when the user's expression feature is determined to be a smile according to the analysis result, it is determined that the shooting condition is satisfied, the shutter time is determined based on the gesture feature of the thumb-to-index-finger distance in the analysis result, and control of the exposure intensity is thereby realized.
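  • A sketch of one possible logarithmic mapping from the thumb-to-index-finger distance to the shutter time; the shutter-time range, the 10 cm maximum distance, and the exact log-spaced formula are illustrative assumptions:

```python
import math

def shutter_time_from_gesture(distance_cm: float,
                              min_time: float = 1 / 1000,
                              max_time: float = 1 / 30,
                              max_distance_cm: float = 10.0) -> float:
    """Map the thumb-to-index-finger distance to a shutter time, log-spaced
    as described above. The min/max times and the 10 cm range are assumptions.
    """
    d = min(max(distance_cm, 0.0), max_distance_cm) / max_distance_cm  # 0..1
    log_min, log_max = math.log(min_time), math.log(max_time)
    return math.exp(log_min + d * (log_max - log_min))

print(f"{shutter_time_from_gesture(2.0):.5f} s")   # small pinch -> fast shutter
print(f"{shutter_time_from_gesture(9.0):.5f} s")   # wide pinch -> slower shutter
```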
  • the trained model is a trained image processing algorithm model
  • the trained image processing algorithm model includes a trained target object feature model, specifically including a trained expression feature model and a trained gesture feature model.
  • the trained expression feature model may be a model obtained through the following training steps: using face recognition technology to perform expression analysis on the character in each photo in the initial training set to generate a corresponding expression feature vector; establishing a training sample set from the expression feature vectors and the similarity labels between the corresponding photos and the reference picture; and then performing training and learning by using the sample set to obtain the trained expression feature model.
  • the trained gesture feature model is a model obtained through the following training steps: using image recognition technology to analyze the hand part in each photo in the initial training set to generate a corresponding gesture feature vector; establishing a training sample set from the generated gesture feature vectors and the similarity labels between the corresponding photos and the reference picture; and then performing training and learning by using the sample set to obtain the trained gesture feature model.
  • the step S63 may specifically include: determining an expression similarity according to the expression feature model and the expression feature vector corresponding to the shooting preview image, and determining, according to the gesture feature model and the gesture feature vector corresponding to the shooting preview image, the gesture feature corresponding to the shooting preview image. .
  • the trained gesture feature model includes a plurality of reference images and the reference gesture feature vectors of the plurality of reference images; determining the gesture feature corresponding to the shooting preview image according to the gesture feature model and the gesture feature vector corresponding to the shooting preview image includes: comparing, through the gesture feature model, the gesture feature vector corresponding to the shooting preview image with the reference gesture feature vectors of the plurality of reference images, determining the reference image whose reference gesture feature vector has the highest similarity, and deriving the target object feature from that reference image. For example, after the reference image is determined, the gesture feature is determined based on the label of the reference image.
  • the step S65 may specifically include: determining that the shooting condition is satisfied when determining that the expression similarity is greater than the similarity preset threshold; determining the gesture feature according to the gesture feature model and the gesture feature vector corresponding to the shooting preview image, for example, obtaining the index finger and the thumb a distance, and determining a shutter time corresponding to the derived gesture feature according to a correspondence between a predefined gesture feature and a shutter time, for example, determining a shutter time according to a correspondence between a gesture feature of the index finger and the thumb and a shutter time .
  • the foregoing correspondence may be a correspondence table or the like stored in a memory of the electronic device.
  • the step S63 may further include: taking the expression feature vector corresponding to the preview image as the input information of the trained expression feature model to obtain, through the trained expression feature model, a result including identification information indicating whether the shooting condition is met; and taking the gesture feature vector corresponding to the shooting preview image as the input information of the trained gesture feature model to obtain, through the trained gesture feature model, a result including the shooting setting parameter. Thereby, an analysis result including whether the shooting condition is satisfied and the shooting setting parameter is obtained.
  • the step S65 may include: determining whether the shooting condition is satisfied according to the identification information obtained from the trained expression feature model, and determining the shooting setting parameter according to the result obtained from the trained gesture feature model; that is, whether the shooting condition is satisfied and the shooting setting parameter are both determined from the analysis result.
  • step S65 includes: determining whether the shooting condition is currently satisfied and determining the shooting setting parameter according to the analysis result, output by the trained neural network model, that includes whether the shooting condition is currently satisfied and the shooting setting parameter.
  • the photo preview screen may be analyzed to determine an analysis result of the age identity of the photographed person, and the shutter time is determined according to the age identity. For example, when it is determined that the person being photographed is an infant, the expression of the infant is often fleeting, thereby reducing the shutter time, increasing the shutter speed, and ensuring that a wonderful moment can be captured in time.
  • the steps S61 and S63 in FIG. 7 correspond to the steps S11 and S13 in FIG. 1 respectively, and the related descriptions can refer to each other.
  • the shooting control method further includes: performing the shooting operation according to the determined shutter time; specifically, after the shutter time is adjusted to the determined shutter time, a corresponding shooting operation is performed.
  • FIG. 8 is a flowchart of a shooting control method in an eighth embodiment of the present application.
  • the photographing control method in the eighth embodiment differs from the method in the seventh embodiment shown in FIG. 7 mainly in the step corresponding to step S65 in FIG. 7 .
  • the photographing control method in the eighth embodiment includes the following steps:
  • S73 The shooting preview screen is analyzed by using a preset model to obtain an analysis result.
  • S75 Determine whether the shooting condition is currently satisfied and determine the aperture size according to the analysis result.
  • step S75 may include:
  • determining, according to the target object features obtained from the analysis result, whether the shooting condition is satisfied and determining the aperture size.
  • for example, whether the shooting condition is satisfied is determined from the expression features obtained from the analysis result,
  • and the aperture size is determined based on the gesture features obtained from the analysis result. For example, it may be set in advance that a smile determines whether to take a picture and a gesture feature determines the aperture size; for example, a correspondence between the aperture size and the distance between the thumb and the index finger is set in advance, and the aperture size may further be logarithmically related to the distance between the thumb and the index finger.
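  • Analogously, a sketch of a gesture-to-aperture mapping; the f-number table (itself roughly log-spaced in light-gathering area) and the distance range are assumptions for illustration:

```python
# Illustrative mapping from the thumb-to-index-finger distance to an aperture,
# analogous to the shutter-time mapping above; the f-number range is an assumption.
F_STOPS = [1.8, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0]

def aperture_from_gesture(distance_cm: float, max_distance_cm: float = 10.0) -> float:
    d = min(max(distance_cm, 0.0), max_distance_cm) / max_distance_cm    # 0..1
    # Pick a stop from the table: a wider pinch selects a wider aperture.
    index = round((1.0 - d) * (len(F_STOPS) - 1))
    return F_STOPS[index]

print(aperture_from_gesture(9.0))   # wide pinch -> large aperture (small f-number)
print(aperture_from_gesture(1.0))   # narrow pinch -> small aperture (large f-number)
```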
  • Steps S71 and S73 in FIG. 8 correspond to steps S11 and S13 in FIG. 1 respectively, and also correspond to steps S61 and S63 in FIG. 7, and related descriptions can be referred to each other.
  • Step S75 in FIG. 8 corresponds to step S65 in FIG. 7 , except that the aperture size is determined.
  • a more specific implementation of the step S75 in FIG. 8 is obtained by replacing the shutter time with the aperture size, and details are not described herein again.
  • the shutter time and the aperture size in FIGS. 7 and 8 are merely specific examples of the shooting setting parameters.
  • other shooting setting parameters such as parameters such as sensitivity, may also be determined based on the analysis result.
  • a plurality of shooting setting parameters may also be determined at the same time, for example, shooting setting parameters such as the shutter time and the aperture size are both determined.
  • the distance between the thumb and the index finger of the user's gesture can simultaneously correspond to the shutter time and the aperture size, and the corresponding shutter time and aperture size can be simultaneously obtained by analyzing the user gesture.
  • at least one shooting setting parameter including a shutter time, an aperture size, and the like can be determined based on the analysis result.
  • determining whether the shooting condition is satisfied and determining the shooting setting parameter based on the analysis result are performed simultaneously according to the analysis result.
  • determining the shooting setting parameter including the shutter time or the aperture size is performed after determining that the shooting condition is satisfied based on the analysis result. That is, after the determination of the shooting condition is satisfied, the shooting setting parameter is determined according to the analysis result, thereby avoiding determining the shooting setting parameter every time, and avoiding waste of computing resources.
  • the image processing algorithm model may be used to perform image analysis on the shooting preview image to obtain an analysis result of the number of faces, and the aperture size is determined according to the number of faces to ensure that each face is well exposed.
  • the shooting control method further includes:
  • the shooting operation is performed according to the determined aperture size. Specifically, after the aperture size is adjusted to the determined aperture size, a corresponding photographing operation is performed.
  • FIG. 9 is a flowchart of a shooting control method in a ninth embodiment of the present application.
  • the photographing control method in the ninth embodiment includes the following steps:
  • S83 The video stream formed by the preview image is analyzed by using a preset model to obtain an analysis result.
  • step S85 Determine whether the shooting condition is currently satisfied according to the analysis result. If yes, step S87 is performed, otherwise, step S83 is returned.
  • the video stream may be analyzed by a preset model such as a trained neural network model or a trained image processing algorithm model.
  • for example, a preset model such as a trained neural network model or a trained image processing algorithm model may capture that the user has completed an action: the action completed by the user is determined through the changes across multiple video frames, a correspondence between preset actions and the shooting condition may be predefined, and when the action completed by the user is determined to be a preset action,
  • it is determined that the shooting condition is satisfied.
  • each frame of the video frame in the video stream may also be analyzed by a preset model such as a trained neural network model or a trained image processing algorithm model to obtain an analysis result, and determined according to the analysis result. Whether the shooting conditions are met.
  • the analysis result is obtained by analyzing each frame of the video picture frame, and determining whether the shooting condition is satisfied according to the analysis result may refer to the related description of any of the foregoing embodiments.
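  • A rough sketch of detecting that a preset action has been completed from changes across multiple video frames; the motion-then-stability heuristic and its threshold are assumptions, not the specific analysis in this description:

```python
import numpy as np

def action_completed(frames, motion_threshold: float = 12.0) -> bool:
    """Crude stand-in for detecting that a preset action has finished: there
    was noticeable motion across earlier frames and the last two frames are
    stable again. The threshold and the stability heuristic are assumptions.
    """
    gray = [f.astype(np.float32).mean(axis=2) for f in frames]
    diffs = [np.abs(a - b).mean() for a, b in zip(gray, gray[1:])]
    had_motion = max(diffs[:-1], default=0.0) > motion_threshold
    now_stable = diffs[-1] < motion_threshold if diffs else False
    return had_motion and now_stable

rng = np.random.default_rng(3)
still = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
moving = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
print(action_completed([still, moving, still, still.copy()]))
```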
  • the shooting setting parameter may include at least one of a shutter time, an aperture size, and the like.
  • step S87 is performed after step S85.
  • step S87 may be performed simultaneously with step S85.
  • S89 Perform the shooting operation according to the shooting setting parameters. Specifically, after the shooting parameters are adjusted to the determined shooting setting parameters, a corresponding shooting operation is performed.
  • FIG. 10 is a flowchart of a shooting control method in a tenth embodiment of the present application.
  • the photographing control method in the tenth embodiment includes the following steps:
  • S91 Acquire a shooting preview image through the camera and capture the sound signal through the microphone.
  • step S95 Determine whether the shooting condition is met according to the first analysis result and the second analysis result. If yes, step S97 is performed, otherwise, step S93 is returned.
  • the first analysis result is the speech content obtained by a speech analysis model, and S95 includes:
  • determining whether a preliminary shooting condition is satisfied according to the first analysis result; for example, the preset voice content is "photographing", and if the voice content obtained as the first analysis result matches the preset voice content "photographing", it is determined that the preliminary shooting condition is met;
  • and then determining whether the shooting condition is satisfied according to a second analysis result obtained by analyzing the shooting preview image with the trained model.
  • determining whether the photographing condition is satisfied according to the second analysis result obtained by analyzing the shooting preview image with the trained model corresponds to step S15 in FIG. 1 .
  • for the specific implementation, reference may be made to step S15 in FIG. 1 and the related steps of the other embodiments.
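  • For illustration, a combined check over the two analysis results; the trigger word "photographing" and the 80% similarity threshold follow the examples above, while the function shape itself is an assumption:

```python
def combined_shooting_condition(voice_text: str,
                                image_similarity: float,
                                trigger_word: str = "photographing",
                                similarity_threshold: float = 0.8) -> bool:
    """Two-stage check: the recognised speech must match the preset content
    (preliminary condition), then the image analysis result must also pass.
    The trigger word and threshold are illustrative values.
    """
    preliminary_ok = voice_text.strip().lower() == trigger_word
    return preliminary_ok and image_similarity > similarity_threshold

print(combined_shooting_condition("Photographing", 0.9))   # True -> shoot
print(combined_shooting_condition("hello", 0.9))           # False -> keep waiting
```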
  • the shooting setting parameter may include at least one of a shutter time, an aperture size, and the like.
  • step S97 is performed after step S95.
  • step S97 may be performed simultaneously with step S95.
  • S99 Perform a shooting operation according to the determined shooting setting parameters. Specifically, after the shooting parameters are adjusted to the determined shooting setting parameters, a corresponding shooting operation is performed.
  • the accuracy of the shooting control can be further improved by adding other inputs as a basis for judging whether or not the shooting conditions are met.
  • the method may further include: training the model to obtain the trained model.
  • FIG. 11 is a flowchart of a model training process in a shooting control method according to an embodiment of the present application.
  • the model training process may include the following steps:
  • the model saves the positive samples and establishes or updates the correspondence between the positive samples and the satisfied shooting condition, so as to adjust its own parameters.
  • satisfying the shooting condition can be marked as the label of the positive sample.
  • the user manually controls the shooting to be done by pressing a shutter button or a photo icon.
  • the user manually controls the shooting to be performed by performing a specific operation on a physical button of the electronic device.
  • the electronic device includes a power button, and manual control shooting is achieved by double-clicking the power button.
  • S103 Sample, by the model using a preset rule, the picture frames that were not manually controlled to be shot as reverse samples, and adjust the parameters of the model according to the reverse samples.
  • sampling, by the model using a preset rule, the picture frames that were not manually controlled to be shot as reverse samples includes: sampling the framing picture frames within a period of time after the positive sample as reverse samples.
  • sampling, by using a preset rule, the picture frames that were not manually controlled to be shot as reverse samples includes: sampling the framing picture frames within a period of time before the positive sample as reverse samples.
  • the shooting framing picture is automatically captured in advance and a certain number of pending samples are stored; the pending samples determined not to correspond to manually controlled shooting are determined to be reverse samples.
  • the picture frames that were not manually controlled to be shot may also be obtained as reverse samples by random sampling.
  • the sampling is determined by an additional sensor; for example, a photosensitive or acoustic sensor collects ambient light or sound to decide when to sample, and the sampled picture frame is used as a reverse sample.
  • sampling, by the model using a preset rule, the picture frames that were not manually controlled to be shot as reverse samples includes: collecting a picture frame that was not manually controlled to be shot, further analyzing factors such as its composition, and then determining whether to sample it as a reverse sample.
  • sampling the picture frames that were not manually controlled to be shot by the model using a preset rule as reverse samples includes: sampling, at preset time intervals, the framing picture frames between two adjacent manually controlled shots as reverse samples.
  • the model may save the sampled reverse samples, and may also establish a correspondence between the reverse samples and the shooting condition to adjust its own parameters.
  • the reverse sample is a picture that does not satisfy the shooting condition; the label that does not satisfy the shooting condition can be marked as a label of the reverse sample.
  • the preset time interval can be 1 second, 2 seconds, and the like.
  • two adjacent manual control shots may be two adjacent manual control shots during the framing shot performed by the same camera opening.
  • the two adjacent manual control shots may also be two different manual control shots during the framing shooting.
  • the framing pictures between the first manually controlled shot and the second manually controlled shot are saved by the currently used model at preset time intervals as reverse samples.
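  • A sketch of the sample-collection logic described above, assuming a simple ring buffer of recent framing frames and 1/0 labels; the buffer size and the labelling scheme are illustrative choices:

```python
from collections import deque

class SampleCollector:
    """Sketch of the training-data collection described above: the frame the
    user manually shoots becomes a positive sample, and framing frames kept
    shortly before it become reverse (negative) samples.
    """
    def __init__(self, buffer_size: int = 30):
        self.recent_frames = deque(maxlen=buffer_size)   # pending framing frames
        self.positive, self.negative = [], []

    def on_preview_frame(self, frame):
        self.recent_frames.append(frame)

    def on_manual_shot(self, frame):
        self.positive.append((frame, 1))                 # label: meets condition
        for f in self.recent_frames:                     # frames the user skipped
            self.negative.append((f, 0))                 # label: does not meet it
        self.recent_frames.clear()

collector = SampleCollector()
for i in range(5):
    collector.on_preview_frame(f"preview-{i}")
collector.on_manual_shot("shot-frame")
print(len(collector.positive), len(collector.negative))   # 1 5
```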
  • the method further includes the step of: controlling the electronic device to enter the model training mode in response to a user operation of entering model training. Determining that the training completion condition is reached includes: determining that the training completion condition is reached in response to a user operation of exiting the model training mode.
  • the operations of entering the model training include a selection operation of the menu option, or a specific operation on the physical button, or a specific touch gesture input on the touch screen of the electronic device.
  • the controlling of the electronic device to enter the model training mode in response to the user's operation of entering model training includes: controlling the electronic device to enter the model training mode in response to the user's selection of a menu option, a specific operation on a physical button, or a specific touch gesture input on the touch screen of the electronic device.
  • determining that the training completion condition is reached includes: determining that the training completion condition is reached when it is determined that the number of times the user manually controls the shooting reaches the preset number of times N1.
  • the preset number of times N1 may be the number of times the system default model training needs to be executed, or may be a user-defined value.
  • determining that the training completion condition is reached includes: using the current positive sample to test the model, determining whether the test result reaches a preset threshold, and determining that the training completion condition is reached after the test result reaches the preset threshold.
  • the above model may be a model such as a neural network model or an image processing algorithm model.
  • the trained model is obtained by training the model in advance, and when the user turns on the camera for shooting, the shooting can be automatically controlled according to the trained model, and the satisfactory picture desired by the user can be captured in time.
  • FIG. 12 is a flowchart of a model training process in a shooting control method according to another embodiment of the present application.
  • the model training process may include the following steps:
  • S111 Perform a framing preview in response to the opening operation of the camera to obtain a shooting preview screen.
  • the step S111 specifically includes: after the automatic shooting function is turned on, performing a framing preview in response to the opening operation of the camera to obtain a shooting preview screen. That is, the model training process shown in FIG. 12 can be performed after the automatic shooting function is turned on.
  • turning on the automatic shooting function may be accomplished in response to a user setting operation in a menu option of the camera.
  • turning on the automatic shooting function may also be completed in response to the user's specific touch gesture on the touch screen of the electronic device, for example, in response to a double tap on the touch screen of the electronic device with a knuckle.
  • the photographing control method shown in any one of the embodiments 1 to 10 in the present application can be performed after the automatic shooting function is turned on by the electronic device.
  • Step S113 corresponds to step S17 in the first embodiment shown in FIG. 1 and step S25 in the second embodiment shown in FIG. 2 .
  • for details, reference may be made to the description of step S17 in FIG. 1 and of the related steps in the other embodiments.
  • S115 Acquire user satisfaction feedback information about the automatic shooting.
  • the user may be prompted to perform satisfaction evaluation on the automatic photographing by generating prompt information, for example, generating a prompt box including “satisfactory” and “unsatisfactory” options.
  • the satisfaction feedback information on the automatic photographing is thereby obtained.
  • the user's satisfaction with the automatic shooting is obtained by detecting the user's operations on the photo or video obtained by the automatic shooting. For example, if it is detected that the user deletes the photo or video obtained by the automatic shooting, it is determined that the user is not satisfied with the automatic shooting, and satisfaction feedback information indicating dissatisfaction is obtained. If it is detected that the user performs a setting operation marking the photo or video obtained by the automatic shooting as a favorite or preferred type, or performs a sharing operation, it is determined that the user is satisfied with the automatic shooting, and satisfaction feedback information indicating satisfaction is obtained.
  • S117 Output the user's satisfaction feedback information on the current automatic shooting to the currently used model, so that the currently used model uses the satisfaction feedback information for optimization training.
  • in this way, the training of the model can be optimized and the model is continuously improved, so that automatic shooting in subsequent use becomes more accurate.
  • the currently used model may be a model whose training has been confirmed as completed, or a model that has not yet finished training. When training has been confirmed as completed, the model can be further optimized; when the model has not yet finished training, the training can be better accomplished.
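  • As a toy illustration of feedback-driven optimization, the sketch below only adjusts a single decision threshold rather than retraining a full model; the step size and bounds are assumptions:

```python
def update_threshold(current_threshold: float, satisfied: bool,
                     step: float = 0.02) -> float:
    """Toy stand-in for "optimization training" from satisfaction feedback: if
    the user was unsatisfied with an automatic shot, demand a higher score
    before shooting next time; if satisfied, relax slightly. Using a single
    threshold instead of full retraining is a simplifying assumption.
    """
    new_threshold = current_threshold + (-step if satisfied else step)
    return min(max(new_threshold, 0.5), 0.95)

threshold = 0.8
for feedback in [False, False, True]:        # two "unsatisfied", one "satisfied"
    threshold = update_threshold(threshold, feedback)
print(round(threshold, 2))                    # 0.82
```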
  • the steps S111 S S117 in FIG. 12 can be performed after the step S105 in FIG. 11 , and can also be performed before the step S105 in FIG. 11 , and can even be performed before the step S101 in FIG. 11 .
  • The currently used model may also be an untrained initial model.
  • That is, the preset model in any one of Embodiments 1 to 10 described above may be an untrained model: after the user turns on the automatic shooting function, automatic shooting is performed according to the current model, and the current model is optimized and trained according to the satisfaction feedback information fed back by the user.
  • When the preset model is an untrained model, a picture may be acquired automatically each time the user performs shooting and used as a positive sample for training; the shooting setting parameters at the time of shooting may further be acquired and trained together with the positive sample.
  • The preset model is gradually optimized in this way until the number of training iterations reaches a preset number, or until the proportion of “satisfactory” responses in subsequent user feedback exceeds a preset ratio, at which point the training is determined to be completed. Since the user trains the model himself rather than using another person's model, better personalization can be achieved.
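  • The stopping criterion described above (a preset number of training iterations, or a satisfaction ratio above a preset threshold) might be sketched as follows; the threshold values and the model.update() interface are assumptions for illustration, not the application's actual implementation.

```python
PRESET_TRAINING_COUNT = 500      # assumed preset number of training iterations
PRESET_SATISFACTION_RATIO = 0.8  # assumed preset ratio of "satisfactory" feedback

class OnDeviceTrainer:
    """Minimal sketch of the per-user, on-device optimization loop."""

    def __init__(self, model):
        self.model = model
        self.training_count = 0
        self.feedback_log = []   # True = satisfactory, False = unsatisfactory

    def add_positive_sample(self, frame, shooting_params=None):
        # Each captured frame (optionally with its shooting setting parameters)
        # is used as a positive sample for one training step.
        self.model.update(frame, label=1, params=shooting_params)  # assumed API
        self.training_count += 1

    def add_feedback(self, satisfied: bool):
        self.feedback_log.append(satisfied)

    def training_completed(self) -> bool:
        if self.training_count >= PRESET_TRAINING_COUNT:
            return True
        if self.feedback_log:
            ratio = sum(self.feedback_log) / len(self.feedback_log)
            return ratio >= PRESET_SATISFACTION_RATIO
        return False
```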
  • FIG. 13 is a flowchart of a model training process in a shooting control method according to still another embodiment of the present application.
  • The model training process may include the following steps:
  • The model saves the positive samples, and establishes or updates the correspondence between the positive samples, the satisfied shooting conditions, and the shooting setting parameters, so as to adjust the parameters of the model itself.
  • The shooting conditions and the shooting setting parameters may also be marked as labels of the positive samples.
  • The user may manually control shooting by pressing a shutter button or tapping a photographing icon.
  • The user may also manually control shooting by performing a specific operation on a physical button of the electronic device.
  • For example, the electronic device includes a power button, and manually controlled shooting is triggered by double-pressing the power button.
  • The shooting setting parameters may include parameters such as aperture size, shutter time, and sensitivity (ISO).
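  • One plausible way to record a frame captured under manual control as a positive sample, with the satisfied shooting condition and the shooting setting parameters attached as labels, is sketched below; the field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ShootingParams:
    aperture: float        # f-number, e.g. 1.8
    shutter_time: float    # seconds, e.g. 1/120
    iso: int               # sensitivity, e.g. 200

@dataclass
class PositiveSample:
    frame: bytes                     # raw pixels of the captured frame
    params: ShootingParams           # shooting setting parameters used
    labels: dict = field(default_factory=dict)

    def attach_labels(self, satisfied_condition: str):
        # Both the satisfied shooting condition and the shooting setting
        # parameters are marked as labels of the positive sample.
        self.labels["condition"] = satisfied_condition
        self.labels["aperture"] = self.params.aperture
        self.labels["shutter_time"] = self.params.shutter_time
        self.labels["iso"] = self.params.iso

sample = PositiveSample(frame=b"...", params=ShootingParams(1.8, 1 / 120, 200))
sample.attach_labels("smiling_face")
```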
  • S123: Picture frames for which shooting was not manually triggered are sampled by the model as negative samples according to a preset rule, and the parameters of the model itself are adjusted according to the negative samples.
  • Step S123 and step S125 correspond to step S103 and step S105 in FIG. 11, respectively.
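  • The “preset rule” for selecting negative samples is not specified; one assumed rule, which keeps preview frames lying well away in time from any manual capture, is sketched below.

```python
def sample_negative_frames(preview_frames, capture_times,
                           min_gap=2.0, stride=10):
    """Pick negative samples from preview frames the user did NOT choose to capture.

    preview_frames: list of (timestamp, frame) pairs from the framing preview.
    capture_times:  timestamps at which the user manually triggered shooting.
    Assumed rule: keep every `stride`-th frame that is at least `min_gap`
    seconds away from every manual capture."""
    negatives = []
    for i, (t, frame) in enumerate(preview_frames):
        if i % stride != 0:
            continue
        if all(abs(t - c) >= min_gap for c in capture_times):
            negatives.append(frame)
    return negatives
```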
  • FIG. 14 is a block diagram showing a schematic partial structure of an electronic device 100 according to an embodiment of the present application.
  • The electronic device 100 includes a processor 10, a memory 20, and a camera 30.
  • The camera 30 includes at least a rear camera 31 and a front camera 32.
  • The rear camera 31 is used to capture images behind the electronic device 100 and can be used by the user for shooting operations such as photographing other people.
  • The front camera 32 is used to capture images in front of the electronic device 100 and can be used for self-photographing and the like.
  • The models in FIGS. 1-13 may be programs running in the processor 10, such as neural network algorithm functions, image processing algorithm functions, and the like.
  • In some embodiments, the electronic device 100 may further include a model processor that is independent of the processor 10.
  • In that case, the models in FIGS. 1 to 13 run in the model processor, and the processor 10 may generate corresponding instructions as needed.
  • These instructions trigger the model processor to run the corresponding model; the model's output is returned to the processor 10 for use, and control such as a shooting operation is performed accordingly.
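  • A minimal sketch of this division of labour between the general processor and a dedicated model processor might look like the following; the interface is an assumption, since the application describes the behaviour rather than an API.

```python
class ModelProcessor:
    """Stands in for a dedicated inference unit independent of the main processor."""

    def __init__(self, models):
        self.models = models            # model_id -> callable

    def run(self, model_id, frame):
        return self.models[model_id](frame)   # output result of the model

class MainProcessor:
    """Processor 10: issues instructions and acts on the returned result."""

    def __init__(self, model_processor, threshold=0.8):
        self.model_processor = model_processor
        self.threshold = threshold      # assumed satisfaction threshold

    def on_preview_frame(self, frame, shoot):
        # Trigger the model processor to run the model on the preview frame,
        # then perform control such as a shooting operation with the result.
        satisfaction = self.model_processor.run("preview_model", frame)
        if satisfaction >= self.threshold:
            shoot()
```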
  • Program instructions are stored in the memory 20.
  • The processor 10 is configured to call the program instructions stored in the memory 20 to execute the photographing control method in any of the embodiments shown in FIGS. 1 to 10, and to perform the model training process of the photographing control method in any of the embodiments shown in FIGS. 11 to 13.
  • Specifically, the processor 10 is configured to call the program instructions stored in the memory 20 to execute the following shooting control method: acquiring a shooting preview screen through the camera 30; analyzing the shooting preview screen by using a preset model to obtain an analysis result; determining, according to the analysis result, whether the shooting condition is currently satisfied; and controlling a shooting operation to be performed when it is determined that the shooting condition is satisfied.
  • When calling the program instructions to determine, according to the analysis result, whether the shooting condition is currently satisfied, the processor 10 is further configured to execute the program instructions to determine, according to the analysis result, a shooting setting parameter including at least one of a shutter time and an aperture size.
  • That the processor 10 controls a photographing operation to be performed when it is determined that the photographing condition is satisfied further includes: the processor 10 performing the photographing operation according to the determined shooting setting parameter when it determines that the photographing condition is satisfied.
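  • Taken together, this behaviour of the processor 10 (decide from the analysis result whether to shoot, and derive shooting setting parameters from the same result) could be sketched as follows; the analysis-result fields and the camera interface are assumptions for illustration.

```python
def control_shot(analysis, camera, threshold=0.8):
    """analysis: assumed dict with a satisfaction score plus scene hints
    (estimated brightness and motion); camera: hypothetical object exposing
    capture(shutter_time, aperture)."""
    if analysis["satisfaction"] < threshold:     # assumed shooting condition
        return None

    # Derive at least one of shutter time and aperture size from the analysis result.
    shutter_time = 1 / 500 if analysis.get("motion", 0.0) > 0.5 else 1 / 60
    aperture = 1.8 if analysis.get("brightness", 0.5) < 0.3 else 4.0

    # Perform the photographing operation with the determined setting parameters.
    return camera.capture(shutter_time=shutter_time, aperture=aperture)
```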
  • The processor 10 may be a microcontroller, a microprocessor, a single-chip microcomputer, a digital signal processor, or the like.
  • The memory 20 may be any storage device capable of storing information, such as a memory card, a solid-state memory, a micro hard disk, an optical disc, or the like.
  • The electronic device 100 further includes an input unit 40 and an output unit 50.
  • The input unit 40 may include a touch panel, a mouse, a microphone, and physical buttons including a power key, volume keys, and the like.
  • The output unit 50 may include a display screen, a speaker, and the like.
  • The touch panel of the input unit 40 and the display screen of the output unit 50 may be integrated to form a touch screen, providing both touch input and display output.
  • The electronic device 100 may be a portable electronic device having the camera 30, such as a mobile phone, a tablet computer, or a notebook computer, and may also be a photographing device such as a camera or a video camera.
  • The present application further provides a computer-readable storage medium storing a plurality of program instructions, and after the program instructions are called by a processor, all or part of the steps of any of the shooting control methods shown in FIGS. 1-13 are executed.
  • The computer-readable storage medium may be the memory 20, and may be any storage device capable of storing information, such as a memory card, a solid-state memory, a micro hard disk, an optical disc, or the like.
  • The photographing control method and the electronic device 100 of the present application can automatically determine, according to the shooting preview screen, whether the photographing condition is satisfied, perform photographing when it is satisfied, and thereby capture in time the highlight moment containing the content of the current shooting preview screen.
  • embodiments of the present invention can be provided as a method, apparatus (device), or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • The computer program may be stored in or distributed on a suitable medium, supplied together with other hardware or as part of the hardware, or distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

Provided is a photographing control method, comprising: acquiring a preview image; using a preset model to analyze the preview image so as to obtain an analysis result; determining, according to the analysis result, whether a photographing condition is satisfied at a current moment; and if it is determined that the photographing condition is satisfied, controlling a photographing operation to be executed. Also provided is an electronic device implementing the photographing control method. In the photographing control method and the electronic device of the present invention, whether a photographing condition is satisfied can be automatically determined according to a preview image, and if the photographing condition is satisfied, a photographing operation is performed to timely capture an image of a scene of interest containing content corresponding to a current preview image.

Description

拍摄控制方法及电子装置Shooting control method and electronic device 技术领域Technical field
本申请涉及电子设备领域,尤其涉及一种用于电子装置的拍摄控制方法及所述电子装置。The present application relates to the field of electronic devices, and in particular, to a photographing control method for an electronic device and the electronic device.
背景技术Background technique
现在,随着人们生活水平的提高,拍照已经成了为生活中并不可少的常用功能。现在,不论是照相机还是具有相机功能的手机、平板电脑等电子装置,像素都越来越高,拍照质量都越来越好。然而,目前的照相机、手机等电子装置,在进行拍照控制时,往往还需要用户通过按快门键或拍照图标等启动拍照,由于用户的操作往往有一定滞后性,导致了往往无法及时捕捉精彩的瞬间,而使得常常无法拍摄到满意的照片,或者由于滞后,反而拍摄到了不满意的照片。例如,被拍摄者可能在取景时状态很好,但是进行拍照的瞬间可能出现正好眼睛没有睁开,笑容比较僵硬等状况,最终拍摄的照片往往难以令人满意。再例如,如给小宝宝拍照时,小宝宝的可爱表情往往转瞬即逝,通过用户操作快门键或拍摄图标等难以及时拍到满意的照片。Now, with the improvement of people's living standards, taking pictures has become a common function that is not indispensable in life. Nowadays, whether it is a camera or an electronic device such as a mobile phone or a tablet computer with a camera function, pixels are getting higher and higher, and the quality of photographs is getting better and better. However, current electronic devices such as cameras and mobile phones often require the user to initiate a photograph by pressing a shutter button or a photographing icon when performing photographing control. Since the user's operation often has a certain lag, it is often impossible to capture the wonderful time in time. In an instant, it is often impossible to take a satisfactory photo, or because of the lag, it has taken an unsatisfactory photo. For example, the subject may be in good condition when framing, but the moment when the photo is taken may appear that the eyes are not open, the smile is relatively stiff, and the final photographs are often unsatisfactory. For example, when taking pictures of a baby, the cute expression of the baby is often fleeting, and it is difficult to take a satisfactory photo in time by the user operating the shutter button or shooting an icon.
发明内容Summary of the invention
本申请提供一种拍摄控制方法及电子装置,能够及时进行拍摄操作,捕捉到精彩的瞬间。The application provides a shooting control method and an electronic device, which can perform a shooting operation in time to capture a wonderful moment.
一方面,提供一种拍摄控制方法,所述拍摄控制方法包括:获取拍摄预览画面;采用预设模型对所述拍摄预览画面进行分析而得到分析结果;根据所述分析结果确定当前是否满足拍摄条件;以及在确定满足拍摄条件时,控制执行拍摄操作。In one aspect, a shooting control method is provided, the shooting control method includes: acquiring a shooting preview screen; analyzing the shooting preview image by using a preset model to obtain an analysis result; determining, according to the analysis result, whether the current shooting condition is met And control to perform the shooting operation when it is determined that the shooting conditions are satisfied.
另一方面,提供一种电子装置,所述电子装置包括摄像头、存储器以及处理器。所述存储器用于存储程序指令。所述处理器用于调用所述程序指令执行一种拍摄控制方法,所述拍摄控制方法包括:通过摄像头获取拍摄预览画面;采用预设模型对所述拍摄预览画面进行分析而得到分析结果;根据所述分析结果确定当前是否满足拍摄条件;以及在确定满足拍摄条件时,控制执行拍摄操作。In another aspect, an electronic device is provided, the electronic device including a camera, a memory, and a processor. The memory is for storing program instructions. The processor is configured to execute the shooting control method by calling the program instruction, the shooting control method includes: acquiring a shooting preview image by using a camera; and analyzing the shooting preview image by using a preset model to obtain an analysis result; The analysis result determines whether the shooting condition is currently satisfied; and when it is determined that the shooting condition is satisfied, the control performs the shooting operation.
再一方面,还提供一种计算机可读存储介质,所述计算机可读存储介质存储有程序指令,所述程序指令供计算机调用后执行一种拍摄控制方法,所述拍摄控制方法包括:获取拍摄预览画面;采用预设模型对所述拍摄预览画面进行分析而得到分析结果;根据所述分析结果确定当前是否满足拍摄条件;以及在确定满足拍摄条件时,控制执行拍摄操作。In still another aspect, a computer readable storage medium is provided, the computer readable storage medium storing program instructions for executing a shooting control method after the computer calls, the shooting control method comprising: acquiring a shooting Previewing a picture; analyzing the shooting preview picture by using a preset model to obtain an analysis result; determining whether the shooting condition is currently satisfied according to the analysis result; and controlling to perform a shooting operation when it is determined that the shooting condition is satisfied.
本申请的拍摄控制方法及电子装置,可根据拍摄预览画面自动判断是否满足拍摄条件,并在满足拍摄条件时进行拍摄,可及时捕捉到包括当前拍摄预览画面对应内容的精彩瞬间。The photographing control method and the electronic device of the present application can automatically determine whether the photographing condition is satisfied according to the photographing preview screen, and perform photographing when the photographing condition is satisfied, and can capture a wonderful moment including the content corresponding to the current photographing preview screen in time.
附图说明DRAWINGS
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的明显变形方式。In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings to be used in the embodiments will be briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application, Those skilled in the art can also obtain other obvious modifications according to these drawings without any creative work.
图1为本申请第一实施例中的拍摄控制方法的流程图。1 is a flow chart of a photographing control method in a first embodiment of the present application.
图2为本申请第二实施例中的拍摄控制方法的流程图。2 is a flow chart of a photographing control method in a second embodiment of the present application.
图3为本申请第三实施例中的拍摄控制方法的流程图。FIG. 3 is a flowchart of a photographing control method in a third embodiment of the present application.
图4为本申请第四实施例中的拍摄控制方法的流程图。4 is a flow chart of a photographing control method in a fourth embodiment of the present application.
图5为本申请第五实施例中的拍摄控制方法的流程图。FIG. 5 is a flowchart of a photographing control method in a fifth embodiment of the present application.
图6为本申请第六实施例中的拍摄控制方法的流程图。FIG. 6 is a flowchart of a photographing control method in a sixth embodiment of the present application.
图7为本申请第七实施例中的拍摄控制方法的流程图。FIG. 7 is a flowchart of a photographing control method in a seventh embodiment of the present application.
图8为本申请第八实施例中的拍摄控制方法的流程图。FIG. 8 is a flowchart of a photographing control method in an eighth embodiment of the present application.
图9为本申请第九实施例中的拍摄控制方法的流程图。FIG. 9 is a flowchart of a photographing control method in a ninth embodiment of the present application.
图10为本申请第十实施例中的拍摄控制方法的流程图。FIG. 10 is a flowchart of a photographing control method in a tenth embodiment of the present application.
图11为本申请一实施例中的拍摄控制方法中的模型训练过程的流程图。FIG. 11 is a flowchart of a model training process in a shooting control method according to an embodiment of the present application.
图12为本申请另一实施例中的拍摄控制方法中的模型训练过程的流程图。FIG. 12 is a flowchart of a model training process in a shooting control method according to another embodiment of the present application.
图13为本申请再一实施例中的拍摄控制方法中的模型训练过程的流程图。FIG. 13 is a flowchart of a model training process in a photographing control method in still another embodiment of the present application.
图14为本申请一实施例中的电子装置的示意出部分结构的框图。FIG. 14 is a block diagram showing a schematic partial structure of an electronic device according to an embodiment of the present application.
具体实施方式detailed description
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。The technical solutions in the embodiments of the present application are clearly and completely described in the following with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without departing from the inventive scope are the scope of the present application.
本申请的拍摄控制方法可应用于一电子装置中,电子装置可以包括摄像头,电子装置可通过摄像头获取拍摄预览画面并显示拍摄预览画面,电子装置可通过摄像头进行拍照、连拍、视频拍摄等操作。其中,摄像头包括前置摄像头和后置摄像头,拍照、连拍、视频拍摄等操作可为通过后置摄像头进行的拍摄,也可为通过前置摄像头进行的自拍。The shooting control method of the present application can be applied to an electronic device. The electronic device can include a camera. The electronic device can acquire a shooting preview screen and display a shooting preview image through the camera, and the electronic device can perform photographing, continuous shooting, video shooting, etc. through the camera. . Among them, the camera includes a front camera and a rear camera, and the operations of photographing, continuous shooting, video shooting, etc. can be performed by a rear camera or a self-timer by a front camera.
请参阅图1,为本申请第一实施例中的拍摄控制方法的流程图。拍摄控制方法应用于一电子装置中。方法包括如下步骤:Please refer to FIG. 1 , which is a flowchart of a shooting control method in a first embodiment of the present application. The shooting control method is applied to an electronic device. The method includes the following steps:
S11:获取拍摄预览画面。S11: Acquire a shooting preview screen.
在一些实施例中,获取拍摄预览画面的操作为响应开启摄像头的操作后通过摄像头来进行的,即,为通过摄像头来获取拍摄预览画面。In some embodiments, the operation of acquiring the shooting preview screen is performed by the camera in response to the operation of turning on the camera, that is, the shooting preview screen is acquired by the camera.
在一些实施例中,开启摄像头的操作为对拍照应用图标的点击操作,即,在响应对拍照应用图标的点击操作而开启摄像头时,则通过摄像头去获取拍摄预览画面。In some embodiments, the operation of turning on the camera is a click operation on the photographing application icon, that is, when the camera is turned on in response to a click operation on the photographing application icon, the shooting preview screen is acquired by the camera.
或者,在另一些实施例中,开启摄像头的操作为对电子装置的物理按键的特定操作,例如,电子装置包括音量增加键和音量减小键,开启摄像头的操作为对音量增加键和音量减小键的同时按压的操作。进一步的,开启拍照应用的操作为在预设时间(例如2秒)内先后按压音量增加键及音量减小键的操作。Alternatively, in other embodiments, the operation of turning on the camera is a specific operation of a physical button of the electronic device. For example, the electronic device includes a volume up button and a volume down button, and the operation of turning on the camera is to increase the volume and volume. Simultaneous pressing of small keys. Further, the operation of turning on the photographing application is an operation of pressing the volume up key and the volume down key in a preset time (for example, 2 seconds).
在另一些实施例中,开启摄像头的操作还可为在电子装置的任一显示界面中输入的预设触摸手势的操作,例如,在电子装置的主界面上,用户可输入一个具有环形触摸轨迹的触摸手势而开启摄像头。In other embodiments, the operation of turning on the camera may also be an operation of a preset touch gesture input in any display interface of the electronic device. For example, on the main interface of the electronic device, the user may input a circular touch track. Turn on the camera with a touch gesture.
在另一些实施例中,开启摄像头的操作还可为在电子装置处于黑屏状态下在触摸屏上输入的预设触摸手势的操作。In other embodiments, the operation of turning on the camera may also be an operation of a preset touch gesture input on the touch screen when the electronic device is in a black screen state.
在一些实施例中,当电子装置为照相机时,开启摄像头的操作为对照相机的快门按键/电源按键进行按压而触发照相机处于启动状态的操作。In some embodiments, when the electronic device is a camera, the operation of turning on the camera is an operation that presses the shutter button/power button of the camera to trigger the camera to be in an activated state.
可选的,本申请中,获取拍摄预览画面为通过摄像头实时获取拍摄预览画面。Optionally, in the present application, acquiring a shooting preview screen is to obtain a shooting preview screen in real time through a camera.
S13:采用预设模型对拍摄预览画面进行分析而得到分析结果。S13: analyzing the shooting preview image by using a preset model to obtain an analysis result.
其中,预设模型可为已训练完成的模型,也可为未训练完成的模型。The preset model may be a trained model or an untrained model.
可选的,在一些实施例中,预设模型为已训练的神经网络模型;采用预设模型对拍摄预览画面进行分析得出分析结果,进一步包括:通过神经网络模型对拍摄预览画面进行分析,得出满意度这一分析结果。其中,满意度为神经网络模型通过将拍摄预览画面的所有像素作为输入,而根据预先训练的模型进行处理而输出满意度这一分析结果。Optionally, in some embodiments, the preset model is a trained neural network model; and the preset preview model is used to analyze the captured preview image to obtain an analysis result, and further includes: analyzing the captured preview image by using a neural network model, The result of this analysis of satisfaction is obtained. Among them, the satisfaction degree is that the neural network model outputs the satisfaction result by processing the pre-trained model by taking all the pixels of the preview image as input.
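As an illustrative sketch only (the application does not disclose a network architecture), a trained network that maps all pixels of the preview screen to a single satisfaction score could be wired up as a tiny forward pass; the layer sizes and random stand-in weights below are assumptions, and a real deployment would load trained weights from storage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for trained weights of a small fully connected network.
W1 = rng.normal(size=(64, 64 * 64 * 3)) * 0.01
b1 = np.zeros(64)
W2 = rng.normal(size=(1, 64)) * 0.01
b2 = np.zeros(1)

def satisfaction(preview_rgb):
    """preview_rgb: 64 x 64 x 3 uint8 preview frame; returns a score in [0, 1]."""
    x = preview_rgb.astype(np.float32).reshape(-1) / 255.0   # all pixels as input
    h = np.maximum(0.0, W1 @ x + b1)                         # hidden layer (ReLU)
    z = W2 @ h + b2
    return float(1.0 / (1.0 + np.exp(-z[0])))                # sigmoid -> [0, 1]

frame = np.zeros((64, 64, 3), dtype=np.uint8)                # dummy preview frame
print(satisfaction(frame))
```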
可选的,在另一些实施例中,预设模型为已训练的图像处理算法模型,采用预设模型对拍摄预览画面进行分析得出分析结果,进一步包括:采用已训练的图像处理算法模型将拍摄预览画面与基准图片进行比较,分析出预览画面与基准图片的相似度这一分析结果。其中, 基准图片可为用户预先设定的具有微笑、大笑、难过、生气、打哈欠等特定表情的标准图片。Optionally, in other embodiments, the preset model is a trained image processing algorithm model, and the preset preview model is used to analyze the captured preview image to obtain an analysis result, and further includes: using a trained image processing algorithm model The shooting preview screen is compared with the reference picture, and the analysis result of the similarity between the preview picture and the reference picture is analyzed. The reference picture may be a standard picture preset by the user with a specific expression such as smile, laughter, sadness, anger, yawning, and the like.
进一步的,已训练的图像处理算法模型包括已训练的目标对象特征模型,采用已训练的图像处理算法模型将拍摄预览画面与基准图片进行比较,分析出拍摄预览画面与基准图片的相似度,包括:利用图像识别技术对拍摄预览画面中的目标对象进行分析,生成对应的目标对象特征向量;根据已训练的目标对象特征模型和拍摄预览画面对应的目标对象特征向量计算得到拍摄预览画面与基准图片的相似度。Further, the trained image processing algorithm model includes the trained target object feature model, and the trained image processing algorithm model is used to compare the captured preview image with the reference image, and the similarity between the captured preview image and the reference image is analyzed, including The image recognition technology is used to analyze the target object in the preview image, and the corresponding target object feature vector is generated. The captured preview image and the reference image are calculated according to the trained target object feature model and the target object feature vector corresponding to the captured preview image. Similarity.
在一些实施例中,所述根据已训练的目标对象特征模型和拍摄预览画面对应的目标对象特征向量计算得到拍摄预览画面与基准图片的相似度,包括:将拍摄预览画面对应的目标对象特征向量作为已训练的表情特征模型的输入信息,而通过所述目标对象特征向量计算得出拍摄预览画面与基准图片的相似度。In some embodiments, the calculating the similarity between the captured preview image and the reference image according to the trained target object feature model and the target object feature vector corresponding to the captured preview image comprises: capturing the target object feature vector corresponding to the preview image As the input information of the trained expression feature model, the similarity between the shooting preview picture and the reference picture is calculated by the target object feature vector.
在另一些实施例中,采用已训练的图像处理算法模型将拍摄预览画面与基准图片进行比较,分析出预览画面与基准图片的相似度这一分析结果,包括:获取拍摄预览画面的像素信息;将拍摄预览画面的像素信息与基准图片的像素信息进行比较,分析出拍摄预览画面与基准图片的相似度。即,在另一些实施例中,通过对比两个图像的像素灰阶值等像素信息来得出拍摄预览画面和基准图片的相似度。In another embodiment, comparing the shooting preview image with the reference image by using the trained image processing algorithm model, and analyzing the analysis result of the similarity between the preview image and the reference image, comprising: acquiring pixel information of the shooting preview image; The pixel information of the shooting preview screen is compared with the pixel information of the reference picture, and the similarity between the shooting preview picture and the reference picture is analyzed. That is, in other embodiments, the similarity between the captured preview picture and the reference picture is obtained by comparing the pixel information such as the pixel grayscale value of the two images.
在另一些实施例中,当预设模型为已训练的图像处理算法模型时,所述采用预设模型对拍摄预览画面进行分析而得到分析结果,包括:利用人脸识别技术对拍摄预览画面中的脸部表情进行分析,生成对应的表情特征向量;将表情特征向量作为图像处理算法模型的输入信息,而得出包括标识当前是否满足拍摄条件的标识信息的分析结果。例如,所述标识信息可包括1、0等标识当前是否满足拍摄条件的分析结果的标识符。更具体的,当标识信息为标识符1时,表示满足拍摄条件,当标识信息为标识符0时,表示不满足拍摄条件。In another embodiment, when the preset model is a trained image processing algorithm model, the analyzing the captured preview image by using the preset model to obtain the analysis result includes: using the face recognition technology to capture the preview image. The facial expression is analyzed to generate a corresponding expression feature vector; the expression feature vector is used as input information of the image processing algorithm model, and an analysis result including identification information indicating whether the shooting condition is currently satisfied is obtained. For example, the identification information may include an identifier of 1, 0, etc., which identifies whether the analysis result of the shooting condition is currently satisfied. More specifically, when the identification information is the identifier 1, it indicates that the shooting condition is satisfied, and when the identification information is the identifier 0, it indicates that the shooting condition is not satisfied.
在一些实施例中,已训练的目标对象特征模型为通过如下的方式进行训练完成:在初始训练集中提供多张具有不同表情人脸的照片;利用图像识别技术,对初始训练集中提供的多张照片中的目标对象进行表情分析,生成对应的目标对象特征向量Xi,例如,当目标对象特征向量为表情特征向量时,X1表示眼睛睁开的大小,X2表示嘴角上扬的程度,X3表示嘴巴张开的大小;基于生成的目标对象特征向量和对应的照片与基准图片间的相似度标签建立训练样本集;再用样本集进行训练学习,得到训练完成的目标对象特征模型。In some embodiments, the trained target object feature model is completed by training in a plurality of photos having different expression faces in the initial training set; and using the image recognition technology to provide a plurality of pieces in the initial training set The target object in the photo performs emoticon analysis to generate a corresponding target object feature vector Xi. For example, when the target object feature vector is an expression feature vector, X1 represents the size of the eye opening, X2 represents the degree of the mouth angle rising, and X3 represents the mouth opening. The size of the opening; the training sample set is established based on the generated target object feature vector and the similarity label between the corresponding photo and the reference picture; and the sample set is used for training learning to obtain the trained target object feature model.
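A minimal sketch of such training, assuming the feature vectors (X1 eye opening, X2 mouth-corner lift, X3 mouth opening) have already been extracted and each photo carries a similarity label against the reference picture, is shown below; fitting a linear model by least squares is an assumption, since the application does not fix a particular learning algorithm.

```python
import numpy as np

# Assumed training set: feature vectors [X1, X2, X3] and similarity labels (0..1).
features = np.array([
    [0.9, 0.8, 0.7],
    [0.2, 0.1, 0.0],
    [0.8, 0.9, 0.6],
    [0.3, 0.2, 0.1],
])
similarity_labels = np.array([0.95, 0.10, 0.90, 0.20])

# Fit weights w and bias b so that similarity ~= features @ w + b.
X = np.hstack([features, np.ones((len(features), 1))])
coeffs, *_ = np.linalg.lstsq(X, similarity_labels, rcond=None)
w, b = coeffs[:-1], coeffs[-1]

def predicted_similarity(feature_vector):
    """Trained target-object feature model: feature vector -> similarity to the reference picture."""
    return float(np.clip(feature_vector @ w + b, 0.0, 1.0))

print(predicted_similarity(np.array([0.85, 0.85, 0.65])))
```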
在一些实施例中,本申请中,所述分析结果为根据实时获取的拍摄预览画面进行分析而得到相应的分析结果。在另一些实施例中,所述采用预设模型对拍摄预览画面进行分析而得到分析结果为每间隔预设时间(例如0.2秒)对当前获取到的拍摄预览画面进行分析而得到当前的分析结果。In some embodiments, in the present application, the analysis result is that the analysis result is obtained according to the real-time acquired shooting preview screen. In some other embodiments, the step of analyzing the shot preview image by using the preset model to obtain an analysis result is that the currently acquired shot preview screen is analyzed every time by a preset time (for example, 0.2 seconds) to obtain a current analysis result. .
S15:根据分析结果确定当前是否满足拍摄条件。如果满足,则执行步骤S17,否则,返回步骤S13或者流程结束。S15: Determine whether the shooting condition is currently satisfied according to the analysis result. If yes, step S17 is performed, otherwise, it returns to step S13 or the process ends.
在一些实施例中,当分析结果为根据神经网络模型得出时,根据分析结果确定当前是否满足拍摄条件,包括:在确定满意度超过满意度预设阈值时,确定当前满足拍摄条件。其中,满意度预设阈值可为80%、90%等值。In some embodiments, when the analysis result is obtained according to the neural network model, determining whether the shooting condition is currently satisfied according to the analysis result includes: determining that the shooting condition is currently satisfied when determining that the satisfaction exceeds the satisfaction preset threshold. The satisfaction preset threshold may be 80%, 90%, and the like.
在一些实施例中,当分析结果为根据图像处理算法模型得出时,根据分析结果确定当前是否满足拍摄条件,包括:在确定相似度超过相似度预设阈值时,确定当前满足拍摄条件。或者,根据分析结果确定当前是否满足拍摄条件,还可包括:在分析结果包括标识当前满足拍摄条件的标识信息时,确定当前满足拍摄条件。In some embodiments, when the analysis result is obtained according to the image processing algorithm model, determining whether the shooting condition is currently satisfied according to the analysis result includes: determining that the shooting condition is currently satisfied when determining that the similarity exceeds the similarity preset threshold. Alternatively, determining whether the shooting condition is currently satisfied according to the analysis result may further include: determining that the shooting condition is currently satisfied when the analysis result includes identifying the identification information that currently meets the shooting condition.
其中,本申请中,目标对象可为手部、脸部、特定的景物等;目标对象特征模型相应包括手势特征模型、表情特征模型以及景物特征模型等,分析出的目标对象特征向量可包括手势特征向量、表情特征向量和景物特征向量等。In the present application, the target object may be a hand, a face, a specific scene, etc.; the target object feature model includes a gesture feature model, an expression feature model, and a scene feature model, etc., and the analyzed target object feature vector may include a gesture. Feature vectors, expression feature vectors, and scene feature vectors.
S17:在确定满足拍摄条件时,控制执行拍摄操作。S17: Control determines to perform a shooting operation when it is determined that the shooting condition is satisfied.
在一些实现方式中,拍摄操作为拍照操作,控制执行拍摄操作,包括:控制执行拍照操作,而得到当前拍照预览画面对应的照片。In some implementations, the photographing operation is a photographing operation, and controlling to perform the photographing operation includes: controlling to perform a photographing operation to obtain a photograph corresponding to the current photograph preview screen.
在另一些实现方式中,拍摄操作为连拍操作,控制执行拍摄操作,包括:控制执行连拍操作,而得到包括当前拍照预览画面对应照片在内的多张照片。可选的,在执行连拍操作后,还可包括进一步的步骤:对连拍操作获取到的多张照片进行分析,确定出最佳的照片;以及保留最佳的照片,而对连拍操作获取到的其他照片进行删除。In other implementations, the shooting operation is a continuous shooting operation, and controlling to perform a shooting operation includes: controlling to perform a continuous shooting operation, and obtaining a plurality of photos including a photo corresponding to the current photo preview screen. Optionally, after performing the continuous shooting operation, further steps may be further included: analyzing a plurality of photos obtained by the continuous shooting operation to determine the best photo; and retaining the best photo, and performing the continuous shooting operation Get other photos to delete.
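The burst-then-filter behaviour described here might be sketched as follows; the score function stands in for either the neural-network satisfaction or the similarity to the reference picture, and the camera and file interfaces are assumptions.

```python
import os

def burst_and_keep_best(camera, score, n_shots=8):
    """Take a burst of photos, keep only the best one and delete the rest.

    camera.burst(n) and score(path) are hypothetical interfaces: the former
    returns the file paths of the burst photos, the latter returns the
    satisfaction or similarity score of one photo."""
    paths = camera.burst(n_shots)
    best = max(paths, key=score)          # photo with the highest score
    for path in paths:
        if path != best:
            os.remove(path)               # avoid occupying extra storage space
    return best
```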
在另一些实现方式中,拍摄操作为视频拍摄操作,控制执行拍摄操作,包括:控制执行视频拍摄操作,而得到以当前拍照预览画面作为起始视频画面帧的视频文件。可选的,在执行视频拍摄操作得到视频文件后,还可包括步骤:在拍摄得到视频文件后,还可对拍摄到的视频文件中的多个视频画面帧进行比较,确定出最佳的画面帧;以及将最佳的画面帧截取出来作为照片保存。In other implementations, the photographing operation is a video photographing operation, and controlling to perform a photographing operation includes: controlling to perform a video photographing operation to obtain a video file that uses the current photograph preview screen as a starting video frame frame. Optionally, after the video recording operation is performed to obtain the video file, the method may further include: after the video file is captured, the plurality of video frame frames in the captured video file may be compared to determine an optimal screen. Frame; and intercept the best picture frame to save as a photo.
从而,本申请中,通过分析拍照预览画面确定当前是否满足拍摄条件,能够根据拍照预览画面中的内容来确定是否为用户期望拍下的画面,从而能够及时捕捉当前精彩的瞬间。Therefore, in the present application, by analyzing the photographing preview screen to determine whether the photographing condition is currently satisfied, whether or not the photograph desired by the user is photographed can be determined according to the content in the photograph preview screen, so that the current exciting moment can be captured in time.
请参阅图2,为本申请第二实施例中的拍摄控制方法的流程图。在第二实施例中,方法包括如下步骤:Please refer to FIG. 2 , which is a flowchart of a shooting control method in a second embodiment of the present application. In a second embodiment, the method comprises the steps of:
S21:获取拍摄预览画面。S21: Acquire a shooting preview screen.
S23:采用已训练的图像处理算法模型对拍摄预览画面与基准图片进行比较得到拍摄预览画面与基准图片的相似度这一分析结果。其中,基准图片可为用户预先设定的具有微笑、大笑、难过、生气、打哈欠等特定表情的标准图片,也可为具有“OK”手势、“V字形”手势等手势的标准图片,也可为具有花朵、鸟、山等景物的标准图片。S23: Comparing the shooting preview image with the reference image by using the trained image processing algorithm model to obtain an analysis result of the similarity between the shooting preview image and the reference image. The reference picture may be a standard picture preset by the user with a specific expression such as smiling, laughing, sad, angry, yawning, or a standard picture with gestures such as an "OK" gesture or a "V-shaped" gesture. It can also be a standard picture with flowers, birds, mountains and other scenery.
可选的,已训练的图像处理算法模型包括目标对象特征模型,步骤S23具体包括:通过图像识别技术对拍摄预览画面中的目标对象进行分析,生成对应的目标对象特征向量;以及根据目标对象特征模型和预览画面对应的目标对象特征向量计算得到预览画面与基准图片的相似度。Optionally, the trained image processing algorithm model includes a target object feature model, and step S23 specifically includes: analyzing, by using an image recognition technology, a target object in the captured preview image to generate a corresponding target object feature vector; and according to the target object feature The target object feature vector corresponding to the model and the preview image is calculated to obtain the similarity between the preview image and the reference picture.
进一步的,已训练的图像处理算法模型包括已训练的目标对象特征模型,采用已训练的图像处理算法模型将拍摄预览画面与基准图片进行比较,分析出拍摄预览画面与基准图片的相似度,包括:利用图像识别技术对拍摄预览画面中的目标对象进行分析,生成对应的目标对象特征向量;根据已训练的目标对象特征模型和拍摄预览画面对应的目标对象特征向量计算得到拍摄预览画面与基准图片的相似度。Further, the trained image processing algorithm model includes the trained target object feature model, and the trained image processing algorithm model is used to compare the captured preview image with the reference image, and the similarity between the captured preview image and the reference image is analyzed, including The image recognition technology is used to analyze the target object in the preview image, and the corresponding target object feature vector is generated. The captured preview image and the reference image are calculated according to the trained target object feature model and the target object feature vector corresponding to the captured preview image. Similarity.
在一些实施例中,根据已训练的目标对象特征模型和拍摄预览画面对应的目标对象特征向量计算得到拍摄预览画面与基准图片的相似度,包括:将拍摄预览画面对应的目标对象特征向量作为已训练的目标对象特征模型的输入信息,而通过目标对象特征模型计算得出拍摄预览画面与基准图片的相似度。In some embodiments, the similarity between the captured preview image and the reference image is calculated according to the trained target object feature model and the target object feature vector corresponding to the captured preview image, including: taking the target object feature vector corresponding to the captured preview image as The input information of the target object model of the training is calculated, and the similarity between the shooting preview picture and the reference picture is calculated by the target object feature model.
在另一些实施例中,采用已训练的图像处理算法模型将拍摄预览画面与基准图片进行比较,分析出拍摄预览画面与基准图片的相似度,包括:获取拍摄预览画面的像素信息;将拍摄预览画面的像素信息与基准图片的像素信息进行比较,分析出拍摄预览画面与基准图片的相似度。即,在另一些实施例中,通过对比两个图像的像素灰阶值等像素信息来得出拍摄预览画面和基准图片的相似度。In other embodiments, the captured image processing algorithm model is used to compare the shooting preview image with the reference image, and the similarity between the shooting preview image and the reference image is analyzed, including: acquiring pixel information of the shooting preview image; The pixel information of the screen is compared with the pixel information of the reference picture, and the similarity between the shooting preview picture and the reference picture is analyzed. That is, in other embodiments, the similarity between the captured preview picture and the reference picture is obtained by comparing the pixel information such as the pixel grayscale value of the two images.
其中,目标对象可包括脸部、手部、特定景物等对象,目标对象特征模型可包括表情特征模型、手势特征模型、景物特征模型等,分析出的目标对象特征向量可包括如前所述的大笑、生气、打哈欠等表情特征向量,或“OK”手势、“V字形”等手势特征向量,或花朵、鸟、山等景物特征向量。在一些实施例中,以表情特征模型为例,,已训练的表情特征模型可通过如下的方式训练得出:在初始训练集中提供多张具有不同表情人脸的照片;利用人脸识别技术,对初始训练集中提供的多张照片中的人物进行表情分析,生成对应的表情特征向量Xi,例如,X1表示眼睛睁开的大小,X2表示嘴角上扬的程度,X3表示嘴巴张开的大小;基于生成的表情特征向量和对应的照片与基准图片间的相似度标签建立训练样本集;再用样本集进行训练学习,得到训练完成的表情特征模型。The target object may include an object such as a face, a hand, and a specific scene. The target object feature model may include an expression feature model, a gesture feature model, a scene feature model, and the like, and the analyzed target object feature vector may include the foregoing. Emoticon, angry, yawning and other expression vector, or "OK" gesture, "V-shaped" and other gesture feature vectors, or flower, bird, mountain and other scene feature vectors. In some embodiments, taking the expression feature model as an example, the trained expression feature model can be trained by providing a plurality of photos with different expression faces in the initial training set; using face recognition technology, Performing an expression analysis on the characters in the plurality of photos provided in the initial training set to generate a corresponding expression feature vector Xi, for example, X1 represents the size of the eye opening, X2 represents the degree of the mouth angle rising, and X3 represents the size of the mouth opening; The generated expression feature vector and the similarity label between the corresponding photo and the reference picture establish a training sample set; and then use the sample set to perform training learning, and obtain the trained expression feature model.
S25:在拍摄预览画面与基准图片的相似度达到相似度预设阈值时,确定满足拍摄条件, 控制执行拍摄操作。S25: When the similarity between the shooting preview screen and the reference picture reaches the similarity preset threshold, it is determined that the shooting condition is satisfied, and the shooting operation is controlled to be performed.
其中,相似度预设阈值可为80%、90%等。例如,基准图片包括具有大笑表情的基准图片,在拍摄预览画面与具有大笑表情的基准图片的相似度达到80%时确定满足拍摄条件,而触发自动拍照。The similarity preset threshold may be 80%, 90%, or the like. For example, the reference picture includes a reference picture having a laughing expression, and it is determined that the shooting condition is satisfied when the similarity between the shooting preview picture and the reference picture having the laughing expression reaches 80%, and the automatic shooting is triggered.
其中,本申请中,所述拍摄预览画面与基准图片的相似度可包括但不限于:图片风格的相似度、色彩的相似度、内容布局的相似度、像素灰阶的相似度等。In the present application, the similarity between the shooting preview screen and the reference picture may include, but is not limited to, the similarity of the picture style, the similarity of the colors, the similarity of the content layout, the similarity of the pixel gray levels, and the like.
本实施例中,用户可先设定认为满意的基准图片,并进行模型训练,后续在用户要拍照时,先获取预览画面,再采用已训练的模型分析预览画面与基准图片的相似度,在相似度达到相似度预设阈值时触发自动拍摄,因此能够获取到与基准图片相近似的满意照片,而当所述目标对象为脸部,采用的图像处理算法模型为表情特征模型时,能够减少在表情不自然时拍下照片,也能及时地拍下具有满意表情等的满意照片。In this embodiment, the user may first set a reference picture that is considered satisfactory, and perform model training. When the user wants to take a picture, the user first obtains a preview picture, and then uses the trained model to analyze the similarity between the preview picture and the reference picture. When the similarity reaches the similarity preset threshold, automatic shooting is triggered, so that a satisfactory photo similar to the reference picture can be obtained, and when the target object is a face and the image processing algorithm model is an expression feature model, the image can be reduced. Taking a photo when the expression is unnatural, you can also take a satisfactory photo with a satisfactory expression in time.
其中,步骤S21与图1中步骤S11对应,更具体的说明可参照图1的相关描述,在此不再赘述。步骤S23可为图1中步骤S13的更具体的步骤,相关的描述可相互参照。步骤S25对应图1中的步骤S15、S17,相关的描述也可相互参照。The step S21 corresponds to the step S11 in FIG. 1 . For more specific description, reference may be made to the related description of FIG. 1 , and details are not described herein again. Step S23 may be a more specific step of step S13 in FIG. 1, and related descriptions may be referred to each other. Step S25 corresponds to steps S15 and S17 in FIG. 1, and related descriptions may also be referred to each other.
请参阅图3,为本申请第三实施例中的拍摄控制方法的流程图。在第三实施例中,所述预设模型为已训练的图像处理算法模型,所述已训练的图像处理算法模型包括已训练的目标对象特征模型。所述拍摄控制方法包括如下步骤:Please refer to FIG. 3 , which is a flowchart of a shooting control method in a third embodiment of the present application. In a third embodiment, the preset model is a trained image processing algorithm model, and the trained image processing algorithm model includes a trained target object feature model. The shooting control method includes the following steps:
S301:获取拍摄预览画面。S301: Acquire a shooting preview screen.
S303:利用图像识别技术对拍摄预览画面中的目标对象进行分析,生成对应的目标对象特征向量。S303: Analyze a target object in the preview image by using an image recognition technology to generate a corresponding target object feature vector.
S305:将目标对象特征向量作为已训练的目标对象特征模型的输入信息,而得出包括标识当前是否满足拍摄条件的标识信息的分析结果。S305: Taking the target object feature vector as the input information of the trained target object feature model, and obtaining an analysis result including the identifier information that identifies whether the shooting condition is currently satisfied.
可选的,所述标识信息可为1、0等标识当前是否满足拍摄条件的标识符。更具体的,当标识信息为1时,表示满足拍摄条件,当标识信息为0时,表示不满足拍摄条件。显然,标识信息也可为“是”或“否”等信息。Optionally, the identifier information may be an identifier that identifies whether the photographing condition is currently satisfied, such as 1, 0, and the like. More specifically, when the identification information is 1, it indicates that the shooting condition is satisfied, and when the identification information is 0, it indicates that the shooting condition is not satisfied. Obviously, the identification information can also be information such as "yes" or "no".
S307:根据分析结果确定满足拍摄条件时,控制执行拍摄操作。S307: When it is determined according to the analysis result that the shooting condition is satisfied, the control performs a shooting operation.
具体的,所述步骤S307包括:在分析结果中包括标识当前满足拍摄条件的标识信息时,确定当前满足拍摄条件,控制执行拍摄操作。Specifically, the step S307 includes: when the identification result includes the identification information that currently meets the shooting condition, determining that the shooting condition is currently satisfied, and controlling to perform the shooting operation.
例如,当分析结果包括标识满足拍摄条件的标识信息时,例如包括标识符“1”时,确定当前满足拍摄条件,当分析结果包括标识不满足拍摄条件的标识信息时,例如包括标识符“0”时,确定不满足拍摄条件。For example, when the analysis result includes identifying the identification information that satisfies the shooting condition, for example, including the identifier "1", determining that the shooting condition is currently satisfied, and when the analysis result includes identifying the identification information that does not satisfy the shooting condition, for example, including the identifier "0" When it is determined that the shooting conditions are not met.
其中,步骤S301与图1中步骤S11对应,更具体的说明可参照图1的相关描述,在此不再赘述。步骤S303及步骤S305可为图1中步骤S13对应的步骤,相互之间为上位或下位的描述关系,相关的描述可相互参照。步骤S307与图1中的步骤S15、S17相对应,相关的描述也可相互参照。The step S301 corresponds to the step S11 in FIG. 1 . For a more specific description, reference may be made to the related description in FIG. 1 , and details are not described herein again. The step S303 and the step S305 may be the steps corresponding to the step S13 in FIG. 1 , and the description relationship between the upper and lower positions is mutually related, and the related descriptions may refer to each other. Step S307 corresponds to steps S15, S17 in Fig. 1, and the related description can also be referred to each other.
请参阅图4,为本申请第四实施例中的拍摄控制方法的流程图。在第四实施例中,拍摄控制方法包括如下步骤:Please refer to FIG. 4 , which is a flowchart of a shooting control method in a fourth embodiment of the present application. In the fourth embodiment, the photographing control method includes the following steps:
S31:获取拍摄预览画面。S31: Acquire a shooting preview screen.
S33:采用预设模型对拍摄预览画面进行分析得到分析结果。S33: analyzing the shooting preview image by using a preset model to obtain an analysis result.
可选的,步骤S33包括:采用已训练的图像处理算法模型对拍摄预览画面与基准图片进行比较得到拍摄预览画面与基准图片的相似度。Optionally, step S33 includes: comparing the captured preview image with the reference image by using the trained image processing algorithm model to obtain a similarity between the captured preview image and the reference image.
其中,已训练的图像处理算法模型包括目标对象特征模型,可选的,采用已训练的模型对拍摄预览画面与基准图片进行比较得到拍摄预览画面与基准图片的相似度,包括:通过图像识别技术对拍摄预览画面中的目标对象进行分析,生成对应的目标对象特征向量;以及根据目标对象特征模型和预览画面对应的目标对象特征向量计算得到预览画面与基准图片的相似度。The trained image processing algorithm model includes a target object feature model. Optionally, the trained model is used to compare the captured preview image with the reference image to obtain a similarity between the captured preview image and the reference image, including: through image recognition technology. The target object in the shooting preview screen is analyzed to generate a corresponding target object feature vector; and the similarity between the preview image and the reference image is calculated according to the target object feature model and the target object feature vector corresponding to the preview image.
在一些实施例中,采用预设模型对拍摄预览画面进行分析得到分析结果也可包括:利用图像识别技术对拍摄预览画面中的目标对象进行分析,生成对应的目标对象特征向量;将目标对象特征向量作为目标对象特征模型的输入信息,而得出包括标识当前是否满足拍摄条件的标识信息的分析结果。In some embodiments, analyzing the captured preview image by using the preset model to obtain the analysis result may further include: analyzing the target object in the captured preview image by using an image recognition technology, and generating a corresponding target object feature vector; The vector is used as input information of the target object feature model, and an analysis result including identification information indicating whether the shooting condition is currently satisfied is derived.
显然,如前所述,在另一些实施例中,也可通过已训练的神经网络模型得到满意度这一分析结果。Obviously, as mentioned above, in other embodiments, the analysis result of satisfaction can also be obtained by the trained neural network model.
S35:根据分析结果确定当前满足拍摄条件时,控制执行连拍操作,而得到包括当前拍照预览画面对应照片在内的多张照片。S35: When it is determined according to the analysis result that the shooting condition is currently satisfied, the control performs a continuous shooting operation, and obtains a plurality of photos including the photo corresponding to the current photo preview screen.
在一些实施例中,当通过已训练的图像处理算法模型得到满意度这一分析结果时,步骤S35包括:在拍摄预览画面与基准图片的相似度超过相似度预设阈值时或者分析结果包括标识当前满足拍摄条件的标识信息时,确定满足拍摄条件,控制执行连拍操作。In some embodiments, when the analysis result of the satisfaction is obtained by the trained image processing algorithm model, step S35 includes: when the similarity between the shooting preview screen and the reference picture exceeds the similarity preset threshold or the analysis result includes the identification When the identification information that satisfies the shooting condition is currently determined, it is determined that the shooting condition is satisfied, and the control performs the continuous shooting operation.
在另一些实施例中,当通过已训练的神经网络模型得到满意度这一分析结果时,步骤S35也可包括:当满意度超过满意度预设阈值时,确定满足拍摄条件,控制执行连拍操作。In other embodiments, when the analysis result of the satisfaction is obtained by the trained neural network model, step S35 may further include: when the satisfaction exceeds the satisfaction preset threshold, determining that the shooting condition is satisfied, and controlling to perform continuous shooting operating.
其中,执行连拍操作时的拍摄条件的要求可略低于执行拍照操作时的要求,具体的,执行连拍操作时对比的相似度预设阈值或满意度预设阈值可略低于执行拍照操作对比的相似度预设阈值或满意度预设阈值。The requirement of the shooting condition when performing the continuous shooting operation may be slightly lower than the requirement when the photographing operation is performed. Specifically, the comparison similarity preset threshold or the satisfaction preset threshold may be slightly lower than the execution of the photographing when performing the continuous shooting operation. The similarity preset threshold or satisfaction preset threshold of the operation comparison.
例如,执行连拍操作时对比的相似度预设阈值或满意度预设阈值可为70%,低于执行拍照操作对比的为80%或者更高的相似度预设阈值或满意度预设阈值。For example, the similarity preset threshold or the satisfaction preset threshold when performing the continuous shooting operation may be 70%, which is lower than the similarity preset threshold or the satisfaction preset threshold of 80% or higher in performing the photographing operation comparison. .
从而,当即将达到最佳拍摄效果时即进行连拍,而可以确保连拍中拍摄到最佳拍摄效果的照片。例如,设基准图片为具有大笑表情的图片,而对比的表情特征向量为表示嘴角上扬的程度的表情特征向量X2,当判断嘴角上扬程度达到基准图片的70%时则控制连拍操作,而用户继续微笑达到最大上扬程度的时间较短,用户微笑达到最大上扬程度这一表情将被连拍操作拍摄到,而能够确保连拍操作拍摄到最佳拍摄效果的照片。Thus, continuous shooting is achieved when the best shooting effect is about to be achieved, and a photograph of the best shooting effect in continuous shooting can be ensured. For example, the reference picture is a picture with a laughing expression, and the contrast expression vector is an expression feature vector X2 indicating the degree of the mouth angle rising, and the continuous shooting operation is controlled when it is determined that the mouth angle rises to 70% of the reference picture. The user continues to smile for a short period of time, and the user's smile reaches the maximum level. This expression will be captured by the continuous shooting operation, and the photo that ensures the best shooting effect can be ensured by the continuous shooting operation.
其中,步骤S31与图1中步骤S11对应,更具体的说明可参照图1的相关描述,在此不再赘述。步骤S33可为图1中步骤S13、图2中的步骤S23以及图3中的步骤S303、S305对应的步骤,相互之间为上位或下位的描述关系,相关的描述可相互参照。步骤S35与图1中的步骤S15、S17、图2中的步骤S25以及图3中的步骤S307相对应,相互之间为上位或下位的描述关系,相关的描述也可相互参照。The step S31 corresponds to the step S11 in FIG. 1 . For more specific description, reference may be made to the related description of FIG. 1 , and details are not described herein again. The step S33 may be the step S13 in FIG. 1 , the step S23 in FIG. 2 and the steps corresponding to the steps S303 and S305 in FIG. 3 , and the description relationship between the upper and lower positions is mutually related, and the related descriptions may refer to each other. Step S35 corresponds to steps S15 and S17 in FIG. 1, step S25 in FIG. 2, and step S307 in FIG. 3, and is a description relationship of upper or lower positions, and the related descriptions may also be referred to each other.
请参阅图5,为本申请第五实施例中的拍摄控制方法的流程图。第五实施例与第四实施例的区别在于,还对连拍操作得到的照片进行筛选。第五实施例中的拍摄控制方法包括如下步骤:Please refer to FIG. 5 , which is a flowchart of a shooting control method in a fifth embodiment of the present application. The fifth embodiment differs from the fourth embodiment in that the photos obtained by the continuous shooting operation are also screened. The photographing control method in the fifth embodiment includes the following steps:
S41:获取拍摄预览画面。S41: Acquire a shooting preview screen.
S43:采用已训练的模型对拍摄预览画面进行分析得到分析结果。S43: analyzing the captured preview image by using the trained model to obtain an analysis result.
S45:根据分析结果确定当前满足拍摄条件时,控制执行连拍操作,而得到包括当前拍照预览画面对应照片在内的多张照片。S45: When it is determined according to the analysis result that the shooting condition is currently satisfied, the control performs a continuous shooting operation, and obtains a plurality of photos including the photo corresponding to the current photo preview screen.
S47:对连拍操作获取到的多张照片进行分析,确定出最佳的照片。S47: Analyze a plurality of photos obtained by the continuous shooting operation to determine the best photo.
可选的,在一种实现方式中,步骤S47可包括:采用已训练的神经网络模型对该多张照片进行分析得到满意度,并确定满意度最高的照片作为最佳的照片。Optionally, in an implementation manner, step S47 may include: analyzing the plurality of photos by using the trained neural network model to obtain satisfaction, and determining the photo with the highest satisfaction as the best photo.
可选的,在另一种实现方式中,步骤S47可包括:将连拍获得的多张照片与基准图片进行比较,确定与基准图片相似度最高的照片作为最佳的照片。Optionally, in another implementation manner, step S47 may include: comparing a plurality of photos obtained by the continuous shooting with the reference image, and determining a photo with the highest similarity with the reference image as the best photo.
可选的,如图5所示,拍摄控制方法还可进一步包括:Optionally, as shown in FIG. 5, the shooting control method may further include:
S49:保留最佳的照片,而对连拍操作获取到的其他照片进行删除。S49: The best photo is retained, and other photos obtained by the continuous shooting operation are deleted.
在一些实施例中,电子装置包括存储器,存储器中建立有若干相册,“保留最佳的照片”为将最佳的照片存储于某一相册中,例如存储于相机相册中。其中,通过将其他照片进行删除,可有效避免占用过多的存储空间。In some embodiments, the electronic device includes a memory in which a plurality of albums are created, "preserve the best photos" to store the best photos in a certain album, such as in a camera album. Among them, by deleting other photos, it can effectively avoid occupying too much storage space.
其中,步骤S41~S45与图4所示的第三实施例中的步骤S31~S35分别对应相同,更详细 的介绍可参考图4中关于步骤S31~S35的描述。The steps S41 to S45 are respectively the same as the steps S31 to S35 in the third embodiment shown in FIG. 4, and the descriptions of the steps S31 to S35 in FIG. 4 can be referred to in more detail.
请参阅图6,为本申请第六实施例中的拍摄控制方法的流程图。第六实施例与第四实施例的区别在于,拍摄操作是视频拍摄而并非连拍操作。第六实施例中的拍摄控制方法包括如下步骤:Please refer to FIG. 6 , which is a flowchart of a shooting control method in a sixth embodiment of the present application. The sixth embodiment differs from the fourth embodiment in that the photographing operation is a video shooting and not a continuous shooting operation. The photographing control method in the sixth embodiment includes the following steps:
S51:获取拍摄预览画面。S51: Acquire a shooting preview screen.
S53:采用预设模型对拍摄预览画面进行分析得到分析结果。S53: analyzing the shooting preview image by using a preset model to obtain an analysis result.
S55:根据分析结果确定当前满足拍摄条件时,控制执行视频拍摄操作,而得到以当前拍摄预览画面作为起始视频画面帧的视频文件。S55: When it is determined according to the analysis result that the shooting condition is currently satisfied, the control performs a video capturing operation, and obtains a video file that uses the current shooting preview screen as a starting video frame frame.
其中,执行视频拍摄操作时的拍摄条件的要求也可略低于执行拍照操作时的要求,具体的,执行连拍操作时对比的相似度预设阈值或满意度预设阈值可略低于执行拍照操作对比的相似度预设阈值或满意度预设阈值。The requirement of the shooting condition when performing the video shooting operation may also be slightly lower than the requirement when the photographing operation is performed. Specifically, the comparison similarity preset threshold or the satisfaction preset threshold may be slightly lower than the execution when performing the continuous shooting operation. The similarity preset threshold or the satisfaction preset threshold of the photographing operation comparison.
从而,当即将达到最佳拍摄效果时即进行视频拍摄,而可以确保视频文件中包括最佳拍摄效果的视频画面帧。Thus, video shooting is performed when the best shooting effect is about to be achieved, and the video frame frame including the best shooting effect in the video file can be ensured.
其中,步骤S51~S55与图4所示的第四实施例中的步骤S31~S35分别对应,更具体的介绍可参见图4的相关描述。例如,步骤S55,可以包括:在拍摄预览画面与基准图片的相似度超过相似度预设阈值时,确定满足拍摄条件,控制执行视频拍摄操作。可选的,步骤S55,也可以包括:当满意度超过满意度预设阈值时,确定满足拍摄条件,控制执行视频拍摄操作。The steps S51-S55 correspond to the steps S31-S35 in the fourth embodiment shown in FIG. 4 respectively. For a more specific introduction, reference may be made to the related description of FIG. For example, step S55 may include: when the similarity between the shooting preview screen and the reference picture exceeds the similarity preset threshold, determining that the shooting condition is satisfied, and controlling to perform the video shooting operation. Optionally, in step S55, the method may further include: when the satisfaction exceeds the satisfaction preset threshold, determining that the shooting condition is met, and controlling to perform a video shooting operation.
可选的,如图6所示,拍摄控制方法还可进一步包括:Optionally, as shown in FIG. 6, the shooting control method may further include:
S57:对拍摄到的视频文件中的多个视频画面帧进行比较,确定出最佳的画面帧。S57: Compare a plurality of video picture frames in the captured video file to determine an optimal picture frame.
可选的,在一种实现方式中,步骤S57可包括:采用已训练的神经网络模型对该多个视频画面帧进行分析得到满意度,并确定满意度最高的视频画面帧作为最佳的画面帧。Optionally, in an implementation manner, step S57 may include: analyzing the plurality of video picture frames by using the trained neural network model to obtain satisfaction, and determining a video frame frame with the highest satisfaction as the optimal picture. frame.
可选的,在另一种实现方式中,步骤S57可包括:将视频文件中的多个视频画面帧与基准图片进行比较,确定与基准图片相似度最高的视频画面帧作为最佳的画面帧。Optionally, in another implementation manner, step S57 may include: comparing a plurality of video picture frames in the video file with the reference picture, and determining a video picture frame with the highest similarity with the reference picture as the optimal picture frame. .
Optionally, as shown in FIG. 6, the photographing control method may further include:
S59: Extract the best frame and save it as a photo.
In some embodiments, the electronic device includes a memory in which several albums are created; "extracting the best frame and saving it as a photo" means storing the best frame in a picture/photo format in one of the albums, for example in the camera album.
Please refer to FIG. 7, a flowchart of the photographing control method in the seventh embodiment of the present application. The photographing control method in the seventh embodiment includes the following steps:
S61: Acquire a shooting preview image.
S63: Analyze the shooting preview image using a preset model to obtain an analysis result.
S65: Determine, according to the analysis result, whether the shooting condition is currently satisfied, and determine a shutter time.
Optionally, in some embodiments, step S65 may include:
Determining, from target object features obtained from the analysis result, whether the shooting condition is satisfied and determining the shutter time. For example, take the case where an expression feature decides whether the shooting condition is satisfied and a gesture feature obtained from the analysis result decides the shutter time. It may be preset that the expression feature is used to decide whether to take a photo and the gesture feature is used to determine the shutter time; for example, a correspondence between a smiling expression and satisfaction of the shooting condition, and a correspondence between the shutter time and the thumb-to-index-finger distance, may be preset, where the shutter time may be logarithmically related to the thumb-to-index-finger distance. Thus, when the analysis result indicates that the user's expression feature is a smile, it is determined that the shooting condition is satisfied, and the shutter time is determined from the thumb-to-index-finger distance obtained from the analysis result, thereby controlling the exposure.
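A minimal sketch of the gesture-to-shutter-time mapping is given below; the constants are assumptions, and the logarithmic form simply follows the statement above that the shutter time may be logarithmically related to the thumb-to-index-finger distance.

```python
import math

MIN_SHUTTER_S = 1 / 4000   # assumed fastest shutter
MAX_SHUTTER_S = 1 / 15     # assumed slowest shutter
MAX_DISTANCE_PX = 300      # assumed distance at which the slowest shutter is selected

def shutter_time_from_gesture(distance_px: float) -> float:
    """Shutter time grows logarithmically with the thumb-to-index-finger distance (pixels)."""
    d = min(max(distance_px, 1.0), MAX_DISTANCE_PX)
    slope = (MAX_SHUTTER_S - MIN_SHUTTER_S) / math.log(MAX_DISTANCE_PX)
    return MIN_SHUTTER_S + slope * math.log(d)
```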
Optionally, in one implementation, the trained model is a trained image processing algorithm model, and the trained target object feature model included in the trained image processing algorithm model specifically includes a trained expression feature model and a trained gesture feature model. The trained expression feature model may be a model obtained through the following training steps: using face recognition technology, performing expression analysis on the person in each photo of an initial training set to generate a corresponding expression feature vector; building a training sample set from the generated expression feature vectors and the similarity labels between the corresponding photos and the reference picture; and training on that sample set to obtain the trained expression feature model. The trained gesture feature model is a model obtained through the following training steps: using image recognition technology, analyzing the hand region in each photo of the initial training set to generate a corresponding gesture feature vector; building a training sample set from the generated gesture feature vectors and the similarity labels between the corresponding photos and the reference picture; and training on that sample set to obtain the trained gesture feature model.
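A simplified sketch of this training procedure is shown below: feature vectors are paired with similarity labels and fitted with an ordinary least-squares model. The linear form is an assumption for illustration; the embodiment does not prescribe a particular learning algorithm.

```python
import numpy as np

def fit_feature_model(feature_vectors: np.ndarray, similarity_labels: np.ndarray) -> np.ndarray:
    """Fit weights w so that [features, 1] @ w approximates the similarity labels."""
    X = np.hstack([feature_vectors, np.ones((len(feature_vectors), 1))])  # add bias column
    w, *_ = np.linalg.lstsq(X, similarity_labels, rcond=None)
    return w

def predict_similarity(w: np.ndarray, feature_vector: np.ndarray) -> float:
    """Predict the similarity score for a new expression or gesture feature vector."""
    x = np.append(feature_vector, 1.0)
    return float(x @ w)
```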
Step S63 may specifically include: determining an expression similarity from the expression feature model and the expression feature vector corresponding to the shooting preview image, and determining the gesture feature corresponding to the shooting preview image from the gesture feature model and the gesture feature vector corresponding to the shooting preview image.
In some embodiments, the trained gesture feature model includes multiple reference images and the reference target object feature vectors of those reference images. Determining the gesture feature corresponding to the shooting preview image from the gesture feature model and the gesture feature vector corresponding to the shooting preview image includes: comparing, through the gesture feature model, the gesture feature vector corresponding to the shooting preview image with the reference gesture feature vectors of the multiple reference images, determining the reference image whose reference gesture feature vector has the highest similarity, and deriving the target object feature from that reference image. For example, after the reference image is determined, the gesture feature is determined from the label of the reference image.
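The nearest-reference comparison can be sketched as a similarity search over the stored reference gesture feature vectors; the function names and the choice of cosine similarity are illustrative assumptions.

```python
import numpy as np

def nearest_reference(query: np.ndarray, reference_vectors: np.ndarray, reference_labels):
    """Return the label of the reference image whose gesture vector is most similar (cosine)."""
    q = query / (np.linalg.norm(query) + 1e-9)
    refs = reference_vectors / (np.linalg.norm(reference_vectors, axis=1, keepdims=True) + 1e-9)
    scores = refs @ q                      # cosine similarity against every reference image
    best = int(np.argmax(scores))
    return reference_labels[best], float(scores[best])
```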
Step S65 may specifically include: determining that the shooting condition is satisfied when the expression similarity is greater than the similarity preset threshold; determining the gesture feature, for example the distance between the index finger and the thumb, from the gesture feature model and the gesture feature vector corresponding to the shooting preview image; and determining the shutter time corresponding to the obtained gesture feature according to a predefined correspondence between gesture features and shutter times, for example according to the correspondence between the index-finger-to-thumb distance and the shutter time.
The foregoing correspondence may be a correspondence table or the like stored in the memory of the electronic device.
In other embodiments, step S63 may also include: using the expression feature vector corresponding to the shooting preview image as input to the trained expression feature model to obtain a result that includes identification information indicating whether the shooting condition is satisfied, and using the gesture feature vector corresponding to the shooting preview image as input to the trained gesture feature model to obtain a result that includes the shooting setting parameters. An analysis result that includes both whether the shooting condition is satisfied and the shooting setting parameters is thereby obtained.
In these embodiments, step S65 may correspondingly include: determining whether the shooting condition is satisfied from the identification information produced by the trained expression feature model, and determining the shooting setting parameters from the output of the trained gesture feature model, thereby obtaining an analysis result that indicates whether the shooting condition is satisfied and specifies the shooting setting parameters.
Optionally, in another implementation, when the trained model is a trained neural network model, all pixels of the shooting preview image are taken as input and the output of the neural network model is an analysis result that includes whether the shooting condition is satisfied and the shooting setting parameters. Step S65 then includes: determining whether the shooting condition is currently satisfied and determining the shooting setting parameters from that analysis result produced by the trained neural network model.
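An illustrative two-head network of this kind is sketched below with PyTorch; the architecture, the layer sizes, and the choice of two normalized parameters (shutter time and aperture) are assumptions, since the embodiment only specifies that the whole preview image is the input and that the output includes whether the shooting condition is satisfied together with shooting setting parameters.

```python
import torch
import torch.nn as nn

class ShootingControlNet(nn.Module):
    """Toy network: preview image in, condition probability and normalized parameters out."""

    def __init__(self, num_params: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.condition_head = nn.Linear(32, 1)       # is the shooting condition satisfied?
        self.param_head = nn.Linear(32, num_params)  # normalized shooting setting parameters

    def forward(self, preview: torch.Tensor):
        feat = self.backbone(preview)                 # preview: (N, 3, H, W)
        condition = torch.sigmoid(self.condition_head(feat))
        params = torch.sigmoid(self.param_head(feat))  # keep parameters in [0, 1]
        return condition, params
```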
Optionally, in other embodiments, the shooting preview image may also be analyzed to determine the age category of the person being photographed, and the shutter time is determined from that age category. For example, when the person being photographed is determined to be an infant, since an infant's expressions are often fleeting, the shutter time is shortened and the shutter speed increased to ensure that the special moment can be captured in time.
Steps S61 and S63 in FIG. 7 correspond to steps S11 and S13 in FIG. 1, respectively, and the related descriptions may be cross-referenced.
Optionally, as shown in FIG. 7, the photographing control method further includes:
S67: When it is determined that the shooting condition is satisfied, perform the shooting operation according to the determined shutter time. Specifically, after the shutter time is adjusted to the determined value, the corresponding shooting operation is performed.
Please refer to FIG. 8, a flowchart of the photographing control method in the eighth embodiment of the present application. The photographing control method in the eighth embodiment differs from the method of the seventh embodiment shown in FIG. 7 mainly in step S65 of FIG. 7. The photographing control method in the eighth embodiment includes the following steps:
S71: Acquire a shooting preview image.
S73: Analyze the shooting preview image using a preset model to obtain an analysis result.
S75: Determine, according to the analysis result, whether the shooting condition is currently satisfied, and determine an aperture size.
Optionally, in some embodiments, step S75 may include:
Determining, from target object features obtained from the analysis result, whether the shooting condition is satisfied and determining the aperture size. For example, an expression feature obtained from the analysis result decides whether the shooting condition is satisfied, and a gesture feature obtained from the analysis result decides the aperture size. It may be preset that a smile is used to decide whether to take a photo and a gesture feature is used to determine the aperture size; for instance, a correspondence between the aperture size and the thumb-to-index-finger distance may be preset, and further, the aperture size may be logarithmically related to the thumb-to-index-finger distance. Thus, when the analysis result indicates that the user's expression is a smile, it is determined that the shooting condition is satisfied, and the aperture size is determined from the thumb-to-index-finger distance obtained from the analysis result, thereby controlling the depth of field of the photo.
Steps S71 and S73 in FIG. 8 correspond to steps S11 and S13 in FIG. 1, respectively, and also to steps S61 and S63 in FIG. 7; the related descriptions may be cross-referenced.
Step S75 in FIG. 8 corresponds to step S65 in FIG. 7, the only difference being that the aperture size is determined. For a more specific implementation, refer to the description of step S65 in FIG. 7: a more specific implementation of step S75 in FIG. 8 is obtained by replacing the shutter time in that description with the aperture size, and details are not repeated here.
Obviously, the shutter time and aperture size in FIG. 7 and FIG. 8 are merely specific examples of shooting setting parameters. In some embodiments, other shooting setting parameters, such as sensitivity, may also be determined from the analysis result.
In other embodiments, when determining from the analysis result whether the shooting condition is satisfied, shooting setting parameters are also determined at the same time, for example the shutter time and the aperture size. For instance, the thumb-to-index-finger distance of the user's gesture may simultaneously correspond to a shutter time and an aperture size, so that analyzing the user's gesture yields both at once. Thus, in the present application, at least one shooting setting parameter, including the shutter time, the aperture size and the like, may be determined from the analysis result.
In some embodiments, determining whether the shooting condition is satisfied and determining the shooting setting parameters from the analysis result are performed simultaneously.
In other embodiments, determining shooting setting parameters such as the shutter time or the aperture size is performed after it has been determined from the analysis result that the shooting condition is satisfied. That is, the shooting setting parameters are determined from the analysis result only after the shooting condition is satisfied, which avoids determining the shooting setting parameters every time and avoids wasting computing resources.
In other embodiments, the shooting preview image may also be analyzed by the image processing algorithm model to obtain the number of faces as the analysis result, and the aperture size is determined from the number of faces to ensure that every face is well exposed.
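A possible face-count heuristic is sketched below; the specific f-number table, and the assumption that more faces call for a larger f-number (deeper depth of field), are illustrative only.

```python
# Assumed f-stop table: larger f-number (smaller physical aperture) for more faces.
F_STOPS = [1.8, 2.8, 4.0, 5.6, 8.0]

def f_number_from_face_count(num_faces: int) -> float:
    """Pick a deeper depth of field (larger f-number) as the number of detected faces grows."""
    index = min(max(num_faces - 1, 0), len(F_STOPS) - 1)
    return F_STOPS[index]
```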
Optionally, as shown in FIG. 8, the photographing control method further includes:
S77: When it is determined that the shooting condition is satisfied, perform the shooting operation according to the determined aperture size. Specifically, after the aperture size is adjusted to the determined value, the corresponding shooting operation is performed.
Please refer to FIG. 9, a flowchart of the photographing control method in the ninth embodiment of the present application. The photographing control method in the ninth embodiment includes the following steps:
S81: Acquire a shooting preview image.
S83: Analyze the video stream formed by the shooting preview images using a preset model to obtain an analysis result.
S85: Determine, according to the analysis result, whether the shooting condition is currently satisfied. If it is, perform step S87; otherwise, return to step S83.
In some embodiments, the video stream may be analyzed with a preset model such as a trained neural network model or a trained image processing algorithm model, and the shooting condition is determined to be satisfied when the user is captured completing a certain action, for example shaking a finger. Capturing the user completing an action means that the action can be determined from changes across multiple video frames; a correspondence between preset actions and satisfaction of the shooting condition may be defined in advance, and when the action completed by the user is determined to be a preset action, the shooting condition is determined to be satisfied.
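One way to realize such an action trigger is sketched below: recent hand positions are buffered and an oscillating horizontal motion is treated as the preset "shake a finger" action. The window size, the jitter threshold, and the assumption that a hand-position detector supplies the horizontal coordinate are illustrative only.

```python
from collections import deque
import numpy as np

class ActionDetector:
    """Detect a finger/hand shake from a short buffer of per-frame hand positions."""

    def __init__(self, window: int = 10, min_direction_changes: int = 3):
        self.positions = deque(maxlen=window)
        self.min_direction_changes = min_direction_changes

    def update(self, hand_x: float) -> bool:
        """Feed the latest horizontal hand position; return True when a shake is detected."""
        self.positions.append(hand_x)
        if len(self.positions) < self.positions.maxlen:
            return False
        deltas = np.diff(np.array(self.positions))
        signs = np.sign(deltas[np.abs(deltas) > 1.0])   # ignore jitter below 1 px
        changes = int(np.sum(signs[1:] != signs[:-1]))  # count left/right direction reversals
        return changes >= self.min_direction_changes
```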
In some embodiments, each video frame in the video stream may also be analyzed with a preset model such as a trained neural network model or a trained image processing algorithm model to obtain an analysis result, and whether the shooting condition is satisfied is determined from that analysis result. For how each video frame is analyzed to obtain an analysis result and how the shooting condition is determined from it, refer to the related description of any of the foregoing embodiments.
S87: Determine shooting setting parameters according to the analysis result.
The shooting setting parameters may include at least one of the shutter time, the aperture size and the like. Optionally, step S87 is performed after step S85. Optionally, in other embodiments, step S87 may be performed simultaneously with step S85. For the specific implementation of determining shooting setting parameters such as the shutter time and the aperture size from the analysis result, refer to the related descriptions of step S65 in FIG. 7 and step S75 in FIG. 8.
S89: Perform the shooting operation according to the shooting setting parameters. Specifically, after the shooting parameters are adjusted to the determined shooting setting parameters, the corresponding shooting operation is performed.
Please refer to FIG. 10, a flowchart of the photographing control method in the tenth embodiment of the present application. The photographing control method in the tenth embodiment includes the following steps:
S91: Acquire a shooting preview image through the camera and capture a sound signal through the microphone.
S93: Analyze the sound signal using a preset model to obtain a first analysis result, and analyze the shooting preview image to obtain a second analysis result.
S95: Determine whether the shooting condition is satisfied according to the first analysis result and the second analysis result. If it is, perform step S97; otherwise, return to step S93.
In some embodiments, the first analysis result is speech content obtained through a speech analysis model, and S95 includes:
determining whether a preliminary shooting condition is satisfied according to whether the first analysis result matches preset speech content; for example, if the preset speech content is "take a photo" and the speech content obtained as the first analysis result matches it, the preliminary shooting condition is determined to be satisfied;
determining whether the shooting condition is satisfied from the second analysis result obtained by analyzing the shooting preview image with the trained model; and
finally determining that the shooting condition is currently satisfied when the first analysis result indicates that the preliminary shooting condition is satisfied and the second analysis result indicates that the shooting condition is satisfied.
Determining from the second analysis result, obtained by analyzing the shooting preview image with the trained model, whether the shooting condition is satisfied corresponds to step S15 in FIG. 1; for a specific implementation, refer at least to step S15 in FIG. 1 and the related steps of the other embodiments.
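The combined decision of step S95 can be sketched as follows; the preset phrase comes from the example above, and the idea that the speech recognizer and the image model are queried separately and then combined is the only logic shown. The function name is an illustrative assumption.

```python
PRESET_PHRASE = "拍照"  # "take a photo", the preset speech content used in the example above

def shooting_condition_met(recognized_text: str, image_condition_met: bool) -> bool:
    """Step S95: require both the voice trigger and the image-based condition."""
    preliminary_ok = recognized_text.strip() == PRESET_PHRASE  # first analysis result
    return preliminary_ok and image_condition_met              # combined decision
```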
S97: Determine shooting setting parameters according to the second analysis result.
The shooting setting parameters may include at least one of the shutter time, the aperture size and the like. In some embodiments, step S97 is performed after step S95. Optionally, in other embodiments, step S97 may be performed simultaneously with step S95. For the specific implementation of determining shooting setting parameters such as the shutter time and the aperture size from the second analysis result, refer to the related descriptions of step S65 in FIG. 7 and step S75 in FIG. 8.
S99: Perform the shooting operation according to the determined shooting setting parameters. Specifically, after the shooting parameters are adjusted to the determined shooting setting parameters, the corresponding shooting operation is performed.
Thus, in some embodiments, taking additional inputs into account when judging whether the shooting condition is met can further improve the accuracy of the shooting control.
Optionally, before the first step of the photographing control method in any of the embodiments of FIG. 1 to FIG. 10, the method may further include a step of training the model to obtain the trained model.
Please refer to FIG. 11, a flowchart of the model training process in the photographing control method according to an embodiment of the present application. As shown in FIG. 11, the model training process may include the following steps:
S101: When the user performs a manually controlled shot, the model takes the currently captured image as a positive sample that satisfies the shooting condition, and adjusts its own parameters according to this positive sample.
In some embodiments, the model stores the positive sample and establishes or updates the correspondence between the positive sample and satisfaction of the shooting condition to adjust its own parameters. Satisfaction of the shooting condition may be used as the label with which the positive sample is marked.
Optionally, in one implementation, the user manually controls shooting by pressing the shutter button or a photo icon.
Optionally, in another implementation, the user manually controls shooting by performing a specific operation on a physical button of the electronic device. For example, the electronic device includes a power key, and a manually controlled shot is triggered by double-pressing the power key.
S103: The model samples, according to a preset rule, frames that were not captured by manually controlled shooting as negative samples, and adjusts its own parameters according to the negative samples.
Optionally, in one implementation, sampling frames not captured by manually controlled shooting as negative samples according to a preset rule includes: sampling frames framed during a period of time after the positive sample as negative samples.
Optionally, in another implementation, it includes: sampling frames framed during a period of time before the positive sample as negative samples. When frames framed before the positive sample are to be sampled, the framing images are also automatically captured in advance and a certain number of candidate samples are stored; after a manually controlled shot, the candidate samples that were not obtained by manually controlled shooting are determined to be negative samples.
Optionally, in some implementations, frames not captured by manually controlled shooting may also be obtained as negative samples by random sampling.
Optionally, in other embodiments, sampling is decided by an additional sensor, for example a light-sensitive or sound-sensitive sensor that collects ambient light or sound to decide when to sample, and the sampled frame is used as a negative sample.
Optionally, in other implementations, sampling frames not captured by manually controlled shooting as negative samples according to a preset rule includes: collecting frames not captured by manually controlled shooting, further analyzing factors such as composition, and then deciding whether to sample them as negative samples.
Optionally, in another implementation, it includes: sampling, at a preset time interval, frames framed between two adjacent manually controlled shots as negative samples.
In some embodiments, the model may store the sampled negative samples, and may also establish a correspondence between the negative samples and non-satisfaction of the shooting condition to adjust its own parameters. A negative sample is an image that does not satisfy the shooting condition; non-satisfaction of the shooting condition may be used as the label with which the negative sample is marked. The preset time interval may be a value such as 1 second or 2 seconds.
Optionally, in one implementation, the two adjacent manually controlled shots may be two adjacent manually controlled shots within a framing session of the same camera activation.
Optionally, in another implementation, the two adjacent manually controlled shots may also belong to framing sessions of different camera activations. For example, after the user opens the camera and completes a first manually controlled shot, the camera is closed; the next time the camera is opened a second manually controlled shot is completed, and the framing images between the first and the second manually controlled shots are saved by the currently used model at the preset time interval as negative samples.
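The sample-collection rules of steps S101 and S103 can be sketched as follows; the 2-second sampling interval and the in-memory sample lists are assumptions for illustration, and the labels (1 for satisfied, 0 for not satisfied) mirror the positive/negative labelling described above.

```python
import time

class TrainingSampleCollector:
    """Collect positive samples on manual shots and negative samples between them."""

    def __init__(self, negative_interval_s: float = 2.0):
        self.negative_interval_s = negative_interval_s   # assumed preset time interval
        self._last_negative_t = 0.0
        self.positives, self.negatives = [], []

    def on_manual_shot(self, frame):
        """Step S101: a manually triggered shot becomes a positive sample."""
        self.positives.append((frame, 1))                # label 1: shooting condition satisfied

    def on_preview_frame(self, frame):
        """Step S103: periodically sample preview frames between manual shots as negatives."""
        now = time.monotonic()
        if now - self._last_negative_t >= self.negative_interval_s:
            self.negatives.append((frame, 0))            # label 0: condition not satisfied
            self._last_negative_t = now
```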
S105: When it is determined that the training completion condition is reached, end the training to obtain the trained model for subsequent automatic shooting control.
Optionally, in one implementation, step S101 is preceded by a step of controlling the electronic device to enter a model training mode in response to a user operation for entering model training. Determining that the training completion condition is reached includes: determining that the training completion condition is reached in response to a user operation for exiting the model training mode.
Optionally, the operation for entering model training includes a selection of a menu option, a specific operation on a physical button, or a specific touch gesture entered on the touch screen of the electronic device. Optionally, controlling the electronic device to enter the model training mode in response to the user operation for entering model training includes: controlling the electronic device to enter the model training mode in response to the user selecting the menu option, performing the specific operation on the physical button, or entering the specific touch gesture on the touch screen of the electronic device.
Optionally, in another implementation, determining that the training completion condition is reached includes: determining that the training completion condition is reached when the number of manually controlled shots by the user reaches a preset number N1. The preset number N1 may be the number of shots the system requires by default for model training to complete, or may be a user-defined value.
Optionally, in another implementation, determining that the training completion condition is reached includes: testing the model with the current positive sample, determining whether the test result reaches a preset threshold, and determining that the training completion condition is reached when the test result reaches the preset threshold.
The model described above may be a neural network model, an image processing algorithm model, or the like.
In the present application, the trained model is obtained by training the model in advance, so that when the user later turns on the camera to shoot, shooting can be controlled automatically according to the trained model and the satisfying image the user wants can be captured in time.
Please refer to FIG. 12, a flowchart of the model training process in the photographing control method according to another embodiment of the present application. As shown in FIG. 12, the model training process may include the following steps:
S111: Perform framing preview in response to a camera-on operation to acquire a shooting preview image.
In some embodiments, step S111 specifically includes: after the automatic shooting function is turned on, performing framing preview in response to the camera-on operation to acquire the shooting preview image. That is, the model training process shown in FIG. 12 may be performed after the automatic shooting function is turned on.
Optionally, in some embodiments, turning on the automatic shooting function may be completed in response to a user setting operation in a menu option of the camera.
Optionally, in other embodiments, turning on the automatic shooting function may also be completed in response to a specific touch gesture by the user on the touch screen of the electronic device, for example in response to a knuckle double-tap on the touch screen.
It can be understood that the photographing control method shown in any of the embodiments of FIG. 1 to FIG. 10 of the present application may be performed after the electronic device turns on the automatic shooting function.
S113: When it is determined according to the preset model that the shooting condition is satisfied, control performing the shooting operation.
Step S113 corresponds to step S17 in the first embodiment shown in FIG. 1, step S25 in the second embodiment shown in FIG. 2, and so on; for a specific implementation, refer to the descriptions of step S17 in FIG. 1 and the related steps in the other embodiments.
S115: Acquire the user's satisfaction feedback about this automatic shot.
Optionally, in one implementation, after the automatic shot is completed, prompt information may be generated to ask the user to rate this automatic shot, for example a prompt box with "satisfied" and "unsatisfied" options for the user to choose from, and the satisfaction feedback about this automatic shot is obtained from the user's choice.
Optionally, in another implementation, the user's satisfaction feedback about this automatic shot is obtained by detecting the user's operations on the photo or video obtained by this automatic shot. For example, if it is detected that the user deleted the photo or video obtained by this automatic shot, it is determined that the user is not satisfied with it, and negative satisfaction feedback is obtained. As another example, if it is detected that the user performed a setting operation such as marking the photo or video as a favorite or a like, or shared it, it is determined that the user is satisfied with this automatic shot, and positive satisfaction feedback is obtained.
S117: Output the user's satisfaction feedback about this automatic shot to the currently used model, so that the currently used model uses the satisfaction feedback for optimization training.
Thus, in the present application, by collecting the user's satisfaction feedback on automatic shooting, the training of the model can be optimized and the model continuously improved, so that automatic shooting in subsequent use becomes more accurate.
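A sketch of this feedback path follows; the mapping from user actions to satisfaction labels and the model_update_fn hook are illustrative assumptions consistent with the examples above (delete implies dissatisfaction, favorite or share implies satisfaction).

```python
FEEDBACK_BY_ACTION = {
    "delete": 0,     # deleting the result implies dissatisfaction
    "favorite": 1,   # marking as favorite implies satisfaction
    "share": 1,      # sharing implies satisfaction
}

def feedback_from_action(action: str):
    """Return a satisfaction label for a detected user action, or None if it is neutral."""
    return FEEDBACK_BY_ACTION.get(action)

def report_feedback(model_update_fn, frame, action: str):
    """Step S117: pass the satisfaction label back to the model for optimization training."""
    label = feedback_from_action(action)
    if label is not None:
        model_update_fn(frame, label)
```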
The currently used model may be a model whose training has already been confirmed complete, or a model that has not yet finished training. A model whose training is confirmed complete can be further optimized in this way; a model that has not yet finished training can be trained more effectively.
Accordingly, steps S111 to S117 in FIG. 12 may be performed after step S105 in FIG. 11, before step S105 in FIG. 11, or even before step S101 in FIG. 11. When performed before step S101, the currently used model may be an untrained initial model.
The preset model in any of the foregoing embodiments of FIG. 1 to FIG. 10 may likewise be a model whose training is not complete; after the user starts the automatic shooting function, automatic shooting is performed according to the current model, and the current model is optimized and trained according to the satisfaction feedback provided by the user.
In some embodiments, when the preset model is a model whose training is not complete, the model automatically acquires the image each time the user takes a shot and trains on it as a positive sample, or further acquires the shooting setting parameters used for that shot and trains on them together as a positive sample, gradually optimizing the preset model until the number of training iterations reaches a preset number or the proportion of positive satisfaction feedback in subsequent user feedback exceeds a preset proportion, at which point training is determined to be complete. In this way, because the user trains a personal model rather than adopting someone else's, personalization can be achieved more effectively.
Please refer to FIG. 13, a flowchart of the model training process in the photographing control method according to yet another embodiment of the present application. As shown in FIG. 13, the model training process may include the following steps:
S121: When the user performs a manually controlled shot, the model takes the currently captured image as a positive sample corresponding to satisfaction of the shooting condition and to the shooting setting parameters, and adjusts its own parameters according to this positive sample.
In some embodiments, the model stores the positive sample and establishes or updates the correspondence between the positive sample, satisfaction of the shooting condition, and the shooting setting parameters, to adjust its own parameters. Satisfaction of the shooting condition and the shooting setting parameters may together be used as the labels with which the positive sample is marked.
Optionally, in one implementation, the user manually controls shooting by pressing the shutter button or a photo icon.
Optionally, in another implementation, the user manually controls shooting by performing a specific operation on a physical button of the electronic device. For example, the electronic device includes a power key, and a manually controlled shot is triggered by double-pressing the power key.
The shooting setting parameters may include parameters such as the aperture size, the shutter time, and the sensitivity.
S123: The model samples, according to a preset rule, frames that were not captured by manually controlled shooting as negative samples, and adjusts its own parameters according to the negative samples.
S125: When it is determined that the training completion condition is reached, end the training to obtain the trained model.
Steps S123 and S125 correspond to steps S103 and S105 in FIG. 11, respectively; for a specific introduction, refer to the descriptions of steps S103 and S105 in FIG. 11.
Thus, in this further embodiment, the training of the model establishes not only the correspondence between the images used as positive samples and the shooting condition, but also the correspondence between those images and the shooting setting parameters, so that when automatic shooting is later started, whether the shooting condition is satisfied can be determined automatically and the shooting setting parameters can be set automatically.
Please refer to FIG. 14, a block diagram schematically showing part of the structure of the electronic device 100 according to an embodiment of the present application. As shown in FIG. 14, the electronic device 100 includes a processor 10, a memory 20, and a camera 30. The camera 30 includes at least a rear camera 31 and a front camera 32. The rear camera 31 captures images behind the electronic device 100 and can be used for shooting operations such as photographing other people; the front camera 32 captures images in front of the electronic device 100 and can be used for shooting operations such as taking selfies.
In some embodiments, the models in FIG. 1 to FIG. 13 may be programs such as specific algorithm functions running on the processor 10, for example neural network algorithm functions or image processing algorithm functions. In other embodiments, the electronic device 100 may further include a model processor independent of the processor 10; the models in FIG. 1 to FIG. 13 then run on the model processor, the processor 10 generates corresponding instructions as needed to trigger the model processor to run the corresponding model, and the output of the model is passed by the model processor to the processor 10 for use, for example to perform control such as the shooting operation.
Program instructions are stored in the memory 20.
The processor 10 is configured to call the program instructions stored in the memory 20 to execute the photographing control method in any of the embodiments shown in FIG. 1 to FIG. 10, and to execute the model training process of the photographing control method in any of the embodiments shown in FIG. 11 to FIG. 13.
For example, the processor 10 is configured to call the program instructions stored in the memory 20 to execute the following photographing control method:
acquiring a shooting preview image through the camera 30;
analyzing the shooting preview image using a preset model to obtain an analysis result;
determining, according to the analysis result, whether the shooting condition is currently satisfied; and
when it is determined that the shooting condition is satisfied, controlling performing the shooting operation.
Thus, in the present application, by analyzing the shooting preview image to determine whether the shooting condition is currently satisfied, whether the image is one the user wishes to capture can be determined from the content of the shooting preview image, so that the current special moment can be captured in time.
In some embodiments, in addition to calling the program instructions to determine from the analysis result whether the shooting condition is currently satisfied, the processor 10 also calls the program instructions to determine, from the analysis result, shooting setting parameters including at least one of the shutter time and the aperture size.
Controlling performing the shooting operation when the processor 10 determines that the shooting condition is satisfied further includes: when determining that the shooting condition is satisfied, the processor 10 performs the shooting operation according to the determined shooting setting parameters.
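Putting the pieces together, the control flow executed by the processor 10 can be sketched as follows; get_preview_frame, analyze, and capture are assumed hooks standing in for the camera 30, the preset model, and the shooting pipeline, and the dictionary keys of the analysis result are illustrative.

```python
def auto_shoot_loop(get_preview_frame, analyze, capture, max_frames: int = 1000):
    """Acquire previews, analyze them with the preset model, and shoot when the condition is met."""
    for _ in range(max_frames):
        preview = get_preview_frame()            # acquire the shooting preview image
        if preview is None:
            break                                # camera closed
        result = analyze(preview)                # preset model -> analysis result (dict assumed)
        if result.get("condition_met"):
            params = {k: result[k] for k in ("shutter_time", "aperture") if k in result}
            capture(preview, **params)           # perform the shooting operation
            break
```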
The processor 10 may be a microcontroller, a microprocessor, a single-chip microcomputer, a digital signal processor, or the like.
The memory 20 may be any storage device capable of storing information, such as a memory card, a solid-state memory, a micro hard disk, or an optical disc.
As shown in FIG. 14, the electronic device 100 further includes an input unit 40 and an output unit 50. The input unit 40 may include a touch panel, a mouse, a microphone, physical buttons including a power key and volume keys, and the like. The output unit 50 may include a display screen, a speaker, and the like. In some embodiments, the touch panel of the input unit 40 and the display screen of the output unit 50 are integrated to form a touch screen that provides both touch input and display output.
The electronic device 100 may be a portable electronic device with a camera 30, such as a mobile phone, a tablet computer, or a notebook computer, or a photographic device such as a camera or a video camera.
In some embodiments, the present application further provides a computer readable storage medium in which program instructions are stored; after the program instructions are called and executed, all or some of the steps of any of the photographing control methods shown in FIG. 1 to FIG. 13 are performed. In some embodiments, the computer storage medium is the memory 20, and may be any storage device capable of storing information, such as a memory card, a solid-state memory, a micro hard disk, or an optical disc.
The photographing control method and the electronic device 100 of the present application can automatically determine, from the shooting preview image, whether the shooting condition is satisfied and shoot when it is, so that the special moment containing the content of the current shooting preview image can be captured in time.
Although the present invention is described herein with reference to the embodiments, in the course of implementing the claimed invention, those skilled in the art may, by studying the drawings, the disclosure, and the appended claims, understand and achieve other variations of the disclosed embodiments. In the claims, the word "comprising" does not exclude other components or steps, and "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill several functions recited in the claims. Certain measures are recited in mutually different dependent claims, but this does not mean that these measures cannot be combined to good effect.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, an apparatus (device), or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code. The computer program is stored in or distributed on a suitable medium, provided together with other hardware or as part of the hardware, or distributed in other forms, such as over the Internet or other wired or wireless telecommunication systems.
The present invention is described with reference to flowcharts and/or block diagrams of the method, apparatus (device), and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or the other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
What is disclosed above is merely one embodiment of the present application and certainly cannot be used to limit the scope of rights of the present application. Those of ordinary skill in the art can understand all or part of the processes for implementing the above embodiments, and equivalent changes made according to the claims of the present application still fall within the scope covered by the application.

Claims (12)

  1. A photographing control method, characterized in that the photographing control method comprises:
    acquiring a shooting preview image;
    analyzing the shooting preview image by using a preset model to obtain an analysis result;
    determining, according to the analysis result, whether a shooting condition is currently satisfied; and
    when it is determined that the shooting condition is satisfied, controlling performing a shooting operation.
  2. The photographing control method according to claim 1, characterized in that the preset model is a trained neural network model, and analyzing the shooting preview image by using the preset model to obtain the analysis result comprises:
    analyzing the shooting preview image through the trained neural network model to obtain a satisfaction score as the analysis result;
    and determining, according to the analysis result, whether the shooting condition is currently satisfied comprises:
    when it is determined that the satisfaction score exceeds a satisfaction preset threshold, determining that the shooting condition is currently satisfied.
  3. The photographing control method according to claim 1, characterized in that the preset model is a trained image processing algorithm model, and analyzing the shooting preview image by using the preset model to obtain the analysis result comprises:
    comparing the shooting preview image with a reference picture by using the trained image processing algorithm model to obtain, as the analysis result, a similarity between the shooting preview image and the reference picture;
    and determining, according to the analysis result, whether the shooting condition is currently satisfied comprises:
    when it is determined that the similarity exceeds a similarity preset threshold, determining that the shooting condition is currently satisfied.
  4. The photographing control method according to claim 1, characterized in that the preset model is a trained image processing algorithm model, the trained image processing algorithm model comprises a target object feature model, and analyzing the shooting preview image by using the preset model to obtain the analysis result comprises:
    analyzing a target object in the shooting preview image by using an image recognition technology to generate a corresponding target object feature vector;
    using the target object feature vector as input information of the trained image processing algorithm model to obtain an analysis result comprising identification information indicating whether the shooting condition is currently satisfied;
    and determining, according to the analysis result, whether the shooting condition is currently satisfied comprises:
    when the analysis result comprises identification information indicating that the shooting condition is currently satisfied, determining that the shooting condition is currently satisfied.
  5. 如权利要求1所述的拍摄控制方法,其特征在于,所述拍摄操作为连拍操作,所述控制执行拍摄操作,包括:控制执行连拍操作,而得到包括当前拍摄预览画面对应照片在内的多张照片。The photographing control method according to claim 1, wherein the photographing operation is a continuous shooting operation, and the controlling performs a photographing operation, comprising: controlling to perform a continuous shooting operation, and obtaining a photograph corresponding to the current photographing preview screen. Multiple photos.
  6. The photographing control method according to claim 5, wherein the photographing control method further comprises:
    analyzing the plurality of photographs obtained by the continuous shooting operation to determine an optimal photograph; and
    retaining the optimal photograph and deleting the other photographs obtained by the continuous shooting operation.
  7. The photographing control method according to claim 1, wherein the shooting operation is a video shooting operation, and controlling to perform the shooting operation comprises: controlling to perform the video shooting operation to obtain a video file in which the current shooting preview image serves as the starting video frame.
  8. The photographing control method according to claim 7, wherein the photographing control method further comprises:
    comparing a plurality of video frames in the captured video file to determine an optimal frame; and
    extracting the optimal frame and saving it as a photograph.
  9. The photographing control method according to claim 1, wherein analyzing the shooting preview image to obtain an analysis result further comprises:
    analyzing a video stream formed by the shooting preview images by using the preset model to obtain the analysis result.
  10. The photographing control method according to any one of claims 1-9, wherein the method further comprises:
    acquiring satisfaction feedback information fed back by the user on a photograph or video obtained by performing the shooting operation; and
    outputting the satisfaction feedback information to the preset model, so that the preset model performs optimization training by using the satisfaction feedback information.
  11. An electronic device, wherein the electronic device comprises:
    a camera;
    a memory for storing program instructions; and
    a processor for calling the program instructions to perform the photographing control method according to any one of claims 1-10.
  12. A computer readable storage medium, wherein the computer readable storage medium stores program instructions, and the program instructions are configured to be called by a computer to perform the photographing control method according to any one of claims 1-10.
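
To make the threshold check of claim 2 concrete, the following is a minimal sketch, assuming a Keras-style network stored in a hypothetical file satisfaction_model.h5 that maps a 224x224 RGB preview frame to a satisfaction score in [0, 1]; the model file, preprocessing, and threshold value are illustrative assumptions rather than details taken from the application.

```python
# Minimal sketch of the satisfaction-threshold check in claim 2.
# Assumptions (not taken from the application): a Keras-style model
# file "satisfaction_model.h5" mapping a 224x224 RGB preview frame to
# a scalar satisfaction score in [0, 1], and a threshold of 0.8.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

SATISFACTION_THRESHOLD = 0.8  # hypothetical preset satisfaction threshold
model = load_model("satisfaction_model.h5")  # hypothetical trained network

def shooting_condition_met(preview_bgr: np.ndarray) -> bool:
    """Analyze one preview frame and return True when it should be shot."""
    rgb = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (224, 224)).astype("float32") / 255.0
    score = float(model.predict(resized[np.newaxis, ...], verbose=0)[0][0])
    return score > SATISFACTION_THRESHOLD
```

In a camera pipeline this check would run on every preview frame, with the shutter triggered on the first frame whose score crosses the threshold.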
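
Claim 3's comparison against a reference picture could, for illustration, be approximated with a structural-similarity score; SSIM here merely stands in for the trained image processing algorithm model named in the claim, and the working resolution and threshold are assumed values.

```python
# Minimal sketch of the reference-picture comparison in claim 3.
# SSIM is only a stand-in for the trained image processing algorithm
# model named in the claim; the resolution and threshold are assumed.
import cv2
from skimage.metrics import structural_similarity

SIMILARITY_THRESHOLD = 0.75  # hypothetical preset similarity threshold

def similarity_condition_met(preview_bgr, reference_bgr) -> bool:
    """Compare the preview frame with a user-selected reference picture."""
    size = (320, 240)
    a = cv2.cvtColor(cv2.resize(preview_bgr, size), cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(cv2.resize(reference_bgr, size), cv2.COLOR_BGR2GRAY)
    return structural_similarity(a, b) > SIMILARITY_THRESHOLD
```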
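
A rough sketch of the feature-vector path in claim 4 follows: hand-crafted detections form the target object feature vector, and a separately trained classifier, loaded from a hypothetical shoot_classifier.joblib, emits the identification information. Both the choice of features and the classifier file are assumptions for illustration only.

```python
# Rough sketch of the feature-vector path in claim 4. The cascade-based
# features and the classifier file "shoot_classifier.joblib" are
# illustrative assumptions; any detector and any trained model that
# outputs a shoot / do-not-shoot flag could take their place.
import cv2
import joblib
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")
classifier = joblib.load("shoot_classifier.joblib")  # hypothetical model

def shooting_flag(preview_bgr: np.ndarray) -> bool:
    """Build a target object feature vector and classify it."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    smiles = smile_cascade.detectMultiScale(gray, 1.7, 20)
    # Toy feature vector: face count, smile count, mean brightness.
    features = np.array([[len(faces), len(smiles), gray.mean()]])
    # The prediction plays the role of the "identification information".
    return bool(classifier.predict(features)[0])
```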
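
One way to realize the best-photo selection of claims 5 and 6 is sketched below; sharpness (variance of the Laplacian) is used as a stand-in criterion because the claims do not specify how the optimal photograph is scored.

```python
# Sketch of the best-photo selection in claims 5 and 6. Sharpness
# (variance of the Laplacian) is an assumed stand-in for the
# unspecified "optimal photo" criterion.
import os
import cv2

def keep_best_of_burst(photo_paths):
    """Keep the sharpest burst photo, delete the rest, return its path."""
    def sharpness(path):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    best = max(photo_paths, key=sharpness)
    for path in photo_paths:
        if path != best:
            os.remove(path)  # discard the non-optimal shots
    return best
```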
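
Similarly, the frame extraction of claims 7 and 8 might be sketched as follows, again using sharpness as an assumed stand-in for the unspecified frame-comparison criterion.

```python
# Sketch of the frame extraction in claims 7 and 8, scoring frames by
# sharpness as an assumed criterion and saving the best one as a photo.
import cv2

def save_best_frame(video_path, photo_path="best_frame.jpg"):
    """Scan the recorded video and save its sharpest frame as a photo."""
    cap = cv2.VideoCapture(video_path)
    best_frame, best_score = None, -1.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()
        if score > best_score:
            best_frame, best_score = frame, score
    cap.release()
    if best_frame is not None:
        cv2.imwrite(photo_path, best_frame)
```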
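
The feedback loop of claim 10 could be prototyped as a small fine-tuning step on the same hypothetical satisfaction model used in the claim 2 sketch; the optimizer, loss, epoch count, and data layout are assumptions, not details from the application.

```python
# Sketch of the feedback-driven optimization training in claim 10,
# reusing the hypothetical satisfaction model from the claim 2 sketch.
# Optimizer, loss, epochs, and data layout are assumptions.
import numpy as np
from tensorflow.keras.models import load_model

def finetune_with_feedback(frames, ratings,
                           model_path="satisfaction_model.h5"):
    """frames: (N, 224, 224, 3) float32 in [0, 1]; ratings: (N,) in [0, 1]."""
    model = load_model(model_path)
    model.compile(optimizer="adam", loss="mse")
    model.fit(np.asarray(frames), np.asarray(ratings),
              epochs=3, batch_size=8, verbose=0)
    model.save(model_path)  # updated preset model used for later analysis
```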
PCT/CN2018/085899 2018-05-07 2018-05-07 Photographing control method and electronic device WO2019213819A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880070402.5A CN111295875A (en) 2018-05-07 2018-05-07 Shooting control method and electronic device
PCT/CN2018/085899 WO2019213819A1 (en) 2018-05-07 2018-05-07 Photographing control method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/085899 WO2019213819A1 (en) 2018-05-07 2018-05-07 Photographing control method and electronic device

Publications (1)

Publication Number Publication Date
WO2019213819A1 true WO2019213819A1 (en) 2019-11-14

Family

ID=68467677

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/085899 WO2019213819A1 (en) 2018-05-07 2018-05-07 Photographing control method and electronic device

Country Status (2)

Country Link
CN (1) CN111295875A (en)
WO (1) WO2019213819A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014800B (en) * 2021-01-29 2022-09-13 中通服咨询设计研究院有限公司 Intelligent photographing method for surveying operation in communication industry
CN113676660B (en) * 2021-08-11 2023-04-07 维沃移动通信有限公司 Shooting method and device and electronic equipment
CN113949815A (en) * 2021-11-17 2022-01-18 维沃移动通信有限公司 Shooting preview method and device and electronic equipment
CN116501280B (en) * 2023-06-28 2023-10-03 深圳市达浩科技有限公司 Display screen bright and dark line adjusting method and system and LED display screen

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201974617U (en) * 2011-06-27 2011-09-14 北京海鑫智圣技术有限公司 LED light source device in police standardized figure acquisition system
US10003722B2 (en) * 2015-03-17 2018-06-19 Disney Enterprises, Inc. Method and system for mimicking human camera operation
CN106709424B (en) * 2016-11-19 2022-11-11 广东中科人人智能科技有限公司 Optimized monitoring video storage system
CN107105341B (en) * 2017-03-30 2020-02-21 联想(北京)有限公司 Video file processing method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120262593A1 (en) * 2011-04-18 2012-10-18 Samsung Electronics Co., Ltd. Apparatus and method for photographing subject in photographing device
CN104125396A (en) * 2014-06-24 2014-10-29 小米科技有限责任公司 Image shooting method and device
CN104883494A (en) * 2015-04-30 2015-09-02 努比亚技术有限公司 Image snapshot method and device
CN107360371A (en) * 2017-08-04 2017-11-17 上海斐讯数据通信技术有限公司 A kind of automatic photographing method, server and Automatic camera
CN107729872A (en) * 2017-11-02 2018-02-23 北方工业大学 Facial expression recognition method and device based on deep learning

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111050069A (en) * 2019-12-12 2020-04-21 维沃移动通信有限公司 Shooting method and electronic equipment

Also Published As

Publication number Publication date
CN111295875A (en) 2020-06-16

Similar Documents

Publication Publication Date Title
WO2019213819A1 (en) Photographing control method and electronic device
WO2019213818A1 (en) Photographing control method, and electronic device
WO2020114087A1 (en) Method and device for image conversion, electronic equipment, and storage medium
US10170157B2 (en) Method and apparatus for finding and using video portions that are relevant to adjacent still images
WO2022037111A1 (en) Image processing method and apparatus, interactive display apparatus, and electronic device
WO2022028184A1 (en) Photography control method and apparatus, electronic device, and storage medium
JP5169139B2 (en) Camera and image recording program
WO2017084182A1 (en) Method and apparatus for image processing
WO2018120662A1 (en) Photographing method, photographing apparatus and terminal
KR20090098505A (en) Media signal generating method and apparatus using state information
EP3975046B1 (en) Method and apparatus for detecting occluded image and medium
CN104580886A (en) Photographing control method and device
KR20170074822A (en) Face photo album based music playing method, apparatus and terminal device and storage medium
WO2018098968A1 (en) Photographing method, apparatus, and terminal device
CN108600610A (en) Shoot householder method and device
CN111771372A (en) Method and device for determining camera shooting parameters
CN107277368A (en) A kind of image pickup method and filming apparatus for smart machine
WO2019213820A1 (en) Photographing control method and electronic device
CN110913120B (en) Image shooting method and device, electronic equipment and storage medium
CN106412417A (en) Method and device for shooting images
US11961216B2 (en) Photography session assistant
CN113079311B (en) Image acquisition method and device, electronic equipment and storage medium
CN105530439B (en) Method, apparatus and terminal for capture pictures
WO2018232669A1 (en) Method for controlling camera to photograph and mobile terminal
WO2021237744A1 (en) Photographing method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18917606

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18917606

Country of ref document: EP

Kind code of ref document: A1