WO2019213819A1 - Photographing control method and electronic device

Photographing control method and electronic device

Info

Publication number
WO2019213819A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2018/085899
Other languages
English (en)
Chinese (zh)
Inventor
王星泽
Original Assignee
合刃科技(武汉)有限公司
Application filed by 合刃科技(武汉)有限公司 filed Critical 合刃科技(武汉)有限公司
Priority to CN201880070402.5A (published as CN111295875A)
Priority to PCT/CN2018/085899
Publication of WO2019213819A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules

Definitions

  • the present application relates to the field of electronic devices, and in particular, to a photographing control method for an electronic device and to such an electronic device.
  • the subject may look fine while framing, but at the moment the photo is taken the eyes may not be open or the smile may be stiff, so the final photographs are often unsatisfactory.
  • the cute expression of a baby is often fleeting, and it is difficult for the user to take a satisfactory photo in time by operating the shutter button or a shooting icon.
  • the application provides a shooting control method and an electronic device, which can perform a shooting operation in time to capture a wonderful moment.
  • a shooting control method includes: acquiring a shooting preview screen; analyzing the shooting preview image by using a preset model to obtain an analysis result; determining, according to the analysis result, whether the shooting condition is currently met; and controlling to perform the shooting operation when it is determined that the shooting condition is satisfied.
  • an electronic device including a camera, a memory, and a processor.
  • the memory is for storing program instructions.
  • the processor is configured to execute the shooting control method by calling the program instructions. The shooting control method includes: acquiring a shooting preview image by using the camera; analyzing the shooting preview image by using a preset model to obtain an analysis result; determining, according to the analysis result, whether the shooting condition is currently satisfied; and controlling to perform the shooting operation when it is determined that the shooting condition is satisfied.
  • a computer readable storage medium stores program instructions for executing a shooting control method after being called by a computer, the shooting control method comprising: acquiring a shooting preview image; analyzing the shooting preview image by using a preset model to obtain an analysis result; determining whether the shooting condition is currently satisfied according to the analysis result; and controlling to perform a shooting operation when it is determined that the shooting condition is satisfied.
  • the photographing control method and the electronic device of the present application can automatically determine, from the photographing preview screen, whether the photographing condition is satisfied, perform photographing when it is, and thus capture in time a wonderful moment whose content corresponds to the current photographing preview screen.
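  • By way of a minimal illustrative sketch (the camera API names get_preview_frame and take_photo, the model callable, and both constants are assumptions, not part of this application), the control flow summarized above could look like the following, with the preview polled periodically and the shot triggered once the analysis result crosses a threshold:
```python
import time

SATISFACTION_THRESHOLD = 0.8  # e.g. an 80% satisfaction threshold
POLL_INTERVAL = 0.2           # analyze the current preview every 0.2 s

def shooting_control_loop(get_preview_frame, model, take_photo):
    """Acquire a preview frame, analyze it with the preset model,
    and control the shooting operation once the condition is met."""
    while True:
        frame = get_preview_frame()           # acquire shooting preview screen
        score = model(frame)                  # analysis result (e.g. satisfaction)
        if score >= SATISFACTION_THRESHOLD:   # shooting condition satisfied?
            take_photo()                      # control to perform shooting
            break
        time.sleep(POLL_INTERVAL)
```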
  • FIG. 1 is a flow chart of a photographing control method in a first embodiment of the present application.
  • FIG. 2 is a flow chart of a photographing control method in a second embodiment of the present application.
  • FIG. 3 is a flowchart of a photographing control method in a third embodiment of the present application.
  • FIG. 4 is a flow chart of a photographing control method in a fourth embodiment of the present application.
  • FIG. 5 is a flowchart of a photographing control method in a fifth embodiment of the present application.
  • FIG. 6 is a flowchart of a photographing control method in a sixth embodiment of the present application.
  • FIG. 7 is a flowchart of a photographing control method in a seventh embodiment of the present application.
  • FIG. 8 is a flowchart of a photographing control method in an eighth embodiment of the present application.
  • FIG. 9 is a flowchart of a photographing control method in a ninth embodiment of the present application.
  • FIG. 10 is a flowchart of a photographing control method in a tenth embodiment of the present application.
  • FIG. 11 is a flowchart of a model training process in a shooting control method according to an embodiment of the present application.
  • FIG. 12 is a flowchart of a model training process in a shooting control method according to another embodiment of the present application.
  • FIG. 13 is a flowchart of a model training process in a photographing control method in still another embodiment of the present application.
  • FIG. 14 is a block diagram showing a schematic partial structure of an electronic device according to an embodiment of the present application.
  • the shooting control method of the present application can be applied to an electronic device.
  • the electronic device can include a camera.
  • the electronic device can acquire a shooting preview screen through the camera and display the shooting preview image, and the electronic device can perform photographing, continuous shooting, video shooting, etc. through the camera.
  • the camera includes a front camera and a rear camera; photographing, continuous shooting, video shooting, etc. can be performed through the rear camera, and self-portraits can be taken through the front camera.
  • FIG. 1 is a flowchart of a shooting control method in a first embodiment of the present application.
  • the shooting control method is applied to an electronic device.
  • the method includes the following steps:
  • the operation of acquiring the shooting preview screen is performed by the camera in response to the operation of turning on the camera, that is, the shooting preview screen is acquired by the camera.
  • the operation of turning on the camera is a click operation on the photographing application icon, that is, when the camera is turned on in response to a click operation on the photographing application icon, the shooting preview screen is acquired by the camera.
  • the operation of turning on the camera is a specific operation of a physical button of the electronic device.
  • for example, the electronic device includes a volume up key and a volume down key, and the operation of turning on the camera is pressing the volume up key and the volume down key simultaneously.
  • in some embodiments, the operation of turning on the photographing application is an operation of pressing the volume up key and the volume down key within a preset time (for example, 2 seconds).
  • the operation of turning on the camera may also be an operation of a preset touch gesture input in any display interface of the electronic device.
  • for example, the user may input a circular touch track as the preset touch gesture to turn on the camera.
  • the operation of turning on the camera may also be an operation of a preset touch gesture input on the touch screen when the electronic device is in a black screen state.
  • the operation of turning on the camera is an operation that presses the shutter button/power button of the camera to trigger the camera to be in an activated state.
  • acquiring a shooting preview screen is to obtain a shooting preview screen in real time through a camera.
  • the preset model may be a trained model or an untrained model.
  • the preset model is a trained neural network model, and analyzing the shooting preview image by using the preset model to obtain an analysis result includes: analyzing the shooting preview image by using the neural network model to obtain an analysis result of satisfaction.
  • the satisfaction degree is the satisfaction result output by the trained neural network model, which takes all the pixels of the preview image as input.
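  • As a minimal sketch of such a network (assuming PyTorch and a fixed preview resolution; the architecture and sizes are illustrative assumptions, not specified by the application), a model taking all preview pixels as input and outputting a satisfaction score might look like:
```python
import torch
import torch.nn as nn

class SatisfactionNet(nn.Module):
    """Takes all preview pixels as input; outputs a satisfaction score in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to a 32-dim descriptor
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):  # x: (batch, 3, H, W) preview pixels
        return self.head(self.features(x))

model = SatisfactionNet().eval()
preview = torch.rand(1, 3, 224, 224)        # stand-in for a real preview frame
with torch.no_grad():
    satisfaction = model(preview).item()    # compared against, e.g., 0.8
```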
  • in other embodiments, the preset model is a trained image processing algorithm model,
  • and analyzing the shooting preview image by using the preset model to obtain an analysis result includes:
  • comparing the shooting preview screen with a reference picture by using the trained image processing algorithm model, and obtaining an analysis result of the similarity between the preview picture and the reference picture.
  • the reference picture may be a standard picture preset by the user with a specific expression such as smile, laughter, sadness, anger, yawning, and the like.
  • the trained image processing algorithm model includes a trained target object feature model. Using the trained image processing algorithm model to compare the shooting preview image with the reference image and obtain their similarity includes: analyzing the target object in the preview image by using image recognition technology to generate a corresponding target object feature vector; and calculating the similarity between the shooting preview image and the reference image according to the trained target object feature model and the target object feature vector corresponding to the shooting preview image.
  • calculating the similarity between the shooting preview image and the reference image according to the trained target object feature model and the corresponding target object feature vector includes: using the target object feature vector corresponding to the preview image as the input information of the trained target object feature model, which then calculates the similarity between the shooting preview picture and the reference picture.
  • in other embodiments, comparing the shooting preview image with the reference image by using the trained image processing algorithm model and obtaining their similarity includes: acquiring pixel information of the shooting preview image; comparing the pixel information of the shooting preview screen with the pixel information of the reference picture; and obtaining the similarity between the shooting preview picture and the reference picture. That is, the similarity between the two images is obtained by comparing pixel information such as their pixel grayscale values.
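  • A rough sketch of this pixel-level comparison (one of many possible measures; the mean-absolute-difference metric and the numpy representation are assumptions for illustration):
```python
import numpy as np

def grayscale_similarity(preview: np.ndarray, reference: np.ndarray) -> float:
    """Compare the pixel grayscale values of two equally sized RGB images
    and return a similarity in [0, 1] (1.0 = identical)."""
    gray_p = preview.mean(axis=2)     # naive RGB -> grayscale
    gray_r = reference.mean(axis=2)
    diff = np.abs(gray_p - gray_r).mean() / 255.0  # normalized mean difference
    return 1.0 - diff

preview = np.random.randint(0, 256, (480, 640, 3)).astype(float)
reference = np.random.randint(0, 256, (480, 640, 3)).astype(float)
print(f"similarity: {grayscale_similarity(preview, reference):.2%}")
```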
  • in other embodiments, analyzing the captured preview image by using the preset model to obtain the analysis result includes: analyzing the facial expression in the shooting preview image by using face recognition technology to generate a corresponding expression feature vector; and using the expression feature vector as input information of the image processing algorithm model to obtain an analysis result including identification information indicating whether the shooting condition is currently satisfied.
  • the identification information may include an identifier such as 1 or 0 that indicates whether the shooting condition is currently satisfied. More specifically, when the identification information is the identifier 1, it indicates that the shooting condition is satisfied, and when the identification information is the identifier 0, it indicates that the shooting condition is not satisfied.
  • the trained target object feature model is obtained by training on a plurality of photos with differently expressive faces provided in an initial training set. Image recognition technology is used to perform expression analysis on the target object in each of the photos provided in the initial training set to generate a corresponding target object feature vector Xi, where, for example:
  • X1 represents the size of the eye opening;
  • X2 represents the degree to which the mouth corners rise;
  • X3 represents the size of the mouth opening.
  • a training sample set is established based on the generated target object feature vectors and the similarity labels between the corresponding photos and the reference picture; the sample set is then used for training to obtain the trained target object feature model.
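  • As a toy sketch of this training step (assuming the feature vectors X1-X3 have already been extracted and each photo carries a manually assigned similarity label; the linear least-squares model is a stand-in, not the application's algorithm):
```python
import numpy as np

# Each row: (X1 eye opening, X2 mouth-corner rise, X3 mouth opening);
# each label: annotated similarity between that photo and the reference picture.
X = np.array([[0.9, 0.8, 0.3],
              [0.2, 0.1, 0.0],
              [0.8, 0.9, 0.5],
              [0.5, 0.2, 0.1]])
y = np.array([0.95, 0.10, 0.90, 0.30])

# Fit a least-squares linear model as the "trained target object feature model".
Xb = np.hstack([X, np.ones((len(X), 1))])      # append a bias term
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def predict_similarity(feature_vec) -> float:
    """Similarity between a preview picture (via its feature vector) and the reference."""
    return float(np.append(feature_vec, 1.0) @ w)

print(predict_similarity([0.85, 0.85, 0.4]))   # compared against, e.g., 0.8
```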
  • the analysis result is obtained from the shooting preview screen acquired in real time.
  • in some embodiments, the step of analyzing the shooting preview image by using the preset model is performed on the currently acquired preview screen once every preset interval (for example, 0.2 seconds) to obtain a current analysis result.
  • in step S15, it is determined according to the analysis result whether the shooting condition is currently satisfied; if yes, step S17 is performed; otherwise, the method returns to step S13 or ends.
  • determining whether the shooting condition is currently satisfied according to the analysis result includes: determining that the shooting condition is currently satisfied when determining that the satisfaction exceeds the satisfaction preset threshold.
  • the satisfaction preset threshold may be 80%, 90%, and the like.
  • determining whether the shooting condition is currently satisfied according to the analysis result includes: determining that the shooting condition is currently satisfied when determining that the similarity exceeds the similarity preset threshold.
  • determining whether the shooting condition is currently satisfied according to the analysis result may further include: determining that the shooting condition is currently satisfied when the analysis result includes identifying the identification information that currently meets the shooting condition.
  • the target object may be a hand, a face, a specific scene, etc.
  • the target object feature model includes a gesture feature model, an expression feature model, and a scene feature model, etc.
  • the analyzed target object feature vector may correspondingly include a gesture feature vector, an expression feature vector, or a scene feature vector.
  • in step S17, the shooting operation is controlled to be performed when it is determined that the shooting condition is satisfied.
  • the photographing operation is a photographing operation
  • controlling to perform the photographing operation includes: controlling to perform a photographing operation to obtain a photograph corresponding to the current photograph preview screen.
  • the shooting operation is a continuous shooting operation
  • controlling to perform a shooting operation includes: controlling to perform a continuous shooting operation, and obtaining a plurality of photos including a photo corresponding to the current photo preview screen.
  • the method may further include: analyzing the plurality of photos obtained by the continuous shooting operation to determine the best photo; retaining the best photo; and deleting the other photos obtained by the continuous shooting operation.
  • the photographing operation is a video photographing operation
  • controlling to perform a shooting operation includes: controlling to perform a video shooting operation to obtain a video file that uses the current shooting preview screen as the starting video frame.
  • the method may further include: after the video file is captured, comparing the plurality of video picture frames in the captured video file to determine the best picture frame; and extracting the best picture frame to save it as a photo.
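  • A sketch of this post-capture selection, for both the burst photos and the video picture frames (assuming the frames are in a list, and that score_fn is supplied by the caller, e.g. the satisfaction model or the reference-similarity measure described above):
```python
def select_best_frame(frames, score_fn):
    """Score every burst photo / video picture frame, keep the best one,
    and return the rest so they can be deleted."""
    best_idx = max(range(len(frames)), key=lambda i: score_fn(frames[i]))
    best = frames[best_idx]
    to_delete = [f for i, f in enumerate(frames) if i != best_idx]
    return best, to_delete
```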
  • FIG. 2 is a flowchart of a shooting control method in a second embodiment of the present application.
  • the method comprises the steps of:
  • the reference picture may be a standard picture preset by the user with a specific expression such as smiling, laughing, sad, angry, or yawning, or a standard picture with gestures such as an "OK" gesture or a "V" gesture, or a standard picture with scenery such as flowers, birds, or mountains.
  • the trained image processing algorithm model includes a target object feature model
  • step S23 specifically includes: analyzing, by using image recognition technology, the target object in the captured preview image to generate a corresponding target object feature vector; and calculating the similarity between the preview image and the reference picture according to the target object feature model and the target object feature vector corresponding to the preview image.
  • the similarity between the shooting preview image and the reference image is calculated according to the trained target object feature model and the target object feature vector corresponding to the shooting preview image by: using the target object feature vector corresponding to the shooting preview image as the input information of the trained target object feature model, which then calculates the similarity between the shooting preview picture and the reference picture.
  • in other embodiments, the trained image processing algorithm model compares the shooting preview image with the reference image by acquiring pixel information of the shooting preview image, comparing it with the pixel information of the reference picture, and obtaining the similarity between the shooting preview picture and the reference picture; that is, the similarity is obtained by comparing pixel information such as the pixel grayscale values of the two images.
  • the target object may include an object such as a face, a hand, or a specific scene.
  • the target object feature model may include an expression feature model, a gesture feature model, a scene feature model, and the like, and the analyzed target object feature vector may correspondingly include expression feature vectors such as smiling, angry, or yawning, gesture feature vectors such as the "OK" gesture or the "V" gesture, or scene feature vectors such as flowers, birds, or mountains.
  • the trained expression feature model can be obtained by training as follows: a plurality of photos with differently expressive faces are provided in an initial training set; face recognition technology is used to perform expression analysis on the persons in the provided photos to generate corresponding expression feature vectors Xi, for example, X1 represents the size of the eye opening, X2 represents the degree to which the mouth corners rise, and X3 represents the size of the mouth opening; a training sample set is established from the generated expression feature vectors and the similarity labels between the corresponding photos and the reference picture; and the sample set is then used for training learning to obtain the trained expression feature model.
  • the similarity preset threshold may be 80%, 90%, or the like.
  • the reference picture includes a reference picture having a laughing expression, and it is determined that the shooting condition is satisfied when the similarity between the shooting preview picture and the reference picture having the laughing expression reaches 80%, and the automatic shooting is triggered.
  • the similarity between the shooting preview screen and the reference picture may include, but is not limited to, the similarity of the picture style, the similarity of the colors, the similarity of the content layout, the similarity of the pixel gray levels, and the like.
  • the user may first set a reference picture that is considered satisfactory, and perform model training.
  • when the user wants to take a picture, a shooting preview picture is first obtained, and the trained model is then used to analyze the similarity between the preview picture and the reference picture.
  • when the similarity reaches the similarity preset threshold, automatic shooting is triggered, so that a satisfactory photo similar to the reference picture can be obtained; when the target object is a face and the image processing algorithm model is an expression feature model, this reduces the chance of taking a photo while the expression is unnatural and allows a photo with a satisfactory expression to be taken in time.
  • the step S21 corresponds to the step S11 in FIG. 1 .
  • Step S23 may be a more specific step of step S13 in FIG. 1, and related descriptions may be referred to each other.
  • Step S25 corresponds to steps S15 and S17 in FIG. 1, and related descriptions may also be referred to each other.
  • FIG. 3 is a flowchart of a shooting control method in a third embodiment of the present application.
  • the preset model is a trained image processing algorithm model
  • the trained image processing algorithm model includes a trained target object feature model.
  • the shooting control method includes the following steps:
  • S301 Acquire a shooting preview screen.
  • S303 Analyze a target object in the preview image by using an image recognition technology to generate a corresponding target object feature vector.
  • the identifier information may be an identifier that identifies whether the photographing condition is currently satisfied, such as 1, 0, and the like. More specifically, when the identification information is 1, it indicates that the shooting condition is satisfied, and when the identification information is 0, it indicates that the shooting condition is not satisfied. Obviously, the identification information can also be information such as "yes" or "no".
  • step S307 includes: when the analysis result includes identification information indicating that the shooting condition is currently satisfied, determining that the shooting condition is currently satisfied and controlling to perform the shooting operation.
  • for example, when the analysis result includes the identifier "1", it is determined that the shooting condition is currently satisfied; when the analysis result includes the identifier "0", it is determined that the shooting condition is not satisfied.
  • the step S301 corresponds to the step S11 in FIG. 1 .
  • steps S303 and S305 may be a more specific implementation of step S13 in FIG. 1, and the related descriptions may refer to each other.
  • Step S307 corresponds to steps S15, S17 in Fig. 1, and the related description can also be referred to each other.
  • FIG. 4 is a flowchart of a shooting control method in a fourth embodiment of the present application.
  • the photographing control method includes the following steps:
  • step S33 includes: comparing the captured preview image with the reference image by using the trained image processing algorithm model to obtain a similarity between the captured preview image and the reference image.
  • the trained image processing algorithm model includes a target object feature model.
  • using the trained model to compare the shooting preview image with the reference image and obtain their similarity includes: analyzing the target object in the shooting preview screen through image recognition technology to generate a corresponding target object feature vector; and calculating the similarity between the preview image and the reference image according to the target object feature model and the target object feature vector corresponding to the preview image.
  • analyzing the captured preview image by using the preset model to obtain the analysis result may further include: analyzing the target object in the captured preview image by using image recognition technology to generate a corresponding target object feature vector; and using the vector as input information of the target object feature model to derive an analysis result including identification information indicating whether the shooting condition is currently satisfied.
  • the analysis result of satisfaction can also be obtained by the trained neural network model.
  • when the analysis result is obtained by the trained image processing algorithm model, step S35 includes: when the similarity between the shooting preview screen and the reference picture exceeds the similarity preset threshold, or when the analysis result includes identification information indicating that the shooting condition is currently satisfied, determining that the shooting condition is satisfied and controlling to perform the continuous shooting operation.
  • step S35 may further include: when the satisfaction exceeds the satisfaction preset threshold, determining that the shooting condition is satisfied, and controlling to perform the continuous shooting operation.
  • the requirement of the shooting condition when performing the continuous shooting operation may be slightly lower than the requirement when the photographing operation is performed.
  • that is, the similarity preset threshold or satisfaction preset threshold used for comparison when performing the continuous shooting operation may be slightly lower than that used when performing the photographing operation. For example, the similarity preset threshold or satisfaction preset threshold for the continuous shooting operation may be 70%, lower than the threshold of 80% or more used for the photographing operation.
  • the reference picture is a picture with a laughing expression
  • the compared expression feature vector is X2, which indicates the degree to which the mouth corners rise,
  • and the continuous shooting operation is triggered when it is determined that the mouth-corner rise reaches 70% of that in the reference picture.
  • the user typically continues to smile for a short period thereafter, during which the smile reaches its maximum level;
  • this expression will be captured by the continuous shooting operation, so a photo with the best shooting effect can be obtained.
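  • A small sketch of these mode-dependent thresholds (the 70%/80% values come from the example above; the mode names are illustrative):
```python
# Per-mode thresholds; 0.80 and 0.70 are the example values given above.
THRESHOLDS = {"photo": 0.80, "burst": 0.70, "video": 0.70}

def condition_met(similarity: float, mode: str) -> bool:
    """Continuous shooting and video use a slightly lower threshold so the
    burst starts before the expression (e.g. the smile) reaches its peak."""
    return similarity >= THRESHOLDS[mode]

assert condition_met(0.72, "burst") and not condition_met(0.72, "photo")
```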
  • the step S31 corresponds to the step S11 in FIG. 1 .
  • step S33 may correspond to step S13 in FIG. 1, step S23 in FIG. 2, and steps S303 and S305 in FIG. 3 at different levels of specificity, and the related descriptions may refer to each other.
  • step S35 corresponds to steps S15 and S17 in FIG. 1, step S25 in FIG. 2, and step S307 in FIG. 3 at different levels of specificity, and the related descriptions may also refer to each other.
  • FIG. 5 is a flowchart of a shooting control method in a fifth embodiment of the present application.
  • the fifth embodiment differs from the fourth embodiment in that the photos obtained by the continuous shooting operation are also screened.
  • the photographing control method in the fifth embodiment includes the following steps:
  • step S47 may include: analyzing the plurality of photos by using the trained neural network model to obtain satisfaction, and determining the photo with the highest satisfaction as the best photo.
  • step S47 may include: comparing a plurality of photos obtained by the continuous shooting with the reference image, and determining a photo with the highest similarity with the reference image as the best photo.
  • the shooting control method may further include:
  • the electronic device includes a memory in which a plurality of albums are created; "retaining the best photo" means storing the best photo in a certain album, for example the camera album. By deleting the other photos, excessive use of storage space can be effectively avoided.
  • steps S41 to S45 are respectively the same as steps S31 to S35 in the fourth embodiment shown in FIG. 4, and reference may be made to the descriptions of steps S31 to S35 in FIG. 4 for more details.
  • FIG. 6 is a flowchart of a shooting control method in a sixth embodiment of the present application.
  • the sixth embodiment differs from the fourth embodiment in that the shooting operation is a video shooting operation rather than a continuous shooting operation.
  • the photographing control method in the sixth embodiment includes the following steps:
  • S53 analyzing the shooting preview image by using a preset model to obtain an analysis result.
  • the requirement of the shooting condition when performing the video shooting operation may also be slightly lower than the requirement when the photographing operation is performed.
  • that is, the similarity preset threshold or satisfaction preset threshold used for comparison when performing the video shooting operation may be slightly lower than that used when performing the photographing operation.
  • step S55 may include: when the similarity between the shooting preview screen and the reference picture exceeds the similarity preset threshold, determining that the shooting condition is satisfied, and controlling to perform the video shooting operation.
  • the method may further include: when the satisfaction exceeds the satisfaction preset threshold, determining that the shooting condition is met, and controlling to perform a video shooting operation.
  • the shooting control method may further include:
  • step S57 may include: analyzing the plurality of video picture frames by using the trained neural network model to obtain satisfaction, and determining the video picture frame with the highest satisfaction as the best picture frame.
  • in other embodiments, step S57 may include: comparing the plurality of video picture frames in the video file with the reference picture, and determining the video picture frame with the highest similarity to the reference picture as the best picture frame.
  • the shooting control method may further include:
  • the electronic device includes a memory in which a plurality of albums are created; "extracting the best picture frame to save it as a photo" means storing the best picture frame in photo format in an album, for example the camera album.
  • FIG. 7 is a flowchart of a shooting control method in a seventh embodiment of the present application.
  • the photographing control method in the seventh embodiment includes the following steps:
  • S63 The shooting preview screen is analyzed by using a preset model to obtain an analysis result.
  • S65 Determine whether the shooting condition is currently satisfied and determine the shutter time according to the analysis result.
  • step S65 may include: determining, according to the target object features obtained from the analysis result, whether the shooting condition is satisfied, and determining the shutter time.
  • for example, a correspondence between an expression feature and the shooting condition, and a correspondence between a gesture feature and the shutter time, may be preset: for instance, a smile expression corresponds to the shooting condition,
  • and the distance between the thumb and the index finger corresponds to the shutter time, where the shutter time can be logarithmically related to the distance between the thumb and the index finger.
  • when the user's expression feature is determined to be a smile according to the analysis result, it is determined that the shooting condition is satisfied, and the shutter time is determined from the gesture feature of the thumb-index finger distance, thereby realizing control of the exposure intensity.
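  • A sketch of such a logarithmic correspondence (all constants are assumptions chosen for illustration; the application does not specify them):
```python
import math

BASE_SHUTTER = 1 / 1000   # shutter time at zero distance, in seconds (assumed)
SCALE = 1 / 250           # seconds per logarithmic unit of distance (assumed)

def shutter_time_from_gesture(distance_cm: float) -> float:
    """Shutter time logarithmically related to the thumb-index finger distance."""
    return BASE_SHUTTER + SCALE * math.log1p(max(distance_cm, 0.0))

for d in (0.0, 2.0, 8.0):
    print(f"{d:.0f} cm -> {shutter_time_from_gesture(d):.4f} s")
```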
  • the trained model is a trained image processing algorithm model
  • the trained image processing algorithm model includes a trained target object feature model, specifically including a trained expression feature model and a trained gesture feature model.
  • the trained expression feature model may be a model obtained by the following training steps: using facial recognition technology to perform expression analysis on the person in each photo in the initial training set to generate a corresponding expression feature vector; establishing a training sample set from the generated expression feature vectors and the similarity labels between the corresponding photos and the reference picture; and then performing training learning using the sample set to obtain the trained expression feature model.
  • the trained gesture feature model is a model obtained by the following training steps: using image recognition technology to analyze the hand part in each photo in the initial training set to generate a corresponding gesture feature vector; establishing a training sample set from the generated gesture feature vectors and the similarity labels between the corresponding photos and the reference picture; and then performing training learning using the sample set to obtain the trained gesture feature model.
  • step S63 may specifically include: determining an expression similarity according to the expression feature model and the expression feature vector corresponding to the shooting preview image, and determining the gesture feature corresponding to the shooting preview image according to the gesture feature model and the gesture feature vector corresponding to the shooting preview image.
  • the trained gesture feature model includes a plurality of reference images and the reference gesture feature vectors of those reference images; determining the gesture feature corresponding to the preview image according to the gesture feature model and the gesture feature vector corresponding to the captured preview image includes: comparing, by using the gesture feature model, the gesture feature vector corresponding to the captured preview image with the reference gesture feature vectors of the plurality of reference images; determining the reference image whose reference gesture feature vector has the highest similarity; and deriving the gesture feature from that reference image. For example, after the reference image is determined, the gesture feature is determined based on the label of the reference image.
  • step S65 may specifically include: determining that the shooting condition is satisfied when the expression similarity is greater than the similarity preset threshold; determining the gesture feature according to the gesture feature model and the gesture feature vector corresponding to the shooting preview image, for example, obtaining the distance between the index finger and the thumb; and determining the shutter time corresponding to the derived gesture feature according to a predefined correspondence between gesture features and shutter times, for example, determining the shutter time according to the correspondence between the thumb-index finger distance and the shutter time.
  • the foregoing correspondence may be a correspondence table or the like stored in a memory of the electronic device.
  • step S63 may further include: using the expression feature vector corresponding to the preview image as the input information of the trained expression feature model to obtain a result including identification information indicating whether the shooting condition is satisfied; and using the gesture feature vector corresponding to the captured preview image as the input information of the trained gesture feature model to obtain a result including the shooting setting parameter. Thereby, an analysis result including both whether the shooting condition is satisfied and the shooting setting parameter is obtained.
  • correspondingly, step S65 may include: determining whether the shooting condition is satisfied according to the identification information obtained from the trained expression feature model, and determining the shooting setting parameter according to the result obtained from the trained gesture feature model; that is, an analysis result including identification information indicating whether the shooting condition is satisfied and the shooting setting parameter is obtained.
  • in other embodiments, step S65 includes: determining whether the shooting condition is currently satisfied and determining the shooting setting parameter according to an analysis result, obtained by the trained neural network model, that includes both whether the shooting condition is currently satisfied and the shooting setting parameter.
  • in some embodiments, the photo preview screen may be analyzed to obtain an analysis result of the age identity of the photographed person, and the shutter time is determined according to the age identity. For example, when it is determined that the person being photographed is an infant, whose expressions are often fleeting, the shutter time is reduced and the shutter speed increased, ensuring that a wonderful moment can be captured in time.
  • the steps S61 and S63 in FIG. 7 correspond to the steps S11 and S13 in FIG. 1 respectively, and the related descriptions can refer to each other.
  • the shooting control method further includes: performing the shooting operation according to the determined shutter time. Specifically, after the shutter time is adjusted to the determined value, a corresponding shooting operation is performed.
  • FIG. 8 is a flowchart of a shooting control method in an eighth embodiment of the present application.
  • the photographing control method in the eighth embodiment differs from the method in the seventh embodiment shown in FIG. 7 mainly in the step corresponding to step S65 in FIG. 7.
  • the photographing control method in the eighth embodiment includes the following steps:
  • S73 The shooting preview screen is analyzed by using a preset model to obtain an analysis result.
  • S75 Determine whether the shooting condition is currently satisfied and determine the aperture size according to the analysis result.
  • step S75 may include:
  • determining, according to the target object features obtained from the analysis result, whether the shooting condition is satisfied, and determining the aperture size.
  • the expression features obtained from the analysis result determine whether the shooting condition is satisfied,
  • and the aperture size is determined based on the gesture features obtained from the analysis result. For example, it may be preset that a smile determines whether to take a picture and that a gesture feature determines the aperture size, e.g., a correspondence between the aperture size and the distance between the thumb and the index finger is set in advance, where the aperture size may be logarithmically related to the distance between the thumb and the index finger.
  • Steps S71 and S73 in FIG. 8 correspond to steps S11 and S13 in FIG. 1 respectively, and also correspond to steps S61 and S63 in FIG. 7, and related descriptions can be referred to each other.
  • Step S75 in FIG. 8 corresponds to step S65 in FIG. 7 , except that the aperture size is determined.
  • a more specific implementation of the step S75 in FIG. 8 is obtained by replacing the shutter time with the aperture size, and details are not described herein again.
  • the shutter time and the aperture size in FIGS. 7 and 8 are merely specific examples of the shooting setting parameters.
  • other shooting setting parameters, such as sensitivity, may also be determined based on the analysis result.
  • in some embodiments, several shooting setting parameters are determined at the same time, for example, both the shutter time and the aperture size.
  • the distance between the thumb and the index finger of the user's gesture can simultaneously correspond to the shutter time and the aperture size, and the corresponding shutter time and aperture size can be simultaneously obtained by analyzing the user gesture.
  • at least one shooting setting parameter including a shutter time, an aperture size, and the like can be determined based on the analysis result.
  • determining whether the shooting condition is satisfied and determining the shooting setting parameter based on the analysis result are performed simultaneously according to the analysis result.
  • determining the shooting setting parameter including the shutter time or the aperture size is performed after determining that the shooting condition is satisfied based on the analysis result. That is, after the determination of the shooting condition is satisfied, the shooting setting parameter is determined according to the analysis result, thereby avoiding determining the shooting setting parameter every time, and avoiding waste of computing resources.
  • in some embodiments, the image processing algorithm model may be used to perform image analysis on the photo preview image to obtain an analysis result of the number of faces, and the aperture size is determined according to the number of faces to ensure that each face is well exposed.
  • the shooting control method further includes:
  • the shooting operation is performed according to the determined aperture size. Specifically, after the aperture size is adjusted to the determined aperture size, a corresponding photographing operation is performed.
  • FIG. 9 is a flowchart of a shooting control method in a ninth embodiment of the present application.
  • the photographing control method in the ninth embodiment includes the following steps:
  • S83 The video stream formed by the preview image is analyzed by using a preset model to obtain an analysis result.
  • step S85 Determine whether the shooting condition is currently satisfied according to the analysis result. If yes, step S87 is performed, otherwise, step S83 is returned.
  • the video stream may be analyzed by a preset model such as a trained neural network model or a trained image processing algorithm model.
  • for example, capturing that the user completes an action: the action completed by the user may be determined from the changes across multiple video picture frames, and a correspondence between preset actions and the shooting condition may be predefined; when it is determined that the action completed by the user is a preset action, it is determined that the shooting condition is satisfied.
  • in other embodiments, each video picture frame in the video stream may also be analyzed by a preset model such as a trained neural network model or a trained image processing algorithm model to obtain an analysis result, and whether the shooting condition is satisfied is determined according to the analysis result.
  • for obtaining the analysis result by analyzing each video picture frame and determining whether the shooting condition is satisfied according to the analysis result, reference may be made to the related description of any of the foregoing embodiments.
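  • A crude sketch of the multi-frame analysis described above (the frame-difference motion metric, the threshold, and the action labels are all stand-in assumptions; a real system would use a trained classifier):
```python
import numpy as np

PRESET_ACTIONS = {"wave"}   # actions predefined to satisfy the shooting condition

def detect_action(frames):
    """Stand-in classifier: infer an action label from multi-frame change."""
    if len(frames) < 2:
        return None
    motion = np.mean([np.abs(a.astype(float) - b.astype(float)).mean()
                      for a, b in zip(frames, frames[1:])])
    return "wave" if motion > 10.0 else None   # hypothetical decision rule

def condition_from_stream(frames) -> bool:
    """Shooting condition is met when the completed action is a preset action."""
    return detect_action(frames) in PRESET_ACTIONS
```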
  • the shooting setting parameter may include at least one of a shutter time, an aperture size, and the like.
  • step S87 is performed after step S85.
  • step S87 may be performed simultaneously with step S85.
  • S89 Perform the shooting operation according to the shooting setting parameters. Specifically, after the shooting parameters are adjusted to the determined shooting setting parameters, a corresponding shooting operation is performed.
  • FIG. 10 is a flowchart of a shooting control method in a tenth embodiment of the present application.
  • the photographing control method in the tenth embodiment includes the following steps:
  • S91 Acquire a shooting preview image through the camera and capture the sound signal through the microphone.
  • step S95 Determine whether the shooting condition is met according to the first analysis result and the second analysis result. If yes, step S97 is performed, otherwise, step S93 is returned.
  • the first analysis result is the speech content obtained by a speech analysis model, and S95 includes:
  • determining whether a preliminary shooting condition is satisfied according to the first analysis result; for example, the preset voice content is "photographing", and if the voice content in the first analysis result matches the preset voice content "photographing", it is determined that the preliminary shooting condition is satisfied;
  • and determining whether the shooting condition is satisfied according to the second analysis result obtained by analyzing the captured preview image with the trained model.
  • determining whether the shooting condition is satisfied according to the second analysis result corresponds to step S15 in FIG. 1;
  • for the specific implementation, reference may be made to step S15 in FIG. 1 and the related steps of the other embodiments.
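  • A sketch of gating the visual decision on the voice result (the threshold and the inputs are assumptions; the preset phrase "photographing" comes from the example above):
```python
PRESET_VOICE = "photographing"   # preset voice content from the example above
SIMILARITY_THRESHOLD = 0.8       # assumed visual threshold

def should_shoot(voice_text: str, visual_similarity: float) -> bool:
    """The first analysis result (speech) gates the preliminary condition;
    the second (preview analysis) then confirms the shooting condition."""
    preliminary_ok = voice_text.strip().lower() == PRESET_VOICE
    return preliminary_ok and visual_similarity >= SIMILARITY_THRESHOLD
```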
  • the shooting setting parameter may include at least one of a shutter time, an aperture size, and the like.
  • step S97 is performed after step S95.
  • step S97 may be performed simultaneously with step S95.
  • S99 Perform a shooting operation according to the determined shooting setting parameters. Specifically, after the shooting parameters are adjusted to the determined shooting setting parameters, a corresponding shooting operation is performed.
  • the accuracy of the shooting control can be further improved by adding other inputs as a basis for judging whether or not the shooting conditions are met.
  • the method may further include: training the model to obtain the trained model.
  • FIG. 11 is a flowchart of a model training process in a shooting control method according to an embodiment of the present application.
  • the model training process may include the following steps:
  • S101: In response to the user manually controlling shooting, the model saves the captured framing picture as a positive sample and establishes or updates the correspondence between the positive sample and satisfaction of the shooting condition, so as to adjust the parameters of the model itself.
  • the positive sample is a picture that satisfies the photographing condition; satisfaction of the photographing condition can be marked as the label of the positive sample.
  • the user manually controls the shooting to be done by pressing a shutter button or a photo icon.
  • the user manually controls the shooting to be performed by performing a specific operation on a physical button of the electronic device.
  • the electronic device includes a power button, and manual control shooting is achieved by double-clicking the power button.
  • S103 The model samples, according to a preset rule, framing picture frames that were not manually shot, as negative samples, and adjusts the parameters of the model according to the negative samples.
  • in some embodiments, sampling framing picture frames that were not manually shot as negative samples according to a preset rule includes: sampling framing picture frames from a period of time after the positive sample as negative samples.
  • in other embodiments, it includes: sampling framing picture frames from a period of time before the positive sample as negative samples.
  • for example, the framing pictures are automatically captured in advance and a certain number of pending samples are stored; the pending samples for which no manually controlled shot occurred are then determined to be negative samples.
  • in other embodiments, framing picture frames that were not manually shot may also be obtained as negative samples by random sampling.
  • in some embodiments, the sampling is determined by an additional sensor, for example, a photosensitive or acoustic sensor collects ambient light or sound to decide when to sample, and the sampled picture frame is used as a negative sample.
  • in some embodiments, sampling framing picture frames that were not manually shot as negative samples according to a preset rule includes: collecting a framing picture frame that was not manually shot, further analyzing factors such as its composition, and then deciding whether to use it as a negative sample.
  • in some embodiments, sampling framing picture frames that were not manually shot as negative samples according to a preset rule includes: sampling, at preset time intervals, the framing picture frames between two adjacent manually controlled shots as negative samples.
  • the model may save the sampled negative samples and may also establish a correspondence between the negative samples and non-satisfaction of the photographing condition, so as to adjust the parameters of the model itself.
  • the negative sample is a picture that does not satisfy the shooting condition; non-satisfaction of the shooting condition can be marked as the label of the negative sample.
  • the preset time interval can be 1 second, 2 seconds, and the like.
  • the two adjacent manually controlled shots may be two adjacent manually controlled shots during the framing performed within a single opening of the camera.
  • the two adjacent manually controlled shots may also be two manually controlled shots during different framing sessions.
  • the framing pictures between the first manually controlled shot and the second manually controlled shot are sampled at the preset time interval and saved by the currently used model as negative samples.
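  • A sketch of this sample collection (simplified: the buffer records a negative sample at most once per interval whenever no manual shot occurs, rather than implementing every sampling rule above; all names and the interval value are illustrative):
```python
from dataclasses import dataclass, field

SAMPLE_INTERVAL = 1.0   # seconds between negative samples (e.g. 1 s or 2 s)

@dataclass
class TrainingBuffer:
    positives: list = field(default_factory=list)   # manually shot frames
    negatives: list = field(default_factory=list)   # sampled framing frames
    last_neg_t: float = float("-inf")

    def on_frame(self, frame, t: float, manually_shot: bool):
        if manually_shot:
            self.positives.append(frame)            # label: condition satisfied
        elif t - self.last_neg_t >= SAMPLE_INTERVAL:
            self.negatives.append(frame)            # label: condition not satisfied
            self.last_neg_t = t
```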
  • in some embodiments, the method further includes the step of: controlling the electronic device to enter the model training mode in response to a user operation of entering model training. Determining that the training completion condition is reached then includes: determining that the training completion condition is reached in response to a user operation of exiting the model training mode.
  • the operations of entering the model training include a selection operation of the menu option, or a specific operation on the physical button, or a specific touch gesture input on the touch screen of the electronic device.
  • controlling the electronic device to enter the model training mode in response to the user operation of entering model training includes: controlling the electronic device to enter the model training mode in response to the user's selection of the menu option, a specific operation on the physical button, or a specific touch gesture input on the touch screen of the electronic device.
  • determining that the training completion condition is reached includes: determining that the training completion condition is reached when it is determined that the number of times the user manually controls the shooting reaches the preset number of times N1.
  • the preset number of times N1 may be the number of times the system default model training needs to be executed, or may be a user-defined value.
  • in other embodiments, determining that the training completion condition is reached includes: testing the model with the current positive samples, determining whether the test result reaches a preset threshold, and determining that the training completion condition is reached when the test result reaches the preset threshold.
  • the above model may be a model such as a neural network model or an image processing algorithm model.
  • the trained model is obtained by training the model in advance, and when the user turns on the camera for shooting, the shooting can be automatically controlled according to the trained model, and the satisfactory picture desired by the user can be captured in time.
  • FIG. 12 is a flowchart of a model training process in a shooting control method according to another embodiment of the present application.
  • the model training process may include the following steps:
  • S111 Perform a framing preview in response to the opening operation of the camera to obtain a shooting preview screen.
  • the step S111 specifically includes: after the automatic shooting function is turned on, performing a framing preview in response to the opening operation of the camera to obtain a shooting preview screen. That is, the model training process shown in FIG. 12 can be performed after the automatic shooting function is turned on.
  • turning on the automatic shooting function may be accomplished in response to a user setting operation in a menu option of the camera.
  • turning on the automatic shooting function may also be accomplished in response to the user's specific touch gesture on the touch screen of the electronic device, for example, in response to a double-click on the touch screen with the knuckles.
  • the photographing control method shown in any one of the embodiments 1 to 10 in the present application can be performed after the automatic shooting function is turned on by the electronic device.
  • Step S113 corresponds to step S17 in the first embodiment shown in FIG. 1 and step S25 in the second embodiment shown in FIG. 2; reference may be made to the description of step S17 in FIG. 1 and of the relevant steps in the other embodiments.
  • S115 Acquire user satisfaction feedback information about the automatic shooting.
  • for example, the user may be prompted to evaluate satisfaction with the automatic photographing by generating prompt information, such as a prompt box including "satisfactory" and "unsatisfactory" options, and the satisfaction feedback information for the automatic photographing is obtained from the user's selection.
  • in other embodiments, the user's satisfaction with the automatic shooting is obtained by detecting the user's operations on the photo or video obtained by the automatic shooting. For example, if it is detected that the user deletes the photo or video obtained by the automatic shooting, it is determined that the user is not satisfied with the automatic shooting, and satisfaction feedback information indicating dissatisfaction is obtained. If it is detected that the user marks the photo or video obtained by the automatic shooting as a favorite or shares it, it is determined that the user is satisfied with the automatic shooting, and satisfaction feedback information indicating satisfaction is obtained.
  • S117 Output the user's satisfaction feedback information about the current automatic shooting to the currently used model, so that the currently used model uses the satisfaction feedback information for optimization training.
  • in this way, the model is continuously optimized, so that automatic shooting in subsequent use becomes more accurate.
  • the currently used model may be a model whose training has been confirmed as completed, or a model that has not yet finished training. When training is already completed, the model can be further optimized; when the model is not yet fully trained, the feedback helps complete the training.
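  • A sketch of turning ordinary user behaviour into feedback and feeding it back to the model (the model.update hook and the action names are assumptions for illustration, not an API defined by the application):
```python
from typing import Optional

def feedback_from_user_action(action: str) -> Optional[bool]:
    """Infer satisfaction from ordinary user behaviour (cf. S115)."""
    if action == "delete":
        return False            # deleting the auto-shot photo -> unsatisfied
    if action in ("favorite", "share"):
        return True             # favoriting or sharing -> satisfied
    return None                 # no signal

def on_feedback(model, frame, satisfied: bool):
    """cf. S117: hand the labeled sample to the currently used model."""
    model.update(frame, 1.0 if satisfied else 0.0)  # hypothetical training hook
```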
  • Steps S111 to S117 in FIG. 12 may be performed after step S105 in FIG. 11, before step S105 in FIG. 11, or even before step S101 in FIG. 11.
  • the currently used model may be an untrained initial model.
  • The preset model in any one of embodiments 1 to 10 described above may likewise be an untrained model: after the user turns on the automatic shooting function, automatic shooting is performed according to the current model, and the current model is optimized and trained according to the satisfaction feedback information fed back by the user.
  • When the preset model is an untrained model, a picture is automatically acquired each time the user shoots and is used as a positive sample for training; the shooting setting parameters at the time of shooting may further be acquired and trained on as part of the positive sample. The preset model is optimized gradually until the number of training iterations reaches a preset number, or until the proportion of "satisfactory" responses in subsequent user feedback exceeds a preset ratio, at which point training is determined to be complete. In this way, since the user trains the model on his or her own shots rather than using another person's model, better personalization can be achieved.
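  • The training-completion check just described might look like the following sketch; both threshold values are illustrative assumptions only:

```python
# Sketch of the stopping criterion: training is complete after a preset
# number of updates, or once the "satisfactory" ratio in user feedback
# exceeds a preset ratio. Both thresholds below are assumed values.

def training_complete(num_updates, feedback,
                      max_updates=1000, min_satisfied_ratio=0.9):
    if num_updates >= max_updates:
        return True
    if feedback:                          # list of True/False labels
        ratio = sum(1 for f in feedback if f) / len(feedback)
        return ratio > min_satisfied_ratio
    return False
```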
  • FIG. 13 is a flowchart of a model training process in a shooting control method according to still another embodiment of the present application.
  • the model training process may include the following steps:
  • The model saves the positive samples and establishes or updates the correspondence between the positive samples, the shooting conditions they satisfy, and the shooting setting parameters, so as to adjust the parameters of the model itself.
  • The shooting conditions and the shooting setting parameters can at the same time be marked as labels of the positive samples.
  • The user may manually control shooting by pressing a shutter button or tapping a photographing icon.
  • The user may also manually control shooting by performing a specific operation on a physical button of the electronic device; for example, where the electronic device includes a power button, manual shooting may be triggered by double-clicking the power button.
  • the shooting setting parameters may include parameters such as aperture size, shutter time, and sensitivity.
  • S123 Screen frames for which shooting was not manually triggered are sampled by the model as negative samples according to a preset rule, and the parameters of the model itself are adjusted according to the negative samples (see the sketch after the following step).
  • Step S123 and step S125 correspond to step S103 and step S105 in FIG. 11, respectively.
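  • As a sketch of this sample collection, with an assumed "every Nth frame" preset rule and hypothetical frame/shot attributes:

```python
# Sketch of FIG. 13's sample collection: manually shot frames become
# positive samples labeled with their shooting setting parameters;
# frames never shot are sampled as negative samples by a preset rule
# (here, every Nth preview frame -- an assumption).

def collect_samples(preview_frames, manual_shots, every_n=30):
    shot_ids = {s.frame_id for s in manual_shots}
    positives = [(s.frame, s.settings) for s in manual_shots]
    negatives = [f for i, f in enumerate(preview_frames)
                 if f.frame_id not in shot_ids and i % every_n == 0]
    return positives, negatives
```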
  • FIG. 14 is a block diagram showing a schematic partial structure of an electronic device 100 according to an embodiment of the present application.
  • the electronic device 100 includes a processor 10, a memory 20, and a camera 30.
  • the camera 30 includes at least a rear camera 31 and a front camera 32.
  • The rear camera 31 is used to capture images behind the electronic device 100, and can be used by the user for shooting operations such as photographing other people.
  • The front camera 32 is used to capture images in front of the electronic device 100, and can be used for self-photographing and the like.
  • The models in FIGS. 1 to 13 may be programs running in the processor 10, such as specific algorithm functions, for example neural network algorithm functions, image processing algorithm functions, and the like.
  • the electronic device 100 may further include a model processor that is independent of the processor 10.
  • In that case, the models in FIGS. 1 to 13 run in the model processor. The processor 10 may generate corresponding instructions as needed to trigger the model processor to run the corresponding model; the model processor then outputs the model's result to the processor 10, which uses it to perform control such as a shooting operation.
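  • A rough sketch of this division of labor, with an entirely hypothetical dispatch interface:

```python
# Sketch of offloading model inference to a dedicated model processor
# (e.g., an NPU) that returns its output to the main processor. The
# interface below is a placeholder, not a real device API.

class ModelProcessor:
    def run(self, model_name, frame):
        """Run the named model on dedicated hardware; return its output."""
        raise NotImplementedError        # hardware-specific dispatch

def analyze_frame(on_result, model_processor, frame):
    result = model_processor.run("shooting_condition_model", frame)
    on_result(result)                    # main processor uses the output
```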
  • Program instructions are stored in the memory 20.
  • The processor 10 is configured to call the program instructions stored in the memory 20 to execute the photographing control method of any of the embodiments shown in FIGS. 1 to 9, and to perform the model training process of the photographing control method of any of the embodiments shown in FIGS. 10 to 12.
  • For example, the processor 10 is configured to call the program instructions stored in the memory 20 to execute the following shooting control method: acquiring a shooting preview screen; analyzing the shooting preview screen by using a preset model to obtain an analysis result; determining, according to the analysis result, whether the shooting condition is currently satisfied; and controlling to perform a shooting operation when it is determined that the shooting condition is satisfied.
  • When determining, according to the analysis result, whether the shooting condition is currently satisfied, the processor 10 is further configured to call the program instructions to execute: determining, according to the analysis result, shooting setting parameters including at least one of a shutter time and an aperture size.
  • Controlling to perform the shooting operation when it is determined that the shooting condition is satisfied further includes: the processor 10 performing the shooting operation according to the determined shooting setting parameters when it determines that the shooting condition is satisfied.
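  • For illustration, this parameter determination and shooting step could be sketched as follows; the mapping from analysis result to shutter time and aperture is an assumption for the example, not the claimed logic:

```python
# Sketch of shooting with parameters derived from the analysis result:
# fast shutter for motion, wide aperture in low light. The attribute
# names (motion_level, low_light) and thresholds are assumptions.

def shoot_if_ready(camera, analysis):
    if not analysis.meets_shooting_condition:
        return
    shutter_time = 1 / 500 if analysis.motion_level > 0.5 else 1 / 60
    aperture = 2.8 if analysis.low_light else 8.0
    camera.capture(shutter_time=shutter_time, aperture=aperture)
```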
  • the processor 10 can be a microcontroller, a microprocessor, a single chip microcomputer, a digital signal processor, or the like.
  • the memory 20 can be any storage device that can store information such as a memory card, a solid state memory, a micro hard disk, an optical disk, or the like.
  • the electronic device 100 further includes an input unit 40 and an output unit 50.
  • the input unit 40 may include a touch panel, a mouse, a microphone, a physical button including a power key, a volume key, and the like.
  • the output unit 50 can include a display screen, a speaker, and the like.
  • the touch panel of the input unit 40 and the display screen of the output unit 50 are integrated to form a touch screen while providing the functionality of touch input and display output.
  • the electronic device 100 can be a portable electronic device having a camera 30, such as a mobile phone, a tablet computer, or a notebook computer, and can also be a camera device such as a camera or a video camera.
  • The present application further provides a computer readable storage medium in which a plurality of program instructions are stored; after the program instructions are called by the processor, all or part of the steps of any of the shooting control methods shown in FIGS. 1 to 13 are executed.
  • the computer storage medium is the memory 20, and may be any storage device that can store information such as a memory card, a solid state memory, a micro hard disk, an optical disk, or the like.
  • The photographing control method and the electronic device 100 of the present application can automatically determine, according to the photographing preview screen, whether the photographing condition is satisfied, perform photographing when it is, and thereby capture in time the highlight moments whose content corresponds to the current photographing preview screen.
  • embodiments of the present invention can be provided as a method, apparatus (device), or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • The computer program may be stored in or distributed on a suitable medium, supplied together with other hardware or as part of the hardware, or distributed in other forms, for example over the Internet or other wired or wireless telecommunication systems.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a photographing control method comprising: acquiring a preview image; analyzing the preview image using a preset model so as to obtain an analysis result; determining, on the basis of the analysis result, whether a photographing condition is satisfied at the current moment; and, if so, controlling a photographing operation to be performed. The invention also relates to an electronic device implementing the photographing control method. With the photographing control method and the electronic device of the invention, whether a photographing condition is satisfied can be determined automatically on the basis of a preview image and, if the photographing condition is satisfied, a photographing operation is performed so as to capture in time an image of a scene of interest containing content corresponding to a current preview image.
PCT/CN2018/085899 2018-05-07 2018-05-07 Procédé de commande de photographie, et dispositif électronique WO2019213819A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880070402.5A CN111295875A (zh) 2018-05-07 2018-05-07 拍摄控制方法及电子装置
PCT/CN2018/085899 WO2019213819A1 (fr) 2018-05-07 2018-05-07 Procédé de commande de photographie, et dispositif électronique

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/085899 WO2019213819A1 (fr) 2018-05-07 2018-05-07 Procédé de commande de photographie, et dispositif électronique

Publications (1)

Publication Number Publication Date
WO2019213819A1 true WO2019213819A1 (fr) 2019-11-14

Family

ID=68467677

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/085899 WO2019213819A1 (fr) 2018-05-07 2018-05-07 Procédé de commande de photographie, et dispositif électronique

Country Status (2)

Country Link
CN (1) CN111295875A (fr)
WO (1) WO2019213819A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111050069A (zh) * 2019-12-12 2020-04-21 维沃移动通信有限公司 一种拍摄方法及电子设备

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014800B (zh) * 2021-01-29 2022-09-13 中通服咨询设计研究院有限公司 一种用于通讯行业查勘作业的智能拍照方法
CN113676660B (zh) * 2021-08-11 2023-04-07 维沃移动通信有限公司 一种拍摄方法、装置及电子设备
CN113949815A (zh) * 2021-11-17 2022-01-18 维沃移动通信有限公司 一种拍摄预览方法、装置及电子设备
CN116501280B (zh) * 2023-06-28 2023-10-03 深圳市达浩科技有限公司 一种显示屏亮暗线调节方法、系统及led显示屏

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120262593A1 (en) * 2011-04-18 2012-10-18 Samsung Electronics Co., Ltd. Apparatus and method for photographing subject in photographing device
CN104125396A (zh) * 2014-06-24 2014-10-29 小米科技有限责任公司 图像拍摄方法和装置
CN104883494A (zh) * 2015-04-30 2015-09-02 努比亚技术有限公司 一种图像抓拍的方法及装置
CN107360371A (zh) * 2017-08-04 2017-11-17 上海斐讯数据通信技术有限公司 一种自动拍照方法、服务器及自动拍照装置
CN107729872A (zh) * 2017-11-02 2018-02-23 北方工业大学 基于深度学习的人脸表情识别方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201974617U (zh) * 2011-06-27 2011-09-14 北京海鑫智圣技术有限公司 公安标准化人像采集系统中的led光源设备
US10003722B2 (en) * 2015-03-17 2018-06-19 Disney Enterprises, Inc. Method and system for mimicking human camera operation
CN106709424B (zh) * 2016-11-19 2022-11-11 广东中科人人智能科技有限公司 一种优化的监控视频存储系统
CN107105341B (zh) * 2017-03-30 2020-02-21 联想(北京)有限公司 视频文件的处理方法及系统


Also Published As

Publication number Publication date
CN111295875A (zh) 2020-06-16

Similar Documents

Publication Publication Date Title
WO2019213819A1 (fr) Procédé de commande de photographie, et dispositif électronique
WO2019213818A1 (fr) Procédé de commande de photographie, et dispositif électronique
WO2020114087A1 (fr) Procédé et dispositif de conversion d'image, équipement électronique et support de stockage
US10170157B2 (en) Method and apparatus for finding and using video portions that are relevant to adjacent still images
WO2022037111A1 (fr) Procédé et appareil de traitement d'image, appareil d'affichage interactif, et dispositif électronique
WO2022028184A1 (fr) Procédé et appareil de commande de photographie, dispositif électronique et support de stockage
JP5169139B2 (ja) カメラ、および画像記録プログラム
WO2017084182A1 (fr) Procédé et appareil de traitement d'image
WO2018120662A1 (fr) Procédé de photographie, appareil de photographie et terminal
KR20090098505A (ko) 상태 정보를 이용하여 미디어 신호를 생성하는 방법 및장치
EP3975046B1 (fr) Procédé et appareil de détection d'image occluse et support
CN104580886A (zh) 拍摄控制方法及装置
KR20170074822A (ko) 얼굴 앨범을 기반으로 한 음악 재생 방법, 장치 및 단말장치
WO2018098968A1 (fr) Procédé de photographie, appareil, et dispositif terminal
CN108600610A (zh) 拍摄辅助方法和装置
CN111771372A (zh) 相机拍摄参数的确定方法和装置
CN107277368A (zh) 一种用于智能设备的拍摄方法及拍摄装置
WO2019213820A1 (fr) Procédé de commande de photographie et dispositif électronique
CN110913120B (zh) 图像拍摄方法及装置、电子设备、存储介质
CN106412417A (zh) 拍摄图像的方法及装置
US11961216B2 (en) Photography session assistant
CN113079311B (zh) 图像获取方法及装置、电子设备、存储介质
CN105530439B (zh) 用于抓拍照片的方法、装置及终端
WO2018232669A1 (fr) Procédé de commande d'appareil photo de terminal mobile, et terminal mobile
WO2021237744A1 (fr) Procédé et appareil de prise de vue photographique

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18917606

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18917606

Country of ref document: EP

Kind code of ref document: A1