WO2019218111A1 - Electronic device and photographing control method

Info

Publication number
WO2019218111A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
framing
framing information
subject
Prior art date
Application number
PCT/CN2018/086692
Other languages
French (fr)
Chinese (zh)
Inventor
王星泽
Original Assignee
合刃科技(武汉)有限公司
Priority date
Filing date
Publication date
Application filed by 合刃科技(武汉)有限公司
Priority to PCT/CN2018/086692
Priority to CN201880069652.7A
Publication of WO2019218111A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Definitions

  • the present application relates to the field of electronic devices, and in particular, to an electronic device having a photographing function and a photographing control method for the electronic device.
  • the subject may look good during framing, but at the moment the photo is taken the eyes may be closed or the smile stiff, so the final photograph is often unsatisfactory.
  • a baby's cute expression is often fleeting, and it is difficult for the user to capture a satisfactory photo in time by operating the shutter button or tapping a shooting icon.
  • the application provides an electronic device and a shooting control method that can perform a shooting operation in time according to the framing information and capture a fleeting moment.
  • a photographing control method comprising: when framing a shot, acquiring framing information of the subject with a framing information acquiring unit; receiving the framing information of the subject acquired by the framing information acquiring unit and analyzing it to obtain an analysis result; and, when the analysis result indicates that the shooting condition is satisfied, controlling a shooting operation to be performed, in which at least the first image processor performs imaging processing to obtain a corresponding photo or video.
  • an electronic device including a view information acquisition unit, a first image processor, and a processor.
  • the framing information acquisition unit is configured to acquire framing information of the subject when the framing is taken.
  • the first image processor is configured to perform an imaging process.
  • the processor is configured to receive the framing information of the subject acquired by the framing information acquiring unit, analyze it to obtain an analysis result, and, when the analysis result indicates that the shooting condition is satisfied, control a shooting operation to be performed, in which at least the first image processor performs imaging processing to obtain a corresponding photo or video.
  • a computer readable storage medium storing program instructions which, when executed by a computer, perform a shooting control method comprising: when framing a shot, acquiring framing information of the subject with a framing information acquiring unit; receiving the framing information of the subject acquired by the framing information acquiring unit and analyzing it to obtain an analysis result; and, when the analysis result indicates that the shooting condition is satisfied, controlling a shooting operation to be performed, in which at least the first image processor performs imaging processing to obtain a corresponding photo or video.
  • the photographing control method and the electronic device of the present application determine, from the framing information of the subject acquired by the framing information acquiring unit, whether the shooting condition is currently satisfied, so the timing of the shot can be decided automatically and the fleeting moment captured in time.
  • only when the shooting condition is met is imaging processing performed by the first image processor to obtain the corresponding photo or video, which avoids wasting processing resources and further saves energy.
  • FIG. 1 is a block diagram showing a schematic partial configuration of an electronic device in a first embodiment of the present application.
  • FIG. 2 is a block diagram showing a schematic partial structure of an electronic device in a second embodiment of the present application.
  • FIG. 3 is a block diagram showing a schematic partial structure of an electronic device in a third embodiment of the present application.
  • FIG. 4 is a block diagram showing a schematic partial structure of an electronic device in a fourth embodiment of the present application.
  • FIG. 5 is a flowchart of a photographing control method in the first embodiment of the present application.
  • FIG. 6 is a flowchart of a photographing control method in a second embodiment of the present application.
  • FIG. 7 is a flowchart of a photographing control method in a third embodiment of the present application.
  • FIG. 8 is a flowchart of a photographing control method in a fourth embodiment of the present application.
  • FIG. 9 is a flowchart of a photographing control method in a fifth embodiment of the present application.
  • FIG. 10 is a flowchart of a photographing control method in a sixth embodiment of the present application.
  • FIG. 11 is a flowchart of a photographing control method in a seventh embodiment of the present application.
  • FIG. 12 is a flowchart of a photographing control method in an eighth embodiment of the present application.
  • FIG. 13 is a flowchart of a photographing control method in a ninth embodiment of the present application.
  • FIG. 1 is a block diagram showing a partial structure of an electronic device 100 in a first embodiment of the present application.
  • the electronic device 100 includes a framing information acquiring unit 10 , an image processor 20 , and a processor 30 .
  • the finder information acquiring unit 10 is configured to acquire framing information of the subject when the framing is taken.
  • the image processor 20 is used to perform an imaging process.
  • the processor 30 is configured to receive the framing information of the subject acquired by the framing information acquiring unit 10, analyze it to obtain an analysis result, and, when the analysis result indicates that the imaging condition is met, control a photographing operation to be performed, in which at least the image processor 20 performs imaging processing to obtain a corresponding photograph or video.
  • from the framing information of the subject acquired by the framing information acquiring unit 10, the processor can determine whether the shooting condition is currently satisfied, automatically decide the timing of the shot, and capture the fleeting moment in time.
  • the framing information acquiring unit 10, which is independent of the image processor 20, performs the framing information acquisition; only once the shooting condition is determined to be satisfied does the image processor 20 perform imaging processing to obtain the corresponding photo or video, which avoids wasting processing resources and further saves energy.
  • the photographic framing is performed in response to a framing operation.
  • the processor 30 controls to perform shooting framing when receiving the shooting framing operation, and starts the framing information acquiring unit 10, so that the framing information acquiring unit 10 acquires framing information of the captured object. Therefore, the framing information acquiring unit 10 can be in a closed state at ordinary times to reduce power consumption of the power source.
  • the shooting framing operation is a click operation on a photo application icon.
  • the shooting framing operation is a specific operation of a physical button of the electronic device.
  • for example, the electronic device includes a volume-up button and a volume-down button, and the shooting framing operation is a simultaneous press of the volume-up and volume-down buttons.
  • alternatively, the shooting framing operation is an operation of pressing the volume-up button and the volume-down button in sequence within a preset time (for example, 2 seconds).
  • the shooting framing operation may also be a preset touch gesture input on any display interface of the electronic device; for example, on the main interface of the electronic device, the user may input a touch gesture with a ring-shaped trajectory to turn on the photographing function and perform framing.
  • the shooting framing operation may also be an operation of a preset touch gesture input on the touch screen when the electronic device is in a black screen state.
  • the shooting framing operation is an operation of pressing a shutter button/power button of the camera to trigger the camera to be in an activated state.
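  • As a hypothetical illustration (the event representation and function name below are assumptions, not part of the patent), the "sequential press within a preset time" trigger described above can be sketched as:

```python
PRESET_WINDOW = 2.0  # seconds; the 2-second example window from the text

def is_framing_trigger(events, window=PRESET_WINDOW):
    """Return True if a volume-up press is followed by a volume-down
    press within the preset time window.

    `events` is a chronological list of (timestamp, key_name) tuples;
    this representation is an illustrative assumption."""
    for i, (t_up, key) in enumerate(events):
        if key != "volume_up":
            continue
        for t_down, key2 in events[i + 1:]:
            if key2 == "volume_down" and t_down - t_up <= window:
                return True
    return False
```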
  • the processor 30 analyzes the framing information of the subject to obtain an analysis result, which specifically includes: analyzing the framing information of the subject with a preset model to obtain the analysis result.
  • with the preset model, the analysis result can be obtained quickly, and whether the shooting condition is met can then be determined promptly from it.
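  • The overall flow — acquire framing information, analyze it with the preset model, and shoot only when the condition is met — can be sketched as follows; all class interfaces and the 0.9 threshold are illustrative assumptions, not defined by the patent:

```python
def capture_loop(framing_unit, preset_model, image_processor, threshold=0.9):
    """Keep the low-power framing information acquiring unit running and
    invoke the first image processor only once the analysis result
    satisfies the shooting condition (score >= threshold)."""
    while framing_unit.is_framing():
        framing_info = framing_unit.acquire()       # e.g. depth or low-res image data
        score = preset_model.analyze(framing_info)  # the "analysis result"
        if score >= threshold:                      # shooting condition satisfied
            return image_processor.capture()        # imaging -> photo or video
    return None  # framing ended without the condition being met
```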
  • FIG. 2 is a block diagram showing a schematic partial structure of the electronic device 100 in the second embodiment.
  • the finder information acquiring unit 10 includes a depth sensor 11 , and the framing information is specifically obtained by the depth sensor 11 .
  • the depth sensor 11 is used to acquire depth information of a subject when shooting a framing. That is, the depth information is the aforementioned framing information.
  • the processor analyzes the framing information of the subject with the preset model to obtain the analysis result, which includes: receiving the depth information of the subject and analyzing the depth information with the preset model to obtain the analysis result.
  • the depth information includes spatial geometric distribution information of the subject.
  • when the subject includes a face, the depth information includes the spatial geometric distribution of the face.
  • when the subject includes a person's hand, the depth information includes the spatial geometric distribution of the hand.
  • when the subject includes a scene, the depth information includes the spatial geometric distribution of the scene.
  • the preset model computes on the depth information and, when the change/distribution of the spatial geometric positions of the face is close to the reference expression to be captured, produces an analysis result whose similarity is greater than a preset threshold.
  • the processor 30 determines that the shooting condition is currently satisfied when the analysis result whose similarity is greater than the preset threshold is obtained.
  • the depth sensor 11 is independent of the image sensor 20, consumes less power, and processes faster.
  • deciding whether to take a photo from the depth information acquired by the depth sensor 11 effectively saves energy, and the fast acquisition of the framing information ensures that the analysis result is obtained in time and the shooting operation is performed promptly once the shooting conditions are met.
  • the electronic device 100 further includes a first lens 21 and a second lens 22 .
  • the first lens 21 is disposed corresponding to the image sensor 20, and the image sensor is configured to perform imaging processing by receiving light through the first lens 21.
  • the first lens 21 and the image sensor 20 may be combined into a first camera module 201.
  • the second lens 22 is disposed corresponding to the depth sensor 11, and the depth sensor 11 is configured to acquire the finder information of the object by receiving the light through the second lens 22.
  • the first lens 21 and the second lens 22 can form a pseudo-camera structure on the appearance of the electronic device 100.
  • the first lens 21 and the second lens 22 may be disposed on the front surface of the electronic device 100 as a front imaging lens, or may be disposed on the back surface of the electronic device 100 as a rear imaging lens.
  • the shooting operation in the present application may be a front shooting operation or a rear shooting operation.
  • FIG. 3 is a block diagram showing a schematic partial structure of the electronic device 100 in the third embodiment.
  • the difference between the third embodiment and the second embodiment is that the second lens 22 can be omitted: the image sensor 20 and the depth sensor 11 are both disposed corresponding to the first lens 21 and share it to receive light.
  • the depth sensor 11 is disposed side by side with the image sensor 20, both facing the first lens 21, and the area of the depth sensor 11 facing the first lens 21 is smaller than that of the image sensor 20, thereby minimizing the impact on the imaging of the image sensor 20.
  • for example, the projected area of the depth sensor 11 on the first lens 21 may be 1/3 or 1/4 of the projected area of the image sensor 20 on the first lens 21; a small amount of incoming light is enough to obtain the framing information.
  • alternatively, the image sensor 20 is disposed opposite the first lens 21 with a half mirror set obliquely between them, and the depth sensor 11 is placed on the light path of the light reflected by the half mirror so that it, too, receives light from the first lens 21.
  • the first lens 21 can be disposed on the front or back of the electronic device 100.
  • the depth sensor may also require no lens at all; that is, the first lens 21 is disposed corresponding only to the image sensor 20, and the depth sensor collects depth information by itself.
  • the foregoing depth sensor 11 may include at least one of a Stereo System depth sensor, a Structured Light depth sensor, and a Time Of Flight (TOF) depth sensor.
  • the Stereo System depth sensor captures the object with dual cameras and then calculates depth information, such as object distance, by triangulation.
  • the Structured Light depth sensor projects a specific light pattern onto the surface of the object and then derives depth information, such as object distance, from the pattern captured by a camera.
  • the Time Of Flight depth sensor continuously emits light pulses toward the target, receives the light returned from the object with a sensor, and obtains the target distance by measuring the round-trip flight time of the pulses, that is, from the change in the optical signal caused by the object.
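  • The time-of-flight principle above reduces to distance = (speed of light × round-trip time) / 2. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds):
    """Target distance from the measured round-trip time of a light
    pulse: the pulse covers the distance twice, hence the division by 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

A 2-nanosecond round trip thus corresponds to a target roughly 0.3 m away.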
  • the depth sensor 11 may comprise two of the above three depth sensors; for example, a binocular (stereo) ranging depth sensor plus a structured-light depth sensor, that is, structured-light technology added on top of binocular ranging to combine the advantages of the two.
  • FIG. 4 is a block diagram showing a schematic partial structure of the electronic device 100 in the fourth embodiment.
  • the finder information acquiring unit 10 includes an image sensor 12.
  • the image sensor 12 is configured to acquire image information of a subject in response to a shooting framing operation. That is, in the fourth embodiment, the image information is the aforementioned framing information.
  • the processor 30 is configured to receive the image information, and analyze the image information by using a preset model to obtain the analysis result.
  • the electronic device 100 also includes a first lens 21 and a second lens 22.
  • the first lens 21 is disposed corresponding to the image sensor 20, and the image sensor is configured to perform imaging processing by receiving light through the first lens 21.
  • the first lens 21 and the image sensor 20 may be combined into a first camera module 201.
  • the second lens 22 is disposed corresponding to the image sensor 12, and the image sensor 12 is configured to acquire image information of a subject by receiving light through the second lens 22.
  • the second lens 22 and the image sensor 12 can be combined into a second camera module 202.
  • the first camera module 201 and the second camera module 202 may constitute a dual camera structure.
  • when the processor 30 determines from the analysis result that the shooting condition is met, it controls the image processor 20 and the image sensor 12 to perform imaging processing simultaneously to obtain the corresponding photo or video.
  • the image sensor 12 also participates in the imaging process of the final photo or video, which can effectively improve the shooting quality.
  • the image sensor 12 is only used to acquire image information for the preset model analysis to derive the analysis result, and does not participate in the imaging process.
  • the resolution of the second camera module 202 (the combination of the second lens 22 and the image sensor 12) may be significantly lower than that of the first camera module 201 (the combination of the first lens 21 and the image sensor 20).
  • for example, the resolution of the second camera module 202 is 300,000 pixels (0.3 MP), while the resolution of the first camera module 201 is 20,000,000 pixels (20 MP).
  • the imaging resolution of the image sensor 12 for acquiring the framing information is also significantly smaller than the imaging resolution of the image sensor 20.
  • when the image information is acquired by the second camera module 202, it only needs to be good enough to support the analysis, so the resolution of the second camera module 202 and the imaging resolution of the image sensor 12 can be set lower, saving processing resources and reducing power consumption.
  • the second lens 22 can also be omitted: the image sensor 20 and the image sensor 12 are both disposed corresponding to the first lens 21 and share it to receive light.
  • the image sensor 12 is disposed side by side with the image sensor 20, both facing the first lens 21, and the area of the image sensor 12 facing the first lens 21 is smaller than that of the image sensor 20, thereby minimizing the impact on the imaging of the image sensor 20.
  • for example, the projected area of the image sensor 12 on the first lens 21 may be 1/3 or 1/4 of the projected area of the image sensor 20 on the first lens 21; a small amount of incoming light is enough to obtain the framing information.
  • the processor 30 can also be used to perform the following operations.
  • the electronic device 100 on which the following operations are based may be the electronic device 100 described in any of the aforementioned FIGS. 1 to 4.
  • the preset model is a neural network model
  • the processor 30 analyzes the framing information of the subject with the preset model to obtain the analysis result, which includes: the processor 30 analyzes the framing information of the subject through a neural network model to obtain a satisfaction score as the analysis result.
  • accordingly, the processor 30 determines that the shooting condition is currently satisfied when the satisfaction score exceeds a preset satisfaction threshold.
  • the preset model is an image processing algorithm model
  • the processor 30 analyzes the framing information of the subject with the preset model to obtain the analysis result, which includes: the processor 30 uses the image processing algorithm model to compare the framing information of the subject with a reference picture and obtains, as the analysis result, the similarity between the framing information of the subject and the reference picture.
  • the processor 30 determines that the shooting condition is currently satisfied when the similarity exceeds a preset similarity threshold.
  • the similarity preset threshold may be 80%, 90%, and the like.
  • the image processing algorithm model includes an expression feature model.
  • specifically, the processor 30 analyzes the facial expression information in the framing information of the subject using face recognition technology to generate a corresponding expression feature vector, and then computes the similarity between the framing information and the reference picture from the expression feature model and that expression feature vector.
  • the expression feature model may include expression feature vectors corresponding to a plurality of reference pictures; a similarity is computed between the expression feature vector of the framing information and the expression feature vector of each reference picture.
  • when the similarity with any of the reference pictures exceeds the preset similarity threshold, the processor 30 determines that the shooting condition is currently satisfied.
  • the reference picture corresponding to the expression feature model may be a standard picture including a specific expression, such as a laughing face, a sad face, an angry face, a yawn, and the like.
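  • One plausible way to realize the feature-vector comparison described above is cosine similarity against each reference picture's vector; the patent does not fix the metric, so the following is only a sketch (the 0.80 threshold is the 80% example from the text):

```python
import math

SIMILARITY_THRESHOLD = 0.80  # the 80% example threshold

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (e.g. expression
    feature vectors); 1.0 means identical direction, 0.0 orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def shooting_condition_met(framing_vec, reference_vecs, threshold=SIMILARITY_THRESHOLD):
    """True if the framing vector is similar enough to ANY reference
    picture's vector, mirroring the 'any of the reference pictures' rule."""
    return any(cosine_similarity(framing_vec, r) >= threshold for r in reference_vecs)
```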
  • the image processing algorithm model includes a gesture feature model
  • specifically, the processor 30 analyzes the gesture information in the framing information of the subject using image recognition technology to generate a corresponding gesture feature vector, and then computes the similarity between the framing information of the subject and the reference picture from the gesture feature model and that gesture feature vector.
  • the gesture feature model may include gesture feature vectors corresponding to a plurality of reference pictures; a similarity is computed between the gesture feature vector of the framing information and the gesture feature vector of each reference picture.
  • when the similarity with any of the reference pictures exceeds the preset similarity threshold, the processor 30 determines that the shooting condition is currently satisfied.
  • the reference picture corresponding to the gesture feature model may be a standard picture including a specific gesture, such as an “OK” gesture, a “thumbs-up” gesture, a “V-sign” gesture, and the like.
  • the image processing algorithm model includes a scene feature model.
  • specifically, the processor 30 analyzes the scene information in the framing information of the subject using image recognition technology to generate a corresponding scene feature vector, and then computes the similarity between the framing information of the subject and the reference picture from the scene feature model and that scene feature vector.
  • the scene feature model may include scene feature vectors corresponding to a plurality of reference pictures; a similarity is computed between the scene feature vector of the framing information and the scene feature vector of each reference picture.
  • when the similarity between the framing information and any of the reference pictures exceeds the preset similarity threshold, that is, when the similarity computed from the scene feature vector of the framing information and the scene feature vector of some reference picture exceeds the threshold, the processor 30 determines that the shooting condition is currently satisfied.
  • the reference picture corresponding to the scene feature model may be a standard picture including scenes such as “waves”, “mountains”, and “flowers”.
  • the feature vectors above, such as the expression feature vector, may be three-dimensional vectors or two-dimensional vectors.
  • the processor 30 controls a photographing operation to be performed, including: controlling a continuous (burst) shooting operation to be performed to obtain a plurality of photos of the current subject.
  • in this case, the processor 30 is further configured to analyze the plurality of photos acquired by the continuous shooting operation to determine the optimal photo, retain the optimal photo, and delete the other photos obtained by the continuous shooting operation.
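  • The patent does not specify how the optimal photo is scored; as an illustration only, a simple sharpness proxy (sum of squared differences between neighbouring pixels) can rank a burst and keep the best shot:

```python
def sharpness(image):
    """Crude sharpness score for a 2-D grayscale image given as a list
    of rows: sum of squared differences between horizontally adjacent
    pixels. A stand-in metric, not the patent's (unspecified) criterion."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in image for i in range(len(row) - 1))

def keep_best(photos):
    """Return (best_photo, discarded_photos), mimicking 'retain the
    optimal photo and delete the others' after a continuous-shooting burst."""
    best = max(photos, key=sharpness)
    return best, [p for p in photos if p is not best]
```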
  • the processor 30 controls to perform a photographing operation, including: controlling to perform a video capture operation to obtain a currently captured video.
  • in this case, the processor 30 is further configured to compare a plurality of video picture frames in the captured video to determine the optimal frame, and to extract the optimal frame and save it as a photo.
  • the electronic device 100 further includes a memory 40; retaining the optimal photo or extracting the optimal frame to save as a photo means controlling that photo or frame to be saved into a preset album in the memory 40.
  • the processor 30 is further configured to acquire satisfaction feedback from the user on the photo or video obtained by the shooting operation and to output the satisfaction feedback information to the preset model, so that the preset model performs optimization training using the satisfaction feedback information.
  • the preset model described above may be either a trained model or an untrained model: with the satisfaction feedback information, a trained model can be further optimized, and an untrained model can be trained more effectively.
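  • The patent leaves the form of this optimization open; one deliberately simple illustration is nudging the shooting-condition threshold with each piece of satisfaction feedback (a real preset model could instead be, say, a neural network fine-tuned on the feedback — the step size and bounds below are assumptions):

```python
def update_threshold(threshold, satisfied, step=0.02, lo=0.5, hi=0.99):
    """Lower the shooting-condition threshold slightly when the user is
    satisfied (shoot more readily) and raise it when dissatisfied,
    clamped to [lo, hi]. Illustrative only."""
    threshold += -step if satisfied else step
    return min(hi, max(lo, threshold))
```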
  • program instructions are stored in the memory 40, and the processor 30 performs the aforementioned functions/operations for invoking program instructions stored in the memory 40.
  • program instructions are solidified in the processor 30, and the processor 30 performs the aforementioned functions/operations for calling its own program instructions.
  • the processor 30 can be a microcontroller, a microprocessor, a single chip, a digital signal processor, or the like.
  • the memory 40 can be any storage device that can store information such as a memory card, a solid state memory, a micro hard disk, an optical disk, or the like.
  • the electronic device 100 can be a portable electronic device having a camera, such as a mobile phone, a tablet computer, or a notebook computer, or a camera device such as a still camera or a video camera.
  • the electronic device 100 may further include an input/output unit 50 for receiving user input and providing output signals such as display signals and sound signals.
  • the input/output unit 50 can include a touch screen that provides both touch input and display output.
  • the input/output unit 50 may further include physical buttons such as a power button and volume buttons, and a sound output device such as a speaker.
  • the program instructions stored in the memory 40 are further executed by the processor 30 to perform the shooting control method described in any of the following embodiments of FIG. 5 to FIG. 13.
  • the operations performed by the processor 30 may further refer to the description of the shooting control method in any of the following embodiments of FIG. 5 to FIG. 13.
  • FIG. 5 is a flowchart of a shooting control method in the first embodiment of the present application.
  • the photographing control method is applied to the electronic device shown in any of the foregoing embodiments.
  • the method includes the following steps:
  • S51: When framing a shot, the framing information acquiring unit acquires the framing information of the subject.
  • the shooting framing is performed in response to the shooting framing operation
  • the step S51 may specifically include: controlling shooting framing to be performed in response to the shooting framing operation, and starting the framing information acquiring unit to acquire the framing information of the subject. Thus, the framing information acquiring unit is started only when shooting and framing, and can remain off at other times to reduce energy consumption.
  • the operation of capturing the framing is a click operation on the photo application icon.
  • the shooting framing operation is a specific operation on a physical button of the electronic device; for example, the electronic device includes a volume-up button and a volume-down button, and the shooting framing operation is a simultaneous press of the two.
  • alternatively, the shooting framing operation is an operation of pressing the volume-up button and the volume-down button in sequence within a preset time (for example, 2 seconds).
  • the shooting framing operation may also be a preset touch gesture input on any display interface of the electronic device; for example, on the main interface of the electronic device, the user may input a touch gesture with a ring-shaped trajectory to turn on the photographing function and start shooting framing.
  • the shooting framing operation may also be an operation of a preset touch gesture input on the touch screen when the electronic device is in a black screen state.
  • the shooting framing operation is an operation of pressing a shutter button/power button of the camera to trigger the camera to be in an activated state.
  • S53: Receive the framing information of the subject acquired by the framing information acquiring unit, and analyze the framing information of the subject to obtain an analysis result.
  • the analyzing the framing information of the subject to obtain an analysis result comprises: analyzing the framing information of the subject by using a preset model to obtain an analysis result.
  • the controlling to perform a photographing operation comprises: controlling to perform a photographing operation, and at least performing an imaging process by the first image processor of the electronic device to obtain a photograph corresponding to the subject currently being photographed.
  • the controlling to perform a photographing operation includes: controlling to perform a continuous shooting operation, and performing imaging processing by at least a first image processor of the electronic device to obtain a plurality of photographs of a current subject.
  • the photographing control method further comprises: analyzing the plurality of photos acquired by the continuous shooting operation to determine an optimal photo; retaining the optimal photo; and deleting the other photos obtained by the continuous shooting operation.
  • controlling to perform a photographing operation includes: controlling to perform a video photographing operation, and at least performing an imaging process by the first image processor of the electronic device to obtain a video of a current subject.
  • the photographing control method further comprises: comparing a plurality of video picture frames in the captured video to determine an optimal picture frame; and extracting the optimal picture frame and saving it as a photo.
  • the framing information acquiring unit includes a depth sensor, and the framing information is specifically depth information obtained by the depth sensor.
  • the method includes the following steps:
  • the depth information includes spatial geometric distribution information of the subject
  • S63 Receive depth information of the object acquired by the depth sensor, and analyze the depth information of the object by using a preset model to obtain an analysis result.
  • the step S63 may include: performing calculation on the depth information by using the preset model, and, when it is determined that the change in the spatial geometric position of the face is close to the reference expression to be captured, obtaining an analysis result in which the similarity is greater than a preset threshold.
  • the determining that the shooting condition is satisfied according to the analysis result may include: determining that the shooting condition is currently satisfied when the analysis result whose similarity is greater than the preset threshold is obtained.
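The depth-based decision of steps S63/S65 can be sketched as below. The patent does not specify a similarity measure for the spatial geometric distribution; this sketch assumes flattened depth maps of equal length and a simple normalized-difference similarity, with the threshold as an illustrative value:

```python
def depth_similarity(depth, reference):
    """Similarity in [0, 1] between two equal-length depth maps
    (1.0 means identical spatial geometry)."""
    assert len(depth) == len(reference)
    scale = max(max(depth), max(reference)) or 1.0
    mean_diff = sum(abs(a - b) for a, b in zip(depth, reference)) / (len(depth) * scale)
    return 1.0 - mean_diff

def shooting_condition_met(depth, reference_expression, threshold=0.8):
    # Trigger when the face's spatial geometry is close enough to the
    # reference expression to be captured (threshold is an assumed value).
    return depth_similarity(depth, reference_expression) > threshold
```

In a real pipeline the depth maps would come from the depth sensor at framing time, and the reference expression's depth distribution would be prepared in advance.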
  • the depth sensor is independent of the image sensor 20; its power consumption is small and its processing speed is faster. Determining whether to take a picture from the depth information acquired by the depth sensor can effectively save energy, and the framing information can be acquired quickly, ensuring that the analysis result is obtained in a timely manner and the shooting operation is performed in time once the shooting conditions are determined to be met.
  • the steps S61 to S65 correspond to the steps S51 to S55 in FIG. 5 respectively, being the same steps or in a generic-specific relationship; the specific descriptions may be referred to each other.
  • FIG. 7 is a flowchart of a shooting control method in a third embodiment of the present application.
  • the framing information acquiring unit includes a second image sensor, and the framing information is specifically image information obtained by the second image sensor.
  • the method includes the following steps:
  • S73 Receive image information of the subject acquired by the second image sensor, and analyze the image information of the subject by using a preset model to obtain an analysis result.
  • the controlling to perform a photographing operation to obtain a corresponding photo or video through imaging processing performed at least by the first image processor of the electronic device includes: controlling imaging processing to be performed simultaneously through the first image processor and the second image sensor to obtain the corresponding photo or video.
  • imaging processing performed simultaneously in this way achieves the effect of dual-camera imaging and improves imaging quality.
  • the controlling to perform a photographing operation to obtain a corresponding photo or video through imaging processing performed at least by the first image processor of the electronic device may also include: performing imaging processing only through the first image processor of the electronic device to obtain the corresponding photo or video.
  • an imaging resolution of the second image sensor used for acquiring the framing information may also be significantly lower than the imaging resolution of the first image sensor; the image information acquired for framing only needs to be accurate enough for the analysis result, so setting this imaging resolution lower saves processing resources and reduces energy consumption.
  • the step S71 corresponds to the step S51 in FIG. 5, and steps S73 and S75 respectively correspond to steps S53 and S55 in FIG. 5, being the same steps or in a generic-specific relationship; the specific descriptions may be referred to each other.
  • FIG. 8 is a flowchart of a shooting control method in a fourth embodiment of the present application.
  • the preset model is specifically a neural network model
  • the shooting control method includes the following steps:
  • the framing information acquiring unit acquires the framing information of the subject.
  • the framing information acquiring unit may include a depth sensor or an image sensor, and the framing information may be depth information or image information.
  • S83 Receive the framing information of the subject acquired by the framing information acquiring unit, analyze the framing information of the subject by using a neural network model, and obtain a satisfaction degree as the analysis result.
  • the satisfaction degree is the analysis result directly output by the neural network model, which takes all the pixels of the framing information of the subject as input and processes them according to the current training result of the neural network model.
  • the satisfaction threshold may be 80%, 90%, and the like.
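The patent does not disclose the network's architecture or weights. As a deliberately minimal stand-in for the satisfaction analysis, a single sigmoid unit over all pixel values illustrates the input/output contract (pixels in, satisfaction in [0, 1] out, compared against the preset threshold):

```python
import math

def satisfaction(pixels, weights, bias=0.0):
    """Toy one-unit 'network': weighted sum of all pixels through a sigmoid.
    The weights here are illustrative, not a trained model."""
    z = sum(p * w for p, w in zip(pixels, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def should_shoot(pixels, weights, threshold=0.8):
    # threshold mirrors the 80% example above
    return satisfaction(pixels, weights) >= threshold
```

A production model would be a trained convolutional network; only the thresholded-satisfaction contract is the point here.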
  • the steps S81 to S85 correspond to the steps S51 to S55 in FIG. 5, respectively, being the same steps or in a generic-specific relationship; the specific descriptions may be referred to each other.
  • FIG. 9 is a flowchart of a shooting control method in a fifth embodiment of the present application.
  • the preset model is specifically an image processing algorithm model
  • the shooting control method includes the following steps:
  • the framing information acquisition unit acquires the framing information of the subject.
  • the framing information acquiring unit may include a depth sensor or an image sensor, and the framing information may be depth information or image information.
  • S93 Receive the framing information of the subject acquired by the framing information acquiring unit, compare the framing information of the subject with a reference picture by using an image processing algorithm model, and obtain an analysis result of the similarity between the framing information of the subject and the reference picture.
  • the image processing algorithm model includes an expression feature model
  • the image processing algorithm model comparing the framing information of the subject with the reference picture to analyze the similarity between them comprises: analyzing the facial expression information in the framing information of the subject by using a face recognition technology to generate a corresponding expression feature vector; and calculating the similarity between the framing information of the subject and the reference picture according to the expression feature model and the expression feature vector corresponding to the framing information of the subject.
  • the expression feature model may include expression feature vectors corresponding to a plurality of reference pictures, and a plurality of similarities are determined by comparing the expression feature vector corresponding to the framing information with the expression feature vector of each reference picture.
  • determining that the shooting condition is currently satisfied comprises: when the similarity determined from the expression feature vector corresponding to the framing information and the expression feature vector of any of the reference pictures exceeds the similarity preset threshold, determining that the shooting condition is currently satisfied.
  • the reference pictures corresponding to the expression feature model may be standard pictures each including a specific facial expression such as laughing, sadness, anger, or yawning.
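The "similarity with any reference picture exceeds the preset threshold" rule can be sketched as follows. The patent does not fix how feature vectors are compared; cosine similarity is an assumed choice for this illustration:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors of equal length."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def shooting_condition_met(framing_vector, reference_vectors, threshold=0.8):
    """Satisfied when the framing feature vector is close enough to the
    feature vector of ANY reference picture."""
    return any(cosine_similarity(framing_vector, r) > threshold
               for r in reference_vectors)
```

The same comparison applies unchanged to the gesture and scene feature models; only the way the feature vector is extracted from the framing information differs.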
  • the image processing algorithm model includes a gesture feature model
  • the image processing algorithm model comparing the framing information of the subject with the reference picture to analyze the similarity between them includes: analyzing the gesture information in the framing information of the subject by using image recognition technology to generate a corresponding gesture feature vector;
  • the similarity between the framing information of the subject and the reference picture is calculated according to the gesture feature model and the gesture feature vector corresponding to the framing information of the subject.
  • the gesture feature model may include gesture feature vectors corresponding to a plurality of reference pictures, and a plurality of similarities are determined by comparing the gesture feature vector corresponding to the framing information with the gesture feature vector of each of the reference pictures.
  • determining that the shooting condition is currently satisfied comprises: when the similarity determined from the gesture feature vector corresponding to the framing information and the gesture feature vector of any of the reference pictures exceeds the similarity preset threshold, determining that the shooting condition is currently satisfied.
  • the reference picture corresponding to the gesture feature model may be a standard picture including a specific gesture such as an “OK” gesture, a “vertical thumb” gesture, a “V-shaped” gesture, and the like.
  • the image processing algorithm model includes a scene feature model.
  • the processor using the image processing algorithm model to compare the framing information of the subject with the reference picture and analyze the similarity between them includes: analyzing the scene information in the framing information of the subject by using image recognition technology to generate a corresponding scene feature vector; and calculating the similarity between the framing information of the subject and the reference picture according to the scene feature model and the scene feature vector corresponding to the framing information of the subject.
  • the scene feature model may include scene feature vectors corresponding to a plurality of reference pictures, and a plurality of similarities are determined by comparing the scene feature vector corresponding to the framing information with the scene feature vector of each of the reference pictures.
  • determining that the shooting condition is currently satisfied comprises: when the similarity determined from the scene feature vector corresponding to the framing information and the scene feature vector of any of the reference pictures exceeds the similarity preset threshold, determining that the shooting condition is currently satisfied.
  • the reference picture corresponding to the scene feature model may be a standard picture including scenes such as “waves”, “mountains”, and “flowers”.
  • the feature vectors such as the expression feature vector may be, for example, three-dimensional vectors or two-dimensional vectors.
  • the steps S91 to S95 correspond to the steps S51 to S55 in FIG. 5, respectively, being the same steps or in a generic-specific relationship; the specific descriptions may be referred to each other.
  • FIG. 10 is a flowchart of a shooting control method in a sixth embodiment of the present application.
  • the photographing control method includes the following steps:
  • S101 Acquire framing information of the subject by the framing information acquiring unit when the framing is performed.
  • S103 Receive the framing information of the subject acquired by the framing information acquiring unit, and analyze the framing information of the subject by using a preset model to obtain an analysis result.
  • the preset model includes an image processing algorithm model
  • the step S103 includes: comparing the framing information of the subject with the reference picture by using an image processing algorithm model to obtain the similarity between the framing information of the subject and the reference picture.
  • the image processing algorithm model includes an expression feature model.
  • the image processing algorithm model comparing the framing information of the subject with the reference picture to obtain the similarity between them includes: analyzing the facial expression in the framing information of the subject by using a face recognition technology to generate a corresponding expression feature vector; and calculating the similarity between the framing information of the subject and the reference picture according to the expression feature model and the expression feature vector corresponding to the framing information of the subject.
  • the analysis result of satisfaction can also be obtained by a neural network model.
  • S105 When it is determined according to the analysis result that the shooting condition is satisfied, controlling to perform a continuous shooting operation to perform imaging processing by at least the first image processor of the electronic device to obtain a plurality of photos of the current subject.
  • the step S105 includes: when the similarity between the framing information of the subject and the reference picture exceeds the similarity preset threshold, determining that the shooting condition is satisfied, and controlling to perform the continuous shooting operation.
  • the step S105 may further include: when the satisfaction exceeds the satisfaction preset threshold, determining that the shooting condition is satisfied, and controlling to perform the continuous shooting operation.
  • the requirement of the shooting condition when performing the continuous shooting operation may be slightly lower than the requirement when the photographing operation is performed.
  • for example, the similarity preset threshold or the satisfaction preset threshold used when performing the continuous shooting operation may be slightly lower than that used when performing the photographing operation. For instance, the threshold for the continuous shooting operation may be 70%, lower than the threshold of 80% or higher used for the photographing operation.
  • for example, the reference picture is a picture with a laughing expression, the compared expression feature vector is an expression feature vector X2 indicating the degree to which the mouth corners rise, and the continuous shooting operation is started when it is determined that the mouth corners have risen to 70% of the level in the reference picture.
  • the user's smile typically continues for a short period afterwards and reaches its maximum level; this expression is then captured by the continuous shooting operation, so the photo with the best shooting effect can be obtained.
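The lower burst threshold relative to the single-shot threshold can be sketched as follows; 70% and 80% are the example values from the text, not values mandated by the patent:

```python
PHOTO_THRESHOLD = 0.80  # example threshold for a single photographing operation
BURST_THRESHOLD = 0.70  # slightly lower example threshold for continuous shooting

def choose_operation(similarity):
    """Map the current similarity (or satisfaction) to a shooting action."""
    if similarity >= PHOTO_THRESHOLD:
        return "photograph"
    if similarity >= BURST_THRESHOLD:
        # start bursting early so the peak expression (e.g. the widest
        # smile) falls inside the burst
        return "burst"
    return "wait"
```

Starting the burst at the lower threshold is what lets the sequence span the moment the expression peaks.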
  • the steps S101 to S105 correspond to the steps S51 to S55 in FIG. 5, to steps S81 to S85 in FIG. 8, and to steps S91 to S95 in FIG. 9, respectively, being the same steps or in a generic-specific relationship; the related descriptions may be referred to each other.
  • the shooting control method may further include the following steps:
  • S107 Analyze a plurality of photos obtained by the continuous shooting operation to determine an optimal photo.
  • the step S107 may include: analyzing the multiple photos by using a neural network model to obtain satisfaction, and determining a photo with the highest satisfaction as the best photo.
  • the step S107 may include: comparing a plurality of photos obtained by the continuous shooting with the reference image, and determining a photo with the highest similarity with the reference image as the optimal photo.
  • the shooting control method may further include:
  • S109 retain the best photo, and delete other photos obtained by the continuous shooting operation.
  • the electronic device includes a memory in which a plurality of albums are created, and retaining the optimal photo means storing the best photo in a certain album, such as the camera album. By deleting the other photos, occupying too much storage space can be effectively avoided.
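Steps S107 and S109 together (score every burst photo, retain the best, delete the rest) can be sketched as below; the `score` callable is a stand-in supplied either by the neural network model's satisfaction or by the reference-picture similarity, and is an assumption of this sketch:

```python
def keep_best_photo(photos, score):
    """Return (best_photo, photos_to_delete) for a burst of photos.
    `score` maps a photo to its satisfaction or reference similarity."""
    best = max(photos, key=score)
    to_delete = [p for p in photos if p is not best]
    return best, to_delete
```

The same selection logic applies to video picture frames in the video-shooting embodiment, with the best frame extracted and saved as a photo instead.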
  • FIG. 11 is a flowchart of a shooting control method in a seventh embodiment of the present application.
  • the photographing control method includes the following steps:
  • the framing information acquisition unit acquires the framing information of the subject.
  • S113 Receive the framing information of the subject acquired by the framing information acquiring unit, and analyze the framing information of the subject by using a preset model to obtain an analysis result.
  • the requirement of the shooting condition when performing the video shooting operation may also be slightly lower than the requirement when the photographing operation is performed.
  • for example, the similarity preset threshold or the satisfaction preset threshold used when performing the video shooting operation may be slightly lower than that used when performing the photographing operation.
  • the steps S111 to S115 respectively correspond to the steps S101 to S105 in the sixth embodiment shown in FIG. 10 .
  • the steps S111 to S115 also correspond to the steps S51 to S55 in FIG. 5 respectively, being the same steps or in a generic-specific relationship; the related descriptions may be referred to each other.
  • the steps S111 to S115 also correspond to steps S81 to S85 in FIG. 8 and respectively correspond to steps S91 to S95 in FIG. 9, and the related descriptions can be referred to each other.
  • the shooting control method may further include:
  • the step S117 may include: analyzing the plurality of video picture frames by using a neural network model to obtain satisfaction, and determining the video picture frame with the highest satisfaction as the optimal picture frame.
  • the step S117 may alternatively include: comparing a plurality of video picture frames in the video with a reference picture, and determining the video picture frame with the highest similarity to the reference picture as the optimal picture frame.
  • the shooting control method may further include:
  • the electronic device includes a memory in which a plurality of albums are created, and "extracting the best picture frame and saving it as a photo" means storing the best picture frame in an album in the form of a picture/photo, for example in the camera album.
  • FIG. 12 is a flowchart of a shooting control method in an eighth embodiment of the present application.
  • the eighth embodiment differs from the photographing control method in the first embodiment shown in FIG. 5 in that, after controlling the execution of the photographing operation, there is a subsequent step of receiving the user's satisfaction feedback information; the photographing control method includes the following steps:
  • the framing information acquisition unit acquires the framing information of the subject.
  • S123 Receive the framing information of the subject acquired by the framing information acquiring unit, and analyze the framing information of the subject by using a preset model to obtain an analysis result.
  • S127 Acquire the user's satisfaction feedback information about the automatic shooting.
  • the user may be prompted to evaluate satisfaction with the automatic photographing by generating prompt information, for example, a prompt box including "satisfactory" and "unsatisfactory" options; the satisfaction feedback information about the automatic photographing is then obtained according to the option the user selects.
  • the user's satisfaction with the automatic shooting may also be obtained by detecting the user's operations on the photo or video obtained by the automatic shooting. For example, if it is detected that the user deletes the photo or video obtained by the automatic shooting, it is determined that the user is not satisfied with the automatic shooting, and "unsatisfied" feedback information is obtained. If it is detected that the user performs a favorite-type setting operation or a sharing operation on the photo or video obtained by the automatic shooting, it is determined that the user is satisfied with the automatic shooting, and "satisfied" feedback information is obtained.
  • S129 Output the user's satisfaction feedback information about the automatic shooting to the currently used model, so that the currently used model uses the satisfaction feedback information for optimization training.
  • in this way, the training of the model can be optimized and the model continuously improved, so that automatic shooting in subsequent use becomes more accurate.
  • the currently used model may be a model whose training has been confirmed as completed, or a model that has not yet finished training. When training has been confirmed as completed, the model can be further optimized; when the model has not yet finished training, the training can be better accomplished.
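The patent leaves the optimization-training algorithm open. As one deliberately simple stand-in (not the patent's method), the user's satisfaction feedback could directly tune the trigger threshold:

```python
class FeedbackTunedTrigger:
    """Toy 'optimization training': adjust the shooting threshold from feedback.
    Initial threshold and step size are illustrative values."""

    def __init__(self, threshold=0.80, step=0.02):
        self.threshold = threshold
        self.step = step

    def feedback(self, satisfied):
        if satisfied:
            # the trigger fired at the right moment; relax slightly (bounded)
            self.threshold = max(0.50, self.threshold - self.step)
        else:
            # the shot was taken too eagerly; raise the bar (bounded)
            self.threshold = min(0.99, self.threshold + self.step)
```

A real implementation would instead feed the feedback as labels into retraining of the neural network or image processing algorithm model; the point here is only the feedback-to-parameter loop.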
  • the steps S121 to S125 respectively correspond to the steps S101 to S105 in the sixth embodiment shown in FIG. 10 .
  • the steps S121 to S125 also correspond to the steps S51 to S55 in FIG. 5 respectively, being the same steps or in a generic-specific relationship; the related descriptions may be referred to each other.
  • the steps S121 to S125 also correspond to steps S81 to S85 in FIG. 8 and respectively correspond to steps S91 to S95 in FIG. 9, and the related descriptions can be referred to each other.
  • FIG. 13 is a flowchart of a shooting control method in a ninth embodiment of the present application.
  • the preset model is an untrained model, and its training is gradually completed through the manual operations performed by the user.
  • the shooting control method includes the following steps:
  • the model saves the positive sample and establishes or updates a correspondence between the positive sample and the satisfied shooting condition to adjust the parameters of the model itself; the shooting condition may be used as a label to mark the positive sample.
  • the user manually controls the shooting to be done by pressing a shutter button or a photo icon.
  • the user manually controls the shooting to be performed by performing a specific operation on a physical button of the electronic device.
  • the electronic device includes a power button, and manual control shooting is achieved by double-clicking the power button.
  • S133 Sample the frames obtained by framing between two adjacent manual control shots at a preset time interval as negative samples, and adjust the parameters of the model according to the negative samples.
  • the model may save the negative samples, and may also establish a correspondence between the negative samples and the shooting conditions to adjust the parameters of the model itself; the negative samples are frames that do not satisfy the shooting condition, and the shooting condition may be used as a label to mark the negative samples.
  • the preset time interval may be 1 second, 2 seconds, and the like.
  • the two adjacent manual control shots may be two adjacent manual control shots during the framing shooting performed by the same camera opening.
  • the two adjacent manual control shots may also be two manual control shots performed during framing with the camera turned on at different times. For example, after the user turns on the camera and completes the first manual control shot, the camera is turned off; the next time the camera is turned on, the second manual control shot is completed, and the framing frames between the first manual control shot and the second manual control shot are saved by the currently used model at the preset time interval as negative samples.
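Step S133's collection of samples can be sketched as follows; the frame representation, timestamps, and interval value are assumptions of this illustration. Frames captured at a manual shot are positives; framing frames strictly between two adjacent manual shots, sampled at the preset interval, are negatives:

```python
def collect_samples(frames, shot_times, interval=1.0):
    """frames: list of (timestamp, frame); shot_times: manual-shot timestamps.
    Returns (positive_samples, negative_samples)."""
    shots = sorted(shot_times)
    shot_set = set(shots)
    positives = [f for t, f in frames if t in shot_set]
    negatives, last = [], None
    for t, f in frames:
        if t in shot_set or not shots or not (shots[0] < t < shots[-1]):
            continue  # negatives only between two adjacent manual shots
        if last is None or t - last >= interval:
            negatives.append(f)
            last = t
    return positives, negatives
```

The resulting labeled pairs would then be fed to the model's parameter-adjustment step described above.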
  • before the step S131, the method may further include the step of: entering the model training mode in response to a user operation of entering model training.
  • the determining that the training completion condition is reached includes determining that the training completion condition is reached in response to the user inputting the operation of exiting the model training mode.
  • the operation of entering the model training includes a selection operation of a menu option, or a specific operation on a physical button, or a specific touch gesture input on a touch screen of the electronic device.
  • controlling entry into the model training mode in response to the user's operation of entering model training includes: entering the model training mode in response to the user's selection of a menu option, a specific operation on a physical button, or a specific touch gesture input on the touch screen of the electronic device.
  • the determining that the training completion condition is reached includes: determining that the training completion condition is reached when it is determined that the number of times the user manually controls the shooting reaches the preset number of times N1.
  • the preset number of times N1 may be the number of times the system default model training needs to be performed, or may be a user-defined value.
  • the determining that the training completion condition is met includes: testing the model with the positive samples accumulated so far, determining whether the test result reaches a preset threshold, and, when the test result reaches the preset threshold, determining that the training completion condition is met.
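The test-based completion condition just described might be sketched as below (the accuracy measure and threshold are assumptions; `model` is any callable that returns True when it judges the shooting condition met):

```python
def training_complete(model, positive_samples, pass_rate=0.9):
    """Training is complete when the model triggers on a sufficient
    fraction of the accumulated positive samples."""
    if not positive_samples:
        return False
    hits = sum(1 for sample in positive_samples if model(sample))
    return hits / len(positive_samples) >= pass_rate
```

Once this check passes, the device could stop collecting training samples and rely on the model for automatic shooting.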
  • the method steps shown in Figure 13 are performed after the automatic shooting function is turned on. For example, it is performed in response to the user turning on the automatic shooting function in the setting operation in the camera's menu option.
  • the above model may be a model such as a neural network model or an image processing algorithm model.
  • the model is trained and customized through the user's own shooting operations rather than by using another person's model, so that personalization can be better achieved.
  • the models described herein may be programs such as specific algorithm functions running in processor 30, such as neural network algorithm functions, image processing algorithm functions, and the like.
  • the electronic device 100 may further include a model processor that is independent of the processor 30.
  • the model described in the present application may run in the model processor; the processor 30 may trigger the model processor to run the corresponding model as needed to generate a corresponding instruction, and the model processor outputs the result of the model to the processor 30 for use, for example to control the execution of a shooting operation.
  • the photographing control method and the electronic device 100 of the present application can automatically determine whether the photographing condition is satisfied according to the framing information of the photographed subject, perform photographing when the condition is satisfied, and capture in time the wonderful moment corresponding to the framing information of the subject currently being photographed.
  • embodiments of the present invention can be provided as a method, apparatus (device), or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • the computer program is stored/distributed in a suitable medium, provided with other hardware or as part of the hardware, or in other distributed forms, such as over the Internet or other wired or wireless telecommunication systems.

Abstract

The present application provides a photographing control method. The photographing control method comprises: during view-finding in photographing, acquiring view-finding information about a photographed object by means of a view-finding information acquisition unit; receiving the view-finding information, acquired by the view-finding information acquisition unit, about the photographed object, analyzing the view-finding information about the photographed object, so as to obtain an analysis result; and when it is determined according to the analysis result that a photographing condition is satisfied, controlling to execute a photographing operation and performing imaging processing at least by means of the first image processor, so as to obtain a corresponding photograph or video. The present application further provides an electronic device using said photographing control method. The photographing control method and the electronic device of the present application can use the view-finding information, which is acquired by the view-finding information acquisition unit and independent of the first image processor, about the photographed object to determine whether the photographing condition is satisfied currently, can automatically determine the timing for photographing, and captures wonderful moments in time, thereby avoiding waste of processing resources, and reducing power consumption.

Description

Electronic device and photographing control method

Technical Field

The present application relates to the field of electronic devices, and in particular to an electronic device having a photographing function and a photographing control method for such an electronic device.

Background Art

Nowadays, as living standards improve, taking photographs has become an indispensable everyday function. Whether on dedicated cameras or on electronic devices with camera functions such as mobile phones and tablet computers, pixel counts keep rising and photo quality keeps improving. However, current cameras, mobile phones, and similar electronic devices still generally require the user to trigger a shot by pressing a shutter button or tapping a camera icon. Because the user's operation inevitably lags, the best moment often cannot be captured in time: either a satisfactory photo is missed altogether, or the lag produces an unsatisfactory one. For example, the subject may look fine while framing, yet at the instant the photo is taken the eyes may be closed or the smile may have stiffened, so the final photograph is often disappointing. As another example, when photographing a baby, the baby's cute expressions are often fleeting, and it is difficult to capture a satisfactory photo in time by having the user operate a shutter button or camera icon.
Summary of the Invention

The present application provides an electronic device and a photographing control method that can perform a shooting operation at the right time based on framing information and capture the best moments.

In one aspect, a photographing control method is provided. The method comprises the steps of: during framing for a shot, acquiring framing information of a subject through a framing information acquisition unit; receiving the framing information of the subject acquired by the framing information acquisition unit and analyzing it to obtain an analysis result; and, when it is determined from the analysis result that a shooting condition is satisfied, controlling execution of a shooting operation in which imaging processing is performed at least by a first image processor to obtain a corresponding photograph or video.

In another aspect, an electronic device is provided. The electronic device comprises a framing information acquisition unit, a first image processor, and a processor. The framing information acquisition unit is configured to acquire framing information of a subject during framing. The first image processor is configured to perform imaging processing. The processor is configured to receive the framing information of the subject acquired by the framing information acquisition unit, analyze the framing information to obtain an analysis result, and, when determining from the analysis result that a shooting condition is satisfied, control execution of a shooting operation in which imaging processing is performed at least by the first image processor to obtain a corresponding photograph or video.

In still another aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores program instructions that, when invoked by a computer, execute a photographing control method. The method comprises: during framing for a shot, acquiring framing information of a subject through a framing information acquisition unit; receiving the framing information of the subject acquired by the framing information acquisition unit and analyzing it to obtain an analysis result; and, when it is determined from the analysis result that a shooting condition is satisfied, controlling execution of a shooting operation in which imaging processing is performed at least by the first image processor to obtain a corresponding photograph or video.

With the photographing control method and electronic device of the present application, the framing information of the subject acquired by the framing information acquisition unit can be used to determine whether the shooting condition is currently satisfied, so the moment to take a photo can be determined automatically and fleeting moments can be captured in time. Furthermore, because the framing information is acquired by a framing information acquisition unit that is independent of the first image processor, and the first image processor performs imaging processing only after the shooting condition is determined to be satisfied, wasted processing resources are avoided and power consumption is further reduced.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required by the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present application; those of ordinary skill in the art can derive other obvious variants from these drawings without creative effort.
FIG. 1 is a block diagram showing part of the structure of an electronic device according to a first embodiment of the present application.

FIG. 2 is a block diagram showing part of the structure of an electronic device according to a second embodiment of the present application.

FIG. 3 is a block diagram showing part of the structure of an electronic device according to a third embodiment of the present application.

FIG. 4 is a block diagram showing part of the structure of an electronic device according to a fourth embodiment of the present application.

FIG. 5 is a flowchart of a photographing control method according to a first embodiment of the present application.

FIG. 6 is a flowchart of a photographing control method according to a second embodiment of the present application.

FIG. 7 is a flowchart of a photographing control method according to a third embodiment of the present application.

FIG. 8 is a flowchart of a photographing control method according to a fourth embodiment of the present application.

FIG. 9 is a flowchart of a photographing control method according to a fifth embodiment of the present application.

FIG. 10 is a flowchart of a photographing control method according to a sixth embodiment of the present application.

FIG. 11 is a flowchart of a photographing control method according to a seventh embodiment of the present application.

FIG. 12 is a flowchart of a photographing control method according to an eighth embodiment of the present application.

FIG. 13 is a flowchart of a photographing control method according to a ninth embodiment of the present application.
Detailed Description of the Embodiments

The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
Referring to FIG. 1, a block diagram showing part of the structure of an electronic device 100 according to a first embodiment of the present application. As shown in FIG. 1, the electronic device 100 includes a framing information acquisition unit 10, an image processor 20, and a processor 30.

The framing information acquisition unit 10 is configured to acquire framing information of a subject during framing.

The image processor 20 is configured to perform imaging processing.

The processor 30 is configured to receive the framing information of the subject acquired by the framing information acquisition unit 10, analyze the framing information to obtain an analysis result, and, when determining from the analysis result that a shooting condition is satisfied, control execution of a shooting operation in which imaging processing is performed at least by the image processor 20 to obtain a corresponding photograph or video.

Thus, in the present application, the framing information of the subject acquired by the framing information acquisition unit 10 can be used to determine whether the shooting condition is currently satisfied, so the moment to take a photo can be determined automatically and fleeting moments can be captured in time. Moreover, because the framing information is acquired by the framing information acquisition unit 10, which is independent of the image processor 20, and the image processor 20 performs imaging processing only after the shooting condition is determined to be satisfied, wasted processing resources are avoided and power consumption is further reduced.
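The acquire-analyze-capture loop described above can be sketched as follows. This is a minimal illustration only; the class, method names, and score values are hypothetical and not part of the patent:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class CaptureController:
    """Sketch of the control flow: a low-power framing-information unit
    feeds an analyzer, and the main image processor is engaged only once
    the shooting condition is met."""
    acquire_framing_info: Callable[[], List[float]]  # framing info unit (e.g. depth sensor)
    analyze: Callable[[List[float]], float]          # preset model -> analysis score
    threshold: float                                 # shooting condition

    def step(self) -> Optional[str]:
        info = self.acquire_framing_info()   # cheap acquisition; image processor stays idle
        score = self.analyze(info)
        if score >= self.threshold:          # shooting condition satisfied
            return self.capture()            # only now engage the main image processor
        return None

    def capture(self) -> str:
        return "photo"                       # placeholder for real imaging processing

# Example: a mock framing unit whose score crosses the threshold on the third frame
frames = iter([0.2, 0.5, 0.95])
ctrl = CaptureController(
    acquire_framing_info=lambda: [next(frames)],
    analyze=lambda info: info[0],
    threshold=0.9,
)
results = [ctrl.step() for _ in range(3)]
print(results)  # [None, None, 'photo']
```

The point of the structure is that `capture()` (the expensive path) runs only after the cheap analysis path decides the moment is right, matching the power-saving argument above.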
In some embodiments, the framing is performed in response to a framing operation.

Optionally, upon receiving the framing operation, the processor 30 controls framing to begin and activates the framing information acquisition unit 10, so that the framing information acquisition unit 10 acquires the framing information of the subject. The framing information acquisition unit 10 can therefore normally remain off, reducing power consumption.

In some embodiments, the framing operation is a tap on a camera application icon.

In other embodiments, the framing operation is a specific operation on physical buttons of the electronic device. For example, where the electronic device includes a volume-up button and a volume-down button, the framing operation may be pressing the volume-up and volume-down buttons simultaneously. Alternatively, the framing operation may be pressing the volume-up button and the volume-down button in succession within a preset time (for example, 2 seconds).

In other embodiments, the framing operation may be a preset touch gesture entered on any display interface of the electronic device. For example, on the home screen, the user may enter a touch gesture with a circular track to start the camera function and begin framing.

In still other embodiments, the framing operation may be a preset touch gesture entered on the touch screen while the electronic device's screen is off.

In some embodiments, where the electronic device is a camera, the framing operation is pressing the camera's shutter button or power button to switch the camera on.

In some embodiments, the processor 30 obtains the analysis result by analyzing the framing information of the subject with a preset model.

Model-based analysis yields the analysis result quickly, so whether the shooting condition is satisfied can be determined promptly from the result.
Referring to FIG. 2, a block diagram showing part of the structure of the electronic device 100 according to a second embodiment. As shown in FIG. 2, in this embodiment the framing information acquisition unit 10 includes a depth sensor 11, and the framing information is specifically the depth information acquired by the depth sensor 11.

That is, the depth sensor 11 is configured to acquire depth information of the subject during framing; this depth information is the framing information described above. The processor's analysis of the framing information with the preset model comprises: receiving the depth information of the subject and analyzing the depth information with the preset model to obtain the analysis result.

In some embodiments, the depth information includes spatial geometric distribution information of the subject.

For example, when the subject includes a human face, the depth information includes the spatial geometric distribution of the face. When the subject includes a person's hand, the depth information includes the spatial geometric distribution of the hand. When the subject includes a scene, the depth information includes the spatial geometric distribution of the scene.

In some embodiments, when the preset model analyzes the framing information obtained by the depth sensor 11, the model computes whether the change or distribution of the spatial geometry of the face is close to the reference expression to be captured, yielding an analysis result whose similarity exceeds a preset threshold when it is.

When the analysis result shows a similarity greater than the preset threshold, the processor 30 determines that the shooting condition is currently satisfied.

The depth sensor 11 is independent of the image processor 20, consumes little power, and processes data quickly. Using the depth information acquired by the depth sensor 11 to decide whether to take a photo therefore saves energy effectively, ensures that the framing information is acquired quickly so the analysis result is available in time, and allows the shooting operation to be executed promptly once the shooting condition is determined to be satisfied.
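One simple way to score how close a facial depth map is to a reference expression, as a stand-in for the preset model above, is a cosine similarity over normalized depth values. This is an illustrative sketch only (the function and the toy depth values are assumptions, not the patent's model):

```python
from math import sqrt

def depth_similarity(depth, reference):
    """Cosine similarity between two mean/std-normalized depth maps,
    given as flat lists of depth values. Returns a value in [-1, 1];
    values near 1 mean the spatial geometry closely matches."""
    def normalize(values):
        mean = sum(values) / len(values)
        std = sqrt(sum((v - mean) ** 2 for v in values) / len(values)) or 1e-9
        return [(v - mean) / std for v in values]
    a, b = normalize(depth), normalize(reference)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b + 1e-9)

reference = [1.0, 2.0, 3.0, 4.0]        # toy depth values of the reference expression
near_match = [1.01, 2.01, 3.01, 4.01]   # same geometry, slightly offset overall
mismatch = [4.0, 1.0, 2.0, 3.0]         # same values, different spatial distribution
print(depth_similarity(near_match, reference) > 0.9)   # True
print(depth_similarity(mismatch, reference) > 0.9)     # False
```

Normalizing by mean and standard deviation makes the score insensitive to the subject's overall distance from the sensor, so only the shape of the distribution matters, which mirrors the "spatial geometric distribution" framing above.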
As shown in FIG. 2, the electronic device 100 further includes a first lens 21 and a second lens 22. The first lens 21 is arranged to correspond to the image sensor 20, and the image sensor 20 performs imaging processing on light received through the first lens 21. Optionally, the first lens 21 and the image sensor 20 may be combined into a first camera module 201.

The second lens 22 is arranged to correspond to the depth sensor 11, and the depth sensor 11 acquires the framing information of the subject from light received through the second lens 22.

In appearance, the first lens 21 and the second lens 22 may give the electronic device 100 the look of a dual-camera arrangement.

The first lens 21 and the second lens 22 may be arranged on the front of the electronic device 100 as front-facing camera lenses, or on the back as rear-facing camera lenses. The shooting operations in the present application may accordingly be front-camera or rear-camera operations.
Referring to FIG. 3, a block diagram showing part of the structure of the electronic device 100 according to a third embodiment. The third embodiment differs from the second in that the second lens 22 may be omitted: both the image sensor 20 and the depth sensor 11 may be arranged to correspond to the first lens 21 and share it to receive light.

In some embodiments, the depth sensor 11 and the image sensor 20 are arranged side by side, both facing the first lens 21, with the depth sensor 11 facing a smaller area of the first lens 21 than the image sensor 20 does, so as to minimize the impact on the imaging of the image sensor 20. For example, the projected area of the depth sensor 11 on the first lens 21 may be 1/3 or 1/4 of the projected area of the image sensor 20 on the first lens 21; only a small amount of incoming light is needed to acquire the framing information.

In other embodiments, the image sensor 20 directly faces the first lens 21, a half-silvered mirror is arranged at an angle between the image sensor 20 and the first lens 21, and the depth sensor 11 is arranged on the optical path of the light reflected by the half-silvered mirror so as to receive light entering through the first lens 21.

Likewise, the first lens 21 may be arranged on the front or the back of the electronic device 100.

In still other embodiments, the depth sensor may need no lens at all: the first lens 21 is arranged to correspond only to the image sensor 20, and the depth sensor collects depth information by itself.

The depth sensor 11 described above may include at least one of a stereo (binocular ranging) depth sensor, a structured-light depth sensor, and a time-of-flight (TOF) depth sensor. A stereo depth sensor photographs an object with two cameras and computes depth information such as object distance by triangulation. A structured-light sensor projects a specific light pattern onto the object's surface and derives depth information such as object distance from the camera's capture of that pattern. A time-of-flight depth sensor continuously emits light pulses toward the target, receives the light returned from the object, and obtains the target distance from the round-trip flight time of the pulses; that is, it computes the position and depth of the object from the changes in the optical signal caused by the object, thereby reconstructing the whole three-dimensional space. Preferably, the depth sensor 11 combines two of these three types; for example, it may include a stereo depth sensor and a structured-light depth sensor, adding structured light on top of binocular ranging to combine the strengths of both.
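The time-of-flight principle above reduces to the round-trip relation d = c · Δt / 2. The following numeric illustration uses only that physical relation; no sensor API is implied:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance from a time-of-flight measurement: the light pulse travels
    to the object and back, so the one-way distance is c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of one-way distance
d = tof_distance(10e-9)
print(round(d, 3))  # 1.499
```

The nanosecond scale of the round trip is why TOF sensors measure the returned light's phase or pulse timing with dedicated hardware rather than general-purpose timers.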
Referring to FIG. 4, a block diagram showing part of the structure of the electronic device 100 according to a fourth embodiment. In the fourth embodiment, the framing information acquisition unit 10 includes an image sensor 12.

The image sensor 12 is configured to acquire image information of the subject in response to the framing operation. That is, in the fourth embodiment, the image information is the framing information described above.

The processor 30 is configured to receive the image information and analyze it with the preset model to obtain the analysis result.

As in the second embodiment, in the fourth embodiment the electronic device 100 also includes a first lens 21 and a second lens 22.

The first lens 21 is arranged to correspond to the image sensor 20, and the image sensor 20 performs imaging processing on light received through the first lens 21. Optionally, the first lens 21 and the image sensor 20 may be combined into a first camera module 201.

The second lens 22 is arranged to correspond to the image sensor 12, and the image sensor 12 acquires the image information of the subject from light received through the second lens 22. In some embodiments, the second lens 22 and the image sensor 12 may be combined into a second camera module 202.

The first camera module 201 and the second camera module 202 may form a dual-camera structure.
In some embodiments, when the processor 30 determines from the analysis result that the shooting condition is satisfied and controls the shooting operation, it controls imaging processing to be performed simultaneously by the image processor 20 and the image sensor 12 to obtain the corresponding photograph or video.

The image sensor 12 thus also participates in imaging the final photograph or video, which can effectively improve shooting quality.

Of course, in some embodiments the image sensor 12 is used only to acquire image information for the preset model's analysis and does not participate in imaging processing.

In some embodiments, the resolution of the second camera module 202 formed by the second lens 22 and the image sensor 12 may be markedly lower than that of the first camera module 201 formed by the first lens 21 and the image sensor 20. For example, the second camera module 202 may have a resolution of 0.3 megapixels while the first camera module 201 has 20 megapixels. Correspondingly, the imaging resolution of the image sensor 12 used to acquire the framing information is also markedly lower than that of the image sensor 20.

Thus, when the second camera module 202 acquires image information, the image information only needs to be good enough for the analysis result to be accurate; setting the resolution of the second camera module 202 and the imaging resolution of the image sensor 12 low saves processing resources and reduces power consumption.
In other embodiments, the second lens 22 may also be omitted: both the image sensor 20 and the image sensor 12 may be arranged to correspond to the first lens 21 and share it to receive light.

In some embodiments, the image sensor 12 and the image sensor 20 are arranged side by side, both facing the first lens 21, with the image sensor 12 facing a smaller area of the first lens 21 than the image sensor 20 does, so as to minimize the impact on the imaging of the image sensor 20. For example, the projected area of the image sensor 12 on the first lens 21 may be 1/3 or 1/4 of the projected area of the image sensor 20 on the first lens 21; only a small amount of incoming light is needed to acquire the framing information. The processor 30 may further be configured to perform the following operations, which may be based on the electronic device 100 described in any of FIGS. 1 to 4.
In some embodiments, the preset model is a neural network model. The processor 30 analyzing the framing information of the subject with the preset model to obtain the analysis result comprises: the processor analyzing the framing information of the subject with the neural network model to obtain a satisfaction score as the analysis result.

The processor 30 determining from the analysis result that the shooting condition is satisfied comprises: the processor 30 determining that the shooting condition is currently satisfied when the satisfaction score exceeds a preset satisfaction threshold.
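The decision rule above is a simple threshold on the model's output. A minimal sketch (the threshold value and the stand-in scores are assumptions; the patent does not specify them):

```python
def should_capture(satisfaction: float, threshold: float = 0.8) -> bool:
    """Shooting condition: capture when the neural network model's
    satisfaction score exceeds the preset satisfaction threshold."""
    return satisfaction > threshold

# Stand-in for the model's satisfaction scores on successive framing frames
scores = [0.35, 0.62, 0.81, 0.74]
decisions = [should_capture(s) for s in scores]
print(decisions)  # [False, False, True, False]
```

Note that the condition is evaluated per frame, so the capture fires on the first frame whose score crosses the threshold, even if later frames fall back below it.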
In other embodiments, the preset model is an image processing algorithm model. The processor 30 analyzing the framing information of the subject with the preset model to obtain the analysis result comprises: the processor 30 comparing the framing information of the subject with a reference picture using the image processing algorithm model, and deriving as the analysis result the similarity between the framing information of the subject and the reference picture.

The processor 30 determining from the analysis result that the shooting condition is currently satisfied comprises: the processor 30 determining that the shooting condition is currently satisfied when the similarity exceeds a preset similarity threshold, which may be, for example, 80% or 90%.

Optionally, the image processing algorithm model includes an expression feature model. When the subject includes a human face, the processor 30 comparing the framing information of the subject with the reference picture and deriving their similarity comprises: analyzing the facial expression information in the framing information of the subject using face recognition technology to generate a corresponding expression feature vector; and computing the similarity between the framing information and the reference picture from the expression feature model and the expression feature vector corresponding to the framing information.

In some embodiments, the expression feature model may include expression feature vectors corresponding to a plurality of reference pictures, and a plurality of similarities are obtained by comparing the expression feature vector corresponding to the framing information with the expression feature vector of each reference picture.

Further, when a plurality of reference pictures are included, the processor 30 determines that the shooting condition is currently satisfied when the similarity with any one of the reference pictures exceeds the preset similarity threshold.

The reference pictures corresponding to the expression feature model may be standard pictures of specific facial expressions such as laughing, sadness, anger, or yawning.
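The multi-reference rule above ("satisfied if similarity with any reference exceeds the threshold") can be sketched with cosine similarity over feature vectors. The 4-dimensional vectors and the 0.9 threshold here are illustrative assumptions; real expression feature vectors would come from a face recognition pipeline:

```python
def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v + 1e-9)

def matches_any_reference(feature, reference_vectors, threshold=0.9):
    """Shooting condition with multiple reference pictures: satisfied as
    soon as the framing feature vector is similar enough to ANY reference."""
    return any(cosine(feature, ref) > threshold for ref in reference_vectors)

# Hypothetical expression feature vectors for two reference pictures
references = [
    [0.9, 0.1, 0.0, 0.4],   # e.g. "laughing" reference
    [0.1, 0.8, 0.5, 0.0],   # e.g. "yawning" reference
]
frame_vector = [0.88, 0.12, 0.05, 0.38]   # close to the first reference
print(matches_any_reference(frame_vector, references))  # True
```

Using `any` means the check can short-circuit on the first matching reference, which matters when the model holds many reference expressions and the check runs on every framing frame.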
Optionally, the image processing algorithm model includes a gesture feature model. When the subject includes a human hand, the processor 30 using the image processing algorithm model to compare the framing information of the subject with a reference picture and obtain, as the analysis result, the similarity between the framing information of the subject and the reference picture includes: analyzing the gesture information in the framing information of the subject by using an image recognition technology to generate a corresponding gesture feature vector; and calculating the similarity between the framing information of the subject and the reference picture according to the gesture feature model and the gesture feature vector corresponding to the framing information of the subject.

In some embodiments, the gesture feature model may include gesture feature vectors corresponding to a plurality of reference pictures, and a plurality of similarity degrees are determined by comparing the gesture feature vector corresponding to the framing information with the gesture feature vector of each reference picture.

Further, when a plurality of reference pictures are included, the processor 30 determines that the shooting condition is currently satisfied when it determines that the similarity with any one of the reference pictures exceeds the similarity preset threshold.

The reference pictures corresponding to the gesture feature model may be standard pictures containing specific gestures, such as an "OK" gesture, a "thumbs-up" gesture, or a "V-sign" gesture.
In some embodiments, the image processing algorithm model includes a scene feature model. When the subject includes a scene, the processor using the image processing algorithm model to compare the framing information of the subject with a reference picture and obtain, as the analysis result, the similarity between the framing information of the subject and the reference picture includes: analyzing the scene information in the framing information of the subject by using an image recognition technology to generate a corresponding scene feature vector; and calculating the similarity between the framing information of the subject and the reference picture according to the scene feature model and the scene feature vector corresponding to the framing information of the subject.

In some embodiments, the scene feature model may include scene feature vectors corresponding to a plurality of reference pictures, and a plurality of similarity degrees are determined by comparing the scene feature vector corresponding to the framing information with the scene feature vector of each reference picture.

Further, when a plurality of reference pictures are included, the processor 30 determines that the shooting condition is currently satisfied when it determines that the similarity between the framing information and any one of the reference pictures exceeds the similarity preset threshold, that is, when the similarity determined from the scene feature vector corresponding to the framing information and the scene feature vector of any one of the reference pictures exceeds the similarity preset threshold.

The reference pictures corresponding to the scene feature model may be standard pictures containing scenes such as "sea waves", "mountain peaks", or "flowers".

When the aforementioned framing information is depth information, the corresponding feature vectors, such as the expression feature vectors, may be three-dimensional vectors; when the framing information is image information, the corresponding feature vectors may be two-dimensional vectors.
In some embodiments, the processor 30 controlling the execution of a shooting operation includes: the processor 30 controlling the execution of a continuous shooting operation to obtain a plurality of photos of the current subject.

In some embodiments, the processor 30 is further configured to analyze the plurality of photos acquired by the continuous shooting operation to determine the best photo, to retain the best photo, and to delete the other photos acquired by the continuous shooting operation.
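For illustration only, the burst-selection behavior can be sketched as follows, assuming each photo is given as a two-dimensional grayscale pixel array and that "best" is approximated by a simple sharpness proxy (variance of horizontal neighbor differences); the embodiments do not specify the actual quality criterion:

```python
def sharpness_score(pixels):
    # Variance of horizontal neighbor differences: a crude focus/sharpness proxy.
    diffs = [abs(row[i + 1] - row[i]) for row in pixels for i in range(len(row) - 1)]
    mean = sum(diffs) / len(diffs)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)

def keep_best_photo(burst):
    # Score every photo in the burst and keep the best one; in the device,
    # the remaining photos would then be deleted from storage.
    return max(burst, key=sharpness_score)

blurry = [[10, 11, 10, 11], [10, 10, 11, 11]]   # small pixel differences
sharp  = [[0, 200, 5, 190], [210, 3, 198, 8]]   # strong edges
print(keep_best_photo([blurry, sharp]) is sharp)  # → True
```

The same selection pattern applies to picking the best frame out of a captured video, with video frames in place of burst photos.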
In some embodiments, the processor 30 controlling the execution of a shooting operation includes: controlling the execution of a video shooting operation to obtain the currently captured video.

In some embodiments, the processor 30 is further configured to compare a plurality of video frames in the captured video to determine the best frame, and to extract the best frame and save it as a photo.

As shown in FIGS. 1 to 4, the electronic device 100 further includes a memory 40. The processor 30 retains the best photo, or extracts the best frame and saves it as a photo, by controlling the best photo or the frame to be saved in a preset album in the memory.
In some embodiments, the processor 30 is further configured to acquire satisfaction feedback information fed back by the user on a photo or video obtained by executing the shooting operation, and to output the satisfaction feedback information to the preset model, so that the preset model performs optimization training using the satisfaction feedback information.

The aforementioned preset model may be a model whose training has been completed or a model whose training has not yet been completed. Using the satisfaction feedback information, a model whose training has been completed can be further optimized, and a model whose training has not yet been completed can be trained more effectively.

In some embodiments, program instructions are stored in the memory 40, and the processor 30 invokes the program instructions stored in the memory 40 to perform the aforementioned functions/operations. In other embodiments, program instructions are built into the processor 30, and the processor 30 invokes its own program instructions to perform the aforementioned functions/operations.

The processor 30 may be a microcontroller, a microprocessor, a single-chip microcomputer, a digital signal processor, or the like. The memory 40 may be any storage device capable of storing information, such as a memory card, a solid-state memory, a micro hard disk, or an optical disc.

The electronic device 100 may be a portable electronic device having a camera, such as a mobile phone, a tablet computer, or a notebook computer, or may be a photographic device such as a camera or a video camera.

As shown in FIG. 1, the electronic device 100 may further include an input/output unit 50 for receiving user input and providing output such as display signals and sound signals. In some embodiments, the input/output unit 50 may include a touch screen that provides both touch input and display output. Optionally, the input/output unit 50 may further include physical keys such as a power key and volume keys, and sound output devices such as a speaker.

The program instructions stored in the memory 40 are also invoked by the processor 30 to execute the shooting control method described in any of the embodiments of FIG. 5 to FIG. 13 below. For the operations performed by the processor 30, further reference may be made to the description of the shooting control method in any of the embodiments of FIG. 5 to FIG. 13 below.
Referring to FIG. 5, a flowchart of the shooting control method in the first embodiment of the present application is shown. The shooting control method is applied to the electronic device shown in any of the foregoing embodiments. The method includes the following steps:

S51: When framing for shooting is performed, acquire the framing information of the subject through the framing information acquiring unit.

In some embodiments, the framing for shooting is performed in response to a framing operation, and step S51 may specifically include: in response to the framing operation, controlling framing for shooting to proceed and starting the framing information acquiring unit to acquire the framing information of the subject. The framing information acquiring unit can thus be started only when framing for shooting is performed and can otherwise remain off, reducing energy consumption.

In some embodiments, the framing operation is a tap on a camera application icon.

In other embodiments, the framing operation is a specific operation on a physical key of the electronic device. For example, the electronic device includes a volume-up key and a volume-down key, and the framing operation is a simultaneous press of the volume-up key and the volume-down key. Further, the framing operation may be an operation of pressing the volume-up key and then the volume-down key in succession within a preset time (for example, 2 seconds).
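For illustration only, the successive-key-press trigger can be sketched as follows, assuming key events arrive as (key name, timestamp in seconds) pairs and using the 2-second window from the example above; the key names are hypothetical:

```python
PRESET_WINDOW = 2.0  # seconds, per the example in the text

def is_framing_trigger(first_event, second_event):
    # True when the volume-up key and then the volume-down key are pressed
    # in succession within the preset time window.
    first_key, first_time = first_event
    second_key, second_time = second_event
    return (first_key == "volume_up"
            and second_key == "volume_down"
            and 0 <= second_time - first_time <= PRESET_WINDOW)

print(is_framing_trigger(("volume_up", 10.0), ("volume_down", 11.2)))  # → True
print(is_framing_trigger(("volume_up", 10.0), ("volume_down", 13.0)))  # → False
```

A real device would feed this check from its key-event handler and, on a match, start the framing information acquiring unit.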
In other embodiments, the framing operation may also be a preset touch gesture input on any display interface of the electronic device. For example, on the home screen of the electronic device, the user may input a touch gesture with a circular touch trajectory to start the photographing function and begin framing for shooting.

In still other embodiments, the framing operation may also be a preset touch gesture input on the touch screen while the electronic device is in a screen-off state.

In some embodiments, when the electronic device is a camera, the framing operation is an operation of pressing the shutter key/power key of the camera to trigger the camera into the activated state.
S53: Receive the framing information of the subject acquired by the framing information acquiring unit, and analyze the framing information of the subject to obtain an analysis result.

In some embodiments, analyzing the framing information of the subject to obtain an analysis result includes: analyzing the framing information of the subject by using a preset model to obtain the analysis result.

S55: When it is determined according to the analysis result that the shooting condition is satisfied, control the execution of a shooting operation so that imaging processing is performed at least by the first image processor of the electronic device to obtain a corresponding photo or video.

In some embodiments, controlling the execution of a shooting operation includes: controlling the execution of a photographing operation so that imaging processing is performed at least by the first image processor of the electronic device to obtain a photo corresponding to the current subject.

In some embodiments, controlling the execution of a shooting operation includes: controlling the execution of a continuous shooting operation so that imaging processing is performed at least by the first image processor of the electronic device to obtain a plurality of photos of the current subject.

In some embodiments, the shooting control method further includes: analyzing the plurality of photos acquired by the continuous shooting operation to determine the best photo; retaining the best photo; and deleting the other photos acquired by the continuous shooting operation.

In other embodiments, controlling the execution of a shooting operation includes: controlling the execution of a video shooting operation so that imaging processing is performed at least by the first image processor of the electronic device to obtain a video of the current subject.

In some embodiments, the shooting control method further includes: comparing a plurality of video frames in the captured video to determine the best frame; and extracting the best frame and saving it as a photo.
Referring to FIG. 6, a flowchart of the shooting control method in the second embodiment of the present application is shown. In the second embodiment, the framing information acquiring unit includes a depth sensor, and the framing information is specifically the depth information acquired by the depth sensor. The method includes the following steps:

S61: When framing for shooting is performed, acquire the depth information of the subject through the depth sensor.

In some embodiments, the depth information includes spatial geometric distribution information of the subject.

S63: Receive the depth information of the subject acquired by the depth sensor, and analyze the depth information of the subject by using a preset model to obtain an analysis result.

S65: When it is determined according to the analysis result that the shooting condition is satisfied, control the execution of a shooting operation so that imaging processing is performed at least by the first image processor of the electronic device to obtain a corresponding photo or video.

In some embodiments, when the subject includes a human face, step S63 may include: performing computation on the depth information by using the preset model, and obtaining an analysis result in which the similarity is greater than a preset threshold when the change in the spatial geometric position of the face is determined to approach the reference expression to be captured. Determining according to the analysis result that the shooting condition is satisfied may include: determining that the shooting condition is currently satisfied when the analysis result in which the similarity is greater than the preset threshold is obtained.

Thus, the depth sensor is independent of the image sensor 20, consumes less power, and processes faster. Deciding whether to take a photo based on the depth information acquired by the depth sensor can effectively save energy and allows the framing information to be acquired quickly, ensuring that the analysis result is obtained in time and that the shooting operation is executed promptly once the shooting condition is determined to be satisfied.
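For illustration only, the depth-based expression check of step S63 can be sketched as follows, assuming the facial depth information has been reduced to a set of three-dimensional landmark points and that closeness to the reference expression is measured by the mean Euclidean distance between corresponding landmarks; the embodiments leave the exact computation to the preset model, and the landmark values below are hypothetical:

```python
import math

def mean_landmark_distance(face_points, reference_points):
    # Mean 3-D Euclidean distance between corresponding facial landmarks.
    total = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(face_points, reference_points):
        total += math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2)
    return total / len(face_points)

def expression_close_to_reference(face_points, reference_points, max_distance=0.05):
    # The face's spatial geometry "approaches" the reference expression when
    # the mean landmark distance falls below a preset tolerance.
    return mean_landmark_distance(face_points, reference_points) < max_distance

ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.2), (0.5, 0.8, 0.1)]    # hypothetical smile landmarks
cur = [(0.01, 0.0, 0.0), (1.0, 0.01, 0.21), (0.5, 0.79, 0.1)]
print(expression_close_to_reference(cur, ref))  # → True
```

Because this check runs on a small set of 3-D points rather than full image frames, it matches the low-power, fast-processing property of the depth-sensor path described above.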
Steps S61 to S65 correspond to steps S51 to S55 in FIG. 5, respectively, as identical steps or as specific instances of those steps, and their specific descriptions may be cross-referenced.
Referring to FIG. 7, a flowchart of the shooting control method in the third embodiment of the present application is shown. In the third embodiment, the framing information acquiring unit includes a second image sensor, and the framing information is specifically the image information acquired by the second image sensor. The method includes the following steps:

S71: When framing for shooting is performed, acquire the image information of the subject through the second image sensor.

S73: Receive the image information of the subject acquired by the second image sensor, and analyze the image information of the subject by using a preset model to obtain an analysis result.

S75: When it is determined according to the analysis result that the shooting condition is satisfied, control the execution of a shooting operation so that imaging processing is performed at least by the first image processor of the electronic device to obtain a corresponding photo or video.

Optionally, in one implementation, controlling the execution of a shooting operation so that imaging processing is performed at least by the first image processor of the electronic device to obtain a corresponding photo or video includes: controlling imaging processing to be performed simultaneously by the first image processor and the second image sensor to obtain the corresponding photo or video.

Imaging processing is thus performed simultaneously through both imaging paths, achieving the effect of dual-camera imaging and improving imaging quality.

Optionally, in another implementation, controlling the execution of a shooting operation so that imaging processing is performed at least by the first image processor of the electronic device to obtain a corresponding photo or video includes: performing imaging processing only by the first image processor of the electronic device to obtain the corresponding photo or video.

Optionally, in some embodiments, the imaging resolution of the second image sensor used to acquire the framing information is also significantly lower than the imaging resolution of the image sensor. The image information it acquires only needs to be sufficient for the analysis result to be accurate, and setting this imaging resolution lower saves processing resources and reduces energy consumption.

Step S71 corresponds to step S51 in FIG. 5; for a specific description, refer to the related description of step S51 in FIG. 5. Steps S73 and S75 correspond to steps S53 and S55 in FIG. 5, respectively, as identical steps or as specific instances of those steps, and their specific descriptions may be cross-referenced.
Referring to FIG. 8, a flowchart of the shooting control method in the fourth embodiment of the present application is shown. In the fourth embodiment, the preset model is specifically a neural network model, and the shooting control method includes the following steps:

S81: When framing for shooting is performed, acquire the framing information of the subject through the framing information acquiring unit. The framing information acquiring unit may include a depth sensor or an image sensor, and the framing information may be depth information or image information.

S83: Receive the framing information of the subject acquired by the framing information acquiring unit, and analyze the framing information of the subject by using a neural network model to obtain a satisfaction degree as the analysis result.

In some embodiments, the satisfaction degree is an analysis result directly output by the neural network model, which takes all the pixels of the framing information of the subject as input and processes them according to the current training result of the neural network model.

S85: When it is determined that the satisfaction degree exceeds a satisfaction preset threshold, determine that the shooting condition is currently satisfied, and control the execution of a shooting operation so that imaging processing is performed at least by the first image processor of the electronic device to obtain a corresponding photo or video. The satisfaction threshold may be, for example, 80% or 90%.
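For illustration only, the decision of step S85 can be sketched as follows, with the neural network model replaced by a stand-in that maps the framing pixels to a satisfaction score in [0, 1]; a real implementation would use the trained model's output instead of this placeholder:

```python
SATISFACTION_THRESHOLD = 0.8  # corresponds to the 80% example in the text

def predict_satisfaction(framing_pixels):
    # Stand-in for the trained neural network model: here, just the mean
    # 8-bit pixel intensity normalized to [0, 1]. A real score would be learned.
    flat = [p for row in framing_pixels for p in row]
    return sum(flat) / (255 * len(flat))

def should_shoot(framing_pixels, threshold=SATISFACTION_THRESHOLD):
    # Trigger the shooting operation when the predicted satisfaction degree
    # exceeds the satisfaction preset threshold.
    return predict_satisfaction(framing_pixels) > threshold

print(should_shoot([[250, 240], [245, 235]]))  # → True
print(should_shoot([[50, 60], [40, 30]]))      # → False
```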
Steps S81 to S85 correspond to steps S51 to S55 in FIG. 5, respectively, as identical steps or as specific instances of those steps, and their specific descriptions may be cross-referenced.
Referring to FIG. 9, a flowchart of the shooting control method in the fifth embodiment of the present application is shown. In the fifth embodiment, the preset model is specifically an image processing algorithm model, and the shooting control method includes the following steps:

S91: When framing for shooting is performed, acquire the framing information of the subject through the framing information acquiring unit. The framing information acquiring unit may include a depth sensor or an image sensor, and the framing information may be depth information or image information.

S93: Receive the framing information of the subject acquired by the framing information acquiring unit, compare the framing information of the subject with a reference picture by using the image processing algorithm model, and obtain, as the analysis result, the similarity between the framing information of the subject and the reference picture.

S95: When it is determined that the similarity exceeds a similarity preset threshold, determine that the shooting condition is currently satisfied, and control the execution of a shooting operation so that imaging processing is performed at least by the first image processor of the electronic device to obtain a corresponding photo or video.

Optionally, in one implementation, the image processing algorithm model includes an expression feature model. When the subject includes a human face, using the image processing algorithm model to compare the framing information of the subject with a reference picture and obtain, as the analysis result, the similarity between the framing information of the subject and the reference picture includes: analyzing the facial expression information in the framing information of the subject by using a face recognition technology to generate a corresponding expression feature vector; and calculating the similarity between the framing information and the reference picture according to the expression feature model and the expression feature vector corresponding to the framing information of the subject.

In some embodiments, the expression feature model may include expression feature vectors corresponding to a plurality of reference pictures, and a plurality of similarity degrees are determined by comparing the expression feature vector corresponding to the framing information with the expression feature vector of each reference picture.

Further, when a plurality of reference pictures are included, determining that the shooting condition is currently satisfied when the similarity exceeds the similarity preset threshold includes: determining that the shooting condition is currently satisfied when the similarity with any one of the reference pictures exceeds the similarity preset threshold, that is, when the similarity determined from the expression feature vector corresponding to the framing information and the expression feature vector of any one of the reference pictures exceeds the similarity preset threshold.

The reference pictures corresponding to the expression feature model may be standard pictures containing specific expressions, such as a laughing, sad, angry, or yawning face.
Optionally, in another implementation, the image processing algorithm model includes a gesture feature model. When the subject includes a human hand, using the image processing algorithm model to compare the framing information of the subject with a reference picture and obtain, as the analysis result, the similarity between the framing information of the subject and the reference picture includes: analyzing the gesture information in the framing information of the subject by using an image recognition technology to generate a corresponding gesture feature vector; and calculating the similarity between the framing information of the subject and the reference picture according to the gesture feature model and the gesture feature vector corresponding to the framing information of the subject.

In some embodiments, the gesture feature model may include gesture feature vectors corresponding to a plurality of reference pictures, and a plurality of similarity degrees are determined by comparing the gesture feature vector corresponding to the framing information with the gesture feature vector of each reference picture.

Further, when a plurality of reference pictures are included, determining that the shooting condition is currently satisfied when the similarity exceeds the similarity preset threshold includes: determining that the shooting condition is currently satisfied when the similarity with any one of the reference pictures exceeds the similarity preset threshold, that is, when the similarity determined from the gesture feature vector corresponding to the framing information and the gesture feature vector of any one of the reference pictures exceeds the similarity preset threshold.

The reference pictures corresponding to the gesture feature model may be standard pictures containing specific gestures, such as an "OK" gesture, a "thumbs-up" gesture, or a "V-sign" gesture.

In some embodiments, the image processing algorithm model includes a scene feature model. When the subject includes a scene, using the image processing algorithm model to compare the framing information of the subject with a reference picture and obtain, as the analysis result, the similarity between the framing information of the subject and the reference picture includes: analyzing the scene information in the framing information of the subject by using an image recognition technology to generate a corresponding scene feature vector; and calculating the similarity between the framing information of the subject and the reference picture according to the scene feature model and the scene feature vector corresponding to the framing information of the subject.

In some embodiments, the scene feature model may include scene feature vectors corresponding to a plurality of reference pictures, and a plurality of similarity degrees are determined by comparing the scene feature vector corresponding to the framing information with the scene feature vector of each reference picture.

Further, when a plurality of reference pictures are included, determining that the shooting condition is currently satisfied when the similarity exceeds the similarity preset threshold includes: determining that the shooting condition is currently satisfied when the similarity between the framing information and any one of the reference pictures exceeds the similarity preset threshold, that is, when the similarity determined from the scene feature vector corresponding to the framing information and the scene feature vector of any one of the reference pictures exceeds the similarity preset threshold.

The reference pictures corresponding to the scene feature model may be standard pictures containing scenes such as "sea waves", "mountain peaks", or "flowers".

When the aforementioned framing information is depth information, the corresponding feature vectors, such as the expression feature vectors, may be three-dimensional vectors; when the framing information is image information, the corresponding feature vectors may be two-dimensional vectors.

Steps S91 to S95 correspond to steps S51 to S55 in FIG. 5, respectively, as identical steps or as specific instances of those steps, and their specific descriptions may be cross-referenced.
Referring to FIG. 10, which is a flowchart of the photographing control method in a sixth embodiment of the present application. In the sixth embodiment, the photographing control method includes the following steps:
S101: When framing for a shot, acquire framing information of the subject through the framing information acquiring unit.
S103: Receive the framing information of the subject acquired by the framing information acquiring unit, and analyze the framing information of the subject using a preset model to obtain an analysis result.
Optionally, the preset model includes an image processing algorithm model, and step S103 includes: comparing the framing information of the subject with a reference picture using the image processing algorithm model to obtain a similarity between the framing information of the subject and the reference picture.
For example, the image processing algorithm model includes an expression feature model. Optionally, comparing the framing information of the subject with the reference picture using the image processing algorithm model to obtain the similarity includes: analyzing the facial expression in the framing information of the subject through face recognition technology to generate a corresponding expression feature vector; and calculating the similarity between the framing information of the subject and the reference picture according to the expression feature model and the expression feature vector corresponding to the framing information of the subject.
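A hedged illustration of the expression-vector comparison follows. The landmark scheme (three mouth y-coordinates) and the ratio-based similarity are assumptions made for demonstration only; in practice a face recognition module would supply the landmarks, and the expression feature model would define the actual comparison.

```python
def expression_vector(mouth_left_y, mouth_right_y, mouth_center_y):
    """Reduce hypothetical face landmarks to a one-element expression
    feature vector: how far the mouth corners are raised above the
    mouth center (larger y = lower on the image)."""
    lift = mouth_center_y - (mouth_left_y + mouth_right_y) / 2.0
    return [max(lift, 0.0)]

def expression_similarity(frame_vec, reference_vec):
    """Similarity as the fraction of the reference expression reached,
    capped at 1.0 (i.e. 100%)."""
    if reference_vec[0] == 0:
        return 0.0
    return min(frame_vec[0] / reference_vec[0], 1.0)
```

Under this sketch, a frame whose mouth-corner lift is 7 units against a reference lift of 10 units yields a similarity of 70%, the kind of value compared against the preset threshold below.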
Obviously, as described above, in other embodiments the analysis result of a satisfaction degree may also be obtained through a neural network model.
S105: When it is determined according to the analysis result that the shooting condition is satisfied, control execution of a continuous shooting (burst) operation, performing imaging processing at least through the first image processor of the electronic device to obtain a plurality of photos of the current subject.
In some embodiments, step S105 includes: when the similarity between the framing information of the subject and the reference picture exceeds the preset similarity threshold, determining that the shooting condition is satisfied and controlling execution of the continuous shooting operation.
In other embodiments, when the satisfaction analysis result is obtained through the neural network model, step S105 may also include: when the satisfaction exceeds the preset satisfaction threshold, determining that the shooting condition is satisfied and controlling execution of the continuous shooting operation.
The shooting-condition requirement for the continuous shooting operation may be slightly lower than that for a single photographing operation; specifically, the preset similarity threshold or preset satisfaction threshold compared against when performing continuous shooting may be slightly lower than the one compared against when performing photographing.
For example, the preset similarity or satisfaction threshold for the continuous shooting operation may be 70%, lower than the preset threshold of 80% or higher used for the photographing operation.
Thus, continuous shooting begins just as the best shooting effect is about to be reached, which ensures that a photo with the best effect is captured in the burst. For example, suppose the reference picture shows a laughing expression and the compared expression feature vector is a vector X2 indicating how far the corners of the mouth are raised. When the mouth corners are determined to have reached 70% of the reference picture's level, the continuous shooting operation is triggered. Because the remaining time for the user's smile to reach its maximum is short, the moment the smile peaks will be captured by the burst, ensuring that the photo with the best shooting effect is obtained.
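The two-tier threshold logic in the example above (70% starts a burst, 80% suffices for a single photo) can be sketched as one decision function. Combining both tiers into a single policy, and the constant names, are assumptions for illustration; the patent only states that the burst threshold may be slightly lower than the photo threshold.

```python
PHOTO_THRESHOLD = 0.80   # assumed preset threshold for a single photographing operation
BURST_THRESHOLD = 0.70   # assumed slightly lower threshold that triggers a burst

def choose_operation(similarity):
    """Map a similarity (or satisfaction) score to a camera action."""
    if similarity >= PHOTO_THRESHOLD:
        return "photo"   # best effect reached: a single shot is enough
    if similarity >= BURST_THRESHOLD:
        return "burst"   # best effect imminent: start continuous shooting
    return "wait"        # keep framing
```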
Steps S101-S105 correspond to steps S51-S55 in FIG. 5, respectively, being identical or in a generic-specific relationship, and the related descriptions may be referred to mutually. Steps S101-S105 also correspond to steps S81-S85 in FIG. 8 and to steps S91-S95 in FIG. 9, respectively, and the related descriptions may be referred to mutually.
Optionally, as shown in FIG. 10, the photographing control method may further include the following steps:
S107: Analyze the plurality of photos obtained by the continuous shooting operation to determine the best photo.
Optionally, in one implementation, step S107 may include: analyzing the plurality of photos using a neural network model to obtain satisfaction degrees, and determining the photo with the highest satisfaction as the best photo.
Optionally, in another implementation, step S107 may include: comparing the plurality of photos obtained by continuous shooting with the reference picture, and determining the photo with the highest similarity to the reference picture as the best photo.
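The second implementation of step S107 is a straightforward arg-max over the burst; a minimal sketch (the scoring function is whatever similarity or satisfaction measure is in use, supplied here as a callable):

```python
def best_photo(photos, score):
    """Return the burst photo with the highest score, where `score`
    maps a photo to its similarity to the reference picture (or its
    neural-network satisfaction degree)."""
    return max(photos, key=score)
```

For example, with per-photo scores {"a": 0.6, "b": 0.9, "c": 0.7}, `best_photo` selects photo "b".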
Optionally, as shown in FIG. 10, the photographing control method may further include:
S109: Retain the best photo and delete the other photos obtained by the continuous shooting operation.
In some embodiments, the electronic device includes a memory in which several albums are created, and retaining the best photo means storing the best photo in a certain album, for example, in the camera album. By deleting the other photos, occupation of excessive storage space can be effectively avoided.
Referring to FIG. 11, which is a flowchart of the photographing control method in a seventh embodiment of the present application. In the seventh embodiment, the photographing control method includes the following steps:
S111: When framing for a shot, acquire framing information of the subject through the framing information acquiring unit.
S113: Receive the framing information of the subject acquired by the framing information acquiring unit, and analyze the framing information of the subject using a preset model to obtain an analysis result.
S115: When it is determined according to the analysis result that the shooting condition is satisfied, control execution of a video shooting operation, performing imaging processing at least through the first image processor of the electronic device to obtain a video of the current subject.
The shooting-condition requirement for the video shooting operation may likewise be slightly lower than that for the photographing operation; specifically, the preset similarity threshold or preset satisfaction threshold compared against when performing video shooting may be slightly lower than the one compared against when performing photographing.
Thus, video shooting begins just as the best shooting effect is about to be reached, which ensures that the video includes frames with the best shooting effect.
Steps S111-S115 correspond to steps S101-S105 in the sixth embodiment shown in FIG. 10, respectively; for more details refer to the related description of FIG. 10. Steps S111-S115 also correspond to steps S51-S55 in FIG. 5, respectively, being identical or in a generic-specific relationship, and to steps S81-S85 in FIG. 8 and steps S91-S95 in FIG. 9, respectively; the related descriptions may be referred to mutually.
Optionally, as shown in FIG. 11, the photographing control method may further include:
S117: Compare a plurality of video frames in the captured video to determine the best frame.
Optionally, in one implementation, step S117 may include: analyzing the plurality of video frames using a neural network model to obtain satisfaction degrees, and determining the video frame with the highest satisfaction as the best frame.
Optionally, in another implementation, step S117 may include: comparing the plurality of video frames in the video with the reference picture, and determining the video frame with the highest similarity to the reference picture as the best frame.
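Because a video can contain many frames, step S117 can be sketched as a single streaming pass that keeps only the best-scoring frame seen so far. The single-pass design is an implementation assumption, not stated in the text; `score` stands for either similarity to the reference picture or a satisfaction degree.

```python
def best_frame_streaming(frames, score):
    """Scan video frames once, tracking only the best-scoring frame,
    so the whole video need not be held in memory at once."""
    best, best_score = None, float("-inf")
    for frame in frames:
        s = score(frame)
        if s > best_score:
            best, best_score = frame, s
    return best, best_score
```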
Optionally, as shown in FIG. 11, the photographing control method may further include:
S119: Extract the best frame and save it as a photo.
In some embodiments, as described above, the electronic device includes a memory in which several albums are created, and "extracting the best frame and saving it as a photo" means storing the best frame in a picture/photo format in a certain album, for example, in the camera album.
Referring to FIG. 12, which is a flowchart of the photographing control method in an eighth embodiment of the present application. Compared with the photographing control method in the first embodiment shown in FIG. 5, the eighth embodiment further includes, after controlling execution of the shooting operation, subsequent steps such as receiving user satisfaction feedback information. The photographing control method includes the following steps:
S121: When framing for a shot, acquire framing information of the subject through the framing information acquiring unit.
S123: Receive the framing information of the subject acquired by the framing information acquiring unit, and analyze the framing information of the subject using a preset model to obtain an analysis result.
S125: When it is determined according to the analysis result that the shooting condition is satisfied, control execution of a shooting operation, performing imaging processing at least through the first image processor of the electronic device to obtain a photo or video of the current subject.
S127: Acquire the user's satisfaction feedback information on this automatic shot.
Optionally, in one implementation, after automatic photographing is completed, prompt information may be generated to ask the user to rate this automatic shot, for example, a prompt box including "satisfied" and "unsatisfied" options is presented for the user to select, and the satisfaction feedback information for this automatic shot is obtained from the user's selection.
Optionally, in another implementation, the user's satisfaction feedback information on this automatic shot is obtained by detecting the user's operations on the photo or video obtained by this automatic shot. For example, if it is detected that the user deletes the photo or video obtained by this automatic shot, it is determined that the user is unsatisfied with this automatic shot, and unsatisfied feedback information is obtained. As another example, if it is detected that the user marks the photo or video obtained by this automatic shot as a favorite or liked item, or shares it, it is determined that the user is satisfied with this automatic shot, and satisfied feedback information is obtained.
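The implicit-feedback mapping just described can be sketched as a small lookup. The specific operation names are hypothetical; the patent only gives "delete" as a negative signal and favorite/like/share as positive signals.

```python
def feedback_from_operation(operation):
    """Infer satisfaction feedback from the user's action on the
    automatically shot photo/video.

    Returns True (satisfied), False (unsatisfied), or None when the
    operation carries no clear signal."""
    if operation == "delete":
        return False
    if operation in ("favorite", "like", "share"):
        return True
    return None  # e.g. merely viewing the photo tells us nothing
```

The returned label can then be attached to the shot's framing information and fed back to the model in step S129.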
S129: Output the user's satisfaction feedback information on this automatic shot to the currently used model, so that the currently used model performs optimization training using the satisfaction feedback information.
Thus, in the present application, by collecting the user's satisfaction feedback on automatic shooting, the training of the model can be optimized; the model is continuously improved so that subsequent automatic shooting becomes more accurate.
The currently used model may be a model whose training has been confirmed as complete, or a model that has not yet finished training. A model whose training is complete can be further optimized; a model that has not yet finished training can be trained more effectively.
Steps S121-S125 correspond to steps S101-S105 in the sixth embodiment shown in FIG. 10, respectively; for more details refer to the related description of FIG. 10. Steps S121-S125 also correspond to steps S51-S55 in FIG. 5, respectively, being identical or in a generic-specific relationship, and to steps S81-S85 in FIG. 8 and steps S91-S95 in FIG. 9, respectively; the related descriptions may be referred to mutually.
Referring to FIG. 13, which is a flowchart of the photographing control method in a ninth embodiment of the present application. In the ninth embodiment, the preset model is an untrained model and is gradually trained to completion through the user's manual shooting operations. The photographing control method includes the following steps:
S131: When the user performs a manually controlled shot, the model takes the currently captured picture as a positive sample satisfying the shooting condition, and adjusts its own parameters according to this positive sample.
In some embodiments, the model saves the positive sample and establishes or updates the correspondence between the positive sample and the satisfied shooting condition to adjust its own parameters. "Shooting condition satisfied" may serve as the label of the positive sample.
Optionally, in one implementation, the user's manually controlled shot is completed by pressing the shutter key or tapping the photographing icon.
Optionally, in another implementation, the user's manually controlled shot is completed by performing a specific operation on a physical key of the electronic device. For example, the electronic device includes a power key, and a manually controlled shot is triggered by double-clicking the power key.
S133: The model samples, at preset time intervals, the frames obtained by framing between two adjacent manually controlled shots as negative samples, and adjusts its own parameters according to the negative samples.
In some embodiments, the model may save the sampled negative samples, and may also establish the correspondence between the negative samples and the unsatisfied shooting condition to adjust its own parameters. The negative samples are pictures that do not satisfy the shooting condition; "shooting condition not satisfied" may serve as the label of the negative samples. The preset time interval may be 1 second, 2 seconds, or the like.
Optionally, in one implementation, the two adjacent manually controlled shots may be two adjacent manually controlled shots within the framing process of a single camera session.
Optionally, in another implementation, the two adjacent manually controlled shots may also be two adjacent manually controlled shots in the framing processes of different camera sessions. For example, after the user opens the camera and completes the first manually controlled shot, the camera is closed; the next time the camera is opened, the user completes the second manually controlled shot, and the framing pictures between the first and second manually controlled shots are saved by the currently used model at the preset time intervals as negative samples.
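Steps S131 and S133 together amount to a labeling pass over the preview stream. The sketch below is a simplification under stated assumptions: preview frames arrive as (timestamp, frame) pairs, shot instants appear exactly in that stream, and the first shot restarts the negative-sampling clock; frames seen before any manual shot are not labeled, since negatives are defined between adjacent shots.

```python
def collect_training_samples(preview, shot_times, interval=1.0):
    """Label preview frames: frames captured at a manual shot become
    positive samples ("shooting condition satisfied"); frames between
    shots are sampled every `interval` seconds as negative samples.

    `preview` is a time-ordered list of (timestamp, frame) pairs."""
    shot_set = set(shot_times)
    positives, negatives = [], []
    prev_shot_seen = False
    last_sampled = None
    for t, frame in preview:
        if t in shot_set:
            positives.append(frame)
            prev_shot_seen = True
            last_sampled = t  # restart the sampling clock at each shot
        elif prev_shot_seen and t - last_sampled >= interval:
            negatives.append(frame)
            last_sampled = t
    return positives, negatives
```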
S135: When it is determined that the training completion condition is reached, end the training to obtain the trained model for subsequent automatic shooting control.
Optionally, in one implementation, before step S131 the method further includes: entering the model training mode in response to a user-input operation of entering model training. Determining that the training completion condition is reached includes: determining that the training completion condition is reached in response to a user-input operation of exiting the model training mode.
Optionally, the operation of entering model training includes a selection operation on a menu option, a specific operation on a physical key, or a specific touch gesture input on the touch screen of the electronic device. Correspondingly, entering the model training mode in response to the user-input operation includes: entering the model training mode in response to the user's selection of a menu option, a specific operation on a physical key, or a specific touch gesture input on the touch screen of the electronic device.
Optionally, in another implementation, determining that the training completion condition is reached includes: determining that the training completion condition is reached when the number of manually controlled shots by the user reaches a preset number N1. The preset number N1 may be the system-default number of shots required for model training to complete, or a user-defined value.
Optionally, in another implementation, determining that the training completion condition is reached includes: testing the model with the current positive sample, determining whether the test result reaches a preset threshold, and determining that the training completion condition is reached when the test result reaches the preset threshold.
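The alternative completion conditions above can be collected into one check. Treating them as interchangeable disjuncts in a single function is an assumption for illustration; the parameter names are hypothetical.

```python
def training_complete(shot_count,
                      preset_count=None,      # N1: required number of manual shots
                      test_score=None,        # model's result on the latest positive sample
                      score_threshold=None,   # preset threshold for that result
                      exit_requested=False):  # user exited model training mode
    """True when any of the described completion conditions holds."""
    if exit_requested:
        return True
    if preset_count is not None and shot_count >= preset_count:
        return True
    if (test_score is not None and score_threshold is not None
            and test_score >= score_threshold):
        return True
    return False
```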
Optionally, in some embodiments, the method steps shown in FIG. 13 are performed after the automatic shooting function is enabled, for example, after the user enables the automatic shooting function through a setting operation in the camera's menu options.
In other embodiments, whether to perform model training may also be determined by intelligent algorithms, for example, according to factors such as the current geographic location of the electronic device and the current time. Specifically, if the electronic device is currently at a frequently visited location such as the user's home, the user is generally unlikely to be deliberately shooting photos or videos of a new place, so entering the model training mode at such a time will not greatly interfere with the user's photographing.
The above model may be a neural network model, an image processing algorithm model, or the like.
In the present application, the user's training model is customized through the user's own shooting operations without relying on others' models, so better personalization can be achieved.
In some embodiments, the model described in the present application may be a program such as a specific algorithm function running in the processor 30, for example, a neural network algorithm function or an image processing algorithm function. In other embodiments, the electronic device 100 may further include a model processor independent of the processor 30, and the model described in the present application runs in the model processor. The processor 30 may generate corresponding instructions as needed to trigger the model processor to run the corresponding model, and the output of the model is passed by the model processor to the processor 30 for use in performing controls such as the shooting operation.
The photographing control method and the electronic device 100 of the present application can automatically determine, according to the framing information of the subject, whether the shooting condition is satisfied, and shoot when it is satisfied, thereby capturing in time the remarkable moment corresponding to the framing information of the current subject.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, an apparatus (device), or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code. The computer program is stored/distributed on a suitable medium, provided together with or as part of other hardware, and may also be distributed in other forms, for example, via the Internet or other wired or wireless telecommunication systems.
The present invention is described with reference to flowcharts and/or block diagrams of the method, apparatus (device), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above disclosure is merely one embodiment of the present application and certainly cannot limit the scope of rights of the present application. Those of ordinary skill in the art can understand all or part of the processes for implementing the above embodiments, and equivalent changes made in accordance with the claims of the present application still fall within the scope covered by the application.

Claims (20)

  1. A photographing control method, characterized in that the photographing control method comprises:
    when framing for a shot, acquiring framing information of a subject through a framing information acquiring unit;
    receiving the framing information of the subject acquired by the framing information acquiring unit, and analyzing the framing information of the subject to obtain an analysis result; and
    when it is determined according to the analysis result that a shooting condition is satisfied, controlling execution of a shooting operation, performing imaging processing at least through a first image processor to obtain a corresponding photo or video.
  2. The photographing control method according to claim 1, wherein analyzing the framing information of the subject to obtain the analysis result comprises: analyzing the framing information of the subject using a preset model to obtain the analysis result.
  3. The photographing control method according to claim 2, wherein the framing information acquiring unit comprises a depth sensor, and acquiring the framing information of the subject through the framing information acquiring unit comprises: acquiring depth information of the subject through the depth sensor;
    analyzing the framing information of the subject using the preset model to obtain the analysis result comprises: receiving the depth information of the subject, and analyzing the depth information using the preset model to obtain the analysis result.
  4. The photographing control method according to claim 3, wherein the depth sensor comprises at least one of a binocular ranging depth sensor, a structured light depth sensor, and a time-of-flight depth sensor.
  5. The photographing control method according to claim 2, wherein the framing information acquiring unit comprises a second image sensor, and acquiring the framing information of the subject through the framing information acquiring unit comprises: acquiring image information of the subject through the second image sensor;
    analyzing the framing information of the subject using the preset model to obtain the analysis result comprises: receiving the image information, and analyzing the image information using the preset model to obtain the analysis result.
  6. The photographing control method according to claim 5, wherein, when it is determined according to the analysis result that the shooting condition is satisfied, controlling execution of the shooting operation to perform imaging processing at least through the first image processor to obtain the corresponding photo or video comprises:
    when it is determined according to the analysis result that the shooting condition is satisfied and the shooting operation is to be performed, controlling imaging processing to be performed simultaneously through the first image processor and the second image sensor to obtain the corresponding photo or video.
  7. The photographing control method according to claim 2, wherein the preset model is a neural network model, and analyzing the framing information of the subject with the preset model to obtain an analysis result comprises:
    analyzing the framing information of the subject with the neural network model to obtain a satisfaction score as the analysis result;
    determining from the analysis result that the photographing condition is satisfied comprises:
    determining that the photographing condition is currently satisfied when the satisfaction score exceeds a preset satisfaction threshold.
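The threshold test of claim 7 can be sketched as follows. The scoring function here is a trivial stand-in for the trained neural network described in the patent, and every name (`should_capture`, `toy_satisfaction_model`, the 0.8 threshold) is illustrative, not part of the disclosure:

```python
from typing import Callable, List

def should_capture(framing_frame: List[float],
                   score_model: Callable[[List[float]], float],
                   threshold: float = 0.8) -> bool:
    """Trigger the shot when the model's satisfaction score for the
    current framing information exceeds the preset threshold."""
    satisfaction = score_model(framing_frame)
    return satisfaction > threshold

# Stand-in for a trained neural network: just averages the features.
# A real implementation would run inference on the framing image.
def toy_satisfaction_model(features: List[float]) -> float:
    return sum(features) / len(features)

print(should_capture([0.9, 0.95, 0.85], toy_satisfaction_model))  # True
print(should_capture([0.2, 0.3, 0.1], toy_satisfaction_model))    # False
```

The key design point in the claim is that the shutter decision is fully automatic: no user input is consulted once the score clears the threshold.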
  8. The photographing control method according to claim 2, wherein the preset model is an image processing algorithm model, and analyzing the framing information of the subject with the preset model to obtain an analysis result comprises:
    comparing the framing information of the subject with a reference picture using the image processing algorithm model, and obtaining, as the analysis result, the similarity between the framing information of the subject and the reference picture;
    determining from the analysis result that the photographing condition is currently satisfied comprises:
    determining that the photographing condition is currently satisfied when the similarity exceeds a preset similarity threshold.
  9. The photographing control method according to claim 8, wherein the image processing algorithm model comprises an expression feature model, and comparing the framing information of the subject with the reference picture using the image processing algorithm model to obtain, as the analysis result, the similarity between the framing information of the subject and the reference picture comprises:
    analyzing the facial expression information in the framing information of the subject using face recognition technology to generate a corresponding expression feature vector;
    calculating the similarity between the framing information of the subject and the reference picture from the expression feature model and the expression feature vector corresponding to the framing information of the subject.
  10. The photographing control method according to claim 8, wherein the image processing algorithm model comprises a gesture feature model, and comparing the framing information of the subject with the reference picture using the image processing algorithm model to obtain, as the analysis result, the similarity between the framing information of the subject and the reference picture comprises:
    analyzing the gesture information in the framing information of the subject using image recognition technology to generate a corresponding gesture feature vector;
    calculating the similarity between the framing information of the subject and the reference picture from the gesture feature model and the gesture feature vector corresponding to the framing information of the subject.
  11. The photographing control method according to claim 8, wherein the image processing algorithm model comprises a scene feature model, and comparing the framing information of the subject with the reference picture using the image processing algorithm model to obtain, as the analysis result, the similarity between the framing information of the subject and the reference picture comprises:
    analyzing the scene information in the framing information of the subject using image recognition technology to generate a corresponding scene feature vector;
    calculating the similarity between the framing information of the subject and the reference picture from the scene feature model and the scene feature vector corresponding to the framing information of the subject.
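Claims 8 through 11 all reduce to the same pattern: extract a feature vector (expression, gesture, or scene) from the framing information, compare it with the vector of a reference picture, and fire when the similarity clears a threshold. A minimal sketch, assuming cosine similarity as the comparison metric (the patent does not name one) and with all vectors and names purely hypothetical:

```python
import math
from typing import List

def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def matches_reference(subject_vec: List[float],
                      reference_vec: List[float],
                      threshold: float = 0.9) -> bool:
    """Claims 8-11: trigger when the vector extracted from the framing
    information is close enough to the reference picture's vector."""
    return cosine_similarity(subject_vec, reference_vec) > threshold

smile_ref = [0.8, 0.1, 0.6]     # hypothetical reference expression vector
frame_a   = [0.82, 0.12, 0.58]  # subject close to the reference
frame_b   = [0.1, 0.9, 0.05]    # subject far from the reference

print(matches_reference(frame_a, smile_ref))  # True
print(matches_reference(frame_b, smile_ref))  # False
```

The same comparison serves all three dependent claims; only the feature extractor (face, gesture, or scene recognizer) and the reference vector change.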
  12. An electronic device, comprising:
    a framing information acquiring unit configured to acquire framing information of a subject during framing;
    a first image processor configured to perform imaging processing;
    a memory storing program instructions; and
    a processor configured to invoke the program instructions to perform the photographing control method according to any one of claims 1 to 11.
  13. The electronic device according to claim 12, wherein the framing information acquiring unit comprises a depth sensor configured to acquire depth information of the subject as the framing information during framing.
  14. The electronic device according to claim 13, wherein the depth sensor comprises at least one of a binocular ranging depth sensor, a structured light depth sensor, and a time-of-flight depth sensor.
  15. The electronic device according to claim 12, wherein the framing information acquiring unit comprises a second image sensor configured to acquire image information of the subject as the framing information during framing.
  16. The electronic device according to claim 15, wherein the imaging resolution of the second image sensor is lower than the imaging resolution of the first image sensor.
  17. The electronic device according to claim 15, wherein the electronic device comprises a first lens and a second lens, the first lens being arranged to correspond to the first image sensor, the first image sensor being configured to receive light through the first lens to perform imaging processing, and the second lens being arranged to correspond to the second image sensor, the second image sensor being configured to receive light through the second lens to acquire the image information of the subject.
  18. The electronic device according to claim 15, wherein the electronic device comprises a first lens, and the first image sensor and the second image sensor are arranged side by side and both face the first lens.
  19. The electronic device according to claim 18, wherein the area of the second image sensor facing the first lens is smaller than the area of the first image sensor facing the first lens.
  20. A computer-readable storage medium storing program instructions which, when invoked by a computer, cause the computer to perform the photographing control method according to any one of claims 1 to 11.
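Taken together, the method and device claims describe a single control loop: a low-cost framing path (depth sensor or low-resolution second image sensor) continuously feeds a preset model, and the main imaging path fires only when the model's score clears the threshold. A hedged sketch of that loop, with the class, its fields, and the scoring function all invented for illustration, since the patent specifies behavior rather than an API:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List, Optional

@dataclass
class CaptureController:
    """Minimal sketch of the control flow in the claims above: analyze
    each framing sample with the preset model and trigger the
    high-resolution imaging path once the score exceeds the threshold."""
    analyze: Callable[[List[float]], float]  # preset model -> score
    threshold: float = 0.8

    def run(self, framing_stream: Iterable[List[float]]) -> Optional[int]:
        """Return the index of the frame that triggered capture, or None."""
        for i, frame in enumerate(framing_stream):
            if self.analyze(frame) > self.threshold:
                self.capture()  # imaging via the first image sensor
                return i
        return None

    def capture(self) -> None:
        pass  # placeholder for the high-resolution imaging pipeline

# Toy model: score is the strongest feature in the framing sample.
ctrl = CaptureController(analyze=lambda f: max(f))
stream = [[0.2, 0.3], [0.5, 0.6], [0.4, 0.95], [0.7, 0.1]]
print(ctrl.run(stream))  # 2
```

The split into a cheap always-on framing path and an expensive triggered imaging path is what lets the device catch fleeting moments (the baby's expression from the background section) without the user pressing the shutter.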
PCT/CN2018/086692 2018-05-14 2018-05-14 Electronic device and photographing control method WO2019218111A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/086692 WO2019218111A1 (en) 2018-05-14 2018-05-14 Electronic device and photographing control method
CN201880069652.7A CN111279682A (en) 2018-05-14 2018-05-14 Electronic device and shooting control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/086692 WO2019218111A1 (en) 2018-05-14 2018-05-14 Electronic device and photographing control method

Publications (1)

Publication Number Publication Date
WO2019218111A1 true WO2019218111A1 (en) 2019-11-21

Family

ID=68539185

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/086692 WO2019218111A1 (en) 2018-05-14 2018-05-14 Electronic device and photographing control method

Country Status (2)

Country Link
CN (1) CN111279682A (en)
WO (1) WO2019218111A1 (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831380A (en) * 2011-06-15 2012-12-19 康佳集团股份有限公司 Body action identification method and system based on depth image induction
CN104125396A (en) * 2014-06-24 2014-10-29 小米科技有限责任公司 Image shooting method and device
CN104506769A (en) * 2014-12-04 2015-04-08 广东欧珀移动通信有限公司 Shooting method and terminal
CN104639819A (en) * 2013-11-07 2015-05-20 中兴通讯股份有限公司 Portable terminal and method for taking pictures using same
CN104883494A (en) * 2015-04-30 2015-09-02 努比亚技术有限公司 Image snapshot method and device
US20150373258A1 (en) * 2014-06-24 2015-12-24 Cyberlink Corp. Systems and Methods for Automatically Capturing Digital Images Based on Adaptive Image-Capturing Templates
CN106372627A (en) * 2016-11-07 2017-02-01 捷开通讯(深圳)有限公司 Automatic photographing method and device based on face image recognition and electronic device
CN106454071A (en) * 2016-09-09 2017-02-22 捷开通讯(深圳)有限公司 Terminal and automatic shooting method based on gestures
CN106503658A (en) * 2016-10-31 2017-03-15 维沃移动通信有限公司 automatic photographing method and mobile terminal
CN107423707A (en) * 2017-07-25 2017-12-01 深圳帕罗人工智能科技有限公司 A kind of face Emotion identification method based under complex environment


Also Published As

Publication number Publication date
CN111279682A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
US10165199B2 (en) Image capturing apparatus for photographing object according to 3D virtual object
RU2679199C1 (en) Method and device for controlling photoshoot of unmanned aircraft
TWI442328B (en) Shadow and reflection identification in image capturing devices
WO2020103526A1 (en) Photographing method and device, storage medium and terminal device
JP4196714B2 (en) Digital camera
CN104243818B (en) Image processing method, device and equipment
TWI653581B (en) Image tracking system and image tracking method
US11562471B2 (en) Arrangement for generating head related transfer function filters
WO2019213818A1 (en) Photographing control method, and electronic device
WO2018120662A1 (en) Photographing method, photographing apparatus and terminal
KR102407190B1 (en) Image capture apparatus and method for operating the image capture apparatus
WO2019213819A1 (en) Photographing control method and electronic device
WO2014194676A1 (en) Photographing method, picture management method and equipment
WO2016187985A1 (en) Photographing device, tracking photographing method and system, and computer storage medium
WO2014131329A1 (en) Image acquisition method and device
JP2004317699A (en) Digital camera
WO2019179364A1 (en) Photographing method and device and smart device
TW200926010A (en) Method and apparatus for image capturing
WO2019214574A1 (en) Image capturing method and apparatus, and electronic terminal
TW201801516A (en) Image capturing apparatus and photo composition method thereof
TW201338516A (en) Image capturing device and method thereof and human recognition photograph system
CN108156376A (en) Image-pickup method, device, terminal and storage medium
WO2014161386A1 (en) Video apparatus and photography method thereof
WO2019084756A1 (en) Image processing method and device, and aerial vehicle
US20130308829A1 (en) Still image extraction apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18918735

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.03.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18918735

Country of ref document: EP

Kind code of ref document: A1