WO2021042364A1 - Method and device for taking picture - Google Patents
- Publication number: WO2021042364A1 (PCT/CN2019/104674)
- Authority: WIPO (PCT)
Classifications
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W88/00—Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
- H04W88/02—Terminal devices
Description
- the present application relates to the field of image processing technology, and more specifically, to a method and apparatus for capturing images.
- Smart capture is an important photographing function of current smart terminals.
- the smart terminal with the smart capture function enabled can score multiple frames of images to be selected based on certain scoring rules, thereby selecting the highest-scoring frame of image and recommending it to the user.
- However, the scoring rules are limited and often focus on facial information while ignoring other information. Even when no human face is detected in the image to be selected, only the optical flow information between the preceding and following frames is used as the scoring basis.
- This scoring mechanism is not universal, and the recommended optimal frames may not be ideal. For example, for images that include fast-moving objects, the above methods cannot recommend to users the images that capture the peak moment of the motion, so the capture effect is not ideal. Therefore, there is an urgent need for a flexible capture solution suitable for more scenes.
- the present application provides a method and device for capturing an image, so that the captured image is more in line with the actual shooting scene.
- In a first aspect, a method of capturing an image is provided, which includes: determining a first capture mode among multiple preset capture modes according to captured multi-frame images; and using an evaluation strategy corresponding to the first capture mode to determine, among captured multiple frames of images to be selected, a captured frame image corresponding to the first capture mode, where the evaluation strategy is one of multiple preset evaluation strategies.
- the capture mode can be determined according to the actual shooting scene.
- An evaluation strategy corresponding to the first capture mode can be selected from a plurality of preset evaluation strategies, and that evaluation strategy is then used to determine the captured frame image. Therefore, the captured image obtained is more in line with the actual shooting scene, which is conducive to obtaining an ideal capture effect, improves flexibility, and makes the method suitable for more scenes.
- The aforementioned multiple capture modes include one or more of the following: facial expression capture mode, group photo capture mode, sports capture mode, multiplayer sports capture mode, pet capture mode, and landscape capture mode.
- Each of the multiple capture modes corresponds to at least one of the foregoing preset multiple evaluation strategies, and each evaluation strategy includes one or more scoring parameters used for image scoring and the mode weight of each scoring parameter.
- The using of the evaluation strategy corresponding to the first capture mode to determine the captured frame image corresponding to the first capture mode among the captured multiple frames of images to be selected includes: using one or more scoring parameters, and the mode weight of each scoring parameter, in one of the at least one evaluation strategy corresponding to the first capture mode to calculate the score of each frame of image to be selected in the multiple frames of images to be selected; and determining, according to the multiple scores of the multiple frames of images to be selected, the captured frame image corresponding to the first capture mode among the multiple frames of images to be selected.
- the scoring parameters can be assigned to different capture modes, and different weights can be applied to each scoring parameter. Therefore, the scoring results obtained by scoring the same image based on different capture modes are different.
- the evaluation strategy corresponding to the first capture mode is selected to score multiple frames of images to be selected, so as to determine the captured frame image.
- The captured frame image obtained in this way is determined in combination with the evaluation strategy corresponding to the first capture mode, so it can better meet the requirements of the first capture mode and conform to the actual shooting scene.
- the captured frame image has the highest score among multiple frames to be selected.
- The image with the highest score is the image, selected from the multiple frames of images to be selected, that best meets the requirements of the first capture mode and therefore best suits the actual shooting scene.
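- As a minimal illustration of the selection rule above (the parameter names, weight values, and data layout below are assumptions for illustration only, not the implementation of this application), each candidate frame can be scored as a weighted sum of its scoring-parameter values under the evaluation strategy of the first capture mode, and the highest-scoring frame is returned as the captured frame image:

```python
# Sketch: score each candidate frame with the evaluation strategy of the
# first capture mode and return the highest-scoring frame.
# Parameter names and weights are illustrative assumptions only.

def score_frame(param_values, mode_weights):
    """Weighted sum of scoring parameters (e.g. posture height, sharpness)."""
    return sum(mode_weights[name] * value for name, value in param_values.items())

def select_captured_frame(candidates, mode_weights):
    """candidates: list of (frame_id, {scoring parameter: value}) pairs."""
    return max(candidates, key=lambda c: score_frame(c[1], mode_weights))[0]

# Hypothetical sports-capture evaluation strategy and two candidate frames.
sports_weights = {"posture_height": 0.4, "posture_stretch": 0.4, "sharpness": 0.2}
candidates = [
    ("frame_1", {"posture_height": 0.2, "posture_stretch": 0.3, "sharpness": 0.9}),
    ("frame_2", {"posture_height": 0.9, "posture_stretch": 0.8, "sharpness": 0.7}),
]
print(select_captured_frame(candidates, sports_weights))  # -> "frame_2"
```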
- different evaluation strategies corresponding to different capture modes include the same scoring parameters, and different evaluation strategies include different mode weights.
- the different evaluation strategies mentioned here include different mode weights. Specifically, it may mean that the mode weights applied to the same scoring parameter in different evaluation strategies are different. Moreover, when the evaluation strategy includes multiple scoring parameters, different evaluation strategies impose different mode weights on at least one scoring parameter.
- different weights can be applied to the same scoring parameter.
- different weights can be applied to the scoring parameter of expression intensity. In the sports capture mode, a lower weight can be applied, and in the expression capture mode, a higher weight can be applied.
- different weights can also be applied to the scoring parameter of posture height. In the sports capture mode, a higher weight can be applied, while in the expression capture mode, a lower weight can be applied.
- different evaluation strategies corresponding to different capture modes may respectively include different mode weights corresponding to the same scoring parameter.
- Each capture mode includes one or more capture categories, and each capture category corresponds to one evaluation strategy; each of the at least one evaluation strategy corresponding to the first capture mode includes one or more scoring parameters corresponding to the first capture mode, a mode weight of each scoring parameter, and a category weight of each scoring parameter corresponding to one capture category.
- The determining of the first capture mode among the preset multiple capture modes further includes: determining a first capture category in the first capture mode according to the multiple frames of images.
- The using of the one or more scoring parameters, and the mode weight of each scoring parameter, in one of the at least one evaluation strategy corresponding to the first capture mode to calculate the score of each frame of image to be selected in the multiple frames of images to be selected includes: using the one or more scoring parameters corresponding to the first capture mode, the mode weight of each scoring parameter, and the category weight of each scoring parameter corresponding to the first capture category to calculate the score of each frame of image to be selected in the multiple frames of images to be selected.
- This application not only proposes an evaluation strategy corresponding to the capture mode, assigning different scoring parameters and mode weights to different capture modes, but further proposes a category weight corresponding to the capture category within the capture mode. That is, the capture mode is further refined into different capture categories, and the details of the different capture categories are further weighted.
- In this way, the selected captured frame image can not only meet the requirements of the first capture mode but also take into account the capture category of the subject, so that the image that better presents the highlight moment of the capture can be found.
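- The category-weight refinement can be sketched in the same way (the category name and all numeric weights below are assumptions): the score of each frame combines the mode weight and the category weight applied to the same scoring parameter.

```python
# Sketch: a capture category (e.g. a hypothetical "jump" category within the
# sports capture mode) re-weights the same scoring parameters on top of the
# mode weights. All names and values are illustrative assumptions.

def score_with_category(param_values, mode_weights, category_weights):
    return sum(
        mode_weights[p] * category_weights.get(p, 1.0) * v
        for p, v in param_values.items()
    )

mode_weights = {"posture_height": 0.4, "posture_stretch": 0.4, "sharpness": 0.2}
jump_category_weights = {"posture_height": 1.5, "posture_stretch": 1.0, "sharpness": 1.0}

frame_params = {"posture_height": 0.8, "posture_stretch": 0.6, "sharpness": 0.7}
print(score_with_category(frame_params, mode_weights, jump_category_weights))
```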
- different evaluation strategies corresponding to different capture categories include the same scoring parameters, and different evaluation strategies include different category weights.
- the category weights included in the different evaluation strategies mentioned here are different. Specifically, it may mean that the category weights applied to the same scoring parameter in different evaluation strategies are different. Moreover, when the evaluation strategy includes multiple scoring parameters, different evaluation strategies apply different category weights to at least one scoring parameter.
- different evaluation strategies corresponding to different capture categories may respectively include different category weights corresponding to the same scoring parameter.
- Different weights can be applied to the scoring parameter of expression intensity. In the sports capture mode, a lower weight can be applied, and in the expression capture mode, a higher weight can be applied. Conversely, different weights can also be applied to the scoring parameter of posture height. In the sports capture mode, a higher weight can be applied, while in the expression capture mode, a lower weight can be applied.
- The method further includes: invoking at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected to output a recognition result; and determining the value of the one or more scoring parameters based on the recognition result.
- At least one detection model is used to perform image recognition on the image to be selected.
- the at least one detection model may include, for example, one or more of a face attribute detection model, a human frame detection model, a scene recognition model, a posture point estimation model, and an action detection model. Through these detection models, different attention points of the image can be detected, and the value of each scoring parameter can be determined according to the recognition result.
- the above-mentioned detection model may be obtained through machine learning training, for example.
- the aforementioned detection model may be a model embedded in a neural network processing unit (NPU). This application does not limit this.
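- The flow from detection models to scoring-parameter values can be sketched as follows; the model names, output fields, and mode-to-model mapping are hypothetical placeholders rather than the interfaces of this application.

```python
# Sketch: run the detection models associated with the first capture mode on a
# frame, then derive scoring-parameter values from their outputs.
# All model names and output fields are illustrative assumptions.

MODE_MODELS = {
    "sports": ["pose_estimation", "action_detection"],
    "expression": ["face_attribute"],
}

def recognize(frame, mode, models):
    """Run each detection model associated with the capture mode on one frame."""
    return {name: models[name](frame) for name in MODE_MODELS[mode]}

def to_scoring_params(recognition):
    """Map raw recognition results to scoring-parameter values (hypothetical fields)."""
    pose = recognition["pose_estimation"]  # e.g. a normalized pose-point summary
    return {
        "posture_height": pose["body_center_height"],
        "posture_stretch": pose["limb_extension"],
    }
```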
- For example, when the first capture mode is a sports capture mode, the at least one detection model includes a pose estimation model and a motion detection model; when the first capture mode is an expression capture mode or a group photo capture mode, the at least one detection model includes a face attribute detection model.
- detection models corresponding to the different capture modes listed above are only examples, and should not constitute any limitation to this application.
- the same multiple detection models can also be called to perform image recognition on the images to be selected.
- different weights can be applied to each scoring parameter according to different capture modes, so as to achieve similar effects.
- The determining of the first capture mode from the preset multiple capture modes according to the captured multi-frame images includes: in the video recording mode or the preview mode, determining the first capture mode among the multiple capture modes according to the multiple frames of images.
- the method provided in this application can not only be applied in the smart capture mode, but also can run synchronously with other modes.
- For example, in the video recording mode, if it is detected that the image meets the trigger condition of the first capture mode, the smart capture mode can be run in the background at the same time to enter the first capture mode.
- Similarly, in the preview mode, if it is detected that the image meets the trigger condition of the first capture mode, the first capture mode can be automatically activated. Therefore, the device can automatically switch between multiple modes, which is conducive to obtaining an ideal captured frame image.
- The determining of the first capture mode among the preset multiple capture modes according to the captured multi-frame images includes: performing mode detection on the captured multiple frames of images based on a first frame rate to determine the first capture mode among the preset multiple capture modes.
- The calling of the at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected includes: calling the at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected at a second frame rate, where the first frame rate is less than the second frame rate.
- mode detection can be performed based on a lower frame rate. This method can be applied to the above-mentioned video mode or preview mode. Before entering the smart capture mode, you can use a lower frame rate for mode detection. Once the first capture mode is determined, that is, the smart capture mode is entered, a higher frame rate can be used for image recognition. Therefore, it is possible to perform mode detection based on a low frame rate before entering the smart capture mode, saving power consumption caused by a high frame rate.
- Alternatively, mode detection and image recognition can also be performed at the same frame rate; for example, the mode is determined at a higher frame rate, and image recognition is also performed at that higher frame rate. This application does not limit this.
- this application does not limit the specific value of the frame rate.
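- One possible arrangement of the two frame rates (the numeric values and loop structure below are assumptions for illustration) is to run mode detection on a sparse subsample of captured frames and switch to per-frame recognition once a capture mode has been determined:

```python
# Sketch: mode detection at a low (first) frame rate, image recognition at a
# higher (second) frame rate after the first capture mode is determined.
# Frame-rate values are illustrative assumptions.

MODE_DETECTION_FPS = 5    # first frame rate (low, saves power)
RECOGNITION_FPS = 30      # second frame rate (high, after the mode is known)
CAPTURE_FPS = 30          # rate at which the camera delivers frames

def process_stream(frames, detect_mode, recognize):
    current_mode = None
    for i, frame in enumerate(frames):
        if current_mode is None:
            # Inspect only every (CAPTURE_FPS // MODE_DETECTION_FPS)-th frame.
            if i % (CAPTURE_FPS // MODE_DETECTION_FPS) == 0:
                current_mode = detect_mode(frame)
        elif i % (CAPTURE_FPS // RECOGNITION_FPS) == 0:
            # After entering the capture mode, recognize at the higher rate.
            recognize(frame, current_mode)
    return current_mode
```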
- The method further includes: determining a second capture mode based on newly captured multi-frame images, where the second capture mode is one of the multiple capture modes and is different from the first capture mode; and switching to the second capture mode.
- After entering the first capture mode, the newly captured images can also be continuously detected, and the device can automatically switch to the second capture mode.
- the switching to the second capture mode includes: switching to the second capture mode when the running time of the first capture mode exceeds a preset protection period.
- the protection period can also be preset for each capture mode. During the protection period, even if it is detected that the newly captured image meets the trigger condition of another capture mode, the mode switching is not performed. After the protection period is exceeded, if it is detected that the newly captured image meets the trigger condition of another capture mode, the mode can be switched.
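- The protection period can be sketched as a simple guard on mode switching (the period length and class structure below are illustrative assumptions; the application does not fix a particular value):

```python
import time

PROTECTION_PERIOD_S = 3.0  # illustrative protection period

class CaptureModeController:
    """Switch to a newly detected capture mode only after the protection period."""

    def __init__(self):
        self.mode = None
        self.mode_entered_at = 0.0

    def maybe_switch(self, detected_mode):
        now = time.monotonic()
        if self.mode is None or (
            detected_mode != self.mode
            and now - self.mode_entered_at > PROTECTION_PERIOD_S
        ):
            self.mode, self.mode_entered_at = detected_mode, now
        return self.mode
```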
- In a second aspect, an image capturing apparatus is provided, which includes various modules or units for executing the method in any one of the possible implementation manners of the first aspect.
- In a third aspect, a device for capturing images is provided, including a processor and a memory, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that the device for capturing images executes the method in any one of the possible implementation manners of the first aspect.
- There are one or more processors and one or more memories.
- the memory may be integrated with the processor, or the memory and the processor may be provided separately.
- In a fourth aspect, an electronic device is provided, which includes the image capturing apparatus as described in the second or third aspect.
- In a fifth aspect, a computer program product is provided, including a computer program (also called code or instructions), which, when executed, causes a computer to execute the method in any one of the possible implementation manners of the first aspect.
- In a sixth aspect, a computer-readable medium is provided, which stores a computer program (also called code or instructions) that, when run on a computer or at least one processor, causes the computer or the at least one processor to execute the method in any possible implementation manner of the first aspect.
- In a seventh aspect, a chip system is provided, which includes a processor for supporting the chip system in implementing the functions involved in any possible implementation manner of the first aspect.
- FIG. 1 is a schematic diagram of an electronic device provided by an embodiment of the present application.
- FIG. 2 is a schematic flowchart of a method for capturing an image provided by an embodiment of the present application
- Figure 3 is a schematic diagram of a mobile phone interface provided by an embodiment of the present application.
- FIG. 4 is a schematic flowchart of a method for capturing an image according to another embodiment of the present application.
- FIG. 5 is a schematic flowchart of a method for capturing an image provided by another embodiment of the present application.
- FIG. 6 is a schematic flowchart of a method for capturing an image according to still another embodiment of the present application.
- Fig. 7 is a schematic block diagram of an image capturing apparatus provided by an embodiment of the present application.
- The method of capturing images provided by the embodiments of this application can be applied to electronic devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA).
- the image capturing apparatus provided in the embodiments of the present application may be configured in the various electronic devices listed above, or may be the various electronic devices listed above. This application does not limit this.
- FIG. 1 shows a schematic structural diagram of an electronic device 100.
- the electronic device 100 may include a processor 110.
- the processor 110 may include one or more processing units.
- The processor 110 may include one or more of a central processing unit (CPU), a neural network processing unit (NPU), an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), and a baseband processor.
- the different processing units may be independent devices or integrated in one or more processors.
- the controller may be the nerve center and command center of the electronic device 100.
- the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
- NPU is a neural-network (NN) processor.
- Through the NPU, applications such as intelligent cognition of the electronic device 100 can be realized, for example: image recognition, face detection, human frame detection, scene detection, pose point detection, motion detection, and so on.
- One or more detection models may be embedded in the NPU, such as one or more of the face attribute detection model, pose estimation model, motion detection model, human frame detection model, and scene detection model described below.
- Each detection model can be obtained by algorithm training based on machine learning. For example, it can be obtained by training based on support vector machines (SVM), convolutional neural networks (CNN), or recurrent neural networks (RNN). It should be understood that this application does not limit the specific training method.
- Each detection model can correspond to one processor in the NPU; or, each detection model can correspond to one processing unit in the NPU, and the functions of multiple detection models can be fulfilled by multiple processing units integrated into one processor. This application does not limit this.
- The NPU may also have a communication connection with one or more other processors in the processor 110.
- the NPU may have a communication connection with the GPU, ISP, and application processor. This application does not limit this.
- the electronic device 100 further includes a memory 120.
- the memory 120 may be used to store computer executable program code, where the executable program code includes instructions.
- the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the memory 120.
- the memory 120 may include a program storage area and a data storage area.
- The program storage area can store an operating system and at least one application program required by at least one function (such as a sound playback function, an image playback function, etc.).
- the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
- the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
- a memory may be provided in the processor 110.
- the memory in the processor 110 is a cache memory.
- the memory can store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
- the memory may also exist independently of the processor 110, such as the memory 120 as shown in the figure. This application does not limit this.
- the electronic device 100 further includes a transceiver 130.
- the electronic device 100 may also include one or more of the input unit 160, the display unit 170, the audio circuit 180, the camera 190, and the sensor 101.
- The audio circuit 180 can also be coupled to a speaker 182, a microphone 184, and the like.
- the electronic device 100 implements a display function through a GPU, a display unit 170, an application processor, and the like.
- the GPU is an image processing microprocessor, which is connected to the display unit 170 and the application processor.
- the GPU is used to perform mathematical and geometric calculations for graphics rendering.
- the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
- the display unit 170 is used to display images, videos, and the like.
- the display unit 170 includes a display panel.
- The display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), or the like.
- the electronic device 100 may include one or more display units 170.
- The electronic device 100 may implement a shooting function through an ISP, a camera 190, a video codec, a GPU, a display unit 170, an application processor, and the like.
- the ISP is used to process the data fed back by the camera 190. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
- ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
- the ISP may be provided in the camera 190.
- the camera 190 is used to capture still images or dynamic videos.
- the object generates an optical image through the lens and is projected to the photosensitive element.
- the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
- ISP outputs digital image signals to DSP for processing.
- DSP converts digital image signals into standard image signals in RGB, YUV and other formats.
- the electronic device 100 may include one or more cameras 190.
- the camera 190 may be used to capture images and display the captured images in the capture interface.
- the photosensitive element converts the collected optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
- the ISP outputs the digital image signal to the DSP for related image processing.
- Video codecs are used to compress or decompress digital video.
- the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
- the application processor outputs a sound signal through an audio device (such as a speaker 182, etc.), or displays an image or video through the display unit 170.
- the aforementioned electronic device 100 may further include a power supply 150 for providing power to various devices or circuits in the terminal device.
- the electronic device 100 shown in FIG. 1 can implement each process of the method embodiments shown in FIG. 2 and FIG. 4 to FIG. 6.
- the operations and/or functions of each module or unit in the electronic device 100 are respectively for implementing the corresponding processes in the foregoing method embodiments.
- FIG. 1 is only for ease of understanding, and exemplarily shows each module or unit in an electronic device and the connection relationship between the modules or units, but this should not constitute any limitation to the application. This application does not limit the modules and units specifically included in the electronic device and their mutual connection relationship.
- FIG. 2 is a schematic flowchart of an image capturing method 200 provided by an embodiment of the present application.
- the method 200 provided in FIG. 2 may be executed by an electronic device or a processor in the electronic device.
- an electronic device is used as an execution subject to describe the embodiments of the present application.
- the method 200 may include step 210 to step 240.
- Step 210: According to the captured multiple frames of images, a first capture mode is determined among a plurality of preset capture modes.
- the electronic device may periodically detect the captured image, and according to the detection result, determine the capture mode suitable for the current shooting.
- the capture mode applicable to the current shooting is recorded as the first capture mode.
- the image captured by the electronic device can be stored in the cache.
- the cache for example, may be a part of the storage space in the camera module of the electronic device, or may exist independently of the camera module, which is not limited in this application.
- the electronic device can continuously obtain multiple frames of images from the cache.
- the multiple frames of images can be input to one or more detection models.
- the electronic device can call one or more detection models to detect the multi-frame images, and based on the detection results output by the detection models, determine the first capture mode from a plurality of preset capture modes. Based on the determination of the first capture mode, the electronic device may enable the first capture mode.
- the preset multiple capture modes include one or more of the following: facial expression capture mode, group photo capture mode, sports capture mode, multiplayer sports capture mode, pet capture mode, and landscape capture mode.
- the aforementioned multiple capture modes are all defined as smart capture modes in the embodiments of the present application.
- the above-mentioned first capture mode is a smart capture mode.
- The electronic device can enter the smart capture mode in advance and then determine the first capture mode among the preset multiple capture modes according to the captured multi-frame images; it can also automatically determine the first capture mode in the photo mode or video recording mode based on the detection of the captured images. This application does not limit this.
- The multiple preset capture modes described herein may specifically mean that the electronic device pre-stores the evaluation strategy corresponding to each of the multiple capture modes.
- When the electronic device determines to use one of the capture modes, it can call the corresponding evaluation strategy to evaluate the captured multiple frames of images to be selected.
- the process of evaluating multiple frames of images to be selected by the electronic device using the evaluation strategy will be described in detail in conjunction with step 220, which is omitted here.
- the electronic device can call one or more of the face attribute detection model, the human frame detection model, and the scene recognition model to detect the captured multi-frame images, and according to each The detection result output by the detection model determines the first capture mode.
- The electronic device can also detect the captured multi-frame images by calling one or more of the face attribute detection model, the pose estimation model, and the motion detection model, and determine the first capture mode according to the detection result output by each detection model.
- the face attribute detection model, the human frame detection model, the scene recognition model, the pose estimation model, and the action detection model listed above may all be models obtained by training based on machine learning algorithms. Based on different functions, different models are defined. Among them, based on different functions, face attribute detection models can be further divided into face feature point detection models, open and closed eyes detection models, and so on. This application does not limit this.
- the names of the detection models are merely examples for ease of understanding, and this application does not exclude the possibility of replacing the detection models listed above with other names to achieve the same or similar functions.
- The first capture mode may be determined based on the detection result of a single detection model on the image, or based on the detection results of multiple detection models on the image.
- When the electronic device calls multiple detection models to determine the first capture mode, the multiple detection models can run simultaneously or alternately.
- the electronic device can comprehensively consider the detection results of the image based on multiple detection models.
- When the detection results satisfy the trigger condition of a capture mode, that capture mode can be determined as the first capture mode. The following uses a number of examples to illustrate the specific process of the electronic device determining the first capture mode by calling one or more detection models.
- the electronic device may call the face attribute detection model to detect the face in the image, and determine the first capture mode according to the detection result.
- the face attribute detection model can be specifically obtained through machine learning algorithm training.
- the face attribute detection model detects a face in an image, it can detect each feature point of the face.
- the electronic device can exclude passers-by entry scenes and sports scenes based on the detected face position and depth information. In this case, the detection result of the image by the face attribute detection model satisfies the trigger condition of the expression capture mode, and it can be determined that the first capture mode is the expression capture mode.
- the electronic device can exclude passersby entry scenes and sports scenes based on the position and depth information of the faces.
- the detection result of the image by the face attribute detection model satisfies the trigger condition of the group photo capture mode, and it can be determined that the first capture mode is the group photo capture mode.
- the electronic device may call the scene recognition model to detect the shooting scene of the image, and determine the first capture mode according to the detection result.
- the scene recognition model can be specifically obtained by training based on multiple predefined scenes through a machine learning algorithm.
- When the scene recognition model detects a predefined motion scene in the image, the motion scene is output, and the electronic device can determine that the shooting object is in a motion state according to the motion scene detected by the scene recognition model.
- the detection result of the scene recognition model on the image satisfies the trigger condition of the motion capture mode, and it can be determined that the first capture mode is the motion capture mode.
- the aforementioned sports scene may include: a court (for example, including a basketball court, a football field, etc.), a swimming pool, or a running track.
- The electronic device may also call the scene recognition model and the human frame detection model to detect the image, and combine the outputs of the scene recognition model and the human frame detection model to determine the first capture mode. For example, when the scene in the image is detected as a predefined motion scene by the scene recognition model, and multiple human frames are detected in the image by the human frame detection model, the detection results of the scene recognition model and the human frame detection model on the image satisfy the trigger condition of the multi-person sports capture mode, and the first capture mode can be determined to be the multi-person sports capture mode.
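- The examples above amount to a set of trigger conditions evaluated over the detection results; the sketch below is one possible way to express them (the counts, scene labels, and field names are assumptions, not the claimed trigger conditions):

```python
# Sketch: choose the first capture mode from detection-model outputs.
# The specific trigger conditions below are illustrative assumptions.

def determine_first_capture_mode(detections):
    faces = detections.get("faces", [])              # face attribute detection model
    human_boxes = detections.get("human_boxes", [])  # human frame detection model
    scene = detections.get("scene")                  # scene recognition model

    if scene in ("court", "swimming_pool", "running_track"):
        return "multiplayer_sports" if len(human_boxes) > 1 else "sports"
    if len(faces) > 1:
        return "group_photo"
    if len(faces) == 1:
        return "expression"
    return None  # no trigger condition met; keep the current mode
```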
- the electronic device may call the human body frame detection model to detect the human body frame in the image.
- the human frame detection model can be obtained through machine learning algorithm training to be used to detect the human frame in the image.
- the electronic device can also call other motion area detection algorithms to determine the motion area in the image.
- the motion area in the image can be determined based on optical flow information and the like. This application does not limit this.
- The motion area in the image can thereby be determined to be foreground motion, not background motion or motion relative to the camera.
- the electronic device may also determine that the first capture mode is a multi-person motion capture mode when there are multiple human body frames detected by the human body frame detection model.
- the electronic device may call the pose estimation model to detect multiple pose points of the human body in the image.
- the pose estimation model can be obtained by training based on a plurality of predefined pose points (or feature points) through a machine learning algorithm.
- The foregoing multiple predefined posture points include, for example, the head, shoulders, neck, elbows, hips, legs, knees, ankles, and so on. For the sake of brevity, they are not all listed here.
- the pose estimation model can be used to detect multiple pose points in the image and determine the coordinate information of each pose point.
- the coordinate information of each posture point may be represented by the two-dimensional coordinates of the corresponding pixel point in the image.
- the pixel (u, v) represents the u-th row and v-th column of the pixel in the two-dimensional image.
- the coordinate information of each posture point can be represented by the three-dimensional coordinates of the corresponding pixel in the image.
- the pixel (u, v) can further carry depth information d, and the three-dimensional coordinates of the pixel can be expressed as (u, v, d). The depth information is used to indicate the distance between the pixel and the camera.
- the pose estimation model can be based on multiple pose points of the human body, and the human body frame can be further estimated.
- the human skeleton frame can be obtained by connecting multiple posture points.
- The electronic device can detect the degree of overlap between the human body frame estimated by the pose estimation model and the motion region determined by the motion region detection algorithm, thereby determining whether the trigger condition of the motion capture mode is satisfied.
- the specific method is similar to the method introduced above in conjunction with the human frame detection model. For brevity, it will not be repeated here.
- the electronic device can determine whether the photographed object is in a moving state according to the coordinate information of each posture point in the multiple frames of images before and after.
- the coordinate information of some or all of the posture points will change relatively in the multiple frames before and after the image.
- the changes of the posture points of the human body under the motion state can be obtained. Therefore, when the electronic device determines that the trigger condition of the motion capture mode is satisfied according to the coordinate information of the multiple pose points in each frame of the image detected by the pose estimation model, it can be determined that the first capture mode is the motion capture mode.
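- One simple way to decide the motion state from pose-point coordinates across consecutive frames is to threshold the average displacement of the pose points; the threshold and data layout below are illustrative assumptions:

```python
import math

MOTION_THRESHOLD = 12.0  # average pixel displacement; illustrative value only

def is_moving(pose_prev, pose_curr):
    """pose_prev/pose_curr: {pose_point_name: (u, v)} for two consecutive frames."""
    common = pose_prev.keys() & pose_curr.keys()
    if not common:
        return False
    mean_shift = sum(math.dist(pose_prev[p], pose_curr[p]) for p in common) / len(common)
    return mean_shift > MOTION_THRESHOLD
```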
- the electronic device can call the posture estimation model and the motion detection model to identify the motion category of the object in the image.
- the pose estimation model can be used to determine multiple pose points in an image and the coordinate information of each pose point.
- the coordinate information of multiple posture points in each frame of image can be input to the motion detection model to determine the motion category of the subject.
- the action detection model can be obtained by training based on multiple predefined action categories through a machine learning algorithm.
- the motion detection model can determine the motion category of the subject based on the training sample and the coordinate changes of the above-mentioned posture points.
- the action category includes, for example: running, jumping, shooting, kicking, rock climbing, swimming, diving, skating, etc.
- The detected action category can be determined as the action category of the subject.
- The motion detection model can output the motion category of the subject. For example, when the motion detection model detects in the image that the human body performs a specific action (such as the action categories listed above), the electronic device can determine that the image meets the trigger condition of the motion capture mode, thereby determining that the first capture mode is the sports capture mode.
- a number of examples for the electronic device to determine the first capture mode are listed above in combination with the functions of each model, but it should be understood that these examples should not constitute any limitation to this application. Multiple models can also be used in combination, and the first capture mode suitable for the current shooting is determined based on the trigger conditions predefined for various capture modes.
- the electronic device can also call corresponding models in sequence according to the priority of multiple capture modes.
- the sports capture mode has a higher priority than the expression capture mode.
- The electronic device may call the detection models corresponding to the different capture modes in priority order to determine the first capture mode.
- the motion capture mode can be determined by calling the human frame detection model, or the scene recognition model, or the pose estimation model, or the pose estimation model and the motion detection model to detect the captured multiple frames of images.
- The expression capture mode can be determined by calling the face attribute detection model to detect the captured multi-frame images. It should be understood that the relationships between the various modes and models listed here are only examples, and should not constitute any limitation on this application.
- a capture mode can be determined by the detection results of multiple models. In other words, a capture mode can be determined by calling multiple models to detect multiple frames of images captured. This application does not limit the models corresponding to various capture modes.
- the electronic device may first call the human frame detection model or the scene recognition model. When it is determined that the captured multi-frame images meet the triggering condition of the motion capture mode, it can be directly determined that the first capture mode is the motion capture mode. The electronic device can no longer call the face attribute detection model, thereby saving the time for determining the capture mode and at the same time saving power consumption caused by the running of the model.
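- The priority-ordered invocation can be sketched as follows (the priority order, model names, and trigger predicates are illustrative assumptions):

```python
# Sketch: try high-priority capture modes first and stop as soon as one
# triggers, so lower-priority detection models are never called.

MODE_PRIORITY = [
    ("sports", ["scene_recognition"],
     lambda r: r["scene_recognition"] in ("court", "swimming_pool", "running_track")),
    ("expression", ["face_attribute"],
     lambda r: len(r["face_attribute"]) >= 1),
]

def determine_mode_by_priority(frames, models):
    for mode, model_names, triggered in MODE_PRIORITY:
        results = {name: models[name](frames) for name in model_names}
        if triggered(results):
            return mode  # early exit: remaining models need not run
    return None
```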
- Step 220: The evaluation strategy corresponding to the first capture mode is used to determine the captured frame image corresponding to the first capture mode from among the captured images to be selected.
- the evaluation strategy is one of a variety of preset evaluation strategies.
- Each evaluation strategy can be used to define a rule or method for determining the captured frame image among multiple frames to be selected.
- the captured frame image corresponding to the first capture mode may specifically refer to an image that is determined based on the first capture mode to best show the wonderful moments of the first capture mode.
- When the first capture mode is a sports capture mode, the captured frame image may be, for example, the image that best reflects the highlight action moment of the subject among the captured multiple frames to be selected.
- When the first capture mode is an expression capture mode, such as a smiling face capture, the captured frame image may be, for example, the image at the moment when the subject's smile is the brightest among the captured multiple frames to be selected.
- When the first capture mode is a group photo capture mode, the captured frame image may be, for example, the image that best captures the moment when the expression, demeanor, or composition of each subject is at its best among the captured multiple frames to be selected.
- the multiple frames of images to be selected herein and the multiple frames of images described in step 210 above may not overlap each other, or partially overlap.
- the overlap mentioned here may specifically mean that a certain frame of image in step 210 and a certain frame of to-be-selected image in step 220 are the same frame of image, or in other words, are images captured at the same time point.
- The multiple frames of images to be selected may be multiple frames of continuous images captured after the multiple frames of images captured in step 210, or they may further include at least part of the multiple frames of images captured in step 210.
- the multiple frames of images to be selected may be multiple frames of discontinuous images after the multiple frames of images captured in step 210.
- the multi-frame image to be selected may be a multi-frame image that is after the multi-frame image captured in step 210 and is not continuous with the multi-frame image.
- the captured multi-frame images in step 210 may include, for example, preview images captured before the user performs a photographing operation or a video recording operation. Normally, after turning on the camera, the electronic device enters the photo mode by default. Although the user did not perform the photographing operation, the preview image can still be seen through the photographing interface.
- the above-mentioned multi-frame images may include multi-frame preview images captured in the photographing mode. Thereafter, the electronic device may trigger the first capture mode based on the user's manual adjustment or the detection result of the detection model. That is, the electronic device enters the smart capture mode. In the smart capture mode, although the user may not perform a photo operation, multiple frames of preview images can still be captured.
- the above-mentioned multi-frame images may also include multi-frame preview images captured after entering the smart capture mode.
- the electronic device can save the above-mentioned multi-frame images in a buffer for subsequent use, for example, in step 210 to determine the first capture mode.
- a preview image can still be captured for determining the first capture mode.
- The state in which the camera is in the photo mode, the smart capture mode, or another shooting mode before the photographing operation is performed may be referred to as the preview mode.
- the user can observe the photographed subject in the preview mode, so as to select the appropriate time to perform the photographing operation.
- the preview mode is not necessarily before the photographing operation, and the mode in between two consecutive photographing operations may also be referred to as the preview mode.
- the multiple frames of images to be selected in step 220 may be, for example, N frames of images before and after the moment the shutter is pressed, and N is a positive integer.
- N is 10
- the multiple frames of images to be selected may be the first 10 frames of images and the last 10 frames of images at the moment when the shutter is pressed, a total of 20 frames of images.
- the multiple frames of images to be selected may not overlap with the multiple frames of images described in step 210, or there may be some overlaps. This may depend on the length of time between the time when the shutter is pressed and the time when the electronic device enters the first capture mode, the value of N, and so on.
- the multiple frames of images to be selected may include part or all of the multiple frames of images described in step 210.
- This application does not limit the relationship between the multiple frames of images described in step 210 and the multiple frames of images to be selected in step 220.
- the multiple frames of images to be selected may also be the first N frames of images or the last N frames of images at the moment when the shutter is pressed, which is not limited in the embodiment of the present application.
- the user can press the shutter by clicking the shooting control in the user interface or other buttons for controlling the shooting during the shooting process, and the specific operation of the user pressing the shutter is not limited in this application. It should also be understood that the value of N above is only an example for ease of understanding, and should not constitute any limitation to this application. This application does not limit the specific value of N.
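- Collecting the candidate frames around the shutter moment can be sketched with a simple buffer (the value of N and the class layout are illustrative assumptions):

```python
from collections import deque

N = 10  # illustrative: keep N frames before and N frames after the shutter moment

class CandidateBuffer:
    def __init__(self, n=N):
        self.n = n
        self.before = deque(maxlen=n)  # most recent n frames before the shutter
        self.after = []
        self.shutter_pressed = False

    def push(self, frame):
        if not self.shutter_pressed:
            self.before.append(frame)
        elif len(self.after) < self.n:
            self.after.append(frame)

    def press_shutter(self):
        self.shutter_pressed = True

    def candidates(self):
        """Up to 2*N frames of images to be selected, to be scored later."""
        return list(self.before) + self.after
```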
- the multi-frame images described in step 210 may also include video images saved in the video recording mode.
- the video image may be a sequence of continuous multi-frame images.
- the electronic device can save these images in the cache for subsequent use, for example, to determine the first capture mode in step 210 and to determine the captured frame image corresponding to the first capture mode in step 220.
- The multiple frames of images to be selected in step 220 may be, for example, the N frames of images before, the N frames of images after, or the N frames of images before and after the moment the shutter is pressed, or they may be images whose scoring results, obtained with the evaluation strategy provided in the embodiments of this application, exceed a preset threshold. This application does not limit this.
- the captured frame image corresponding to the first capture mode may be an optimal frame image determined from a plurality of captured images to be selected. For example, based on the evaluation strategy corresponding to the first capture mode, multiple captured images to be selected may be scored, and the image with the highest score may be selected as the optimal frame image.
- the method 200 further includes:
- Step 230: Perform image recognition on the captured multiple frames of images to be selected to output a recognition result.
- Step 240: Determine the value of each scoring parameter based on the recognition result.
- the electronic device can call one or more of the above detection models to perform image recognition on the captured multiple frames of images to be selected.
- The electronic device can use the evaluation strategy corresponding to the first capture mode to score the captured images to be selected according to the recognition results of the images to be selected, so that the captured frame image corresponding to the first capture mode can be determined according to the scoring results.
- step 210 may specifically include: determining the first capture mode among multiple preset capture modes based on the first frame rate according to the captured multi-frame images.
- step 230 may specifically include: performing image recognition on the captured multiple frames of images to be selected based on the second frame rate to output the recognition result.
- the first frame rate may be equal to or less than the second frame rate. This is related to the shooting mode used by the electronic device. The following will be described in detail in conjunction with the embodiments shown in FIG. 4 to FIG. 6.
- the following describes in detail the specific process of using the evaluation strategy corresponding to the first capture mode to determine the captured frame image corresponding to the first capture mode among the captured multiple frames to be selected.
- each of the multiple capture modes corresponds to at least one of the preset multiple evaluation strategies, and each evaluation strategy includes one or more scoring parameters for image scoring and each The mode weight of the scoring parameter.
- Step 220 may specifically include: using one or more scoring parameters, and the mode weight of each scoring parameter, in one of the at least one evaluation strategy corresponding to the first capture mode to calculate the score of each frame of image to be selected in the captured multiple frames of images to be selected; and determining, according to the multiple scores of the multiple frames of images to be selected, the captured frame image corresponding to the first capture mode among the multiple frames of images to be selected.
- the aforementioned multiple capture modes can correspond to multiple preset evaluation strategies.
- Each capture mode can correspond to one or more of the preset multiple evaluation strategies.
- Each evaluation strategy may include a scoring parameter set corresponding to the first capture mode.
- Each scoring parameter set may include one or more scoring parameters.
- the scoring parameter corresponding to the sports capture mode may include one or more of the following: posture stretch, posture height, and so on.
- the scoring parameters corresponding to the facial expression capture mode may include: facial expression strength, facial occlusion, eyes open and closed, and face angle.
- Posture stretch, which may also be called posture extension, can specifically refer to the degree of bending of the limbs and their relative distance from the trunk.
- the posture stretch can be obtained by weighting the angles of the body joints, and the angle parameters of the body joints that are strongly related to the human body movement can be pre-set.
- the degree of posture stretch can be determined by, for example, a posture estimation model and a motion detection model.
- The degree of posture stretch may further include, for example, the angles of the various joints of the human body, such as, but not limited to, wrist joint angles, elbow joint angles, arm bending angles, leg bending angles, knee joint angles, ankle angles, and so on. For the sake of brevity, they are not all listed here.
- the included angle of each joint may exist as a scoring parameter.
- the posture stretch can be understood as a generalization of the included angle of each joint point.
- The posture height may specifically refer to the height position of the center of the body in the image.
- the expression intensity can specifically indicate the intensity of a certain expression when the subject expresses it.
- the expression intensity can be determined by, for example, a face attribute detection model, or it can be calculated by feature points.
- the expression intensity can be obtained by weighting the various local features of the face.
- The expression intensity may further include, for example, the degree of grinning, the degree to which the corners of the mouth are raised, and the degree of opening and closing of the eyes. For the sake of brevity, they are not all listed here.
- each local feature included in the expression intensity listed above may exist as a scoring parameter.
- the intensity of expression can be understood as a generalization of the above-mentioned local features.
- Eyes open and closed specifically refers to whether the subject has closed eyes. Eye opening and closing can be determined by, for example, a face attribute detection model.
- Face occlusion specifically refers to whether the face of the subject is occluded and the degree of occlusion. Facial occlusion can be calculated by feature points, for example.
- the face angle specifically refers to whether the face of the subject is tilted and the tilt angle. The face angle can be determined by, for example, a face attribute detection model.
- the scoring parameters can also include sharpness, exposure, and composition.
- Sharpness can specifically refer to the clarity of the detail lines and their boundaries in the image. Sharpness is a parameter used to describe image quality.
- Exposure can specifically refer to the process in which the photosensitive element of the camera receives external light and then forms an image. The amount of external light received by the photosensitive element directly affects the brightness of the photo. According to the light receiving degree of the photosensitive element, it can be roughly divided into three situations: underexposure, correct exposure and overexposure.
- Composition can specifically refer to the process of identifying and organizing elements to produce a harmonious photo. The composition may specifically include, but is not limited to, rule-of-thirds composition (also called nine-square-grid composition), symmetrical composition, frame composition, and the like, which is not limited in this application.
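- Sharpness, for example, is commonly estimated as the variance of the Laplacian of the grayscale image; the sketch below uses OpenCV and shows only one common approach, not necessarily the one used in this application:

```python
import cv2

def sharpness_score(image_bgr):
    """Variance of the Laplacian: higher values indicate a sharper image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()
```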
- scoring parameters corresponding to the capture modes listed above are only examples, and should not constitute any limitation to this application. This application does not limit the specific content and names of the scoring parameters corresponding to each capture mode.
- the scoring parameters included in different evaluation strategies corresponding to the same capture mode may be the same. However, in different capture modes, the scoring parameters included in different evaluation strategies are not necessarily the same.
- different evaluation strategies corresponding to different capture modes include the same scoring parameters, and different evaluation strategies include different mode weights.
- the different evaluation strategies mentioned here include different mode weights. Specifically, it may mean that the mode weights applied to the same scoring parameter in different evaluation strategies are different. Moreover, when the evaluation strategy includes multiple scoring parameters, different evaluation strategies impose different mode weights on at least one scoring parameter.
- the mode weights corresponding to the same scoring parameter in different evaluation strategies can be different.
- different evaluation strategies corresponding to different capture modes may respectively include different mode weights corresponding to the same scoring parameter.
- the scoring parameters included in the evaluation strategy corresponding to the sports capture mode can include posture height, posture stretch, sharpness, exposure, and composition; or, the scoring parameters included in the evaluation strategy corresponding to the sports capture mode can also include posture height, posture stretch, expression intensity, eyes open and closed, face occlusion, face angle, sharpness, exposure, and composition, but with smaller mode weights applied to expression intensity, eyes open and closed, face occlusion, and face angle, for example, zero or close to zero. Therefore, the two statements are essentially the same.
- scoring parameters listed above are only examples and should not constitute any limitation to this application.
- As long as the scoring parameters corresponding to the sports capture mode include either of the scoring parameters of posture height and posture stretch, it should fall within the protection scope of the embodiments of the present application.
- the scoring parameters corresponding to the sports capture mode may also include, for example, rotation, etc.
- the application does not limit the scoring parameters corresponding to the sports capture mode and the mode weights thereof.
- the scoring parameters included in the evaluation strategy corresponding to the expression capture mode may include expression intensity, eyes open and closed, face occlusion, face angle, sharpness, exposure, and composition; or, the scoring parameters included in the evaluation strategy corresponding to the expression capture mode can also include expression intensity, eyes open and closed, face occlusion, face angle, posture height, posture stretch, sharpness, exposure, and composition, but with smaller mode weights applied to posture height and posture stretch, for example, zero or close to zero. Therefore, the two statements are essentially the same.
- scoring parameters listed above are only examples and should not constitute any limitation to this application.
- As long as the scoring parameters corresponding to the expression capture mode include any of the scoring parameters of expression intensity, eyes open and closed, face occlusion, and face angle, it should fall within the protection scope of this application.
- This application does not limit the scoring parameters corresponding to the facial expression capture mode and the mode weights.
- In other words, in different capture modes, the same scoring parameter may have different mode weights.
- For example, the posture height and posture stretch listed above are given a higher weight in the evaluation strategy corresponding to the sports capture mode, and are given a lower weight, or no weight (weight 0), in the evaluation strategy corresponding to the expression capture mode. Conversely, the expression intensity, eyes open and closed, face occlusion, and face angle listed above are given a higher weight in the evaluation strategy corresponding to the expression capture mode, and are given a lower weight, or no weight (weight 0), in the evaluation strategy corresponding to the sports capture mode.
- In addition, the mode weights of sharpness, composition, and exposure have little relationship with the capture mode, so the same mode weights can be defined for them in different capture modes. It should be understood that the above description only combines different capture modes to explain different scoring parameters and their mode weights for ease of understanding, and should not constitute any limitation to this application.
- The evaluation strategy corresponding to each capture mode may be predefined. Once the evaluation strategy corresponding to the first capture mode is determined, the scoring parameters used when scoring the images to be selected and the mode weight of each scoring parameter are determined.
- the score of the image to be selected can be determined by the formula G = Σ_{i=1}^{I} (ω_i × T_i), where:
- G represents the scoring result, G > 0;
- I represents the number of scoring parameters, I ≥ 1 and I is an integer;
- i represents the i-th scoring parameter among the I scoring parameters, 1 ≤ i ≤ I, and i is an integer;
- T_i represents the value of the i-th scoring parameter, T_i ≥ 0;
- ω_i represents the mode weight of the i-th scoring parameter, ω_i ≥ 0.
- In other words, the score of each frame of image may be the result of weighting the values of the scoring parameters T_i corresponding to the first capture mode by the mode weight ω_i of each scoring parameter.
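- As a rough illustration of the weighted-sum form G = Σ(ω_i × T_i) described above, the following Python sketch computes a mode-weighted score; the scoring-parameter names and weight values are hypothetical and are not taken from this application.

```python
# Minimal sketch of the mode-weighted scoring formula G = sum_i(w_i * T_i).
# Parameter names and example weights are hypothetical; a real evaluation
# strategy would define them per capture mode.

def score_frame(param_values: dict, mode_weights: dict) -> float:
    """Weight the value T_i of each scoring parameter by its mode weight w_i."""
    return sum(mode_weights.get(name, 0.0) * value
               for name, value in param_values.items())

# Hypothetical strategy for a sports-like capture mode: posture-related parameters
# get high weights, expression-related parameters get (near-)zero weights.
sports_mode_weights = {
    "posture_height": 0.35, "posture_stretch": 0.35,
    "sharpness": 0.1, "exposure": 0.1, "composition": 0.1,
    "expression_intensity": 0.0,
}

candidate = {"posture_height": 0.8, "posture_stretch": 0.9,
             "sharpness": 0.7, "exposure": 0.6, "composition": 0.5}
print(score_frame(candidate, sports_mode_weights))  # G > 0 for this frame
```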
- the electronic device can select one of the multiple evaluation strategies for scoring, for example, a default evaluation strategy or a general evaluation strategy, etc.; or, the multiple evaluation strategies can correspond to multiple capture categories, and the electronic device can select the corresponding evaluation strategy based on the capture category determined by the foregoing detection of the multiple frames of images by the detection models.
- An evaluation strategy corresponding to the first capture mode may be, for example, the default evaluation strategy described above.
- the following takes different capture modes as examples to describe in detail the specific process of the electronic device determining the captured frame image corresponding to the first capture mode from multiple frames of images to be selected.
- the electronic device can call the detection model corresponding to the first capture mode to perform image recognition on the image to be selected to obtain the recognition result.
- the model corresponding to the first capture mode may include, for example, a pose estimation model and a motion detection model.
- the electronic device can call the pose estimation model to perform image recognition on the captured multiple frames of images to be selected.
- the pose estimation model can perform image recognition on each frame of the image to be selected, and obtain the coordinate information of multiple pose points in each frame of the image to be selected.
- the coordinate information of each posture point in each frame of the image to be selected is the recognition result output by the posture estimation model. Since the specific process of determining the coordinate information of each posture point by the posture estimation model has been explained in step 210 above, for the sake of brevity, it will not be repeated here.
- the electronic device can also call an action detection model to perform image recognition on multiple frames of images to be selected.
- the action detection model can be combined with the pose estimation model to identify the action category of the subject.
- the action category recognized according to each frame of the image to be selected may be the recognition result output by the action detection model. Since the specific process of determining the action category by the action detection model has been explained in step 210 above, for the sake of brevity, it will not be repeated here.
- the action detection model can specifically indicate the recognized action category through an action type index or other information that can be used to uniquely indicate an action category. This application does not limit the specific form of the recognition result output by the motion detection model.
- the evaluation strategy does not necessarily correspond to the action category. Therefore, the electronic device may call the motion detection model to recognize the action category, or may not call the motion detection model, which is not limited in this application.
- If the first capture mode is the expression capture mode, the model corresponding to the first capture mode may be, for example, a face attribute detection model.
- the electronic device can call the face attribute detection model to perform image recognition on the captured multiple frames of images to be selected.
- the face attribute detection model can establish a classification model of face attributes based on face attributes, for example, including expression category (such as happy, angry, sad, funny, etc.), closed eyes, feature points, age, etc. As described above, the facial feature point detection model, the open and closed eye detection model, etc., are used to perform image recognition on each frame of the image to be selected, and to output information such as the expression category of the subject, whether the eyes are closed, whether the face is occluded, and the age.
- the facial expression category, whether the eyes are closed, whether the face is blocked, and the age of the subject in each frame of the image to be selected are the recognition results output by the face attribute detection model based on the recognition of each frame of the image to be selected.
- the face attribute detection model can detect the expression of the subject based on a variety of pre-trained expression categories. When it is determined that the expression of the subject belongs to one of the multiple pre-trained expression categories, that expression category may be determined as the expression category of the subject. In addition, different expression categories can correspond to different priorities. When the face attribute detection model determines multiple expression categories, they can be sorted according to the predefined priorities, and the expression category with the highest priority is determined as the expression category of the subject.
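- Purely as an illustration of the priority rule described above, the sketch below picks the highest-priority expression category from a set of detected categories; the priority order is an assumption for illustration only.

```python
# Hypothetical priority order: lower index means higher priority.
EXPRESSION_PRIORITY = ["happy", "funny", "sad", "angry"]

def pick_expression(detected_categories):
    """Return the highest-priority category among those detected, or None."""
    ranked = sorted(detected_categories,
                    key=lambda c: EXPRESSION_PRIORITY.index(c)
                    if c in EXPRESSION_PRIORITY else len(EXPRESSION_PRIORITY))
    return ranked[0] if ranked else None

print(pick_expression(["sad", "happy"]))  # -> "happy" under this assumed priority
```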
- After the face attribute detection model completes the image recognition of the images to be selected, it can output the recognition result.
- For the expression category, it can be indicated by an expression type index or other information that can be used to uniquely indicate an expression category; for eyes open and closed, for example, the binary values "0" and "1" can be used to indicate "eyes closed" and "eyes open"; for the feature points, the position of each feature point can be indicated by coordinate information; for the age, it can be indicated by a specific numerical value.
- the face attribute detection model may not output the expression category.
- the electronic device can determine information such as the expression intensity, whether the eyes are closed, and whether the face is occluded according to the position of each feature point and the age in each frame of the image to be selected, and use the evaluation strategy corresponding to the first capture mode to score each frame of the image to be selected.
- the above only shows several possible implementations for indicating the detection result for ease of understanding, and should not constitute any limitation to the application. This application does not limit the specific indication method of the detection result.
- the scoring parameters corresponding to the facial expression capture modes listed above are only examples, and should not constitute any limitation in this application. As long as the scoring parameters corresponding to the expression capture mode include any of the parameters of expression intensity, face occlusion, eyes opening and closing, and face angle, it should fall within the protection scope of this application.
- the scoring parameter may also include one or more of expression strength, face occlusion, eyes open and closed, and face angle. For brevity, the following text will not be repeated.
- When the electronic device calls one or more detection models for image recognition, it can call the detection model corresponding to the first capture category to perform image recognition, as in the example above.
- the capture category may be a specific sub-mode or classification obtained by further dividing the capture mode.
- Alternatively, the electronic device can also call multiple predefined detection models for image recognition, such as calling a face attribute detection model, a pose estimation model, and an action detection model. Based on different capture modes, the mode weights applied to each scoring parameter are different. Therefore, although multiple detection models are called, the electronic device will apply different mode weights based on the capture mode when scoring based on the results of image recognition, so this will not affect the final selection result. Therefore, this application does not limit the specific model used for image recognition.
- the electronic device can obtain the value of each scoring parameter based on the recognition result. For example, if the first capture mode is a sports capture mode, the electronic device can determine parameters such as the height of the human skeleton, the angle of each joint, and the like according to the coordinate information of the posture point.
- the electronic device can determine the value of the scoring parameter of the posture height based on the height of the human skeleton. For example, the center point of the human skeleton, the highest point of the human skeleton, etc. can be used as the value of the posture height. It should be understood that when the electronic device selects a certain point (such as the center point of the human skeleton) as the value of the posture height, the same point is selected for all the images to be selected to determine the value of the posture height.
- the electronic device can determine the posture stretch based on the included angle of each joint point.
- the posture stretch can be determined by the curvature of each joint point of the human body, so the posture stretch can include the value of the curvature of each joint point.
- the value of posture stretch can be weighted by the included angle of each joint point.
- the weight of the included angle of each joint point may be predefined, that is, the weight of the mode described above.
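- As an illustration of how a joint angle and a posture-stretch value might be derived from pose-point coordinates, the sketch below computes the angle at a joint from three keypoints and then forms a weighted sum; the keypoint layout, the choice of center height for posture height, and the weights are assumptions, not the method mandated by this application.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by points a-b-c, from 2D pose-point coordinates."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    cos_val = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_val))

# Hypothetical pose points (x, y) for one frame: shoulder, elbow, wrist, hip, knee, ankle.
pose = {"shoulder": (100, 80), "elbow": (120, 120), "wrist": (160, 130),
        "hip": (105, 160), "knee": (110, 220), "ankle": (112, 280)}

elbow = joint_angle(pose["shoulder"], pose["elbow"], pose["wrist"])
knee = joint_angle(pose["hip"], pose["knee"], pose["ankle"])

# Posture stretch as a weighted sum of joint angles; the weights are illustrative only.
posture_stretch = 0.6 * elbow + 0.4 * knee
# Posture height taken here as the skeleton's center height (same point for every frame).
posture_height = sum(y for _, y in pose.values()) / len(pose)
```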
- the determination of the value of each scoring parameter can also be done in other ways, which is not limited in this application. For example, an existing algorithm, such as an action intensity algorithm, can be used to determine the value of each scoring parameter.
- posture height and posture stretch are only one possible way of expression, and this application does not exclude the possibility of expressing the same or similar meaning through other possible expression ways.
- posture stretch can also be replaced by motion range.
- the electronic device may determine the value of each scoring parameter according to multiple feature points of the human face.
- the electronic device can determine the value of expression intensity according to multiple feature points.
- the expression intensity may be determined by various local features of the human face, such as, but not limited to, the size of the mouth grinning, the degree of raising the corners of the mouth, and the degree of opening and closing of the eyes. Therefore, the value of expression intensity can include the value of each local feature.
- the expression intensity can be obtained by weighting the values of the various local features of the human face. The weight of each local feature can be pre-defined, that is, the mode weight described above.
- For the degree of opening and closing of the eyes, the value can be determined by the ratio of the vertical distance to the horizontal distance of the eye; for the size of the grin, the value can be determined by the ratio of the sum of the distance between the upper and lower lips and the horizontal distance between the corners of the mouth to the distance between the eyes; for the degree of raising of the corners of the mouth, the value can be determined by the distance between the horizontal line through the corners of the mouth and the lower lip.
- the electronic device can determine whether any feature points are missing according to the detected multiple feature points, and if so, it can be considered that the face of the subject is occluded.
- For face occlusion, for example, the value can be determined by the ratio of the number of detected feature points to the number of predefined feature points.
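- As a sketch of the kind of computation described above, the following assumes a small set of hypothetical facial feature points and derives the eye-opening ratio and a face-occlusion ratio; the point names and the assumed size of the feature-point template are illustrative only.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical detected feature points (a real face attribute model outputs many more).
points = {
    "left_eye_top": (40, 50), "left_eye_bottom": (40, 56),
    "left_eye_left": (34, 53), "left_eye_right": (46, 53),
    "mouth_left": (38, 90), "mouth_right": (62, 90),
}

# Eye opening: ratio of the eye's vertical distance to its horizontal distance.
eye_open_ratio = (dist(points["left_eye_top"], points["left_eye_bottom"])
                  / dist(points["left_eye_left"], points["left_eye_right"]))

# Face occlusion: ratio of detected feature points to the predefined feature points;
# a low ratio suggests that part of the face is occluded.
PREDEFINED_POINT_COUNT = 68   # assumed size of the feature-point template
occlusion_value = len(points) / PREDEFINED_POINT_COUNT
```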
- the electronic device can also determine values for scoring parameters such as eyes open and closed, and face angles.
- the electronic device can also call an existing algorithm, such as an expression intensity algorithm, to determine a value for each scoring parameter.
- the electronic device can further load scoring parameters such as sharpness, exposure, and composition, and determine the value of each scoring parameter.
- For scoring parameters such as sharpness, exposure, and composition, taking composition as an example, the value of the composition scoring parameter can be determined based on different composition methods. For example, in the expression capture mode, the Jiugongge (rule of thirds) composition can be loaded; in the group photo capture mode, the symmetrical composition can be loaded; in the landscape capture mode, the horizontal line composition can be loaded. Therefore, the mode weight of the composition scoring parameter can also be defined based on different capture modes.
- For example, when the symmetrical composition is loaded, the distance from the center of all the people in the image to the center of the screen and the distance between two adjacent people can be calculated. Weights are applied to these two distances respectively to obtain a weighted sum, and the weighted sum can be used as the value of the composition scoring parameter.
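- A minimal sketch of the symmetrical-composition value described above, assuming each detected person is represented by the center of a bounding box; the two distance weights and the frame width are illustrative assumptions.

```python
def symmetric_composition_value(person_centers, frame_width, w_center=0.5, w_spacing=0.5):
    """Weighted sum of (a) distance from the group's center to the screen center and
    (b) average spacing between horizontally adjacent people; used as the parameter value."""
    xs = sorted(x for x, _ in person_centers)
    group_center = sum(xs) / len(xs)
    center_offset = abs(group_center - frame_width / 2)
    gaps = [b - a for a, b in zip(xs, xs[1:])] or [0.0]
    avg_gap = sum(gaps) / len(gaps)
    return w_center * center_offset + w_spacing * avg_gap

# Three people in a 1920-pixel-wide frame (hypothetical centers).
print(symmetric_composition_value([(600, 500), (960, 510), (1320, 505)], 1920))
```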
- When a scoring parameter is weighted from multiple parameter values, the parameter values can first be normalized to the same magnitude and then weighted. After determining the value of each scoring parameter, the electronic device can use the evaluation strategy corresponding to the first capture mode to determine the score of each frame of the image to be selected.
- each capture mode can correspond to one or more evaluation strategies, and each evaluation strategy defines the scoring parameters and the mode weights.
- the scoring parameters and their mode weights are preset. After the evaluation strategy is determined, the scoring parameters and the mode weights can be determined. As long as the electronic device substitutes the value of each scoring parameter determined for each frame of the image to be selected, the score of each frame of the image to be selected can be obtained.
- the electronic device can use an evaluation strategy corresponding to the first capture mode to substitute the value of the scoring parameter previously determined for each frame of the image to be selected to calculate the score of each frame of the image to be selected.
- the score of each frame of the image to be selected can be calculated by the electronic device, for example, based on the formula listed above. The meaning of each parameter in the formula has been explained above, and for the sake of brevity, it will not be repeated here.
- the scores of each scoring parameter can also be normalized to the same magnitude.
- the electronic device may determine the captured frame image corresponding to the first capture mode according to the score of each frame of the image to be selected.
- the captured frame image corresponding to the first capture mode may be, for example, a frame with the highest score determined based on the scores of multiple frames of images to be selected. In other words, among the multiple frames of images to be selected, the score of the captured frame image corresponding to the first capture mode is higher than the score of any frame of the image to be selected except the captured frame image.
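- For completeness, a small sketch of selecting the captured frame as the highest-scoring candidate, assuming each candidate frame has already been scored with the strategy above; the frame identifiers and scores are made up for illustration.

```python
# candidates: list of (frame_id, score) pairs produced by the evaluation strategy.
candidates = [("frame_01", 2.31), ("frame_02", 2.87), ("frame_03", 2.45)]

# The captured frame image is the candidate whose score is higher than any other's.
captured_frame_id, best_score = max(candidates, key=lambda item: item[1])
print(captured_frame_id, best_score)
```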
- the electronic device may, after determining the captured frame image corresponding to the first capture mode, save the captured frame image in the electronic device, or output to the display unit of the electronic device or the like. This application does not limit this.
- each capture mode can correspond to multiple evaluation strategies.
- the multiple evaluation strategies in each capture mode can further correspond to multiple capture categories.
- one or more capture categories corresponding to the sports capture mode may include at least one of the following: shooting, running, jumping, swimming, kicking, etc.
- the one or more capture categories corresponding to the facial expression capture mode include at least one of the following: happy, angry, sad, funny, etc. It should be understood that the above listing of the capture categories corresponding to each capture mode is only an example, and should not constitute any limitation to this application. This application does not limit the specific capture category corresponding to each capture mode.
- each type of capture may correspond to an evaluation strategy.
- each evaluation strategy includes one or more scoring parameters corresponding to the first capture mode, the mode weight of each scoring parameter, and the category weight of each scoring parameter corresponding to one capture category. In other words, in the first capture mode, each scoring parameter can be defined with a mode weight; under each capture category corresponding to the first capture mode, each scoring parameter can further be defined with a category weight.
- different evaluation strategies corresponding to different capture categories include the same scoring parameters, and the different evaluation strategies include different category weights.
- the category weights included in the different evaluation strategies mentioned here are different. Specifically, it may mean that the category weights applied to the same scoring parameter in different evaluation strategies are different. Moreover, when the evaluation strategy includes multiple scoring parameters, different evaluation strategies apply different category weights to at least one scoring parameter. In other words, different evaluation strategies corresponding to different capture categories may respectively include different category weights corresponding to the same scoring parameter.
- the capture category determined by the electronic device according to the captured multiple frames of images is the first capture category. Based on the difference in the first capture category, the category weights corresponding to the same scoring parameter in the same capture mode are also different.
- In this case, the score of the image to be selected can be determined by the formula G = Σ_{i=1}^{I} (γ_i × ω_i × T_i), where γ_i represents the category weight of the i-th scoring parameter, γ_i ≥ 0, and the meanings of G, I, i, T_i, and ω_i are as described above.
- For different capture categories, the category weight applied to the same scoring parameter can be different.
- the first capture mode is a sports capture mode.
- the corresponding scoring parameters in the sports capture mode may include: posture height, posture stretch, and so on.
- the sports capture mode can include shooting, diving, swimming, running and other action categories.
- For example, the category weight of the posture height is higher than the category weight of the posture stretch, and within the posture stretch, the category weights of the elbow joint angle and the arm bending angle are higher than the category weights of other parameters (such as the knee joint angle, the ankle joint angle, etc.).
- In another example, the category weight of the knee joint angle is higher than the category weight of the posture height, and the category weight of the posture height is higher than the category weight of the arm bending angle.
- In yet another example, the category weight of the leg bending angle is higher than the category weight of the arm bending angle, and the category weight of the arm bending angle is higher than the category weight of the posture height.
- In still another example, the category weights of the leg bending angle and the arm bending angle are higher than the category weight of the posture height. It should be understood that these examples are only for ease of understanding and should not constitute any limitation to this application. This application does not limit the distribution of the category weight of each scoring parameter under each capture category.
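- The sketch below illustrates, under stated assumptions, how a category weight γ_i could be layered on top of the mode weight ω_i from the formula above, so that the same scoring parameters are emphasized differently for different action categories; all parameter names and weight values are hypothetical.

```python
def score_with_category(param_values, mode_weights, category_weights):
    """G = sum_i(gamma_i * w_i * T_i): category weight applied on top of the mode weight."""
    return sum(category_weights.get(n, 0.0) * mode_weights.get(n, 0.0) * v
               for n, v in param_values.items())

mode_weights = {"posture_height": 0.5, "elbow_angle": 0.2, "knee_angle": 0.2, "sharpness": 0.1}

# Hypothetical category weights: a shooting-like category emphasizes posture height and
# elbow angle, while a running-like category emphasizes the knee angle instead.
category_weights = {
    "shooting": {"posture_height": 1.0, "elbow_angle": 1.0, "knee_angle": 0.2, "sharpness": 1.0},
    "running":  {"posture_height": 0.5, "elbow_angle": 0.3, "knee_angle": 1.0, "sharpness": 1.0},
}

frame = {"posture_height": 0.9, "elbow_angle": 0.8, "knee_angle": 0.4, "sharpness": 0.7}
for category, gammas in category_weights.items():
    print(category, round(score_with_category(frame, mode_weights, gammas), 3))
```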
- the above step 210 may further include: determining the first capture category in the first capture mode according to the captured multiple frames of images.
- the above-mentioned step 220 may further include: using the one or more scoring parameters corresponding to the first capture mode, the mode weight of each scoring parameter, and the category weight of each scoring parameter corresponding to the first capture category, to calculate the score of each frame of the image to be selected in the multiple frames of images to be selected.
- the electronic device may further determine the first capture category by calling the motion detection model or the face attribute detection model.
- the following examples illustrate the specific process of determining the first capture category based on the first capture mode, and scoring multiple frames of images to be selected according to the evaluation strategy corresponding to the first capture category.
- the electronic device may determine the first capture category based on the first capture mode.
- the electronic device can at least call the pose estimation model and the motion detection model to perform image recognition on the image to be selected.
- the action detection model can recognize the action category based on the coordinate information of multiple posture points in each frame of the image to be selected input by the posture estimation model.
- the action detection model can construct the coordinate changes of each posture point during the movement process based on the coordinate information of the multiple posture points in each frame of the image to be selected to determine the first action category.
- If a specific action category cannot be recognized, the action category can be determined as the default category; in other words, the action category can be classified as the default category. In this case, the action detection model can output the action category of the subject as the default category, more specifically, the default category in the sports capture mode.
- the electronic device can at least call the face attribute detection model to perform image recognition on the image to be selected.
- the face attribute detection model can determine the expression category based on the analysis of the feature points of each frame of the image to be selected.
- If a specific expression category cannot be recognized, the expression category may be determined as the default category; in other words, the expression category can be classified as the default category. In this case, the face attribute detection model can output the expression category of the subject as the default category, more specifically, the default category in the expression capture mode.
- the electronic device can use the corresponding evaluation strategy to substitute various scoring parameters to determine the score of each frame of the image to be selected.
- Since the capture category further refines the first capture mode, when the electronic device weights the value of each scoring parameter, it can further take into account the different details that the capture category pays attention to and apply a category weight to each scoring parameter, so as to calculate a score that better matches the first capture category in the first capture mode.
- For example, the category weights applied to the elbow joint angle and the arm bending angle can be higher than the category weights of other parameters (such as the knee joint angle, the ankle joint angle, etc.), and the category weight applied to the posture height can be higher than the category weight of the posture stretch.
- the electronic device can use the evaluation strategy corresponding to the first capture category in the first capture mode, and substitute the values of the scoring parameters previously determined for each frame of the image to be selected, to calculate the score of each frame of the image to be selected.
- the score of each frame of the image to be selected can be calculated by the electronic device, for example, based on the formula listed above. The meaning of each parameter in the formula has been explained above, and for the sake of brevity, it will not be repeated here. When the scores of multiple scoring parameters are of different magnitudes, the scores of each scoring parameter can also be normalized to the same magnitude.
- the electronic device may determine the captured frame image corresponding to the first capture mode according to the score of each frame of the image to be selected.
- the captured frame image corresponding to the first capture mode may be, for example, a frame with the highest score determined based on the scores of multiple frames of images to be selected. In other words, among the multiple frames of images to be selected, the score of the captured frame image corresponding to the first capture mode is higher than the score of any frame of the image to be selected except the captured frame image.
- the electronic device may, after determining the captured frame image corresponding to the first capture mode, save the captured frame image in the electronic device, or output to the display unit of the electronic device or the like.
- This application does not limit this.
- The category weight of some scoring parameters can also be defined as zero, or close to zero. In this case, it can be considered that the scoring parameters corresponding to the action category do not include those scoring parameters. From this perspective, the scoring parameters included in the multiple evaluation strategies corresponding to the same capture mode are not necessarily the same.
- the above-mentioned multiple frames of images to be selected may be the first N frames, the last N frames, or the N frames before and after the moment when the shutter of the electronic device is pressed, or one or more frames whose scores exceed a preset threshold during the recording process.
- These multiple frames of images are stored in the electronic device, and can also be sent to the display unit and presented to the user. This application does not limit this.
- each frame of image may correspond to a time stamp.
- the electronic device can search for and obtain an image matching the time stamp of the captured frame image from the camera module.
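- As an illustration of the timestamp lookup described above, the sketch below searches a list of cached frames for the one whose timestamp best matches the captured frame's timestamp; the cache layout and the millisecond timestamps are assumptions.

```python
# Hypothetical cache: each cached frame carries a timestamp in milliseconds.
cached_frames = [{"ts": 1000, "data": b"..."}, {"ts": 1033, "data": b"..."},
                 {"ts": 1066, "data": b"..."}]

def find_by_timestamp(frames, target_ts):
    """Return the cached frame whose timestamp is closest to the captured frame's timestamp."""
    return min(frames, key=lambda f: abs(f["ts"] - target_ts))

matched = find_by_timestamp(cached_frames, target_ts=1030)
print(matched["ts"])  # 1033
```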
- After the camera module encodes the image, it can be pushed to the display unit and presented to the user.
- the image captured based on the user's photographing operation can also be saved in the user's photo album after being encoded.
- the captured frame image and/or the frame of the image actually captured by the user can also be distinguished by marks, for example, by leaving a "best moment" mark or another similar mark on the captured frame image, or by leaving different marks on the above two frames of images to show the distinction.
- the electronic device may also preprocess the captured multi-frame images before determining the first capture mode based on the captured multi-frame images, so as to obtain a more accurate estimation result.
- the image input to the above one or more models may be the original image or the image after preprocessing, which is not limited in this application.
- the method 200 further includes: performing image preprocessing on the captured multiple frames of images.
- the image processing module in the electronic device such as ISP, can perform image preprocessing on the image.
- Image preprocessing may include, for example, cropping each frame of image to meet the input size of the pose estimation model; it may also include processing each frame of image by, for example, mean subtraction and normalization, or data augmentation such as rotation. This application does not limit the specific content and implementation of image preprocessing.
- the pre-processed image can meet the input size of the aforementioned multiple models, and at the same time, the diversity of the data can be enhanced to prevent the model from overfitting.
- the capture mode can be determined according to the actual shooting scene.
- An evaluation strategy corresponding to the first capture mode can be selected from a plurality of preset evaluation strategies, and the captured frame image can be determined by using the evaluation strategy. For example, scoring parameters such as posture stretch and posture height are introduced in the sports capture mode and the multi-person sports capture mode, and scoring parameters such as expression intensity, eyes open and closed, face occlusion, and face angle are introduced in the expression capture mode and the group photo capture mode. The captured frame image corresponding to the first capture mode can be selected based on the corresponding evaluation strategy, with higher mode weights applied to the scoring parameters that different capture modes pay attention to, so that the captured image better matches the actual shooting scene. This is conducive to obtaining an ideal capture effect, improves flexibility, and is suitable for more scenes.
- In addition, the technical solution provided by the present application may further determine the weight of each scoring parameter based on different capture categories. For example, because different action categories focus on different details, among the multiple scoring parameters corresponding to the sports capture mode, the category weights configured for the scoring parameters differ across action categories. In the sports capture mode, the multiple frames of images to be selected are scored based on the category weights of the scoring parameters corresponding to the action category, which is conducive to obtaining an ideal captured frame image. Compared with recommending a captured frame image based on optical flow information, the solution provided in this application pays more attention to the action itself, so that a better capturing effect can be obtained. Because the electronic device continuously detects the captured multiple frames of images, and the images captured by the electronic device change as the camera keeps running, once it is detected that the images meet the trigger condition of another capture mode different from the first capture mode, the electronic device may switch to the other capture mode.
- the method further includes: determining a second capture mode based on the newly captured multi-frame images, where the second capture mode is a capture mode that is different from the first capture mode among the foregoing preset multiple capture modes; And switch to the second snapshot mode.
- a protection period can be set for each capture mode.
- the duration of the protection period can be a predefined value.
- the protection period of each capture mode can be of the same duration or different durations, which is not limited in this application. Therefore, before the electronic device switches to the second capture mode, it can be determined whether the running time of the first capture mode exceeds the preset protection period.
- If the running duration of the first capture mode does not exceed the protection period, the mode will not be switched and the electronic device will keep running in the first capture mode. However, if the running duration of the first capture mode exceeds the protection period, the electronic device can switch to the second capture mode based on the detection of the newly captured multiple frames of images.
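- The following sketch only illustrates the protection-period check described above (the timing value and the mode bookkeeping are assumptions): a newly triggered mode replaces the current one only after the current mode has run longer than its protection period.

```python
import time

PROTECTION_PERIOD_S = 3.0   # assumed duration of the protection period

class CaptureModeManager:
    def __init__(self, initial_mode):
        self.mode = initial_mode
        self.started = time.monotonic()

    def maybe_switch(self, triggered_mode):
        """Switch only if a different mode is triggered and the protection period has elapsed."""
        if triggered_mode == self.mode:
            return False
        if time.monotonic() - self.started < PROTECTION_PERIOD_S:
            return False          # still inside the protection period: keep the first mode
        self.mode = triggered_mode
        self.started = time.monotonic()
        return True

mgr = CaptureModeManager("sports")
print(mgr.maybe_switch("expression"))  # False if called within the protection period
```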
- FIG. 3 shows an example of a mobile phone interface.
- (a) in FIG. 3 shows the interface content 301 displayed by the system on the screen of the mobile phone when the mobile phone is in an unlocked state.
- the interface content 301 may include multiple icons, and the multiple icons may correspond to multiple applications (applications, apps), such as Alipay, Weibo, photo album, camera, WeChat, etc., which are not listed here.
- the mobile phone interface display after starting the camera application may be as shown in (b) in FIG. 3, and this interface may be referred to as the shooting interface of the camera.
- the shooting interface may include a viewfinder frame 303, an album icon 304, a shooting control 305, and the like.
- the viewfinder frame 303 is used to obtain a photographed preview image, and can display the preview image in real time. It should be understood that the preview images described above are not necessarily saved in the album. However, in the embodiment of the present application, the preview image may be stored in the cache of the mobile phone or in other storage units, which is not limited in the present application.
- the album icon 304 is used to quickly enter the album.
- When the mobile phone detects that the user clicks on the album icon, it can display the photos or videos that have been taken on the screen.
- the shooting control 305 can be used to take photos or videos. If the camera is in the photographing mode, when the mobile phone detects that the user clicks on the photographing control, the mobile phone performs the photographing operation and saves the photographed photos, which is the photographing stream described above. If the camera is in the video recording mode, when the mobile phone detects that the user clicks the shooting control, the mobile phone performs the video shooting operation; when the mobile phone detects that the user clicks the shooting control again, the video shooting ends. The phone can save the recorded video. In one implementation, the video can be saved by continuous multiple frames of images, that is, the video stream described above.
- the shooting interface may also include a functional control 306 for setting a shooting mode, such as the portrait mode, the photo mode, the video mode, the panoramic mode, etc. shown in (b) of FIG. 3.
- the user can switch the shooting mode by clicking the function control.
- the shooting interface may further include a camera rotation control 307, for example, as shown in (b) of FIG. 3.
- the camera rotation control 307 can be used to control the switching of the front camera and the rear camera.
- FIG. 3 is only for ease of understanding, and a mobile phone is taken as an example to illustrate in detail the process of the user opening the camera function or other functions through operations.
- the mobile phone interface shown in FIG. 3 is only an example, and should not constitute any limitation to this application. Different operating systems and different brands of mobile phones may have different mobile phone interfaces.
- the embodiments of the present application can also be applied to other electronic devices that can be used for taking pictures except for mobile phones.
- the interface shown in the figure is only an example, and should not constitute any limitation to this application.
- FIG. 4 is a schematic flowchart of a method for capturing an image according to another embodiment of the present application.
- the user can manually adjust the camera mode to the smart capture mode.
- the electronic device enters the smart capture mode.
- the embodiment shown in FIG. 4 mainly describes a method for an electronic device in the smart capture mode to capture an image.
- the method shown in FIG. 4 may be executed by an electronic device or a processor in the electronic device.
- In step 401, the captured multiple frames of images are periodically detected at a high frame rate.
- the electronic device can continuously obtain multiple frames of images captured by the electronic device from the cache, and call the aforementioned detection models (such as the face attribute detection model, the pose estimation model, and the motion detection model) to perform periodic detection on the captured images, so as to determine the first capture mode suitable for the current shooting.
- the electronic device can use a higher frame rate for detection, such as 30 frames per second.
- Each detection model can be run alternately to reduce power consumption.
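- A rough sketch of what running the detection models alternately could look like in practice (the round-robin scheduling and the stand-in model functions are assumptions based on the example above): each incoming frame is handed to only one model in turn, instead of running every model on every frame.

```python
from itertools import cycle

# Hypothetical stand-ins for the detection models mentioned above.
def face_attribute_model(frame): return {"model": "face", "frame": frame}
def pose_estimation_model(frame): return {"model": "pose", "frame": frame}
def action_detection_model(frame): return {"model": "action", "frame": frame}

models = cycle([face_attribute_model, pose_estimation_model, action_detection_model])

# At a high frame rate (e.g. ~30 fps), run only one model per frame to reduce power consumption.
for frame_index in range(6):
    detector = next(models)
    result = detector(frame_index)
    # ... feed `result` into the trigger-condition / scoring logic ...
```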
- step 401 may run through the entire process from step 401 to step 410.
- It should also be understood that the process of the electronic device calling the detection models to detect the multiple frames of images is continuously executed by the electronic device. Therefore, in this embodiment, for the sake of brevity, the process of calling the detection models to detect images will not be specifically described.
- In step 402, it is determined whether the trigger condition of the first capture mode is satisfied.
- the electronic device can determine that a trigger condition of a certain capture mode is satisfied based on the detection of multiple frames of images. For example, the trigger condition of the first snapshot mode is satisfied.
- If it is determined that the trigger condition of the first capture mode is satisfied, step 403 can be executed to enable the first capture mode; otherwise, step 401 can be continued to perform periodic detection of the captured multiple frames of images at a high frame rate.
- The trigger conditions for determining whether a certain capture mode is satisfied have been described in detail above. For the sake of brevity, they will not be repeated here. It should be noted that the specific capture mode enabled by the electronic device has no effect on the captured image itself. The electronic device only uses the corresponding evaluation strategy, based on the specific capture mode, to evaluate and recommend images.
- the electronic device may keep running in the first capture mode, and perform periodic detection of the newly captured image at a high frame rate.
- the electronic device may keep running in the first capture mode until it is determined that the trigger condition of another capture mode (for example, referred to as the second capture mode) is satisfied according to the detection of the newly captured image.
- step 404 is provided for the convenience of describing the following embodiments, and does not mean that the electronic device has performed a new operation.
- the electronic device can always keep running in the first capture mode, and continuously perform periodic detection of newly captured images at a high frame rate.
- The electronic device can run the first capture mode in the background without prompting the user through the shooting interface, so the user may not perceive it; the first capture mode can also be run in the foreground, in which case the user can perceive it through the shooting interface. This application does not limit this.
- In step 405, it is determined whether the trigger condition of the second capture mode is satisfied. Since the camera continues to operate, the electronic device can continuously detect the newly captured images. If it is not detected that the newly captured multiple frames of images satisfy the trigger condition of another capture mode (such as the second capture mode), that is, it is determined that the trigger condition of the second capture mode is not satisfied, step 404 can be performed to keep running in the first capture mode while continuously detecting the newly captured images periodically at a high frame rate.
- If it is determined that the trigger condition of the second capture mode is satisfied, step 406 may be executed to determine whether the running time of the first capture mode exceeds the preset protection period. If the running time of the first capture mode does not exceed the preset protection period, step 404 may be executed. If the running time of the first capture mode exceeds the preset protection period, step 407 may be executed to enable the second capture mode; that is, the electronic device switches to the second capture mode. After enabling the second capture mode, the electronic device keeps running in the second capture mode until it is determined, according to the newly captured images, that the trigger condition of another capture mode (for example, the third capture mode) is satisfied. At the same time, the newly captured images are periodically detected at a high frame rate.
- the figure does not show the step of determining that the electronic device meets the triggering condition of the third capture mode.
- When the trigger condition of the third capture mode is satisfied, the operation performed by the electronic device may be similar to the operation performed when the trigger condition of the second capture mode is satisfied in the figure. For the sake of brevity, it will not be detailed here again.
- It should be understood that the first capture mode and the second capture mode are different capture modes, and the third capture mode and the second capture mode are different capture modes; however, the first capture mode and the third capture mode may be the same capture mode or different capture modes, which is not limited in this application.
- the electronic device runs in the smart capture mode and continuously detects the captured images at a high frame rate.
- If the user's photographing operation is detected, step 408 may be executed: in response to the user's photographing operation, the photograph is taken and the image is saved.
- the saved image can be presented to the user through the display unit after subsequent processing such as encoding.
- If the electronic device does not detect the user's photographing operation, it can continue to run in the smart capture mode and continuously perform periodic detection of newly captured images at a high frame rate.
- the photographing operation may be an operation performed in the first capture mode, or it may be an operation performed in the second capture mode, depending on whether the electronic device has switched to the second capture mode before the photographing operation is performed. This application does not limit this.
- a corresponding evaluation strategy is used to score the multi-frame images to be selected.
- the electronic device may score the first N frames of images, the next N frames of images, or the previous and next N frames of images at the moment the shutter is pressed based on the user's photographing operation.
- the electronic device can call the corresponding detection model to perform image recognition on multiple frames of images according to the currently running capture mode, and output the recognition result.
- the electronic device can score each frame of the image to be selected according to the detection result.
- the models related to the motion capture mode and the multi-person motion capture mode may include, for example, a pose estimation model and a motion detection model.
- the detection model related to the facial expression capture mode and the group photo mode may include, for example, a face attribute detection model.
- the face attribute model may include, but is not limited to, a face feature point detection model, an open and closed eye model, etc., for example. This application does not limit this.
- the electronic device can call the pose estimation model and the motion detection model to perform image recognition on the captured images to be selected, so as to obtain the coordinate information of the pose points and the action category.
- the electronic device can determine an evaluation strategy according to the motion capture mode and the action category, and use the evaluation strategy to score each frame of captured images to be selected.
- the electronic device can call the face attribute detection model to perform image recognition on the captured images to be selected to obtain one of expression strength, eyes open and closed, face occlusion, and face angle Or multiple recognition results.
- the recognition result may include an expression category and related parameters that can be used to characterize one or more of expression strength, eye opening and closing, facial occlusion, and face angle.
- the electronic device can determine an evaluation strategy according to the expression capture mode and expression category, and use the evaluation strategy to score each captured image to be selected.
- the captured frame image corresponding to the currently running capture mode is determined.
- the electronic device can determine the captured frame image corresponding to the currently running capture mode according to the score of each frame to be selected.
- the electronic device may further obtain an image matching the timestamp from the cache, process the image, and present it to the user through the display unit.
- the electronic device can clean up the cache space occupied during scoring, and release the occupied cache space.
- the user can exit the smart capture mode through manual adjustment, or exit the camera function directly.
- the electronic device also exits the smart capture mode.
- the electronic device that exits the smart capture mode can still remain in the photo mode, and can perform periodic detection of the image at a low frame rate.
- the electronic device that exits the camera function can stop acquiring images and stop detecting. Each detection model can also be stopped.
- the above is only for ease of understanding, and only the first capture mode and the second capture mode are shown, but this should not constitute any limitation to the present application.
- The electronic device can continuously acquire newly captured images and continuously detect them. Therefore, as long as the electronic device does not exit the camera function, some or all of the above steps 404 to 410 can be executed in a loop. It should be noted that after switching from the first capture mode to the second capture mode, the second capture mode becomes the new first capture mode.
- FIG. 4 shows an example in which the method for capturing an image provided by an embodiment of the present application is applied to a specific scene.
- the steps in the figure are only shown for ease of understanding. Each step in the flowchart does not necessarily have to be performed. For example, some steps can be skipped, or some steps can be combined.
- the execution order of each step is not fixed, nor is it limited to that shown in FIG. 4.
- the execution sequence of each step should be determined by its function and internal logic.
- FIG. 5 is a schematic flowchart of a method for capturing an image according to another embodiment of the present application.
- In this embodiment, the smart capture mode can be turned on without the user's manual operation.
- the user can set the shooting mode to the camera mode.
- the electronic device can be based on periodic detection, and can automatically turn on the smart capture mode.
- the embodiment shown in FIG. 5 mainly describes a method for an electronic device in a photographing mode to photograph an image.
- the method shown in FIG. 5 may be executed by an electronic device or a processor in the electronic device.
- In step 501, the captured images are periodically detected at a low frame rate.
- the electronic device can continuously obtain multiple frames of images captured by the electronic device from the cache, and call the aforementioned detection models (such as the face attribute detection model, the human frame detection model, and the scene recognition model) to perform periodic detection to determine Whether the trigger condition for entering a certain capture mode is met.
- the electronic device can use a lower frame rate for detection, such as 15 frames per second, to save power consumption. It should be understood that, before entering the smart capture mode, such as step 503, step 501 can be performed continuously.
- After exiting the smart capture mode, for example after step 509, step 501 can also be continued. It should also be understood that the process of the electronic device calling the detection models to detect the multiple frames of images is continuously executed by the electronic device. Therefore, in this embodiment, for the sake of brevity, the process of calling the detection models to detect images will not be specifically described.
- In step 502, it is determined whether the trigger condition of the first capture mode is satisfied.
- the electronic device can determine that a trigger condition of a certain capture mode is satisfied based on the detection of multiple frames of images. For example, the trigger condition of the first snapshot mode is satisfied.
- Different trigger conditions have been listed above for different capture modes, and the trigger conditions for determining whether a certain capture mode is satisfied have been described in detail in combination with the face attribute detection model, the human frame detection model, and the scene recognition model. For the sake of brevity, they will not be repeated here.
- If it is determined that the detected multiple frames of images do not meet the trigger condition of any capture mode, the operation in the photographing mode can be continued, that is, step 501 is continued. If it is determined that the detected multiple frames of images meet the trigger condition of the first capture mode, step 503 may be executed to enter the smart capture mode and enable the first capture mode. Enabling the first capture mode means that the electronic device has switched to the smart capture mode; therefore, entering the smart capture mode and enabling the first capture mode refer to the same operation.
- the electronic device may also perform periodic detection of the newly captured image at a high frame rate in step 503. In other words, the detection of the image by the electronic device switches from a low frame rate to a high frame rate. For example, the electronic device can detect images at a frame rate of 30 frames per second. It should be understood that before the electronic device exits the smart capture mode, step 501 may be continuously performed to perform periodic detection of newly captured images at a high frame rate.
- In step 504, the electronic device keeps running in the first capture mode.
- the electronic device may keep running in the first capture mode until it is determined that the trigger condition of another capture mode (for example, referred to as the second capture mode) is satisfied according to the detection of the newly captured image.
- step 504 is set for the convenience of describing the following embodiments, and does not mean that the electronic device has performed a new operation.
- the electronic device can always keep running in the first capture mode, and continuously perform periodic detection of newly captured images at a high frame rate.
- step 505 it is determined that the trigger condition of the second snapshot mode is satisfied. Since the camera continues to operate, the electronic device can continuously detect the newly captured image. If it is not detected that the newly captured image satisfies the trigger condition of another capture mode (such as the second capture mode), that is, it is determined that the trigger condition of the second capture mode is not satisfied, step 504 may be performed to maintain the first capture mode. At the same time, the image is continuously inspected periodically at a high frame rate. If it is determined that it is detected that the newly captured image meets the trigger condition of the second capture mode, it may be considered whether to switch to the second capture mode.
- a protection period can be set for each capture mode.
- the duration of the protection period may be a predefined value, and step 506 is executed to determine whether the running duration of the first capture mode exceeds the protection period. If the running time of the first capture mode does not exceed the protection period, that is, the electronic device detects that the image meets the trigger condition of the second capture mode within the protection period, then step 504 may be performed to maintain the first capture mode. The electronic device kept in the first capture mode can still continue to periodically detect the image at a high frame rate.
- If the running time of the first capture mode exceeds the protection period, step 507 may be executed to enable the second capture mode, in other words, to switch from the first capture mode to the second capture mode.
- the electronic device with the second capture mode enabled can also continuously perform periodic detection of newly captured images at a high frame rate. After the second capture mode is enabled, the electronic device can keep running in the second capture mode until it is determined that the trigger condition of another capture mode (for example, the third capture mode) is satisfied according to the newly captured image. For the sake of brevity, the figure does not show the step of determining that the trigger condition of the third capture mode is satisfied.
- When the trigger condition of the third capture mode is satisfied, the operation performed by the electronic device may be similar to the operation performed when the trigger condition of the second capture mode is satisfied in the figure. For the sake of brevity, it will not be detailed here again. Regardless of whether it has switched to the second capture mode, as long as the electronic device is running in the smart capture mode, the captured images can be continuously detected at a high frame rate.
- In step 508, it is determined whether a photographing operation is detected within a preset time period. Since the detection of newly captured images in the smart capture mode is performed at a high frame rate, the power consumption is relatively high. In order to reduce power consumption, the electronic device can automatically exit the smart capture mode if it does not detect a photographing operation for a long time. An electronic device that exits the smart capture mode can return to the photo mode. The user may not perceive that the electronic device has exited the smart capture mode.
- If no photographing operation is detected within the preset time period, step 509 may be executed to exit the smart capture mode and return to the photographing mode. Furthermore, step 501 and the steps thereafter can be repeated until the user exits the camera function.
- The duration of the preset time period may be a predefined value. When the electronic device activates the first capture mode in step 503 or the second capture mode in step 507, it can start timing, for example by starting a timer whose running time is the aforementioned preset time period.
- If the timer expires without a photographing operation being detected, step 509 may be executed to exit the smart capture mode and return to the photographing mode. If the running time of the timer has not yet been reached, the device can continue to run in the smart capture mode. If the user's photographing operation is detected within the preset time period, step 510 may be performed: in response to the user's photographing operation, the photograph is taken and the image is saved.
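- The timeout behaviour of steps 508 and 509 could, for example, be realized with an inactivity timer that is restarted whenever a capture mode is enabled or a photographing operation is detected. The sketch below is illustrative only; the names and the 10-second preset time period are assumptions.

```python
import threading

class SmartCaptureSession:
    """Illustrative sketch of the auto-exit behaviour of steps 508-509."""

    def __init__(self, timeout_s: float = 10.0, on_exit=None):
        self.timeout_s = timeout_s      # assumed preset time period
        self.on_exit = on_exit          # e.g. return to the ordinary photo mode
        self._timer = None

    def mode_enabled(self) -> None:
        # Called at step 503/507: (re)start the countdown.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout_s, self._expired)
        self._timer.start()

    def photographing_detected(self) -> None:
        # Step 510: the user pressed the shutter, so stay in smart capture mode.
        self.mode_enabled()

    def _expired(self) -> None:
        # Step 509: no photographing operation within the preset period, so exit.
        if self.on_exit is not None:
            self.on_exit()
```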
- In step 511, based on the currently running capture mode, a corresponding evaluation strategy is used to score multiple frames of images to be selected.
- In step 512, the captured frame image corresponding to the currently running capture mode is determined.
- The specific process from step 510 to step 512 is the same as the specific process from step 406 to step 408 in the above embodiment. Since steps 406 to 408 have been described in detail above, for the sake of brevity, they are not repeated here.
- the electronic device can exit the camera function in response to the user's operation.
- the electronic device that exits the camera function stops acquiring images and stops detecting. Each detection model can also be stopped.
- The duration of the protection period in step 506 and the duration of the preset time period described in step 508 may be the same or different, which is not limited in this application. If the two are the same, a single timer can be shared; if they are different, independent timers can be used. Of course, counting by a timer is only one possible implementation, and should not constitute any limitation to this application.
- FIG. 5 shows an example in which the method for capturing an image provided by an embodiment of the present application is applied to a specific scene.
- the steps in the figure are only shown for ease of understanding.
- Each step in the flowchart does not necessarily have to be performed.
- some steps can be skipped, such as step 505 to step 507, or some steps can be combined, such as step 503 and step 504.
- the order of execution of each step is not fixed, nor is it limited to that shown in FIG. 5.
- the execution sequence of each step should be determined by its function and internal logic.
- FIG. 6 is a schematic flowchart of a method for capturing an image according to another embodiment of the present application.
- The smart capture mode can be opened without the user's manual operation.
- the user can set the shooting mode to the video recording mode.
- Based on periodic detection, the electronic device can automatically turn on the smart capture mode.
- the embodiment shown in FIG. 6 mainly describes a method for an electronic device in a video recording mode to capture an image.
- In step 601, the captured multi-frame images are periodically detected at a low frame rate.
- In step 602, it is determined that the trigger condition of the first capture mode is satisfied. If the electronic device determines, according to the detection of the captured multi-frame images, that the trigger condition of the first capture mode is satisfied, step 603 may be executed to enable the first capture mode while recording.
- The first capture mode may be a mode running in the background. From the perspective of the shooting interface, the electronic device is still recording video. Enabling the first capture mode means that the electronic device has enabled the smart capture mode. In other words, the video recording mode and the smart capture mode run in parallel.
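- One possible way to run the video recording mode and the smart capture mode in parallel is to feed the captured frames to a background detection thread while the foreground path keeps writing the video stream. The following sketch is illustrative only; the queue size and all names are assumptions, not part of the embodiment.

```python
import queue
import threading

def record_with_background_capture(frame_source, detector, stop_event):
    """Illustrative sketch only: video recording continues in the foreground while
    the smart capture mode inspects the same frames in a background thread."""
    frames = queue.Queue(maxsize=8)        # assumed small buffer between the two paths

    def background_detection():
        while True:
            frame = frames.get()
            if frame is None or stop_event.is_set():
                break
            detector.inspect(frame)        # high-frame-rate detection (step 603 onwards)

    worker = threading.Thread(target=background_detection, daemon=True)
    worker.start()

    for frame in frame_source:             # foreground path: the frame would also be
        if stop_event.is_set():            # written to the video file here
            break
        try:
            frames.put_nowait(frame)       # hand the frame to the capture path
        except queue.Full:
            pass                           # drop a frame rather than stall recording
    frames.put(None)                       # signal the background thread to finish
    worker.join()
```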
- the electronic device starts to periodically detect newly captured images at a high frame rate. That is to say, the detection of the image by the electronic device switches from a low frame rate to a high frame rate.
- the electronic device can detect the preview image at a frame rate of 30 frames per second. And before the electronic device exits the smart capture mode, it can continuously perform periodic detection of newly captured images at a high frame rate.
- In step 604, the electronic device keeps the first capture mode running in the background.
- the electronic device may keep running the first capture mode in the background until it is determined that the trigger condition of another capture mode (for example, referred to as the second capture mode) is satisfied according to the detection of the newly captured image.
- step 604 is set for the convenience of describing the following embodiments, and does not mean that the electronic device has performed a new operation.
- the electronic device can always keep running the first capture mode in the background, and continuously perform periodic detection of newly captured images at a high frame rate.
- In step 605, it is determined whether the trigger condition of the second capture mode is satisfied. Since the camera continues to operate, the electronic device can continuously detect newly captured images. If it is not detected that a newly captured image meets the trigger condition of another capture mode (such as the second capture mode), that is, if it is determined that the trigger condition of the second capture mode is not met, step 604 can be performed to keep the first capture mode running in the background while continuing to periodically detect newly captured images at a high frame rate. If it is detected that a newly captured image meets the trigger condition of the second capture mode, it may be considered whether to switch to the second capture mode.
- To avoid frequent switching back and forth between the first capture mode and another capture mode (such as the second capture mode), a protection period can be set for each capture mode.
- The duration of the protection period may be a predefined value. Step 606 is executed to determine whether the running duration of the first capture mode exceeds the protection period. If the running duration of the first capture mode does not exceed the protection period, that is, if the electronic device detects within the protection period that a newly captured image meets the trigger condition of the second capture mode, step 604 may be performed to keep the first capture mode. The electronic device remaining in the first capture mode can still continue to periodically detect newly captured images at a high frame rate.
- If the running duration of the first capture mode exceeds the protection period, step 607 may be executed to enable the second capture mode, or in other words, to switch from the first capture mode to the second capture mode.
- the electronic device can also continuously perform periodic detection of newly captured images at a high frame rate.
- the electronic device can keep running the second capture mode until it is determined that the trigger condition of another capture mode (for example, the third capture mode) is satisfied according to the newly captured image.
- the figure does not show the step of determining that the trigger condition of the third capture mode is satisfied.
- When the trigger condition of the third capture mode is satisfied, the operation performed by the electronic device may be similar to the operation shown in the figure for the case in which the trigger condition of the second capture mode is satisfied; for the sake of brevity, the details are not repeated here. Regardless of whether the device has switched to the second capture mode, as long as the smart capture mode is still running in the background of the electronic device, the captured images can be continuously detected at a high frame rate.
- In step 608, it is determined whether a photographing operation is detected within a preset time period. Since the detection of newly captured images in the smart capture mode is performed at a high frame rate, the power consumption is relatively high. In order to reduce power consumption, the electronic device can automatically exit the smart capture mode if it does not detect a photographing operation for a long time. An electronic device that exits the smart capture mode can still continue to record. The user may not perceive that the electronic device has exited the smart capture mode. If no photographing operation is detected in the preset time period after the second capture mode is activated, step 609 can be executed to exit the smart capture mode while the video recording mode remains running. Furthermore, step 601 and the steps thereafter can be repeated until the user exits the camera function. If a photographing operation is detected within a preset period of time after the first capture mode or the second capture mode is activated, step 610 may be executed: in response to the user's photographing operation, the photograph is taken and the image is saved.
- In step 611, based on the currently running capture mode, a corresponding evaluation strategy is used to score multiple frames of images to be selected.
- In step 612, the captured frame image corresponding to the currently running capture mode is determined.
- The electronic device can continuously perform image recognition and scoring on each frame of image, and when the score exceeds a preset threshold, that frame of image is recommended to the user. When more than one frame of image exceeds the preset threshold, the image with the highest score can be recommended to the user.
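- A minimal sketch of this per-frame recommendation logic is shown below; score_fn stands in for the evaluation strategy of the currently running capture mode, and the 0.8 threshold is an assumed example value rather than a value taken from this application.

```python
def recommend_during_recording(frames, score_fn, threshold: float = 0.8):
    """Illustrative sketch of the per-frame recommendation described above."""
    best_frame, best_score = None, float("-inf")
    for frame in frames:
        score = score_fn(frame)                     # score under the current capture mode
        if score > threshold and score > best_score:
            best_frame, best_score = frame, score   # keep the best frame above the threshold
    # If several frames exceeded the threshold, only the highest-scoring one is recommended.
    return best_frame
```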
- step 610 to step 612 may not be performed.
- the images to be selected may refer to all the images acquired by the electronic device during the recording process.
- FIG. 6 shows an example in which the method for capturing an image provided by an embodiment of the present application is applied to a specific scene.
- the steps in the figure are only shown for ease of understanding.
- Each step in the flowchart does not necessarily have to be performed.
- Some steps can be skipped, such as steps 605 to 607 and steps 610 to 612, or some steps can be combined, such as step 603 and step 604.
- the order of execution of each step is not fixed, nor is it limited to that shown in FIG. 6.
- the execution sequence of each step should be determined by its function and internal logic.
- When the images to be selected are evaluated and recommended, the category weight of each scoring parameter can be further determined according to the action category, so as to determine the captured frame images matching different action categories.
- the captured frame image thus determined pays more attention to the details of the action, and it is more likely to find the wonderful image at the moment of movement and recommend it to the user. Therefore, the captured image is more in line with the shooting scene, and the capture effect is better.
- Different scoring parameters and mode weights can be used to score the images to be selected according to different capture modes. For example, scoring parameters such as posture stretch, posture height, and body occlusion are introduced in the sports capture mode and the multi-person sports capture mode, while scoring parameters such as expression intensity, closed eyes, face occlusion, and face angle are introduced in the expression capture mode and the group photo capture mode, so that the captured frame images recommended to the user can be selected based on the evaluation strategies corresponding to different capture modes.
- The technical solution provided by the present application may further determine the category weight of each scoring parameter based on different capture categories. Because different capture categories focus on different details, among the multiple scoring parameters corresponding to the capture mode, the same scoring parameter configured for different capture categories has different category weights. This is conducive to obtaining an ideal captured frame image.
- For example, a higher mode weight can be assigned to scoring parameters, such as posture height and posture stretch, that express the details of the action.
- According to the different details of interest for different action categories, the same scoring parameter configured for different action categories can have different category weights.
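- To make the relationship between mode weights and category weights concrete, the tables below sketch one possible configuration. The parameter names follow the description above, but every numeric weight is invented for illustration and is not taken from this application.

```python
# Illustrative only: parameter names follow the description above, but every
# numeric weight here is invented for this example.
MODE_WEIGHTS = {
    "sports_capture":     {"posture_height": 0.3, "posture_stretch": 0.3,
                           "leg_bend_angle": 0.2, "arm_bend_angle": 0.2},
    "expression_capture": {"expression_intensity": 0.4, "eyes_closed": 0.25,
                           "face_occlusion": 0.2, "face_angle": 0.15},
}

# Within one capture mode, each capture category re-weights the same parameters,
# e.g. a jump cares more about the leg bend angle, a shot about the arm bend angle.
CATEGORY_WEIGHTS = {
    ("sports_capture", "jump"):  {"posture_height": 1.2, "posture_stretch": 1.0,
                                  "leg_bend_angle": 1.5, "arm_bend_angle": 0.8},
    ("sports_capture", "shoot"): {"posture_height": 1.0, "posture_stretch": 1.0,
                                  "leg_bend_angle": 0.8, "arm_bend_angle": 1.5},
}
```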
- FIG. 7 is a schematic block diagram of an image capturing apparatus 700 provided by an embodiment of the present application. As shown in FIG. 7, the apparatus 700 may include a mode determining unit 710 and a captured frame determining unit 720.
- The mode determining unit 710 is configured to determine the first capture mode among the preset multiple capture modes according to the captured multi-frame images; the capture frame determining unit 720 is configured to use the evaluation strategy corresponding to the first capture mode to determine, among the captured multiple frames of images to be selected, the captured frame image corresponding to the first capture mode; the evaluation strategy is one of a plurality of preset evaluation strategies.
- the multiple capture modes include one or more of the following: facial expression capture mode, group photo capture mode, sports capture mode, multi-person sports capture mode, pet capture mode, and landscape capture mode.
- each of the multiple capture modes corresponds to at least one evaluation strategy, and each evaluation strategy includes one or more scoring parameters for image scoring and a mode weight of each scoring parameter.
- The capture frame determination unit 720 is configured to use one or more scoring parameters, and the mode weight of each scoring parameter, in one of the at least one evaluation strategy corresponding to the first capture mode among the multiple preset evaluation strategies, to calculate the score of each frame of the images to be selected in the multiple frames of images to be selected; and to determine, according to the multiple scores of the multiple frames of images to be selected, the captured frame image corresponding to the first capture mode among the multiple frames of images to be selected.
- the captured frame image has the highest score among the multiple frames of images to be selected.
- different evaluation strategies corresponding to different capture modes include the same scoring parameters, and different evaluation strategies include different mode weights.
- Each capture mode includes one or more capture categories, and each capture category corresponds to an evaluation strategy; among the at least one evaluation strategy corresponding to the first capture mode, each evaluation strategy includes one or more scoring parameters corresponding to the first capture mode, the mode weight of each scoring parameter, and the category weight corresponding to one capture category.
- The mode determining unit 710 is further configured to determine the first capture category in the first capture mode according to the multi-frame images; the capture frame determining unit 720 is configured to use the one or more scoring parameters corresponding to the first capture mode and the mode weight of each scoring parameter, as well as the category weight of each scoring parameter corresponding to the first capture category, to calculate the score of each frame of the images to be selected in the multiple frames of images to be selected.
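- Under the assumption that the weights are combined multiplicatively, the score calculation and the selection of the highest-scoring frame could look like the following sketch; the combination rule and all names are illustrative only.

```python
def score_frame(param_values, mode_weights, category_weights=None):
    """Illustrative weighted sum: each scoring parameter value is multiplied by its
    mode weight and, when a capture category has been determined, by its category
    weight as well."""
    total = 0.0
    for name, value in param_values.items():
        weight = mode_weights.get(name, 0.0)
        if category_weights is not None:
            weight *= category_weights.get(name, 1.0)
        total += weight * value
    return total

def pick_captured_frame(candidates, mode_weights, category_weights=None):
    """Return the candidate frame with the highest score (ties keep the first)."""
    return max(candidates, key=lambda c: score_frame(c["params"], mode_weights,
                                                     category_weights))

# Example, using the illustrative tables sketched earlier:
# best = pick_captured_frame(frames,
#                            MODE_WEIGHTS["sports_capture"],
#                            CATEGORY_WEIGHTS[("sports_capture", "jump")])
```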
- different evaluation strategies corresponding to different capture categories include the same scoring parameters, and different evaluation strategies include different category weights.
- The capture frame determination unit 720 is further configured to call at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected so as to output a recognition result, and to determine the value of one or more scoring parameters based on the recognition result.
- When the first capture mode is a motion capture mode or a multi-person motion capture mode, the at least one detection model includes a pose estimation model and a motion detection model.
- When the first capture mode is an expression capture mode or a group photo capture mode, the at least one detection model includes a face attribute detection model.
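- Purely as an illustration, the mapping from capture mode to detection models described above could be organized as follows; the class names and their placeholder outputs are assumptions, while the mode-to-model mapping follows the description.

```python
class PoseEstimationModel:
    def detect(self, frame):
        return {"pose_points": []}        # placeholder output

class MotionDetectionModel:
    def detect(self, frame):
        return {"action_category": None}  # placeholder output

class FaceAttributeDetectionModel:
    def detect(self, frame):
        return {"faces": []}              # placeholder output

DETECTION_MODELS = {
    "sports_capture":      [PoseEstimationModel(), MotionDetectionModel()],
    "multi_person_sports": [PoseEstimationModel(), MotionDetectionModel()],
    "expression_capture":  [FaceAttributeDetectionModel()],
    "group_photo_capture": [FaceAttributeDetectionModel()],
}

def recognize(mode: str, frame):
    """Run every detection model registered for the given capture mode."""
    return [model.detect(frame) for model in DETECTION_MODELS.get(mode, [])]
```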
- the mode determining unit 710 is further configured to perform mode detection on the captured multi-frame image based on the first frame rate to determine the first capturing mode among the preset multiple capturing modes; the capturing frame determining unit 720 is also used to call at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected at a second frame rate; wherein, the first frame rate is less than the second frame rate.
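- The two frame rates could be handled by a single detection loop that switches its period once a capture mode becomes active, as in the sketch below; the 30 frames per second figure follows the text, while the 5 frames per second value for mode detection is an assumed example.

```python
import time

def detection_loop(get_frame, detect_mode, recognize, should_stop,
                   low_fps: float = 5.0, high_fps: float = 30.0):
    """Illustrative sketch of the two-rate scheme: mode detection runs at the lower
    first frame rate; once a capture mode is active, image recognition runs at the
    higher second frame rate."""
    active_mode = None
    while not should_stop():
        frame = get_frame()
        if active_mode is None:
            active_mode = detect_mode(frame)   # low-rate periodic mode detection
            period = 1.0 / low_fps
        else:
            recognize(active_mode, frame)      # high-rate image recognition
            period = 1.0 / high_fps
        time.sleep(period)
```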
- the apparatus 700 may include a unit for executing the method executed by the electronic device in the embodiment of the method 200 in FIG. 2.
- the mode determining unit 710 may be used to perform step 210 in the above method 200, and the captured frame determining unit 720 may be used to perform step 220 to step 240 in the above method 200.
- the device 700 may also include one or more detection models.
- the mode determination unit 710 may call the one or more detection models for image detection; the captured frame determination unit 720 may also call the one or more detection models for image recognition.
- the apparatus 700 can also be used to execute the method executed by the electronic device in the embodiments in FIGS. 4 to 6.
- each unit in the device 700 and other operations and/or functions described above are used to implement the corresponding processes of the embodiments in FIG. 4 to FIG. 6, respectively.
- the image capturing apparatus 700 may correspond to at least part of the electronic device in the method embodiment according to the embodiment of the present application.
- the apparatus 700 may be the electronic device, or a component in the electronic device, such as a chip or a chip system.
- the functions implemented by the image capturing apparatus 700 may be implemented by one or more processors executing corresponding programs.
- The present application also provides an electronic device, or the apparatus 700 therein.
- the electronic device or apparatus 700 may include one or more processors to implement the functions of the image capturing apparatus 700 described above.
- the one or more processors may, for example, include or execute the mode determination unit, the capture frame determination unit, and one or more detection models described in the above embodiments.
- the one or more processors may correspond to the processor 110 in the electronic device 100 shown in FIG. 1, for example.
- the mode determination unit, the capture frame determination unit, and one or more detection models may be software, hardware, or a combination thereof.
- the software may be executed by a processor, and the hardware may be embedded in the processor.
- the electronic device or device 700 further includes one or more memories.
- the one or more memories are used to store computer programs and/or data, such as images captured by a camera.
- the one or more memories may correspond to the memory 120 in the electronic device 100 shown in FIG. 1, for example.
- the electronic device may also include a camera, a display unit, and the like.
- the camera may correspond to the camera 190 in the electronic device 100 shown in FIG. 1, for example.
- The display unit may, for example, correspond to the display unit 170 in the electronic device 100 shown in FIG. 1.
- the processor may obtain the computer program stored in the memory to execute the method flow involved in the above embodiment.
- the memory further includes the one or more preset detection models, so that the processor can obtain the one or more detection models from the memory.
- the present application also provides a computer storage medium that stores computer instructions that, when run on an electronic device, cause the electronic device to execute the above-mentioned related method steps to implement the image capturing method in the above-mentioned embodiment.
- the computer storage medium may, for example, correspond to the memory 120 in the electronic device 100 shown in FIG. 1.
- the mode determination unit, the capture frame determination unit, and one or more detection models involved in the embodiment of FIG. 7 may exist in the form of software and be stored in the computer storage medium.
- the present application also provides a computer program product, which can be stored in the computer storage medium, and when the computer program product runs on a computer, the computer is caused to execute the above-mentioned related steps to implement the image capturing method in the above-mentioned embodiment.
- The image capturing device, electronic device, computer storage medium, computer program product, or chip provided in the embodiments of the present application are all used to execute the corresponding methods provided above. Therefore, for the beneficial effects that can be achieved, reference can be made to the beneficial effects of the corresponding methods provided above, which are not repeated here.
- the disclosed system, device, and method may be implemented in other ways.
- The device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- If the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- The technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
- The aforementioned storage media include: a USB flash drive (U disk), a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
Provided in the present application are a method and device for taking a picture. The method comprises: determining a first snap photography mode among multiple preset snap photography modes on the basis of multiple captured pictures; and, using an evaluation strategy corresponding to the first snap photography mode, or an evaluation strategy corresponding to the first snap photography category in the first snap photography mode, determining a snap-photographed picture corresponding to the first snap photography mode among multiple captured pictures to be selected, where the evaluation strategy used is one of the multiple preset evaluation strategies. As such, the outputted snap-photographed picture is more in line with the actual photographed scene, thus favoring the acquisition of an ideal snap photography effect.
Description
The present application relates to the field of image processing technology, and more specifically, to a method and apparatus for capturing images.
Smart capture is an important photographing function of current smart terminals. A smart terminal with the smart capture function enabled can score multiple frames of images to be selected based on certain scoring rules, thereby selecting the highest-scoring frame of image and recommending it to the user. However, when the smart terminal scores the images to be selected, the scoring rules are limited and often focus on facial information while ignoring other information. Even if no human face is detected in the image to be selected, only the optical flow information of the two adjacent frames is used as the scoring basis. This scoring mechanism is not universal, and the recommended optimal frame may not be ideal. For example, for images that include high-speed moving objects, the above methods cannot recommend the wonderful image at the moment of movement to the user, and the capture effect is not ideal. Therefore, there is an urgent need to develop a flexible capture solution suitable for more scenes.
Summary of the invention
The present application provides a method and device for capturing an image, so that the captured image is more in line with the actual shooting scene.
In the first aspect, a method of capturing an image is provided. The method includes: determining a first capture mode among preset multiple capture modes according to captured multi-frame images; and using an evaluation strategy corresponding to the first capture mode to determine, among captured multiple frames of images to be selected, a captured frame image corresponding to the first capture mode; the evaluation strategy is one of a plurality of preset evaluation strategies.
Therefore, in the embodiments of the present application, by presetting multiple different capture modes and their corresponding evaluation strategies, the capture mode can be determined according to the actual shooting scene. An evaluation strategy corresponding to the first capture mode can then be selected from the plurality of preset evaluation strategies and used to determine the captured frame image. Therefore, the captured image obtained is more in line with the actual shooting scene, which is conducive to obtaining an ideal capture effect; flexibility is improved and the solution is suitable for more scenes.
With reference to the first aspect, in some implementations of the first aspect, the aforementioned multiple capture modes include one or more of the following: an expression capture mode, a group photo capture mode, a sports capture mode, a multi-person sports capture mode, a pet capture mode, and a landscape capture mode.
It can be seen that a variety of different capture modes and their corresponding evaluation strategies are preset so as to apply to different shooting scenes, so that the captured images obtained conform to the actual shooting scenes.
It should be understood that the capture modes listed above are only examples and should not constitute any limitation to this application. This application does not limit the content specifically included in the above-mentioned multiple capture modes.
With reference to the first aspect, in some implementations of the first aspect, each of the multiple capture modes corresponds to at least one of the foregoing preset evaluation strategies, and each evaluation strategy includes one or more scoring parameters used for image scoring and the mode weight of each scoring parameter. Using the evaluation strategy corresponding to the first capture mode to determine the captured frame image corresponding to the first capture mode among the captured multiple frames of images to be selected includes: using one or more scoring parameters, and the mode weight of each scoring parameter, in one of the at least one evaluation strategy corresponding to the first capture mode, to calculate the score of each frame of the images to be selected in the multiple frames of images to be selected; and determining, according to the multiple scores of the multiple frames of images to be selected, the captured frame image corresponding to the first capture mode among the multiple frames of images to be selected.
Therefore, different scoring parameters can be assigned to different capture modes, and a different weight can be applied to each scoring parameter. Thus, the scoring results obtained by scoring the same image under different capture modes are different. After the first capture mode is determined, the evaluation strategy corresponding to the first capture mode is selected to score the multiple frames of images to be selected, so as to determine the captured frame image. Because the captured frame image obtained in this way reflects the evaluation strategy corresponding to the first capture mode, it can better meet the requirements of the first capture mode and conform to the actual shooting scene.
With reference to the first aspect, in some implementations of the first aspect, the captured frame image has the highest score among the multiple frames of images to be selected.
After the images to be selected are scored based on the evaluation strategy corresponding to the first capture mode, the image with the highest score is the one, among the multiple frames of images to be selected, that best meets the requirements of the first capture mode and therefore best suits the actual shooting scene.
With reference to the first aspect, in some implementations of the first aspect, different evaluation strategies corresponding to different capture modes include the same scoring parameters, and different evaluation strategies include different mode weights.
Saying that different evaluation strategies include different mode weights may specifically mean that the mode weights applied to the same scoring parameter in different evaluation strategies are different. Moreover, when an evaluation strategy includes multiple scoring parameters, different evaluation strategies impose different mode weights on at least one scoring parameter.
That is to say, in different capture modes, different weights can be applied to the same scoring parameter according to the different concerns of those modes. For example, different weights can be applied to the scoring parameter of expression intensity in the sports capture mode and the expression capture mode: a lower weight in the sports capture mode and a higher weight in the expression capture mode. Conversely, different weights can also be applied to the scoring parameter of posture height: a higher weight in the sports capture mode and a lower weight in the expression capture mode.
Therefore, different evaluation strategies corresponding to different capture modes may respectively include different mode weights corresponding to the same scoring parameter.
It should be understood that the above examples are only for ease of understanding, and should not constitute any limitation on this application.
With reference to the first aspect, in some implementations of the first aspect, each capture mode includes one or more capture categories, and each capture category corresponds to an evaluation strategy; in the at least one evaluation strategy corresponding to the first capture mode, each evaluation strategy includes one or more scoring parameters corresponding to the first capture mode, a mode weight of each scoring parameter, and a category weight corresponding to one capture category.
Determining the first capture mode among the preset multiple capture modes further includes: determining the first capture category in the first capture mode according to the multiple frames of images.
Using one or more scoring parameters, and the mode weight of each scoring parameter, in one of the at least one evaluation strategy corresponding to the first capture mode to calculate the score of each frame of the images to be selected in the multiple frames of images to be selected includes: using the one or more scoring parameters corresponding to the first capture mode and the mode weight of each scoring parameter, as well as the category weight of each scoring parameter corresponding to the first capture category, to calculate the score of each frame of the images to be selected in the multiple frames of images to be selected.
In order to find the captured frame image that best fits the shooting scene, this application not only proposes evaluation strategies corresponding to the capture modes, assigning different scoring parameters and mode weights to different capture modes, but also further proposes category weights corresponding to the capture categories within a capture mode. That is, the different capture categories within a capture mode are further refined, and the details of concern to each capture category are given a higher weight. As a result, the selected captured frame image not only meets the requirements of the first capture mode, but also takes into account the capture category of the subject, so that images that better present the wonderful captured moments can be found.
With reference to the first aspect, in some implementations of the first aspect, different evaluation strategies corresponding to different capture categories include the same scoring parameters, and different evaluation strategies include different category weights.
Saying that different evaluation strategies include different category weights may specifically mean that the category weights applied to the same scoring parameter in different evaluation strategies are different. Moreover, when an evaluation strategy includes multiple scoring parameters, different evaluation strategies apply different category weights to at least one scoring parameter.
That is to say, for different capture categories in the same capture mode, different weights can be applied to the same scoring parameter according to the different concerns of those categories. For example, in the sports capture mode, jumping and shooting focus on different aspects of the action. Jumping pays more attention to the bending angle of the legs, so when the capture category is jumping, the leg bending angle is given a higher weight; shooting is more concerned with the bending angle of the arms, so when the capture category is shooting, a higher weight is applied to the arm bending angle.
Therefore, different evaluation strategies corresponding to different capture categories may respectively include different category weights corresponding to the same scoring parameter.
It should be understood that the above examples are only for ease of understanding, and should not constitute any limitation on this application.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: invoking at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected so as to output a recognition result; and determining the value of one or more scoring parameters based on the recognition result.
In the embodiments of the present application, at least one detection model is used to perform image recognition on the images to be selected. The at least one detection model may include, for example, one or more of a face attribute detection model, a human frame detection model, a scene recognition model, a posture point estimation model, and an action detection model. Through these detection models, different points of interest in the image can be detected, and the value of each scoring parameter can be determined according to the recognition result.
The above-mentioned detection model may be obtained through machine learning training, for example. In a possible design, the aforementioned detection model may be a model embedded in a neural network processing unit (NPU). This application does not limit this.
Optionally, when the first capture mode is a motion capture mode or a multi-person motion capture mode, the at least one detection model includes a pose estimation model and a motion detection model.
Optionally, when the first capture mode is an expression capture mode or a group photo capture mode, the at least one detection model includes a face attribute detection model.
It should be understood that the detection models corresponding to the different capture modes listed above are only examples and should not constitute any limitation to this application. For example, even when the capture modes are different, the same set of detection models can be called to perform image recognition on the images to be selected. In the scoring process, different weights can be applied to each scoring parameter according to the different capture modes, so as to achieve a similar effect.
With reference to the first aspect, in some implementations of the first aspect, determining the first capture mode among the preset multiple capture modes according to the captured multi-frame images includes: in the video recording mode or the preview mode, determining the first capture mode among the multiple capture modes according to the multiple frames of images.
That is to say, the method provided in this application can not only be applied in the smart capture mode, but can also run synchronously with other modes. For example, in the video recording mode, if it is detected that the image meets the trigger condition of the first capture mode, the smart capture mode can be run in the background at the same time to enter the first capture mode. For another example, in the preview mode, if it is detected that the image meets the trigger condition of the first capture mode, the first capture mode can be automatically activated. In this way, the device can automatically switch between multiple modes, which is conducive to obtaining an ideal captured frame image.
With reference to the first aspect, in some implementations of the first aspect, determining the first capture mode among the preset multiple capture modes according to the captured multi-frame images includes: performing mode detection on the captured multi-frame images based on a first frame rate to determine the first capture mode among the preset multiple capture modes. Calling at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected includes: calling the at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected at a second frame rate; the first frame rate is less than the second frame rate.
In other words, before the first capture mode is determined, mode detection can be performed at a lower frame rate. This approach can be applied to the above-mentioned video recording mode or preview mode: before the smart capture mode is entered, a lower frame rate can be used for mode detection. Once the first capture mode is determined, that is, once the smart capture mode is entered, a higher frame rate can be used for image recognition. Therefore, mode detection can be performed at a low frame rate before entering the smart capture mode, saving the power consumption caused by a high frame rate.
Of course, in the embodiments of the present application, mode detection and image recognition can also be performed at the same frame rate. For example, in the smart capture mode, the mode is determined at a higher frame rate, and after the first capture mode is entered, image recognition is still performed at the higher frame rate. This application does not limit this.
In addition, this application does not limit the specific value of the frame rate.
With reference to the first aspect, in some implementations of the first aspect, after the first capture mode is determined, the method further includes: determining a second capture mode based on newly captured multi-frame images, where the second capture mode is one of the multiple capture modes and is different from the first capture mode; and switching to the second capture mode.
After the first capture mode is entered, newly captured images can be continuously detected. When it is detected that a newly captured image meets the trigger condition of another capture mode (such as the second capture mode), the device can automatically switch to the second capture mode. Optionally, the switching to the second capture mode includes: switching to the second capture mode when the running duration of the first capture mode exceeds a preset protection period.
In order to avoid switching back and forth between multiple capture modes and to avoid accidental triggering, a protection period can be preset for each capture mode. During the protection period, even if it is detected that a newly captured image meets the trigger condition of another capture mode, mode switching is not performed. After the protection period has elapsed, if it is detected that a newly captured image meets the trigger condition of another capture mode, the mode can be switched.
In a second aspect, an image capturing apparatus is provided, which includes various modules or units for executing the method in any one of the possible implementations of the first aspect.
In a third aspect, a device for capturing images is provided, including a processor and a memory, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that the device executes the method of capturing images in the first aspect and its various possible implementations.
Optionally, there are one or more processors and one or more memories.
Optionally, the memory may be integrated with the processor, or the memory and the processor may be provided separately.
In a fourth aspect, an electronic device is provided, and the electronic device includes the image capturing apparatus as described in the second or third aspect.
In a fifth aspect, a computer program product is provided. The computer program product includes a computer program (which may also be called code or instructions) that, when executed, causes a computer to execute the method in any one of the possible implementations of the above first aspect.
In a sixth aspect, a computer-readable medium is provided. The computer-readable medium stores a computer program (which may also be called code or instructions) that, when run on a computer or at least one processor, causes the computer or the at least one processor to execute the method in any possible implementation of the first aspect.
In a seventh aspect, a chip system is provided, and the chip system includes a processor for supporting the chip system to implement the functions involved in any possible implementation of the first aspect.
FIG. 1 is a schematic diagram of an electronic device provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of a method for capturing an image provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a mobile phone interface provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart of a method for capturing an image according to another embodiment of the present application;
FIG. 5 is a schematic flowchart of a method for capturing an image provided by another embodiment of the present application;
FIG. 6 is a schematic flowchart of a method for capturing an image according to still another embodiment of the present application;
FIG. 7 is a schematic block diagram of an image capturing apparatus provided by an embodiment of the present application.
The technical solution in this application is described below in conjunction with the accompanying drawings. The method of capturing images provided by the embodiments of this application can be applied to electronic devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA); the embodiments of this application do not limit the specific type of the electronic device. The image capturing apparatus provided in the embodiments of the present application may be configured in the various electronic devices listed above, or may itself be any of the electronic devices listed above. This application does not limit this.
Illustratively, FIG. 1 shows a schematic structural diagram of an electronic device 100. The electronic device 100 may include a processor 110. The processor 110 may include one or more processing units. For example, the processor 110 may include one or more of a central processing unit (CPU), a neural network processing unit (NPU), an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), and a baseband processor. The different processing units may be independent devices or may be integrated in one or more processors.
For example, the controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
The NPU is a neural-network (NN) processor. By borrowing from the structure of biological neural networks, such as the transfer mode between neurons in the human brain, it can quickly process input information and can also continuously learn by itself. Through the NPU, applications such as intelligent cognition of the electronic device 100 can be realized, for example image recognition, face detection, human frame detection, scene detection, posture point detection, and motion detection.
In the embodiments of the present application, one or more detection models may be embedded in the NPU, such as one or more of the face attribute detection model, pose estimation model, motion detection model, human frame detection model, and scene detection model described below. Each detection model can be obtained by training a machine-learning-based algorithm, for example a support vector machine (SVM), a convolutional neural network (CNN), or a recurrent neural network (RNN). It should be understood that this application does not limit the specific training method.
Each detection model may correspond to a processor in the NPU; or, each detection model may correspond to a processing unit in the NPU, and the functions of multiple detection models may be implemented by multiple processing units integrated in one processor. This application does not limit this.
In addition, the NPU may also have a communication connection with one or more other processors in the processor 110. For example, the NPU may have a communication connection with the GPU, the ISP, and the application processor. This application does not limit this.
Optionally, the electronic device 100 further includes a memory 120. The memory 120 may be used to store computer-executable program code, where the executable program code includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by running the instructions stored in the memory 120. The memory 120 may include a program storage area and a data storage area. The program storage area can store an operating system and an application program required by at least one function (such as a sound playback function or an image playback function). The data storage area can store data (such as audio data and a phone book) created during the use of the electronic device 100. In addition, the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
In some possible embodiments, a memory may be provided in the processor 110. For example, the memory in the processor 110 is a cache memory. The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from this memory. Repeated accesses are avoided, the waiting time of the processor 110 is reduced, and the efficiency of the system is improved.
Of course, the memory may also exist independently of the processor 110, such as the memory 120 shown in the figure. This application does not limit this.
Optionally, the electronic device 100 further includes a transceiver 130. In addition, in order to make the functions of the electronic device 100 more complete, the electronic device 100 may also include one or more of an input unit 160, a display unit 170, an audio circuit 180, a camera 190, and a sensor 101. The audio circuit can also be coupled to a speaker 182, a microphone 184, and the like.
The electronic device 100 implements a display function through the GPU, the display unit 170, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display unit 170 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display unit 170 is used to display images, videos, and the like. The display unit 170 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include one or more display units 170.
The electronic device 100 may implement a shooting function through the ISP, the camera 190, the video codec, the GPU, the display unit 170, the application processor, and the like.
The ISP is used to process data fed back by the camera 190. For example, when a photo is taken, the shutter is opened, light is transmitted through the lens to the photosensitive element of the camera, the optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, where it is converted into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin tone of the image through algorithms, and can further optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 190.
The camera 190 is used to capture still images or dynamic videos. An optical image of an object is generated through the lens and projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then transfers the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include one or more cameras 190.
For example, in the method for capturing images provided in this application, the camera 190 may be used to capture images and display the captured images in the shooting interface. The photosensitive element converts the collected optical signal into an electrical signal and then transfers the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for related image processing.
The video codec is used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, for example moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The application processor outputs a sound signal through an audio device (such as the speaker 182) or displays an image or video through the display unit 170. Optionally, the electronic device 100 may further include a power supply 150 for supplying power to the various devices or circuits in the terminal device.
It should be understood that the electronic device 100 shown in FIG. 1 can implement each process of the method embodiments shown in FIG. 2 and FIG. 4 to FIG. 6. The operations and/or functions of the modules or units in the electronic device 100 are respectively intended to implement the corresponding procedures in the foregoing method embodiments. For details, refer to the descriptions in the foregoing method embodiments; to avoid repetition, detailed descriptions are appropriately omitted here.
It should be understood that FIG. 1 is only for ease of understanding and shows, by way of example, the modules or units in the electronic device and the connection relationships between them, but this should not constitute any limitation to this application. This application does not limit the modules and units specifically included in the electronic device or the connection relationships between them.
The method provided by the embodiments of this application is described in detail below with reference to the accompanying drawings. FIG. 2 is a schematic flowchart of a method 200 for capturing an image provided by an embodiment of this application. The method 200 provided in FIG. 2 may be executed by an electronic device or by a processor in the electronic device. In the following, for ease of description, the embodiments of this application are described with the electronic device as the execution subject.
The steps of the method 200 shown in FIG. 2 are described in detail below. As shown in the figure, the method 200 may include steps 210 to 240. In step 210, a first capture mode is determined among a plurality of preset capture modes according to the captured multiple frames of images.
Specifically, the electronic device may periodically detect the captured images and, according to the detection results, determine the capture mode suitable for the current shooting. For ease of distinction and description, the capture mode suitable for the current shooting is referred to as the first capture mode. It should be understood that the images captured by the electronic device may be stored in a cache. The cache may be, for example, a part of the storage space in the camera module of the electronic device, or may exist independently of the camera module, which is not limited in this application.
The electronic device can continuously obtain multiple frames of images from the cache. The multiple frames of images may be input to one or more detection models. The electronic device may invoke one or more detection models to detect the multiple frames of images and, based on the detection results output by the detection models, determine the first capture mode from among the plurality of preset capture modes. Based on the determination of the first capture mode, the electronic device may enable the first capture mode.
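As an illustration of this step, the following is a minimal sketch, in Python, of how cached frames might be run through detection models and mapped to a capture mode. The function names and the trigger callables are assumptions introduced for illustration only; they are not defined by this application.

```python
from typing import Any, Callable, Dict, List, Optional

# Hypothetical detection-model callables: each takes the frame window and
# returns its detection result (e.g. face list, scene label, body boxes).
DetectionModel = Callable[[List[Any]], dict]

def determine_first_capture_mode(
    frames: List[Any],
    models: Dict[str, DetectionModel],
    triggers: List[Callable[[Dict[str, dict]], Optional[str]]],
) -> Optional[str]:
    """Run the detection models on the cached frames and return the first
    capture mode whose trigger condition is met, or None if none is met."""
    # Collect the detection result of every model on the same frame window.
    results = {name: model(frames) for name, model in models.items()}
    # Each trigger inspects the combined results and returns a mode name
    # (e.g. "motion", "expression") when its condition is satisfied.
    for trigger in triggers:
        mode = trigger(results)
        if mode is not None:
            return mode
    return None
```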
Here, as an example and not a limitation, the preset multiple capture modes include one or more of the following: an expression capture mode, a group photo capture mode, a motion capture mode, a multi-person motion capture mode, a pet capture mode, and a landscape capture mode.
The foregoing capture modes are all defined as smart capture modes in the embodiments of this application. In other words, the first capture mode is a smart capture mode. The electronic device may enter the smart capture mode in advance and then determine the first capture mode among the preset multiple capture modes according to the captured multiple frames of images; it may also automatically determine the first capture mode in the photo mode or the video mode based on the detection of the captured images. This application does not limit this.
The preset multiple capture modes described here may specifically mean that the electronic device pre-stores the evaluation strategy corresponding to each of the multiple capture modes. When the electronic device determines to use one of the capture modes, it can invoke the corresponding evaluation strategy to evaluate the captured multiple frames of images to be selected. The process in which the electronic device evaluates the multiple frames of images to be selected using an evaluation strategy is described in detail later in conjunction with step 220 and is omitted here.
The specific process of determining the first capture mode based on multiple frames of images is described in detail below through specific examples.
For example, in the photo mode or the video mode, the electronic device may invoke one or more of the face attribute detection model, the human-body frame detection model, and the scene recognition model to detect the captured multiple frames of images, and determine the first capture mode according to the detection results output by the detection models.
Current smart terminals usually have both a photo function and a video function, and the above face attribute detection model, human-body frame detection model, and scene recognition model are already configured in the smart terminal. Therefore, in the photo mode or the video mode, the electronic device can detect the images by invoking the existing models to determine the first capture mode.
For another example, in the smart capture mode, the electronic device may detect the captured multiple frames of images by invoking one or more of the face attribute detection model, the pose estimation model, and the action detection model, and determine the first capture mode according to the detection results output by the detection models.
The face attribute detection model, the human-body frame detection model, the scene recognition model, the pose estimation model, and the action detection model listed above may all be models obtained by training with machine-learning-based algorithms. Different models are defined based on different functions. Based on different functions, the face attribute detection model may be further divided into a facial feature point detection model, an open/closed eye detection model, and so on. This application does not limit this. The names of the detection models are merely examples for ease of understanding, and this application does not exclude the possibility of replacing the detection models listed above with models under other names that implement the same or similar functions.
It should be understood that the specific functions of these detection models are also implemented by a processor executing its corresponding computer instructions. This application does not limit the number or form of the processors used to implement the above detection models. In one possible design, the above face attribute detection model, human-body frame detection model, scene recognition model, pose estimation model, and action detection model may all be embedded in the NPU.
The first capture mode may be determined based on the detection result of the images by one detection model, or based on the detection results of the images by multiple detection models. When the electronic device invokes multiple detection models to determine the first capture mode, the multiple detection models may run simultaneously or alternately, and the electronic device may comprehensively consider the detection results of the images by the multiple detection models. When the detection results of the images by the one or more detection models satisfy the trigger condition of a certain capture mode, that capture mode can be determined as the first capture mode. The following examples illustrate the specific process in which the electronic device determines the first capture mode by invoking one or more detection models.
Optionally, the electronic device may invoke the face attribute detection model to detect a face in an image and determine the first capture mode according to the detection result. The face attribute detection model may be obtained through machine learning algorithm training. When the face attribute detection model detects a face in the image, the feature points of the face can be detected. The electronic device may exclude scenes in which a passer-by enters the frame and motion scenes based on the detected face position and depth information. In this case, the detection result of the image by the face attribute detection model satisfies the trigger condition of the expression capture mode, and the first capture mode can be determined to be the expression capture mode.
When the face attribute detection model detects multiple faces in the image, the electronic device may exclude scenes in which passers-by enter the frame and motion scenes based on the face positions and depth information. In this case, the detection result of the image by the face attribute detection model satisfies the trigger condition of the group photo capture mode, and the first capture mode can be determined to be the group photo capture mode.
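A minimal sketch of such a face-based trigger is given below. The face structure, the depth threshold, and the mode names are assumptions introduced for illustration, not values prescribed by this application.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Face:
    x: float          # face-box centre, normalised image coordinates
    y: float
    depth_m: float    # estimated distance to the camera, in metres

def face_trigger(faces: List[Face], max_subject_depth_m: float = 3.0) -> Optional[str]:
    """Return "expression" or "group_photo" when the face detections satisfy
    the corresponding trigger condition, otherwise None."""
    # Treat distant faces as passers-by in the background and ignore them.
    subjects = [f for f in faces if f.depth_m <= max_subject_depth_m]
    if len(subjects) == 1:
        return "expression"
    if len(subjects) > 1:
        return "group_photo"
    return None
```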
Optionally, the electronic device may invoke the scene recognition model to detect the shooting scene of an image and determine the first capture mode according to the detection result. The scene recognition model may be obtained by training, through a machine learning algorithm, on a plurality of predefined scenes. When the scene detected by the scene recognition model in the image is one of the predefined sports scenes, that sports scene is output. The electronic device may determine, according to the sports scene detected by the scene recognition model, that the photographed subject is in motion. In this case, the detection result of the image by the scene recognition model satisfies the trigger condition of the motion capture mode, and the first capture mode can be determined to be the motion capture mode. As an example and not a limitation, the sports scenes may include a court (for example a basketball court or a football field), a swimming pool, or a running track.
Further optionally, the electronic device may also invoke the scene recognition model and the human-body frame detection model to detect the images, and combine the results output by the two models to determine the first capture mode. For example, when the scene in the image is detected as a predefined sports scene by the scene recognition model and multiple human-body frames are detected in the image by the human-body frame detection model, the detection of the image by the scene recognition model and the human-body frame detection model satisfies the trigger condition of the multi-person motion capture mode, and the first capture mode can be determined to be the multi-person motion capture mode.
Optionally, the electronic device may invoke the human-body frame detection model to detect the human-body frame in the image. The human-body frame detection model may be obtained through machine learning algorithm training and is used to detect the human-body frame in the image. The electronic device may also invoke other motion-region detection algorithms to determine the motion region in the image; for example, the motion region may be determined based on optical flow information. This application does not limit this. When the overlap between the motion region in two consecutive frames and the human-body frame is large, for example when the proportion of the overlap region to the entire human-body frame is higher than a preset threshold, it can be determined that the motion region in the image is foreground motion rather than background motion or relative camera motion. In this case, the degree of overlap between the human-body frame and the motion region in the image satisfies the trigger condition of the motion capture mode, and the first capture mode can be determined to be the motion capture mode. Further, the electronic device may determine that the first capture mode is the multi-person motion capture mode when the human-body frame detection model detects multiple human-body frames.
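The overlap test described above can be sketched as follows. The box representation and the 0.5 threshold are illustrative assumptions, not values prescribed by this application.

```python
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

def overlap_ratio(motion_region: Box, body_box: Box) -> float:
    """Fraction of the human-body frame covered by the motion region."""
    x1 = max(motion_region[0], body_box[0])
    y1 = max(motion_region[1], body_box[1])
    x2 = min(motion_region[2], body_box[2])
    y2 = min(motion_region[3], body_box[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    body_area = (body_box[2] - body_box[0]) * (body_box[3] - body_box[1])
    return inter / body_area if body_area > 0 else 0.0

def motion_trigger(motion_region: Box, body_boxes: List[Box],
                   threshold: float = 0.5) -> Optional[str]:
    """Return a motion-related capture mode when the motion region mostly
    overlaps at least one detected human-body frame."""
    moving = [b for b in body_boxes if overlap_ratio(motion_region, b) > threshold]
    if not moving:
        return None
    return "multi_person_motion" if len(body_boxes) > 1 else "motion"
```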
Optionally, the electronic device may invoke the pose estimation model to detect multiple posture points of the human body in the image. The pose estimation model may be obtained by training, through a machine learning algorithm, on a plurality of predefined posture points (in other words, feature points). The predefined posture points include, for example, the head, shoulders, neck, elbows, hips, legs, knees, and ankles; for brevity, they are not listed one by one here. The pose estimation model can be used to detect multiple posture points in the image and determine the coordinate information of each posture point.
In one implementation, the coordinate information of each posture point may be represented by the two-dimensional coordinates of the corresponding pixel in the image. For example, the pixel (u, v) denotes the pixel in row u and column v of the two-dimensional image. In another implementation, the coordinate information of each posture point may be represented by the three-dimensional coordinates of the corresponding pixel in the image. For example, the pixel (u, v) may further carry depth information d, and the three-dimensional coordinates of the pixel may be expressed as (u, v, d), where the depth information represents the distance between the pixel and the camera. It should be understood that representing the coordinate information of a posture point by the two-dimensional coordinates (u, v) or the three-dimensional coordinates (u, v, d) of a pixel is only one possible implementation and should not constitute any limitation to this application.
Based on the multiple posture points of the human body, the pose estimation model can further estimate the human-body frame. On the same frame of image, connecting the multiple posture points yields the human skeleton frame, and the position and size of the human-body frame can be estimated from the coordinate information of the posture points. Therefore, the electronic device can compute the degree of overlap between the human-body frame estimated by the pose estimation model and the motion region determined by the motion-region detection algorithm, and thereby determine whether the trigger condition of the motion capture mode is satisfied. The specific method is similar to the method introduced above in conjunction with the human-body frame detection model and, for brevity, is not repeated here.
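For illustration, estimating a body frame from detected posture points can be as simple as taking their bounding box with a small margin; the margin value below is an assumption, and the resulting box can be fed into the overlap test sketched earlier.

```python
from typing import Iterable, Tuple

def body_box_from_keypoints(
    keypoints: Iterable[Tuple[float, float]], margin: float = 0.05
) -> Tuple[float, float, float, float]:
    """Estimate a human-body frame (x1, y1, x2, y2) from posture-point
    coordinates, expanded by a relative margin on every side."""
    pts = list(keypoints)
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            max(xs) + margin * w, max(ys) + margin * h)
```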
In another implementation, the electronic device may determine whether the photographed subject is in motion according to the coordinate information of the posture points in multiple consecutive frames. When the subject is moving, the coordinate information of some or all of the posture points changes relatively across the frames, and the changes of the posture points of the human body in the moving state can be obtained from these relative changes. Therefore, when the electronic device determines, according to the coordinate information of the multiple posture points detected by the pose estimation model in each frame of image, that the trigger condition of the motion capture mode is satisfied, it can determine that the first capture mode is the motion capture mode.
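A minimal sketch of such a frame-to-frame check follows. The displacement threshold and the requirement that at least a quarter of the points move are illustrative assumptions.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def is_subject_moving(
    prev_points: List[Point],
    curr_points: List[Point],
    min_displacement: float = 8.0,    # pixels
    min_moving_fraction: float = 0.25,
) -> bool:
    """Decide whether the posture points moved enough between two frames."""
    moved = 0
    for (x0, y0), (x1, y1) in zip(prev_points, curr_points):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 >= min_displacement:
            moved += 1
    return bool(prev_points) and moved / len(prev_points) >= min_moving_fraction
```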
Further optionally, the electronic device may invoke the pose estimation model and the action detection model to recognize the action category of the subject in the image. As described above, the pose estimation model can be used to determine multiple posture points in an image and the coordinate information of each posture point. The coordinate information of the multiple posture points in each frame of image can be input to the action detection model to determine the action category of the subject. The action detection model may be obtained by training, through a machine learning algorithm, on a plurality of predefined action categories, and can determine the action category of the subject based on the training samples and the coordinate changes of the posture points. The action categories include, for example, running, jumping, shooting a basketball, kicking a ball, rock climbing, swimming, diving, and skating.
If the action detection model determines that the coordinate changes of the posture points are the same as or approximately the same as the coordinate changes of the posture points under a certain predefined action category, that action category can be determined as the action category of the subject, and the action detection model can output it. For example, when the action detection model detects in the images that the human body is performing a specific action (such as one of the action categories listed above), the electronic device can determine that the images satisfy the trigger condition of the motion capture mode and thus determine that the first capture mode is the motion capture mode.
The above lists, in combination with the functions of the models, a number of examples in which the electronic device determines the first capture mode, but it should be understood that these examples should not constitute any limitation to this application. Multiple models may also be used in combination to determine, based on the trigger conditions predefined for the various capture modes, the first capture mode suitable for the current shooting.
The electronic device may also invoke the corresponding models in sequence according to the priority of the multiple capture modes. As an example and not a limitation, the motion capture mode has a higher priority than the expression capture mode. Based on such a priority ordering of the capture modes, the electronic device may invoke the detection models corresponding to the different capture modes in turn to determine the first capture mode.
For example, the motion capture mode may be determined by invoking the human-body frame detection model, the scene recognition model, the pose estimation model, or the pose estimation model together with the action detection model to detect the captured multiple frames of images. The expression capture mode may be determined by invoking the face attribute detection model to detect the captured multiple frames of images. It should be understood that the relationships between the modes and the models listed here are only examples and should not constitute any limitation to this application. As mentioned above, one capture mode may be jointly determined by the detection results of multiple models; in other words, one capture mode may be determined by invoking multiple models to detect the captured multiple frames of images. This application does not limit the models corresponding to the various capture modes.
Since the motion capture mode has a higher priority than the expression capture mode, the electronic device may first invoke the human-body frame detection model or the scene recognition model. When it is determined that the captured multiple frames of images satisfy the trigger condition of the motion capture mode, the first capture mode can be directly determined to be the motion capture mode. The electronic device then does not need to invoke the face attribute detection model, which saves the time for determining the capture mode and the power consumption caused by running the model.
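The priority-ordered, early-exit behaviour described above might look like the following sketch. The ordering and the trigger callables are assumptions for illustration.

```python
from typing import Any, Callable, List, Optional, Sequence, Tuple

def determine_mode_by_priority(
    frames: List[Any],
    prioritized_triggers: Sequence[Tuple[str, Callable[[List[Any]], bool]]],
) -> Optional[str]:
    """Evaluate capture-mode triggers in priority order and stop at the
    first one that fires, so lower-priority models are never run."""
    for mode, trigger in prioritized_triggers:
        if trigger(frames):  # only the models this trigger needs are invoked
            return mode
    return None

# Example priority list, motion before expression as in the text above:
# priorities = [("motion", motion_trigger_fn), ("expression", expression_trigger_fn)]
```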
It should be understood that the process in which the electronic device invokes the models in turn according to the priority ordering of the capture modes is described here in detail with one example of a priority ordering. This is only an example for ease of understanding and should not constitute any limitation to this application. This application does not limit the priority ordering among the multiple capture modes.
In step 220, the evaluation strategy corresponding to the first capture mode is used to determine, among the captured multiple frames of images to be selected, the capture frame image corresponding to the first capture mode.
The evaluation strategy is one of a plurality of preset evaluation strategies. Each evaluation strategy can be used to define a rule or manner for determining the capture frame image among the multiple frames of images to be selected. The capture frame image corresponding to the first capture mode may specifically refer to the image, determined based on the first capture mode, that best shows the highlight moment of the first capture mode. For example, if the first capture mode is the motion capture mode, the capture frame image may be the image, among the captured multiple frames of images to be selected, that best reflects the highlight action moment of the photographed subject. For another example, if the first capture mode is the expression capture mode, such as a smile capture, the capture frame image may be the image at the moment when the subject's smile is brightest among the captured multiple frames of images to be selected. For yet another example, if the first capture mode is the group photo capture mode, the capture frame image may be the image at the moment when the expression, demeanor, or composition of every photographed subject is at its best among the captured multiple frames of images to be selected.
It should be noted that the multiple frames of images to be selected described here and the multiple frames of images described in step 210 above may not overlap each other, or may partially overlap. The overlap mentioned here may specifically mean that a certain frame of image in step 210 and a certain frame of image to be selected in step 220 are the same frame of image, in other words, images captured at the same point in time, for example with the same timestamp. For example, the multiple frames of images to be selected may be multiple consecutive frames captured after the multiple frames of images captured in step 210, or may further include at least some of the frames among the multiple frames of images captured in step 210. For another example, the multiple frames of images to be selected may be multiple non-consecutive frames after the multiple frames of images captured in step 210. For yet another example, the multiple frames of images to be selected may be multiple frames of images that come after, and are not consecutive with, the multiple frames of images captured in step 210.
Specifically, the captured multiple frames of images described in step 210 may include, for example, preview images captured before the user performs a photographing operation or a video recording operation. Normally, after the camera is opened, the electronic device enters the photo mode by default. Even though the user has not performed a photographing operation, the preview image can still be seen through the shooting interface, so the above multiple frames of images may include multiple frames of preview images captured in the photo mode. Thereafter, the electronic device may trigger the first capture mode based on the user's manual adjustment or the detection result of a detection model; that is, the electronic device enters the smart capture mode. In the smart capture mode, although the user may not have performed a photographing operation, multiple frames of preview images can still be captured, so the above multiple frames of images may also include preview frames captured after entering the smart capture mode. The electronic device may save these frames in the cache for subsequent use, for example to determine the first capture mode in step 210.
In addition, in the photo mode or the smart capture mode, even though the user has not performed a photographing operation, preview images can still be captured and used to determine the first capture mode. In the embodiments of this application, the state, within the photo mode, the smart capture mode, or other photographing modes, before a photographing operation is performed may be referred to as the preview mode. The user can observe the photographed subject in the preview mode in order to choose an appropriate moment to perform the photographing operation. It should be understood that the preview mode is not necessarily before a photographing operation; the state between two consecutive photographing operations may also be referred to as the preview mode.
The multiple frames of images to be selected described in step 220 may be, for example, the N frames of images before and after the moment the shutter is pressed, where N is a positive integer. For example, if N is 10, the multiple frames of images to be selected may be the 10 frames before and the 10 frames after the moment the shutter is pressed, 20 frames in total. In this case, the multiple frames of images to be selected may not overlap with the multiple frames of images described in step 210, or may partially overlap with them. This may depend on the length of time between the moment the shutter is pressed and the moment the electronic device enters the first capture mode, on the value of N, and so on. For example, if the value of N is large and the user takes a photo after entering the first capture mode, the multiple frames of images to be selected may include some or all of the multiple frames of images described in step 210. This application does not limit the relationship between the multiple frames of images described in step 210 and the multiple frames of images to be selected described in step 220. Alternatively, the multiple frames of images to be selected may also be the N frames before or the N frames after the moment the shutter is pressed, which is not limited in the embodiments of this application.
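As a sketch of this buffering behaviour, the following selects the frames whose timestamps fall within a window of N frames around the shutter press. The frame record and buffer layout are assumptions for illustration.

```python
from typing import Any, List, Tuple

def candidate_frames(
    buffer: List[Tuple[float, Any]],  # (timestamp, frame), oldest first
    shutter_time: float,
    n: int = 10,
) -> List[Tuple[float, Any]]:
    """Return up to N frames before and N frames after the shutter press."""
    before = [f for f in buffer if f[0] <= shutter_time][-n:]
    after = [f for f in buffer if f[0] > shutter_time][:n]
    return before + after
```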
It should be understood that, during shooting, the user may press the shutter by, for example, tapping the shooting control in the user interface or another button that controls photographing; this application does not limit the specific operation by which the user presses the shutter. It should also be understood that the value of N above is only an example for ease of understanding and should not constitute any limitation to this application. This application does not limit the specific value of N.
In addition, the multiple frames of images described in step 210 may also include video images saved in the video recording mode. For example, the user may use the video mode to record a video after turning on the camera function. When the user records a video through the camera, although a photographing operation may not be performed, the video is in fact a sequence of consecutive frames. The electronic device can save these images in the cache for subsequent use, for example to determine the first capture mode in step 210 and to determine the capture frame image corresponding to the first capture mode in step 220.
In the video recording mode, the multiple frames of images to be selected described in step 220 may be, for example, the N frames before, the N frames after, or the N frames before and after the moment the shutter is pressed, or they may be images whose scores, evaluated with the evaluation strategy provided in the embodiments of this application, exceed a preset threshold. This application does not limit this.
In one implementation, the capture frame image corresponding to the first capture mode may be the optimal frame image determined from the captured multiple frames of images to be selected. For example, based on the evaluation strategy corresponding to the first capture mode, the captured multiple frames of images to be selected may be scored, and the frame with the highest score may be selected as the optimal frame image.
Optionally, before step 220, the method 200 further includes:
Step 230: perform image recognition on the captured multiple frames of images to be selected to output a recognition result;
Step 240: determine the value of each scoring parameter based on the recognition result.
Specifically, the electronic device may invoke one or more of the above detection models to perform image recognition on the captured multiple frames of images to be selected. According to the recognition results of the images to be selected, the electronic device may use the evaluation strategy corresponding to the first capture mode to score the captured multiple frames of images to be selected, and thus determine the capture frame image corresponding to the first capture mode according to the scoring results.
Optionally, step 210 above may specifically include: determining, based on a first frame rate and according to the captured multiple frames of images, the first capture mode among the plurality of preset capture modes. Optionally, step 230 may specifically include: performing image recognition on the captured multiple frames of images to be selected based on a second frame rate to output the recognition result. The first frame rate may be equal to or less than the second frame rate; this is related to the shooting mode used by the electronic device and will be described in detail later in conjunction with the embodiments shown in FIG. 4 to FIG. 6.
The following describes in detail the specific process of using the evaluation strategy corresponding to the first capture mode to determine the capture frame image corresponding to the first capture mode among the captured multiple frames of images to be selected.
Optionally, each of the multiple capture modes corresponds to at least one of the preset multiple evaluation strategies, and each evaluation strategy includes one or more scoring parameters used for image scoring and a mode weight for each scoring parameter. Step 220 may specifically include: using the one or more scoring parameters in one of the at least one evaluation strategy corresponding to the first capture mode and the mode weight of each scoring parameter, calculating the score of each frame among the captured multiple frames of images to be selected; and determining, according to the multiple scores of the multiple frames of images to be selected, the capture frame image corresponding to the first capture mode among the multiple frames of images to be selected.
Specifically, the multiple capture modes may correspond to the preset multiple evaluation strategies, and each capture mode may correspond to one or more of the preset evaluation strategies. Each evaluation strategy may include one set of scoring parameters corresponding to the first capture mode, and each set of scoring parameters may include one or more scoring parameters. For example, the scoring parameters corresponding to the motion capture mode may include one or more of the following: posture stretch, posture height, and so on. For another example, the scoring parameters corresponding to the expression capture mode may include: expression intensity, facial occlusion, open/closed eyes, and face angle.
Posture stretch, which may also be called posture extension, may specifically refer to the degree of bending of the limbs and their relative distance from the trunk. The posture stretch can be obtained by weighting the angles of the joints of the body, and the joint-angle parameters strongly related to human motion can be preset. The posture stretch may be determined, for example, by the pose estimation model and the action detection model. Corresponding to different joints, the posture stretch may further include the angles of the joints of the human body, including but not limited to the wrist angle, elbow angle, arm bending angle, leg bending angle, knee angle, and ankle angle; for brevity, they are not listed one by one here. In the embodiments of this application, the angle of each joint may exist as one scoring parameter, and the posture stretch can be understood as a generalization of the joint angles. The posture height may specifically refer to the height position of the centre of the body in the image. The expression intensity may specifically indicate how strongly the subject shows a certain expression. The expression intensity may be determined, for example, by the face attribute detection model, or may be calculated from feature points; it can be obtained by weighting the local features of the face and may further include, for example, how wide the mouth is open, how much the corners of the mouth are raised, and how open the eyes are. For brevity, they are not listed one by one here. In the embodiments of this application, each of the local features included in the expression intensity listed above may exist as one scoring parameter, and the expression intensity can be understood as a generalization of these local features. Open/closed eyes specifically refers to whether the subject's eyes are closed and may be determined, for example, by the face attribute detection model. Facial occlusion specifically refers to whether and to what extent the subject's face is occluded and may be calculated, for example, from feature points. The face angle specifically refers to whether and by how much the subject's face is tilted and may be determined, for example, by the face attribute detection model.
In addition to the scoring parameters listed above, in different capture modes the scoring parameters may also include sharpness, exposure, and composition. Sharpness may specifically refer to the clarity of the fine details and their boundaries in the image and is a parameter used to describe image quality. Exposure may specifically refer to the process in which the photosensitive element of the camera receives external light and forms an image; the amount of light received by the photosensitive element directly affects the brightness of the photo, and depending on how much light is received, the result can roughly be classified as underexposed, correctly exposed, or overexposed. Composition may specifically refer to the process of determining and organising elements to produce a harmonious photo, and may include, but is not limited to, rule-of-thirds composition (also called nine-grid composition), symmetrical composition, and frame composition, which is not limited in this application.
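As an illustration of how one posture-stretch component might be computed, the following evaluates the angle at a joint from three posture points (for example shoulder, elbow, wrist). The point layout is an assumption for illustration.

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def joint_angle(a: Point, b: Point, c: Point) -> float:
    """Angle at joint b (in degrees) formed by points a-b-c,
    e.g. shoulder-elbow-wrist for an arm-bending angle."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    cos_angle = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_angle))
```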
It should be understood that the scoring parameters corresponding to the capture modes listed above are only examples and should not constitute any limitation to this application. This application does not limit the specific content or names of the scoring parameters corresponding to each capture mode. Optionally, the scoring parameters included in different evaluation strategies corresponding to the same capture mode may be the same, but in different capture modes the scoring parameters included in different evaluation strategies are not necessarily the same.
Optionally, the different evaluation strategies corresponding to different capture modes include the same scoring parameters, and the different evaluation strategies include different mode weights.
That the mode weights included in different evaluation strategies are different may specifically mean that, in different evaluation strategies, the mode weights applied to the same scoring parameter are different; and, when an evaluation strategy contains multiple scoring parameters, different evaluation strategies apply different mode weights to at least one of the scoring parameters.
In other words, in different capture modes, the mode weights corresponding to the same scoring parameter in different evaluation strategies may be different. Put differently, the different evaluation strategies corresponding to different capture modes may include different mode weights for the same scoring parameters.
For example, the scoring parameters included in the evaluation strategy corresponding to the motion capture mode may include posture height, posture stretch, sharpness, exposure, and composition; alternatively, the scoring parameters included in the evaluation strategy corresponding to the motion capture mode may include posture height, posture stretch, expression intensity, open/closed eyes, facial occlusion, face angle, sharpness, exposure, and composition, but with small mode weights, for example zero or close to zero, applied to expression intensity, open/closed eyes, facial occlusion, and face angle. The two formulations are therefore equivalent in substance.
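The following sketch shows one way such per-mode weight tables could be represented. The parameter names and weight values are purely illustrative assumptions, not weights defined by this application.

```python
# Hypothetical mode weights: the same scoring parameters appear in every
# strategy, but each capture mode weights them differently.
MODE_WEIGHTS = {
    "motion": {
        "posture_height": 0.30, "posture_stretch": 0.30,
        "expression_intensity": 0.0, "eyes_open": 0.0,
        "face_occlusion": 0.0, "face_angle": 0.0,
        "sharpness": 0.20, "exposure": 0.10, "composition": 0.10,
    },
    "expression": {
        "posture_height": 0.0, "posture_stretch": 0.0,
        "expression_intensity": 0.30, "eyes_open": 0.15,
        "face_occlusion": 0.10, "face_angle": 0.05,
        "sharpness": 0.20, "exposure": 0.10, "composition": 0.10,
    },
}
```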
It should be understood that the scoring parameters listed above are only examples and should not constitute any limitation to this application. As long as the scoring parameters corresponding to the motion capture mode include either of the scoring parameters posture height and posture stretch, the case falls within the protection scope of the embodiments of this application. The scoring parameters corresponding to the motion capture mode may also include rotation and the like; this application does not limit the scoring parameters corresponding to the motion capture mode or their mode weights.
For another example, the scoring parameters included in the evaluation strategy corresponding to the expression capture mode may include expression intensity, open/closed eyes, facial occlusion, face angle, sharpness, exposure, and composition; alternatively, the scoring parameters included in the evaluation strategy corresponding to the expression capture mode may include expression intensity, open/closed eyes, facial occlusion, face angle, posture height, posture stretch, sharpness, exposure, and composition, but with small mode weights, for example zero or close to zero, applied to posture height and posture stretch. The two formulations are therefore equivalent in substance.
It should be understood that the scoring parameters listed above are only examples and should not constitute any limitation to this application. As long as the scoring parameters corresponding to the expression capture mode include any one of expression intensity, open/closed eyes, facial occlusion, and face angle, the case falls within the protection scope of this application. This application does not limit the scoring parameters corresponding to the expression capture mode or their mode weights.
From the scoring parameters and mode weights listed above for the different capture modes, it can be seen that the same scoring parameter has different mode weights in different capture modes. For example, posture height and posture stretch are given a higher weight in the evaluation strategy corresponding to the motion capture mode, whereas in the evaluation strategy corresponding to the expression capture mode they are given a lower weight or no weight at all (a weight of 0); conversely, expression intensity, open/closed eyes, facial occlusion, and face angle are given a higher weight in the evaluation strategy corresponding to the expression capture mode, whereas in the evaluation strategy corresponding to the motion capture mode they are given a lower weight or no weight at all (a weight of 0).
In addition, the mode weights of sharpness, composition, and exposure have little to do with the capture mode, so the same weights can be defined for them in different capture modes. It should be understood that the different scoring parameters and their mode weights are described above in combination with different capture modes only for ease of understanding, and this should not constitute any limitation to this application.
It should be noted that the evaluation strategy corresponding to a capture mode may be predefined. Once the evaluation strategy corresponding to the first capture mode is determined, the scoring parameters used when scoring the images to be selected and the mode weight of each scoring parameter are also determined.
For example, the score of an image to be selected may be determined by the formula

G = Σ_{i=1}^{I} α_i · T_i,

where G denotes the score, G > 0; I denotes the number of scoring parameters, I ≥ 1 and is an integer; i denotes the i-th of the I scoring parameters, 1 ≤ i ≤ I, and i is an integer; T_i denotes the value of the i-th scoring parameter, T_i ≥ 0; and α_i denotes the mode weight of the i-th scoring parameter, α_i ≥ 0.
In the first capture mode, the score of each frame of image may be the result obtained by weighting the values of the scoring parameters T_i corresponding to the first capture mode with the mode weights α_i of those scoring parameters. When a capture mode corresponds to multiple evaluation strategies, the electronic device may select one of them for scoring, for example a default (or general) evaluation strategy; alternatively, the multiple evaluation strategies may correspond to multiple capture categories, and the electronic device may select the evaluation strategy corresponding to the capture category previously determined by the detection models from the multiple frames of images.
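As a rough illustration of how such an evaluation strategy might be applied in practice, the following Python sketch computes a mode-weighted score for a candidate frame. The function name, parameter order, and sample weights are hypothetical and are not prescribed by this application; they merely instantiate the weighted-sum idea described above.

```python
def score_frame(values, mode_weights):
    """Mode-weighted score of one candidate frame: G = sum_i(alpha_i * T_i).

    values: scoring-parameter values T_i (each T_i >= 0)
    mode_weights: corresponding mode weights alpha_i (each alpha_i >= 0)
    """
    assert len(values) == len(mode_weights)
    return sum(a * t for a, t in zip(mode_weights, values))


# Hypothetical weights for an expression capture mode: face-related
# parameters dominate, posture-related parameters get (near-)zero weight.
expression_weights = [0.30, 0.20, 0.20, 0.10, 0.00, 0.00, 0.10, 0.05, 0.05]
frame_values = [0.8, 1.0, 0.9, 0.7, 0.2, 0.1, 0.9, 0.8, 0.6]
print(score_frame(frame_values, expression_weights))
```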
For ease of understanding, the following first describes in detail the process of scoring the images to be selected using one evaluation strategy corresponding to the first capture mode. Such an evaluation strategy may be, for example, the default evaluation strategy described above.

The following takes different capture modes as examples to describe in detail the specific process in which the electronic device determines, from multiple frames of images to be selected, the captured frame image corresponding to the first capture mode.

First, the electronic device may call the detection model corresponding to the first capture mode to perform image recognition on the images to be selected, so as to obtain a recognition result.
For example, if the first capture mode is the sports capture mode, the models corresponding to the first capture mode may include a pose estimation model and an action detection model.

The electronic device may call the pose estimation model to perform image recognition on the captured multiple frames of images to be selected. The pose estimation model may perform image recognition on each frame of image to be selected and obtain the coordinate information of multiple posture points in that frame. The coordinate information of the posture points in each frame is the recognition result output by the pose estimation model. Since the specific process by which the pose estimation model determines the coordinate information of each posture point has been described in step 210 above, it is not repeated here for brevity.

Optionally, the electronic device may also call the action detection model to perform image recognition on the multiple frames of images to be selected. The action detection model may be combined with the pose estimation model to identify the action category of the subject. The action category recognized from each frame of image to be selected may be the recognition result output by the action detection model. Since the specific process by which the action detection model determines the action category has been described in step 210 above, it is not repeated here for brevity.

The action detection model may indicate the recognized action category by an index of the action type or by other information that uniquely identifies an action category. This application does not limit the specific form of the recognition result output by the action detection model.

It should be understood that, in this embodiment, the evaluation strategy does not necessarily correspond to an action category. Therefore, the electronic device may or may not call the action detection model to recognize the action category, which is not limited in this application.
As another example, if the first capture mode is the expression capture mode, the model corresponding to the first capture mode is a face attribute detection model. The electronic device may call the face attribute detection model to perform image recognition on the captured multiple frames of images to be selected. The face attribute detection model may be built as a classification model of face attributes, such as expression category (for example happy, angry, sad, or funny), eye open/closed state, feature points, and age, for example the facial feature point detection model and the eye open/closed detection model described above. It is used to perform image recognition on each frame of image to be selected and to output information such as the subject's expression category, whether the eyes are closed, whether the face is occluded, and age. In other words, this information for each frame of image to be selected is the recognition result output by the face attribute detection model based on its recognition of that frame.

The face attribute detection model may detect the subject's expression based on multiple pre-trained expression categories. When the subject's expression is determined to belong to any one of the pre-trained expression categories, that category may be determined as the subject's expression category. In addition, different expression categories may correspond to different priorities. When the face attribute detection model determines multiple expression categories, they may be sorted according to predefined priorities, and the category with the higher priority is determined as the subject's expression category.

After completing image recognition on an image to be selected, the face attribute detection model may output the recognition result. Specifically, the expression category may be indicated by an index of the expression type or by other information that uniquely identifies an expression category; the eye open/closed state may be indicated, for example, by the binary values "0" and "1" for "eyes closed" and "eyes open" respectively; the feature points may be indicated by the coordinate information of each feature point; and the age may be indicated by a specific numerical value.
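For illustration only, the recognition result described above could be packaged as a small per-frame record; the class and field names below are assumptions of this sketch and not part of the claimed method.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class FaceAttributes:
    """Per-frame output of a face attribute detection model (hypothetical layout)."""
    expression_index: Optional[int]        # index into predefined expression categories
    eyes_open: int                         # 1 = eyes open, 0 = eyes closed
    face_occluded: bool                    # True if predefined feature points are missing
    landmarks: List[Tuple[float, float]]   # (x, y) coordinates of facial feature points
    age: int                               # estimated age as a numeric value
```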
It should be understood that, in this embodiment, since the evaluation strategy does not necessarily correspond to an expression category, the face attribute detection model may omit outputting the expression category. The electronic device may determine information such as expression intensity, whether the eyes are closed, and whether the face is occluded from the positions of the feature points, the age, and other information in each frame of image to be selected, and score each frame using the evaluation strategy corresponding to the first capture mode.

It should also be understood that the above merely shows several possible ways of indicating the detection result for ease of understanding, and does not limit this application; the specific way of indicating the detection result is not limited here. It should further be understood that the scoring parameters listed above for the expression capture mode are only examples. As long as the scoring parameters corresponding to the expression capture mode include any one of expression intensity, face occlusion, eye open/closed state, and face angle, the solution falls within the protection scope of this application. Of course, when the first capture mode is the group photo capture mode, the scoring parameters may also include one or more of expression intensity, face occlusion, eye open/closed state, and face angle. For brevity, this is not repeated below.

It should be noted that, when calling one or more detection models for image recognition, the electronic device may call the detection model corresponding to the first capture category, as in the example above, where a capture category may be a specific sub-mode or classification obtained by further subdividing a capture mode. The electronic device may also call multiple predefined detection models for image recognition, for example the face attribute detection model, the pose estimation model, and the action detection model. Because different mode weights are applied to each scoring parameter depending on the capture mode, even though multiple detection models are called, the electronic device applies different mode weights when scoring based on the recognition results, so the final selection result is not affected. Therefore, this application does not limit the specific models used for image recognition.
After obtaining the recognition results of the images to be selected, the electronic device may obtain the value of each scoring parameter based on the recognition results. For example, if the first capture mode is the sports capture mode, the electronic device may determine parameters such as the height of the human skeleton and the angles of the joints from the coordinate information of the posture points.

The electronic device may determine the value of the posture height scoring parameter based on the height of the human skeleton. For example, the centre point or the highest point of the human skeleton may be used as the value of the posture height. It should be understood that once the electronic device selects a particular point (such as the centre point of the human skeleton) as the value of the posture height, it uses the same point for all images to be selected.

The electronic device may determine the posture stretch based on the angles at the joint points. Specifically, the posture stretch may be determined by the degree of bending at each joint point of the human body, so the posture stretch may include the values of those bending degrees. As described above, the value of the posture stretch may be obtained by weighting the angles at the joint points, and the weight of each joint angle may be predefined, that is, the mode weight described above. Of course, the values of the scoring parameters may also be determined in other ways, which is not limited in this application. For example, an existing algorithm, such as an action intensity algorithm, may be called to determine the value of each scoring parameter.
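The following sketch shows one way the joint-angle computation and the weighted posture stretch could look; the helper names and the weight table are hypothetical, and the actual algorithm (for example an action intensity algorithm) is not limited by this application.

```python
import math


def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by pose points a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (
        math.hypot(*v1) * math.hypot(*v2) + 1e-9)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))


def posture_stretch(joint_points, joint_weights):
    """Weighted sum of joint angles (elbows, knees, ankles, ...).

    joint_points: dict joint name -> (a, b, c) pose-point coordinates
    joint_weights: dict joint name -> predefined weight for that joint angle
    """
    return sum(joint_weights[name] * joint_angle(*pts)
               for name, pts in joint_points.items())
```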
It should be understood that posture height and posture stretch are only one possible way of expression, and this application does not exclude other expressions with the same or similar meaning; for example, posture stretch could be replaced by motion amplitude. As another example, if the first capture mode is the expression capture mode, the electronic device may determine the value of each scoring parameter from multiple feature points of the human face.

Taking the expression intensity scoring parameter as an example, the electronic device may determine the value of the expression intensity from multiple feature points. Specifically, the expression intensity may be determined by local features of the face, including but not limited to how wide the mouth is open, how much the corners of the mouth are raised, and how open the eyes are. The value of the expression intensity may therefore include the values of these local features. As described above, the expression intensity may be obtained by weighting the values of the local features of the face, and the weight of each local feature may be predefined, that is, the mode weight described above.

For the degree of eye opening, the value may be determined, for example, by the ratio of the vertical distance to the horizontal distance of the eye. For how wide the mouth is open, the value may be determined, for example, by the ratio of the sum of the distance between the upper and lower lips and the horizontal distance between the mouth corners to the inter-eye distance. For how much the mouth corners are raised, the value may be determined, for example, by the distance between the horizontal line through the mouth corners and the lower lip.
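A sketch of those geometric features, computed from hypothetical landmark positions, might look as follows; the landmark naming is an assumption made here for illustration.

```python
import math


def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])


def eye_openness(eye_top, eye_bottom, eye_left, eye_right):
    # Ratio of the vertical eye distance to the horizontal eye distance.
    return dist(eye_top, eye_bottom) / (dist(eye_left, eye_right) + 1e-9)


def mouth_openness(upper_lip, lower_lip, left_corner, right_corner, left_eye, right_eye):
    # (lip distance + mouth-corner distance) relative to the inter-eye distance.
    return (dist(upper_lip, lower_lip) + dist(left_corner, right_corner)) / (
        dist(left_eye, right_eye) + 1e-9)


def corner_lift(left_corner, right_corner, lower_lip):
    # Distance from the lower lip to the horizontal line through the mouth corners.
    return lower_lip[1] - (left_corner[1] + right_corner[1]) / 2.0
```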
Taking the face occlusion scoring parameter as another example, the electronic device may determine, from the detected feature points, whether any feature points are missing; if so, the subject's face may be considered occluded. The value of face occlusion may be determined, for example, by the ratio of the number of detected feature points to the number of predefined feature points.
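Assuming the model reports which of a predefined set of feature points it actually found, the occlusion value could be derived as a simple ratio, for example:

```python
def occlusion_value(detected_point_ids, predefined_point_ids):
    """Fraction of the predefined feature points that were actually detected.

    A value below 1.0 indicates missing points, i.e. a possibly occluded face.
    """
    found = set(detected_point_ids) & set(predefined_point_ids)
    return len(found) / float(len(predefined_point_ids))
```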
In addition to the expression intensity and face occlusion listed above, the electronic device may also determine values for scoring parameters such as eye open/closed state and face angle, for example by calling an existing algorithm such as an expression intensity algorithm.

Furthermore, in different capture modes the electronic device may additionally load scoring parameters such as sharpness, exposure, and composition, and determine the value of each. For different capture modes, the value of the composition scoring parameter may be computed based on different composition rules. For example, in the expression capture mode a nine-grid (rule-of-thirds) composition may be loaded; in the group photo capture mode a symmetrical composition may be loaded; and in the landscape capture mode a horizon composition may be loaded. The mode weight of the composition scoring parameter may therefore be defined per capture mode.

Taking the group photo capture mode as an example, since the symmetrical composition can be loaded, the distance from the centre of all persons in the image to the centre of the frame, and the distances between adjacent persons, can be calculated. Weights are applied to these distances respectively to obtain a weighted sum, and this weighted sum can be used as the value of the composition scoring parameter.
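A minimal sketch of such a symmetry-based composition value, with illustrative (not prescribed) weights, is shown below.

```python
def symmetry_composition(person_centers, frame_center, w_center=0.6, w_spacing=0.4):
    """Group-photo composition value: weighted sum of (a) the distance from the
    centroid of all persons to the frame centre and (b) the spacing between
    horizontally adjacent persons."""
    cx = sum(p[0] for p in person_centers) / len(person_centers)
    cy = sum(p[1] for p in person_centers) / len(person_centers)
    centre_dist = ((cx - frame_center[0]) ** 2 + (cy - frame_center[1]) ** 2) ** 0.5
    xs = sorted(p[0] for p in person_centers)
    spacing = sum(b - a for a, b in zip(xs, xs[1:]))
    return w_center * centre_dist + w_spacing * spacing
```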
It should be understood that the specific methods listed above for determining the values of the scoring parameters are only examples and do not limit this application. Since the values of the scoring parameters can be determined with reference to the prior art, further examples are omitted here for brevity.

It should be noted that, when a scoring parameter is obtained by weighting multiple parameter values, those values may first be normalized to the same magnitude before being weighted. After the values of the scoring parameters have been determined, the electronic device may use the evaluation strategy corresponding to the first capture mode to determine the score of each frame of image to be selected.
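One common way to bring raw values to the same magnitude before weighting is min-max normalization; this is only one possible choice and is not mandated by the text.

```python
def min_max_normalize(values):
    """Scale a list of raw parameter values to the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```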
As described above, each capture mode may correspond to one or more evaluation strategies, and each evaluation strategy defines scoring parameters and their mode weights. In other words, in each evaluation strategy the scoring parameters and their mode weights are preset. Once the evaluation strategy is determined, the scoring parameters and their mode weights are determined. The electronic device only needs to substitute the value determined for each scoring parameter of each frame of image to be selected to obtain the score of that frame.
In this embodiment, the electronic device may use one evaluation strategy corresponding to the first capture mode and substitute the values of the scoring parameters previously determined for each frame of image to be selected, so as to calculate the score of each frame. The score of each frame may be calculated, for example, by the formula G = Σ_{i=1}^{I} α_i · T_i listed above. The meaning of each parameter in the formula has been explained above and is not repeated here for brevity. When the scores of multiple scoring parameters are of different magnitudes, they may also be normalized to the same magnitude.
It should be understood that the formula listed here for calculating the score is only an example and does not limit this application; this application does not exclude the possibility of using other calculation methods to calculate the score of the images to be selected. After the score of each frame of image to be selected is determined based on the above formula, the electronic device may determine the captured frame image corresponding to the first capture mode according to those scores. Here, the captured frame image corresponding to the first capture mode may be, for example, the frame with the highest score among the multiple frames of images to be selected; in other words, its score is higher than the score of any other frame of image to be selected. After determining the captured frame image corresponding to the first capture mode, the electronic device may save it in the electronic device, or output it to the display unit of the electronic device, and so on, which is not limited in this application.
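Selecting the captured frame then amounts to taking the highest-scoring candidate, as in the short sketch below (function name assumed for illustration).

```python
def pick_capture_frame(candidate_frames, scores):
    """Return the candidate frame whose score exceeds that of every other frame."""
    best_index = max(range(len(scores)), key=scores.__getitem__)
    return candidate_frames[best_index]
```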
In another implementation, each capture mode may correspond to multiple evaluation strategies, and the multiple evaluation strategies under each capture mode may further correspond to multiple capture categories. For example, the capture categories corresponding to the sports capture mode may include at least one of the following: shooting a basketball, running, jumping, swimming, kicking a ball, and so on. The capture categories corresponding to the expression capture mode may include at least one of the following: happy, angry, sad, funny, and so on. It should be understood that the capture categories listed above for each capture mode are only examples, and this application does not limit the specific capture categories corresponding to each capture mode.

In this embodiment of the application, each capture category may correspond to one evaluation strategy. Among the at least one evaluation strategy corresponding to the first capture mode, each evaluation strategy includes one or more scoring parameters corresponding to the first capture mode, the mode weight of each scoring parameter, and the category weight of each scoring parameter corresponding to one capture category. In other words, in the first capture mode each scoring parameter may be defined with a mode weight, and under each capture category corresponding to the first capture mode each scoring parameter may further be defined with a category weight.

Optionally, the evaluation strategies corresponding to different capture categories include the same scoring parameters but different category weights.

That the different evaluation strategies include different category weights may specifically mean that, for the same scoring parameter, different evaluation strategies apply different category weights; and, when an evaluation strategy contains multiple scoring parameters, different evaluation strategies apply different category weights to at least one of them. In other words, the evaluation strategies corresponding to different capture categories may include different category weights for the same scoring parameter.

For convenience of description, the capture category determined by the electronic device from the captured multiple frames of images is referred to as the first capture category. Depending on the first capture category, the category weight corresponding to the same scoring parameter in the same capture mode also differs.
For example, the score of an image to be selected may be determined by the formula

G = Σ_{i=1}^{I} α_i · β_i · T_i,

where β_i denotes the category weight of the i-th scoring parameter, β_i ≥ 0. The meanings of T_i and α_i have been explained above and are not repeated here for brevity.
In the same capture mode, for the same scoring parameter, the category weight may differ even though the mode weight is the same. For example, the first capture mode is the sports capture mode, whose scoring parameters may include posture height, posture stretch, and so on, and which may include action categories such as shooting a basketball, diving, swimming, and running.

When the first action category is shooting a basketball, the category weight of posture height is higher than that of posture stretch, and within posture stretch the category weights of the elbow angle and the arm bending angle are higher than those of other angles (such as the knee angle and the ankle angle). When the first action category is diving, the category weight of the knee angle is higher than that of posture height, and the category weight of posture height is higher than that of the arm bending angle. When the first action category is swimming, the category weight of the leg bending angle is higher than that of the arm bending angle, and the category weight of the arm bending angle is higher than that of posture height. When the first action category is the default category, a larger joint angle indicates a more vigorous human motion, which can be regarded as a highlight moment; therefore the category weights of the leg bending angle and the arm bending angle are higher than that of posture height. It should be understood that these examples are only for ease of understanding and do not limit this application; this application does not limit how category weights are assigned to the scoring parameters under each capture category.

Optionally, step 210 above may further include: determining the first capture category in the first capture mode according to the captured multiple frames of images. Step 220 above may further include: calculating the score of each frame of the multiple frames of images to be selected using the one or more scoring parameters corresponding to the first capture mode and the mode weight of each scoring parameter, as well as the category weight of each scoring parameter corresponding to the first capture category.
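The sketch below illustrates how mode weights and category weights could be combined per the formula above; the category-weight table is entirely hypothetical and only mirrors the qualitative ordering described in the example (for instance, higher weights for elbow and arm angles when the category is shooting a basketball).

```python
# Hypothetical category-weight table for the sports capture mode.
CATEGORY_WEIGHTS = {
    "shooting": {"posture_height": 0.40, "elbow_angle": 0.30, "arm_angle": 0.20,
                 "knee_angle": 0.05, "ankle_angle": 0.05},
    "diving":   {"knee_angle": 0.45, "posture_height": 0.35, "arm_angle": 0.20},
    "default":  {"leg_angle": 0.40, "arm_angle": 0.40, "posture_height": 0.20},
}


def score_with_category(values, mode_weights, category):
    """G = sum_i(alpha_i * beta_i * T_i) over the named scoring parameters.

    values: dict parameter name -> value T_i
    mode_weights: dict parameter name -> mode weight alpha_i
    category: capture/action category selecting the beta_i table
    """
    betas = CATEGORY_WEIGHTS.get(category, CATEGORY_WEIGHTS["default"])
    return sum(mode_weights.get(name, 0.0) * betas.get(name, 0.0) * value
               for name, value in values.items())
```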
That is to say, in addition to determining the first capture mode, the electronic device may further determine the first capture category, for example by calling the action detection model or the face attribute detection model. The following examples describe the specific process of determining the first capture category based on the first capture mode, and scoring the multiple frames of images to be selected according to the evaluation strategy corresponding to the first capture category.

First, the electronic device may determine the first capture category based on the first capture mode.

For example, if the first capture mode is the sports capture mode, the electronic device may at least call the pose estimation model and the action detection model to perform image recognition on the images to be selected. As described above, the action detection model may recognize the action category based on the coordinate information of the posture points in each frame of image to be selected supplied by the pose estimation model: it may construct the coordinate changes of the posture points during the motion from that coordinate information and determine the first action category.

Since the specific process by which the action detection model determines the action category has been described in step 210 above, it is not repeated here for brevity. It should be noted that, if the action category determined by the action detection model from the coordinate information of the posture points does not belong to any predefined action category, the action category may be determined as the default category. In this embodiment of the application, when the action detection model does not detect a specific action category, the action category may be classified as the default category, and the action detection model may output the subject's action category as the default category, more specifically the default category under the sports capture mode.

If the first capture mode is the expression capture mode, the electronic device may at least call the face attribute detection model to perform image recognition on the images to be selected. As described above, the face attribute detection model may determine the expression category based on an analysis of the feature points of each frame of image to be selected.

Since the specific process by which the face attribute detection model determines the expression category has been described in step 210 above, it is not repeated here for brevity. It should be noted that, if the expression category determined by the face attribute detection model from the feature points in the images to be selected does not belong to any predefined expression category, the expression category may be determined as the default category. In this embodiment of the application, when the face attribute detection model does not detect a specific expression category, the expression category may be classified as the default category, and the face attribute detection model may output the subject's expression category as the default category, more specifically the default category under the expression capture mode. The determination of the value of each scoring parameter by the electronic device has been described in detail with examples above and is not repeated here for brevity.
After determining the first capture category under the first capture mode, the electronic device may use the corresponding evaluation strategy, substitute the scoring parameters, and determine the score of each frame of image to be selected. In this embodiment, because the first capture mode has been further refined into capture categories, when weighting the values of the scoring parameters the electronic device may additionally apply a category weight to each scoring parameter, reflecting the details that the capture category focuses on, so as to calculate a score that better fits the first capture category under the first capture mode.

For example, if the first capture mode is the sports capture mode and the first capture category is shooting a basketball, the category weights applied to the elbow angle and the arm bending angle may be higher than those applied to other scoring parameters (such as the knee angle and the ankle angle), and the category weight applied to posture height may be higher than that applied to posture stretch.
In this embodiment, the electronic device may use the evaluation strategy corresponding to the first capture category under the first capture mode and substitute the values of the scoring parameters previously determined for each frame of image to be selected, so as to calculate the score of each frame. The score of each frame may be calculated, for example, by the formula G = Σ_{i=1}^{I} α_i · β_i · T_i listed above. The meaning of each parameter in the formula has been explained above and is not repeated here for brevity. When the scores of multiple scoring parameters are of different magnitudes, they may also be normalized to the same magnitude.
It should be understood that the formula listed here for calculating the score is only an example and does not limit this application; this application does not exclude the possibility of using other calculation methods to calculate the score of the images to be selected.

After the score of each frame of image to be selected is determined based on the above formula, the electronic device may determine the captured frame image corresponding to the first capture mode according to those scores. Here, the captured frame image corresponding to the first capture mode may be, for example, the frame with the highest score among the multiple frames of images to be selected; in other words, its score is higher than the score of any other frame of image to be selected.

After determining the captured frame image corresponding to the first capture mode, the electronic device may save it in the electronic device, or output it to the display unit of the electronic device, and so on, which is not limited in this application. It should be noted that, in the same capture mode, corresponding to different capture categories, the category weights of some scoring parameters may also be defined as zero, or close to zero. When the category weight of a scoring parameter is defined as zero or close to zero, this may also be understood as that scoring parameter not being included among the scoring parameters corresponding to that action category. From this perspective, the scoring parameters included in the multiple evaluation strategies corresponding to the same capture mode are not necessarily the same. It should be understood that the multiple examples listed above for determining the score of each frame of image to be selected are only for ease of understanding and do not limit this application. Based on the same concept, those skilled in the art may change or replace the above steps to achieve the same effect; this application does not limit the specific process by which the electronic device calculates the score of each frame of image to be selected.

As described above, the multiple frames of images to be selected are the N frames before, the N frames after, or the N frames before and after the moment the shutter is pressed, or one or more frames whose scores exceed a preset threshold during video recording. These images are saved in the electronic device and may also be sent to the display unit and presented to the user, which is not limited in this application.
In one implementation, each frame of image may correspond to a timestamp. After determining the captured frame image corresponding to the first capture mode, the electronic device may search for and obtain, from the camera module, the image whose timestamp matches that of the captured frame image. After encoding the image, the camera module may push it to the display unit and present it to the user. At the same time, the image captured in response to the user's photographing operation may also be encoded and saved in the user's photo album. To help the user distinguish them, the captured frame image and/or the frame actually taken by the user may each be marked, for example by leaving a "best moment" or similar mark on the captured frame image, or by leaving different marks on the two frames.
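A timestamp lookup of this kind could be sketched as follows; the buffer structure and tolerance are assumptions of the example rather than details given in the text.

```python
def fetch_frame_by_timestamp(frame_buffer, target_ts_ms, tolerance_ms=5):
    """Find the buffered camera frame whose timestamp matches the chosen capture frame.

    frame_buffer: dict mapping timestamp in milliseconds -> raw image
    """
    for ts, image in frame_buffer.items():
        if abs(ts - target_ts_ms) <= tolerance_ms:
            return image
    return None
```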
In fact, before determining the first capture mode from the captured multiple frames of images, the electronic device may also preprocess the captured frames in order to obtain a more accurate estimation result. That is to say, the images input to the one or more models above may be original images or preprocessed images, which is not limited in this application.

Optionally, the method 200 further includes: performing image preprocessing on the captured multiple frames of images. Specifically, an image processing module in the electronic device, such as an ISP, may preprocess the images. Image preprocessing may include, for example, cropping each frame to fit the input size of the pose estimation model, and may also include operations such as mean subtraction, normalization, and data augmentation (for example rotation). This application does not limit the specific content or implementation of the image preprocessing. The preprocessed images can fit the input sizes of the aforementioned models, and at the same time the diversity of the data can be enhanced to prevent the models from overfitting.
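A rough NumPy sketch of such a preprocessing pass is given below; the input size, mean values, and scaling are placeholders, since the text leaves the concrete preprocessing open.

```python
import numpy as np


def preprocess(frame, input_size=(256, 256), mean=(123.7, 116.3, 103.5)):
    """Crop to the model input size, subtract a per-channel mean, and rescale."""
    h, w = input_size
    img = frame[:h, :w].astype(np.float32)     # crop to the assumed input size
    img -= np.asarray(mean, dtype=np.float32)  # mean subtraction
    img /= 255.0                               # normalisation
    return img
```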
Based on the technical solution described above, by presetting multiple different capture modes and their corresponding evaluation strategies, the capture mode can be determined according to the actual shooting scene, an evaluation strategy corresponding to the first capture mode can be selected from the multiple preset evaluation strategies, and that evaluation strategy can be used to determine the captured frame image. For example, scoring parameters such as posture stretch and posture height are introduced in the sports capture mode and the multiplayer sports capture mode, and scoring parameters such as expression intensity, eye open/closed state, face occlusion, and face angle are introduced in the expression capture mode and the group photo capture mode, so that the captured frame image corresponding to the first capture mode is selected based on the corresponding evaluation strategy, with higher mode weights applied to the scoring parameters that each capture mode focuses on. As a result, the captured image better matches the actual shooting scene, which helps to obtain an ideal capture effect, improves flexibility, and suits more scenes.

Furthermore, the technical solution provided in this application may further determine the weight of each scoring parameter based on different capture categories. For example, because different action categories focus on different aspects, among the multiple scoring parameters corresponding to the sports capture mode, the category weights configured for different action categories differ. In the sports capture mode, scoring the multiple frames of images to be selected based on the category weights of the scoring parameters corresponding to the action category helps to obtain an ideal captured frame image. Compared with recommending captured frame images based on optical flow information, the solution provided in this application pays more attention to the action itself and can therefore achieve a better capture effect. Because the electronic device continuously detects the captured multiple frames of images, and the captured images change as the camera keeps running, once it detects that the images meet the trigger condition of another capture mode different from the first capture mode, it may switch to that other capture mode.
Optionally, the method further includes: determining a second capture mode based on the newly captured multiple frames of images, the second capture mode being a capture mode among the preset multiple capture modes that differs from the first capture mode; and switching to the second capture mode. To prevent the electronic device from switching frequently between capture modes and causing a ping-pong effect, a guard period may be set for each capture mode. The duration of the guard period may be a predefined value, and the guard periods of the capture modes may have the same or different durations, which is not limited in this application. Therefore, before switching to the second capture mode, the electronic device may first determine whether the running duration of the first capture mode has exceeded the preset guard period. If the running duration of the first capture mode is still within the guard period, no mode switch is made regardless of whether the detection of the newly captured frames meets the trigger condition of the second capture mode, and the device keeps running in the first capture mode. If the running duration of the first capture mode exceeds the guard period, the device may switch to the second capture mode based on the detection of the newly captured frames.
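The guard-period check can be pictured as a small controller that refuses to switch modes until the current mode has run long enough; all names and the default duration below are illustrative only.

```python
import time


class CaptureModeController:
    """Sketch of guard-period handling to avoid ping-pong switching between modes."""

    def __init__(self, guard_period_s=2.0):
        self.guard_period_s = guard_period_s
        self.mode = None
        self.mode_started_at = 0.0

    def maybe_switch(self, triggered_mode):
        """Switch to triggered_mode only if the current mode has run past the guard period."""
        now = time.monotonic()
        if self.mode is None or (triggered_mode != self.mode and
                                 now - self.mode_started_at >= self.guard_period_s):
            self.mode = triggered_mode
            self.mode_started_at = now
        return self.mode
```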
When a user takes a photo with an electronic device, the user may first open the camera in the electronic device and manually select a shooting mode. FIG. 3 shows an example of a mobile phone interface. Part (a) of FIG. 3 shows the interface content 301 displayed by the screen display system of the mobile phone when the phone is unlocked. The interface content 301 may include multiple icons, which may correspond to multiple applications (apps), such as Alipay, Weibo, Album, Camera, and WeChat, which are not listed one by one here.

If the user wishes to take a photo or video with the mobile phone, the user may first start the camera application by tapping the "Camera" icon 302 on the user interface. The phone interface displayed after the camera application is started may be as shown in part (b) of FIG. 3 and may be referred to as the shooting interface of the camera. The shooting interface may include a viewfinder frame 303, an album icon 304, a shooting control 305, and the like. The viewfinder frame 303 is used to obtain a preview image for shooting and can display the preview image in real time. It should be understood that the preview image described above is not necessarily saved in the album; in this embodiment of the application, the preview image may be saved in the cache of the phone or in another storage unit, which is not limited in this application.

The album icon 304 is used to enter the album quickly. When the phone detects that the user has tapped the album icon, it can display photos or videos that have already been taken on the screen. The shooting control 305 can be used to take photos or record videos. If the camera is in the photo mode, when the phone detects that the user has tapped the shooting control, the phone performs the photographing operation and saves the captured photo, that is, the photographing stream described above. If the camera is in the video mode, when the phone detects that the user has tapped the shooting control, the phone performs the video shooting operation; when the phone detects that the user has tapped the shooting control again, the video shooting ends. The phone can save the recorded video. In one implementation, the video may be saved as consecutive multiple frames of images, that is, the video stream described above.

In addition, the shooting interface may also include a function control 306 for setting the shooting mode, such as the portrait mode, photo mode, video mode, and panorama mode shown in part (b) of FIG. 3. The user can switch the shooting mode by tapping this function control. Optionally, the shooting interface may also include a camera rotation control 307, as shown in part (b) of FIG. 3, which can be used to switch between the front camera and the rear camera.

It should be understood that FIG. 3 is only for ease of understanding and uses a mobile phone as an example to describe in detail the process in which the user opens the photographing function or other functions through operations. However, the phone interface shown in FIG. 3 is only an example and does not limit this application; the interfaces of phones with different operating systems or from different brands may differ. Moreover, the embodiments of this application may also be applied to electronic devices other than mobile phones that can be used for taking photos. The interfaces shown in the figures are only examples and do not limit this application.
To better understand the method provided in this application, the following provides detailed descriptions with reference to several specific embodiments. FIG. 4 is a schematic flowchart of a method for capturing an image according to another embodiment of this application. In the method shown in FIG. 4, the user may manually switch the photographing mode to the smart capture mode, and in response to the user's operation the electronic device enters the smart capture mode. In other words, the embodiment shown in FIG. 4 mainly describes a method in which the electronic device captures images while in the smart capture mode.

It should be understood that the method shown in FIG. 4 may be executed by an electronic device or by a processor in the electronic device. In step 401, the captured multiple frames of images are periodically detected at a high frame rate. Specifically, the electronic device may continuously obtain, from the cache, the multiple frames of images it has captured, and call the detection models described above (for example the face attribute detection model, the pose estimation model, and the action detection model) to periodically detect the captured images, so as to determine the first capture mode suitable for the current shooting. In the smart capture mode, the electronic device may use a relatively high detection frame rate, for example 30 frames per second, and the detection models may run alternately to reduce power consumption. It should be understood that, until the smart capture mode is exited, step 401 may run throughout the entire flow from step 401 to step 410. It should also be understood that calling the detection models to detect the multiple frames of images is still performed by the electronic device; therefore, in this embodiment, for brevity, the process of calling the detection models to detect images is not described separately.

In step 402, it is determined that the trigger condition of the first capture mode is satisfied. Specifically, the electronic device may determine, based on the detection of the multiple frames of images, that the trigger condition of a certain capture mode is satisfied, for example the trigger condition of the first capture mode. In that case, step 403 may be executed to enable the first capture mode. When the images do not satisfy the trigger condition of any capture mode, step 401 may continue to be executed, and the captured multiple frames of images continue to be periodically detected at a high frame rate.

Since the method 200 above has already listed different trigger conditions for different capture modes and, with reference to the face attribute detection model, the pose estimation model, and the action detection model, described in detail how to determine whether the trigger condition of a capture mode is satisfied, this is not repeated here for brevity. It should be noted that which capture mode the electronic device enables has no effect on the captured images; the electronic device merely uses the evaluation strategy corresponding to the specific capture mode to evaluate and recommend images.
在步骤404中,该电子设备可以保持在第一抓拍模式运行,并对新捕获的图像进行高帧率地周期性检测。该电子设备可以一直保持在第一抓拍模式运行,直到根据对新捕获的图像的检测确定满足另一抓拍模式(例如记作第二抓拍模式)的触发条件。需要说明的是,步骤404为便于描述下文实施例而设置,并不表示电子设备执行了新的操作。该电子设备在步骤403中启用第一抓拍模式之后,便可以一直保持在第一抓拍模式运行,并持续性地对新捕获的图像进行高帧率地周期性检测。还需要说明的是,电子设备启用第一抓拍模式后可以在后台运行该第一抓拍模式,而并未通过拍摄界面向用户提示,因此用户可能并不感知;也可以在前台运行第一抓拍模式,用户可以通过拍摄界面感知。本申请对此不作限定。In step 404, the electronic device may keep running in the first capture mode, and perform periodic detection of the newly captured image at a high frame rate. The electronic device may keep running in the first capture mode until it is determined that the trigger condition of another capture mode (for example, referred to as the second capture mode) is satisfied according to the detection of the newly captured image. It should be noted that step 404 is provided for the convenience of describing the following embodiments, and does not mean that the electronic device has performed a new operation. After enabling the first capture mode in step 403, the electronic device can always keep running in the first capture mode, and continuously perform periodic detection of newly captured images at a high frame rate. It should also be noted that after the first capture mode is enabled, the electronic device can run the first capture mode in the background without prompting the user through the shooting interface, so the user may not perceive it; the first capture mode can also be run in the foreground , The user can perceive through the shooting interface. This application does not limit this.
In step 405, it is determined whether the trigger condition of the second capture mode is satisfied. Since the camera keeps running, the electronic device can continuously detect newly captured images. If it is not detected that the newly captured multi-frame images satisfy the trigger condition of another capture mode (such as the second capture mode), that is, it is determined that the trigger condition of the second capture mode is not satisfied, step 404 may be executed to remain in the first capture mode while continuing the periodic high-frame-rate detection of newly captured images.
If it is detected that the newly captured multi-frame images satisfy the trigger condition of the second capture mode, step 406 may be executed to determine whether the running duration of the first capture mode exceeds a preset protection period. If the running duration of the first capture mode does not exceed the preset protection period, step 404 may be executed. If the running duration of the first capture mode exceeds the preset protection period, step 407 may be executed to enable the second capture mode, that is, the electronic device switches to the second capture mode. After enabling the second capture mode, the electronic device keeps running in the second capture mode until it determines, according to newly captured images, that the trigger condition of another capture mode (for example, denoted as a third capture mode) is satisfied, while continuing the periodic high-frame-rate detection of newly captured images. For brevity, the step of determining that the trigger condition of the third capture mode is satisfied is not shown in the figure. However, it can be understood that, when the trigger condition of the third capture mode is satisfied, the operations performed by the electronic device may be similar to those shown in the figure for the case in which the trigger condition of the second capture mode is satisfied; for brevity, details are not repeated here.
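A minimal sketch of the guarded mode switch in steps 405 to 407 is given below, assuming a monotonic clock and a single predefined protection period; the class name and the example duration are hypothetical.

```python
# Sketch of the mode-switch guard (steps 405-407) that avoids ping-pong switching.
import time

PROTECTION_PERIOD_S = 3.0  # example value; the disclosure only requires a predefined duration

class ModeManager:
    def __init__(self):
        self.current_mode = None
        self.mode_started_at = 0.0

    def enable(self, mode):
        self.current_mode = mode
        self.mode_started_at = time.monotonic()

    def maybe_switch(self, triggered_mode):
        """Switch only if another mode triggers after the protection period has elapsed."""
        if triggered_mode is None or triggered_mode == self.current_mode:
            return False                                   # stay in the current mode (step 404)
        running_for = time.monotonic() - self.mode_started_at
        if running_for <= PROTECTION_PERIOD_S:
            return False                                   # within the protection period: keep mode
        self.enable(triggered_mode)                        # step 407: enable the new mode
        return True
```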
It should be understood that the first capture mode and the second capture mode are different capture modes, and the third capture mode and the second capture mode are different capture modes, but the first capture mode and the third capture mode may be the same capture mode or different capture modes, which is not limited in this application. Regardless of whether the electronic device switches to the second capture mode, it runs in the smart capture mode and continuously detects the captured images at a high frame rate.
If the electronic device detects a photographing operation of the user, for example, the user taps the shooting control to take a photo, step 408 may be executed: in response to the user's photographing operation, a photo is taken and the image is saved. The saved image can subsequently be presented to the user through the display unit after processing such as encoding. If the electronic device does not detect a photographing operation of the user, it may continue to run in the smart capture mode and continue to periodically detect newly captured images at a high frame rate. It can be understood that the photographing operation may be performed in the first capture mode or in the second capture mode, depending on whether the electronic device has switched to the second capture mode before the photographing operation is performed. This is not limited in this application.
In step 409, based on the currently running capture mode, the corresponding evaluation strategy is used to score the multiple frames of images to be selected. Specifically, based on the user's photographing operation, the electronic device may score the N frames of images before the moment the shutter is pressed, the N frames after it, or the N frames both before and after it. The value of N may be predefined, and this application does not limit the specific value of N. For example, if N=20, the 20 frames before and the 20 frames after the moment the shutter is pressed (that is, 40 frames in total) may be scored.
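The selection of the candidate window around the shutter moment could look like the following sketch; the buffer indexing is an assumption made for illustration, and the shutter frame itself is included here for simplicity.

```python
# Sketch of selecting the candidate frames around the shutter moment in step 409.
N = 20  # predefined window size; the disclosure does not fix a specific value

def frames_to_score(buffered_frames, shutter_index, n=N, before=True, after=True):
    """Return up to n frames before and/or after the frame at shutter_index."""
    start = shutter_index - n if before else shutter_index
    end = shutter_index + n + 1 if after else shutter_index + 1
    start = max(start, 0)
    end = min(end, len(buffered_frames))
    return buffered_frames[start:end]
```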
According to the currently running capture mode, the electronic device may call the corresponding detection models to perform image recognition on the multiple frames of images and output the recognition results, and may score each frame of the images to be selected according to the detection results. Optionally, the models related to the motion capture mode and the multi-person motion capture mode may include, for example, a pose estimation model and a motion detection model. Optionally, the detection models related to the facial expression capture mode and the group photo capture mode may include, for example, a face attribute detection model. The face attribute detection model may include, for example but not limited to, a facial feature point detection model, an open/closed-eye model, and the like. This is not limited in this application.
Assuming that the currently running capture mode is the motion capture mode, the electronic device may call the pose estimation model and the motion detection model to perform image recognition on the captured images to be selected, so as to obtain the coordinate information of the pose points and the action category. The electronic device may determine an evaluation strategy according to the motion capture mode and the action category, and use the evaluation strategy to score each captured frame of the images to be selected.
Assuming that the currently running capture mode is the facial expression capture mode, the electronic device may call the face attribute detection model to perform image recognition on the captured images to be selected, so as to obtain recognition results for one or more of expression intensity, open/closed eyes, facial occlusion, and face angle. The recognition results may include an expression category and related parameters that can be used to characterize one or more of expression intensity, open/closed eyes, facial occlusion, and face angle. The electronic device may determine an evaluation strategy according to the expression capture mode and the expression category, and use the evaluation strategy to score each captured frame of the images to be selected.
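To make the per-mode evaluation concrete, the following sketch scores one candidate frame with a weighted sum of scoring parameters selected by the (capture mode, category) pair, as in the motion and expression examples above. The parameter names, weight values, and categories are placeholders chosen for illustration, not values from the disclosure.

```python
# Illustrative weighted-sum scoring of one frame to be selected.
MODE_WEIGHTS = {
    "motion": {"pose_height": 0.4, "pose_stretch": 0.4, "body_occlusion": 0.2},
    "expression": {"expression_intensity": 0.5, "eyes_open": 0.3, "face_angle": 0.2},
}
CATEGORY_WEIGHTS = {
    ("motion", "jump"): {"pose_height": 1.2, "pose_stretch": 1.0, "body_occlusion": 1.0},
    ("expression", "smile"): {"expression_intensity": 1.1, "eyes_open": 1.0, "face_angle": 1.0},
}

def score_frame(mode, category, param_values):
    """Weighted sum of scoring parameters for one candidate frame."""
    mode_w = MODE_WEIGHTS[mode]
    cat_w = CATEGORY_WEIGHTS.get((mode, category), {})
    return sum(
        param_values[p] * w * cat_w.get(p, 1.0)   # mode weight scaled by the category weight
        for p, w in mode_w.items()
    )
```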
In step 410, the capture frame image corresponding to the currently running capture mode is determined. The electronic device may determine, according to the score of each frame of the images to be selected, the capture frame image corresponding to the currently running capture mode. The electronic device may further obtain the image matching the corresponding timestamp from the cache, process the image, and present it to the user through the display unit.
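A minimal sketch of step 410 follows, assuming the candidates are available as (timestamp, score) pairs and the cache can be queried by timestamp; both interfaces are illustrative assumptions.

```python
# Sketch of step 410: pick the highest-scoring candidate and fetch the matching image.
def select_capture_frame(scored_frames, frame_cache):
    """scored_frames: iterable of (timestamp, score) pairs for the candidate frames."""
    best_ts, _ = max(scored_frames, key=lambda item: item[1])
    return frame_cache.get_by_timestamp(best_ts)   # image later encoded and shown to the user
```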
Thereafter, the electronic device may clean up the cache space occupied during scoring and release the occupied cache space. Optionally, the user may exit the smart capture mode through manual adjustment, or may exit the camera function directly. In response to the user's operation, the electronic device exits the smart capture mode.
Optionally, an electronic device that exits the smart capture mode may still remain in the photographing mode and may periodically detect images at a low frame rate. Optionally, an electronic device that exits the camera function may stop acquiring images and stop detection, and the detection models may also stop running.
It should be understood that, merely for ease of understanding, only the first capture mode and the second capture mode are shown above, but this should not constitute any limitation to this application. As the camera running time increases, the electronic device can continuously acquire newly captured images and continuously detect them. Therefore, as long as the electronic device has not exited the camera function, some or all of the foregoing steps 404 to 410 can be executed in a loop. It should be noted that, after the first capture mode is switched to the second capture mode, the second capture mode becomes the new first capture mode.
It should also be understood that FIG. 4 shows an example in which the method for capturing an image provided in an embodiment of this application is applied to a specific scenario. The steps in the figure are shown merely for ease of understanding. Not every step in the flowchart necessarily has to be performed; for example, some steps may be skipped or some steps may be combined. The execution order of the steps is not fixed and is not limited to that shown in FIG. 4; the execution order of the steps should be determined by their functions and internal logic.
FIG. 5 is a schematic flowchart of a method for capturing an image according to another embodiment of this application. In the method shown in FIG. 5, the user may not turn on the smart capture mode through a manual operation; for example, the user may set the shooting mode to the photographing mode. The electronic device may automatically turn on the smart capture mode based on periodic detection. In other words, the embodiment shown in FIG. 5 mainly describes a method for capturing an image by an electronic device in the photographing mode.
It should be understood that the method shown in FIG. 5 may be executed by an electronic device or a processor in the electronic device. In step 501, the captured images are periodically detected at a low frame rate. Specifically, the electronic device may continuously obtain multiple frames of images captured by the electronic device from the cache, and call the aforementioned detection models (for example, the face attribute detection model, the human-body frame detection model, and the scene recognition model) to perform periodic detection, so as to determine whether the trigger condition for entering a certain capture mode is satisfied. In the photographing mode, the electronic device may detect at a relatively low frame rate, such as 15 frames per second, to save power. It should be understood that step 501 may be performed continuously before entering the smart capture mode, for example before step 503, and may also continue after exiting the smart capture mode, for example after step 509. It should also be understood that the detection of the multi-frame images by calling the detection models is still performed by the electronic device; therefore, in this embodiment, for brevity, the process of calling the detection models to detect images is not specifically described again.
In step 502, it is determined that the trigger condition of the first capture mode is satisfied. Specifically, the electronic device may determine, based on the detection of the multi-frame images, that the trigger condition of a certain capture mode is satisfied, for example, the trigger condition of the first capture mode. The foregoing method 200 has listed different trigger conditions for different capture modes and has described in detail, with reference to the face attribute detection model, the human-body frame detection model, and the scene recognition model, how to determine whether the trigger condition of a certain capture mode is satisfied. For brevity, details are not repeated here.
If it is determined that the detected multi-frame images do not satisfy the trigger condition of any capture mode, the electronic device may continue running in the photographing mode, that is, continue to execute step 501. If it is determined that the detected multi-frame images satisfy the trigger condition of the first capture mode, step 503 may be executed to enter the smart capture mode and enable the first capture mode. Enabling the first capture mode means that the electronic device has switched to the smart capture mode; therefore, entering the smart capture mode and enabling the first capture mode refer to the same operation. In addition, in step 503 the electronic device may also start to periodically detect newly captured images at a high frame rate. In other words, the detection of images by the electronic device switches from a low frame rate to a high frame rate; for example, the electronic device may detect images at a frame rate of 30 frames per second. It should be understood that, before the electronic device exits the smart capture mode, the periodic high-frame-rate detection of newly captured images may be performed continuously.
In step 504, the electronic device keeps running in the first capture mode. The electronic device may keep running in the first capture mode until it determines, based on the detection of newly captured images, that the trigger condition of another capture mode (for example, denoted as a second capture mode) is satisfied. It should be noted that step 504 is provided for the convenience of describing the following embodiments and does not mean that the electronic device performs a new operation. After enabling the first capture mode in step 503, the electronic device may keep running in the first capture mode and continuously perform periodic high-frame-rate detection on newly captured images.
In step 505, it is determined whether the trigger condition of the second capture mode is satisfied. Since the camera keeps running, the electronic device can continuously detect newly captured images. If it is not detected that the newly captured images satisfy the trigger condition of another capture mode (such as the second capture mode), that is, it is determined that the trigger condition of the second capture mode is not satisfied, step 504 may be executed to remain in the first capture mode while continuing the periodic high-frame-rate detection of the images. If it is determined that the newly captured images satisfy the trigger condition of the second capture mode, whether to switch to the second capture mode may be considered.
To avoid a ping-pong effect caused by the electronic device frequently switching between multiple capture modes, a protection period may be set for each capture mode. The duration of the protection period may be a predefined value. Step 506 is executed to determine whether the running duration of the first capture mode exceeds the protection period. If the running duration of the first capture mode does not exceed the protection period, that is, the electronic device detects within the protection period that the images satisfy the trigger condition of the second capture mode, step 504 may be executed to remain in the first capture mode. The electronic device remaining in the first capture mode can still continue to periodically detect images at a high frame rate.
If the running duration of the first capture mode exceeds the protection period, that is, the electronic device detects outside the protection period that the newly captured images satisfy the trigger condition of the second capture mode, step 507 may be executed to enable the second capture mode, in other words, to switch from the first capture mode to the second capture mode. The electronic device with the second capture mode enabled can also continuously perform periodic high-frame-rate detection on newly captured images. After enabling the second capture mode, the electronic device may keep running in the second capture mode until it determines, according to newly captured images, that the trigger condition of another capture mode (for example, denoted as a third capture mode) is satisfied. For brevity, the step of determining that the trigger condition of the third capture mode is satisfied is not shown in the figure. However, it can be understood that, when the trigger condition of the third capture mode is satisfied, the operations performed by the electronic device may be similar to those shown in the figure for the case in which the trigger condition of the second capture mode is satisfied; for brevity, details are not repeated here. Regardless of whether the electronic device switches to the second capture mode, as long as it is running in the smart capture mode, the captured images can be continuously detected at a high frame rate.
In step 508, it is determined whether a photographing operation is detected within a preset period. Since the detection of newly captured images in the smart capture mode is high-frame-rate detection, the power consumption is relatively high. To reduce power consumption, the electronic device may automatically exit the smart capture mode when no photographing operation has been detected for a long time. An electronic device that exits the smart capture mode may fall back to the photographing mode, and the user may not perceive that the electronic device has exited the smart capture mode.
Specifically, if no photographing operation is detected within a preset period after the first capture mode or the second capture mode is enabled, step 509 may be executed to exit the smart capture mode and fall back to the photographing mode; step 501 and the subsequent steps may then be repeated until the user exits the camera function. The duration of the preset period may be a predefined value. For example, when the electronic device enables the first capture mode in step 503 or enables the second capture mode in step 507, timing may be started, for example by starting a timer whose running duration may be the aforementioned preset period. When no photographing operation of the user is detected within the preset period, for example when the timer expires, step 509 may be executed to exit the smart capture mode and fall back to the photographing mode. If the timer has not expired, the electronic device may keep running in the smart capture mode. If a photographing operation of the user is detected within the preset period, step 510 may be executed: in response to the user's photographing operation, a photo is taken and the image is saved.
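The inactivity timeout of steps 508 and 509 might be organized as in the following sketch, assuming a single timer that restarts whenever a capture mode is enabled; the class name and the timeout value are arbitrary examples.

```python
# Rough sketch of the inactivity timeout (steps 508-509).
import time

PRESET_TIMEOUT_S = 60.0  # example value; the disclosure only requires a predefined duration

class SmartCaptureSession:
    def __init__(self):
        self.enabled_at = None

    def on_mode_enabled(self):
        self.enabled_at = time.monotonic()        # start timing at step 503 / 507

    def on_tick(self, photo_operation_detected):
        if photo_operation_detected:
            return "take_photo_and_score"         # steps 510-512
        if self.enabled_at is not None and \
                time.monotonic() - self.enabled_at > PRESET_TIMEOUT_S:
            return "exit_smart_capture"           # step 509: fall back to the photo mode
        return "stay_in_smart_capture"
```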
In step 511, based on the currently running capture mode, the corresponding evaluation strategy is used to score the multiple frames of images to be selected. In step 512, the capture frame image corresponding to the currently running capture mode is determined. The specific process of steps 510 to 512 is the same as that of steps 408 to 410 in the foregoing embodiment; since steps 408 to 410 have been described in detail above, it is not repeated here for brevity. Thereafter, if the user performs an operation of exiting the camera, the electronic device may exit the camera function in response to the user's operation. An electronic device that exits the camera function stops acquiring images and stops detection, and the detection models may also stop running.
It should be understood that the duration of the protection period in step 506 and the duration of the preset period described in step 508 may be the same or different, which is not limited in this application. If they are the same, a single timer may be shared; if they are different, separate timers may be used. Of course, timing with a timer is only one possible implementation and should not constitute any limitation to this application.
It should be understood that, for some steps and descriptions in FIG. 5, reference may be made to the related descriptions of FIG. 4; for brevity, details are not repeated here. It should also be understood that FIG. 5 shows an example in which the method for capturing an image provided in an embodiment of this application is applied to a specific scenario. The steps in the figure are shown merely for ease of understanding. Not every step in the flowchart necessarily has to be performed; for example, some steps may be skipped, such as steps 505 to 507, or some steps may be combined, such as steps 503 and 504. The execution order of the steps is not fixed and is not limited to that shown in FIG. 5; the execution order of the steps should be determined by their functions and internal logic.
FIG. 6 is a schematic flowchart of a method for capturing an image according to yet another embodiment of this application. In the method shown in FIG. 6, the user may not turn on the smart capture mode through a manual operation; for example, the user may set the shooting mode to the video recording mode. The electronic device may automatically turn on the smart capture mode based on periodic detection. In other words, the embodiment shown in FIG. 6 mainly describes a method for capturing an image by an electronic device in the video recording mode.
It should be understood that the method shown in FIG. 6 may be executed by an electronic device or a processor in the electronic device. In step 601, the captured multi-frame images are periodically detected at a low frame rate. In step 602, it is determined that the trigger condition of the first capture mode is satisfied. If the electronic device determines, according to the detection of the captured multi-frame images, that the trigger condition of the first capture mode is satisfied, step 603 may be executed to enable the first capture mode while recording. The first capture mode may run in the background; from the perspective of the shooting interface, the electronic device is still recording video. Enabling the first capture mode means that the electronic device has enabled the smart capture mode; in other words, the video recording mode and the smart capture mode run in parallel. In addition, the electronic device starts to periodically detect newly captured images at a high frame rate; that is, the detection of images switches from a low frame rate to a high frame rate. For example, the electronic device may detect the preview images at a frame rate of 30 frames per second, and before the electronic device exits the smart capture mode, it can continuously perform periodic high-frame-rate detection on newly captured images.
It should also be understood that the detection of the multi-frame images by calling the detection models is still performed by the electronic device; therefore, in this embodiment, for brevity, the process of calling the detection models to detect images is not specifically described again. In step 604, the electronic device keeps running the first capture mode in the background. The electronic device may keep running the first capture mode in the background until it determines, based on the detection of newly captured images, that the trigger condition of another capture mode (for example, denoted as a second capture mode) is satisfied. It should be noted that step 604 is provided for the convenience of describing the following embodiments and does not mean that the electronic device performs a new operation. After enabling the first capture mode in step 603, the electronic device may keep running the first capture mode in the background and continuously perform periodic high-frame-rate detection on newly captured images.
In step 605, it is determined whether the trigger condition of the second capture mode is satisfied. Since the camera keeps running, the electronic device can continuously detect newly captured images. If it is not detected that the newly captured images satisfy the trigger condition of another capture mode (such as the second capture mode), that is, it is determined that the trigger condition of the second capture mode is not satisfied, step 604 may be executed to keep running the first capture mode in the background while continuing the periodic high-frame-rate detection of newly captured images. If it is detected that the newly captured images satisfy the trigger condition of the second capture mode, whether to switch to the second capture mode may be considered.
To avoid a ping-pong effect caused by the electronic device frequently switching between multiple capture modes, a protection period may be set for each capture mode. The duration of the protection period may be a predefined value. Step 606 is executed to determine whether the running duration of the first capture mode exceeds the protection period. If the running duration of the first capture mode does not exceed the protection period, that is, the electronic device detects within the protection period that the newly captured images satisfy the trigger condition of the second capture mode, step 604 may be executed to remain in the first capture mode; the electronic device remaining in the first capture mode can still continue to periodically detect the newly captured images at a high frame rate. If the running duration of the first capture mode exceeds the protection period, that is, the electronic device detects outside the protection period that the images satisfy the trigger condition of the second capture mode, step 607 may be executed to enable the second capture mode, in other words, to switch from the first capture mode to the second capture mode. The electronic device with the second capture mode enabled can also continuously perform periodic high-frame-rate detection on newly captured images. After enabling the second capture mode, the electronic device may keep running the second capture mode until it determines, according to newly captured images, that the trigger condition of another capture mode (for example, denoted as a third capture mode) is satisfied. For brevity, the step of determining that the trigger condition of the third capture mode is satisfied is not shown in the figure. However, it can be understood that, when the trigger condition of the third capture mode is satisfied, the operations performed by the electronic device may be similar to those shown in the figure for the case in which the trigger condition of the second capture mode is satisfied; for brevity, details are not repeated here. Regardless of whether the electronic device switches to the second capture mode, as long as the smart capture mode is still running in the background of the electronic device, the captured images can be continuously detected at a high frame rate.
In step 608, it is determined whether a photographing operation is detected within a preset period. Since the detection of newly captured images in the smart capture mode is high-frame-rate detection, the power consumption is relatively high. To reduce power consumption, the electronic device may automatically exit the smart capture mode when no photographing operation has been detected for a long time. An electronic device that exits the smart capture mode can still continue recording, and the user may not perceive that the electronic device has exited the smart capture mode. If no photographing operation is detected within the preset period after the second capture mode is enabled, step 609 may be executed to exit the smart capture mode while the video recording mode keeps running; step 601 and the subsequent steps may then be repeated until the user exits the camera function. If a photographing operation is detected within the preset period after the first capture mode or the second capture mode is enabled, step 610 may be executed: in response to the user's photographing operation, a photo is taken and the image is saved.
In step 611, based on the currently running capture mode, the corresponding evaluation strategy is used to score the multiple frames of images to be selected. In step 612, the capture frame image corresponding to the currently running capture mode is determined.
In another implementation, after the first capture mode is enabled in step 603 or the second capture mode is enabled in step 607, the electronic device may continuously perform image recognition and scoring on each frame of image, and when the score exceeds a preset threshold, recommend that frame of image to the user. When more than one frame exceeds the preset threshold, the image with the highest score may be recommended to the user. In this case, steps 610 to 612 may not be performed, and the images to be selected may refer to all the images acquired by the electronic device during the recording process.
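This alternative, threshold-based recommendation could be sketched as follows, reusing the hypothetical score_frame helper from the earlier sketch; the threshold value and the frame attribute names are assumptions made for illustration.

```python
# Sketch of threshold-based recommendation while recording: every frame is scored
# and the highest-scoring frame above the threshold is recommended.
SCORE_THRESHOLD = 0.8  # preset threshold; the concrete value is an assumption

def recommend_during_recording(frames, mode, category, score_frame):
    best_frame, best_score = None, SCORE_THRESHOLD
    for frame in frames:                      # all frames acquired while recording
        s = score_frame(mode, category, frame.param_values)
        if s > best_score:                    # keep only the highest score above the threshold
            best_frame, best_score = frame, s
    return best_frame                         # None if no frame exceeded the threshold
```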
It should be understood that the steps in FIG. 6 are similar to the steps described above with reference to FIG. 5; for the specific processes, reference may be made to the related descriptions above, and details are not repeated here for brevity. It should also be understood that FIG. 6 shows an example in which the method for capturing an image provided in an embodiment of this application is applied to a specific scenario. The steps in the figure are shown merely for ease of understanding. Not every step in the flowchart necessarily has to be performed; for example, some steps may be skipped, such as steps 605 to 607 and steps 610 to 612, or some steps may be combined, such as steps 603 and 604. The execution order of the steps is not fixed and is not limited to that shown in FIG. 6; the execution order of the steps should be determined by their functions and internal logic.
The foregoing describes the method for capturing an image provided in the embodiments of this application with reference to multiple possible scenarios. It should be understood, however, that these scenarios should not constitute any limitation to the scenarios to which this application is applicable. When the electronic device can select a specific capture mode through manual adjustment, the process provided in the embodiments of this application of scoring preview images and recommending an optimal frame based on the scoring parameters corresponding to capture modes such as the motion capture mode, the facial expression capture mode, the multi-person motion capture mode, and the group photo capture mode can also be used independently.
For example, in the motion capture mode, the images to be selected are evaluated and recommended based on the scoring parameters and mode weights corresponding to the motion capture mode, including, for example, posture height and posture stretch. Furthermore, the category weight of each scoring parameter may be determined according to the action category, so as to determine capture frame images that match different action categories. Because the capture frame image determined in this way pays more attention to the details of the action, it is more likely that the wonderful image at the moment of movement is found and recommended to the user. Therefore, the captured image better matches the shooting scene, and the capture effect is better.
Based on the technical solutions described above, by presetting multiple different capture modes and the one or more evaluation strategies corresponding to each of them, different scoring parameters and mode weights can be used to score the images to be selected according to different capture modes. For example, scoring parameters such as posture stretch, posture height, and body occlusion are introduced in the motion capture mode and the multi-person motion capture mode, and scoring parameters such as expression intensity, open/closed eyes, facial occlusion, and face angle are introduced in the facial expression capture mode and the group photo mode, so that the capture frame images recommended to the user can be selected based on the evaluation strategies corresponding to the different capture modes.
In addition, the technical solutions provided in this application may further determine the category weight of each scoring parameter based on different capture categories. For example, because different capture categories focus on different aspects, among the multiple scoring parameters corresponding to a capture mode, the same scoring parameter is configured with different category weights for different capture categories. This is conducive to obtaining an ideal capture frame image.
Especially in the motion capture mode, higher mode weights are assigned to scoring parameters that express action details, such as posture height and posture stretch. In addition, according to the different details of interest for different action categories, the same scoring parameter can be configured with different category weights for different action categories. Compared with recommending the optimal frame based on optical flow information, the solution provided in this application pays more attention to the action itself, and therefore a better motion capture effect can be obtained.
The foregoing describes in detail the method for capturing an image provided in the embodiments of this application with reference to FIG. 2 to FIG. 6. The following describes in detail the apparatus for capturing an image provided in the embodiments of this application with reference to FIG. 7.
FIG. 7 is a schematic block diagram of an apparatus 700 for capturing an image provided in an embodiment of this application. As shown in FIG. 7, the apparatus 700 may include a mode determining unit 710 and a capture frame determining unit 720.
Specifically, the mode determining unit 710 is configured to determine a first capture mode among multiple preset capture modes according to captured multi-frame images; the capture frame determining unit 720 is configured to use an evaluation strategy corresponding to the first capture mode to determine, among captured multiple frames of images to be selected, a capture frame image corresponding to the first capture mode, where the evaluation strategy is one of multiple preset evaluation strategies.
Optionally, the multiple capture modes include one or more of the following: a facial expression capture mode, a group photo capture mode, a motion capture mode, a multi-person motion capture mode, a pet capture mode, and a landscape capture mode. Optionally, each of the multiple capture modes corresponds to at least one evaluation strategy, and each evaluation strategy includes one or more scoring parameters for image scoring and a mode weight of each scoring parameter.
Optionally, the capture frame determining unit 720 is configured to calculate the score of each frame of the multiple frames of images to be selected by using the one or more scoring parameters, and the mode weight of each scoring parameter, in one of the at least one evaluation strategy among the multiple preset evaluation strategies corresponding to the first capture mode; and is configured to determine, among the multiple frames of images to be selected, the capture frame image corresponding to the first capture mode according to the multiple scores of the multiple frames of images to be selected.
Optionally, the capture frame image has the highest score among the multiple frames of images to be selected. Optionally, different evaluation strategies corresponding to different capture modes include the same scoring parameters, and the different evaluation strategies include different mode weights.
Optionally, each capture mode includes one or more capture categories, and each capture category corresponds to one evaluation strategy; among the at least one evaluation strategy corresponding to the first capture mode, each evaluation strategy includes one or more scoring parameters corresponding to the first capture mode, a mode weight of each scoring parameter, and a category weight corresponding to one capture category. Optionally, the mode determining unit 710 is further configured to determine a first capture category in the first capture mode according to the multi-frame images; the capture frame determining unit 720 is configured to calculate the score of each frame of the multiple frames of images to be selected by using the one or more scoring parameters corresponding to the first capture mode, the mode weight of each scoring parameter, and the category weight of each scoring parameter corresponding to the first capture category. Optionally, different evaluation strategies corresponding to different capture categories include the same scoring parameters, and the different evaluation strategies include different category weights.
Optionally, the capture frame determining unit 720 is further configured to call at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected, so as to output a recognition result; and is configured to determine the values of the one or more scoring parameters based on the recognition result.
Optionally, when the first capture mode is the motion capture mode or the multi-person motion capture mode, the at least one detection model includes a pose estimation model and a motion detection model. Optionally, when the first capture mode is the facial expression capture mode or the group photo capture mode, the at least one detection model includes a face attribute detection model.
Optionally, the mode determining unit 710 is further configured to perform mode detection on the captured multi-frame images based on a first frame rate, so as to determine the first capture mode among the multiple preset capture modes; the capture frame determining unit 720 is further configured to call the at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected at a second frame rate, where the first frame rate is lower than the second frame rate.
Specifically, the apparatus 700 may include units configured to execute the method performed by the electronic device in the embodiment of the method 200 in FIG. 2. The mode determining unit 710 may be configured to perform step 210 in the foregoing method 200, and the capture frame determining unit 720 may be configured to perform steps 220 to 240 in the foregoing method 200. In addition, the apparatus 700 may further include one or more detection models. In a specific implementation process, the mode determining unit 710 may call the one or more detection models for image detection, and the capture frame determining unit 720 may also call the one or more detection models for image recognition.
The apparatus 700 may also be configured to execute the methods performed by the electronic device in the embodiments in FIG. 4 to FIG. 6. Moreover, the units in the apparatus 700 and the other operations and/or functions described above are respectively intended to implement the corresponding processes of the embodiments in FIG. 4 to FIG. 6. For brevity, details are not described here.
It should be understood that the apparatus 700 for capturing an image may correspond to at least part of the electronic device in the method embodiments according to the embodiments of this application. For example, the apparatus 700 may be the electronic device, or a component in the electronic device, such as a chip or a chip system. Specifically, the functions implemented by the apparatus 700 for capturing an image may be implemented by one or more processors executing corresponding programs.
This application further provides an electronic device or an apparatus 700 therein. The electronic device or apparatus 700 may include one or more processors configured to implement the functions of the apparatus 700 for capturing an image described above. The one or more processors may, for example, include or execute the mode determining unit, the capture frame determining unit, and the one or more detection models described in the foregoing embodiments. The one or more processors may, for example, correspond to the processor 110 in the electronic device 100 shown in FIG. 1. The mode determining unit, the capture frame determining unit, and the one or more detection models may be software, hardware, or a combination thereof; the software may be executed by the processor, and the hardware may be embedded in the processor.
Optionally, the electronic device or apparatus 700 further includes one or more memories. The one or more memories are configured to store computer programs and/or data, such as images captured by the camera. The one or more memories may, for example, correspond to the memory 120 in the electronic device 100 shown in FIG. 1. Optionally, the electronic device may further include a camera, a display unit, and the like. The camera may, for example, correspond to the camera 190 in the electronic device 100 shown in FIG. 1, and the display unit may, for example, correspond to the display unit 170 in the electronic device 100 shown in FIG. 1. The processor may obtain the computer programs stored in the memory to execute the method procedures involved in the foregoing embodiments. The memory may further store the one or more preset detection models, so that the processor can obtain the one or more detection models from the memory.
This application further provides a computer storage medium. The computer storage medium stores computer instructions that, when run on an electronic device, cause the electronic device to execute the foregoing related method steps to implement the method for capturing an image in the foregoing embodiments. The computer storage medium may, for example, correspond to the memory 120 in the electronic device 100 shown in FIG. 1. The mode determining unit, the capture frame determining unit, and the one or more detection models involved in the embodiment of FIG. 7 may exist in the form of software and be stored in the computer storage medium.
This application further provides a computer program product, which may be stored in the computer storage medium. When the computer program product runs on a computer, the computer is caused to execute the foregoing related steps to implement the method for capturing an image in the foregoing embodiments.
The apparatus for capturing an image, the electronic device, the computer storage medium, the computer program product, or the chip provided in the embodiments of this application are all configured to execute the corresponding methods provided above. Therefore, for the beneficial effects that they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, and details are not repeated here.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described with reference to the embodiments disclosed in this specification can be implemented by computer software, electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of this application.
A person skilled in the art can clearly understand that, for the convenience and conciseness of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here. In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application essentially, or the part contributing to the prior art, or part of the technical solution may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。The above are only specific implementations of this application, but the protection scope of this application is not limited to this. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed in this application. Should be covered within the scope of protection of this application. Therefore, the protection scope of this application should be subject to the protection scope of the claims.
Claims (22)
- A method for capturing an image, comprising: determining a first capture mode among a plurality of preset capture modes according to captured multi-frame images; and using an evaluation strategy corresponding to the first capture mode to determine, among captured multiple frames of images to be selected, a capture frame image corresponding to the first capture mode, wherein the evaluation strategy is one of a plurality of preset evaluation strategies.
- The method of claim 1, wherein the plurality of capture modes comprise one or more of the following: a facial expression capture mode, a group photo capture mode, a sports capture mode, a multi-person sports capture mode, a pet capture mode, and a landscape capture mode.
- The method of claim 1 or 2, wherein each of the plurality of capture modes corresponds to at least one of the plurality of preset evaluation strategies, and each evaluation strategy comprises one or more scoring parameters used for image scoring and a mode weight of each scoring parameter; and the using the evaluation strategy corresponding to the first capture mode to determine, among the captured multiple frames of images to be selected, the capture frame image corresponding to the first capture mode comprises: calculating a score of each of the multiple frames of images to be selected by using the one or more scoring parameters in one of the at least one evaluation strategy corresponding to the first capture mode and the mode weight of each scoring parameter; and determining, among the multiple frames of images to be selected, the capture frame image corresponding to the first capture mode according to the multiple scores of the multiple frames of images to be selected.
- The method of claim 3, wherein the capture frame image has the highest score among the multiple frames of images to be selected.
- The method of claim 3 or 4, wherein different evaluation strategies corresponding to different capture modes comprise the same scoring parameters, and the different evaluation strategies comprise different mode weights.
- The method of any one of claims 3 to 5, wherein each capture mode comprises one or more capture categories, and each capture category corresponds to one evaluation strategy; in the at least one evaluation strategy corresponding to the first capture mode, each evaluation strategy comprises one or more scoring parameters corresponding to the first capture mode, a mode weight of each scoring parameter, and a category weight corresponding to one capture category; the determining the first capture mode among the plurality of preset capture modes further comprises: determining a first capture category in the first capture mode according to the multi-frame images; and the calculating the score of each of the multiple frames of images to be selected by using the one or more scoring parameters in one of the at least one evaluation strategy corresponding to the first capture mode and the mode weight of each scoring parameter comprises: calculating the score of each of the multiple frames of images to be selected by using the one or more scoring parameters corresponding to the first capture mode, the mode weight of each scoring parameter, and the category weight of each scoring parameter corresponding to the first capture category.
- The method of claim 6, wherein different evaluation strategies corresponding to different capture categories comprise the same scoring parameters, and the different evaluation strategies comprise different category weights.
- The method of any one of claims 3 to 7, further comprising: calling at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected, to output a recognition result; and determining values of the one or more scoring parameters based on the recognition result.
- The method of claim 8, wherein when the first capture mode is the sports capture mode or the multi-person sports capture mode, the at least one detection model comprises a pose estimation model and an action detection model; or when the first capture mode is the facial expression capture mode or the group photo capture mode, the at least one detection model comprises a face attribute detection model.
- The method of claim 8 or 9, wherein the determining the first capture mode among the plurality of preset capture modes according to the captured multi-frame images comprises: performing mode detection on the captured multi-frame images based on a first frame rate, to determine the first capture mode among the plurality of preset capture modes; and the calling the at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected comprises: calling the at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected at a second frame rate, wherein the first frame rate is less than the second frame rate.
- A device for capturing an image, comprising: a mode determining unit, configured to determine a first capture mode among a plurality of preset capture modes according to captured multi-frame images; and a capture frame determining unit, configured to use an evaluation strategy corresponding to the first capture mode to determine, among captured multiple frames of images to be selected, a capture frame image corresponding to the first capture mode, wherein the evaluation strategy is one of a plurality of preset evaluation strategies.
- The device of claim 11, wherein the plurality of capture modes comprise one or more of the following: a facial expression capture mode, a group photo capture mode, a sports capture mode, a multi-person sports capture mode, a pet capture mode, and a landscape capture mode.
- The device of claim 11 or 12, wherein each of the plurality of capture modes corresponds to at least one of the plurality of preset evaluation strategies, and each evaluation strategy comprises one or more scoring parameters used for image scoring and a mode weight of each scoring parameter; and the capture frame determining unit is specifically configured to calculate a score of each of the multiple frames of images to be selected by using the one or more scoring parameters in one of the at least one evaluation strategy corresponding to the first capture mode and the mode weight of each scoring parameter, and to determine, among the multiple frames of images to be selected, the capture frame image corresponding to the first capture mode according to the multiple scores of the multiple frames of images to be selected.
- The device of claim 13, wherein the capture frame image has the highest score among the multiple frames of images to be selected.
- The device of claim 13 or 14, wherein different evaluation strategies corresponding to different capture modes comprise the same scoring parameters, and the different evaluation strategies comprise different mode weights.
- The device of any one of claims 13 to 15, wherein each capture mode comprises one or more capture categories, and each capture category corresponds to one evaluation strategy; in the at least one evaluation strategy corresponding to the first capture mode, each evaluation strategy comprises one or more scoring parameters corresponding to the first capture mode, a mode weight of each scoring parameter, and a category weight corresponding to one capture category; the mode determining unit is further configured to determine a first capture category in the first capture mode according to the multi-frame images; and the capture frame determining unit is further configured to calculate the score of each of the multiple frames of images to be selected by using the one or more scoring parameters corresponding to the first capture mode, the mode weight of each scoring parameter, and the category weight of each scoring parameter corresponding to the first capture category.
- The device of claim 16, wherein different evaluation strategies corresponding to different capture categories comprise the same scoring parameters, and different evaluation strategies comprise different category weights.
- The device of any one of claims 13 to 17, further comprising one or more detection models, wherein the capture frame determining unit is further configured to call at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected, to output a recognition result, and to determine values of the one or more scoring parameters based on the recognition result.
- The device of claim 18, wherein when the first capture mode is the sports capture mode or the multi-person sports capture mode, the at least one detection model comprises a pose estimation model and an action detection model; or when the first capture mode is the facial expression capture mode or the group photo capture mode, the at least one detection model comprises a face attribute detection model.
- The device of claim 18 or 19, wherein the mode determining unit is specifically configured to perform mode detection on the captured multi-frame images based on a first frame rate, to determine the first capture mode among the plurality of preset capture modes; and the capture frame determining unit is specifically configured to call the at least one detection model corresponding to the first capture mode to perform image recognition on the multiple frames of images to be selected at a second frame rate, wherein the first frame rate is less than the second frame rate.
- A device for capturing an image, comprising a processor, wherein the processor is configured to call and run a computer program from a memory, to execute the method of any one of claims 1 to 10.
- A computer-readable storage medium, comprising a computer program, which, when run on an electronic device or a processor, causes the electronic device or the processor to execute the method of any one of claims 1 to 10.
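For illustration only, the following is a minimal Python sketch of the weighted scoring recited in claims 3 to 7: each candidate frame receives a score that is a weighted sum of scoring parameters, where modes share parameters but differ in mode weights, and an optional capture category contributes category weights. The mode names, parameter names, weight values, and the dictionary structure of a candidate frame are assumptions chosen for this example and are not taken from the patent.

```python
# Illustrative, non-limiting sketch of the weighted scoring of claims 3-7.
# All names and numbers below are hypothetical.

# Per-mode evaluation strategies: same scoring parameters, different mode weights
# (consistent with claim 5).
MODE_WEIGHTS = {
    "sports": {"action_peak": 0.5, "pose_stretch": 0.3, "sharpness": 0.2},
    "expression": {"action_peak": 0.1, "pose_stretch": 0.1, "sharpness": 0.8},
}

# Per-category weights refine a mode (claim 6), e.g. a hypothetical "jump" category.
CATEGORY_WEIGHTS = {
    ("sports", "jump"): {"action_peak": 1.2, "pose_stretch": 1.0, "sharpness": 0.8},
}

def frame_score(param_values, mode, category=None):
    """Weighted sum over scoring parameters for one frame to be selected."""
    mode_w = MODE_WEIGHTS[mode]
    cat_w = CATEGORY_WEIGHTS.get((mode, category), {})
    return sum(
        value * mode_w[name] * cat_w.get(name, 1.0)
        for name, value in param_values.items()
        if name in mode_w
    )

def select_capture_frame(candidates, mode, category=None):
    """Return the candidate with the highest score (claim 4).

    Each candidate is assumed to be a dict with a "params" entry mapping
    scoring-parameter names to values produced by the detection models.
    """
    return max(candidates, key=lambda c: frame_score(c["params"], mode, category))
```

Because every strategy uses the same parameter names and only the weights change, switching between modes or categories only swaps weight tables, which is one plausible reading of claims 5 and 7.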
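Similarly, the sketch below illustrates one possible arrangement of the two-frame-rate flow of claims 8 to 10: mode detection runs on preview frames at a lower first frame rate, and the per-mode detection models then run on candidate frames at a higher second frame rate before scoring. The frame-rate values, the model names, and the `sample`, `detect_mode`, and `run_models` callables are assumptions for this example, not an implementation disclosed by the patent; it reuses `select_capture_frame` from the scoring sketch above.

```python
# Illustrative, non-limiting sketch of the two-frame-rate pipeline of claims 8-10.

MODE_DETECTION_FPS = 5    # first (lower) frame rate: coarse capture-mode detection
RECOGNITION_FPS = 30      # second (higher) frame rate: per-frame image recognition

# Detection models invoked per capture mode (claim 9).
MODELS_PER_MODE = {
    "sports": ("pose_estimation", "action_detection"),
    "multi_person_sports": ("pose_estimation", "action_detection"),
    "expression": ("face_attribute",),
    "group_photo": ("face_attribute",),
}

def capture_pipeline(sample, detect_mode, run_models):
    """Detect the capture mode at a low rate, then score candidates at a high rate.

    sample(fps) is assumed to return frames taken from the preview stream at the
    given frame rate; detect_mode(frames) returns (mode, category);
    run_models(frame, model_names) returns scoring-parameter values for the frame.
    """
    # Mode (and category) detection on sparsely sampled frames (first frame rate).
    mode, category = detect_mode(sample(MODE_DETECTION_FPS))

    # Image recognition with the mode's detection models at the second frame rate.
    candidates = [
        {"frame": frame, "params": run_models(frame, MODELS_PER_MODE[mode])}
        for frame in sample(RECOGNITION_FPS)
    ]

    # Weighted scoring and selection, as in the previous sketch.
    return select_capture_frame(candidates, mode, category)
```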
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/104674 WO2021042364A1 (en) | 2019-09-06 | 2019-09-06 | Method and device for taking picture |
CN201980012490.8A CN112771612B (en) | 2019-09-06 | 2019-09-06 | Method and device for shooting image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/104674 WO2021042364A1 (en) | 2019-09-06 | 2019-09-06 | Method and device for taking picture |
Publications (1)
Publication Number | Publication Date | Title |
---|---|---|
WO2021042364A1 (en) | 2021-03-11 | Method and device for taking picture |
Family
ID=74852969
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/104674 WO2021042364A1 (en) | 2019-09-06 | 2019-09-06 | Method and device for taking picture |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112771612B (en) |
WO (1) | WO2021042364A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113313009A (en) * | 2021-05-26 | 2021-08-27 | Oppo广东移动通信有限公司 | Method, device and terminal for continuously shooting output image and readable storage medium |
CN113326775B (en) * | 2021-05-31 | 2023-12-29 | Oppo广东移动通信有限公司 | Image processing method and device, terminal and readable storage medium |
CN113873144B (en) * | 2021-08-25 | 2023-03-24 | 浙江大华技术股份有限公司 | Image capturing method, image capturing apparatus, and computer-readable storage medium |
CN117692791B (en) * | 2023-07-27 | 2024-10-18 | 荣耀终端有限公司 | Image capturing method, terminal, storage medium and program product |
CN117692792A (en) * | 2023-07-28 | 2024-03-12 | 荣耀终端有限公司 | Image capturing method, terminal, storage medium and program product |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3557481B2 (en) * | 1996-08-28 | 2004-08-25 | カシオ計算機株式会社 | Color gradation display device |
JP2015187641A (en) * | 2014-03-26 | 2015-10-29 | 三星ディスプレイ株式會社Samsung Display Co.,Ltd. | Display device and method for driving display device |
CN106358036B (en) * | 2016-08-31 | 2018-05-08 | 杭州当虹科技有限公司 | Method for watching a virtual reality video from a default viewing angle |
CN107295236A (en) * | 2017-08-11 | 2017-10-24 | 深圳市唯特视科技有限公司 | Snapshot difference imaging method based on a time-of-flight sensor |
CN108234870B (en) * | 2017-12-27 | 2019-10-11 | Oppo广东移动通信有限公司 | Image processing method, device, terminal and storage medium |
CN108198177A (en) * | 2017-12-29 | 2018-06-22 | 广东欧珀移动通信有限公司 | Image acquiring method, device, terminal and storage medium |
CN108419019A (en) * | 2018-05-08 | 2018-08-17 | Oppo广东移动通信有限公司 | Photographing reminder method and device, storage medium and mobile terminal |
- 2019-09-06: WO application PCT/CN2019/104674 (WO2021042364A1), active, application filing
- 2019-09-06: CN application CN201980012490.8A (CN112771612B), active, granted
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030063322A1 (en) * | 2001-10-01 | 2003-04-03 | Ayumi Itoh | Image taking apparatus |
CN106165017A (en) * | 2014-02-07 | 2016-11-23 | 高通科技公司 | Instant scene recognition allowing scene-dependent image modification before image recording or display |
CN105635567A (en) * | 2015-12-24 | 2016-06-01 | 小米科技有限责任公司 | Shooting method and device |
CN106603917A (en) * | 2016-12-16 | 2017-04-26 | 努比亚技术有限公司 | Shooting device and method |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113239220A (en) * | 2021-05-26 | 2021-08-10 | Oppo广东移动通信有限公司 | Image recommendation method and device, terminal and readable storage medium |
CN115802147A (en) * | 2021-09-07 | 2023-03-14 | 荣耀终端有限公司 | Method for snapping image in video and electronic equipment |
WO2023065885A1 (en) * | 2021-10-22 | 2023-04-27 | 荣耀终端有限公司 | Video processing method and electronic device |
US12114061B2 (en) | 2021-10-22 | 2024-10-08 | Honor Device Co., Ltd. | Video processing method and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN112771612A (en) | 2021-05-07 |
CN112771612B (en) | 2022-04-05 |
Similar Documents
Publication | Title |
---|---|
WO2021042364A1 (en) | Method and device for taking picture | |
US9288388B2 (en) | Method and portable terminal for correcting gaze direction of user in image | |
US11977981B2 (en) | Device for automatically capturing photo or video about specific moment, and operation method thereof | |
CN108513069B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN110677592B (en) | Subject focusing method and device, computer equipment and storage medium | |
CN107835359A (en) | Photographing trigger method for a mobile terminal, mobile terminal and storage device | |
WO2024021742A1 (en) | Fixation point estimation method and related device | |
US20220329729A1 (en) | Photographing method, storage medium and electronic device | |
EP4236300A1 (en) | Slow motion video recording method and device | |
CN108574803B (en) | Image selection method and device, storage medium and electronic equipment | |
WO2023045626A1 (en) | Image acquisition method and apparatus, terminal, computer-readable storage medium and computer program product | |
CN111277751A (en) | Photographing method and device, storage medium and electronic equipment | |
US20130308829A1 (en) | Still image extraction apparatus | |
CN117201930B (en) | Photographing method and electronic equipment | |
CN108259767B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN114697530B (en) | Photographing method and device for intelligent view finding recommendation | |
US9323981B2 (en) | Face component extraction apparatus, face component extraction method and recording medium in which program for face component extraction method is stored | |
CN108495038B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN115423752B (en) | Image processing method, electronic equipment and readable storage medium | |
CN117132515A (en) | Image processing method and electronic equipment | |
CN117119284A (en) | Shooting method | |
WO2022206605A1 (en) | Method for determining target object, and photographing method and device | |
WO2022247118A1 (en) | Pushing method, pushing apparatus and electronic device | |
WO2021233051A1 (en) | Interference prompting method and device | |
CN114399622A (en) | Image processing method and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19944350; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19944350; Country of ref document: EP; Kind code of ref document: A1 |