CN112532859B - Video acquisition method and electronic equipment - Google Patents

Video acquisition method and electronic equipment

Info

Publication number
CN112532859B
Authority
CN
China
Prior art keywords
shooting
scene
pictures
electronic device
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910883504.5A
Other languages
Chinese (zh)
Other versions
CN112532859A (en)
Inventor
葛璐
康凤霞
丁陈陈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910883504.5A priority Critical patent/CN112532859B/en
Priority to PCT/CN2020/115109 priority patent/WO2021052292A1/en
Publication of CN112532859A publication Critical patent/CN112532859A/en
Application granted granted Critical
Publication of CN112532859B publication Critical patent/CN112532859B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682 Vibration or motion blur correction
    • H04N23/683 Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

A video acquisition method and an electronic device are provided. In the method, in the time-lapse photography mode, the electronic device can recognize the current shooting scene, where the current shooting scene includes a backlight scene, a dim light scene, or a normal light scene. The electronic device can adjust the shooting parameters and the shooting mode of the camera according to the identified shooting scene, and capture pictures using the adjusted shooting parameters and shooting mode. The electronic device can also select a video post-processing algorithm according to the identified shooting scene, and process the captured pictures or video using that algorithm. The processed pictures can be encoded to obtain a video file. By implementing the technical solution provided in the embodiments of this application, the quality of video captured in the time-lapse photography mode can be improved.

Description

Video acquisition method and electronic equipment
Technical Field
This application relates to the field of electronic technologies, and in particular, to a video acquisition method and an electronic device.
Background
At present, the camera application is one of the important applications on electronic devices such as mobile phones and tablet computers. A user can record and share pictures and videos through the camera application on an electronic device, and users' demands on camera applications and photographic effects keep increasing.
With the development of camera-related technologies, time-lapse photography has become one of the important modes of camera applications on electronic devices. In the time-lapse photography mode, the electronic device can capture a group of pictures through the camera, or capture a video through the camera and extract frames from it to obtain a group of pictures. The electronic device then adjusts the playback frame rate of the group of pictures captured over a long recording time to obtain a video file whose playback duration is shorter than the recording duration. When the video file is played, a process in which an object changes slowly over a long recording time is compressed into a short playback time, presenting striking scenes that are normally imperceptible to the naked eye.
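The time compression described above is simple arithmetic: the number of captured frames divided by the playback frame rate gives the playback duration. The sketch below uses hypothetical numbers, not values from this patent.

```python
def timelapse_playback_seconds(recording_s: float, capture_interval_s: float,
                               playback_fps: float) -> float:
    """Playback duration = number of captured frames / playback frame rate."""
    frames = recording_s / capture_interval_s
    return frames / playback_fps

# One frame every 0.5 s for an hour, played back at 24 fps:
# 7200 frames / 24 fps = 300 s, i.e. an hour compressed into 5 minutes.
print(timelapse_playback_seconds(3600, 0.5, 24))  # 300.0
```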
In the existing time-lapse photography mode, a user can manually adjust some shooting parameters on the user interface of the camera application to shoot high-quality videos in shooting scenes of different brightness. For example, in a scene with low light intensity, the user needs to manually lengthen the exposure time and adjust shooting parameters such as the light sensitivity (ISO) to improve the quality of video captured in a dark scene.
However, manually adjusting the shooting parameters is cumbersome, and for users without professional photography experience it is difficult, which reduces the convenience of operation.
Disclosure of Invention
Embodiments of this application provide a video acquisition method and an electronic device. In the time-lapse photography mode, the electronic device can adjust the shooting parameters, the shooting mode, and the video post-processing algorithm according to different shooting scenes, which can improve the quality of video captured in the time-lapse photography mode.
In a first aspect, an embodiment of this application provides a video acquisition method. The method includes: the electronic device displays a camera application interface, where a time-lapse photography mode icon is included on the camera application interface; in response to a first user operation acting on the time-lapse photography mode icon, the electronic device captures at least one picture and identifies a first shooting scene according to the at least one picture, where the first shooting scene includes a backlight scene, a normal light scene, or a dim light scene; the electronic device determines a first shooting parameter according to the first shooting scene, where the first shooting parameter is related to exposure; and the electronic device captures multiple pictures according to the first shooting parameter and encodes the multiple pictures to obtain a video file, where the frame interval time of the video file when played is less than or equal to the frame interval time at which the multiple pictures were captured.
By implementing the method provided in the first aspect, the electronic device can adjust the shooting parameters of the camera according to the identified shooting scene and capture pictures using the adjusted shooting parameters to form a time-lapse video file. Using shooting parameters matched to each shooting scene can improve the quality of video captured in the time-lapse photography mode.
With reference to the first aspect, in some embodiments, after the electronic device identifies the first shooting scene according to the at least one picture, the method further includes: the electronic device determines a first shooting mode according to the first shooting scene, where the first shooting mode includes a video recording mode or a photographing mode. The electronic device capturing multiple pictures according to the first shooting parameter includes: the electronic device captures multiple pictures according to the first shooting parameter and the first shooting mode.
In this embodiment of this application, the electronic device can also adjust the shooting mode according to the identified shooting scene and capture pictures using the adjusted shooting mode to form a time-lapse video file. Using shooting parameters and a shooting mode matched to each shooting scene can further improve the quality of video captured in the time-lapse photography mode.
The following describes the process of forming a time-lapse video file in the video recording mode and the photographing mode, respectively.
(1) The first shooting mode is a video recording mode
The frame interval time at which the multiple pictures are captured is a first time interval. The electronic device encoding the multiple pictures to obtain a video file includes: the electronic device extracts pictures from the multiple pictures to obtain extracted frames, and encodes the extracted frames with a set first frame interval time to obtain a video file; that is, the frame interval time of the video file when played is the first frame interval time. The first frame interval time of the video file is less than or equal to the first time interval.
In the video recording mode, the frame interval time of the resulting time-lapse video file when played (i.e., the first frame interval time) is less than or equal to the frame interval time at which pictures in the captured video were captured, and is also less than the frame interval time at which the extracted frames were captured. For example, the frame interval time of the time-lapse video file when played is 1/24 second, the frame interval time at which pictures in the captured video were captured is 1/24 second, and the frame interval time at which the extracted frames were captured is half a second.
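The frame-extraction step in the video recording mode can be sketched as simple index decimation over the recorded frames. The capture and extraction intervals below are illustrative, not values prescribed by the patent.

```python
def extract_frames(frames, capture_interval_s, extraction_interval_s):
    """Keep every n-th frame so that kept frames are ~extraction_interval_s apart."""
    step = max(1, round(extraction_interval_s / capture_interval_s))
    return frames[::step]

# A 10-second recording at 24 fps (240 frames), extracting one frame
# every 0.5 s, keeps every 12th frame: 20 frames in total.
kept = extract_frames(list(range(240)), 1 / 24, 0.5)
```

Encoding the 20 kept frames at a playback interval of 1/24 s then compresses the 10-second recording into under one second of playback.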
(2) The first shooting mode is the photographing mode
The frame interval time at which the multiple pictures are captured is a second time interval. The second time interval is greater than the first time interval; it is determined by the exposure time, and the first shooting parameter includes the exposure time. The electronic device encoding the multiple pictures to obtain a video file includes: the electronic device encodes the pictures with a set second frame interval time to obtain the video file; that is, the frame interval time of the video file when played is the second frame interval time. The second frame interval time is less than the second time interval.
In an embodiment of this application, when the first shooting scene includes the backlight scene or the normal light scene, the first shooting mode is the video recording mode; when the first shooting scene includes the dim light scene, the first shooting mode is the photographing mode.
In the photographing mode, the camera captures one picture at fixed time intervals, and these intervals can provide sufficient exposure time for each picture to improve its brightness. Therefore, when video is captured in the time-lapse photography mode in a dark scene, the higher brightness of each frame yields a higher-quality video.
In the photographing mode, the electronic device starts capturing the next picture only after detecting that the capture of the current picture is complete. Specifically, the electronic device capturing multiple pictures according to the first shooting parameter and the first shooting mode includes: for each of the multiple pictures, the electronic device detects whether the picture has been captured within the frame extraction interval; if so, the electronic device captures the next picture; if not, the electronic device captures the next picture only after the current picture has been captured. This avoids attempting to capture two frames within one frame extraction interval, reducing frame extraction failures that occur when the frame extraction interval is shorter than the single-frame processing time.
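The pacing rule just described can be sketched as a capture loop that sleeps until the next scheduled frame, but never requests a new frame before the previous capture has returned. `capture_frame` is a hypothetical blocking callable standing in for the camera driver; it is not an API from the patent.

```python
import time

def paced_capture(capture_frame, interval_s, n_frames):
    """Capture n_frames pictures, aiming for one every interval_s seconds,
    but never requesting a new frame before the previous capture returned
    (capture_frame blocks until its frame is fully acquired)."""
    frames = []
    next_due = time.monotonic()
    for _ in range(n_frames):
        delay = next_due - time.monotonic()
        if delay > 0:        # previous frame finished early: wait out the interval
            time.sleep(delay)
        # If the previous frame overran the interval, fall through and
        # capture immediately instead of failing the extraction.
        frames.append(capture_frame())
        next_due += interval_s
    return frames
```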
The electronic device can identify the shooting scene from a single picture, or from multiple pictures.
In some embodiments of this application, the electronic device can assign timestamps to the multiple pictures in sequence, so that the video file composed of the multiple pictures is played at the set frame interval. For example, the timestamp unit is 1/8000 second, so 1 second corresponds to 8000 timestamp units. Suppose the set frame rate has the electronic device play 20 pictures per second. The time difference between two adjacent pictures (i.e., the frame interval time when playing) is then a timestamp increment of 8000 / 20 = 400 timestamp units, i.e., an interval of 1/20 second between two pictures. The electronic device can assign timestamps to the sequentially received pictures in these units: it receives the first picture and sets its timestamp to 0; it receives the second picture and sets its timestamp to 400 timestamp units; and so on, to obtain the video file.
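The timestamp arithmetic above can be sketched directly: with a clock of 1/8000 second per unit and a playback rate of 20 pictures per second, each picture is stamped 8000 / 20 = 400 units after the previous one.

```python
TICKS_PER_SECOND = 8000  # timestamp unit of 1/8000 s, as in the example above

def assign_timestamps(n_pictures, playback_fps):
    """Timestamps, in 1/8000 s units, for n_pictures played at playback_fps."""
    increment = TICKS_PER_SECOND // playback_fps  # 400 units at 20 fps
    return [i * increment for i in range(n_pictures)]

print(assign_timestamps(4, 20))  # [0, 400, 800, 1200]
```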
In this embodiment of this application, the first shooting parameter related to exposure may include the shutter, exposure time, aperture value, exposure value, ISO, and frame interval. The exposure can represent the amount of light received by the image sensor in the camera within the exposure time. Among these shooting parameters, for the shutter, exposure time, aperture value, exposure value, and ISO, the electronic device can use algorithms implementing auto focus (AF), auto exposure (AE), and auto white balance (AWB), collectively 3A, to adjust these shooting parameters automatically.
With reference to the first aspect, in some embodiments, after the electronic device determines the first shooting parameter according to the first shooting scene, the method further includes: the electronic device displays a first control on the time-lapse photography interface, where the first control is used to adjust the second time interval within a value range greater than or equal to the exposure time, and the first shooting parameter includes the second time interval.
Specifically, when the user operation acting on the time-lapse photography mode icon is detected, the camera application interface displayed by the electronic device may be an interface for time-lapse photography and may include a shooting scene prompt, such as a dim light scene prompt. The time-lapse photography interface may include the first control, i.e., a frame extraction interval adjustment control.
With reference to the first aspect, in some embodiments, after the electronic device captures a picture and identifies the first shooting scene according to the captured picture, the method further includes: the electronic device determines a first video post-processing algorithm according to the first shooting scene, where the first video post-processing algorithm corresponds to the first shooting scene. Before the electronic device encodes the multiple pictures to obtain the video file, the method further includes: the electronic device processes the multiple pictures using the first video post-processing algorithm to obtain processed pictures. The electronic device encoding the multiple pictures to obtain a video file includes: the electronic device encodes the processed pictures to obtain a video file.
In this embodiment of this application, the corresponding video post-processing algorithms differ for the normal light scene, the backlight scene, and the dim light scene.
(1) In a normal light scene, the image processing module can use a video post-processing algorithm to perform processing such as anti-shake and noise reduction on the captured pictures or video.
(2) In a dim light scene, the image processing module can perform processing such as anti-shake and noise reduction, and can also perform dim light optimization through a dim light optimization algorithm to improve the quality of pictures captured in a dim light scene.
(3) In a backlight scene, the image processing module can perform processing such as anti-shake and noise reduction, and can also apply an HDR algorithm. The HDR algorithm can combine multiple captured pictures with different exposure times into one picture. Pictures with different exposure times differ in brightness and provide different details, so merging them improves picture quality in a backlight scene.
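The patent does not disclose the HDR algorithm itself. The following is only a minimal stand-in for the merge step, a crude exposure-fusion heuristic rather than the actual implementation: it blends differently exposed frames, weighting each pixel by how close it is to mid-gray, so short exposures contribute highlight detail and long exposures contribute shadow detail.

```python
def fuse_exposures(frames):
    """frames: list of grayscale pictures (2-D lists of values in 0..255),
    one per exposure time. Well-exposed pixels (near mid-gray) receive
    the largest weight in the per-pixel weighted average."""
    h, w = len(frames[0]), len(frames[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for frame in frames:
                v = frame[y][x]
                # Weight peaks at 127.5, falls to ~0 at 0 and 255.
                weight = 1.0 - abs(v - 127.5) / 127.5 + 1e-6
                num += weight * v
                den += weight
            fused[y][x] = num / den
    return fused

# Merging an underexposed and an overexposed 1x1 picture lands between
# the two values, pulled toward the better-exposed one.
merged = fuse_exposures([[[60]], [[220]]])
```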
With reference to the first aspect, in some embodiments, the camera application interface further includes a shooting control, and the electronic device capturing multiple pictures according to the first shooting parameter and encoding them to obtain a video file includes: in response to a second user operation acting on the shooting control, the electronic device captures multiple pictures according to the first shooting parameter and encodes the multiple pictures to obtain a video file.
In some embodiments of this application, the pictures in the video file may also include pictures captured by the electronic device before the second user operation is detected.
In some embodiments of this application, the electronic device may also, in response to the first user operation, capture multiple pictures according to the first shooting parameter and encode them to obtain the video file. When detecting the first user operation, the electronic device may determine the first shooting parameter and the first shooting mode according to the identified first shooting scene, then capture multiple pictures according to the first shooting parameter and the first shooting mode, and encode the multiple pictures to obtain the video file.
In an embodiment of this application, after the electronic device determines the first shooting parameter and the first shooting mode according to the first shooting scene, the method further includes: the electronic device displays a preview of the captured pictures according to the first shooting parameter and the first shooting mode.
In an embodiment of this application, the camera application may include a mode loading module, a shooting control module, and a preview display module. The HAL layer may contain modules related to the time-lapse photography mode of the camera: a capability enabling module, an image acquisition module, a scene recognition module, and an image processing module.
The method provided in the first aspect of the embodiments of this application may be implemented as follows. First, the camera application loads the time-lapse photography mode in response to the user's operation of opening the camera application. After the time-lapse photography mode finishes loading, the user can start it by touching the time-lapse photography mode icon. Then, the HAL layer can recognize the shooting scene and report it to the shooting control module in the application layer. The shooting control module can adjust the shooting parameters and the shooting mode in the time-lapse photography mode and send them back to the image acquisition module in the HAL layer. Finally, the image acquisition module can capture pictures or video according to the adjusted shooting parameters and shooting mode. The image processing module can also select a video post-processing algorithm according to the identified shooting scene and process the captured pictures or video with it; the processed video data can be encoded by the encoding module to obtain a video file, and the preview display module can also obtain the processed video data for preview display.
In the embodiment of the application, after the mode loading module finishes loading the modes, the electronic device can display the icon corresponding to each mode.
An embodiment of this application provides a process in which the scene recognition module recognizes the shooting scene from a picture or video. The scene recognition module can obtain the exposure parameters of a captured picture from the picture or video and determine the brightness difference between the bright and dark regions of the picture. Specifically, the scene recognition module can determine the shooting scene using the exposure parameters. For example, the exposure parameter is an exposure value (EV), and the camera application can issue a notification to the HAL layer to detect the exposure parameter. The scene recognition module can calculate the exposure value of the picture and the brightness difference between its bright and dark regions. When the exposure value is greater than a first threshold and the brightness difference is less than a second threshold, the scene recognition module can determine that the shooting scene is a dim light scene. When the exposure value is less than the first threshold and the brightness difference is greater than the second threshold, the scene recognition module can determine that the shooting scene is a backlight scene. When the exposure value is less than the first threshold and the brightness difference is less than the second threshold, the scene recognition module can determine that the shooting scene is a normal light scene. Optionally, the scene recognition module may apply the same principle to multiple pictures to determine the shooting scene more accurately.
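The threshold rule above can be sketched as a small classifier. The thresholds `t1` and `t2` are placeholders; the patent does not disclose concrete threshold values.

```python
def classify_scene(exposure_value, brightness_diff, t1, t2):
    """Classify a shooting scene from the picture's exposure value and the
    brightness difference between its bright and dark regions."""
    if exposure_value > t1 and brightness_diff < t2:
        return "dim light"
    if exposure_value < t1 and brightness_diff > t2:
        return "backlight"
    if exposure_value < t1 and brightness_diff < t2:
        return "normal light"
    return "undetermined"  # combinations the description does not cover

# With illustrative thresholds t1=10, t2=50:
print(classify_scene(12, 20, 10, 50))  # dim light
print(classify_scene(6, 80, 10, 50))   # backlight
print(classify_scene(6, 20, 10, 50))   # normal light
```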
With reference to the first aspect, in some embodiments, after the electronic device captures a picture and identifies the first shooting scene according to the picture, the method further includes: the electronic device recognizes, according to the captured pictures, that the shooting scene has changed from the first shooting scene to a second shooting scene; the electronic device determines a second shooting parameter according to the second shooting scene; and the electronic device captures multiple pictures according to the second shooting parameter and encodes them to obtain the video file.
In the above video acquisition method, if a change in the shooting scene is detected during capture, the shooting parameters can be readjusted to improve the quality of the captured pictures and hence of the resulting video.
Optionally, the electronic device may further determine a second shooting mode according to the second shooting scene, and capture multiple pictures according to the second shooting parameter and the second shooting mode.
In a second aspect, an embodiment of this application provides an electronic device, including: one or more processors, a memory, and a display screen. The memory is coupled to the one or more processors and is configured to store computer program code; the computer program code includes computer instructions; and the one or more processors invoke the computer instructions to cause the electronic device to perform: displaying a camera application interface, where the camera application interface includes a time-lapse photography mode icon; in response to a first user operation acting on the time-lapse photography mode icon, capturing at least one picture and identifying a first shooting scene according to the at least one picture, where the first shooting scene includes a backlight scene, a normal light scene, or a dim light scene; determining a first shooting parameter according to the first shooting scene, where the first shooting parameter is related to exposure; and capturing multiple pictures according to the first shooting parameter and encoding the multiple pictures to obtain a video file, where the frame interval time of the video file when played is less than or equal to the frame interval time at which the multiple pictures were captured.
The electronic device provided in the second aspect can adjust the shooting parameters of the camera according to the identified shooting scene and capture pictures using the adjusted shooting parameters to form a time-lapse video file. Using shooting parameters matched to each shooting scene can improve the quality of video captured in the time-lapse photography mode.
With reference to the second aspect, in some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: determining a first shooting mode according to the first shooting scene, where the first shooting mode includes a video recording mode or a photographing mode; the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: capturing multiple pictures according to the first shooting parameter and the first shooting mode.
With reference to the second aspect, in some embodiments, when the first shooting mode is the video recording mode, the frame interval time at which the multiple pictures are captured is a first time interval; the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: extracting pictures from the multiple pictures to obtain extracted frames, and encoding the extracted frames with a set first frame interval time to obtain a video file, where the first frame interval time of the video file is less than or equal to the first time interval.
With reference to the second aspect, in some embodiments, the first shooting parameter includes an exposure time, and when the first shooting mode is the photographing mode, the frame interval time at which the multiple pictures are captured is a second time interval; the second time interval is greater than the first time interval and is determined by the exposure time; the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: encoding the video file with a set second frame interval time, where the second frame interval time is less than the second time interval.
With reference to the second aspect, in some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: displaying a first control on the time-lapse photography interface, where the first control is used to adjust the second time interval within a value range greater than or equal to the exposure time, and the first shooting parameter includes the second time interval.
With reference to the second aspect, in some embodiments, when the first shooting scene includes the backlight scene or the normal light scene, the first shooting mode is the video recording mode; when the first shooting scene includes the dim light scene, the first shooting mode is the photographing mode.
In combination with the second aspect, in some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: determining a first video post-processing algorithm according to the first shooting scene, wherein the first video post-processing algorithm corresponds to the first shooting scene; processing the multiple pictures by using the first video post-processing algorithm to obtain processed multiple pictures; the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: and coding the processed multiple pictures to obtain a video file.
With reference to the second aspect, in some embodiments, the camera application interface further includes a capture control, and the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform: and responding to a second user operation acting on the shooting control, acquiring a plurality of pictures according to the first shooting parameters, and coding the pictures to obtain a video file.
In combination with the second aspect, in some embodiments, the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform: recognizing that the shooting scene is changed from the first shooting scene to a second shooting scene according to the acquired picture; determining a second shooting parameter according to the second shooting scene; and acquiring a plurality of pictures according to the second shooting parameter, and coding the plurality of pictures to obtain the video file.
In a third aspect, an embodiment of the present application provides a chip applied to an electronic device, where the chip includes one or more processors, and the processor is configured to invoke a computer instruction to cause the electronic device to perform a method as described in the first aspect and any possible implementation manner of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product including instructions, which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any one of the possible implementation manners of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes instructions that, when executed on an electronic device, cause the electronic device to perform a method as described in the first aspect and any possible implementation manner of the first aspect.
It is understood that the electronic device provided by the second aspect, the chip provided by the third aspect, the computer program product provided by the fourth aspect, and the computer storage medium provided by the fifth aspect are all configured to execute the method provided by the embodiment of the present application. Therefore, the beneficial effects achieved by the method can refer to the beneficial effects in the corresponding method, and the details are not repeated here.
Drawings
The drawings used in the embodiments of the present application are described below.
Fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure;
fig. 2 shows a software structure block diagram of the electronic device 100 provided by the embodiment of the present application;
fig. 3 is a schematic flowchart of a video capture method according to an embodiment of the present application;
FIGS. 4-8 are schematic diagrams of some human-computer interaction interfaces provided by embodiments of the present application;
FIG. 9 is a schematic flowchart of a video file capture and preview process according to an embodiment of the present disclosure;
fig. 10 is a flowchart illustrating a video file capture and preview process according to an embodiment of the present application.
Detailed Description
The terminology used in the following embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the present application. As used in the description of the embodiments of the present application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in the embodiments of this application refers to and encompasses any and all possible combinations of one or more of the listed items.
The embodiment of the application provides a video acquisition method and electronic equipment. The electronic device can identify a current shooting scene in the delayed shooting mode, wherein the current shooting scene comprises a backlight scene, a dim light scene or a normal light scene. The electronic equipment can adjust the shooting parameters and the shooting mode of the camera according to the identified shooting scene, and acquire pictures or videos by utilizing the adjusted shooting parameters and shooting mode. The electronic equipment can also determine an adopted video post-processing algorithm according to the identified shooting scene, and process the acquired picture or video by using the video post-processing algorithm. The processed data can be encoded to obtain a video file.
In the video acquisition method, the electronic equipment can adjust the shooting parameters, the shooting mode and the video post-processing algorithm according to different shooting scenes, so that the quality of the video acquired in the delayed shooting mode can be improved.
Some concepts related to embodiments of the present application are described below.
(1) Time-lapse photography
In the delayed photography mode, the electronic device adjusts the play frame rate of a group of pictures acquired within a longer recording time to obtain a video file with the play time shorter than the recording time. The following respectively describes the process of adjusting the play frame rate in the photographing mode and the video recording mode.
In the photographing mode, the electronic device may photograph a certain number of pictures at a lower frame rate, and then increase the play frame rate of the pictures to obtain a video file. The play frame rate of the pictures refers to the frequency at which the pictures are displayed when the video file composed of them is played. When the video file is played, because the play frame rate is higher than the frame rate at which the pictures were acquired, a process in which an object changes slowly over a long recording time is compressed into a much shorter playing time. Played this way, the video file can present changes that are too slow to perceive with the naked eye.
In the video recording mode, the electronic device may acquire a video at a lower frame rate, where the acquired video includes some pictures. And then the electronic equipment frames the video to reserve partial pictures, and increases the playing frame rate of the pictures to obtain a video file. When the obtained video file is played, the process that the object slowly changes in a longer recording time is compressed to be presented in a shorter playing time.
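The relationship between recording time, capture interval, and playback time described above can be sketched as follows (a minimal illustration; the function name and the example numbers are not from the patent):

```python
# Sketch: how time-lapse compresses a long recording into a short clip.
# All names and numbers are illustrative, not taken from the patent.

def playback_seconds(recording_seconds, capture_interval_s, playback_fps):
    """Number of captured frames divided by the playback frame rate."""
    frames = recording_seconds // capture_interval_s
    return frames / playback_fps

# 72 hours of recording, one picture every half hour, played at 24 fps:
clip = playback_seconds(72 * 3600, 30 * 60, 24)
print(clip)  # 6.0 -> a 3-day process plays back in 6 seconds
```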
(2) Backlit, ordinary and dim light scenes
In a backlight scene, the light entering the camera from behind the photographed subject is relatively strong, while the light entering the camera from the front of the subject is relatively weak, so that in the captured picture the front of the subject appears dark while the background appears bright. Taking a portrait as an example: when the photographed person faces the lens and the light comes from behind the person, the face of the portrait in the captured picture appears darker relative to the background.
Under a normal light scene, the light intensity of the front side of the shot object reaches a certain threshold value, and the light intensity of the back side of the shot object also reaches a certain threshold value.
A dim light scene is a scene with low ambient light intensity, i.e. the light intensity on the front and back of the object is low. In a dark scene, the exposure time of the captured picture needs to be increased to improve the brightness of the picture, thereby improving the quality of the picture. For example, when the light intensity of the photographed scene is less than the light intensity threshold, the scene is a dark light scene. The electronic device may increase the exposure time to increase the brightness of the captured picture.
In the embodiment of the application, when the camera collects pictures with the same shooting parameters, the image quality differs across shooting scenes. Specifically, in a backlight scene, the difference between the brightness of the bright areas and the dark areas of the captured picture is large, for example greater than a second threshold. In a normal light scene, the exposure value of the captured picture is smaller than a first threshold, and the brightness difference between the bright and dark parts of the picture is smaller than the second threshold. In a dim light scene, the exposure value of the picture is larger than the first threshold.
In the embodiment of the present application, the three scenes are not limited to the above three scenes, and other shooting scenes, such as an indoor scene, may also be included.
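As a sketch, the threshold-based classification described above might look like the following (the threshold values, units, and function name are assumptions for illustration only):

```python
# Toy version of the scene classification described above. The concrete
# threshold values and names are hypothetical, not from the patent.

FIRST_THRESHOLD = 12.0   # exposure-value threshold (hypothetical units)
SECOND_THRESHOLD = 80.0  # bright/dark brightness-difference threshold

def classify_scene(exposure_value, bright_dark_diff):
    # Backlight: large gap between bright and dark areas of the picture.
    if bright_dark_diff > SECOND_THRESHOLD:
        return "backlight"
    # Dim light: exposure value above the first threshold.
    if exposure_value > FIRST_THRESHOLD:
        return "dim"
    # Otherwise both conditions for a normal light scene hold.
    return "normal"

print(classify_scene(exposure_value=15.0, bright_dark_diff=20.0))   # dim
print(classify_scene(exposure_value=8.0, bright_dark_diff=120.0))   # backlight
print(classify_scene(exposure_value=8.0, bright_dark_diff=20.0))    # normal
```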
(3) Shooting mode
In the embodiment of the application, the shooting mode of the camera can comprise a video recording mode and a shooting mode.
The video recording mode is that the electronic device acquires a section of video according to a standard frame rate (for example, 24 pictures are acquired every second), and then the electronic device frames the section of video and only keeps pictures of a part of frames. The video file can be obtained after the pictures are adjusted to the playing frame rate (i.e. the frame interval time when the delayed video file is played is determined).
For example, a flower bud may take about 3 days and 3 nights, i.e., 72 hours, to open. The electronic device captures video at a standard frame rate (e.g., 24 pictures per second) for a recording time of 72 hours. The captured video contains 6220800 (i.e., 72 × 60 × 60 × 24) pictures. The frame extraction interval set by the electronic device is half an hour, i.e., the electronic device extracts one picture from the acquired video every half hour of recording time, extracting 144 pictures from the video with a recording time of 72 hours; these pictures are called frame extraction pictures. The electronic device then arranges the 144 pictures in sequence and sets the play frame rate of the video file composed of these pictures to the standard play frame rate, for example, 24 pictures per second. The electronic device can thus play back, within 6 seconds, the flowering process that took 3 days and 3 nights.
In the embodiment of the application, in the video recording mode, the frame interval time of the obtained delayed shooting video file when played is less than or equal to the frame interval time of the collected pictures in the collected video, and is also less than the frame interval time of the collected frame extraction pictures. For example, in the previous example, the frame interval time when the delayed video file is played is 1/24 seconds, the frame interval time when the picture in the captured video is captured is 1/24 seconds, and the frame interval time when the frame-extracted picture is captured is half an hour.
The photographing mode is that the electronic device collects one picture at regular time intervals; this time interval is the recording interval, that is, the frame interval time at which the pictures are collected. The electronic device sets the play frame rate of the collected pictures to the standard play frame rate to obtain a video file. For example, in the 72-hour recording of the bud opening described above, the electronic device acquires one picture every half hour, 144 pictures in total. The electronic device sets the play frame rate of the collected pictures to 24 pictures per second to obtain a video file. When the video file is played, the electronic device can play back, within a playing time of 6 seconds, the blossoming process recorded over 72 hours. The time interval for acquiring the pictures may also be referred to as the frame extraction interval.
In the embodiment of the application, in the photographing mode, the frame interval time of the obtained delayed photographing video file when being played is smaller than the frame interval time of the acquired picture. For example, in the previous example, the frame interval when the delayed video file is played is 1/24 seconds, and the frame interval when the picture is captured is half an hour.
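The frame extraction step of the video recording mode, using the numbers from the bud-opening example, can be sketched as follows (function and variable names are illustrative only):

```python
# Sketch of frame extraction in the video recording mode: from a video
# captured at a standard frame rate, keep one frame per extraction
# interval. Names and numbers are illustrative, not from the patent.

def extract_frames(frames, capture_fps, extraction_interval_s):
    """Keep every (capture_fps * extraction_interval_s)-th frame."""
    step = int(capture_fps * extraction_interval_s)
    return frames[::step]

# 72 h of video at 24 fps -> 6,220,800 frames; keep one per half hour.
kept = extract_frames(range(72 * 60 * 60 * 24), 24, 30 * 60)
print(len(kept))       # 144 frame extraction pictures
print(len(kept) / 24)  # 6.0 seconds of playback at 24 fps
```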
In the video recording mode, the electronic device needs to acquire a fixed number of pictures per second. The exposure time of each picture is fixed or the exposure time is adjustable only within a certain range. In a dark scene, each picture needs a longer exposure time to improve the brightness of the picture. Therefore, if the electronic device performs the video acquisition in the delayed shooting mode in a dark scene in a video recording manner, the quality of the obtained video is low because the brightness of each frame of picture is low.
In the shooting mode, the camera collects a picture at regular time intervals, and the time intervals can provide enough exposure time for the picture so as to improve the brightness of the picture. Therefore, the video acquisition in the delayed photography mode is carried out in a dark scene in a photographing mode, and the obtained video quality is high due to the fact that the brightness of each frame of picture is high.
(4) Shooting parameters
The photographing parameters may include a shutter, an exposure time, an Aperture Value (AV), an Exposure Value (EV), an ISO, and a frame interval. The following are introduced separately.
The shutter is a device for controlling the time of light entering the camera to determine the exposure time of the picture. The longer the shutter remains in the open state, the more light that enters the camera, and the longer the exposure time of the picture. The shorter the time the shutter remains in the open state, the less light enters the camera and the shorter the exposure time of the picture.
The shutter speed is the time during which the shutter remains open, that is, the time interval from the moment the shutter opens to the moment it closes. During this time, the object can leave an image on the negative. The faster the shutter speed, the sharper the picture of a moving object presented on the image sensor. Conversely, the slower the shutter speed, the more blurred the picture of a moving object.
The exposure time is the time during which the shutter is opened in order to project light onto the photosensitive surface of the photosensitive material of the camera. The exposure time is determined by the sensitivity of the photosensitive material and the illumination on the photosensitive surface. The longer the exposure time, the more light enters the camera. Therefore, a long exposure time is required in a dark scene and a short exposure time is required in a backlight scene. The shutter speed is the exposure time.
The aperture value is the ratio of the focal length of the lens to the light passing diameter of the lens. The larger the aperture value, the more light enters the camera. The smaller the aperture value, the less light enters the camera.
The exposure value is a value in which the shutter speed and the aperture value are combined to indicate the lens light-transmitting ability of the camera. The exposure value may be defined as:
EV = log2(N^2 / t)
wherein N is the aperture value; t is the exposure time (shutter), in seconds.
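The formula can be evaluated directly; for instance (a sketch, with arbitrary example aperture and shutter values):

```python
import math

# Exposure value from aperture value N and exposure time t (seconds),
# per the formula above: EV = log2(N^2 / t).

def exposure_value(n, t):
    return math.log2(n ** 2 / t)

# Reference point: f/1.0 at 1 second gives EV 0.
print(exposure_value(1, 1))                    # 0.0
# f/2.8 at 1/60 s:
print(round(exposure_value(2.8, 1 / 60), 2))   # 8.88
```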
ISO is used to measure the sensitivity of a film to light. For insensitive films, longer exposure times are required to achieve the same brightness of the image as the sensitive film. For sensitive films, a shorter exposure time is required to achieve the same brightness of the image as for insensitive films.
For the video recording mode, one frame is extracted from the collected video at a fixed recording time interval; this fixed interval is the frame extraction interval. For example, in the bud-opening example above, the electronic device captures a video at a standard frame rate (i.e., 24 pictures per second) for a recording time of 72 hours, capturing 6220800 (i.e., 72 × 60 × 60 × 24) pictures in total. Played at the capture frame rate, the video composed of these pictures would run for 72 hours. The frame extraction interval of the electronic device is half an hour, i.e., the electronic device extracts one picture from the acquired video every half hour of recording time, extracting 144 pictures from the 72-hour video. In the video recording mode, the frame interval time at which the pictures are captured is the first time interval; for example, when 24 pictures are taken per second, the first time interval is 1/24 second.
For the photographing mode, the frame extraction interval is the time difference between the acquisition of two adjacent pictures. In the photographing mode, the frame extraction interval is the frame interval time for the image to be captured, and may be referred to as a second time interval. The second time interval is greater than the first time interval and is determined by the exposure time.
The shorter the frame extraction interval, the smoother the motion trajectory of a moving scene appears when the resulting video is played; the longer the frame extraction interval, the more jerky the motion trajectory appears during playback.
Among the shooting parameters, the shutter, exposure time, aperture value, exposure value, and ISO can be adjusted automatically: the electronic device implements auto focus (AF), auto exposure (AE), and auto white balance (AWB), collectively referred to as 3A, through algorithms.
Automatic focusing means that the electronic device adjusts the position of the focusing lens to maximize the high-frequency components of the picture, and thereby obtain higher picture contrast. Focusing is a process of continuous accumulation: the electronic device compares the contrast of pictures captured with the lens at different positions, finds the lens position at which the picture contrast is greatest, and thereby determines the focus distance.
Auto exposure refers to the electronic device automatically setting an exposure value according to the available light source conditions. The electronic equipment can automatically set the shutter speed and the aperture value according to the exposure value of the currently acquired picture so as to realize automatic setting of the exposure value.
The perceived color of an object changes with the color of the light illuminating it, so the pictures collected by the electronic device have different color temperatures under different light colors. White balance is closely related to the ambient light: regardless of the ambient light, the camera of the electronic device should recognize white and restore the other colors based on it. Automatic white balance allows the electronic device to adjust the color fidelity of the picture according to the light source conditions.
And 3A is automatic focusing, automatic exposure and automatic white balance.
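As an illustration of the auto exposure idea above, a toy feedback loop might nudge the exposure setting toward a target picture brightness (the constants, names, and sign convention are assumptions, not from the patent):

```python
# Toy auto-exposure step: adjust the exposure value of the next frame
# based on the mean brightness of the current one. All constants and
# the sign convention here are illustrative assumptions.

def auto_expose(mean_brightness, ev, target=118, step=0.5, tol=10):
    """Return an adjusted exposure value for the next frame."""
    if mean_brightness < target - tol:
        return ev - step   # picture too dark: lower EV (more light)
    if mean_brightness > target + tol:
        return ev + step   # picture too bright: raise EV (less light)
    return ev              # within tolerance: keep the current EV

print(auto_expose(60, 10.0))   # 9.5  (brighten a dark frame)
print(auto_expose(118, 10.0))  # 10.0 (already on target)
```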
In the embodiment of the application, the shooting parameters comprise shutter, exposure time, aperture value, exposure value, ISO and frame extraction interval, and all the shooting parameters are related to the exposure of the picture. In the embodiment of the application, the exposure amount can represent the light received by the light sensor in the camera within the exposure time.
(5) Video post-processing algorithm
In the embodiment of the application, the video post-processing algorithm is used for processing the collected multiple pictures or videos. Specifically, the video post-processing algorithm can perform anti-shake processing to reduce the image blur caused by the shake of the electronic device. The video post-processing algorithm can be called by the image processing module provided by the embodiment of the application to process the acquired picture or video.
For different shooting scenes, the image processing module can apply different video post-processing algorithms. Specifically, for a backlight scene, the video post-processing algorithm may be a High Dynamic Range (HDR) algorithm, with which the image processing module combines multiple captured pictures into one picture. These pictures are captured with different exposure times; pictures with different exposure times differ in brightness and in the details they preserve. The image processing module synthesizes an HDR picture using the best details from the picture obtained at each exposure time. The HDR picture can be sent to the preview display module as a frame for previewing, or sent to the encoding module for encoding. For dim light scenes, the video post-processing algorithm may include a dim light optimization algorithm to improve the quality of pictures captured in dim light.
For a common light scene, the image processing module can adopt a video post-processing algorithm to perform anti-shake, noise reduction and other processing on the acquired picture or video. For dim light scenes, the image processing module can perform anti-shake, noise reduction and other processing, and can also perform dim light optimization processing through a dim light optimization algorithm. For the backlight scene, the image processing module can perform processing such as anti-shake and noise reduction, and can also perform processing by using an HDR algorithm.
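To make the HDR idea concrete, here is a deliberately tiny exposure-fusion sketch that, per pixel, keeps the value closest to mid-grey across the exposures (a toy illustration only; real HDR pipelines also align frames, weight several quality measures, and tone-map the result):

```python
# Toy sketch of the HDR idea described above: from several exposures of
# the same scene, prefer the best-exposed value per pixel. All names
# and numbers are illustrative, not from the patent.

def fuse_pixel(values, target=128):
    """Pick the luminance value closest to mid-grey among the exposures."""
    return min(values, key=lambda v: abs(v - target))

def fuse_exposures(images):
    """images: equally sized rows of 0-255 luminance values, one per exposure."""
    return [fuse_pixel(px) for px in zip(*images)]

under = [10, 90, 240]    # short exposure: shadows crushed
over = [120, 250, 255]   # long exposure: highlights blown
print(fuse_exposures([under, over]))  # [120, 90, 240]
```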
The following describes an electronic apparatus according to an embodiment of the present application.
Fig. 1 shows a schematic structural diagram of an electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low-frequency baseband signal to a baseband processor for processing. The low-frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to speaker 170A, receiver 170B, etc.) or displays pictures or video through display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or another functional module.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display pictures, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, with N being a positive integer greater than 1.
The electronic device 100 may implement a capture function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like, so as to implement an image capture module of the HAL layer in the embodiment of the present application.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into a picture or video visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the picture. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still pictures or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP for conversion into a digital picture or video signal. And the ISP outputs the digital picture or video signal to the DSP for processing. The DSP converts the digital picture or video signal into a picture or video signal in a standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital picture or video signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 runs the instructions stored in the internal memory 121 to execute the various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, a picture or video playing function, etc.), and the like. The data storage area may store data created during use of the electronic device 100 (such as audio data, a phone book, etc.), and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into a sound signal. The electronic device 100 can play music or conduct a hands-free call through the speaker 170A.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking with the mouth close to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further include three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headphone interface 170D may be the USB interface 130, or may be a 3.5mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
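The intensity-threshold behavior described above can be sketched as a small dispatch function. This is an illustrative sketch only: the threshold value, icon name, and instruction names are hypothetical and do not appear in the patent.

```python
def dispatch_touch(icon: str, intensity: float,
                   first_pressure_threshold: float = 0.5) -> str:
    """Map a touch on the short message application icon to an instruction.

    Hypothetical sketch: the threshold value and instruction names are
    illustrative, not taken from the patent.
    """
    if icon != "short_message":
        return "open_application"
    if intensity < first_pressure_threshold:
        return "view_short_message"        # light press: view the short message
    return "create_new_short_message"      # firm press: create a new short message
```

Because the dispatch keys on touch position (the icon) plus intensity, two presses on the same icon can map to different instructions, matching the behavior described above.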
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects a shake angle of the electronic device 100, calculates a distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 100 through a reverse movement, thereby achieving anti-shake. The gyro sensor 180B may also be used in navigation and somatosensory gaming scenarios.
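The angle-to-distance compensation step above can be illustrated with a simple small-angle geometric model. This is a hedged sketch: the real optical image stabilization algorithm is device-specific, and the focal-length value used below is an assumption, not a figure from the patent.

```python
import math

def lens_compensation_mm(shake_angle_rad: float, focal_length_mm: float) -> float:
    """Distance the lens module should move to counteract a detected shake angle.

    Simple geometric model: a small rotation shifts the image on the sensor by
    approximately focal_length * tan(angle), so the lens moves that distance in
    the opposite (reverse) direction. Illustrative only.
    """
    return -focal_length_mm * math.tan(shake_angle_rad)
```

For a hypothetical 26 mm-equivalent lens, a 0.01 rad shake would call for roughly a 0.26 mm reverse movement.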
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flip-open can then be set according to the detected open or closed state of the holster or flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The acceleration sensor can also be used to recognize the posture of the electronic device, and is applied in landscape/portrait screen switching, pedometers, and other applications.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device 100 may use the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light to the outside through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using the photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can use the collected fingerprint characteristics to unlock with a fingerprint, access an application lock, take a photo with a fingerprint, answer an incoming call with a fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold, to avoid an abnormal shutdown caused by low temperature. In other embodiments, when the temperature is lower than a still further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown due to low temperature.
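The three-threshold strategy above can be condensed into a small policy function. The patent only states that such thresholds exist; the specific values and action names below are hypothetical.

```python
def thermal_actions(temp_c: float,
                    high_c: float = 45.0,
                    low_c: float = 0.0,
                    very_low_c: float = -10.0) -> list:
    """Return the protective actions for a reported temperature.

    Threshold values and action names are hypothetical placeholders for the
    strategy described in the patent text.
    """
    actions = []
    if temp_c > high_c:
        # Throttle the processor near the sensor: less power, thermal protection.
        actions.append("throttle_nearby_processor")
    if temp_c < very_low_c:
        # Coldest case: boost the battery output voltage to avoid shutdown.
        actions.append("boost_battery_output_voltage")
    elif temp_c < low_c:
        # Moderately cold: heat the battery.
        actions.append("heat_battery")
    return actions
```

In the normal operating band between the low and high thresholds, no action is taken.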
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch-controlled screen". The touch sensor 180K is used to detect a touch operation applied on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of a bone mass vibrated by the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vocal-part bone mass acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can parse out heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming-call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenarios (such as time reminders, receiving information, alarm clocks, games, and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be inserted into or pulled out of the SIM card interface 195 to come into contact with or separate from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 may also be compatible with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
In the embodiment of the present application, the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of the electronic device 100.
Referring to fig. 2, fig. 2 shows a block diagram of a software structure of an exemplary electronic device 100 according to an embodiment of the present application. The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface.
As shown in fig. 2, the Android system can be divided into three layers, which are from top to bottom: an application layer, an application framework layer, and a Hardware Abstraction Layer (HAL) layer. Wherein:
The application layer includes a series of application packages, for example a camera application. The application layer is not limited to the camera application, and may also include other applications such as gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, and short message applications.
The camera application may provide a time-lapse photography mode for the user. As shown in fig. 2, the camera application may include a mode loading module, a photographing control module, and a preview display module. Wherein:
The mode loading module queries the HAL layer for the available modes when the camera application starts, and loads the modes according to the query result. The modes may include a night mode, a portrait mode, a photographing mode, a short video mode, a delayed photography mode, and the like.
The shooting control module starts together with the preview display module when a switch to the delayed photography mode is detected, and notifies the capability enabling module of the HAL layer to start the modules related to the delayed photography mode. The shooting control module can also respond to a touch operation of the user on the start-recording control in the user interface of the camera application by notifying the encoding module in the application framework layer and the image processing module. After receiving the notification, the encoding module starts to acquire the video data stream from the image processing module of the HAL layer. The encoding module may encode the video data stream to generate a video file. When the user performs a touch operation on the end-recording control in the user interface of the camera application, the shooting control module can likewise respond by notifying the encoding module in the application framework layer and the image processing module. After receiving the notification, the encoding module stops acquiring the video data stream from the image processing module of the HAL layer.
The preview display module receives the video data stream from the image processing module or the image acquisition module of the HAL layer and displays a preview picture or preview video on the user interface; the preview picture and preview video can be updated in real time.
The application framework layer (FWK) provides an application programming interface (API) and a programming framework for the applications of the application layer. The application framework layer includes some predefined functions.
As shown in fig. 2, the application framework layer may include a Camera Service interface (Camera Service) that may provide a communication interface between a Camera application and the HAL layer in the application layer. The application framework layer may also include an encoding module. The encoding module may receive a notification from a capture control module in the camera application to start or stop receiving a video data stream from an image processing module of the HAL layer and encode the video data stream to obtain a video file.
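The start/stop protocol between the shooting control module and the encoding module can be sketched as a small state machine. The class and method names below are hypothetical stand-ins for the patent's modules, and the "encoding" step is a placeholder for a real codec such as MPEG-4.

```python
class EncodingModule:
    """Minimal sketch of the encoding module's notification-driven behavior."""

    def __init__(self):
        self.receiving = False
        self.frames = []

    def on_notification(self, event: str) -> None:
        # Notifications come from the shooting control module in the camera app.
        if event == "start_recording":
            self.receiving = True
        elif event == "stop_recording":
            self.receiving = False

    def on_frame(self, frame) -> None:
        # Frames arrive as a video data stream from the HAL image processing module;
        # they are only consumed between start and stop notifications.
        if self.receiving:
            self.frames.append(frame)

    def encode(self) -> dict:
        # Stand-in for real video encoding into a video file.
        return {"container": "mp4", "frame_count": len(self.frames)}
```

Frames delivered before the start notification or after the stop notification are ignored, matching the described behavior of acquiring the stream only while recording.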
As shown in fig. 2, the HAL layer includes modules for providing the delayed photography mode for the camera application. The modules providing the delayed photography mode can collect pictures or videos in the delayed photography mode, recognize the shooting scene according to the collected pictures or videos, and report the recognized shooting scene. The HAL layer also provides corresponding post-processing algorithms for different shooting scenes.
Specifically, as shown in fig. 2, the HAL layer may include modules related to a delayed photography mode of the camera: the system comprises a capability enabling module, an image acquisition module, a scene recognition module and an image processing module. Wherein:
The capability enabling module starts the modules of the HAL layer related to the delayed photography mode, such as the image acquisition module, the scene recognition module, and the image processing module, after receiving the notification of the shooting control module. Specifically, when the user operates in the user interface of the camera application to switch to the delayed photography mode, the shooting control module in the camera application may notify the capability enabling module of the HAL layer; after receiving the notification, the capability enabling module starts the image acquisition module, the scene recognition module, and the image processing module.
The image acquisition module calls the camera to collect pictures or video and sends the collected pictures or video to the scene recognition module and the image processing module.
The scene recognition module performs scene recognition according to the received pictures or video, so as to recognize shooting scenes with different brightness, such as a normal light scene, a backlight scene, and a dim light scene.
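One plausible way to distinguish the three brightness scenes from a frame is to examine its luminance statistics. The heuristic and thresholds below are hypothetical assumptions for illustration; the patent does not disclose the actual recognition algorithm.

```python
def classify_scene(luma) -> str:
    """Classify a frame's 0-255 luma samples as 'dim', 'backlight', or 'normal'.

    Hypothetical heuristic: a low mean luminance suggests a dim-light scene;
    a frame with many very dark AND many very bright pixels suggests a
    backlight scene; everything else is treated as normal light.
    """
    n = len(luma)
    mean = sum(luma) / n
    dark_ratio = sum(1 for v in luma if v < 40) / n
    bright_ratio = sum(1 for v in luma if v > 215) / n
    if mean < 60:
        return "dim"
    if dark_ratio > 0.25 and bright_ratio > 0.25:
        return "backlight"
    return "normal"
```

A real implementation would likely also use exposure metadata (ISO, exposure time) reported by the ISP rather than pixel statistics alone.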
The image processing module can comprise video post-processing algorithms, and shooting scenes with different brightness can respectively correspond to different video post-processing algorithms. The image processing module can process the picture or the video through a video post-processing algorithm to obtain a video data stream, and sends the video data stream to the preview display module for preview display and to the encoding module to form a video file.
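The per-scene dispatch to a video post-processing algorithm could look like the following. The two algorithms shown (a simple gain for dim light, a highlight roll-off for backlight, operating on scalar pixel values) are placeholders for the real, undisclosed algorithms; all names are hypothetical.

```python
def brighten_and_denoise(frame):
    # Placeholder for the dim-light algorithm: apply gain, clamp to 255.
    return [min(255, int(v * 1.8)) for v in frame]

def compress_highlights(frame):
    # Placeholder for the backlight algorithm: simple highlight roll-off above 200.
    return [v if v <= 200 else 200 + (v - 200) // 2 for v in frame]

# One post-processing algorithm per recognized shooting scene.
POST_PROCESSING = {
    "dim": brighten_and_denoise,
    "backlight": compress_highlights,
    "normal": lambda frame: list(frame),   # pass-through
}

def post_process(scene: str, frame):
    """Route a frame to the post-processing algorithm for its shooting scene."""
    return POST_PROCESSING.get(scene, POST_PROCESSING["normal"])(frame)
```

The processed frames would then form the video data stream sent to the preview display module and the encoding module.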
It should be noted that the software architecture of the electronic device shown in fig. 2 is only one implementation manner of the embodiment of the present application, and in practical applications, the electronic device may further include more or fewer software modules, which is not limited herein.
The following describes a video capture method provided in an embodiment of the present application in detail with reference to a software structure of the electronic device 100 shown in fig. 2. Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a video capture method according to an embodiment of the present disclosure. As shown in fig. 3, the video capturing method includes steps S101 to S124.
In the video capture method provided by the embodiment of the application, the camera application first loads the delayed photography mode in response to the user's operation of opening the camera application. After the delayed photography mode is loaded, the user may start the delayed photography mode by touching the delayed photography mode icon. Then, the HAL layer can recognize the shooting scene and report it to the shooting control module of the application layer. The shooting control module can adjust the shooting parameters and shooting mode of the delayed photography mode and deliver them to the image acquisition module of the HAL layer. Finally, the image acquisition module can collect pictures or video according to the adjusted shooting parameters and shooting mode. The image processing module can also determine the video post-processing algorithm to adopt according to the recognized shooting scene and process the collected pictures or video with that algorithm; the processed video data stream can be encoded by the encoding module to obtain a video file. The preview display module can also obtain the processed video data stream for preview display.
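The end-to-end flow just described can be condensed into a sketch in which each module is passed in as a callable. All function and parameter names are hypothetical stand-ins for the patent's modules, so the sketch stays independent of any real camera stack.

```python
def delayed_photography_pipeline(frames, recognize, adjust, capture, process, encode):
    """Sketch of the flow: recognize scene -> adjust shooting parameters ->
    capture -> per-scene post-processing -> encode into a video file."""
    scene = recognize(frames)                        # HAL scene recognition module
    params = adjust(scene)                           # shooting control module (app layer)
    captured = capture(frames, params)               # HAL image acquisition module
    stream = [process(scene, f) for f in captured]   # HAL image processing module
    return encode(stream)                            # encoding module (framework layer)
```

A preview branch would simply consume the same `stream` before encoding.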
Wherein: steps S101 to S103 describe a process of loading the delayed photography mode. Steps S104 to S118 describe a shooting parameter and shooting mode adjustment process. Steps S119 to S124 describe the process of forming a video file and previewing. The following description is made separately.
Step S101-S103, loading the delayed photography mode process
S101, starting a camera application by a user.
In the embodiment of the present application, the user may start the camera application by operating, for example, touching, an application icon of the camera application, which may specifically refer to the specific description of (a) in fig. 4.
S102, when the camera application is started, the mode loading module queries the HAL layer for the supported modes.
In the embodiment of the application, the HAL layer may provide a time-lapse photography mode for the camera application. That is, in the delayed photography mode, in the HAL layer, the capability enabling module, the image capturing module, the scene recognition module, and the image processing module may be activated to perform their respective functions.
In this embodiment of the application, the HAL layer may also provide other modes for the camera application, such as a portrait mode, a normal mode, a night view mode, and a video recording mode, which are not limited in this embodiment of the application.
Specifically, the mode loading module may query the capability enabling module for the available modes. In response to the query, the capability enabling module may feed back to the mode loading module the modes that the HAL layer provides for the camera application, for example: a delayed photography mode, a portrait mode, a normal mode, a night view mode, a video recording mode, and the like.
S103, loading the mode according to the query result by the mode loading module.
The loaded modes include the delayed photography mode. During loading, the mode loading module initializes the modules corresponding to each mode in the application layer and the HAL layer. After initialization, the electronic device 100 may display an icon corresponding to each mode, as described with reference to the examples shown in (B) and (C) in fig. 4. After initialization, in response to a touch operation of the user on the icon corresponding to the delayed photography mode, the shooting control module may notify the capability enabling module, the image acquisition module, the scene recognition module, and the image processing module in the HAL layer to start and perform their respective functions. The other modes behave similarly after initialization: the corresponding modules in the HAL layer are started in response to a touch operation of the user on the icons corresponding to those modes.
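The query-and-load handshake of steps S102 and S103 can be sketched as follows. This is an illustrative sketch only: the class names, mode names, and method signatures are assumptions, not an actual HAL interface.

```python
# Hypothetical sketch of steps S102-S103: the mode loading module queries the
# capability enabling module for the modes the HAL layer provides, then loads
# each one. All names here are illustrative assumptions.

class CapabilityEnablingModule:
    """HAL-layer module that reports the modes provided for the camera app."""
    PROVIDED_MODES = ["delayed_photography", "portrait", "normal",
                      "night_view", "video_recording"]

    def query_modes(self):
        return list(self.PROVIDED_MODES)

class ModeLoadingModule:
    """Application-layer module that loads each mode reported by the HAL."""
    def __init__(self, capability_module):
        self.capability_module = capability_module
        self.loaded_modes = []

    def load(self):
        # S102: query the HAL layer; S103: initialize/load each reported mode.
        for mode in self.capability_module.query_modes():
            self.loaded_modes.append(mode)
        return self.loaded_modes

loader = ModeLoadingModule(CapabilityEnablingModule())
print(loader.load())
```

After `load()` returns, the application can display one icon per loaded mode, as in fig. 4 (B) and (C).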
The user interface involved in loading the delayed photography mode is described below. Referring to fig. 4, fig. 4 is a schematic diagram of a human-computer interaction interface according to an embodiment of the present disclosure. As shown in fig. 4 (a), the electronic device 100 may display the user interface 10, which is the home screen interface of the electronic device 100. The home screen interface 10 includes a calendar widget 101, a weather widget 102, application icons 103, a status bar 104, and a navigation bar 105. Wherein:
The calendar widget 101 may be used to indicate the current time, for example, the date, the day of the week, and hour-and-minute information.
The weather widget 102 may be used to indicate the weather type, such as cloudy-to-sunny or light rain, and may also be used to indicate information such as the temperature and the location.
The application icons 103 may include icons of WeChat, Twitter, Facebook, Sina Weibo, Tencent QQ, YouTube, Gallery, Camera, and other applications, which are not limited in this embodiment of the present application. Any application icon can respond to a user operation, such as a touch operation, so that the electronic device starts the application corresponding to the icon.
The name of the operator (e.g., china mobile), time, WI-FI icon, signal strength, and current remaining power may be included in the status bar 104.
The navigation bar 105 may include a return key 1051, a home screen key 1052, a recent tasks key 1053, and other system navigation keys. The home screen interface is the interface displayed by the electronic device 100 after a user operation on the home screen key 1052 is detected on any user interface. When it is detected that the user clicks the return key 1051, the electronic device 100 may display the user interface previous to the current one. When it is detected that the user clicks the home screen key 1052, the electronic device 100 may display the home screen interface 10. When it is detected that the user clicks the recent tasks key 1053, the electronic device 100 may display the tasks that the user has recently opened. The navigation keys may also have other names; for example, 1051 may be called the Back Button, 1052 the Home Button, and 1053 the Menu Button, which is not limited in this embodiment of the present application. Each navigation key in the navigation bar 105 is not limited to a virtual key and may also be implemented as a physical key.
The user launches the camera application, which may be accomplished by touching the camera icon. As shown in fig. 4 (a), in response to a touch operation of the camera icon by the user, the mode loading module executes steps S102 to S103. After the mode loading module completes loading the modes, the electronic device 100 may display an icon corresponding to each mode.
Illustratively, the loaded modes include a night mode, a portrait mode, a photographing mode, a short video mode, a video recording mode, a delayed photography mode, and the like. As shown in fig. 4 (B), the electronic device 100 may display the camera application interface 20, which may include icons 204 corresponding to the loaded modes. The icons 204 may include a night mode icon 204A, a portrait mode icon 204B, a photographing mode icon 204C, a short video mode icon 204D, a video recording mode icon 204E, and a more icon 204F. The more icon 204F is used to display the icons of the other loaded modes, as described with reference to (C) in fig. 4. The shooting control module may start the mode corresponding to any one of the icons 204 in response to a touch operation of the user on that icon.
As shown in fig. 4 (B), the camera application interface 20 may further include a captured image playback control 201, a shooting control 202, a camera switching control 203, a finder frame 205, a focus control 206A, a setting control 206B, and a flash switch 206C. Wherein:
The captured image playback control 201 allows the user to view captured pictures and videos.
The camera switching control 203 switches image acquisition between the front camera and the rear camera.
The viewfinder 205 displays a real-time preview of the acquired picture.
The focus control 206A focuses the camera.
The settings control 206B sets various parameters for image acquisition.
The flash switch 206C turns the flash on or off.
As shown in (C) of fig. 4, in response to the user's touch operation on the more icon 204F, the electronic device 100 displays the mode selection interface 30. The mode selection interface 30 may include icons of the other modes loaded in step S103.
Mode selection interface 30 may include a delayed photography mode icon 204G, and may also include a professional photography mode icon, a skin makeup photography mode icon, a slow motion mode icon, a professional video mode icon, a skin makeup video mode icon, a gourmet mode icon, a 3D dynamic panoramic mode icon, a panoramic mode icon, an HDR mode icon, a smart literacy mode icon, a streaming shutter mode icon, a voiced photos mode icon, an online translation mode icon, a watermark mode icon, and a document correction mode icon.
In the embodiment of the present application, the electronic device may open the camera application in response to a user operation, and then display the camera application interface 20 on the display screen.
The user may operate any of the mode icons, for example by a touch operation, to start the corresponding mode; the electronic device then starts the corresponding modules in the HAL layer.
(II) Steps S104 to S118, adjusting the shooting parameters and the shooting mode
S104, the user switches to the delayed photography mode.
As shown in (C) of fig. 4, the user may touch the delayed photography mode icon 204G on the mode selection interface 30 to switch to the delayed photography mode.
S105, in response to the touch operation of the user on the delayed photography mode icon 204G, the shooting control module and the preview display module are started.
After the shooting control module is started, it may notify the capability enabling module to start the modules related to the delayed photography mode in the HAL layer, such as the image acquisition module, the scene recognition module, and the image processing module.
In the embodiment of the present application, the first user operation may include a user touch operation on the delayed photography mode icon 204G.
In one possible implementation, the shooting control module and the preview display module may already have been started in step S102, that is, in response to the user starting the camera application. The shooting control module can then be used for shooting control in every mode, and the preview display module for preview display in every mode.
S106, the shooting control module sends a notification for starting the delayed photography mode to the capability enabling module of the HAL layer.
S107, the capability enabling module enables the image acquisition module, the scene recognition module, and the image processing module.
S108, the image acquisition module acquires pictures or video according to the preset shooting parameters and the preset shooting mode.
The preset shooting parameters and shooting mode may correspond to, for example, a normal light scene. The preset shooting mode may be the video recording mode; for the video recording mode, reference may be made to the detailed description in step S112.
S109, the image acquisition module sends the acquired pictures or video to the scene recognition module.
In the embodiment of the application, the image acquisition module can also send the pictures or video to the image processing module, which processes them into a video data stream and sends it to the preview display module for preview display. The image processing module may process the image data using the post-processing algorithm corresponding to the preset shooting scene (for example, a normal light scene) to obtain the video data stream.
In the embodiment of the application, the video data stream comprises an ordered group of pictures, and each picture in the group may be given a timestamp when it is shot. The timestamps may be reset while the encoding module encodes the group of pictures.
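The timestamp reset can be illustrated with a small sketch: pictures carry their capture times, and the encoder re-stamps them at the playback frame rate so the sequence plays back compressed in time. The 24 fps playback rate and the one-second capture interval are assumptions for illustration.

```python
# Sketch of the timestamp reset performed during encoding. Capture timestamps
# (e.g. one picture per second) are replaced with evenly spaced playback
# timestamps; the 24 fps playback rate is an assumed value.

def reset_timestamps(capture_times_s, playback_fps=24):
    """Re-stamp an ordered group of pictures at the playback frame rate."""
    return [i / playback_fps for i in range(len(capture_times_s))]

captured = [0.0, 1.0, 2.0, 3.0, 4.0]   # five pictures, one per second
playback = reset_timestamps(captured)
print(playback)  # five timestamps spanning well under one second
```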
S110, the scene recognition module identifies the shooting scene from the pictures or video.
In the embodiment of the application, the scene recognition module can obtain the exposure parameters of an acquired picture from the pictures or video, determine the brightness difference between the bright and dark areas of the picture, and use these values to determine the shooting scene. For example, the exposure parameter is an exposure value (EV), and the camera application may issue a notification to the HAL layer to detect the exposure parameter. On receiving the notification, the scene recognition module in the HAL layer can calculate the exposure value of the picture and the brightness difference between its bright and dark areas. When the exposure value is larger than a first threshold and the brightness difference is smaller than a second threshold, the scene recognition module may determine that the shooting scene is a dark light scene. When the exposure value is smaller than the first threshold and the brightness difference is larger than the second threshold, it may determine that the shooting scene is a backlight scene. When the exposure value is smaller than the first threshold and the brightness difference is smaller than the second threshold, it may determine that the shooting scene is a normal light scene. Optionally, the scene recognition module may apply the same rules to multiple pictures to determine the shooting scene more accurately.
The embodiment of the present application does not limit the specific algorithm used by the scene recognition module to recognize the shooting scene.
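As one possible illustration of step S110, the threshold rules above can be sketched as follows. The two threshold values are placeholders, since the embodiment does not specify them, and the majority vote over several pictures is one way to realize the optional multi-picture refinement.

```python
from collections import Counter

# Sketch of the scene-recognition rules in step S110. The thresholds are
# placeholder values; the embodiment does not fix them.
FIRST_THRESHOLD = 100   # exposure-value threshold (placeholder)
SECOND_THRESHOLD = 50   # bright/dark-area brightness-difference threshold (placeholder)

def recognize_scene(exposure_value, brightness_diff):
    """Classify one picture's shooting scene from its statistics."""
    if exposure_value > FIRST_THRESHOLD and brightness_diff < SECOND_THRESHOLD:
        return "dark_light"
    if exposure_value < FIRST_THRESHOLD and brightness_diff > SECOND_THRESHOLD:
        return "backlight"
    if exposure_value < FIRST_THRESHOLD and brightness_diff < SECOND_THRESHOLD:
        return "normal_light"
    return "unknown"  # combination not covered by the stated rules

def recognize_scene_multi(samples):
    """Majority vote over several pictures, for a more reliable decision."""
    votes = Counter(recognize_scene(ev, bd) for ev, bd in samples)
    return votes.most_common(1)[0][0]

print(recognize_scene(150, 20))                                 # dark_light
print(recognize_scene_multi([(150, 20), (150, 30), (80, 20)]))  # dark_light
```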
S111, the scene recognition module reports the identified shooting scene to the shooting control module and sends it to the image processing module.
In this embodiment of the application, step S111 is executed only when the scene recognition module determines that the current shooting scene differs from the preset shooting scene. Specifically, the preset shooting parameters and shooting mode in the image acquisition module are those corresponding to the preset shooting scene. For example, if the preset shooting scene is a normal light scene, the preset shooting parameters and shooting mode are those of the normal light scene, and step S111 and the subsequent steps are performed when the scene recognition module determines that the current shooting scene is not a normal light scene. If the scene recognition module determines that the current shooting scene is a normal light scene, step S111 need not be performed.
S112, the shooting control module adjusts the shooting parameters and the shooting mode according to the received shooting scene.
The adjusted shooting parameters may be the first shooting parameters, and the adjusted shooting mode the first shooting mode. The shooting parameters may include any one or more of: shutter, exposure time, aperture value, exposure value, ISO, and frame extraction interval. The shooting mode may be the video recording mode or the photographing mode.
The shooting control module can correspondingly set a shooting parameter and a shooting mode for each shooting scene. Illustratively, the first photographing parameter is a photographing parameter corresponding to a normal light scene, and the first photographing mode is a photographing mode corresponding to a normal light scene.
The first shooting parameter may be a shooting parameter corresponding to a backlight scene, and the first shooting mode may be a shooting mode corresponding to the backlight scene.
The first shooting parameter may also be a shooting parameter corresponding to a dark light scene, and the first shooting mode may also be a shooting mode corresponding to a dark light scene.
For example, when the received shooting scene is a backlight scene, the shooting control module determines that the adjusted shooting parameters are the shooting parameters corresponding to the backlight scene according to the correspondence, and the adjusted shooting mode is the shooting mode corresponding to the backlight scene.
The process of adjusting the shooting mode is described in detail below. In a normal light scene and a backlight scene, the light of the shooting environment is strong enough that the image acquisition module can record video at a standard frame rate (for example, 24 pictures per second), and each frame of the recorded video is sufficiently bright. In a dark light scene, however, the light intensity is insufficient for recording at the standard frame rate: the exposure time available for each picture is too short, so the pictures are too dark. Therefore, in a dark light scene, the image acquisition module can switch to the photographing mode and set the exposure time of each picture to be longer than the exposure time corresponding to the standard frame rate, obtaining a series of brighter pictures and hence a higher-quality video.
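The mode choice just described can be sketched as follows. The 24 fps standard frame rate comes from the example above; the 1-second dark-scene exposure is an illustrative assumption.

```python
# Sketch of the shooting-mode adjustment described above: bright-enough scenes
# record video at a standard frame rate, while a dark light scene switches to
# the photographing mode so each picture can expose for longer than the
# 1/24 s a 24 fps video frame allows. The numeric values are assumptions.

STANDARD_FPS = 24

def choose_shooting_mode(scene, dark_exposure_s=1.0):
    if scene in ("normal_light", "backlight"):
        return {"mode": "video_recording",
                "exposure_s": 1.0 / STANDARD_FPS}
    if scene == "dark_light":
        # Exposure per picture exceeds the standard-frame-rate exposure.
        return {"mode": "photographing", "exposure_s": dark_exposure_s}
    raise ValueError(f"unknown scene: {scene}")

print(choose_shooting_mode("dark_light"))
```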
The process of adjusting the shooting parameters is described in detail below.
Among the shooting parameters, the shutter, exposure time, exposure value, and ISO can be adjusted automatically through the 3A algorithms (auto focus, auto exposure, and auto white balance). For the shutter, exposure time, aperture value, exposure value, and ISO, the shooting control module can calculate the exposure value corresponding to each shooting scene and automatically set the shutter speed and aperture value according to the exposure value of the acquired picture, so that the shooting parameters are set automatically according to the shooting scene. Specifically, the shooting control module may calculate new exposure parameters (a new shutter, exposure time, exposure value, and ISO) from the exposure value corresponding to the shooting scene, apply them to the camera, and then acquire the exposure value again. If the exposure value does not meet the requirement, the shooting control module readjusts the exposure parameters until the obtained exposure value does.
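A hedged sketch of this apply-measure-readjust loop follows. The proportional update rule, tolerance, and iteration cap are illustrative choices, and `measure_exposure` stands in for the real camera.

```python
# Sketch of the exposure-adjustment loop described above: apply new exposure
# parameters, re-measure the exposure value, and repeat until it meets the
# requirement. The update rule and numeric values are assumptions.

def converge_exposure(measure_exposure, target_ev, exposure_s=0.02,
                      tolerance=0.05, max_iterations=10):
    """Scale the exposure time until the measured EV is close to the target."""
    for _ in range(max_iterations):
        measured = measure_exposure(exposure_s)
        if abs(measured - target_ev) <= tolerance:
            break  # exposure value meets the requirement
        exposure_s *= target_ev / measured  # longer exposure -> larger EV
    return exposure_s

# Toy camera model: measured EV proportional to exposure time.
final = converge_exposure(lambda e: 100.0 * e, target_ev=5.0)
print(round(final, 4))  # 0.05
```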
Among the shooting parameters, the frame extraction interval may be affected by the shooting scene. In some embodiments of the present application, the frame extraction interval may be set on a user interface of the camera application in response to a user operation. For each shooting scene, the scene recognition module may determine a single-frame processing time, that is, the time the image acquisition module and the image processing module need to acquire and process one picture. In the corresponding shooting scene, the minimum frame extraction interval that can be set on the application interface of the camera application is then greater than or equal to the single-frame processing time in that scene. In the embodiment of the application, in the photographing mode, the frame extraction interval is the frame interval at which the pictures are acquired, that is, the second time interval.
For example, in a dark light scene the single-frame processing time is 1 second. After the scene recognition module reports the identified dark light scene to the shooting control module in the camera application, a control for setting the frame extraction interval is placed on the application interface of the camera application, and the minimum settable frame extraction interval is greater than or equal to 1 second; see fig. 9 for a specific example. An example of determining the single-frame processing time when the shooting scene is a dark light scene follows. For a dark light scene, the scene recognition module may also determine an exposure value from the pictures or video and determine a new exposure time from that exposure value, following this rule: the smaller the exposure value, the shorter the new exposure time; the larger the exposure value, the longer the new exposure time. After determining the new exposure time, the scene recognition module may report it to the shooting control module of the camera application, and the shooting control module issues it to the image acquisition module, which sets the exposure time of each picture to the new exposure time when capturing it.
Optionally, after determining the new exposure time, the scene recognition module may further determine the single-frame processing time from it. Alternatively, the scene recognition module reports the new exposure time and the shooting control module determines the single-frame processing time from it. In the process of forming a video file and previewing, the camera application instructs the HAL layer to acquire multiple pictures according to the single-frame time interval, as described with reference to fig. 9. The new exposure time may also be determined by the shooting control module and then sent to the scene recognition module.
When the shooting scene is a normal light scene or a backlight scene, the video recording mode is used, and the exposure time of each picture can be determined from the recording frame rate. The scene recognition module determines the single-frame processing time in the normal light scene and the backlight scene from that exposure time.
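The relationship between exposure time, single-frame processing time, and the minimum settable frame extraction interval described above can be sketched as follows. The processing-overhead figure used in the normal-light example is an assumption.

```python
# Sketch of the constraint described above: the single-frame processing time
# (capture plus processing of one picture) bounds the smallest frame
# extraction interval the UI may offer. The overhead value is an assumption.

def single_frame_processing_time(exposure_s, processing_overhead_s=0.0):
    return exposure_s + processing_overhead_s

def min_frame_interval(exposure_s, processing_overhead_s=0.0):
    """Smallest frame extraction interval the camera UI should allow."""
    return single_frame_processing_time(exposure_s, processing_overhead_s)

# Dark light scene: 1 s exposure -> interval slider cannot go below 1 s.
print(min_frame_interval(1.0))
# Normal light at 24 fps video: exposure is 1/24 s plus processing overhead.
print(round(min_frame_interval(1.0 / 24, 0.458), 3))
```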
In the embodiment of the present application, the shooting parameters corresponding to each of the normal light scene, the backlight scene, and the dark light scene may include the 3A parameters (auto focus, auto exposure, and auto white balance), and may further include the adjustable range of the frame extraction interval displayed on the user interface in that scene.
S113, the shooting control module sends the first shooting parameters and the first shooting mode to the image acquisition module.
In the embodiment of the application, when the scene identified by the scene recognition module is a normal light scene, it is the same as the preset shooting scene, so the shooting control module need not be notified to adjust the shooting parameters and shooting mode. When the identified scene is a backlight scene, the shooting control module sets the shooting parameters and shooting mode to those corresponding to the backlight scene and sends them to the image acquisition module. When the identified scene is a dark light scene, the shooting control module sets the shooting parameters and shooting mode to those corresponding to the dark light scene and sends them to the image acquisition module.
S114, the image acquisition module acquires a picture or a video according to the first shooting parameters and the first shooting mode.
S115, the image acquisition module sends the acquired pictures or video to the image processing module.
In the embodiment of the application, the pictures or video can be acquired and sent in real time; that is, the image acquisition module can send each picture to the image processing module as soon as it is acquired.
S116, the image processing module processes the pictures or video according to the identified shooting scene to obtain a video data stream.
In the embodiment of the application, the image processing module can use a different post-processing algorithm for each shooting scene. The post-processing algorithms the image processing module uses in the normal light, backlight, and dark light scenes are described below. For a normal light scene and a dark light scene, the image processing module can apply a video post-processing algorithm that performs anti-shake, noise reduction, and similar processing on the acquired pictures or video. For a backlight scene, the image processing module can process with an HDR algorithm in addition to anti-shake and noise-reduction processing. Reference may be made to the detailed description of the concept of video post-processing algorithms.
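The per-scene selection of a post-processing pipeline can be sketched as follows. The processing steps are stubs standing in for real anti-shake, noise-reduction, and HDR algorithms, which the embodiment does not detail.

```python
# Sketch of the per-scene post-processing selection described above. Each
# stub tags a frame instead of really processing it; the stage ordering for
# the backlight pipeline is an assumption.

def stabilize(frames):  return [f + ":stabilized" for f in frames]
def denoise(frames):    return [f + ":denoised" for f in frames]
def hdr_merge(frames):  return [f + ":hdr" for f in frames]

# Processing pipeline chosen per recognized shooting scene.
PIPELINES = {
    "normal_light": [stabilize, denoise],
    "dark_light":   [stabilize, denoise],
    "backlight":    [hdr_merge, stabilize, denoise],
}

def post_process(scene, frames):
    for step in PIPELINES[scene]:
        frames = step(frames)
    return frames

print(post_process("backlight", ["frame0"]))
# ['frame0:hdr:stabilized:denoised']
```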
The user interfaces involved in the shooting parameter and shooting mode adjustment process are described below. Please refer to fig. 5, a schematic diagram of a human-computer interaction interface according to an embodiment of the present disclosure. As shown in (C) in fig. 4 and (a) in fig. 5, in step S104 the user may touch the delayed photography mode icon 204G to switch to the delayed photography mode. In response to that touch operation, the electronic device 100 displays the camera application interface 20, which includes a delayed photography mode prompt 207 and a close control 208. The close control 208 closes the delayed photography mode: in response to a touch operation of the user on it, the modules related to the delayed photography mode in the HAL layer (the image acquisition module, the image processing module, and the scene recognition module) may be closed, the delayed photography mode prompt 207 is no longer displayed, and the electronic device 100 displays the interface described in (B) in fig. 4.
After the scene recognition module identifies the shooting scene in step S110, the identified shooting scene may be reported to the camera application, and the application interface of the camera application may show it. As shown in fig. 5 (B), a shooting scene prompt 209 may also be included on the camera application interface 20. Illustratively, the shooting scene prompt 209 may read: dark light scene.
After the shooting parameters and shooting mode are adjusted, the pictures or video the image acquisition module acquires according to the first shooting parameters and first shooting mode can be of higher quality than those acquired before the adjustment. As shown in (a) and (B) of fig. 5, the quality of the picture stream displayed in the viewfinder 205 is higher after the adjustment than before it. For example, when the shooting parameters are adjusted, the exposure time of each picture is increased and the pictures become brighter. In addition, after the shooting parameters are adjusted, the video post-processing algorithm may also be adjusted accordingly so that the brightness range of the preview image in the viewfinder 205 changes. Illustratively, after the video post-processing algorithm is adjusted, the brightness range of the picture displayed in the viewfinder 205 is larger than before the adjustment.
In this embodiment of the application, if the user touches the close control 208 before the shooting parameter and shooting mode adjustment process is complete, for example while step S112 is executing, the execution of step S112 and the subsequent steps stops, and the electronic device 100 displays the camera application interface 20 shown in (B) in fig. 4.
The user interface involved in the effect of the shooting scene on the adjustable range of the frame extraction interval is described below. In one possible implementation, after the scene recognition module identifies the shooting scene in step S110, the camera application may display a control for setting the frame extraction interval, the frame extraction interval control 211, on the camera application interface 20 according to the identified shooting scene. Specifically, please refer to fig. 6, a schematic diagram of a human-computer interaction interface according to an embodiment of the present application.
As shown in fig. 6 (a), when the shooting scene identified by the scene recognition module in step S110 is a dark light scene, the camera application interface 20 may further include a frame extraction interval control 211 and a prompt 212. Wherein:
The prompt 212 may read: tap the icon to adjust the frame extraction interval; long-press to view details.
The frame extraction interval control 211 adjusts the frame extraction interval of the captured video, as described with reference to fig. 7.
As shown in fig. 6 (B), in response to a long-press operation of the user on the frame extraction interval control 211, the electronic device 100 may display the frame extraction interval details interface 40, which includes a function prompt 401 that may read: the larger the frame extraction interval, the more the shot video is compressed in playback time; different frame extraction intervals suit different scenes, and scene details can be viewed by tapping the control. The frame extraction interval details interface 40 may also include a go-view option 402 for adjusting the frame extraction interval.
Referring to fig. 7, fig. 7 is a schematic diagram of a human-computer interaction interface according to an embodiment of the present disclosure. As shown in fig. 6 (B) and fig. 7, in response to the user's touch operation on the go-view option 402, the electronic device 100 displays the camera application interface 20, which includes the frame extraction interval adjustment control 213.
As shown in fig. 6 (a) and fig. 7, in response to the user's touch operation on the frame extraction interval control 211, the electronic device 100 likewise displays the camera application interface 20 with the frame extraction interval adjustment control 213.
As shown in fig. 7, when a user operation on the delayed photography mode icon is detected, the camera application interface 20 displayed by the electronic device, which may be an interface for delayed photography, includes a shooting scene prompt 209 indicating a dark light scene. The delayed photography interface may include a first control, namely the frame extraction interval adjustment control 213.
For example, the shooting scene identified by the scene recognition module in step S110 is a dark light scene and the single-frame processing time is 1 second; see step S112 for how the single-frame processing time is determined. The minimum frame extraction interval that can be set by the frame extraction interval adjustment control 213 is then greater than or equal to the 1-second single-frame processing time in the dark light scene. A picture can therefore be captured and processed within each frame extraction interval, reducing frame extraction failures caused by an interval shorter than the single-frame processing time.
In this embodiment, the first control may include the frame extraction interval adjustment control 213, configured to adjust the second time interval within a range greater than or equal to the exposure time. Specifically, the user may touch or drag the frame extraction interval adjustment control 213 to adjust the frame extraction interval. Different frame extraction intervals may correspond to different shooting scenes. For example, as shown in fig. 7, the frame extraction interval adjustment control 213 may include a town tide identifier 213A, a sunrise and sunset identifier 213B, a sky cloud identifier 213C, and a building manufacture identifier 213D. Wherein:
The town tide identifier 213A indicates that the applicable scene is a town tide when the frame extraction interval is set to 1 second.
The sunrise and sunset identifier 213B indicates that the applicable scene is sunrise and sunset when the frame extraction interval is set to 10 seconds.
The sky cloud identifier 213C indicates that the applicable scene is sky clouds when the frame extraction interval is set to 15 seconds.
The building manufacture identifier 213D indicates that the applicable scene is building construction when the frame extraction interval is set to 30 seconds.
The above scene examples are only used to explain the embodiments of the present application and should not be construed as limiting. In addition, the different frame extraction intervals may also be used for shooting other scenes.
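The preset intervals above, together with the lower bound imposed by the single-frame processing time, can be sketched as follows. This is an illustrative sketch only; the dictionary keys, function name, and the idea of clamping a requested interval are assumptions for illustration, not part of the patent.

```python
# Illustrative sketch: preset frame extraction intervals (seconds) for the
# scene identifiers shown on the adjustment control 213, and clamping of a
# requested interval to the single-frame processing time of the identified scene.
PRESET_INTERVALS = {
    "city_traffic": 1,     # identifier 213A
    "sunrise_sunset": 10,  # identifier 213B
    "sky_clouds": 15,      # identifier 213C
    "construction": 30,    # identifier 213D
}

SINGLE_FRAME_PROCESSING_TIME = {
    "dim_light": 1.0,      # seconds, from the dim light example
    "normal_light": 0.5,   # seconds, from the normal light example
}

def effective_interval(requested: float, scene: str) -> float:
    """Clamp the requested frame extraction interval so it is never smaller
    than the single-frame processing time of the identified scene."""
    return max(requested, SINGLE_FRAME_PROCESSING_TIME[scene])
```

Under this sketch, a user request of 0.3 seconds in a dim light scene would be raised to 1 second, which matches the minimum value enforced by the adjustment control 213.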
Illustratively, the shooting scene identified by the scene recognition module in step S110 is a normal light scene, and the single-frame processing time is 0.5 seconds. Please refer to fig. 8, which is a schematic diagram of a human-computer interaction interface according to an embodiment of the present application. As shown in fig. 8, the minimum frame extraction interval that can be set by the frame extraction interval adjustment control 213 is greater than or equal to the single-frame processing time of 0.5 seconds in the normal light scene. Therefore, pictures can be extracted within the frame extraction interval, reducing the situation where frame extraction fails because the frame extraction interval is smaller than the single-frame processing time.
As shown in fig. 8, the shooting scene prompt 209 prompts a normal light scene.
S117: the image processing module sends the video data stream to the preview display module.
S118: the preview display module performs preview display according to the video data stream.
The previewed pictures or video may be updated in real time.
Steps S119 to S124: forming a video file and previewing
S119: the user performs a touch operation on the shooting control 202.
S120: in response to the touch operation on the shooting control 202, the shooting control module sends a notification to the encoding module and the HAL layer.
The notification causes the encoding module to receive the video data stream from the image processing module and encode it to form a video file.
In this embodiment, in response to the touch operation on the shooting control 202, the shooting control module may further notify the HAL layer to start recording the video, and the image processing module in the HAL layer may send the real-time video data stream to the encoding module.
S121: after receiving the notification, the encoding module receives the video data stream from the image processing module.
S122: the encoding module encodes the video data stream to form a video file.
S123: the image processing module sends the video data stream to the preview display module.
S124: the preview display module performs preview display according to the video data stream.
The previewed picture is updated in real time.
The user interface involved in forming the video file and previewing is described below. As shown in (B) of fig. 5, the state of the shooting control 202 is the not-started shooting state. As shown in (C) of fig. 5, in response to a touch operation on the shooting control 202 by the user, the state of the shooting control 202 changes from the not-started shooting state to the in-shooting state. When the shooting control 202 is in the in-shooting state, a control 210 for indicating the recording time can be displayed on the camera application interface 20 to update, in real time, how long the recording has lasted.
When the shooting control 202 is in the in-shooting state, the user may perform the touch operation on the shooting control 202 again. In response to this touch operation, the state of the shooting control 202 changes from the in-shooting state back to the not-started shooting state. After the electronic device 100 finishes recording the video in the time-lapse photography mode, the encoding module stops receiving the video data stream and encodes the received video data stream to form a video file. In response to the user's touch on the shooting control 202, the shooting control module can also notify the HAL layer that recording has ended, and the image processing module in the HAL layer can stop sending the video data stream to the encoding module.
The following describes a specific process of encoding by the encoding module in the embodiment of the present application.
In this embodiment of the application, the encoding module may set timestamps for a plurality of pictures in sequence, so that a video file composed of the plurality of pictures can be played at a set frame rate, for example, 20 pictures per second. In the embodiment of the application, the frame interval time when the video file is played is set according to the set playback frame rate.
Specifically, in the video recording mode, the electronic device performs frame extraction on the plurality of pictures to obtain frame-extracted pictures, and encodes the frame-extracted pictures with a set first frame interval time to obtain a video file; the frame interval time when the obtained video file is played is the first frame interval time. The first frame interval time is less than or equal to the first time interval, that is, less than or equal to the frame interval time at which the plurality of pictures were acquired.
In the photographing mode, the electronic device obtains the video file by encoding with a set second frame interval time, and the frame interval time when the obtained video file is played is the second frame interval time. The second frame interval time is less than the second time interval, that is, less than the frame interval time at which the plurality of pictures were collected.
For example, the unit of the timestamp is 1/8000 second, so 1 second corresponds to 8000 timestamp units. Following the previous example, the encoding module may set the playback frame rate to 20 pictures per second. The encoding module may then obtain the time difference between two adjacent pictures (i.e., the frame interval time during playback): the timestamp increment is 8000/20 = 400 timestamp units, that is, an interval of 1/20 second between two pictures. The encoding module can assign timestamps to the pictures it receives in sequence according to these timestamp units. Specifically, the encoding module receives the first picture and sets its timestamp to 0; it receives the second picture and sets its timestamp to 400 timestamp units; and so on, to obtain the video file. Each picture in the video file corresponds to a timestamp.
When the video file is played, the pictures in the video file are displayed sequentially in order of increasing timestamp. That is, the electronic device first displays the picture with timestamp 0, displays the picture with timestamp 400 after 1/20 second, and so on, thereby realizing playback of the video file.
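The timestamp arithmetic above can be sketched as follows. This is a minimal illustrative sketch, assuming a 1/8000-second timestamp unit and an integer playback frame rate; the function name is an assumption, not from the patent.

```python
# Illustrative sketch of the timestamp assignment described above:
# timestamp unit = 1/8000 second, playback frame rate = 20 pictures per
# second, so consecutive pictures differ by 8000 / 20 = 400 timestamp units.
TIMESTAMP_UNITS_PER_SECOND = 8000

def assign_timestamps(num_pictures: int, play_frame_rate: int) -> list[int]:
    """Return the timestamp (in 1/8000-second units) of each encoded picture."""
    increment = TIMESTAMP_UNITS_PER_SECOND // play_frame_rate  # 400 for 20 fps
    return [i * increment for i in range(num_pictures)]
```

For 5 pictures at 20 pictures per second this yields the timestamps 0, 400, 800, 1200, 1600, matching the worked example in the text.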
The following describes the process of forming and previewing a video file in the photographing mode and in the video recording mode respectively. In the embodiment of the application, the shooting control module determines that the shooting mode is the photographing mode in a dim light scene, and that the shooting mode is the video recording mode in a normal light scene or a backlight scene.
Referring to fig. 9, fig. 9 is a schematic flowchart of a video file acquisition and preview process according to an embodiment of the present application. This process corresponds to the photographing mode; it is executed after step S118 in the embodiment described in fig. 3, and may specifically be executed after step S120. For example, when the shooting scene is identified as a dim light scene, the electronic device acquires and previews the video file in the photographing mode.
The shooting control module stores the single-frame time interval, and the image acquisition module stores the exposure time. For detailed descriptions of the single-frame interval and the exposure time, refer to step S112 in the embodiment described in fig. 3. In the embodiment described in fig. 3, step S121 may be expanded into S121a (including S201 to S207) and S121b (including S210) of the embodiment shown in fig. 9; step S122 may include S208, step S123 may include S207, and step S124 may include S209.
S201: the shooting control module sends a shooting request (video request) to the image acquisition module.
The shooting request is used to request acquisition of a picture; the exposure time for the acquired picture is generated and stored in the HAL layer.
S202: a timer in the shooting control module times the frame extraction interval.
The frame extraction interval may be determined by the shooting control module or the scene recognition module according to the exposure time, and the exposure time may be determined by the scene recognition module according to the exposure value; for details, refer to step S112 in the embodiment described in fig. 3.
The camera application may set a frame extraction interval for each picture, within which the acquisition and processing of the picture are completed at the HAL layer.
The frame extraction interval may also be set by the user on the camera application interface; refer to the examples described in fig. 7 and fig. 8.
S203: the image acquisition module performs exposure according to the exposure time determined by the shooting control module to acquire a picture.
The exposure time may be determined by the scene recognition module according to the exposure value, and the image acquisition module obtains the exposure value from the scene recognition module; for details, refer to step S112 in the embodiment described in fig. 3.
S204: the image acquisition module sends the acquired picture to the image processing module.
S205: the image acquisition module sends the shooting control module a notification that a frame of picture has been captured.
After performing step S203, the image acquisition module may send the shooting control module a notification that a frame of picture has been captured. Having received this notification, the shooting control module issues the shooting request for the next frame of picture once the timing ends; see the description of step S210.
In the embodiment of the present application, step S205 may also be performed before step S204.
S206: the image processing module processes the picture using the video post-processing algorithm for the dim light scene.
For a description of the video post-processing algorithm, refer to the concepts section.
S207: the image processing module sends the processed picture to the encoding module and to the preview display module.
The encoding module may wait to receive a plurality of pictures and then perform encoding.
S208: the encoding module encodes the picture.
In the embodiment of the present application, step S208 is not limited to being executed before step S209; it may also be executed after step S209, which is not limited in the embodiment of the present application.
S209: the preview display module previews and displays the picture.
The preview display module can keep displaying the picture during the frame extraction interval until it receives the next frame of picture.
S210: when the shooting control module receives the notification within the timing period, it sends the shooting request for the next frame of picture to the image acquisition module.
If the timing has ended but the shooting control module has still not received the notification that a frame of picture has been captured, it waits until the notification is received and then executes step S210. In one possible implementation, if the notification has still not been received after a set time longer than the timing period, the shooting control module sends the shooting request for the next frame of picture anyway.
The HAL layer executes the acquisition and preview process for the next frame of picture according to the received shooting request; see steps S201 to S210.
Steps S201 to S210 are executed in a loop until the user touches the shooting control to end the recording, so that a plurality of pictures are obtained. The plurality of pictures can be arranged in sequence and encoded to obtain a video file.
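The capture loop of steps S201 to S210 can be sketched as follows. This is an illustrative sketch only: `capture_frame` and `process_frame` are hypothetical stand-ins for the HAL-layer image acquisition and image processing modules, and the timer logic is simplified to a sleep for the remainder of the interval.

```python
# Illustrative sketch of the capture loop in steps S201-S210: the shooting
# control module times the frame extraction interval, the HAL layer captures
# and processes a picture, and the next shooting request is only issued after
# both the timing and the notification are done.
import time

def timelapse_capture(frame_interval: float, num_frames: int,
                      capture_frame, process_frame) -> list:
    """Collect num_frames pictures, one per frame_interval seconds."""
    pictures = []
    for _ in range(num_frames):
        start = time.monotonic()              # S202: timer starts
        raw = capture_frame()                 # S201/S203: request + exposure
        pictures.append(process_frame(raw))   # S206: post-processing
        # S210: wait out the remainder of the frame extraction interval
        # before requesting the next frame.
        elapsed = time.monotonic() - start
        if elapsed < frame_interval:
            time.sleep(frame_interval - elapsed)
    return pictures
```

The key design point reproduced here is that the next request is gated on both conditions: the interval timer expiring and the previous frame having been captured and processed, which is why the interval must not be shorter than the single-frame processing time.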
Referring to fig. 10, fig. 10 is a schematic flowchart of a video file acquisition and preview process according to an embodiment of the present application. This process corresponds to the video recording mode; it is executed after step S118 in the embodiment described in fig. 3, or may be executed after step S120. For example, when the shooting scene is identified as a normal light scene or a backlight scene, the electronic device acquires and previews the video file in the video recording mode.
In the embodiment described in fig. 3, step S121 may be expanded into S121c (including S301 to S302) and S121d (including S305 to S308) of the embodiment shown in fig. 10; step S122 may include S309, step S123 may include S303, and step S124 may include S304.
S301: the shooting control module sends a shooting request to the image acquisition module.
The shooting request is used to request capture of a video; the frame rate for capturing the video may be preset, for example a standard frame rate of 24 pictures per second.
S302: the image acquisition module records at the preset frame rate to obtain a video.
In some embodiments of the present application, each time the image acquisition module captures a frame of the video, it reports a notification that a frame of picture has been captured to the shooting control module.
S303: the image acquisition module sends the video to the preview display module.
S304: the preview display module performs preview display according to the video.
The image acquisition module can send each picture to the preview display module for preview display as soon as it is captured, realizing real-time preview.
S305: the image acquisition module performs frame extraction on the video.
Optionally, the image acquisition module may instead send the video to the image processing module, and the image processing module performs frame extraction on the video according to a preset frame extraction interval.
The frame extraction interval may be set by the user on the camera application interface; refer to the examples described in fig. 7 and fig. 8.
S306: the image acquisition module sends the frame-extracted video to the image processing module.
S307: the image processing module processes the frame-extracted video using a video post-processing algorithm.
In the embodiment of the application, if the scene recognition module identifies the shooting scene as a normal light scene, the video post-processing algorithm corresponding to the normal light scene is used. If the scene recognition module identifies the shooting scene as a backlight scene, the video post-processing algorithm corresponding to the backlight scene is used, for example an HDR algorithm.
S308: the image processing module sends the processed video to the encoding module.
S309: the encoding module encodes the video.
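The frame extraction of step S305 can be sketched as follows: the stream is captured at a fixed frame rate (e.g. 24 pictures per second) and one picture is kept per frame extraction interval. The function name and the rounding of the step size are assumptions for illustration, not from the patent.

```python
# Illustrative sketch of frame extraction in the video recording mode:
# from a stream captured at capture_fps, keep one frame per
# extract_interval_s seconds of captured material.
def decimate(frames: list, capture_fps: int, extract_interval_s: float) -> list:
    """Keep one frame per extract_interval_s from a capture_fps stream."""
    step = max(1, round(capture_fps * extract_interval_s))
    return frames[::step]
```

For example, from 2 seconds of 24 fps video (48 frames) a 1-second extraction interval keeps 2 frames, which is what produces the time-lapse speed-up when the result is played back at the normal frame rate.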
In the embodiment of the application, the electronic device can recognize, according to the captured pictures, that the shooting scene has changed from a first shooting scene to a second shooting scene. The electronic device can determine a second shooting parameter, and optionally a second shooting mode, according to the second shooting scene, then capture a plurality of pictures according to the second shooting parameter and/or the second shooting mode, and encode the plurality of pictures to obtain the video file. Specifically, during the video recording process of steps S119 to S124, or after the camera application is started but before the shooting control is touched (i.e., before video shooting has begun), the scene recognition module may also perform scene recognition periodically. For example, through steps S101 to S124, the image acquisition module captures pictures or video according to the shooting parameters and shooting mode to form a video data stream. Before the video recording stops (i.e., before the user touches the shooting control 202 to switch back to the not-started shooting state), the scene recognition module may perform scene recognition periodically, for example every 10 minutes. If the scene recognition module detects that the shooting scene has changed, it reports the changed shooting scene to the shooting control module, and the electronic device 100 re-executes the shooting parameter and shooting mode adjustment process corresponding to steps S104 to S118.
For example, the first shooting scene is a dim light scene, the corresponding first shooting parameters are those of the dim light scene, and the first shooting mode is the dim light shooting mode (photographing mode). The second shooting scene is a normal light scene, the corresponding second shooting parameters are those of the normal light scene, and the second shooting mode is the normal light shooting mode (video recording mode). Through steps S101 to S124, the image acquisition module captures a plurality of pictures according to the shooting parameters and shooting mode corresponding to the dim light scene to form a video data stream. After step S121, when the scene recognition module recognizes that the shooting scene has changed from the dim light scene to the normal light scene, it reports the normal light scene to the shooting control module. The electronic device 100 re-executes the shooting parameter and shooting mode adjustment process corresponding to steps S104 to S118, adjusting the shooting parameters and shooting mode to those corresponding to the normal light scene. In addition, the image processing module adjusts the video post-processing algorithm according to the normal light scene to obtain the video data stream.
In this video acquisition method, if a change of shooting scene is detected during acquisition, the shooting parameters and shooting mode can be readjusted to improve the quality of the captured pictures and thus the quality of the acquired video.
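The periodic scene re-identification described above can be sketched as follows. This is an illustrative sketch under stated assumptions: `identify_scene` is a hypothetical stand-in for the scene recognition module, and the settings table merely reflects the scene-to-mode mapping given in the examples (dim light means photographing mode; normal light and backlight mean video recording mode).

```python
# Illustrative sketch of scene change handling: re-run scene identification
# on a recent picture and, if the scene changed, switch to the shooting
# mode and parameters of the new scene (steps S104 to S118 re-executed).
SCENE_SETTINGS = {
    "dim_light":    {"mode": "photographing",   "interval_s": 1.0},
    "normal_light": {"mode": "video_recording", "interval_s": 0.5},
    "backlight":    {"mode": "video_recording", "interval_s": 0.5},
}

def reconfigure_if_changed(current_scene: str, latest_picture, identify_scene):
    """Return the (possibly new) scene and its shooting settings."""
    new_scene = identify_scene(latest_picture)  # periodic recognition
    if new_scene != current_scene:
        # Scene changed: report to the shooting control module and
        # re-run the parameter/mode adjustment for the new scene.
        return new_scene, SCENE_SETTINGS[new_scene]
    return current_scene, SCENE_SETTINGS[current_scene]
```

In the dim-light-to-normal-light example above, this sketch would switch the mode from photographing to video recording and pick up the normal light scene's parameters.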
It can be understood that the embodiments of the present application take video shooting in the time-lapse photography mode as an example, but are not limited to that mode; the above video acquisition method can also be used in other video recording modes, which is not limited by the embodiments of the present application. In addition, the shooting scene identification provided by the embodiments of the present application for adjusting shooting parameters can also be applied to various shooting modes, which is likewise not limited.
In the above-described embodiments, all or part of the functions may be implemented by software, hardware, or a combination of software and hardware. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disks, or optical disks.

Claims (17)

1. A method of video capture, the method comprising:
the method comprises the steps that the electronic equipment displays a camera application interface, wherein a time-lapse photography mode icon is included on the camera application interface;
in response to a first user operation acting on the time-lapse photography mode icon, the electronic device capturing at least one picture;
the electronic equipment identifies and obtains a first shooting scene according to the at least one picture, wherein the first shooting scene comprises a backlight scene, a normal light scene or a dim light scene;
the electronic equipment determines a first shooting parameter according to the first shooting scene, wherein the first shooting parameter is related to exposure;
the electronic equipment acquires a plurality of pictures according to the first shooting parameter; coding the pictures to obtain a video file, wherein the frame interval time of the video file when being played is less than or equal to the frame interval time of the pictures when being collected;
After the electronic device identifies and obtains a first shooting scene according to the at least one picture, the method further includes:
the electronic equipment determines a first shooting mode according to the first shooting scene, wherein the first shooting mode comprises a video recording mode or a shooting mode;
the electronic equipment collects a plurality of pictures according to the first shooting parameter, and the method comprises the following steps:
and the electronic equipment acquires a plurality of pictures according to the first shooting parameter and the first shooting mode.
2. The method according to claim 1, wherein when the first photographing mode is the video recording mode, the frame interval time of the plurality of pictures is a first time interval;
the electronic device encodes the plurality of pictures to obtain a video file, and the method comprises the following steps:
the electronic equipment extracts pictures from the multiple pictures to obtain frame extraction pictures, the frame extraction pictures are coded through a set first frame interval time to obtain a video file, and the first frame interval time of the video file is smaller than or equal to the first time interval.
3. The method according to claim 2, wherein the first photographing parameter comprises an exposure time, and when the first photographing mode is the photographing mode, the frame interval time between which the plurality of pictures are collected is a second time interval; the second time interval is greater than the first time interval and is determined by the exposure time;
The electronic device encodes the multiple pictures to obtain a video file, and the method comprises the following steps:
and the electronic equipment obtains a video file by encoding through a set second frame interval time, wherein the second frame interval time is less than the second time interval.
4. The method of claim 3, further comprising:
and the electronic equipment displays a first control on a time-lapse photography interface, the first control is used for adjusting the second time interval within a value range which is larger than or equal to the exposure time, and the first shooting parameter comprises the second time interval.
5. The method according to any one of claims 1 to 4, wherein when the first captured scene includes the backlight scene or the normal light scene, the first capturing mode is the video recording mode;
and when the first shooting scene comprises the dim light scene, the first shooting mode is the shooting mode.
6. The method of any one of claims 1 to 4, wherein after the electronic device captures at least one picture and identifies a first captured scene from the at least one picture, the method further comprises:
The electronic equipment determines a first video post-processing algorithm according to the first shooting scene, wherein the first video post-processing algorithm corresponds to the first shooting scene;
before the electronic device encodes the multiple pictures to obtain a video file, the method further includes:
the electronic equipment processes the multiple pictures by using the first video post-processing algorithm to obtain processed multiple pictures;
the electronic device encodes the plurality of pictures to obtain a video file, and the method comprises the following steps:
and the electronic equipment encodes the processed multiple pictures to obtain a video file.
7. The method according to any one of claims 1 to 4, wherein the camera application interface further includes a shooting control, and the electronic device acquires a plurality of pictures according to the first shooting parameter and encodes the plurality of pictures to obtain a video file, including:
and responding to a second user operation acting on the shooting control, acquiring a plurality of pictures by the electronic equipment according to the first shooting parameters, and coding the pictures to obtain a video file.
8. The method of any one of claims 1 to 4, wherein after the electronic device captures at least one picture and identifies a first captured scene from the at least one picture, the method further comprises:
The electronic equipment identifies that the shooting scene is changed from the first shooting scene to a second shooting scene according to the acquired picture;
the electronic equipment determines a second shooting parameter according to the second shooting scene;
and the electronic equipment acquires a plurality of pictures according to the second shooting parameters and codes the pictures to obtain the video file.
9. An electronic device, characterized in that the electronic device comprises: one or more processors, memory, and a display screen;
the memory coupled with the one or more processors, the memory to store computer program code, the computer program code including computer instructions, the one or more processors to invoke the computer instructions to cause the electronic device to perform:
displaying a camera application interface, wherein a time-lapse photography mode icon is included on the camera application interface;
acquiring at least one picture in response to a first user operation acting on the time-lapse photography mode icon;
identifying and obtaining a first shooting scene according to the at least one picture, wherein the first shooting scene comprises a backlight scene, a normal light scene or a dim light scene;
Determining a first shooting parameter according to the first shooting scene, wherein the first shooting parameter is related to exposure;
collecting a plurality of pictures according to the first shooting parameter; coding the pictures to obtain a video file, wherein the frame interval time of the video file when being played is less than or equal to the frame interval time of the pictures when being collected;
the one or more processors are further to invoke the computer instructions to cause the electronic device to perform:
determining a first shooting mode according to the first shooting scene, wherein the first shooting mode comprises a video recording mode or a shooting mode;
the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
and acquiring a plurality of pictures according to the first shooting parameter and the first shooting mode.
10. The electronic device according to claim 9, wherein when the first photographing mode is the video recording mode, the frame interval time at which the plurality of pictures are collected is a first time interval;
the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
Extracting pictures from the multiple pictures to obtain frame extracting pictures, wherein the frame extracting pictures are coded through a set first frame interval time to obtain a video file, and the first frame interval time of the video file is less than or equal to the first time interval.
11. The electronic device according to claim 10, wherein the first photographing parameter includes an exposure time, and when the first photographing mode is the photographing mode, a frame interval time at which the plurality of pictures are captured is a second time interval; the second time interval is greater than the first time interval and is determined by the exposure time;
the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
and coding through a set second frame interval time to obtain the video file, wherein the second frame interval time is less than the second time interval.
12. The electronic device of claim 11, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
and displaying a first control on a time-lapse photography interface, wherein the first control is used for adjusting the second time interval within a value range which is larger than or equal to the exposure time, and the first shooting parameter comprises the second time interval.
13. The electronic device according to any one of claims 9 to 12, wherein when the first shooting scene includes the backlight scene or the normal light scene, the first shooting mode is the video recording mode;
and when the first shooting scene comprises the dim light scene, the first shooting mode is the shooting mode.
14. The electronic device of any of claims 9-12, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
determining a first video post-processing algorithm according to the first shooting scene, wherein the first video post-processing algorithm corresponds to the first shooting scene;
processing the multiple pictures by using the first video post-processing algorithm to obtain processed multiple pictures;
the one or more processors are specifically configured to invoke the computer instructions to cause the electronic device to perform:
and coding the processed multiple pictures to obtain a video file.
15. The electronic device of any of claims 9-12, wherein the camera application interface further comprises a capture control, the one or more processors being further configured to invoke the computer instructions to cause the electronic device to perform:
in response to a second user operation acting on the shooting control, collecting a plurality of pictures according to the first shooting parameter, and encoding the pictures to obtain a video file.
16. The electronic device of any of claims 9-12, wherein the one or more processors are further configured to invoke the computer instructions to cause the electronic device to perform:
recognizing, according to the acquired pictures, that the shooting scene has changed from the first shooting scene to a second shooting scene;
determining a second shooting parameter according to the second shooting scene;
and acquiring a plurality of pictures according to the second shooting parameter, and encoding the plurality of pictures to obtain the video file.
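Claim 16 describes re-detecting the scene during capture and re-deriving the shooting parameters when it changes. A minimal sketch of that loop, with every callable supplied by the caller — nothing here is the patent's actual implementation:

```python
def capture_timelapse(detect_scene, parameters_for, grab_picture, n_pictures):
    """Before each picture, re-detect the shooting scene; on a change
    (including the first iteration), re-derive the shooting parameters."""
    scene, params, pictures = None, None, []
    for _ in range(n_pictures):
        current = detect_scene()
        if current != scene:
            scene, params = current, parameters_for(current)
        pictures.append(grab_picture(params))
    return pictures
```

With stub callables, switching from a normal-light to a dim-light scene mid-capture re-parameterizes the remaining pictures automatically.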
17. A computer-readable storage medium comprising instructions that, when executed on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 8.
CN201910883504.5A 2019-09-18 2019-09-18 Video acquisition method and electronic equipment Active CN112532859B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910883504.5A CN112532859B (en) 2019-09-18 2019-09-18 Video acquisition method and electronic equipment
PCT/CN2020/115109 WO2021052292A1 (en) 2019-09-18 2020-09-14 Video acquisition method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910883504.5A CN112532859B (en) 2019-09-18 2019-09-18 Video acquisition method and electronic equipment

Publications (2)

Publication Number Publication Date
CN112532859A (en) 2021-03-19
CN112532859B (en) 2022-05-31

Family

ID=74884487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910883504.5A Active CN112532859B (en) 2019-09-18 2019-09-18 Video acquisition method and electronic equipment

Country Status (2)

Country Link
CN (1) CN112532859B (en)
WO (1) WO2021052292A1 (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113542591A (en) * 2021-06-02 2021-10-22 惠州Tcl移动通信有限公司 Time-lapse shooting processing method and device, mobile terminal and storage medium
CN115484423A (en) * 2021-06-16 2022-12-16 荣耀终端有限公司 Transition special effect adding method and electronic equipment
CN113992859A (en) * 2021-12-27 2022-01-28 云丁网络技术(北京)有限公司 Image quality improving method and device
CN113781388A (en) * 2021-07-20 2021-12-10 许继集团有限公司 Image enhancement-based power transmission line channel hidden danger image identification method and device
CN113810596B (en) * 2021-07-27 2023-01-31 荣耀终端有限公司 Time-delay shooting method and device
CN115633250A (en) * 2021-07-31 2023-01-20 荣耀终端有限公司 Image processing method and electronic equipment
CN113705584A (en) * 2021-08-24 2021-11-26 上海名图软件有限公司 Object difference light variation detection system, detection method and application thereof
CN115866391A (en) * 2021-09-23 2023-03-28 北京字跳网络技术有限公司 Video generation method, device, equipment and storage medium
CN115086567B (en) * 2021-09-28 2023-05-19 荣耀终端有限公司 Time delay photographing method and device
CN114051095A (en) * 2021-11-12 2022-02-15 苏州臻迪智能科技有限公司 Remote processing method of video stream data and shooting system
CN116546313A (en) * 2022-01-25 2023-08-04 华为技术有限公司 Shooting restoration method and electronic equipment
CN116723382B (en) * 2022-02-28 2024-05-03 荣耀终端有限公司 Shooting method and related equipment
CN116723417B (en) * 2022-02-28 2024-04-26 荣耀终端有限公司 Image processing method and electronic equipment
CN116701288A (en) * 2022-02-28 2023-09-05 荣耀终端有限公司 Streaming media characteristic architecture, processing method, electronic device and readable storage medium
CN114827342B (en) * 2022-03-15 2023-06-06 荣耀终端有限公司 Video processing method, electronic device and readable medium
CN116055738B (en) * 2022-05-30 2023-10-20 荣耀终端有限公司 Video compression method and electronic equipment
CN115278078A (en) * 2022-07-27 2022-11-01 深圳市天和荣科技有限公司 Shooting method, terminal and shooting system
CN116055897B (en) * 2022-08-25 2024-02-27 荣耀终端有限公司 Photographing method and related equipment thereof
CN116668580B (en) * 2022-10-26 2024-04-19 荣耀终端有限公司 Scene recognition method, electronic device and readable storage medium
CN116347224B (en) * 2022-10-31 2023-11-21 荣耀终端有限公司 Shooting frame rate control method, electronic device, chip system and readable storage medium
CN116668866B (en) * 2022-11-21 2024-04-19 荣耀终端有限公司 Image processing method and electronic equipment
CN116708753B (en) * 2022-12-19 2024-04-12 荣耀终端有限公司 Method, device and storage medium for determining preview blocking reason
CN115802144B (en) * 2023-01-04 2023-09-05 荣耀终端有限公司 Video shooting method and related equipment
CN117119291A (en) * 2023-02-06 2023-11-24 荣耀终端有限公司 Picture mode switching method and electronic equipment
CN117135299A (en) * 2023-04-27 2023-11-28 荣耀终端有限公司 Video recording method and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103841323A (en) * 2014-02-20 2014-06-04 小米科技有限责任公司 Shooting parameter allocation method and device and terminal device
CN104079835A (en) * 2014-07-02 2014-10-01 深圳市中兴移动通信有限公司 Method and device for shooting nebula videos
CN105100632A (en) * 2014-05-13 2015-11-25 北京展讯高科通信技术有限公司 Adjusting method and apparatus for automatic exposure of imaging device, and imaging device
CN109743508A (en) * 2019-01-08 2019-05-10 深圳市阿力为科技有限公司 A kind of time-lapse photography device and method
CN110086985A (en) * 2019-03-25 2019-08-02 华为技术有限公司 A kind of method for recording and electronic equipment of time-lapse photography

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9247098B2 (en) * 2013-04-09 2016-01-26 Here Global B.V. Automatic time lapse capture
US9489979B2 (en) * 2013-08-06 2016-11-08 Casio Computer Co., Ltd. Image processing apparatus for time-lapse moving image, image processing method, and storage medium
JP6643109B2 (en) * 2016-01-26 2020-02-12 キヤノン株式会社 Imaging device, control method thereof, and program
JP6887245B2 (en) * 2016-12-13 2021-06-16 キヤノン株式会社 Imaging device and its control method, program, storage medium
US10771712B2 (en) * 2017-09-25 2020-09-08 Gopro, Inc. Optimized exposure temporal smoothing for time-lapse mode
CN108270966A (en) * 2017-12-27 2018-07-10 努比亚技术有限公司 A kind of method, mobile terminal and storage medium for adjusting light filling brightness
CN110012210B (en) * 2018-01-05 2020-09-22 Oppo广东移动通信有限公司 Photographing method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
WO2021052292A1 (en) 2021-03-25
CN112532859A (en) 2021-03-19

Similar Documents

Publication Publication Date Title
CN112532859B (en) Video acquisition method and electronic equipment
CN112532857B (en) Shooting method and equipment for delayed photography
CN109951633B (en) Method for shooting moon and electronic equipment
CN113810600B (en) Terminal image processing method and device and terminal equipment
CN113810601B (en) Terminal image processing method and device and terminal equipment
CN112492193B (en) Method and equipment for processing callback stream
CN112532892B (en) Image processing method and electronic device
CN113891009B (en) Exposure adjusting method and related equipment
CN113542580B (en) Method and device for removing light spots of glasses and electronic equipment
CN113572948B (en) Video processing method and video processing device
CN114095666A (en) Photographing method, electronic device and computer-readable storage medium
CN112181616A (en) Task processing method and related device
CN114863494A (en) Screen brightness adjusting method and device and terminal equipment
CN115567630A (en) Management method of electronic equipment, electronic equipment and readable storage medium
CN112188094B (en) Image processing method and device, computer readable medium and terminal equipment
CN113852755A (en) Photographing method, photographing apparatus, computer-readable storage medium, and program product
CN112532508B (en) Video communication method and video communication device
CN116055859B (en) Image processing method and electronic device
CN113923372B (en) Exposure adjusting method and related equipment
CN115706869A (en) Terminal image processing method and device and terminal equipment
CN112422814A (en) Shooting method and electronic equipment
CN115297269B (en) Exposure parameter determination method and electronic equipment
CN117714835A (en) Image processing method, electronic equipment and readable storage medium
CN117407094A (en) Display method, electronic equipment and system
CN117119314A (en) Image processing method and related electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant