CN109510941B - Shooting processing method and device and computer readable storage medium - Google Patents

Shooting processing method and device and computer readable storage medium

Info

Publication number
CN109510941B
CN109510941B
Authority
CN
China
Prior art keywords
shooting
image
frame data
value
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811508673.2A
Other languages
Chinese (zh)
Other versions
CN109510941A (en)
Inventor
陈亚南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN201811508673.2A
Publication of CN109510941A
Application granted
Publication of CN109510941B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a shooting processing method, a shooting processing device, and a computer-readable storage medium. The method comprises the following steps: detecting shooting setting parameters and shooting environment parameters; if the shooting setting parameters comprise a large aperture value parameter and the shooting environment parameters comprise a low illumination value parameter, identifying a shooting object in a motion state; acquiring image frames in a shooting process, determining target frame data in the image frames if the shooting object is the shooting subject, and removing image data of the shooting subject from the target frame data; and finally, fusing the image frames after the removal to obtain a shot image. A user-friendly shooting processing scheme is thus realized: the terminal device captures a clear image of the shooting subject in a low-illumination, large-aperture environment, the blurred imaging that such an environment tends to cause is avoided, and the user experience is improved.

Description

Shooting processing method and device and computer readable storage medium
Technical Field
The present invention relates to the field of mobile communications, and in particular, to a shooting processing method and device, and a computer-readable storage medium.
Background
In the prior art, with the rapid development of intelligent terminal devices, users' shooting demands on such devices keep increasing; shooting quality, however, is limited by the hardware of the device's shooting assembly, and images are easily blurred in dark shooting scenes.
To address this technical problem, the conventional approach is to shoot with a large aperture when the ambient illumination is detected to be low.
However, when a terminal device shoots with a large aperture, the exposure time is usually prolonged, and with a prolonged exposure time even a slight shake during shooting can seriously degrade the shooting result.
Meanwhile, when a moving object is present in the viewfinder, the captured picture may contain blur, shadows, or ghosting caused by the object's movement; this problem is particularly serious when shooting with a large aperture.
Disclosure of Invention
In order to solve the technical defects in the prior art, the invention provides a shooting processing method, which comprises the following steps:
detecting shooting setting parameters and shooting environment parameters;
if the shooting setting parameters comprise large aperture value parameters and the shooting environment parameters comprise low illumination value parameters, identifying a shooting object in a motion state;
acquiring an image frame in a shooting process, determining target frame data in the image frame if the shooting object is a shooting subject, and eliminating image data of the shooting subject from the target frame data;
and fusing the image frames after the removal to obtain a shot image.
Optionally, the detecting the shooting setting parameter and the shooting environment parameter includes:
determining a shooting component enabled by the terminal equipment;
detecting a photographing setting parameter of the photographing assembly, and sensing the photographing environment parameter through the photographing assembly.
Optionally, if the shooting setting parameter includes a large aperture value parameter and the shooting environment parameter includes a low illumination value parameter, identifying the shooting object in a motion state, including:
analyzing the shooting environment parameters, extracting an illumination value in the shooting environment parameters, determining the current illumination value as a low illumination value parameter if the illumination value is below a preset illumination value, and simultaneously acquiring an aperture value in the shooting setting parameters;
if the illumination value is below a preset illumination value, increasing the aperture value, and if the increased aperture value is above the preset aperture value, determining the current aperture value as a large aperture value parameter;
in the shooting preview data, a moving image area in a moving state is identified, and a shooting subject is determined within the moving image area.
Optionally, the acquiring an image frame in a shooting process, if the shooting object is a shooting subject, determining target frame data in the image frame, and removing image data of the shooting subject from the target frame data includes:
acquiring all image frames in the shooting process;
if the shooting object is a shooting subject, determining frame data of a first target quantity in all the image frames, wherein the frame data of the first target quantity is positioned at the rear section of all the image frames;
and eliminating the image data in the moving image area from the first target number of frame data.
Optionally, the acquiring an image frame in a shooting process, if the shooting object is a shooting subject, determining target frame data in the image frame, and removing image data of the shooting subject from the target frame data, further includes:
acquiring all image frames in the shooting process;
if the shooting object is a non-subject object, determining frame data of a second target quantity in all the image frames, wherein the frame data of the second target quantity are in preset segments of all the image frames;
and eliminating the image data in the moving image area from the second target number of frame data.
The present invention also proposes a shooting processing apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing:
detecting shooting setting parameters and shooting environment parameters;
if the shooting setting parameters comprise large aperture value parameters and the shooting environment parameters comprise low illumination value parameters, identifying a shooting object in a motion state;
acquiring an image frame in a shooting process, determining target frame data in the image frame if the shooting object is a shooting subject, and eliminating image data of the shooting subject from the target frame data;
and fusing the image frames after the removal to obtain a shot image.
Optionally, the computer program when executed by the processor implements:
determining a shooting component enabled by the terminal equipment;
detecting a photographing setting parameter of the photographing assembly, and sensing the photographing environment parameter through the photographing assembly.
Optionally, the computer program when executed by the processor implements:
analyzing the shooting environment parameters, extracting an illumination value in the shooting environment parameters, determining the current illumination value as a low illumination value parameter if the illumination value is below a preset illumination value, and simultaneously acquiring an aperture value in the shooting setting parameters;
if the illumination value is below a preset illumination value, increasing the aperture value, and if the increased aperture value is above the preset aperture value, determining the current aperture value as a large aperture value parameter;
in the shooting preview data, a moving image area in a moving state is identified, and a shooting subject is determined within the moving image area.
Optionally, the computer program when executed by the processor implements:
acquiring all image frames in the shooting process;
if the shooting object is a shooting subject, determining frame data of a first target quantity in all the image frames, wherein the frame data of the first target quantity is positioned at the rear section of all the image frames;
eliminating image data in the moving image area from the frame data of the first target number;
if the shooting object is a non-subject object, determining frame data of a second target quantity in all the image frames, wherein the frame data of the second target quantity are in preset segments of all the image frames;
and eliminating the image data in the moving image area from the second target number of frame data.
The present invention also proposes a computer-readable storage medium having stored thereon a shooting processing program which, when executed by a processor, implements the steps of the shooting processing method as set forth in any one of the above.
The shooting processing method, device, and computer-readable storage medium of the invention work by detecting shooting setting parameters and shooting environment parameters; then, if the shooting setting parameters comprise a large aperture value parameter and the shooting environment parameters comprise a low illumination value parameter, identifying a shooting object in a motion state; then, acquiring image frames in the shooting process, determining target frame data in the image frames if the shooting object is the shooting subject, and removing image data of the shooting subject from the target frame data; and finally, fusing the image frames after the removal to obtain a shot image. A user-friendly shooting processing scheme is thus realized: the terminal device captures a clear image of the shooting subject in a low-illumination, large-aperture environment, the blurred imaging that such an environment tends to cause is avoided, and the user experience is improved.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a schematic diagram of a hardware structure of a mobile terminal according to the present invention;
fig. 2 is a communication network system architecture diagram provided by an embodiment of the present invention;
fig. 3 is a flowchart of a first embodiment of a photographing processing method of the present invention;
fig. 4 is a flowchart of a second embodiment of a photographing processing method of the present invention;
fig. 5 is a flowchart of a third embodiment of the shooting processing method of the present invention;
fig. 6 is a flowchart of a fourth embodiment of the photographing processing method of the present invention;
fig. 7 is a flowchart of a fifth embodiment of the photographing processing method of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description takes a mobile terminal as an example; those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed terminals.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call; specifically, it receives downlink information from a base station and delivers it to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplex Long Term Evolution), and TDD-LTE (Time Division Duplex Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing wireless broadband internet access. Although fig. 1 shows the WiFi module 102, it is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, collects touch operations performed by the user on or near it (e.g., operations performed with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connection device according to a preset program. The touch panel 1071 may include a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends these to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, a switch key), a trackball, a mouse, a joystick, and the like.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.). Further, the memory 109 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to the various components; preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so that charging, discharging, and power consumption are managed through the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention. The communication network system is an LTE system of universal mobile telecommunications technology, comprising a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203 and provides bearer and connection management. The HSS2032 provides registers for managing functions such as the home location register (not shown) and holds subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW2034; the PGW2035 may provide IP address assignment for the UE201 and other functions; and the PCRF2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
Example one
Fig. 3 is a flowchart of a first embodiment of the shooting processing method of the present invention. The shooting processing method comprises:
s1, detecting shooting setting parameters and shooting environment parameters;
s2, if the shooting setting parameters comprise large aperture value parameters and the shooting environment parameters comprise low illumination value parameters, identifying the shooting object in a motion state;
s3, acquiring an image frame in the shooting process, determining target frame data in the image frame if the shooting object is a shooting subject, and removing image data of the shooting subject from the target frame data;
and S4, fusing the image frames after the removal to obtain a shot image.
In this embodiment, first, shooting setting parameters and shooting environment parameters are detected; then, if the shooting setting parameters comprise a large aperture value parameter and the shooting environment parameters comprise a low illumination value parameter, a shooting object in a motion state is identified; then, image frames of the shooting process are acquired, target frame data is determined in the image frames if the shooting object is the shooting subject, and image data of the shooting subject is removed from the target frame data; and finally, the image frames after the removal are fused to obtain a shot image.
Specifically, in the present embodiment, the shooting setting parameters and the shooting environment parameters are detected first. It can be understood that the terminal device of this embodiment is provided with one or more shooting components for implementing the shooting processing scheme of this embodiment, or the terminal device may connect to and control other external shooting components in a wired or wireless manner to the same end. Here, the shooting setting parameters are setting parameters related to the operating mode of the shooting component, for example the aperture value, shutter value, exposure value, and sensitivity value used in shooting; the shooting environment parameters are environment parameters related to the state of the environment in which the shooting object is located, for example the illuminance value, the illuminance variation value, and a preset illuminance value.
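By way of illustration only (not part of the claimed method), the two parameter groups can be made concrete as follows. This is a minimal Python sketch; the container fields and the preset thresholds are assumptions, not values taken from the patent. Note also that the patent treats a larger "aperture value" as a wider aperture, which is the opposite of the photographic f-number convention.

```python
from dataclasses import dataclass

@dataclass
class ShootingSettings:
    aperture_value: float   # patent convention: larger value = wider aperture
    shutter_value: float    # seconds
    exposure_value: float
    sensitivity_value: int  # ISO

@dataclass
class ShootingEnvironment:
    illuminance: float         # current illuminance reported by the sensor (lux)
    illuminance_change: float  # recent variation of the illuminance
    preset_illuminance: float  # threshold below which the scene counts as dark

def has_low_light_and_large_aperture(settings: ShootingSettings,
                                     env: ShootingEnvironment,
                                     preset_aperture: float) -> bool:
    """True when the condition of step S2 holds: the environment parameters
    include a low illumination value and the setting parameters include a
    large aperture value (preset_aperture is an assumed threshold)."""
    low_light = env.illuminance < env.preset_illuminance
    large_aperture = settings.aperture_value > preset_aperture
    return low_light and large_aperture
```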
Specifically, in this embodiment, after the shooting setting parameters and the shooting environment parameters are detected, if it is determined that the shooting setting parameters include a large aperture value parameter and the shooting environment parameters include a low illumination value parameter, the shooting object in a motion state is identified. To overcome the prior-art defect that opening a large aperture in a dark environment easily causes imaging blur, this embodiment, upon detecting a dark-light shooting environment with large-aperture shooting enabled at the same time, identifies the shooting object in a moving state: the lowest motion threshold that can cause imaging blur is determined according to the current illumination value and aperture value, and the shooting object in the moving state is then determined in the shooting preview image data according to that threshold.
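The patent does not fix a motion-detection algorithm, so the following sketch uses simple inter-frame differencing over preview frames (OpenCV); lowest_motion_threshold() is a hypothetical mapping from the current illuminance and aperture value to the lowest motion threshold mentioned above.

```python
import cv2
import numpy as np

def lowest_motion_threshold(illuminance: float, aperture_value: float) -> float:
    # Hypothetical heuristic: darker scenes and wider apertures imply longer
    # exposure, so even slower motion blurs, hence a lower threshold.
    return max(2.0, 0.05 * illuminance / max(aperture_value, 1e-6))

def find_moving_areas(prev_preview: np.ndarray, cur_preview: np.ndarray,
                      threshold: float) -> list[tuple[int, int, int, int]]:
    """Return bounding boxes (x, y, w, h) of preview areas whose inter-frame
    difference exceeds the motion threshold."""
    prev_gray = cv2.cvtColor(prev_preview, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_preview, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, cur_gray)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```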
Specifically, in this embodiment, after the moving object is identified, the image frames produced during shooting are acquired; if the moving object is the shooting subject, target frame data is determined among the image frames, and the image data of the shooting subject is removed from the target frame data. As in the example above, if the object in motion is determined to be the shooting subject, all image frames of one shooting process are acquired first, and the target frame data is then selected from them; the selection manner may be determined according to the shooting conditions and requirements, for example by selecting the later (rear-section) frames of all the image frames as the target frame data. Finally, the image data of the shooting subject is removed from the target frame data.
Specifically, in this embodiment, the image frames after the removal are fused to obtain the shot image. For example, all image frames acquired during shooting are listed as List_P; if the moving object is the subject, the image data of the subject in the first K1 frames of List_P is discarded, or equivalently the subject is kept only in the last K2 frames of List_P; the updated frame list is recorded as List_P'; finally, the multiple frames in List_P' are fused to generate the shot image, which is displayed on the display screen of the terminal device.
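A minimal sketch of the List_P to List_P' fusion just described, assuming plain per-pixel averaging as the fusion operator (the patent does not specify one); subject_mask is a hypothetical binary mask of the subject's moving image area.

```python
import numpy as np

def fuse_list_p(frames: list[np.ndarray], subject_mask: np.ndarray,
                k1: int) -> np.ndarray:
    """Drop the subject's pixels from the first k1 frames of List_P
    (equivalently, keep the subject only in the remaining rear frames),
    then fuse all frames by a masked per-pixel mean."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    weight = np.zeros(frames[0].shape[:2], dtype=np.float64)
    for i, frame in enumerate(frames):
        w = np.ones(frame.shape[:2], dtype=np.float64)
        if i < k1:
            w[subject_mask > 0] = 0.0  # discard subject data in early frames
        acc += frame.astype(np.float64) * w[..., None]
        weight += w
    weight = np.maximum(weight, 1e-6)  # guard pixels excluded everywhere
    return (acc / weight[..., None]).astype(np.uint8)
```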
The method has the advantage that shooting setting parameters and shooting environment parameters are detected; then, if the shooting setting parameters comprise a large aperture value parameter and the shooting environment parameters comprise a low illumination value parameter, a shooting object in a motion state is identified; then, image frames of the shooting process are acquired, target frame data is determined in the image frames if the shooting object is the shooting subject, and image data of the shooting subject is removed from the target frame data; and finally, the image frames after the removal are fused to obtain a shot image. A user-friendly shooting processing scheme is thus realized: the terminal device captures a clear image of the shooting subject in a low-illumination, large-aperture environment, the blurred imaging that such an environment tends to cause is avoided, and the user experience is improved.
Example two
Fig. 4 is a flowchart of a second embodiment of the shooting processing method according to the present invention, and based on the above embodiment, the detecting shooting setting parameters and shooting environment parameters includes:
s11, determining a shooting component enabled by the terminal equipment;
s12, detecting the shooting setting parameters of the shooting component, and sensing the shooting environment parameters through the shooting component.
In the embodiment, firstly, a shooting component enabled by the terminal equipment is determined; then, a photographing setting parameter of the photographing assembly is detected, and the photographing environment parameter is sensed by the photographing assembly.
Specifically, in the present embodiment, the shooting setting parameters and the shooting environment parameters are detected first. It can be understood that the terminal device of this embodiment is provided with one or more shooting components for implementing the shooting processing scheme of this embodiment, or the terminal device may connect to and control other external shooting components in a wired or wireless manner to the same end. Here, the shooting setting parameters are setting parameters related to the operating mode of the shooting component, for example the aperture value, shutter value, exposure value, and sensitivity value used in shooting; the shooting environment parameters are environment parameters related to the state of the environment in which the shooting object is located, for example the illuminance value, the illuminance variation value, and a preset illuminance value.
Alternatively, a shooting component of the terminal device is detected and identified. It can be understood that the terminal device may have multiple groups of shooting components: for example, a first side of the terminal device is provided with shooting components P1 and P2, and a second side is provided with shooting components P3, P4, and P5. Specifically, the shooting components enabled by the terminal device are determined, for example one of P1 to P5 enabled individually, or two or more of them enabled simultaneously;
optionally, the shooting setting parameters of the shooting assembly are detected, and the shooting environment parameters are sensed by the shooting assembly. Also, as described in the above example, after one set of the photographing component P1, the photographing component P2, the photographing component P3, the photographing component P4, and the photographing component P5 is individually activated, photographing environment parameters are sensed by the activated photographing component;
optionally, when two or more of the shooting components P1 to P5 are enabled at the same time, the shooting component specially configured for sensing environment parameters is used to sense the shooting environment parameters required by this embodiment.
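As an illustrative sketch of this selection logic (component names P1 to P5 follow the example above; the registry and the notion of a dedicated metering component are assumptions):

```python
ENABLED = {"P1": True, "P2": False, "P3": True, "P4": False, "P5": False}
ENV_SENSORS = {"P3"}  # assumption: P3 is specially configured for metering

def pick_environment_sensor() -> str:
    """When one component is enabled, it senses the environment itself;
    when several are enabled, prefer the dedicated metering component."""
    enabled = [name for name, on in ENABLED.items() if on]
    if len(enabled) == 1:
        return enabled[0]
    for name in enabled:
        if name in ENV_SENSORS:
            return name
    return enabled[0]  # fall back to any enabled component
```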
The method has the advantage that the shooting component enabled by the terminal device is determined; then, the shooting setting parameters of the shooting component are detected, and the shooting environment parameters are sensed through the shooting component. An even more user-friendly shooting processing scheme is thus realized: the terminal device captures a clear image of the shooting subject in a low-illumination, large-aperture environment, the blurred imaging that such an environment tends to cause is avoided, and the user experience is improved.
Example three
Fig. 5 is a flowchart of a third embodiment of the shooting processing method according to the present invention. Based on the above embodiments, if the shooting setting parameters include a large aperture value parameter and the shooting environment parameters include a low illumination value parameter, the identifying of the shooting object in a motion state includes:
s21, analyzing the shooting environment parameters, extracting an illumination value in the shooting environment parameters, if the illumination value is below a preset illumination value, determining the current illumination value as a low illumination value parameter, and meanwhile, acquiring an aperture value in the shooting setting parameters;
s22, if the illumination value is below a preset illumination value, increasing the aperture value, and if the increased aperture value is above the preset aperture value, determining the current aperture value as a large aperture value parameter;
s23, in the shooting preview data, identifying a moving image area in a moving state, and determining a shooting subject within the moving image area.
In this embodiment, first, the shooting environment parameters are analyzed, an illuminance value in the shooting environment parameters is extracted, if the illuminance value is below a preset illuminance value, the current illuminance value is determined to be a low illuminance value parameter, and meanwhile, an aperture value in the shooting setting parameters is obtained; then, if the illumination value is below a preset illumination value, increasing the aperture value, and if the increased aperture value is above the preset aperture value, determining the current aperture value as a large aperture value parameter; finally, in the shooting preview data, a moving image area in a moving state is identified, and a shooting subject is specified within the moving image area.
Specifically, in this embodiment, after detecting the shooting setting parameter and the shooting environment parameter, if it is determined that the shooting setting parameter includes the large aperture value parameter and the shooting environment parameter includes the low illumination value parameter, the shooting object in the motion state is identified. In the embodiment, in order to solve the defect that imaging blur is easily caused by opening a large-aperture shooting in a dark environment in the prior art, in the embodiment, a shooting object in a moving state is firstly detected in a dark light shooting environment and is identified when the large-aperture shooting is simultaneously opened, wherein a lowest motion threshold value which can cause imaging blur is determined according to a current illumination value and an aperture value, and then the shooting object in the moving state is determined according to the determined lowest motion threshold value in the shot preview image data.
Optionally, in a manual or semi-automatic shooting mode, analyzing the shooting environment parameters, extracting an illumination value in the shooting environment parameters, if the illumination value is below a preset illumination value, determining that the current illumination value is a low illumination value parameter, and meanwhile, acquiring an aperture value in the shooting setting parameters;
optionally, in a manual or semi-automatic shooting mode, analyzing the shooting setting parameters, extracting an aperture value in the shooting setting parameters, if the aperture value is above a preset aperture value, determining that the current aperture value is a large aperture value parameter, and meanwhile, obtaining an illuminance value in the shooting environment parameters;
optionally, in the automatic shooting mode, the shooting setting parameters and the shooting environment parameters are detected at the same time; that is, whether the shooting environment parameters indicate a low illumination value and whether the shooting setting parameters indicate a large aperture value are determined simultaneously;
optionally, the aperture parameter or the illuminance parameter is determined preferentially according to the shooting mode;
optionally, if the illuminance parameter is changing in real time, the aperture parameter is determined preferentially;
alternatively, in the shooting preview data, one or more moving image areas in a moving state are identified, and one or more shooting subjects are determined within the one or more moving image areas.
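Putting steps S21 and S22 together, the decision order reads as in the sketch below; raise_aperture() is a hypothetical stand-in for the driver call that actually widens the aperture, and the patent's convention that the aperture value grows as the aperture widens is kept.

```python
def classify_low_light_large_aperture(illuminance: float,
                                      preset_illuminance: float,
                                      aperture_value: float,
                                      preset_aperture: float) -> tuple[bool, bool]:
    """S21: decide the low-illumination parameter; S22: widen the aperture
    when the scene is dark, then decide the large-aperture parameter."""
    low_light = illuminance < preset_illuminance
    if low_light:
        aperture_value = raise_aperture(aperture_value)
    large_aperture = aperture_value > preset_aperture
    return low_light, large_aperture

def raise_aperture(aperture_value: float, step: float = 0.5) -> float:
    # Assumption: a fixed widening step; a real driver would clamp this to
    # the shooting component's maximum aperture.
    return aperture_value + step
```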
The method has the advantage that the shooting environment parameters are analyzed and the illuminance value in them is extracted; if the illuminance value is below a preset illuminance value, the current illuminance value is determined to be a low illumination value parameter, and the aperture value in the shooting setting parameters is acquired at the same time; then, if the illuminance value is below the preset illuminance value, the aperture value is increased, and if the increased aperture value is above the preset aperture value, the current aperture value is determined to be a large aperture value parameter; finally, in the shooting preview data, a moving image area in a moving state is identified, and a shooting subject is determined within the moving image area. An even more user-friendly shooting processing scheme is thus realized: the terminal device captures a clear image of the shooting subject in a low-illumination, large-aperture environment, the blurred imaging that such an environment tends to cause is avoided, and the user experience is improved.
Example four
Fig. 6 is a flowchart of a fourth embodiment of the shooting processing method according to the present invention, based on the above embodiment, where the acquiring an image frame in a shooting process, if the shooting object is a shooting subject, determining target frame data in the image frame, and removing image data of the shooting subject from the target frame data includes:
s31, acquiring all image frames in the shooting process;
s32, if the shooting object is a shooting subject, determining frame data of a first target number in all the image frames, wherein the frame data of the first target number are located at the rear section of all the image frames;
s33, removing the image data in the moving image area from the first target number of frame data.
In the present embodiment, first, all image frames in the shooting process are acquired; then, if the shooting object is a shooting subject, determining frame data of a first target quantity in all the image frames, wherein the frame data of the first target quantity is positioned at the rear section of all the image frames; and finally, eliminating the image data in the moving image area from the frame data of the first target number.
Specifically, in this embodiment, after the moving object is identified, the image frames produced during shooting are acquired; if the moving object is the shooting subject, target frame data is determined among the image frames, and the image data of the shooting subject is removed from the target frame data. As in the example above, if the object in motion is determined to be the shooting subject, all image frames of one shooting process are acquired first, and the target frame data is then selected from them; the selection manner may be determined according to the shooting conditions and requirements, for example by selecting the later (rear-section) frames of all the image frames as the target frame data. Finally, the image data of the shooting subject is removed from the target frame data.
Optionally, if the shooting object is the shooting subject, a first target number of frame data is determined among all the image frames, the first target number of frame data lying at the very end of all the image frames (for example, of 20 image frames in total, the last three: frames 18, 19, and 20);
optionally, if the shooting object is the shooting subject, a first target number of frame data is determined among all the image frames, the first target number of frame data lying in a later stage of all the image frames (for example, of 20 image frames in total, frames 16, 17, and 18);
optionally, if the shooting object is a shooting subject, determining a first target number of frame data in all the image frames, where the first target number of frame data is in a continuous segment of all the image frames;
optionally, if the shooting object is a shooting subject, determining frame data of a first target number in all the image frames, where the frame data of the first target number are located in discontinuous segments of all the image frames;
optionally, the image data in the one or more moving image regions is removed from the first target number of frame data.
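A minimal sketch of the first-target selection and removal just described; zeroing the region stands in for "removing", on the assumption that the fusion step treats zero-weight pixels as absent.

```python
import numpy as np

def rear_section_indices(total_frames: int, first_target: int) -> list[int]:
    """0-based indices of the first-target-number frames at the very end,
    e.g. indices 17-19 (frames 18-20 of 20) when first_target == 3,
    matching the example above."""
    return list(range(total_frames - first_target, total_frames))

def remove_moving_area(frames: list[np.ndarray], indices: list[int],
                       area: tuple[int, int, int, int]) -> list[np.ndarray]:
    """Remove the moving image area (x, y, w, h) from the selected frames."""
    x, y, w, h = area
    for i in indices:
        frames[i][y:y + h, x:x + w] = 0
    return frames
```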
The method has the advantage that all image frames of the shooting process are acquired; then, if the shooting object is the shooting subject, a first target number of frame data is determined among all the image frames, the first target number of frame data lying in the rear section of all the image frames; and finally, the image data within the moving image area is removed from the first target number of frame data. An even more user-friendly shooting processing scheme is thus realized: the terminal device captures a clear image of the shooting subject in a low-illumination, large-aperture environment, the blurred imaging that such an environment tends to cause is avoided, and the user experience is improved.
Example five
Fig. 7 is a flowchart of a fifth embodiment of the shooting processing method according to the present invention, based on the above embodiment, where the acquiring an image frame in a shooting process, if the shooting object is a shooting subject, determining target frame data in the image frame, and removing image data of the shooting subject from the target frame data further includes:
s31', all image frames in the shooting process are obtained;
s32', if the photographic object is a non-subject object, determining a second target amount of frame data in all the image frames, wherein the second target amount of frame data is in a preset segment of all the image frames;
s33', the image data within the moving image area is removed from the second target number of frame data.
In the present embodiment, first, all image frames in the shooting process are acquired; then, if the shooting object is a non-subject object, determining frame data of a second target number in all the image frames, wherein the frame data of the second target number are in a preset segment of all the image frames; and finally, eliminating the image data in the moving image area from the frame data of the second target number.
Optionally, if the shooting object is a non-subject object, determining frame data of a second target number in all the image frames, where the frame data of the second target number is in a preset segment of all the image frames;
optionally, if the shooting object is a non-subject object, a second target number of frame data is determined among all the image frames, where the second target number of frame data comprises all the image frames;
optionally, the image data in the moving image area is removed from the frame data of the second target number;
optionally, the image data in the moving image area is removed from all the image frames.
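For the non-subject case, the only difference is which segment supplies the second-target frames; in the sketch below the contiguous segment starting at `start` is a hypothetical choice of "preset segment", and start=0 with second_target equal to the frame count reproduces the "all image frames" branch above. The removal itself can reuse remove_moving_area from the previous sketch.

```python
def preset_segment_indices(total_frames: int, second_target: int,
                           start: int = 0) -> list[int]:
    """0-based indices of the second-target-number frames inside a preset
    segment of all the image frames."""
    end = min(start + second_target, total_frames)
    return list(range(start, end))
```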
The method has the advantage that all image frames of the shooting process are acquired; then, if the shooting object is a non-subject object, a second target number of frame data is determined among all the image frames, the second target number of frame data lying in a preset segment of all the image frames; and finally, the image data within the moving image area is removed from the second target number of frame data. An even more user-friendly shooting processing scheme is thus realized: the terminal device captures a clear image of the shooting subject in a low-illumination, large-aperture environment, the blurred imaging that such an environment tends to cause is avoided, and the user experience is improved.
Example six
Based on the foregoing embodiments, the present invention further provides a shooting processing apparatus, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements:
detecting shooting setting parameters and shooting environment parameters;
if the shooting setting parameters comprise large aperture value parameters and the shooting environment parameters comprise low illumination value parameters, identifying a shooting object in a motion state;
acquiring an image frame in a shooting process, determining target frame data in the image frame if the shooting object is a shooting subject, and eliminating image data of the shooting subject from the target frame data;
and fusing the image frames after the removal to obtain a shot image.
In this embodiment, first, shooting setting parameters and shooting environment parameters are detected; then, if the shooting setting parameters comprise a large aperture value parameter and the shooting environment parameters comprise a low illumination value parameter, a shooting object in a motion state is identified; then, image frames of the shooting process are acquired, target frame data is determined in the image frames if the shooting object is the shooting subject, and image data of the shooting subject is removed from the target frame data; and finally, the image frames after the removal are fused to obtain a shot image.
Specifically, in the present embodiment, the shooting setting parameters and the shooting environment parameters are detected first. It can be understood that the terminal device of this embodiment is provided with one or more shooting components for implementing the shooting processing scheme of this embodiment, or the terminal device may connect to and control other external shooting components in a wired or wireless manner to the same end. Here, the shooting setting parameters are setting parameters related to the operating mode of the shooting component, for example the aperture value, shutter value, exposure value, and sensitivity value used in shooting; the shooting environment parameters are environment parameters related to the state of the environment in which the shooting object is located, for example the illuminance value, the illuminance variation value, and a preset illuminance value.
Specifically, in this embodiment, after the shooting setting parameters and the shooting environment parameters are detected, if it is determined that the shooting setting parameters include a large aperture value parameter and the shooting environment parameters include a low illumination value parameter, the shooting object in a motion state is identified. To overcome the prior-art defect that opening a large aperture in a dark environment easily causes imaging blur, this embodiment, upon detecting a dark-light shooting environment with large-aperture shooting enabled at the same time, identifies the shooting object in a moving state: the lowest motion threshold that can cause imaging blur is determined according to the current illumination value and aperture value, and the shooting object in the moving state is then determined in the shooting preview image data according to that threshold.
Specifically, in this embodiment, after the moving object is identified, the image frames produced during shooting are acquired; if the moving object is the shooting subject, target frame data is determined among the image frames, and the image data of the shooting subject is removed from the target frame data. As in the example above, if the object in motion is determined to be the shooting subject, all image frames of one shooting process are acquired first, and the target frame data is then selected from them; the selection manner may be determined according to the shooting conditions and requirements, for example by selecting the later (rear-section) frames of all the image frames as the target frame data. Finally, the image data of the shooting subject is removed from the target frame data.
Specifically, in this embodiment, the image frames after the removal are fused to obtain the shot image. For example, all image frames acquired during shooting are listed as List_P; if the moving object is the subject, the image data of the subject in the first K1 frames of List_P is discarded, or equivalently the subject is kept only in the last K2 frames of List_P; the updated frame list is recorded as List_P'; finally, the multiple frames in List_P' are fused to generate the shot image, which is displayed on the display screen of the terminal device.
The method has the advantage that shooting setting parameters and shooting environment parameters are detected; then, if the shooting setting parameters comprise a large aperture value parameter and the shooting environment parameters comprise a low illumination value parameter, a shooting object in a motion state is identified; then, image frames of the shooting process are acquired, target frame data is determined in the image frames if the shooting object is the shooting subject, and image data of the shooting subject is removed from the target frame data; and finally, the image frames after the removal are fused to obtain a shot image. A user-friendly shooting processing scheme is thus realized: the terminal device captures a clear image of the shooting subject in a low-illumination, large-aperture environment, the blurred imaging that such an environment tends to cause is avoided, and the user experience is improved.
Example seven
Based on the above embodiments, optionally, the computer program, when executed by the processor, implements:
determining the shooting component enabled by the terminal device;
detecting the shooting setting parameters of the shooting component, and sensing the shooting environment parameters through the shooting component.
In this embodiment, the shooting component enabled by the terminal device is first determined; then the shooting setting parameters of the component are detected, and the shooting environment parameters are sensed through it.
Specifically, in this embodiment, the shooting setting parameters and the shooting environment parameters are detected first. It can be understood that the terminal device of this embodiment is provided with one or more shooting components for implementing the shooting processing scheme of this embodiment, or may be connected to and control external shooting components in a wired or wireless manner to the same end. The shooting setting parameters are the setting parameters related to the operating mode of the shooting component; in this embodiment they include, for example, the aperture value, shutter value, exposure value and sensitivity value used in shooting. The shooting environment parameters are the environment parameters related to the state of the environment in which the shooting object is located; in this embodiment they include, for example, the illuminance value, the illuminance variation value and the preset illuminance value.
Optionally, the shooting components of the terminal device are detected and identified. It can be understood that the terminal device may have multiple sets of shooting components; for example, a first side of the terminal device is provided with shooting components P1 and P2, and a second side is provided with shooting components P3, P4 and P5. The shooting components enabled by the terminal device are then determined: any one of P1-P5 may be enabled individually, or two or more of them may be enabled simultaneously;
optionally, the shooting setting parameters of the shooting component are detected, and the shooting environment parameters are sensed through it. As in the example above, after one of P1-P5 is enabled individually, the shooting environment parameters are sensed by that enabled component;
optionally, when two or more of P1-P5 are enabled at the same time, the shooting component specially configured to sense environment parameters is used to sense the shooting environment parameters required by this embodiment, as the sketch below illustrates.
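A minimal sketch of this component-selection logic follows; the CameraComponent abstraction, its fields and the helper name are hypothetical, and the component names mirror the P1-P5 example above.

    from dataclasses import dataclass

    @dataclass
    class CameraComponent:
        name: str
        enabled: bool
        senses_environment: bool  # e.g. a dedicated ambient-light sensing path

    def pick_sensing_component(components):
        enabled = [c for c in components if c.enabled]
        if len(enabled) == 1:
            return enabled[0]          # a single component senses by itself
        # Several components enabled: prefer the one specially configured
        # to sense environment parameters.
        for c in enabled:
            if c.senses_environment:
                return c
        return enabled[0] if enabled else None

    rig = [CameraComponent("P1", False, False),
           CameraComponent("P3", True, False),
           CameraComponent("P5", True, True)]
    print(pick_sensing_component(rig).name)  # -> P5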
The beneficial effect of this method is that the shooting component enabled by the terminal device is determined; then the shooting setting parameters of the component are detected, and the shooting environment parameters are sensed through it. A more user-friendly shooting processing scheme is thereby realized: the terminal device captures a clear image of the shooting subject in a low-illumination, large-aperture environment, blurred imaging caused by such an environment is avoided, and the user experience is improved.
Example eight
Based on the above embodiments, optionally, the computer program, when executed by the processor, implements:
analyzing the shooting environment parameters, extracting an illumination value in the shooting environment parameters, determining the current illumination value as a low illumination value parameter if the illumination value is below a preset illumination value, and simultaneously acquiring an aperture value in the shooting setting parameters;
if the illumination value is below a preset illumination value, increasing the aperture value, and if the increased aperture value is above the preset aperture value, determining the current aperture value as a large aperture value parameter;
in the shooting preview data, a moving image area in a moving state is identified, and a shooting subject is determined within the moving image area.
In this embodiment, first, the shooting environment parameters are analyzed, an illuminance value in the shooting environment parameters is extracted, if the illuminance value is below a preset illuminance value, the current illuminance value is determined to be a low illuminance value parameter, and meanwhile, an aperture value in the shooting setting parameters is obtained; then, if the illumination value is below a preset illumination value, increasing the aperture value, and if the increased aperture value is above the preset aperture value, determining the current aperture value as a large aperture value parameter; finally, in the shooting preview data, a moving image area in a moving state is identified, and a shooting subject is specified within the moving image area.
Specifically, in this embodiment, after the shooting setting parameters and the shooting environment parameters are detected, if it is determined that the shooting setting parameters include a large aperture value parameter and the shooting environment parameters include a low illumination value parameter, the shooting object in a motion state is identified. To overcome the defect in the prior art that shooting with a large aperture opened in a dark environment easily blurs the image, this embodiment detects and identifies the shooting objects in a moving state as soon as large-aperture shooting is enabled in a dark shooting environment: the lowest motion threshold that can cause imaging blur is first determined from the current illumination value and aperture value, and the shooting objects in a motion state are then determined in the captured preview image data according to that threshold.
Optionally, in a manual or semi-automatic shooting mode, analyzing the shooting environment parameters, extracting an illumination value in the shooting environment parameters, if the illumination value is below a preset illumination value, determining that the current illumination value is a low illumination value parameter, and meanwhile, acquiring an aperture value in the shooting setting parameters;
optionally, in a manual or semi-automatic shooting mode, analyzing the shooting setting parameters, extracting an aperture value in the shooting setting parameters, if the aperture value is above a preset aperture value, determining that the current aperture value is a large aperture value parameter, and meanwhile, obtaining an illuminance value in the shooting environment parameters;
optionally, in the automatic shooting mode, the shooting setting parameters and the shooting environment parameters are detected at the same time, that is, whether the shooting environment parameters constitute a low illumination value parameter and whether the shooting setting parameters constitute a large aperture value parameter are determined simultaneously;
optionally, the aperture parameter or the illuminance parameter is determined preferentially, depending on the shooting mode;
optionally, if the illuminance parameter is changing in real time, the aperture parameter is determined preferentially (see the sketch following this list);
alternatively, in the shooting preview data, one or more moving image areas in a moving state are identified, and one or more shooting subjects are determined within the one or more moving image areas.
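A sketch of the low-illumination / large-aperture trigger, including the assumed priority rule for a fluctuating illuminance, is given below. The preset values, the one-step aperture widening, and the f-number convention (a smaller f-number means a physically larger aperture) are all assumptions; the text leaves them unspecified.

    PRESET_LUX = 50.0   # assumed preset illuminance value
    PRESET_F = 2.0      # assumed preset aperture value (as an f-number)

    def widen_aperture(aperture_f: float, step: float = 0.7) -> float:
        # Assumed widening applied in low light, clamped at an assumed limit.
        return max(1.4, aperture_f * step)

    def trigger_motion_identification(illuminance_lux: float,
                                      aperture_f: float,
                                      illuminance_changing: bool = False) -> bool:
        large_aperture = aperture_f <= PRESET_F
        if illuminance_changing and not large_aperture:
            # Optional rule: with a real-time changing illuminance, the
            # (stable) aperture parameter is decided preferentially.
            return False
        if illuminance_lux >= PRESET_LUX:
            return False               # not a low-illumination scene
        # Low illumination: the aperture is widened, and if the widened
        # value still qualifies as large, both trigger conditions hold.
        return widen_aperture(aperture_f) <= PRESET_F

    print(trigger_motion_identification(12.0, 1.8))  # True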
The beneficial effect of this method is that the shooting environment parameters are analyzed and the illumination value is extracted from them; if the illumination value is below the preset illumination value, the current illumination value is determined to be a low illumination value parameter, and the aperture value in the shooting setting parameters is acquired at the same time; then, if the illumination value is below the preset illumination value, the aperture value is increased, and if the increased aperture value is above the preset aperture value, the current aperture value is determined to be a large aperture value parameter; finally, in the shooting preview data, a moving image area in a moving state is identified, and the shooting subject is determined within it. A more user-friendly shooting processing scheme is thereby realized: the terminal device captures a clear image of the shooting subject in a low-illumination, large-aperture environment, blurred imaging caused by such an environment is avoided, and the user experience is improved.
Example nine
Based on the above embodiments, optionally, the computer program, when executed by the processor, implements:
acquiring all image frames in the shooting process;
if the shooting object is a shooting subject, determining frame data of a first target quantity in all the image frames, wherein the frame data of the first target quantity is positioned at the rear section of all the image frames;
eliminating image data in the moving image area from the frame data of the first target number;
if the shooting object is a non-subject object, determining frame data of a second target quantity in all the image frames, wherein the frame data of the second target quantity are in preset segments of all the image frames;
and eliminating the image data in the moving image area from the second target number of frame data.
In the present embodiment, first, all image frames in the shooting process are acquired; then, if the shooting object is a shooting subject, determining frame data of a first target quantity in all the image frames, wherein the frame data of the first target quantity is positioned at the rear section of all the image frames; and finally, eliminating the image data in the moving image area from the frame data of the first target number.
Specifically, in this embodiment, after a moving shooting object is identified, the image frames of the shooting process are acquired; if the moving object is the shooting subject, target frame data is determined among the image frames, and the image data of the shooting subject is removed from the target frame data. As described in the example above, if the moving object is determined to be the subject, all image frames of one shooting process are acquired first, and the target frame data is then selected from them; the selection manner may be decided by the shooting conditions and requirements, for example by taking the later frames among all image frames as the target frame data of this embodiment. Finally, the image data of the subject is removed from the target frame data.
Optionally, if the shooting object is the shooting subject, a first target number of frames are determined among all the image frames, the first target number of frames lying at the rear section of all the image frames (for example, if all the image frames comprise 20 frames, the first target number of frames are the last three: the 18th, 19th and 20th frames);
optionally, if the shooting object is the shooting subject, a first target number of frames are determined among all the image frames, the first target number of frames lying in a later stage of all the image frames (for example, if all the image frames comprise 20 frames, the first target number of frames are the 16th, 17th and 18th frames);
optionally, if the shooting object is a shooting subject, determining a first target number of frame data in all the image frames, where the first target number of frame data is in a continuous segment of all the image frames;
optionally, if the shooting object is a shooting subject, determining frame data of a first target number in all the image frames, where the frame data of the first target number are located in discontinuous segments of all the image frames;
optionally, the image data in the one or more moving image regions is removed from the first target number of frame data.
Optionally, if the shooting object is a non-subject object, determining frame data of a second target number in all the image frames, where the frame data of the second target number is in a preset segment of all the image frames;
optionally, if the photographic object is a non-subject object, determining frame data of a second target number in all the image frames, where the frame data of the second target number is all the image frames;
optionally, the image data in the moving image area is removed from the frame data of the second target number;
optionally, the image data in the moving image area is removed from all the image frames. A sketch of this target-frame selection follows.
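The target-frame selection enumerated above can be sketched as follows; the concrete counts (first and second target numbers) and the 20-frame burst mirror the examples in the text, while the function and parameter names are illustrative.

    def select_target_frames(frames, is_subject, first_n=3, preset_segment=None):
        if is_subject:
            # First target number of frames, taken from the rear section,
            # e.g. frames 18-20 of a 20-frame burst.
            return frames[-first_n:]
        # Non-subject object: second target number of frames taken from a
        # preset segment; by default every frame is used.
        return frames[preset_segment] if preset_segment else list(frames)

    burst = list(range(1, 21))                    # stand-in for 20 frames
    print(select_target_frames(burst, True))      # [18, 19, 20]
    print(select_target_frames(burst, False, preset_segment=slice(5, 10)))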
The beneficial effect of this method is that all image frames of the shooting process are acquired; then, if the shooting object is a non-subject object, a second target number of frames are determined among all the image frames, the second target number of frames lying in a preset segment of all the image frames; finally, the image data in the moving image area is removed from the second target number of frames. A more user-friendly shooting processing scheme is thereby realized: the terminal device captures a clear image of the shooting subject in a low-illumination, large-aperture environment, blurred imaging caused by such an environment is avoided, and the user experience is improved.
Example ten
Based on the above embodiments, the present invention also provides a computer-readable storage medium having a shooting processing program stored thereon, where the shooting processing program, when executed by a processor, implements the steps of the shooting processing method as described in any one of the above.
The shooting processing method, device and computer-readable storage medium of the invention detect the shooting setting parameters and the shooting environment parameters; then, if the shooting setting parameters include a large aperture value parameter and the shooting environment parameters include a low illumination value parameter, identify the shooting object in a motion state; then acquire the image frames of the shooting process, determine target frame data among them if the shooting object is the shooting subject, and remove the image data of the shooting subject from the target frame data; and finally fuse the image frames after removal to obtain the captured image. A user-friendly shooting processing scheme is thereby realized: the terminal device captures a clear image of the shooting subject in a low-illumination, large-aperture environment, blurred imaging caused by such an environment is avoided, and the user experience is improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. A shooting processing method, characterized by comprising:
detecting shooting setting parameters and shooting environment parameters;
if the shooting setting parameters comprise large aperture value parameters and the shooting environment parameters comprise low illumination value parameters, identifying the shooting object in a motion state, wherein,
identifying a moving image area in the moving state in the shooting preview data, and determining the shooting object within the moving image area;
acquiring an image frame in a shooting process, if the shooting object is a shooting subject, determining target frame data in the image frame, and removing image data of the shooting subject from the target frame data; wherein,
acquiring all image frames in the shooting process;
if the shooting object is the shooting subject, determining frame data of a first target quantity in all the image frames, wherein the frame data of the first target quantity is positioned at the rear section of all the image frames;
removing the image data in the moving image area from the first target number of frame data;
and fusing the eliminated image frames to obtain a shot image.
2. The shooting processing method according to claim 1, wherein detecting the shooting setting parameters and the shooting environment parameters comprises:
determining a shooting component enabled by the terminal device;
detecting a shooting setting parameter of the shooting component, and sensing the shooting environment parameter through the shooting component.
3. The shooting processing method according to claim 2, wherein, if the shooting setting parameters comprise a large aperture value parameter and the shooting environment parameters comprise a low illumination value parameter, identifying the shooting object in a motion state comprises:
analyzing the shooting environment parameters, extracting an illumination value in the shooting environment parameters, determining the current illumination value as a low illumination value parameter if the illumination value is below a preset illumination value, and simultaneously acquiring an aperture value in the shooting setting parameters;
and if the illumination value is below a preset illumination value, increasing the aperture value, and if the increased aperture value is above the preset aperture value, determining the current aperture value as a large aperture value parameter.
4. The shooting processing method according to claim 3, wherein acquiring an image frame in a shooting process, determining target frame data in the image frame if the shooting object is a shooting subject, and removing image data of the shooting subject from the target frame data further comprises:
acquiring all image frames in the shooting process;
if the shooting object is a non-subject object, determining frame data of a second target quantity in all the image frames, wherein the frame data of the second target quantity are in preset segments of all the image frames;
and eliminating the image data in the moving image area from the second target number of frame data.
5. A shooting processing device, characterized in that the device comprises a memory, a processor, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing:
detecting shooting setting parameters and shooting environment parameters;
if the shooting setting parameters comprise large aperture value parameters and the shooting environment parameters comprise low illumination value parameters, identifying a shooting object in a motion state; wherein,
identifying a moving image area in the moving state in the shooting preview data, and determining the shooting object within the moving image area;
acquiring an image frame in a shooting process, if the shooting object is a shooting subject, determining target frame data in the image frame, and removing image data of the shooting subject from the target frame data; wherein,
acquiring all image frames in the shooting process;
if the shooting object is the shooting subject, determining frame data of a first target quantity in all the image frames, wherein the frame data of the first target quantity is positioned at the rear section of all the image frames;
removing the image data in the moving image area from the first target number of frame data;
and fusing the eliminated image frames to obtain a shot image.
6. The shooting processing device according to claim 5, wherein the computer program, when executed by the processor, implements:
determining a shooting component enabled by the terminal device;
detecting a shooting setting parameter of the shooting component, and sensing the shooting environment parameter through the shooting component.
7. The shooting processing device according to claim 6, wherein the computer program, when executed by the processor, implements:
analyzing the shooting environment parameters, extracting an illumination value in the shooting environment parameters, determining the current illumination value as a low illumination value parameter if the illumination value is below a preset illumination value, and simultaneously acquiring an aperture value in the shooting setting parameters;
and if the illumination value is below a preset illumination value, increasing the aperture value, and if the increased aperture value is above the preset aperture value, determining the current aperture value as a large aperture value parameter.
8. The shooting processing device according to claim 7, wherein the computer program, when executed by the processor, implements:
if the shooting object is a non-subject object, determining frame data of a second target quantity in all the image frames, wherein the frame data of the second target quantity are in preset segments of all the image frames;
and eliminating the image data in the moving image area from the second target number of frame data.
9. A computer-readable storage medium, characterized in that a shooting processing program is stored thereon, which, when executed by a processor, implements the steps of the shooting processing method according to any one of claims 1 to 4.
CN201811508673.2A 2018-12-11 2018-12-11 Shooting processing method and device and computer readable storage medium Active CN109510941B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811508673.2A CN109510941B (en) 2018-12-11 2018-12-11 Shooting processing method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811508673.2A CN109510941B (en) 2018-12-11 2018-12-11 Shooting processing method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109510941A CN109510941A (en) 2019-03-22
CN109510941B true CN109510941B (en) 2021-08-03

Family

ID=65752024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811508673.2A Active CN109510941B (en) 2018-12-11 2018-12-11 Shooting processing method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109510941B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110706176A (en) * 2019-09-29 2020-01-17 Tcl移动通信科技(宁波)有限公司 Static processing method and device for video and computer readable storage medium
CN113572916B (en) * 2021-07-23 2024-03-15 深圳传音控股股份有限公司 Shooting method, terminal equipment and storage medium


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080137978A1 (en) * 2006-12-07 2008-06-12 Guoyi Fu Method And Apparatus For Reducing Motion Blur In An Image
JP4853320B2 (en) * 2007-02-15 2012-01-11 ソニー株式会社 Image processing apparatus and image processing method
CN101990063B (en) * 2009-08-05 2012-05-09 华晶科技股份有限公司 Method for adjusting shooting setting of digital camera by mobile detection
JP2012191486A (en) * 2011-03-11 2012-10-04 Sony Corp Image composing apparatus, image composing method, and program
US9596410B2 (en) * 2012-02-22 2017-03-14 Philips Lighting Holding B.V. Vision systems and methods for analysing images taken by image sensors
US9692975B2 (en) * 2013-04-10 2017-06-27 Microsoft Technology Licensing, Llc Motion blur-free capture of low light high dynamic range images
CN105391940B (en) * 2015-11-05 2019-05-28 华为技术有限公司 A kind of image recommendation method and device
CN107517351B (en) * 2017-10-18 2020-03-24 广东小天才科技有限公司 Photographing mode switching method and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105450931A (en) * 2015-12-30 2016-03-30 联想(北京)有限公司 Imaging method and device based on array cameras, and electronic equipment
CN107395991A (en) * 2017-08-31 2017-11-24 广东欧珀移动通信有限公司 Image combining method, device, computer-readable recording medium and computer equipment
CN107770451A (en) * 2017-11-13 2018-03-06 广东欧珀移动通信有限公司 Take pictures method, apparatus, terminal and the storage medium of processing
CN107820024A (en) * 2017-12-05 2018-03-20 北京小米移动软件有限公司 Image capturing method, device and storage medium
CN108200351A (en) * 2017-12-21 2018-06-22 深圳市金立通信设备有限公司 Image pickup method, terminal and computer-readable medium

Also Published As

Publication number Publication date
CN109510941A (en) 2019-03-22

Similar Documents

Publication Publication Date Title
CN108900790B (en) Video image processing method, mobile terminal and computer readable storage medium
CN107959795B (en) Information acquisition method, information acquisition equipment and computer readable storage medium
CN111885307B (en) Depth-of-field shooting method and device and computer readable storage medium
CN107040723B (en) Imaging method based on double cameras, mobile terminal and storage medium
CN113179369B (en) Shot picture display method, mobile terminal and storage medium
CN111866388B (en) Multiple exposure shooting method, equipment and computer readable storage medium
CN107896304B (en) Image shooting method and device and computer readable storage medium
CN112188082A (en) High dynamic range image shooting method, shooting device, terminal and storage medium
CN112511741A (en) Image processing method, mobile terminal and computer storage medium
CN110086993B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN110177207B (en) Backlight image shooting method, mobile terminal and computer readable storage medium
CN108848321B (en) Exposure optimization method, device and computer-readable storage medium
CN109510941B (en) Shooting processing method and device and computer readable storage medium
CN112135060B (en) Focusing processing method, mobile terminal and computer storage medium
CN113347372A (en) Shooting light supplement method, mobile terminal and readable storage medium
CN112135045A (en) Video processing method, mobile terminal and computer storage medium
CN112866685A (en) Screen projection delay measuring method, mobile terminal and computer readable storage medium
CN108495033B (en) Photographing regulation and control method and device and computer readable storage medium
CN107743204B (en) Exposure processing method, terminal, and computer-readable storage medium
CN113572916B (en) Shooting method, terminal equipment and storage medium
CN112532838B (en) Image processing method, mobile terminal and computer storage medium
CN112040134B (en) Micro-holder shooting control method and device and computer readable storage medium
CN109495683B (en) Interval shooting method and device and computer readable storage medium
CN113542605A (en) Camera shooting control method, mobile terminal and storage medium
CN108989695B (en) Initial automatic exposure convergence method, mobile terminal and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant