CN109005355B - Shooting method and mobile terminal - Google Patents

Shooting method and mobile terminal

Info

Publication number
CN109005355B
Authority
CN
China
Prior art keywords
image
information
mobile terminal
preset
preview image
Prior art date
Legal status
Active
Application number
CN201811143307.1A
Other languages
Chinese (zh)
Other versions
CN109005355A (en)
Inventor
Xu Qihang (徐启航)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811143307.1A priority Critical patent/CN109005355B/en
Publication of CN109005355A publication Critical patent/CN109005355A/en
Application granted granted Critical
Publication of CN109005355B publication Critical patent/CN109005355B/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681 Motion detection
    • H04N23/6811 Motion detection based on the image signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Telephone Function (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a shooting method and a mobile terminal. The method includes: before image shooting, calling a 3D camera to acquire a preset number of preview image frames; calling the 3D camera to shoot a first image according to a shooting instruction, and determining image information during the shooting of the first image and shake information of the mobile terminal; and when the image information and/or the shake information of the mobile terminal meets a preset condition, generating a target image from the preview image frames. Thus, even when an unexpected event causes a large change in the shake information of the mobile terminal or in the image information during shooting, the image can still be captured successfully, imaging quality is improved, and the user experience is improved.

Description

Shooting method and mobile terminal
Technical Field
The present invention relates to the field of mobile terminal technologies, and in particular, to a shooting method and a mobile terminal.
Background
With the development of mobile terminals, their shooting capability keeps improving. Because mobile terminals are portable and photographs are taken in arbitrary locations, shooting is inevitably affected by various factors, among which shake has a particularly strong influence on imaging quality. Current anti-shake schemes for mobile terminals generally rely on optical-axis correction, acceleration-sensor correction, geomagnetic-sensor correction, and the like.
However, these anti-shake correction methods are still not ideal: when the shake amplitude is large, the imaging result is still affected and the user experience suffers.
Disclosure of Invention
Embodiments of the invention provide a shooting method and a mobile terminal, aiming to solve the problem of poor anti-shake performance in the prior art.
In order to solve the above technical problem, the invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a shooting method, where the method includes: before image shooting is carried out, calling the 3D camera to collect a preset number of preview image frames; calling the 3D camera to shoot a first image according to a shooting instruction, and determining image information in the first image shooting process and shaking information of the mobile terminal; and when the image information and/or the shaking information of the mobile terminal meet preset conditions, generating a target image according to each preview image frame.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes: the acquisition module is used for calling the 3D camera to acquire a preset number of preview image frames before image shooting is carried out; the determining module is used for calling the 3D camera to shoot a first image according to a shooting instruction, and determining image information in the first image shooting process and shaking information of the mobile terminal; and the generating module is used for generating a target image according to each preview image frame when the image information and/or the shaking information of the mobile terminal meet preset conditions.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the shooting method.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the shooting method are implemented.
In the embodiment of the invention, before image shooting, a 3D camera is called to collect a preset number of preview image frames; calling a 3D camera to shoot a first image according to a shooting instruction, and determining image information in the shooting process of the first image and shaking information of the mobile terminal; when the image information and/or the shaking information of the mobile terminal meet the preset conditions, the target image is generated according to each preview image frame, so that when the shaking information of the mobile terminal or the image information in the shooting process is greatly changed due to an emergency, the image can still be successfully shot, the imaging quality can be improved, and the use experience of a user is improved.
Drawings
Fig. 1 is a flowchart illustrating steps of a photographing method according to a first embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of a photographing method according to a second embodiment of the present invention;
fig. 3 is a flowchart illustrating steps of a photographing method according to a third embodiment of the present invention;
fig. 4 is a block diagram of a mobile terminal according to a fourth embodiment of the present invention;
fig. 5 is a block diagram of a mobile terminal according to a fifth embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of a mobile terminal according to a sixth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a flowchart illustrating steps of a photographing method according to a first embodiment of the present invention is shown.
The shooting method provided by the embodiment of the invention comprises the following steps:
the embodiment of the invention adopts the mobile terminal with the 3D camera to shoot images.
Step 101: and before image shooting, calling a 3D camera to acquire a preset number of preview image frames.
It should be noted that, before image shooting, the 3D camera is called to capture the preview image frames presented in the shooting interface. A person skilled in the art may set the preset number according to the actual situation, for example 3, 5, or 7 frames, which is not specifically limited in the embodiment of the present invention.
Step 102: and calling a 3D camera to shoot the first image according to the shooting instruction, and determining image information in the shooting process of the first image and the shaking information of the mobile terminal.
When the user shoots, a first image is captured. During the shooting of the first image, an unexpected event may cause the image information of the first image to change and the mobile terminal to shake.
Step 103: and when the image information and/or the shaking information of the mobile terminal meet the preset conditions, generating a target image according to each preview image frame.
The image information includes RGB image information and depth-of-field information, and the shake information includes the displacement and angle change value of the mobile terminal. When the RGB image information or the depth-of-field information changes compared with a preceding preview image frame, or when the displacement and angle change value of the mobile terminal are large, an unexpected event has occurred during shooting, and the target image is generated from the preview image frames.
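As an illustrative aid only (not part of the claimed disclosure), the flow of steps 101-103 can be sketched in Python as follows. The object names and helper functions used here (camera, terminal, capture_preview_frame, shoot_image, get_shake_info, shake_exceeds_limits, is_occluded, average_previews, fuse_with_previews) are assumptions introduced for readability; the helpers are sketched after the corresponding passages in the second and third embodiments below.

from collections import deque

PRESET_NUM = 5  # preset number of preview frames, e.g. 3, 5 or 7

def shooting_flow(camera, terminal):
    # Step 101: keep the most recent PRESET_NUM preview frames in a ring buffer.
    # Each frame from the 3D camera is assumed to carry .rgb and .depth arrays.
    previews = deque(maxlen=PRESET_NUM)
    while not terminal.shutter_pressed():
        previews.append(camera.capture_preview_frame())

    # Step 102: shoot the first image and collect shake information.
    first_image = camera.shoot_image()
    shake = terminal.get_shake_info()  # displacement and angle change during exposure

    # Step 103: if the preset condition is met (severe shake or occlusion),
    # generate the target image from the preview frames alone;
    # otherwise fuse the preview frames with the first image.
    if shake_exceeds_limits(shake) or is_occluded(first_image, previews):
        return average_previews(previews)
    return fuse_with_previews(first_image, previews)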
In the embodiment of the invention, before image shooting, a 3D camera is called to collect a preset number of preview image frames; calling a 3D camera to shoot a first image according to a shooting instruction, and determining image information in the shooting process of the first image and shaking information of the mobile terminal; when the image information and/or the shaking information of the mobile terminal meet the preset conditions, the target image is generated according to each preview image frame, so that when the shaking information of the mobile terminal or the image information in the shooting process is greatly changed due to an emergency, the image can still be successfully shot, the imaging quality can be improved, and the use experience of a user is improved.
Example two
Referring to fig. 2, a flowchart illustrating steps of a photographing method according to a second embodiment of the present invention is shown.
The shooting method provided by the embodiment of the invention comprises the following steps:
step 201: and before image shooting, calling a 3D camera to acquire a preset number of preview image frames.
It should be noted that, before image shooting, the 3D camera is called to capture the preview image frames presented in the shooting interface. A person skilled in the art may set the preset number according to the actual situation, for example 3, 5, or 7 frames, which is not specifically limited in the embodiment of the present invention.
Step 202: and calling a 3D camera to shoot the first image according to the shooting instruction, and determining image information in the shooting process of the first image and the shaking information of the mobile terminal.
When the user shoots, a first image is captured. During the shooting of the first image, an unexpected event may cause the mobile terminal to be displaced and its angle to change, i.e., shake information of the mobile terminal is produced; meanwhile, image information of the first image is generated.
Step 203: and when the displacement of the mobile terminal is greater than the preset displacement and the angle change value is greater than the preset angle value, generating a target image according to each preview image frame.
The shake information of the mobile terminal includes the displacement and the angle change value of the mobile terminal. When the displacement is greater than the preset displacement and the angle change value is greater than the preset angle value, the first image was shot under severe shake, so it is blurred or the intended subject was not captured; in this case, the target image is generated from the preview image frames.
It should be noted that the preset displacement may be set to 5-10 mm and the preset angle value to 10-15 degrees, which is not specifically limited in the embodiment of the present invention.
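Purely as an illustration, the shake check of step 203 could look like the sketch below; the ShakeInfo container and the concrete threshold values are assumptions chosen from the ranges mentioned above.

from dataclasses import dataclass

@dataclass
class ShakeInfo:
    displacement_mm: float    # displacement of the terminal during exposure
    angle_change_deg: float   # change of orientation during exposure

# Assumed preset values, picked from the 5-10 mm and 10-15 degree ranges above.
PRESET_DISPLACEMENT_MM = 8.0
PRESET_ANGLE_DEG = 12.0

def shake_exceeds_limits(shake: ShakeInfo) -> bool:
    # Step 203: shake is considered severe only when both thresholds are exceeded,
    # in which case the target image is generated from the preview frames alone.
    return (shake.displacement_mm > PRESET_DISPLACEMENT_MM
            and shake.angle_change_deg > PRESET_ANGLE_DEG)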
Step 204: and when the displacement is less than or equal to the preset displacement and the angle change value is less than or equal to the preset angle, determining a first weight value of each preview image frame and a second weight value of the first image.
When the displacement is less than or equal to the preset displacement and the angle change value is less than or equal to the preset angle value, the shake during the shooting of the first image is within the normal range, but the first image may still contain some blur. To improve imaging quality, a first weight value for each preview image frame and a second weight value for the first image are determined.
It should be noted that the sum of the first weight value and the second weight value is 1, and that when the first image contains blur, that is, when the displacement is less than or equal to the preset displacement and the angle change value is less than or equal to the preset angle value, the weight value of the first image is reduced.
Step 205: and fusing each preview image frame with the first image according to the first weight value and the second weight value to generate a target image.
Fusing the first image with each preview image frame according to the weight values avoids a poor result and improves the imaging effect of the target image.
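A minimal sketch of the weighted fusion of steps 204-205, assuming each frame exposes an 8-bit .rgb array of identical shape. The concrete default weight and the decision to split the first weight value evenly across the preview frames are assumptions; the text only requires that the first and second weight values sum to 1 and that the first image's weight is lowered when it is blurred.

import numpy as np

def fuse_with_previews(first_image, previews, second_weight=0.4):
    # Steps 204-205: the second weight value belongs to the first image, and the
    # remaining first weight value (1 - second_weight) is shared by the previews.
    per_preview_weight = (1.0 - second_weight) / len(previews)
    fused = second_weight * first_image.rgb.astype(np.float32)
    for frame in previews:
        fused += per_preview_weight * frame.rgb.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

def average_previews(previews):
    # Used when the first image is discarded (severe shake or occlusion):
    # the target image is generated from the preview frames alone.
    stack = np.stack([frame.rgb.astype(np.float32) for frame in previews])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)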
In the embodiment of the invention, before image shooting, a 3D camera is called to collect a preset number of preview image frames; calling a 3D camera to shoot a first image according to a shooting instruction, and determining image information in the shooting process of the first image and shaking information of the mobile terminal; when the image information and/or the shaking information of the mobile terminal meet the preset conditions, the target image is generated according to each preview image frame, so that when the shaking information of the mobile terminal or the image information in the shooting process is greatly changed due to an emergency, the image can still be successfully shot, the imaging quality can be improved, and the use experience of a user is improved. In addition, when the displacement is less than or equal to the preset displacement and the angle change value is less than or equal to the preset angle value, the target image is generated according to the weight value of each preview image frame and the weight value of the first image, so that the image is restored as closely as possible to the scene before the shake, further improving the use experience of the user.
Example three
Referring to fig. 3, a flowchart of steps of a photographing method according to a third embodiment of the present invention is shown.
The shooting method provided by the embodiment of the invention comprises the following steps:
step 301: and before image shooting, calling a 3D camera to acquire a preset number of preview image frames.
It should be noted that, before image shooting, the 3D camera is called to capture the preview image frames presented in the shooting interface. A person skilled in the art may set the preset number according to the actual situation, for example 3, 5, or 7 frames, which is not specifically limited in the embodiment of the present invention.
Step 302: and calling a 3D camera to shoot the first image according to the shooting instruction, and determining image information in the shooting process of the first image and the shaking information of the mobile terminal.
When the user shoots, the 3D camera is called to capture a first image, and the image information during the shooting of the first image and the shake information of the mobile terminal are determined. The image information of the first image shooting process may include RGB image information and depth-of-field information, and the shake information of the mobile terminal includes the displacement and angle change value of the mobile terminal.
Step 303: when the RGB image information of the first image is changed compared with the RGB image information of any preview image frame, a target image is generated according to each preview image frame.
During shooting, another person may enter the scene and occlude the subject being photographed. When the displacement and the angle change value are both zero, no shake occurred during shooting; in that case the RGB image information of the first image is determined, and whether the subject of the first image is occluded is judged by determining whether the RGB image information of the first image has changed compared with the RGB image information of any preview image frame. When the first image is occluded, the target image is generated from the preview image frames.
Step 304: and when the depth of field information of the first image is changed compared with the depth of field information of any preview image frame, generating a target image according to each preview image frame.
It is determined whether the depth-of-field information of the first image has changed compared with the depth-of-field information of any preview image frame; if so, the first image is determined to be occluded, and the target image is generated from the preview image frames so that the generated image is free of occlusion.
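A non-authoritative sketch of the occlusion check of steps 303-304: the first image is compared against each preview frame in both the RGB and depth-of-field channels. The mean-absolute-difference measure and the numeric thresholds are assumptions, since the text only states that a change relative to any preview frame triggers generation of the target image from the previews.

import numpy as np

# Assumed change thresholds; the embodiment does not prescribe numeric values.
RGB_CHANGE_THRESHOLD = 12.0    # mean absolute difference on 8-bit RGB data
DEPTH_CHANGE_THRESHOLD = 0.15  # mean absolute difference on depth-of-field data

def _mean_abs_diff(a, b):
    return float(np.mean(np.abs(a.astype(np.float32) - b.astype(np.float32))))

def is_occluded(first_image, previews):
    # Steps 303-304: the subject is considered occluded if either the RGB data or
    # the depth-of-field data of the first image changed versus any preview frame.
    for frame in previews:
        rgb_changed = _mean_abs_diff(first_image.rgb, frame.rgb) > RGB_CHANGE_THRESHOLD
        depth_changed = _mean_abs_diff(first_image.depth, frame.depth) > DEPTH_CHANGE_THRESHOLD
        if rgb_changed or depth_changed:
            return True
    return False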
In the embodiment of the invention, before image shooting, a 3D camera is called to collect a preset number of preview image frames; calling a 3D camera to shoot a first image according to a shooting instruction, and determining image information in the shooting process of the first image and shaking information of the mobile terminal; when the image information and/or the shaking information of the mobile terminal meet the preset conditions, the target image is generated according to each preview image frame, so that when the shaking information of the mobile terminal or the image information in the shooting process is greatly changed due to an emergency, the image can still be successfully shot, the imaging quality can be improved, and the use experience of a user is improved. In addition, the RGB image information and the depth-of-field information of the first image are used to judge whether an unexpected event has occurred; when such an event occurs and the first image is occluded, the target image is generated according to each preview image frame, so that an anti-occlusion effect is achieved.
Example four
Referring to fig. 4, a block diagram of a mobile terminal according to a fourth embodiment of the present invention is shown.
The mobile terminal provided by the embodiment of the invention comprises: the acquisition module 401 is configured to invoke the 3D camera to acquire a preset number of preview image frames before image shooting is performed; a determining module 402, configured to invoke the 3D camera to shoot a first image according to a shooting instruction, and determine image information in a shooting process of the first image and shake information of the mobile terminal; a generating module 403, configured to generate a target image according to each preview image frame when the image information and/or the shake information of the mobile terminal meet a preset condition.
In the embodiment of the invention, before image shooting, a 3D camera is called to collect a preset number of preview image frames; calling a 3D camera to shoot a first image according to a shooting instruction, and determining image information in the shooting process of the first image and shaking information of the mobile terminal; when the image information and/or the shaking information of the mobile terminal meet the preset conditions, the target image is generated according to each preview image frame, so that when the shaking information of the mobile terminal or the image information in the shooting process is greatly changed due to an emergency, the image can still be successfully shot, the imaging quality can be improved, and the use experience of a user is improved.
Example five
Referring to fig. 5, a block diagram of a mobile terminal according to a fifth embodiment of the present invention is shown.
The mobile terminal provided by the embodiment of the invention comprises: the acquisition module 501 is configured to invoke the 3D camera to acquire a preset number of preview image frames before image shooting is performed; a determining module 502, configured to invoke the 3D camera to shoot a first image according to a shooting instruction, and determine image information in a shooting process of the first image and shake information of the mobile terminal; a generating module 503, configured to generate a target image according to each preview image frame when the image information and/or the shake information of the mobile terminal meet a preset condition.
Preferably, the jitter information of the mobile terminal includes displacement and angle change values of the mobile terminal, and the generating module 503 includes: the first generating sub-module 5031 is configured to generate a target image according to each preview image frame when the displacement of the mobile terminal is greater than a preset displacement and the angle change value is greater than a preset angle value. Preferably, the generating module 503 comprises: a determining sub-module 5032 configured to determine a first weight value of each preview image frame and a second weight value of the first image when the displacement is smaller than or equal to the preset displacement and the angle change value is smaller than or equal to the preset angle; a second generating sub-module 5033, configured to fuse each preview image frame with the first image according to the first weight value and the second weight value, so as to generate a target image.
Preferably, the generating module 503 includes: a third generating sub-module 5034, configured to generate a target image according to each preview image frame when the RGB image information of the first image is changed compared with the RGB image information of any one of the preview image frames.
Preferably, the generating module 503 includes: a fourth generating sub-module 5035, configured to generate a target image according to each of the preview image frames when the depth of field information of the first image is changed compared with the depth of field information of any one of the preview image frames.
The mobile terminal provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 3, and is not described herein again to avoid repetition.
In the embodiment of the invention, before image shooting, a 3D camera is called to collect a preset number of preview image frames; calling a 3D camera to shoot a first image according to a shooting instruction, and determining image information in the shooting process of the first image and shaking information of the mobile terminal; when the image information and/or the shaking information of the mobile terminal meet the preset conditions, the target image is generated according to each preview image frame, so that when the shaking information of the mobile terminal or the image information in the shooting process is greatly changed due to an emergency, the image can still be successfully shot, the imaging quality can be improved, and the use experience of a user is improved.
Example six
Referring to fig. 6, a hardware structure diagram of a mobile terminal for implementing various embodiments of the present invention is shown.
The mobile terminal 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 6 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 610 is configured to invoke the 3D camera to acquire a preset number of preview image frames before image shooting; calling the 3D camera to shoot a first image according to a shooting instruction, and determining image information in the first image shooting process and shaking information of the mobile terminal; and when the image information and/or the shaking information of the mobile terminal meet preset conditions, generating a target image according to each preview image frame.
In the embodiment of the invention, before image shooting, a 3D camera is called to collect a preset number of preview image frames; calling a 3D camera to shoot a first image according to a shooting instruction, and determining image information in the shooting process of the first image and shaking information of the mobile terminal; when the image information and/or the shaking information of the mobile terminal meet the preset conditions, the target image is generated according to each preview image frame, so that when the shaking information of the mobile terminal or the image information in the shooting process is greatly changed due to an emergency, the image can still be successfully shot, the imaging quality can be improved, and the use experience of a user is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used for receiving and sending signals during a messaging or call process; specifically, it receives downlink data from a base station and forwards it to the processor 610 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 602, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output related to a specific function performed by the mobile terminal 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input Unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042, and the Graphics processor 6041 processes image data of a still picture or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphic processor 6041 may be stored in the memory 609 (or other storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound, and can process such sound into audio data. The processed audio data may be converted into a format output transmittable to a mobile communication base station via the radio frequency unit 601 in case of the phone call mode.
The mobile terminal 600 also includes at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the mobile terminal 600 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 605 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The Display unit 606 may include a Display panel 6061, and the Display panel 6061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. The touch panel 6071, also referred to as a touch screen, may collect touch operations performed by a user on or near it (e.g., operations performed on or near the touch panel 6071 with a finger, a stylus, or any suitable object or accessory). The touch panel 6071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 610, and receives and executes commands from the processor 610. In addition, the touch panel 6071 may be implemented as a resistive, capacitive, infrared, or surface acoustic wave type. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although the touch panel 6071 and the display panel 6061 are shown in fig. 6 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 608 is an interface through which an external device is connected to the mobile terminal 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 600 or may be used to transmit data between the mobile terminal 600 and external devices.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 609 and calling data stored in the memory 609, thereby integrally monitoring the mobile terminal. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The mobile terminal 600 may further include a power supply 611 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 611 is logically connected to the processor 610 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 600 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor 610, a memory 609, and a computer program stored in the memory 609 and capable of running on the processor 610, where the computer program is executed by the processor 610 to implement each process of the foregoing shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (7)

1. A shooting method is applied to a mobile terminal with a 3D camera, and is characterized by comprising the following steps:
before image shooting is carried out, calling the 3D camera to collect a preset number of preview image frames;
calling the 3D camera to shoot a first image according to a shooting instruction, and determining image information in the first image shooting process and shaking information of the mobile terminal; when the image information and/or the shaking information of the mobile terminal meet preset conditions, generating a target image according to each preview image frame;
the image information comprises RGB image information, and the step of generating the target image according to each preview image frame when the image information meets the preset condition comprises the following steps:
when the RGB image information of the first image is changed compared with the RGB image information of any preview image frame, generating a target image according to each preview image frame;
or,
the image information comprises depth of field information, and the step of generating the target image according to each preview image frame when the image information meets the preset condition comprises the following steps:
when the depth of field information of the first image is changed compared with the depth of field information of any one preview image frame, generating a target image according to each preview image frame;
the step of when the image information and/or the shaking information of the mobile terminal meet a preset condition comprises the following steps: when the displacement of the mobile terminal is larger than a preset displacement and the angle change value is larger than a preset angle value, when the RGB image information of the first image is changed compared with the RGB image information of any preview image frame, and when the depth of field information of the first image is changed compared with the depth of field information of any preview image frame.
2. The method according to claim 1, wherein the shaking information of the mobile terminal includes displacement and angle variation values of the mobile terminal, and the step of generating the target image from each preview image frame when the shaking information of the mobile terminal satisfies a preset condition includes:
and when the displacement of the mobile terminal is greater than the preset displacement and the angle change value is greater than the preset angle value, generating a target image according to each preview image frame.
3. The method according to claim 2, wherein the step of generating a target image from each preview image frame when the shaking information of the mobile terminal satisfies a preset condition further comprises:
when the displacement is smaller than or equal to the preset displacement and the angle change value is smaller than or equal to the preset angle, determining a first weight value of each preview image frame and a second weight value of the first image;
and fusing each preview image frame with the first image according to the first weight value and the second weight value to generate a target image.
4. A mobile terminal, characterized in that the mobile terminal comprises:
the acquisition module is used for calling the 3D camera to acquire a preset number of preview image frames before image shooting is carried out;
the determining module is used for calling the 3D camera to shoot a first image according to a shooting instruction, and determining image information in the first image shooting process and shaking information of the mobile terminal;
the generating module is used for generating a target image according to each preview image frame when the image information and/or the shaking information of the mobile terminal meet preset conditions;
the generation module comprises: a third generation submodule, configured to generate a target image according to each preview image frame when RGB image information of the first image changes compared with RGB image information of any one of the preview image frames;
or, the generating module includes: a fourth generation submodule, configured to generate a target image according to each preview image frame when depth-of-field information of the first image changes compared with depth-of-field information of any one of the preview image frames;
the step of when the image information and/or the shaking information of the mobile terminal meet a preset condition comprises the following steps: when the displacement of the mobile terminal is larger than a preset displacement and the angle change value is larger than a preset angle value, when the RGB image information of the first image is changed compared with the RGB image information of any preview image frame, and when the depth of field information of the first image is changed compared with the depth of field information of any preview image frame.
5. The mobile terminal of claim 4, wherein the jitter information of the mobile terminal includes displacement and angle variation values of the mobile terminal, and wherein the generating module comprises:
and the first generation submodule is used for generating a target image according to each preview image frame when the displacement of the mobile terminal is greater than the preset displacement and the angle change value is greater than the preset angle value.
6. The mobile terminal of claim 4, wherein the generating module comprises:
a determining submodule, configured to determine a first weight value of each preview image frame and a second weight value of the first image when the displacement is smaller than or equal to the preset displacement and the angle change value is smaller than or equal to the preset angle;
and the second generation submodule is used for fusing each preview image frame with the first image according to the first weight value and the second weight value to generate a target image.
7. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the shooting method according to any one of claims 1 to 3.
CN201811143307.1A 2018-09-28 2018-09-28 Shooting method and mobile terminal Active CN109005355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811143307.1A CN109005355B (en) 2018-09-28 2018-09-28 Shooting method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811143307.1A CN109005355B (en) 2018-09-28 2018-09-28 Shooting method and mobile terminal

Publications (2)

Publication Number Publication Date
CN109005355A CN109005355A (en) 2018-12-14
CN109005355B true CN109005355B (en) 2020-10-09

Family

ID=64589763

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811143307.1A Active CN109005355B (en) 2018-09-28 2018-09-28 Shooting method and mobile terminal

Country Status (1)

Country Link
CN (1) CN109005355B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109688293A (en) * 2019-01-28 2019-04-26 努比亚技术有限公司 A kind of image pickup method, terminal and computer readable storage medium
EP4016985A4 (en) * 2019-08-27 2022-08-31 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, electronic device, and computer-readable storage medium
WO2021035525A1 (en) * 2019-08-27 2021-03-04 Oppo广东移动通信有限公司 Image processing method and apparatus, and electronic device and computer-readable storage medium
CN116114260A (en) * 2020-12-31 2023-05-12 华为技术有限公司 Image processing method and device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006197098A (en) * 2005-01-12 2006-07-27 Canon Inc Image transmission system of network camera
WO2012128242A1 (en) * 2011-03-18 2012-09-27 ソニー株式会社 Image-processing device, image-processing method, and program
CN104811601B (en) * 2014-01-24 2018-01-26 青岛海信移动通信技术股份有限公司 A kind of method and apparatus for showing preview image
CN104601897B (en) * 2015-02-13 2017-11-03 浙江宇视科技有限公司 A kind of video camera anti-shake apparatus and anti-fluttering method
CN107483839B (en) * 2016-07-29 2020-08-07 Oppo广东移动通信有限公司 Multi-frame image synthesis method and device
CN107018327B (en) * 2017-03-31 2019-11-22 维沃移动通信有限公司 A kind of image capturing method and mobile terminal
CN107509034B (en) * 2017-09-22 2019-11-26 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN107770451A (en) * 2017-11-13 2018-03-06 广东欧珀移动通信有限公司 Take pictures method, apparatus, terminal and the storage medium of processing

Also Published As

Publication number Publication date
CN109005355A (en) 2018-12-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant