WO2017107596A1 - Terminal, photographing method for terminal, and computer storage medium

Info

Publication number: WO2017107596A1
Application number: PCT/CN2016/099502
Authority: WIPO (PCT)
Prior art keywords: focus, target, shooting, terminal, pixel
Other languages: English (en), French (fr)
Inventor: 谢书勋
Applicant: 努比亚技术有限公司
Related US application: 16/064,143 (granted as US10659675B2)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control based on recognised objects, where the recognised objects include parts of the human body
    • H04N 23/62: Control of parameters via user interfaces
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/633: Electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N 23/635: Region indicators; field of view indicators
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 23/673: Focus control based on contrast or high-frequency components of image signals, e.g. hill-climbing method
    • H04N 23/676: Bracketing for image capture at varying focusing conditions
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • The present invention relates to the field of terminal data processing technologies, and in particular to a photographing method for a terminal, a terminal, and a computer storage medium.
  • The main purpose of the embodiments of the present invention is to provide a photographing method for a terminal, a terminal, and a computer storage medium, intended to enable shooting targets in different depth-of-field ranges to appear sharp in the captured image.
  • an embodiment of the present invention provides a method for shooting by a terminal, and the method may include:
  • the terminal generates a focus frame corresponding to each shooting target in the preview image;
  • during shooting focus, the terminal acquires the image data in the focus frame of each shooting target at the moment that target is in sharpest focus;
  • in the final preview image obtained when the shooting focal length is determined, the terminal fuses, according to a preset image fusion technique, the image data in the focus frames of all shooting targets at their sharpest focus, generating the final captured image.
  • the acquiring, by the terminal, of the image data in the focus frame of each shooting target at its sharpest focus may include:
  • the terminal caches the image data of the shooting target in the focus frame.
  • the fusing, by the terminal in the final preview image when the focal length is determined, of the image data in the focus frames of all shooting targets at their sharpest focus according to a preset image fusion technique to generate the final captured image may include:
  • the terminal generates a transition region of a preset range at the edge of the focus frame of shooting target i, wherein i denotes the serial number of the shooting target and ranges from 1 to N, N being the number of shooting targets;
  • the terminal splices the image data from the focus frame of shooting target i at its sharpest focus into the corresponding focus frame in the final preview image;
  • the terminal fuses the image data of the first transition region, corresponding to shooting target i at its sharpest focus, with the image data of the second transition region, corresponding to shooting target i in the final preview image.
  • the fusing, by the terminal, of the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image may include:
  • the method further includes:
  • the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure; when the contrast value of the shooting target's pixel values is at its maximum, the shooting target is in its sharpest state.
  • an embodiment of the present invention provides a terminal, where the terminal may include: a generating unit, a focusing unit, an acquiring unit, and a fusion unit;
  • the generating unit is configured to generate a focus frame corresponding to the shooting target in the preview image
  • the focusing unit is configured to perform shooting focus, and triggers the acquiring unit during shooting focus;
  • the acquiring unit is configured to acquire image data in the focus frame when each shooting target is in the clearest focus;
  • the fusion unit is configured to fuse, in the final preview image when the shooting focal length is determined, the image data in the focus frames of all shooting targets at their sharpest focus according to a preset image fusion technique, to generate the final captured image.
  • the acquiring unit is configured to buffer the image data of the shooting target in the focus frame when the contrast value between the pixels of the shooting target is the largest.
  • the fusion unit includes: a transition region generating subunit, a splicing subunit, and a fusion subunit;
  • the transition region generating subunit is configured to generate a transition region of a preset range at the edge of the focus frame of shooting target i, wherein i denotes the serial number of the shooting target and ranges from 1 to N, N being the number of shooting targets;
  • the splicing subunit is configured to splice the image data from the focus frame of shooting target i at its sharpest focus into the corresponding focus frame in the final preview image;
  • the fusion subunit is configured to fuse the image data of the first transition region, corresponding to shooting target i at its sharpest focus, with the image data of the second transition region, corresponding to shooting target i in the final preview image.
  • the fusion subunit is configured to:
  • the acquiring unit is further configured to:
  • the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure; when the contrast value of the shooting target's pixel values is at its maximum, the shooting target is in its sharpest state.
  • an embodiment of the present invention provides a computer storage medium storing computer-executable instructions, where the computer-executable instructions include:
  • the image data in the focus frames when all shooting targets are at their sharpest focus is fused, and the final captured image is generated.
  • the computer executable instructions further include:
  • the terminal caches the image data of the shooting target in the focus frame.
  • the computer executable instructions further include:
  • the terminal generates a transition region of a preset range at the edge of the focus frame of shooting target i, wherein i denotes the serial number of the shooting target and ranges from 1 to N, N being the number of shooting targets;
  • the terminal splices the image data from the focus frame of shooting target i at its sharpest focus into the corresponding focus frame in the final preview image;
  • the terminal fuses the image data of the first transition region, corresponding to shooting target i at its sharpest focus, with the image data of the second transition region, corresponding to shooting target i in the final preview image.
  • the computer executable instructions further include:
  • the computer executable instructions further include:
  • the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure; when the contrast value of the shooting target's pixel values is at its maximum, the shooting target is in its sharpest state.
  • an embodiment of the present invention provides a terminal, where the terminal includes a processor and a memory, the memory stores computer-executable instructions, and the processor performs corresponding processing according to the computer-executable instructions;
  • the processor is configured to:
  • the image data in the focus frames when all shooting targets are at their sharpest focus is fused, and the final captured image is generated.
  • the processor is further configured to:
  • the image data of the shooting target in the focus frame is cached.
  • the processor is further configured to:
  • a transition region of a preset range is generated at the edge of the focus frame of shooting target i, wherein i denotes the serial number of the shooting target and ranges from 1 to N, N being the number of shooting targets;
  • the image data of the first transition region, corresponding to shooting target i at its sharpest focus, is fused with the image data of the second transition region, corresponding to shooting target i in the final preview image.
  • the processor is further configured to:
  • the processor is further configured to:
  • the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure; when the contrast value of the shooting target's pixel values is at its maximum, the shooting target is in its sharpest state.
  • Embodiments of the present invention provide a photographing method for a terminal, a terminal, and a computer storage medium. When the terminal shoots, the images in which targets at different depths of field are sharpest during focusing are fused by a preset image fusion technique, so that shooting targets in different depth-of-field ranges all appear sharp in the captured image.
  • FIG. 1 is a schematic structural diagram of the hardware of a mobile terminal according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of a method for shooting by a terminal according to an embodiment of the present invention;
  • FIG. 3 is a schematic flowchart of a method for generating a captured image according to an embodiment of the present invention;
  • FIG. 4 is a schematic flowchart of a method for image fusion according to an embodiment of the present invention;
  • FIG. 5 is a schematic diagram of two shooting targets in different depth-of-field ranges according to an embodiment of the present invention;
  • FIG. 6 is a schematic flowchart of a specific implementation of the method for shooting by a terminal according to an embodiment of the present invention;
  • FIG. 7 is a schematic diagram of a terminal displaying a preview image according to an embodiment of the present invention;
  • FIG. 8 is a schematic diagram of a preview image according to an embodiment of the present invention;
  • FIG. 9 is a schematic diagram of a focused image according to an embodiment of the present invention;
  • FIG. 10 is a schematic flowchart of a specific implementation of generating a captured image according to an embodiment of the present invention;
  • FIG. 11 is a schematic diagram of a transition region according to an embodiment of the present invention;
  • FIG. 12 is a schematic diagram of pixel weight values in a transition region according to an embodiment of the present invention;
  • FIG. 13 is a schematic diagram of a final captured image according to an embodiment of the present invention;
  • FIG. 14 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
  • FIG. 15 is a schematic structural diagram of another terminal according to an embodiment of the present invention.
  • The mobile terminal can be implemented in various forms. The terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers.
  • In the following description it is assumed that the terminal is a mobile terminal; however, configurations according to embodiments of the present invention can also be applied to fixed terminals, apart from components intended specifically for mobile purposes.
  • FIG. 1 is a schematic diagram showing the hardware structure of a mobile terminal embodying various embodiments of the present invention.
  • the mobile terminal 100 may include an audio/video (A/V) input unit 120, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 1 shows a mobile terminal having various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The components of the mobile terminal are described in detail below.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by the image capture device in a video capture mode or an image capture mode.
  • the processed image frame can be displayed on the display unit 151.
  • The image frames processed by the camera 121 can be stored in the memory 160 (or another storage medium). Two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • The microphone 122 can receive sound (audio data) in operating modes such as a telephone call mode, a recording mode, or a voice recognition mode, and can process such sound into audio data.
  • In the case of the telephone call mode, the processed audio (voice) data can be converted into a format that can be transmitted to a mobile communication base station via a mobile communication module (not shown).
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel, a rocker, and the like.
  • In particular, when the touch pad is overlaid on the display unit 151 in a layered manner, a touch screen can be formed.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • The external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
  • The identification module may store various information for verifying a user of the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • The interface unit 170 can be configured to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • The interface unit 170 may serve as a path through which power is supplied from a base to the mobile terminal 100, or as a path through which various command signals input at the base are transmitted to the mobile terminal.
  • Various command signals or power input from the base can serve as signals for recognizing whether the mobile terminal is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow viewing from the outside; such displays may be referred to as transparent displays, and a typical transparent display is, for example, a TOLED (Transparent Organic Light-Emitting Diode) display.
  • The mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • the memory 160 may store a software program or the like that performs processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, and the like) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • The memory 160 may include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, which may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • The embodiments described herein may be implemented using at least one of: application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein. In some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • The software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in the memory 160 and executed by the controller 180.
  • In the following, a slide-type mobile terminal is taken as an example from among various types of mobile terminals such as folding, bar, swing, and slide types. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide-type mobile terminal.
  • FIG. 2 is a flowchart of a method for shooting a terminal according to an embodiment of the present invention. The method may be applied to the terminal shown in FIG. 1. The method may include:
  • S201: the terminal generates a focus frame corresponding to each shooting target in the preview image;
  • The user input unit 130 of the terminal may acquire a preview image through the viewfinder of the camera 121, recognize the shooting targets in the preview image according to a preset recognition algorithm, and, after the shooting targets are recognized, generate a corresponding focus frame for each shooting target.
  • the terminal can display the preview image acquired by the viewfinder of the camera 121 through the display unit 151, and the focus frame of each shooting target is also displayed in the preview image.
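The focus-frame generation described above can be sketched as follows. This is an illustrative sketch, not part of the patent disclosure: the `(x, y, w, h)` box format and the padding value are assumptions, and the recognition step itself is stubbed out as a list of detected bounding boxes.

```python
# Sketch (assumed, not the patent's algorithm): given bounding boxes returned
# by some recognition step, derive one focus frame per shooting target by
# padding each box and clamping it to the preview-image bounds.

def focus_frames(targets, img_w, img_h, pad=10):
    """targets: list of (x, y, w, h) boxes; returns padded focus frames."""
    frames = []
    for x, y, w, h in targets:
        x0 = max(0, x - pad)
        y0 = max(0, y - pad)
        x1 = min(img_w, x + w + pad)
        y1 = min(img_h, y + h + pad)
        frames.append((x0, y0, x1 - x0, y1 - y0))
    return frames

print(focus_frames([(30, 40, 100, 80)], 640, 480))  # [(20, 30, 120, 100)]
```

Each frame can then be drawn over the preview image by the display unit, one per recognized target.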
  • S202: during the shooting-focus process, the terminal acquires the image data in the focus frame of each shooting target at the moment that target is in sharpest focus;
  • The specific shooting-focus process can be realized by the camera 121 performing autofocus, where the autofocus is contrast-based.
  • The specific process may be to control the movement of the lens assembly via the actuator according to the contrast change of the image at the focus point, searching for the lens position at which the contrast is maximal, that is, the position at which focus is accurate.
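The contrast-maximizing search described above can be sketched as a simple exhaustive hill-climb. This is an illustrative sketch, not the patent's implementation; `contrast_at` is a hypothetical stand-in for moving the lens, reading the sensor, and scoring the focus frame.

```python
# Minimal contrast-autofocus sketch (assumed): step the lens through its
# search range and keep the position whose focus window scores highest.

def autofocus(positions, contrast_at):
    best_pos, best_score = None, float("-inf")
    for pos in positions:          # actuator moves the lens to each position
        score = contrast_at(pos)   # contrast of the image in the focus frame
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos

# Toy contrast curve peaking at lens position 7:
peak = autofocus(range(16), lambda p: -(p - 7) ** 2)
print(peak)  # 7
```

A production hill-climb would stop stepping once the score starts falling rather than sweeping the whole range, but the selection criterion is the same.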
  • the actuator may include one or more voice coil actuators, stepper motors, piezoelectric actuators, or other types of actuators that are capable of moving the lens assembly between different lens positions within the search range.
  • One common actuator is the voice coil motor (VCM). In a VCM, upper and lower elastic spring plates are fixed within a magnet group. When the coil is energized, it generates a magnetic field that interacts with the magnet group, the coil moves upward, and the lens locked in the coil moves with it; when power is cut off, the coil returns under the elastic force of the springs, thus achieving the autofocus function.
  • The acquiring, by the terminal, of the image data in the focus frame of each shooting target at its sharpest focus includes:
  • the terminal caches the image data of the shooting target in the focus frame.
  • S203: in the final preview image obtained when the shooting focal length is determined, the terminal fuses, according to a preset image fusion technique, the image data in the focus frames of all shooting targets at their sharpest focus, to generate the final captured image.
  • Step S203 fuses the sharp images of the shooting targets at different depths of field acquired during focusing, so that shooting targets at different depths of field all appear sharp in the final captured image.
  • step S203 may specifically include:
  • S2031: the terminal generates a transition region of a preset range at the edge of the focus frame of shooting target i;
  • i is the number of the shooting target, ranging from 1 to N, where N is the number of shooting targets;
  • S2032: the terminal splices the image data from the focus frame of shooting target i at its sharpest focus into the corresponding focus frame in the final preview image;
  • S2033: the terminal fuses the image data of the first transition region, corresponding to shooting target i at its sharpest focus, with the image data of the second transition region, corresponding to shooting target i in the final preview image.
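The splicing step can be sketched as a plain rectangle copy. This is an illustrative sketch; representing images as nested lists of pixel rows is an assumption made for clarity.

```python
# Sketch (assumed representation): copy the cached sharp focus-frame pixels
# into the same rectangle of the final preview image.

def splice(preview, sharp_patch, x, y):
    """Overwrite preview at top-left (x, y) with the rows of sharp_patch."""
    for r, row in enumerate(sharp_patch):
        for c, v in enumerate(row):
            preview[y + r][x + c] = v
    return preview

img = [[0] * 4 for _ in range(3)]
print(splice(img, [[9, 9], [9, 9]], 1, 1))
# [[0, 0, 0, 0], [0, 9, 9, 0], [0, 9, 9, 0]]
```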
  • For step S2033, referring to FIG. 4, this embodiment provides a method for fusing the images; the specific process may include:
  • S20331: within the range of the first transition region, a first weight value is set for each pixel along a first direction from the edge of the focus frame toward the edge of the transition region, the first weight value gradually decreasing along the first direction;
  • S20332: a second weight value is set for each pixel along a second direction from the edge of the transition region toward the edge of the focus frame, the second weight value gradually decreasing along the second direction;
  • S20333: for each pixel, the pixel value in the first transition region weighted by its corresponding first weight value is summed with the pixel value in the second transition region weighted by its corresponding second weight value.
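The weighted-sum fusion above can be sketched in one dimension as follows. This is an illustrative sketch: the exact weight profile and the normalization are assumptions, since the text only requires that each weight sequence decrease along its own direction; real code would apply this per row or column of the 2-D transition band.

```python
# Hedged sketch of the transition-region blend: across the width of the band,
# the sharp image's weight ramps down while the preview image's weight ramps
# up, and each output pixel is the normalized weighted sum of the two sources.

def blend_strip(sharp, preview):
    n = len(sharp)
    out = []
    for k in range(n):
        w1 = n - k          # first weight: decreases away from the focus frame
        w2 = k + 1          # second weight: decreases toward the focus frame
        out.append((sharp[k] * w1 + preview[k] * w2) / (w1 + w2))
    return out

print(blend_strip([100, 100, 100], [40, 40, 40]))  # [85.0, 70.0, 55.0]
```

Because the weights cross over smoothly, the spliced sharp patch fades into the surrounding preview image instead of leaving a visible seam at the focus-frame edge.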
  • This embodiment provides a photographing method for a terminal in which the images captured at the moments when shooting targets at different depths of field are sharpest during autofocus are fused, so that shooting targets in different depth-of-field ranges all appear sharp in the final captured image.
  • Based on the same technical concept as the foregoing embodiment, and taking the two shooting targets in different depth-of-field ranges shown in FIG. 5 as an example, a specific implementation of the terminal shooting method of the above embodiment is described with reference to FIG. 6. It should be noted that the two shooting targets are at different distances from the terminal: in this embodiment, shooting target 1 is closer to the terminal and shooting target 2 is farther away.
  • the specific implementation process is as shown in FIG. 6 and may include:
  • S601: the terminal starts the photographing function and generates a preview image through the viewfinder of the camera;
  • After the terminal generates the preview image, the preview image can be displayed as shown in FIG. 7.
  • the specific preview image is as shown in FIG. 8.
  • The left side of the preview image shows shooting target 1 and the right side shows shooting target 2; it can be seen that the two targets are in different depth-of-field ranges, with shooting target 1 closer to the terminal than shooting target 2.
  • S602: after identifying shooting target 1 and shooting target 2 from the preview image using a preset recognition algorithm, the terminal generates a corresponding focus frame for each shooting target;
  • the white box on the left side of the preview image is the focus frame of the shooting target 1
  • the white box on the right side of the preview image is the focus frame of the shooting target 2;
  • S603: during the shooting-focus process, the terminal acquires the image data in the respective focus frames at the moments when the inter-pixel contrast value of shooting target 1 and that of shooting target 2 are at their maxima;
  • The contrast characteristic of the shooting target's pixel values is taken as the sharpness measure; therefore, when the contrast value of the shooting target's pixel values is at its maximum, the shooting target can be regarded as being in its sharpest state.
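One common pixel-contrast score consistent with the above (an illustrative choice, not necessarily the measure used in the patent) is the sum of squared differences between horizontally adjacent pixels inside the focus frame, which peaks when edges are sharpest:

```python
# Assumed contrast measure: sum of squared differences between adjacent
# pixels in each row of the focus-frame window. A sharp edge produces large
# neighbour differences; defocus blur smears them out.

def contrast(rows):
    return sum(
        (row[i + 1] - row[i]) ** 2
        for row in rows
        for i in range(len(row) - 1)
    )

sharp = [[0, 255, 0, 255]]        # hard edges -> high contrast
blurry = [[100, 140, 120, 130]]   # smeared edges -> low contrast
print(contrast(sharp) > contrast(blurry))  # True
```

Practical implementations often use a Laplacian- or gradient-based score instead, but any such measure serves the same role: it is maximal at the lens position where the target is sharpest.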
  • the depth of field of the auto-focusing sequence is changed from near to far and then to the near end. Therefore, when the terminal captures the preview image, the camera focuses on the shooting target 1 The depth of field, therefore, the shooting target 1 is the clearest at this time, and the shooting target 2 is blurred;
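As an illustrative aside (not part of the disclosure): the contrast-as-sharpness measure above can be sketched in a few lines. The function name and the squared-difference metric are assumptions; real camera pipelines use hardware statistics or gradient filters, but the idea is the same: the score peaks when the focus frame is sharpest.

```python
def region_contrast(pixels):
    """Score the sharpness of a grayscale region (a list of pixel rows)
    as the mean squared difference between horizontally adjacent pixels.
    Higher scores indicate a sharper (more in-focus) region."""
    total, count = 0, 0
    for row in pixels:
        for a, b in zip(row, row[1:]):
            total += (a - b) ** 2
            count += 1
    return total / count if count else 0.0
```

A hard edge scores higher than the same edge softened by defocus, which is what lets the autofocus loop detect the sharpest lens position.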
  • As the VCM in the camera moves the lens to perform autofocus, the depth of field at which the camera focuses becomes larger than that of shooting target 1; the camera then focuses at the depth of field of shooting target 2, so that, as shown in FIG. 9, shooting target 2 is sharpest and shooting target 1 appears blurred.
  • The terminal can acquire and cache the image data of each shooting target within its focus frame at the moment that target is sharpest, thereby providing the data material for the subsequent image fusion.
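The caching step above can be sketched as follows. This is an illustrative example only: the function names are assumptions, frames are modeled as grayscale nested lists, and focus frames as (x, y, w, h) boxes. For every target, the crop with the highest contrast score seen so far during the autofocus sweep is kept.

```python
def sharpness(crop):
    """Mean squared difference of horizontally adjacent pixels."""
    diffs = [(a - b) ** 2 for row in crop for a, b in zip(row, row[1:])]
    return sum(diffs) / len(diffs) if diffs else 0.0

def cache_sharpest(frames, focus_frames):
    """During the autofocus sweep, keep for every shooting target the
    focus-frame crop whose contrast score is the highest so far."""
    best = {}  # target id -> (best score, cached crop)
    for frame in frames:
        for tid, (x, y, w, h) in focus_frames.items():
            crop = [row[x:x + w] for row in frame[y:y + h]]
            score = sharpness(crop)
            if tid not in best or score > best[tid][0]:
                best[tid] = (score, crop)
    return {tid: crop for tid, (_, crop) in best.items()}
```

Each target thus ends up cached at its own sharpest moment, even though those moments occur at different lens positions.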
  • S604: the terminal generates the final preview image after autofocus is completed.
  • Understandably, since the autofocus depth-of-field sweep runs from near to far and back to near, the final preview image obtained by the terminal is the same as FIG. 8: shooting target 1 is sharpest and shooting target 2 is blurred.
  • S605: the terminal fuses the image data that was in the focus frame of shooting target 2 when the contrast value between its pixels was maximal into the corresponding area of the final preview image, generating the final captured image.
  • Understandably, since shooting target 1 is already sharpest in the final preview image, only the image data of shooting target 2 at its sharpest needs to be fused in. In a specific implementation, as shown in FIG. 10, step S605 may include:
  • S6051: the terminal expands the edge of the focus frame of shooting target 2 outward by a range of 20 pixels to serve as a transition area.
  • As shown in FIG. 11, the dotted-line frame is the edge of the focus frame of shooting target 2, and the area between the solid-line frame and the dotted-line frame is the transition area of shooting target 2.
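The 20-pixel transition band can be derived from the focus frame by a simple rectangle expansion. A sketch under assumed (x, y, w, h) box coordinates, clamped to the image bounds; the band itself is the area between the returned outer rectangle and the original focus frame:

```python
def transition_region(box, margin=20, image_size=(1920, 1080)):
    """Expand a focus frame (x, y, w, h) outward by `margin` pixels on
    every side, clamped to the image; the ring between the returned
    rectangle and the original frame is the transition area."""
    x, y, w, h = box
    iw, ih = image_size
    ox, oy = max(0, x - margin), max(0, y - margin)
    ow = min(iw, x + w + margin) - ox
    oh = min(ih, y + h + margin) - oy
    return (ox, oy, ow, oh)
```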
  • S6052: the terminal splices the image data that was in the focus frame of shooting target 2 at its sharpest focus into the corresponding focus frame of shooting target 2 in the final preview image.
  • S6053: within the first transition region of shooting target 2, a first weight value is set for each pixel along the first direction, from the edge of the focus frame to the edge of the transition region. The first transition region is the transition region corresponding to shooting target 2 at its sharpest focus, and the first weight value gradually decreases along the first direction; as shown in the upper half of FIG. 12, the first weight value decreases from A1 to A20.
  • S6054: within the second transition region of shooting target 2, a second weight value is set for each pixel along the second direction, from the edge of the transition region to the edge of the focus frame. The second transition region is the transition region corresponding to shooting target 2 in the final preview image, and the second weight value gradually decreases along the second direction; as shown in the lower half of FIG. 12, the second weight value decreases from B1 to B20.
  • S6055: for every pixel in the transition area, a weighted sum is taken of its pixel value and first weight in the first transition region and its pixel value and second weight in the second transition region, yielding the pixel value of each transition-area pixel in the final captured image.
  • Specifically, take pixel A1 of the first transition region as an example, with the pixel values given as RGB values: let the weight of A1 be W_a1 and its RGB values be R_a1, G_a1, B_a1; let the pixel of the second transition region corresponding to A1 be B20, with weight W_b20 and RGB values R_b20, G_b20, B_b20; and let C20 be the pixel obtained by fusing B20 and A1, with RGB values R_c20, G_c20, B_c20. Then the RGB values of C20 are:
  • R_c20 = R_a1 × W_a1 + R_b20 × W_b20;
  • G_c20 = G_a1 × W_a1 + G_b20 × W_b20;
  • B_c20 = B_a1 × W_a1 + B_b20 × W_b20;
  • According to the weights shown in FIG. 12, the weight W_b20 of pixel B20 is 5% and the weight W_a1 of pixel A1 is 95%, from which the RGB values of the pixel C20 obtained by fusing pixel B20 and pixel A1 can be calculated as:
  • R_c20 = R_a1 × 95% + R_b20 × 5%;
  • G_c20 = G_a1 × 95% + G_b20 × 5%;
  • B_c20 = B_a1 × 95% + B_b20 × 5%;
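The per-pixel weighted sum above is straightforward to express in code. A sketch, assuming a 20-pixel band and a linear weight ramp in the spirit of FIG. 12; the exact weight spacing is an assumption, since the text only fixes W_a1 = 95% and W_b20 = 5%, with each overlapping pair of weights summing to 1:

```python
def ramp_weights(n=20):
    """Linear weight ramps for the two transition bands: A1..An fall
    from (n-1)/n toward 0, and B1..Bn are chosen so that each
    overlapping pair A_i, B_{n+1-i} sums to 1 (e.g. A1=0.95, B20=0.05)."""
    a = [(n - i) / n for i in range(1, n + 1)]
    b = [1 - (j - 1) / n for j in range(1, n + 1)]
    return a, b

def blend_pixel(rgb_a, w_a, rgb_b, w_b):
    """Weighted sum of two RGB pixels, as in R_c = R_a*W_a + R_b*W_b."""
    return tuple(round(ca * w_a + cb * w_b) for ca, cb in zip(rgb_a, rgb_b))
```

Because the two ramps run in opposite directions and each pair sums to 1, the spliced crop dominates at the focus-frame edge and the final preview image dominates at the outer edge, which is what removes the visible seam.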
  • Understandably, the fused RGB values of the other pixels of the transition area are computed in the same way, which is not repeated here. After all pixels of the transition area have been fused according to the above calculation, the final captured image can be obtained: as shown in FIG. 13, shooting target 1 and shooting target 2, which are in different depth-of-field ranges, both become sharp after image fusion, and the transition at the fused edge is uniform, with no obvious boundary.
  • FIG. 14 shows the structure of a terminal 1400 according to an embodiment of the present invention, where the terminal 1400 includes: a generating unit 1401, a focusing unit 1402, an acquiring unit 1403 and a fusing unit 1404; wherein,
  • the generating unit 1401 is configured to generate, in the preview image, a focus frame corresponding to each shooting target;
  • the focusing unit 1402 is configured to perform shooting focus and to trigger the acquiring unit 1403 during shooting focus;
  • the acquiring unit 1403 is configured to acquire the image data in the focus frame of each shooting target at its sharpest focus;
  • the fusing unit 1404 is configured to fuse, in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, generating the final captured image.
  • In the above solution, the acquiring unit 1403 is specifically configured to cache the image data of a shooting target within its focus frame when the contrast value between the pixels of that shooting target is maximal.
  • In the above solution, the fusing unit 1404 specifically includes: a transition-region generating subunit 14041, a splicing subunit 14042 and a fusing subunit 14043; wherein,
  • the transition-region generating subunit 14041 is configured to generate a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
  • the splicing subunit 14042 is configured to splice the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
  • the fusing subunit 14043 is configured to fuse the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
  • In the above solution, the fusing subunit 14043 is specifically configured to:
  • set, within the first transition region of shooting target i, a first weight value for each pixel along the first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
  • set, within the second transition region of shooting target i, a second weight value for each pixel along the second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction; and
  • take a weighted sum of each pixel's value and first weight in the first transition region and each pixel's value and second weight in the second transition region, yielding the pixel value of each transition-region pixel of shooting target i in the final captured image.
  • An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions comprising:
  • generating, in the preview image, a focus frame corresponding to each shooting target;
  • acquiring, during shooting focus, the image data in the focus frame of each shooting target at its sharpest focus;
  • fusing, in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, generating the final captured image.
  • In one implementation, the computer-executable instructions further include:
  • caching, by the terminal, the image data of a shooting target within its focus frame when the contrast value between the pixels of that shooting target is maximal.
  • In one implementation, the computer-executable instructions further include:
  • generating, by the terminal, a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
  • splicing, by the terminal, the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
  • fusing, by the terminal, the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
  • In one implementation, the computer-executable instructions further include:
  • setting, within the first transition region of shooting target i, a first weight value for each pixel along the first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
  • setting, within the second transition region of shooting target i, a second weight value for each pixel along the second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction;
  • taking a weighted sum of each pixel's value and first weight in the first transition region and each pixel's value and second weight in the second transition region, yielding the pixel value of each transition-region pixel of shooting target i in the final captured image.
  • In one implementation, the computer-executable instructions further include: if the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure, then the shooting target is in its sharpest state when the contrast value of its pixel values is maximal.
  • An embodiment of the present invention further provides a terminal, the terminal including a processor and a memory, wherein the memory stores computer-executable instructions and the processor executes corresponding processing according to the computer-executable instructions;
  • the processor is configured to:
  • generate, in the preview image, a focus frame corresponding to each shooting target;
  • perform shooting focus, and trigger acquisition during shooting focus;
  • acquire the image data in the focus frame of each shooting target at its sharpest focus;
  • fuse, in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, generating the final captured image.
  • The processor is further configured to: cache the image data of a shooting target within its focus frame when the contrast value between the pixels of that shooting target is maximal.
  • The processor is further configured to: generate a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets; splice the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image; and fuse the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
  • The processor is further configured to: set, within the first transition region of shooting target i, a first weight value for each pixel along the first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction; set, within the second transition region, a second weight value for each pixel along the second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction; and take a weighted sum of each pixel's value and first weight in the first transition region and each pixel's value and second weight in the second transition region, yielding the pixel value of each transition-region pixel in the final captured image.
  • The processor is further configured such that, if the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure, the shooting target is in its sharpest state when the contrast value of its pixel values is maximal.
  • From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the foregoing embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present invention that is essential, or that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk or optical disc) and including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
  • Embodiments of the present invention provide a terminal, a shooting method for a terminal, and a computer storage medium. When the terminal shoots, the sharpest images of shooting targets at different depths of field during the focusing process are fused by a preset image fusion technique, so that shooting targets in different depth-of-field ranges can all be sharp in the captured image.

Abstract

Embodiments of the present invention disclose a terminal, a shooting method for a terminal, and a computer storage medium. The method may include: a terminal generates, in a preview image, a focus frame corresponding to each shooting target; during shooting focus, the terminal acquires the image data in the focus frame of each shooting target at its sharpest focus; and, in the final preview image at the determined shooting focal length, the terminal fuses, according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, generating the final captured image.

Description

Terminal, shooting method for a terminal, and computer storage medium
Technical Field
The present invention relates to the technical field of terminal data processing, and in particular to a terminal, a shooting method for a terminal, and a computer storage medium.
Background
When taking a photograph, if the viewfinder contains multiple shooting targets, taking multiple human faces as an example, and the lens depth of field is relatively small, all the faces must lie in the same focal plane, that is, all faces must lie within the depth-of-field range, for every face to be imaged sharply. When the faces lie in different focal planes, then, because of the depth of field, the faces within the depth-of-field range are photographed sharply while the faces outside it are imaged blurrily.
Summary
The main purpose of the embodiments of the present invention is to provide a terminal, a shooting method for a terminal, and a computer storage medium, aiming to enable shooting targets in different depth-of-field ranges to be captured sharply in the captured image.
To achieve the above purpose, the technical solutions of the embodiments of the present invention are implemented as follows:
In a first aspect, an embodiment of the present invention provides a shooting method for a terminal, and the method may include:
generating, by a terminal, a focus frame corresponding to each shooting target in a preview image;
acquiring, by the terminal during shooting focus, the image data in the focus frame of each shooting target at its sharpest focus;
fusing, by the terminal in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, generating the final captured image.
In the above solution, the acquiring, by the terminal during shooting focus, of the image data in the focus frame of each shooting target at its sharpest focus includes:
caching, by the terminal, the image data of a shooting target within its focus frame when the contrast value between the pixels of that shooting target is maximal.
In the above solution, the fusing, by the terminal in the final preview image at the determined shooting focal length and according to a preset image fusion technique, of the image data that was in each shooting target's focus frame at its sharpest focus to generate the final captured image includes:
generating, by the terminal, a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
splicing, by the terminal, the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
fusing, by the terminal, the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
In the above solution, the fusing of the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image includes:
setting, within the first transition region of shooting target i, a first weight value for each pixel along a first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
setting, within the second transition region of shooting target i, a second weight value for each pixel along a second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction;
taking a weighted sum of each pixel's value and corresponding first weight in the first transition region and each pixel's value and corresponding second weight in the second transition region, to obtain the pixel value of each transition-region pixel of shooting target i in the final captured image.
In the above solution, the method further includes:
if the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure, then when the contrast value of the shooting target's pixel values is maximal, the shooting target is in its sharpest state.
In a second aspect, an embodiment of the present invention provides a terminal, and the terminal may include: a generating unit, a focusing unit, an acquiring unit and a fusing unit; wherein,
the generating unit is configured to generate, in a preview image, a focus frame corresponding to each shooting target;
the focusing unit is configured to perform shooting focus and to trigger the acquiring unit during shooting focus;
the acquiring unit is configured to acquire the image data in the focus frame of each shooting target at its sharpest focus;
the fusing unit is configured to fuse, in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, generating the final captured image.
In the above solution, the acquiring unit is configured to cache the image data of a shooting target within its focus frame when the contrast value between the pixels of that shooting target is maximal.
In the above solution, the fusing unit includes: a transition-region generating subunit, a splicing subunit and a fusing subunit; wherein,
the transition-region generating subunit is configured to generate a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
the splicing subunit is configured to splice the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
the fusing subunit is configured to fuse the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
In the above solution, the fusing subunit is configured to:
set, within the first transition region of shooting target i, a first weight value for each pixel along a first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
set, within the second transition region of shooting target i, a second weight value for each pixel along a second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction;
take a weighted sum of each pixel's value and corresponding first weight in the first transition region and each pixel's value and corresponding second weight in the second transition region, to obtain the pixel value of each transition-region pixel of shooting target i in the final captured image.
In the above solution, the acquiring unit is further configured such that, if the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure, then when the contrast value of the shooting target's pixel values is maximal, the shooting target is in its sharpest state.
In a third aspect, an embodiment of the present invention provides a computer storage medium storing computer-executable instructions, the computer-executable instructions comprising:
generating, in a preview image, a focus frame corresponding to each shooting target;
acquiring, during shooting focus, the image data in the focus frame of each shooting target at its sharpest focus;
fusing, in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, generating the final captured image.
In the above solution, the computer-executable instructions further include:
caching, by the terminal, the image data of a shooting target within its focus frame when the contrast value between the pixels of that shooting target is maximal.
In the above solution, the computer-executable instructions further include:
generating, by the terminal, a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
splicing, by the terminal, the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
fusing, by the terminal, the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
In the above solution, the computer-executable instructions further include:
setting, within the first transition region of shooting target i, a first weight value for each pixel along a first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
setting, within the second transition region of shooting target i, a second weight value for each pixel along a second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction;
taking a weighted sum of each pixel's value and corresponding first weight in the first transition region and each pixel's value and corresponding second weight in the second transition region, to obtain the pixel value of each transition-region pixel of shooting target i in the final captured image.
In the above solution, the computer-executable instructions further include:
if the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure, then when the contrast value of the shooting target's pixel values is maximal, the shooting target is in its sharpest state.
In a fourth aspect, an embodiment of the present invention provides a terminal, the terminal including a processor and a memory, wherein the memory stores computer-executable instructions and the processor executes corresponding processing according to the computer-executable instructions;
the processor is configured to:
generate, in a preview image, a focus frame corresponding to each shooting target;
perform shooting focus, and trigger acquisition during shooting focus;
acquire the image data in the focus frame of each shooting target at its sharpest focus;
fuse, in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, generating the final captured image.
In the above solution, the processor is further configured to:
cache the image data of a shooting target within its focus frame when the contrast value between the pixels of that shooting target is maximal.
In the above solution, the processor is further configured to:
generate a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
splice the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
fuse the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
In the above solution, the processor is further configured to:
set, within the first transition region of shooting target i, a first weight value for each pixel along a first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
set, within the second transition region of shooting target i, a second weight value for each pixel along a second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction;
take a weighted sum of each pixel's value and corresponding first weight in the first transition region and each pixel's value and corresponding second weight in the second transition region, to obtain the pixel value of each transition-region pixel of shooting target i in the final captured image.
In the above solution, the processor is further configured such that:
if the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure, then when the contrast value of the shooting target's pixel values is maximal, the shooting target is in its sharpest state.
With the terminal, the shooting method for a terminal and the computer storage medium provided by the embodiments of the present invention, when the terminal shoots, the sharpest images of the shooting targets at different depths of field during the focusing process are fused by a preset image fusion technique, so that shooting targets in different depth-of-field ranges can be sharp in the captured image.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a shooting method for a terminal according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a method of generating a captured image according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of an image fusion method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of two shooting targets in different depth-of-field ranges according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of a specific implementation of a shooting method for a terminal according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a terminal displaying a preview image according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a preview image according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a focused image according to an embodiment of the present invention;
FIG. 10 is a schematic flowchart of a specific implementation of a method of generating a captured image according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a transition region according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of pixel weight values in a transition region according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of a final captured image according to an embodiment of the present invention;
FIG. 14 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
FIG. 15 is a schematic structural diagram of another terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings of the embodiments of the present invention.
A mobile terminal implementing the various embodiments of the present invention will now be described with reference to FIG. 1. In the following description, suffixes such as "module", "part" or "unit" used to denote elements are used only to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module" and "part" may be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following it is assumed that the terminal is a mobile terminal; however, those skilled in the art will understand that, apart from elements used especially for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed-type terminals.
FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing the various embodiments of the present invention.
The mobile terminal 100 may include an audio/video (A/V) input unit 120, a user input unit 130, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190 and so on. FIG. 1 shows a mobile terminal having various components, but it should be understood that not all of the illustrated components are required to be implemented; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The A/V input unit 120 is used to receive audio or video signals. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by the image capture device in a video capture mode or an image capture mode, and the processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium), and two or more cameras 121 may be provided according to the construction of the mobile terminal. The microphone 122 may receive sound (audio data) in operating modes such as a phone call mode, a recording mode and a voice recognition mode, and can process such sound into audio data. In the case of the phone call mode, the processed audio (voice) data may be converted into a format that can be sent to a mobile communication base station via a mobile communication module (not shown) and output. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated in the course of receiving and sending audio signals.
The user input unit 130 may generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (for example, a touch-sensitive component that detects changes in resistance, pressure, capacitance and so on caused by being touched), a scroll wheel, a joystick and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port and so on. The identification module may store various information for verifying the user's use of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM) and so on. In addition, the device having the identification module (hereinafter referred to as the "identification device") may take the form of a smart card; therefore, the identification device may be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (for example, data information, power and so on) from an external device and transmit the received input to one or more elements within the mobile terminal 100, or may be used to transmit data between the mobile terminal and an external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal. The various command signals or power input from the cradle may serve as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is constructed to provide output signals (for example, audio signals, video signals, alarm signals, vibration signals and so on) in a visual, audio and/or tactile manner. The output unit 150 may include a display unit 151.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in the phone call mode, the display unit 151 may display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (for example, text messaging, multimedia file downloading and so on). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured images and/or received images, a UI or GUI showing video or images and related functions, and so on.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other in the form of layers to form a touch screen, the display unit 151 may serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display and so on. Some of these displays may be constructed to be transparent to allow the user to view from the outside; these may be called transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light-emitting diode) display. According to a particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect touch input pressure as well as touch input position and touch input area.
The memory 160 may store software programs of the processing and control operations executed by the controller 180 and so on, or may temporarily store data that has been or will be output (for example, a phone book, messages, still images, video and so on). Moreover, the memory 160 may store data on the various forms of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory and so on), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disc and so on. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls and so on. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 may perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate the various elements and components.
The various implementations described herein may be implemented in a computer-readable medium using, for example, computer software, hardware or any combination thereof. For a hardware implementation, the implementations described herein may be implemented by using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such implementations may be implemented in the controller 180. For a software implementation, implementations such as processes or functions may be implemented with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any appropriate programming language; the software code may be stored in the memory 160 and executed by the controller 180.
Thus far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal among the various types of mobile terminals, such as folder-type, bar-type, swing-type and slide-type mobile terminals, will be described as an example. Nevertheless, the present invention can be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
Based on the above mobile terminal hardware structure, the various embodiments of the method of the present invention are proposed.
Embodiment 1
Referring to FIG. 2, which shows the flow of a shooting method for a terminal provided by an embodiment of the present invention, the method may be applied to the terminal shown in FIG. 1 and may include:
S201: the terminal generates, in a preview image, a focus frame corresponding to each shooting target;
In connection with the terminal shown in FIG. 1, it should be noted that, after the user input unit 130 of the terminal receives a camera start instruction input by the user, the terminal may capture a preview image through the viewfinder of the camera 121 and may identify the shooting targets in the preview image according to a preset recognition algorithm; after the shooting targets are identified, a corresponding focus frame is generated for each shooting target. Preferably, the terminal may display, via the display unit 151, the preview image captured by the viewfinder of the camera 121, and also display the focus frame of each shooting target in the preview image.
S202: during shooting focus, the terminal acquires the image data in the focus frame of each shooting target at its sharpest focus;
It should be noted that the specific shooting-focus process may be implemented through autofocus performed by the camera 121. The autofocus focuses by means of contrast: according to the contrast variation of the picture at the focus point, an actuator controls the movement of the lens assembly to find the lens position at which the contrast is maximal, that is, the position of accurate focus. The actuator may include one or more voice coil actuators, stepper motors, piezoelectric actuators or other types of actuators capable of moving the lens assembly between different lens positions within the search range.
Understandably, since the lens of a mobile terminal is relatively simple, with very few lens elements, the displacement of the lens assembly is very small; therefore, autofocus in a mobile terminal can be implemented by a voice coil motor (VCM). A VCM mainly consists of a coil, a magnet group and spring plates; the coil is fixed within the magnet group by upper and lower spring plates. When the coil is energized, it generates a magnetic field that interacts with the magnet group, the coil moves upward, and the lens locked in the coil moves with it; when the power is cut, the coil returns under the spring force of the spring plates, thereby implementing the autofocus function.
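The contrast-detection search described above (step the lens, score the contrast, keep the best position) can be sketched as a simple argmax sweep. This is an illustrative example only; the function names are assumptions, and a production implementation would typically hill-climb with early termination rather than scan the full range.

```python
def contrast_autofocus(lens_positions, contrast_at):
    """Sweep the lens (e.g. by stepping the VCM) through its search
    range and return the lens position at which the contrast score of
    the focus area peaks, i.e. the position of accurate focus."""
    best_pos, best_score = None, float("-inf")
    for pos in lens_positions:
        score = contrast_at(pos)  # contrast of the focus area at this lens position
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```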
Summarizing the above autofocus principle of the mobile terminal, it can be seen that the acquiring, by the terminal during shooting focus, of the image data in the focus frame of each shooting target at its sharpest focus specifically includes:
caching, by the terminal, the image data of a shooting target within its focus frame when the contrast value between the pixels of that shooting target is maximal.
S203: in the final preview image at the determined shooting focal length, the terminal fuses, according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, generating the final captured image.
It should be explained that, during focusing, because the shooting targets lie in different depth-of-field ranges, some targets will be sharp while others are not at any given moment of the focusing process. Therefore, through step S203, the sharp images of the shooting targets in different depth-of-field ranges acquired during focusing can be fused, so that in the final captured image the shooting targets at different depths of field are all sharp.
Specifically, since the shooting targets at different depths of field all need to be fused into the final preview image, this embodiment describes the specific fusion process taking a single shooting target i as an example; understandably, those skilled in the art can apply the fusion process for a single shooting target i to the fusion of all shooting targets. In this case, referring to FIG. 3, step S203 may specifically include:
S2031: the terminal generates a transition region of a preset range at the edge of the focus frame of shooting target i;
where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
S2032: the terminal splices the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
S2033: the terminal fuses the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
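The splicing step S2032 amounts to copying the cached sharpest crop into the matching focus frame of the final preview image. A minimal in-place sketch, under the same illustrative assumptions as before (nested-list images, (x, y, w, h) boxes; the function name is an assumption):

```python
def splice(dest, src, box):
    """Copy the cached sharpest crop `src` into the focus frame `box`
    of the final preview image `dest`, modifying `dest` in place."""
    x, y, w, h = box
    for r in range(h):
        dest[y + r][x:x + w] = src[r][:w]
    return dest
```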
In one implementation of this embodiment of the present invention, for step S2033 and referring to FIG. 4, this embodiment provides a method of fusing the images; the specific process may include:
S20331: within the first transition region of shooting target i, a first weight value is set for each pixel along the first direction, from the edge of the focus frame to the edge of the transition region;
where the first weight value gradually decreases along the first direction;
S20332: within the second transition region of shooting target i, a second weight value is set for each pixel along the second direction, from the edge of the transition region to the edge of the focus frame;
where the second weight value gradually decreases along the second direction;
S20333: a weighted sum is taken of each pixel's value and corresponding first weight in the first transition region and each pixel's value and corresponding second weight in the second transition region, yielding the pixel value of each transition-region pixel of shooting target i in the final captured image.
This embodiment provides a shooting method for a terminal: by fusing into the final preview image the images of shooting targets at different depths of field at their sharpest moments during autofocus, the shooting targets in different depth-of-field ranges can all be sharp in the finally obtained captured image.
Embodiment 2
Based on the same technical concept as the foregoing embodiment, and taking the two shooting targets in different depth-of-field ranges shown in FIG. 5 as an example, a specific implementation of the terminal shooting method of the above embodiment is proposed. It should be noted that the two shooting targets are at different distances from the terminal; in this embodiment, the one closer to the terminal is shooting target 1 and the one farther from the terminal is shooting target 2. The specific implementation process, shown in FIG. 6, may include:
S601: the terminal starts the photographing function and captures and generates a preview image through the viewfinder of the camera;
It should be noted that, after the terminal generates the preview image, it can be displayed as shown in FIG. 7; the specific preview image is shown in FIG. 8. The left side of the preview image shows shooting target 1 and the right side shows shooting target 2; as can be seen, shooting target 1 and shooting target 2 are in different depth-of-field ranges, with shooting target 1 closer to the terminal than shooting target 2.
S602: after identifying shooting target 1 and shooting target 2 from the preview image using a preset recognition algorithm, the terminal generates a corresponding focus frame for each shooting target;
Specifically, as shown in FIG. 8, the white box on the left of the preview image is the focus frame of shooting target 1, and the white box on the right is the focus frame of shooting target 2;
S603: during shooting focus, the terminal acquires the image data in the focus frames of shooting target 1 and shooting target 2 at the moments the contrast value between their pixels is maximal;
Specifically, in this embodiment the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure; therefore, when the contrast value of the shooting target's pixel values is maximal, the shooting target can be characterized as being in its sharpest state.
In detail, as shown in FIG. 8, during autofocus the depth-of-field sweep usually runs from near to far and back to near; therefore, when the terminal captures the preview image, the camera is focused at the depth of field of shooting target 1, so at this moment shooting target 1 is sharpest and shooting target 2 appears blurred;
As the VCM in the camera controls the movement of the lens to perform autofocus, the depth-of-field range at which the camera focuses becomes larger than that of shooting target 1; the camera then focuses at the depth of field of shooting target 2, so that, as shown in FIG. 9, shooting target 2 is sharpest and shooting target 1 appears blurred.
The terminal can acquire the image data of a shooting target within its focus frame when that target is at its sharpest and cache it, thereby providing the data material for the subsequent image fusion.
S604: after autofocus is completed, the terminal generates the final preview image;
Understandably, since the depth-of-field sweep of the terminal's autofocus runs from near to far and back to near, the final preview image obtained by the terminal is the same as FIG. 8: shooting target 1 is sharpest and shooting target 2 is blurred.
S605: the terminal fuses the image data that was in the focus frame of shooting target 2 when the contrast value between its pixels was maximal into the corresponding area of the final preview image, generating the final captured image.
Understandably, since shooting target 1 is already sharpest in the final preview image, only the image data in the focus frame of shooting target 2 at its sharpest needs to be fused into the final preview image. Therefore, in a specific implementation, as shown in FIG. 10, step S605 may include:
S6051: the terminal expands the edge of the focus frame of shooting target 2 outward by a range of 20 pixels to serve as a transition area;
It should be noted that, as shown in FIG. 11, the dotted-line frame is the edge of the focus frame of shooting target 2, and the area between the solid-line frame and the dotted-line frame is the transition area of shooting target 2.
S6052: the terminal splices the image data that was in the focus frame of shooting target 2 at its sharpest focus into the corresponding focus frame of shooting target 2 in the final preview image;
S6053: within the first transition region of shooting target 2, a first weight value is set for each pixel along the first direction, from the edge of the focus frame to the edge of the transition region;
where the first transition region is the transition region corresponding to shooting target 2 at its sharpest focus, and the first weight value gradually decreases along the first direction; as shown in the upper half of FIG. 12, the first weight value in the first direction decreases from A1 to A20;
S6054: within the second transition region of shooting target 2, a second weight value is set for each pixel along the second direction, from the edge of the transition region to the edge of the focus frame;
where the second transition region is the transition region corresponding to shooting target 2 in the final preview image, and the second weight value gradually decreases along the second direction; as shown in the lower half of FIG. 12, the second weight value in the second direction decreases from B1 to B20;
S6055: a weighted sum is taken of each transition-area pixel's value and corresponding first weight in the first transition region and its value and corresponding second weight in the second transition region, yielding the pixel value of each transition-area pixel in the final captured image.
Specifically, take pixel A1 of the first transition region as an example, with the pixel values given as RGB values. Let the weight of A1 be W_a1 and its RGB values be R_a1, G_a1, B_a1; let the pixel of the second transition region corresponding to A1 be B20, with weight W_b20 and RGB values R_b20, G_b20, B_b20; and let C20 be the pixel obtained by fusing B20 and A1, with RGB values R_c20, G_c20, B_c20. Then the RGB values of C20 are:
R_c20=R_a1×W_a1+R_b20×W_b20;
G_c20=G_a1×W_a1+G_b20×W_b20;
B_c20=B_a1×W_a1+B_b20×W_b20;
From the weights shown in FIG. 12 it can be seen that the weight W_b20 of pixel B20 is 5% and the weight W_a1 of pixel A1 is 95%, from which the RGB values of the pixel C20 obtained by fusing pixel B20 and pixel A1 can be calculated as:
R_c20=R_a1×95%+R_b20×5%;
G_c20=G_a1×95%+G_b20×5%;
B_c20=B_a1×95%+B_b20×5%;
Understandably, the calculation of the fused RGB values of the other pixels of the transition area is the same as the above process and is not repeated in this embodiment.
After all pixels of the transition area have been fused according to the above calculation, the final captured image can be obtained. As shown in FIG. 13, shooting target 1 and shooting target 2, in different depth-of-field ranges, both become sharp after image fusion, and the transition at the fused edge is uniform, with no obvious boundary.
Embodiment 3
Based on the same technical concept as the foregoing embodiments, refer to FIG. 14, which shows the structure of a terminal 1400 provided by an embodiment of the present invention, where the terminal 1400 includes: a generating unit 1401, a focusing unit 1402, an acquiring unit 1403 and a fusing unit 1404; wherein,
the generating unit 1401 is configured to generate, in a preview image, a focus frame corresponding to each shooting target;
the focusing unit 1402 is configured to perform shooting focus and to trigger the acquiring unit 1403 during shooting focus;
the acquiring unit 1403 is configured to acquire the image data in the focus frame of each shooting target at its sharpest focus;
the fusing unit 1404 is configured to fuse, in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, generating the final captured image.
In the above solution, the acquiring unit 1403 is specifically configured to cache the image data of a shooting target within its focus frame when the contrast value between the pixels of that shooting target is maximal.
In the above solution, referring to FIG. 15, the fusing unit 1404 specifically includes: a transition-region generating subunit 14041, a splicing subunit 14042 and a fusing subunit 14043; wherein,
the transition-region generating subunit 14041 is configured to generate a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
the splicing subunit 14042 is configured to splice the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
the fusing subunit 14043 is configured to fuse the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
In the above solution, the fusing subunit 14043 is specifically configured to:
set, within the first transition region of shooting target i, a first weight value for each pixel along a first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
set, within the second transition region of shooting target i, a second weight value for each pixel along a second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction;
take a weighted sum of each pixel's value and corresponding first weight in the first transition region and each pixel's value and corresponding second weight in the second transition region, to obtain the pixel value of each transition-region pixel of shooting target i in the final captured image.
An embodiment of the present invention provides a computer storage medium storing computer-executable instructions, the computer-executable instructions comprising:
generating, in a preview image, a focus frame corresponding to each shooting target;
acquiring, during shooting focus, the image data in the focus frame of each shooting target at its sharpest focus;
fusing, in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, generating the final captured image.
In one implementation of this embodiment of the present invention, the computer-executable instructions further include:
caching, by the terminal, the image data of a shooting target within its focus frame when the contrast value between the pixels of that shooting target is maximal.
In one implementation of this embodiment of the present invention, the computer-executable instructions further include:
generating, by the terminal, a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
splicing, by the terminal, the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
fusing, by the terminal, the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
In one implementation of this embodiment of the present invention, the computer-executable instructions further include:
setting, within the first transition region of shooting target i, a first weight value for each pixel along a first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
setting, within the second transition region of shooting target i, a second weight value for each pixel along a second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction;
taking a weighted sum of each pixel's value and corresponding first weight in the first transition region and each pixel's value and corresponding second weight in the second transition region, to obtain the pixel value of each transition-region pixel of shooting target i in the final captured image.
In one implementation of this embodiment of the present invention, the computer-executable instructions further include:
if the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure, then when the contrast value of the shooting target's pixel values is maximal, the shooting target is in its sharpest state.
An embodiment of the present invention provides a terminal, the terminal including a processor and a memory, wherein the memory stores computer-executable instructions and the processor executes corresponding processing according to the computer-executable instructions;
the processor is configured to:
generate, in a preview image, a focus frame corresponding to each shooting target;
perform shooting focus, and trigger acquisition during shooting focus;
acquire the image data in the focus frame of each shooting target at its sharpest focus;
fuse, in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, generating the final captured image.
In one implementation of this embodiment of the present invention, the processor is further configured to:
cache the image data of a shooting target within its focus frame when the contrast value between the pixels of that shooting target is maximal.
In one implementation of this embodiment of the present invention, the processor is further configured to:
generate a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
splice the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
fuse the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
In one implementation of this embodiment of the present invention, the processor is further configured to:
set, within the first transition region of shooting target i, a first weight value for each pixel along a first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
set, within the second transition region of shooting target i, a second weight value for each pixel along a second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction;
take a weighted sum of each pixel's value and corresponding first weight in the first transition region and each pixel's value and corresponding second weight in the second transition region, to obtain the pixel value of each transition-region pixel of shooting target i in the final captured image.
In one implementation of this embodiment of the present invention, the processor is further configured such that:
if the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure, then when the contrast value of the shooting target's pixel values is maximal, the shooting target is in its sharpest state.
It should be noted that, as used herein, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article or apparatus. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or apparatus that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
From the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present invention that is essential, or that contributes to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk or optical disc) and includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the patent scope of the present invention. Any equivalent structural or equivalent process transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.
Industrial Applicability
Embodiments of the present invention provide a terminal, a shooting method for a terminal, and a computer storage medium. When the terminal shoots, the sharpest images of shooting targets at different depths of field during the focusing process are fused by a preset image fusion technique, so that shooting targets in different depth-of-field ranges can be sharp in the captured image.

Claims (20)

  1. A shooting method for a terminal, the method comprising:
    generating, by a terminal, a focus frame corresponding to each shooting target in a preview image;
    acquiring, by the terminal during shooting focus, the image data in the focus frame of each shooting target at its sharpest focus;
    fusing, by the terminal in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, to generate the final captured image.
  2. The method according to claim 1, wherein the acquiring, by the terminal during shooting focus, of the image data in the focus frame of each shooting target at its sharpest focus comprises:
    caching, by the terminal, the image data of a shooting target within its focus frame when the contrast value between the pixels of the shooting target is maximal.
  3. The method according to claim 1, wherein the fusing, by the terminal in the final preview image at the determined shooting focal length and according to a preset image fusion technique, of the image data that was in each shooting target's focus frame at its sharpest focus to generate the final captured image comprises:
    generating, by the terminal, a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
    splicing, by the terminal, the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
    fusing, by the terminal, the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
  4. The method according to claim 3, wherein the fusing, by the terminal, of the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image comprises:
    setting, within the first transition region of shooting target i, a first weight value for each pixel along a first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
    setting, within the second transition region of shooting target i, a second weight value for each pixel along a second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction;
    taking a weighted sum of each pixel's value and corresponding first weight in the first transition region and each pixel's value and corresponding second weight in the second transition region, to obtain the pixel value of each transition-region pixel of shooting target i in the final captured image.
  5. The method according to any one of claims 1 to 4, wherein the method further comprises:
    if the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure, then when the contrast value of the shooting target's pixel values is maximal, the shooting target is in its sharpest state.
  6. A terminal, the terminal comprising: a generating unit, a focusing unit, an acquiring unit and a fusing unit; wherein,
    the generating unit is configured to generate, in a preview image, a focus frame corresponding to each shooting target;
    the focusing unit is configured to perform shooting focus and to trigger the acquiring unit during shooting focus;
    the acquiring unit is configured to acquire the image data in the focus frame of each shooting target at its sharpest focus;
    the fusing unit is configured to fuse, in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, to generate the final captured image.
  7. The terminal according to claim 6, wherein the acquiring unit is configured to cache the image data of a shooting target within its focus frame when the contrast value between the pixels of the shooting target is maximal.
  8. The terminal according to claim 6, wherein the fusing unit comprises: a transition-region generating subunit, a splicing subunit and a fusing subunit; wherein,
    the transition-region generating subunit is configured to generate a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
    the splicing subunit is configured to splice the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
    the fusing subunit is configured to fuse the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
  9. The terminal according to claim 8, wherein the fusing subunit is configured to:
    set, within the first transition region of shooting target i, a first weight value for each pixel along a first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
    set, within the second transition region of shooting target i, a second weight value for each pixel along a second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction;
    take a weighted sum of each pixel's value and corresponding first weight in the first transition region and each pixel's value and corresponding second weight in the second transition region, to obtain the pixel value of each transition-region pixel of shooting target i in the final captured image.
  10. The terminal according to any one of claims 6 to 9, wherein the acquiring unit is further configured such that:
    if the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure, then when the contrast value of the shooting target's pixel values is maximal, the shooting target is in its sharpest state.
  11. A computer storage medium storing computer-executable instructions, the computer-executable instructions comprising:
    generating, in a preview image, a focus frame corresponding to each shooting target;
    acquiring, during shooting focus, the image data in the focus frame of each shooting target at its sharpest focus;
    fusing, in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, to generate the final captured image.
  12. The computer storage medium according to claim 11, wherein the computer-executable instructions further comprise:
    caching, by the terminal, the image data of a shooting target within its focus frame when the contrast value between the pixels of the shooting target is maximal.
  13. The computer storage medium according to claim 11, wherein the computer-executable instructions further comprise:
    generating, by the terminal, a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
    splicing, by the terminal, the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
    fusing, by the terminal, the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
  14. The computer storage medium according to claim 13, wherein the computer-executable instructions further comprise:
    setting, within the first transition region of shooting target i, a first weight value for each pixel along a first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
    setting, within the second transition region of shooting target i, a second weight value for each pixel along a second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction;
    taking a weighted sum of each pixel's value and corresponding first weight in the first transition region and each pixel's value and corresponding second weight in the second transition region, to obtain the pixel value of each transition-region pixel of shooting target i in the final captured image.
  15. The computer storage medium according to any one of claims 11 to 14, wherein the computer-executable instructions further comprise:
    if the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure, then when the contrast value of the shooting target's pixel values is maximal, the shooting target is in its sharpest state.
  16. A terminal, the terminal comprising a processor and a memory, wherein the memory stores computer-executable instructions and the processor executes corresponding processing according to the computer-executable instructions;
    the processor is configured to:
    generate, in a preview image, a focus frame corresponding to each shooting target;
    perform shooting focus, and trigger acquisition during shooting focus;
    acquire the image data in the focus frame of each shooting target at its sharpest focus;
    fuse, in the final preview image at the determined shooting focal length and according to a preset image fusion technique, the image data that was in each shooting target's focus frame at its sharpest focus, to generate the final captured image.
  17. The terminal according to claim 16, wherein the processor is further configured to:
    cache the image data of a shooting target within its focus frame when the contrast value between the pixels of the shooting target is maximal.
  18. The terminal according to claim 16, wherein the processor is further configured to:
    generate a transition region of a preset range at the edge of the focus frame of shooting target i, where i denotes the serial number of a shooting target, ranging from 1 to N, and N is the number of shooting targets;
    splice the image data that was in the focus frame of shooting target i at its sharpest focus into the corresponding focus frame of shooting target i in the final preview image;
    fuse the image data of the first transition region corresponding to shooting target i at its sharpest focus with the image data of the second transition region corresponding to shooting target i in the final preview image.
  19. The terminal according to claim 18, wherein the processor is further configured to:
    set, within the first transition region of shooting target i, a first weight value for each pixel along a first direction from the edge of the focus frame to the edge of the transition region, the first weight value gradually decreasing along the first direction;
    set, within the second transition region of shooting target i, a second weight value for each pixel along a second direction from the edge of the transition region to the edge of the focus frame, the second weight value gradually decreasing along the second direction;
    take a weighted sum of each pixel's value and corresponding first weight in the first transition region and each pixel's value and corresponding second weight in the second transition region, to obtain the pixel value of each transition-region pixel of shooting target i in the final captured image.
  20. The terminal according to any one of claims 16 to 19, wherein the processor is further configured such that:
    if the contrast characteristic of the shooting target's pixel values is taken as the sharpness measure, then when the contrast value of the shooting target's pixel values is maximal, the shooting target is in its sharpest state.
PCT/CN2016/099502 2015-12-23 2016-09-20 Terminal, shooting method thereof and computer storage medium WO2017107596A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/064,143 US10659675B2 (en) 2015-12-23 2016-09-20 Terminal, shooting method thereof and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510974333.9 2015-12-23
CN201510974333.9A CN105578045A (zh) Terminal and shooting method of terminal

Publications (1)

Publication Number Publication Date
WO2017107596A1 (zh)

Family

ID=55887653

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/099502 WO2017107596A1 (zh) 2015-12-23 2016-09-20 一种终端及终端拍摄的方法、计算机存储介质

Country Status (3)

Country Link
US (1) US10659675B2 (zh)
CN (1) CN105578045A (zh)
WO (1) WO2017107596A1 (zh)


Families Citing this family (17)

Publication number Priority date Publication date Assignee Title
CN105578045A (zh) 2015-12-23 2016-05-11 Nubia Technology Co., Ltd. Terminal and shooting method of terminal
CN107465865A (zh) * 2016-06-03 2017-12-12 Beijing Xiaomi Mobile Software Co., Ltd. Shooting method and apparatus
CN106161949A (zh) * 2016-08-05 2016-11-23 Beijing Xiaomi Mobile Software Co., Ltd. Photographing method and apparatus
CN107872614A (zh) * 2016-09-27 2018-04-03 ZTE Corporation Shooting method and shooting apparatus
CN106791372B (zh) * 2016-11-30 2020-06-30 Nubia Technology Co., Ltd. Multi-point sharp imaging method and mobile terminal
CN106657780B (zh) * 2016-12-16 2020-06-02 Beijing Xiaomi Mobile Software Co., Ltd. Image preview method and apparatus
CN109862252B (zh) * 2017-11-30 2021-01-29 Beijing Xiaomi Mobile Software Co., Ltd. Image shooting method and apparatus
CN108171743A (zh) * 2017-12-28 2018-06-15 Nubia Technology Co., Ltd. Image capturing method, device and computer storage medium
CN108200344A (zh) * 2018-01-23 2018-06-22 江苏冠达通电子科技有限公司 Zoom adjustment method for a camera
CN109151310A (zh) * 2018-09-03 2019-01-04 合肥智慧龙图腾知识产权股份有限公司 Autofocus shooting method, computer device and readable storage medium
CN109120858B (zh) * 2018-10-30 2021-01-15 Nubia Technology Co., Ltd. Image shooting method, apparatus, device and storage medium
CN109688293A (zh) * 2019-01-28 2019-04-26 Nubia Technology Co., Ltd. Shooting method, terminal and computer-readable storage medium
CN111131698B (zh) * 2019-12-23 2021-08-27 RealMe重庆移动通信有限公司 Image processing method and apparatus, computer-readable medium, and electronic device
CN111601031B (zh) * 2020-04-02 2022-02-01 Vivo Mobile Communication Co., Ltd. Shooting parameter adjustment method and apparatus
CN112969027B (zh) * 2021-04-02 2022-08-16 Zhejiang Dahua Technology Co., Ltd. Focusing method and apparatus for a motorized lens, storage medium, and electronic device
CN113542600B (zh) * 2021-07-09 2023-05-12 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image generation method, apparatus, chip, terminal, and storage medium
CN114430459A (zh) * 2022-01-26 2022-05-03 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Shooting method, apparatus, terminal, and readable storage medium

Citations (9)

Publication number Priority date Publication date Assignee Title
CN101615289A (zh) * 2009-08-05 2009-12-30 北京优纳科技有限公司 Three-dimensional acquisition and multi-layer image fusion method for sliced tissue
CN101720027A (zh) * 2009-11-27 2010-06-02 Xidian University Method for cooperative acquisition of multi-target video at different resolutions by variable-focus array cameras
CN101998061A (zh) * 2009-08-24 2011-03-30 Samsung Electronics Co., Ltd. Digital photographing apparatus and method of controlling the same
CN102075679A (zh) * 2010-11-18 2011-05-25 无锡中星微电子有限公司 Image acquisition method and device
US20120120269A1 (en) * 2010-11-11 2012-05-17 Tessera Technologies Ireland Limited Rapid auto-focus using classifier chains, mems and/or multiple object focusing
CN102982522A (zh) * 2012-12-14 2013-03-20 Donghua University Method for real-time fusion of multi-focus microscopic images
US20140204236A1 (en) * 2013-01-23 2014-07-24 Samsung Electronics Co., Ltd Apparatus and method for processing image in mobile terminal having camera
CN104867125A (zh) * 2015-06-04 2015-08-26 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and device for acquiring images
CN105578045A (zh) * 2015-12-23 2016-05-11 Nubia Technology Co., Ltd. Terminal and shooting method of terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4127296B2 (ja) * 2006-06-09 2008-07-30 ソニー株式会社 Imaging apparatus, imaging apparatus control method, and computer program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619616A (zh) * 2019-09-19 2019-12-27 广东工业大学 Image processing method, device and related equipment
CN110619616B (zh) * 2019-09-19 2022-06-24 广东工业大学 Image processing method, device and related equipment

Also Published As

Publication number Publication date
CN105578045A (zh) 2016-05-11
US10659675B2 (en) 2020-05-19
US20190007625A1 (en) 2019-01-03

Similar Documents

Publication Publication Date Title
WO2017107596A1 (zh) Terminal, terminal photographing method, and computer storage medium
CN106572303B (zh) Picture processing method and terminal
CN107950018B (zh) Image generation method and system, and computer-readable medium
US10091411B2 (en) Mobile terminal and controlling method thereof for continuously tracking object included in video
US9628703B2 (en) Mobile terminal and controlling method thereof
WO2017107629A1 (zh) Mobile terminal, data transmission system and mobile terminal photographing method
JP6293706B2 (ja) Electronic device and method of operating electronic device
CN106027905B (zh) Method for sky focusing and mobile terminal
US20130155309A1 (en) Method and Apparatus for Controlling Camera Functions Using Facial Recognition and Eye Tracking
CN105959543B (zh) Shooting device and method for removing reflections
CN105704369B (zh) Information processing method and device, and electronic equipment
CN105827961A (zh) Mobile terminal and focusing method
CN106791339B (zh) Imaging system and control method thereof
CN104869314A (zh) Photographing method and device
CN106056379A (zh) Payment terminal and payment data processing method
CN105549300A (zh) Automatic focusing method and device
CN104247412A (zh) Image processing device, imaging device, image processing method, recording medium, and program
KR102146856B1 (ko) Method for presenting shooting modes based on lens characteristics, computer-readable storage medium recording the method, and digital photographing apparatus
CN106454087B (zh) Shooting device and method
US20150130966A1 (en) Image forming method and apparatus, and electronic device
CN108124098B (zh) Electronic device and method for autofocus
WO2018196444A1 (zh) Virtual reality playback device, control method thereof, and computer-readable storage medium
CN114339019B (zh) Focusing method, focusing device and storage medium
CN112866555B (zh) Shooting method, apparatus, device and storage medium
US11792518B2 (en) Method and apparatus for processing image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16877407

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16877407

Country of ref document: EP

Kind code of ref document: A1