CN106454121B - Double-camera shooting method and device - Google Patents


Info

Publication number
CN106454121B
CN106454121B (application CN201611040702.8A)
Authority
CN
China
Prior art keywords
picture
framing
view
weight area
matting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611040702.8A
Other languages
Chinese (zh)
Other versions
CN106454121A (en)
Inventor
邱情
廖娟娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201611040702.8A priority Critical patent/CN106454121B/en
Publication of CN106454121A publication Critical patent/CN106454121A/en
Application granted granted Critical
Publication of CN106454121B publication Critical patent/CN106454121B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a dual-camera photographing method and device, wherein the method comprises the following steps: when a mobile terminal takes a picture, calling a wide-angle camera of the mobile terminal to capture a current framing preview picture; determining, in the current framing preview picture, a first framing weight area that needs high-definition processing, and determining the photographing direction of the corresponding framing object according to the first framing weight area; rotating a telephoto camera of the mobile terminal according to the photographing direction of the framing object, and calling the telephoto camera to acquire a high-definition picture of the framing object; when a photographing instruction triggered by a user is received, acquiring the current framing preview picture to obtain a framing picture, and acquiring the high-definition picture of the framing object; and replacing the first framing weight area of the framing picture with the high-definition picture of the framing object, and generating and outputting a synthesized framing picture. The invention ensures that the framing weight area of the framing picture output by the mobile terminal is very clear while the wide background of the framing picture is preserved, thereby improving the visual effect of the framing picture.

Description

Double-camera shooting method and device
Technical Field
The invention relates to the technical field of photographing, in particular to a double-camera photographing method and device.
Background
With the continuous improvement of the photographing technology of mobile terminals such as smart phones and tablet computers, the definition of images photographed by the cameras of mobile terminals is also continuously improving. However, for a distant-view image photographed by the camera of a mobile terminal, when a local area in the image is enlarged, for example a human face in a frame of the distant-view image, the enlarged local area is not clear, which affects the visual effect of the distant-view image.
Disclosure of Invention
The invention mainly aims to provide a double-camera shooting method and a double-camera shooting device, and aims to improve the definition of a local area of an image shot by a mobile terminal camera.
In order to achieve the above object, the present invention provides a dual-camera photographing device, where the dual cameras include a wide-angle camera and a telephoto camera, and the dual-camera photographing device includes:
the capturing module is used for calling a wide-angle camera of the mobile terminal to capture a current viewing preview picture when the mobile terminal takes a picture;
the determining module is used for determining a first view-finding weight area which needs high-definition processing in the current view-finding preview picture, and determining the photographing direction of the corresponding view-finding object according to the first view-finding weight area;
the first acquisition module is used for rotating the telephoto camera of the mobile terminal according to the photographing direction of the framing object, and calling the telephoto camera to acquire a high-definition picture of the framing object;
the second acquisition module is used for acquiring the current view-finding preview picture to obtain a view-finding picture and acquiring a high-definition picture of the view-finding object when a photographing instruction triggered by a user is received;
and the replacing module is used for replacing the first view weight area corresponding to the view picture with a high-definition picture of the view object, and generating and outputting a synthesized view picture.
Optionally, the replacement module comprises:
the matting unit is used for determining a second framing weight area in the high-definition picture of the framing object and performing matting processing on the high-definition picture of the framing object according to the second framing weight area so as to separate a first matting picture corresponding to the second framing weight area;
and the replacing unit is used for replacing the first framing weight area corresponding to the framing picture with the first matting picture so as to generate a composite framing picture.
Optionally, the replacement unit includes:
the matting sub-unit is used for matting the framing picture according to the first framing weight area to obtain a matting framing picture and separating a second matting picture corresponding to the first framing weight area;
an adjusting subunit, configured to adjust the first matting picture according to the second matting picture, so that the picture sizes of the first matting picture and the second matting picture are consistent;
and the replacing subunit is used for inlaying the adjusted first matting picture into the first framing weight area corresponding to the matting framing picture so as to generate a synthesized framing picture.
Optionally, the dual-camera photographing device further includes:
and the linking module is used for carrying out picture linking processing on the edge of the first framing weight area embedded with the first matting picture so as to generate a synthesized framing picture.
Optionally, the determining module is further configured to:
and determining a first viewing weight area which needs high-definition processing in the current viewing preview picture based on the selection of a user, and/or determining a viewing area of which the image parameters are in a preset image parameter interval in the current viewing preview picture as the first viewing weight area which needs high-definition processing.
In addition, in order to achieve the above object, the present invention further provides a dual-camera photographing method, where the dual cameras include a wide-angle camera and a telephoto camera, and the dual-camera photographing method includes the steps of:
when the mobile terminal takes a picture, calling a wide-angle camera of the mobile terminal to capture a current viewing preview picture;
determining a first view-finding weight area which needs high-definition processing in the current view-finding preview picture, and determining the photographing direction of a corresponding view-finding object according to the first view-finding weight area;
rotating the telephoto camera of the mobile terminal according to the photographing direction of the framing object, and calling the telephoto camera to acquire a high-definition picture of the framing object;
when a photographing instruction triggered by a user is received, acquiring the current view-finding preview picture to obtain a view-finding picture, and acquiring a high-definition picture of the view-finding object;
and replacing the first view weight area corresponding to the view picture with a high-definition picture of the view object, and generating and outputting a synthesized view picture.
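The replacement step above can be sketched as a simple region substitution. The following minimal illustration assumes the framing picture and the telephoto patch are NumPy arrays and the first framing weight area is an axis-aligned rectangle; the function name and region format are hypothetical, not from the patent:

```python
import numpy as np

def composite_viewfinder(wide_frame: np.ndarray,
                         tele_patch: np.ndarray,
                         region: tuple) -> np.ndarray:
    """Replace the weight region of the wide-angle framing picture with
    the telephoto patch (assumed already resized to fit the region)."""
    y, x, h, w = region
    out = wide_frame.copy()          # leave the captured frame untouched
    out[y:y + h, x:x + w] = tele_patch
    return out
```

In practice the patch would first be matted and resized to the region, and the seam smoothed, as the optional steps below describe.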
Optionally, the step of replacing the first view weight region corresponding to the view picture with a high-definition picture of the view object, and generating and outputting a synthesized view picture includes:
determining a second framing weight area in the high-definition picture of the framing object, and performing matting processing on the high-definition picture of the framing object according to the second framing weight area to separate out a first matting picture corresponding to the second framing weight area;
and replacing the first framing weight area corresponding to the framing picture with the first matting picture to generate a synthesized framing picture.
Optionally, the step of replacing the first framing weight area corresponding to the framing picture with the first matting picture to generate a composite framing picture includes:
performing matting processing on the framing picture according to the first framing weight area to obtain a matting framing picture, and separating out a second matting picture corresponding to the first framing weight area;
adjusting the first matting picture according to the second matting picture, so that the picture sizes of the first matting picture and the second matting picture are consistent;
and inlaying the adjusted first matting picture into the first framing weight area corresponding to the matting framing picture, so as to generate a synthesized framing picture.
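The adjustment step, making the first matting picture match the size of the second matting picture, amounts to a resize. Below is a minimal nearest-neighbor sketch in NumPy; the helper name is an assumption, and a real implementation would typically use bilinear or bicubic interpolation instead:

```python
import numpy as np

def resize_nearest(img: np.ndarray, new_h: int, new_w: int) -> np.ndarray:
    """Nearest-neighbor resize, so the telephoto cutout matches the size
    of the cutout removed from the wide-angle framing picture."""
    h, w = img.shape[:2]
    # Map each output row/column back to the nearest source row/column.
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols]
```

After resizing, the patch can be written directly into the rectangle left by the matting step.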
Optionally, after the step of inlaying the adjusted first matting picture into the first framing weight area corresponding to the matting framing picture, the method further includes:
and carrying out picture linking processing on the edge of the first framing weight area inlaid with the first matting picture, so as to generate a synthesized framing picture.
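The patent does not specify the picture linking algorithm. One common choice is alpha feathering: blending the patch into the background over a narrow ramp at its border so the seam of the weight area is less visible. A sketch under that assumption (function name and ramp shape are illustrative):

```python
import numpy as np

def feather_inlay(base: np.ndarray, patch: np.ndarray,
                  y: int, x: int, feather: int = 2) -> np.ndarray:
    """Inlay `patch` into `base` at (y, x), linearly blending the
    outermost `feather` rows/columns of the patch with the background."""
    h, w = patch.shape[:2]
    alpha = np.ones((h, w))
    for i in range(feather):
        a = (i + 1) / (feather + 1)       # ramp: 1/(f+1), 2/(f+1), ...
        alpha[i, :] = np.minimum(alpha[i, :], a)
        alpha[h - 1 - i, :] = np.minimum(alpha[h - 1 - i, :], a)
        alpha[:, i] = np.minimum(alpha[:, i], a)
        alpha[:, w - 1 - i] = np.minimum(alpha[:, w - 1 - i], a)
    if patch.ndim == 3:                   # broadcast over color channels
        alpha = alpha[..., None]
    out = base.astype(float).copy()
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = alpha * patch + (1.0 - alpha) * region
    return out
```

With `feather=0` this degenerates to a hard paste; larger values trade edge sharpness for a smoother transition.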
Optionally, the step of determining a first view weight region in the currently viewed preview screen that requires high definition processing includes:
and determining a first viewing weight area which needs high-definition processing in the current viewing preview picture based on the selection of a user, and/or determining a viewing area of which the image parameters are in a preset image parameter interval in the current viewing preview picture as the first viewing weight area which needs high-definition processing.
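For the second criterion, selecting a framing area whose image parameters fall within a preset interval, one minimal interpretation is to threshold a parameter map (brightness, in this sketch) and take the bounding box of the matching pixels. This is an illustrative assumption; the patent fixes neither the parameter nor the region shape:

```python
import numpy as np

def find_weight_region(preview: np.ndarray, lo: float, hi: float):
    """Return the bounding box (y, x, h, w) of the viewing area whose
    parameter values lie in the preset interval [lo, hi], or None."""
    mask = (preview >= lo) & (preview <= hi)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max()
    x0, x1 = xs.min(), xs.max()
    return (int(y0), int(x0), int(y1 - y0 + 1), int(x1 - x0 + 1))
```

A user-selection path would instead take the rectangle from a touch gesture on the preview and skip the thresholding entirely.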
According to the dual-camera photographing method provided by the invention, when a first framing weight area that needs high-definition processing exists in the current framing preview picture captured by the wide-angle camera, the telephoto camera is called to collect a high-definition picture of the framing object corresponding to the first framing weight area; when a photographing instruction triggered by a user is received, the current framing preview picture is collected to obtain a framing picture, the high-definition picture of the framing object replaces the first framing weight area of the framing picture, and a synthesized framing picture is generated and output. Because a picture shot by the wide-angle camera has a wide field of view while a picture shot by the telephoto camera is clear and fine, the invention replaces the picture of the framing weight area in the framing picture shot by the wide-angle camera with the high-definition picture shot by the telephoto camera, so that the framing weight area of the framing picture output by the mobile terminal is very clear while the wide background of the framing picture is preserved, thereby improving the visual effect of the framing picture.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of an alternative mobile terminal for implementing various embodiments of the present invention;
FIG. 2 is a diagram of a wireless communication system for the mobile terminal shown in FIG. 1;
fig. 3 is a schematic block diagram of a dual-camera photographing device according to a first embodiment of the present invention;
FIG. 4 is a scene schematic diagram of a current framing preview picture captured by the wide-angle camera according to the present invention;
FIG. 5 is a scene schematic diagram of a high-definition picture of a framing object acquired by the telephoto camera according to the present invention;
FIG. 6 is a schematic view of a scene in which a composite viewfinder is generated according to the present invention;
FIG. 7 is a schematic diagram of a detailed module of an alternative module in a second embodiment of the dual-camera photographing apparatus according to the present invention;
FIG. 8 is a schematic diagram of a detailed module of an alternative unit in a third embodiment of the dual-camera photographing device according to the present invention;
fig. 9 is a schematic block diagram of a dual-camera photographing device according to a fourth embodiment of the present invention;
fig. 10 is a schematic flowchart of a dual-camera photographing method according to a first embodiment of the present invention;
FIG. 11 is a flowchart illustrating a refinement step of step S50 in the first embodiment of FIG. 10 according to the present invention;
fig. 12 is a schematic flowchart illustrating a detailed step of step S52 in the dual-camera photographing method according to the present invention;
fig. 13 is a flowchart illustrating a dual-camera photographing method according to a second embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for facilitating the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a navigation device, and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. In the following, it is assumed that the terminal is a mobile terminal. However, it will be understood by those skilled in the art that the configuration according to the embodiment of the present invention can be applied to a fixed type terminal in addition to elements particularly used for moving purposes.
Fig. 1 is a schematic hardware structure of an optional mobile terminal for implementing various embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, a capture module 10, a determination module 20, a first acquisition module 30, a second acquisition module 40, a replacement module 50, and the like. Fig. 1 illustrates a mobile terminal having various components, but it is to be understood that not all of the illustrated components are required to be implemented; more or fewer components may alternatively be implemented. The elements of the mobile terminal will be described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit 110 may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information, or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast associated information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an Electronic Program Guide (EPG) of Digital Multimedia Broadcasting (DMB), an Electronic Service Guide (ESG) of Digital Video Broadcasting-Handheld (DVB-H), and the like. The broadcast receiving module 111 may receive signals broadcast by various types of broadcasting systems. In particular, it may receive digital broadcasts using digital broadcasting systems such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), Media Forward Link Only (MediaFLO), Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), and the like. The broadcast receiving module 111 may be constructed to be suitable for the above-mentioned digital broadcasting systems as well as other broadcasting systems that provide broadcast signals.
The broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
The mobile communication module 112 transmits and/or receives radio signals to and/or from at least one of a base station (e.g., access point, node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless internet module 113 supports wireless internet access of the mobile terminal. The module may be internally or externally coupled to the terminal. The wireless internet access technologies to which the module relates may include WLAN (Wireless LAN, Wi-Fi), WiBro (Wireless Broadband), WiMAX (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee™, and so on.
The location information module 115 is a module for checking or acquiring location information of the mobile terminal. A typical example of the location information module is a GPS (global positioning system). According to the current technology, the GPS module calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information according to longitude, latitude, and altitude. Currently, a method for calculating position and time information uses three satellites and corrects an error of the calculated position and time information by using another satellite. In addition, the GPS module can calculate speed information by continuously calculating current position information in real time.
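The triangulation mentioned above can be illustrated in a simplified two-dimensional form: given three known anchor positions and measured distances, subtracting the circle equations pairwise yields a linear system for the unknown position. Real GPS works with pseudoranges in three dimensions and additionally solves for the receiver clock bias; this sketch only shows the geometric idea:

```python
import numpy as np

def trilaterate_2d(anchors, dists):
    """Solve for (x, y) from three anchors and measured distances by
    subtracting the circle equations to obtain a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)
```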
The A/V input unit 120 is used to receive an audio or video signal. The A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing apparatus in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the construction of the mobile terminal. The microphone 122 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the case of the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication module 112. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The user input unit 130 may generate key input data according to a command input by a user to control various operations of the mobile terminal. The user input unit 130 allows a user to input various types of information, and may include a keyboard, dome sheet, touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), scroll wheel, joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The sensing unit 140 detects a current state of the mobile terminal 100 (e.g., an open or closed state of the mobile terminal 100), a position of the mobile terminal 100, presence or absence of contact (i.e., touch input) by a user with the mobile terminal 100, an orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates a command or signal for controlling an operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is opened or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device.
The interface unit 170 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating a user using the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter, referred to as an "identification device") may take the form of a smart card, and thus, the identification device may be connected with the mobile terminal 100 via a port or other connection means. The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100 or may serve as a path through which various command signals input from the cradle are transmitted to the mobile terminal. Various command signals or power input from the cradle may be used as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are overlapped with each other in the form of a layer to form a touch screen, the display unit 151 may serve as an input device and an output device. The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a thin film transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light emitting diode) display or the like. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output as sound when the mobile terminal is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The memory 160 may store software programs and the like for processing and controlling operations performed by the controller 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, and the like) that has been or will be output. Also, the memory 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, and the multimedia module 181 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
The power supply unit 190 receives external power or internal power and provides appropriate power required to operate various elements and components under the control of the controller 180.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, an electronic unit designed to perform the functions described herein, and in some cases, such embodiments may be implemented in the controller 180. For a software implementation, the implementation such as a process or a function may be implemented with a separate software module that allows performing at least one function or operation. The software codes may be implemented by software applications (or programs) written in any suitable programming language, which may be stored in the memory 160 and executed by the controller 180.
Up to this point, the mobile terminal has been described in terms of its functionality. Hereinafter, among the various types of mobile terminals, such as folder-type, bar-type, swing-type, and slide-type mobile terminals, a slide-type mobile terminal will be described as an example for the sake of brevity. However, the present invention is not limited to the slide-type mobile terminal and can be applied to any type of mobile terminal.
The mobile terminal 100 as shown in fig. 1 may be configured to operate with communication systems such as wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
A communication system in which a mobile terminal according to the present invention is operable will now be described with reference to fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interface used by the communication system includes, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), and Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), global system for mobile communications (GSM), and the like. By way of non-limiting example, the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
Referring to fig. 2, the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of Base Stations (BSs) 270, Base Station Controllers (BSCs) 275, and a Mobile Switching Center (MSC) 280. The MSC280 is configured to interface with a Public Switched Telephone Network (PSTN) 290. The MSC280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be understood that a system as shown in fig. 2 may include a plurality of BSCs 275.
Each BS270 may serve one or more sectors (or areas), each sector covered by an omni-directional antenna or an antenna pointed in a particular direction radially away from the BS270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS270 may be configured to support a plurality of frequency assignments, with each frequency assignment having a particular spectrum (e.g., 1.25MHz, 5MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent term. In such a case, the term "base station" may be used to refer collectively to a single BSC275 and at least one BS270. A base station may also be referred to as a "cell". Alternatively, each sector of a particular BS270 may be referred to as a cell site.
As shown in fig. 2, a Broadcast Transmitter (BT)295 transmits a broadcast signal to the mobile terminal 100 operating within the system. A broadcast receiving module 111 as shown in fig. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295. In fig. 2, several Global Positioning System (GPS) satellites 300 are shown. The satellite 300 assists in locating at least one of the plurality of mobile terminals 100.
In fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The location information module 115 as shown in fig. 1 is generally configured to cooperate with satellites 300 to obtain desired positioning information, and a typical example of the location information module 115 is GPS. Other techniques that can track the location of the mobile terminal may be used instead of or in addition to GPS tracking techniques. In addition, at least one GPS satellite 300 may selectively or additionally process satellite DMB transmission.
As a typical operation of the wireless communication system, the BS270 receives reverse link signals from various mobile terminals 100. The mobile terminal 100 is generally engaged in conversations, messaging, and other types of communications. Each reverse link signal received by a particular base station 270 is processed within the particular BS 270. The obtained data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions including coordination of soft handoff procedures between BSs 270. The BSCs 275 also route the received data to the MSC280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN290 interfaces with the MSC280, the MSC interfaces with the BSCs 275, and the BSCs 275 accordingly control the BS270 to transmit forward link signals to the mobile terminal 100.
Based on the hardware structure of the mobile terminal 100 and the communication system, the invention provides a dual-camera photographing device.
As shown in fig. 3, fig. 3 is a schematic diagram of the functional modules of the dual-camera photographing device according to the first embodiment of the present invention.
In this embodiment, the dual-camera photographing apparatus includes: capture module 10, determination module 20, first acquisition module 30, second acquisition module 40, and replacement module 50.
The capturing module 10 is configured to, when the mobile terminal takes a picture, invoke a wide-angle camera of the mobile terminal to capture a current viewing preview picture;
In this embodiment, the mobile terminal is configured with dual cameras, namely a wide-angle camera and a telephoto camera. When detecting that the user opens an APP with a photographing function on the mobile terminal, the capturing module 10 captures the current framing preview picture through the wide-angle camera of the mobile terminal and displays it on the framing interface of the mobile terminal.
The wide-angle camera uses a photographic lens whose focal length is shorter, and whose angle of view is larger, than those of a standard lens, but whose focal length is longer, and whose angle of view is smaller, than those of a fisheye lens. The basic characteristics of a wide-angle lens are a short focal length, a large angle of view, and a wide field of vision, so that the range of scenery observable through the wide-angle camera from a given viewpoint is much larger than what the human eye can observe from the same viewpoint. Its depth of field is long, so quite a large range of the scene can be captured in sharp focus; and it can emphasize the perspective effect of the picture, exaggerating the foreground and conveying the sense of near and far in the scene, which helps enhance the expressiveness of the picture. The wide-angle camera is therefore better suited to shooting larger scenes, such as buildings and landscapes.
The determining module 20 is configured to determine a first view weight area that needs high-definition processing in a current view preview image, and determine a photographing direction of a corresponding view object according to the first view weight area;
The determining module 20 in the mobile terminal automatically identifies the first framing weight area requiring high-definition processing in the current framing preview picture; for example, the mobile terminal determines through face recognition technology that a face image area is the first framing weight area requiring high-definition processing. Additionally or alternatively, when a touch operation triggered by the user in the current framing preview picture is detected, the determining module 20 determines the touch position corresponding to the touch operation as the first framing weight area requiring high-definition processing. For example, during photographing preview, if it is detected that the user taps the head image area of a puppy in the current framing preview picture, the puppy's head image area is determined as the first framing weight area requiring high-definition processing in the current framing preview picture.
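The selection logic above, where a user touch takes priority over an automatically recognized face area, can be sketched as follows. This is a minimal illustration only; the function name, the fixed touch-region size, and the `(x, y, w, h)` rectangle convention are assumptions, not details from the patent:

```python
def select_weight_region(touch_point=None, face_rects=None, touch_size=(200, 200)):
    """Pick the first framing weight area.

    A user touch takes priority; otherwise fall back to the first
    automatically detected face rectangle; otherwise report no region.
    Rectangles are (x, y, w, h) in preview-picture pixel coordinates.
    """
    if touch_point is not None:
        w, h = touch_size
        x, y = touch_point
        # centre a fixed-size region on the touch position
        return (x - w // 2, y - h // 2, w, h)
    if face_rects:
        return face_rects[0]
    return None
```

In a real terminal the face rectangles would come from the camera pipeline's face detector, and the touch region would likely adapt to the tapped object rather than use a fixed size.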
When the first framing weight area requiring high-definition processing in the current framing preview picture has been determined, the framing object corresponding to the first framing weight area is determined; for example, if the first framing weight area is a face image area, the corresponding framing object is determined to be a face. The photographing direction of the telephoto camera in the mobile terminal toward the framing object is then determined according to the first framing weight area; for example, the wide-angle camera cooperates with the telephoto camera to calculate the rotation direction and rotation angle of the telephoto camera for photographing the framing object.
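One plausible way to derive the rotation direction and angle from the weight area's position is a pinhole model of the wide-angle camera: the offset of the region's centre from the picture centre maps to a pan/tilt angle. The patent does not specify the calculation, and the field-of-view values and names below are illustrative assumptions:

```python
import math

def tele_rotation(region, frame_size, wide_fov_deg=(84.0, 62.0)):
    """Estimate the pan/tilt angles (degrees) for the telephoto camera so
    its optical axis points at the centre of the framing weight area.

    region: (x, y, w, h) in wide-angle preview pixels.
    frame_size: (width, height) of the preview picture.
    wide_fov_deg: assumed horizontal/vertical FOV of the wide-angle lens.
    """
    x, y, w, h = region
    fw, fh = frame_size
    cx, cy = x + w / 2.0, y + h / 2.0
    # focal lengths in pixel units, derived from the assumed FOV
    fx = (fw / 2.0) / math.tan(math.radians(wide_fov_deg[0] / 2.0))
    fy = (fh / 2.0) / math.tan(math.radians(wide_fov_deg[1] / 2.0))
    # pixel offset of the region centre from the image centre
    dx, dy = cx - fw / 2.0, cy - fh / 2.0
    pan = math.degrees(math.atan(dx / fx))
    tilt = math.degrees(math.atan(dy / fy))
    return pan, tilt
```

A positive pan here means the object lies to the right of the optical axis; the sign convention, like everything else above, is an assumption for illustration.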
The first acquisition module 30 is configured to rotate a tele camera of the mobile terminal according to the photographing direction of the view finding object, and call the tele camera to acquire a high-definition picture of the view finding object;
When the rotation direction and rotation angle of the telephoto camera of the mobile terminal have been determined, the first acquisition module 30 rotates the telephoto camera according to that rotation direction and rotation angle, determines the depth of field for shooting the framing object, shoots the framing object at that depth of field to obtain a high-definition picture of the framing object, and caches the high-definition picture.
It should be noted that when a plurality of first framing weight areas are determined to exist in the current framing preview picture, each first framing weight area corresponds to one framing object; the telephoto camera of the mobile terminal must then be rotated according to the photographing direction of each framing object in turn, called to acquire a high-definition picture of each framing object, and each acquired high-definition picture must be cached.
The telephoto camera has a long focal length and a small angle of view and forms a large image on the film or sensor, so that at the same distance it can capture a larger image of the subject than a standard lens can; it is therefore suitable for shooting distant objects. Because its depth-of-field range is smaller than that of a standard lens, it can more effectively blur the background so that the photographed subject stands out, and when the subject is far from the camera, its perspective distortion is smaller and the subject appears more lifelike.
The second acquisition module 40 is configured to acquire a current view-finding preview picture to obtain a view-finding picture and obtain a high-definition picture of a view-finding object when a photographing instruction triggered by a user is received;
the replacing module 50 is configured to replace the first view weight area corresponding to the view picture with a high-definition picture of a view object, and generate and output a synthesized view picture.
When a photographing instruction triggered by a user is received, the second acquisition module 40 acquires a current framing preview picture to obtain a framing picture, the replacement module 50 replaces a first framing weight area corresponding to the framing picture with a high-definition picture of a framing object to generate a synthesized framing picture, and the synthesized framing picture is output for the user to view.
It can be understood that, if there are high-definition pictures of a plurality of framing objects, the high-definition picture of each framing object is replaced into the corresponding first framing weight area of the framing picture. For example, suppose the telephoto camera captures high-definition pictures A, B, C, and D of four framing objects, corresponding respectively to the four first framing weight areas a, b, c, and d of the current framing preview picture. When a photographing instruction triggered by the user is received, the current framing preview picture is captured to obtain the corresponding framing picture, and the high-definition pictures A, B, C, and D are replaced into the four first framing weight areas a, b, c, and d of the framing picture respectively, so as to generate and output one frame of a synthesized framing picture.
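The one-to-one replacement of regions a–d by pictures A–D amounts to pasting each high-definition crop into its rectangle of the framing picture. A NumPy sketch, under the assumption that each patch has already been sized to match its region (the patent's own pipeline handles sizing separately):

```python
import numpy as np

def compose(view, replacements):
    """Replace each first framing weight area of the framing picture
    with its high-definition counterpart.

    view: H x W x 3 framing picture.
    replacements: maps a rectangle (x, y, w, h) to an h x w x 3
    high-definition array already matched to that size.
    """
    out = view.copy()
    for (x, y, w, h), hd in replacements.items():
        assert hd.shape[:2] == (h, w), "patch must match region size"
        out[y:y + h, x:x + w] = hd  # overwrite the weight area
    return out
```

The copy keeps the original framing picture intact, matching the description that the synthesized picture is generated as a new output frame.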
With reference to fig. 4, 5 and 6, how the above-mentioned scheme is implemented is described below by a specific embodiment.
When it is detected that the user opens the camera APP on a mobile phone, the wide-angle camera and the telephoto camera on the phone are started. The wide-angle camera of the phone is called to capture the current framing preview picture 01, which is displayed on the display interface of the phone. Through face recognition technology it is determined that the current framing preview picture 01 takes a person as the shooting subject, and the face image area 02 in the current framing preview picture 01 is determined as the first framing weight area. The rotation direction and rotation angle with which the telephoto camera in the phone should shoot the face corresponding to the face image area 02 are determined, the telephoto camera is rotated accordingly, and the telephoto camera is called to acquire a high-definition picture 03 of the face. When a photographing instruction triggered by the user is received, the current framing preview picture 01 is captured to obtain the framing picture, and the high-definition picture 03 of the face shot by the telephoto camera replaces the face image area 02 corresponding to the framing picture, so as to generate one frame of a synthesized framing picture 06.
The determining module 20 is further configured to determine, based on a selection of a user, a first viewing weight area in the current viewing preview image that needs high definition processing, and/or determine, as the first viewing weight area that needs high definition processing, a viewing area in the current viewing preview image where the image parameter is in a preset image parameter interval.
Determining, based on the user's selection, the first framing weight area requiring high-definition processing in the current framing preview picture means: when a touch operation triggered by the user in the current framing preview picture is detected, the touch position corresponding to the touch operation is determined as the first framing weight area requiring high-definition processing. Alternatively, when the mobile terminal has automatically identified a framing weight area in the current framing preview picture and a touch operation triggered by the user is then detected, the touch position corresponding to the touch operation is determined as the first framing weight area requiring high-definition processing, and the automatically identified framing weight area is either discarded, or retained and likewise treated as a framing weight area requiring high-definition processing.
Determining, as the first framing weight area requiring high-definition processing, a framing area of the current framing preview picture whose image parameters fall within a preset image parameter interval means: the mobile terminal automatically identifies the first framing weight area in the current framing preview picture; for example, if the mobile terminal determines through face recognition technology that facial features appear in the current framing preview picture, and the pixels of the face image area containing those features are higher than those of the other framing areas, the face image area is determined as the first framing weight area requiring high-definition processing. Alternatively, when the image parameters of the entire current framing preview picture fall within the preset image parameter interval, the mobile terminal defaults a certain framing area of the picture as the first framing weight area requiring high-definition processing; for example, the middle area of the current framing preview picture is taken as the first framing weight area by default.
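One concrete reading of an "image parameter" that singles out a region is a per-region detail score such as Laplacian variance, with the picture centre as the default fallback the embodiment describes. A sketch in plain NumPy; the metric choice and all names are assumptions, not the patent's specification:

```python
import numpy as np

def laplacian_variance(gray):
    """Detail/focus measure: variance of a 4-neighbour Laplacian.
    A region scoring above the rest of the frame is a candidate
    first framing weight area."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

def default_weight_region(frame_shape, frac=0.5):
    """Fallback: the middle area of the preview picture."""
    h, w = frame_shape[:2]
    rw, rh = int(w * frac), int(h * frac)
    return ((w - rw) // 2, (h - rh) // 2, rw, rh)
```

A flat region scores zero while a textured region scores high, which is the behaviour needed to prefer a detailed face area over a plain background.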
In the dual-camera photographing device provided by this embodiment, when it is determined that a first framing weight area requiring high-definition processing exists in the current framing preview picture captured by the wide-angle camera, the telephoto camera is called to acquire a high-definition picture of the framing object corresponding to the first framing weight area. When a photographing instruction triggered by the user is received, the current framing preview picture is captured to obtain the framing picture, the high-definition picture of the framing object is replaced into the corresponding first framing weight area of the framing picture, and a synthesized framing picture is generated and output. Because a picture shot by the wide-angle camera has a wide field of view while a picture shot by the telephoto camera is clear and finely detailed, the invention replaces the picture of the framing weight area in the framing picture shot by the wide-angle camera with the high-definition picture shot by the telephoto camera, so that the framing weight area of the framing picture output by the mobile terminal is very clear while the framing picture retains a broad background, thereby improving the visual effect of the framing picture.
Further, based on the first embodiment described above, a second embodiment of the dual-camera photographing device of the present invention is proposed, and in this embodiment, referring to fig. 7, the replacing module 50 includes a matting unit 51 and a replacing unit 52.
The matting unit 51 is configured to determine a second view weight region in the high-definition picture of the view object, and perform matting processing on the high-definition picture of the view object according to the second view weight region to separate a first matting picture corresponding to the second view weight region;
In this embodiment, the image subject of the first framing weight area and that of the second framing weight area are the same; for example, if the image subject of the first framing weight area is face A, then the image subject of the second framing weight area is also face A. When the telephoto camera obtains the high-definition picture of the framing object, the matting unit 51 in the mobile terminal automatically identifies the second framing weight area in that high-definition picture; for example, through face recognition technology, the face image area in the high-definition picture is identified as the second framing weight area. The second framing weight area is then matted out to obtain the corresponding first matting picture; for example, the face image area is matted out to obtain the corresponding face image.
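As a toy stand-in for the matting step: crop the second framing weight area out of the high-definition picture and compute a foreground mask. Real matting or segmentation is far more involved; the brightness threshold and all names below are purely illustrative assumptions:

```python
import numpy as np

def matte_region(hd_picture, region):
    """Cut the second framing weight area out of the high-definition
    picture, returning the crop and a crude foreground mask.

    region: (x, y, w, h) in the high-definition picture's coordinates.
    The mask is a simple brightness threshold standing in for a real
    matting algorithm: near-black pixels are assumed to be background.
    """
    x, y, w, h = region
    crop = hd_picture[y:y + h, x:x + w]
    gray = crop.mean(axis=2) if crop.ndim == 3 else crop
    mask = gray > 10  # illustrative threshold only
    return crop, mask
```

The crop corresponds to the first matting picture; the mask would let later steps overlay only foreground pixels instead of the full rectangle.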
The replacing unit 52 is configured to replace the first framing weight area corresponding to the framing picture with the first matting picture to generate a composite framing picture.
The replacing unit 52 replaces the first matting picture into the first framing weight area corresponding to the framing picture. For example, after the first framing weight area is also matted out of the framing picture, the first matting picture is inlaid (filled) into that area; alternatively, without matting the first framing weight area out of the framing picture, the first matting picture is directly overlaid on the first framing weight area, so as to generate a synthesized framing picture.
How the above scheme is implemented is described below by a specific embodiment in conjunction with fig. 4 and 5.
After the telephoto camera of the mobile phone shoots the high-definition picture 03 of the face, the high-definition face image 04 in the high-definition picture 03 is determined as the second framing weight area, and the high-definition face image 04 is matted out of the high-definition picture 03. The matted-out high-definition face image 05 then replaces the face image 02 in the framing picture to generate a synthesized framing picture; alternatively, the matted-out high-definition face image 05 is directly overlaid on the framing picture to generate the synthesized framing picture.
In this embodiment, the second framing weight area is matted out to obtain the corresponding first matting picture, and the first matting picture replaces the first framing weight area corresponding to the framing picture to generate a synthesized framing picture. Because the first matting picture is shot by the telephoto camera of the mobile terminal, it is clear and rich in detail; replacing it into the first framing weight area corresponding to the framing picture ensures the definition of the important area in the synthesized framing picture while also preserving the overall picture quality of the synthesized framing picture.
Further, based on the second embodiment, a third embodiment of the dual-camera photographing device of the present invention is provided, and in this embodiment, referring to fig. 8, the replacing unit 52 includes a matting subunit 521, an adjusting subunit 522, and a replacing subunit 523.
The matting subunit 521 is configured to perform matting processing on the framing picture according to the first framing weight area, obtaining a matted framing picture and separating out a second matting picture corresponding to the first framing weight area;
the adjusting subunit 522 is configured to adjust the first matting picture according to the second matting picture, so that the picture size of the first matting picture is consistent with that of the second matting picture;
the replacing subunit 523 is configured to inlay the adjusted first matting picture into the first framing weight area corresponding to the matted framing picture to generate a synthesized framing picture.
In this embodiment, the matting subunit 521 performs matting processing on the first framing weight area, obtaining the second matting picture corresponding to the first framing weight area and one frame of matted framing picture. The adjusting subunit 522 adjusts the picture size of the first matting picture to be consistent with that of the second matting picture, and the replacing subunit 523 inlays the adjusted first matting picture into the first framing weight area corresponding to the matted framing picture to generate a synthesized framing picture.
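The size adjustment can be as simple as resampling the first matting picture to the dimensions of the second. A nearest-neighbour sketch in plain NumPy; a production implementation would use proper interpolation (bilinear or better), and the function name is an assumption:

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbour resize of `img` to `size` = (height, width),
    so the first matting picture matches the area matted out of the
    framing picture."""
    th, tw = size
    sh, sw = img.shape[:2]
    # map each target row/column back to its nearest source index
    rows = np.arange(th) * sh // th
    cols = np.arange(tw) * sw // tw
    return img[rows][:, cols]
```

Upscaling a 2x2 image to 4x4 simply repeats each source pixel in a 2x2 block, which is exactly the size-matching behaviour the embodiment needs before inlaying.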
With reference to fig. 4 and 5, how the above method is implemented is described below by a specific embodiment.
The face image 02 in the framing picture is matted out; the high-definition face image 04 obtained by the telephoto camera is adjusted so that its size is consistent with that of the face image 02; and the adjusted high-definition face image 04 is inlaid into the area of the framing picture from which the face image was matted out.
In this embodiment, the first framing weight area is matted out of the framing picture to obtain the corresponding second matting picture and one frame of matted framing picture; the first matting picture is adjusted to a picture size consistent with that of the second matting picture, and the adjusted first matting picture is inlaid into the first framing weight area corresponding to the matted framing picture to generate a synthesized framing picture. Because the picture size of the first matting picture may be larger or smaller than that of the second matting picture, adjusting the first matting picture to be consistent in size with the second matting picture prevents the generated synthesized framing picture from appearing mismatched.
Further, based on the third embodiment, a fourth embodiment of the dual-camera photographing device of the present invention is provided, and in this embodiment, referring to fig. 9, the dual-camera photographing device further includes: an engagement module 60.
The linking module 60 is configured to perform picture linking processing on the edge of the first framing weight area embedded with the first matting picture to generate a synthesized framing picture.
In this embodiment, when the adjusted first matting picture is inlaid into the first framing weight area corresponding to the matted framing picture, the linking module 60 performs picture linking processing on the edge of the first framing weight area, for example gradual-change processing on the edge of the first framing weight area.
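The "gradual-change processing" on the edge can be read as feathering: keep the inlaid patch at full weight in its interior and fade it linearly toward the border so it blends into the surrounding framing picture instead of leaving a visible crack. A NumPy sketch; the ramp width and names are assumptions:

```python
import numpy as np

def feather_blend(base, patch, region, feather=3):
    """Inlay `patch` into `base` at `region` = (x, y, w, h), linearly
    fading the patch toward its edges so the seam is a gradual
    transition rather than a hard boundary."""
    x, y, w, h = region
    # distance of each patch pixel to the nearest border
    yy, xx = np.mgrid[0:h, 0:w]
    d = np.minimum.reduce([xx, yy, w - 1 - xx, h - 1 - yy])
    # per-pixel weight: 1 in the interior, ramping down near the border
    alpha = np.clip((d + 1) / float(feather + 1), 0.0, 1.0)[..., None]
    roi = base[y:y + h, x:x + w].astype(np.float64)
    out = base.copy()
    out[y:y + h, x:x + w] = (alpha * patch + (1 - alpha) * roi).astype(base.dtype)
    return out
```

More sophisticated linking (e.g. Poisson/seamless cloning) would also match colours across the seam; the linear ramp above only smooths the intensity transition.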
How the above-described scheme is implemented is illustrated below by a specific embodiment.
When the adjusted high-definition face image is inlaid into the area from which the face image was matted out of the framing picture, if the joint between the adjusted high-definition face image and that area is not tight enough and a crack appears at the edge of the area, gradual-change processing is performed on the edge so that the crack is repaired. In this embodiment, picture linking processing is performed on the edge of the first framing weight area inlaid with the first matting picture to generate a synthesized framing picture. Because the edge of the first framing weight area inlaid with the first matting picture undergoes picture linking processing, the inlaid area joins naturally with the background of the framing picture, improving the visual effect of the synthesized framing picture.
The invention further provides various embodiments of the double-camera photographing method.
Referring to fig. 10, fig. 10 is a schematic flowchart of a dual-camera photographing method according to a first embodiment of the present invention.
Step S10, when the mobile terminal takes a picture, a wide-angle camera of the mobile terminal is called to capture a current viewing preview picture;
In this embodiment, the mobile terminal is configured with dual cameras, namely a wide-angle camera and a telephoto camera. When detecting that the user opens an APP with a photographing function on the mobile terminal, the current framing preview picture is captured through the wide-angle camera of the mobile terminal and displayed on the framing interface of the mobile terminal.
The wide-angle camera uses a photographic lens whose focal length is shorter, and whose angle of view is larger, than those of a standard lens, but whose focal length is longer, and whose angle of view is smaller, than those of a fisheye lens. The basic characteristics of a wide-angle lens are a short focal length, a large angle of view, and a wide field of vision, so that the range of scenery observable through the wide-angle camera from a given viewpoint is much larger than what the human eye can observe from the same viewpoint. Its depth of field is long, so quite a large range of the scene can be captured in sharp focus; and it can emphasize the perspective effect of the picture, exaggerating the foreground and conveying the sense of near and far in the scene, which helps enhance the expressiveness of the picture. The wide-angle camera is therefore better suited to shooting larger scenes, such as buildings and landscapes.
Step S20, determining a first view weight area needing high-definition processing in the current view preview picture, and determining the photographing direction of the corresponding view object according to the first view weight area;
The mobile terminal automatically identifies the first framing weight area requiring high-definition processing in the current framing preview picture; for example, the mobile terminal determines through face recognition technology that a face image area is the first framing weight area requiring high-definition processing. Additionally or alternatively, when a touch operation triggered by the user in the current framing preview picture is detected, the touch position corresponding to the touch operation is determined as the first framing weight area requiring high-definition processing. For example, during photographing preview, if it is detected that the user taps the head image area of a puppy in the current framing preview picture, the puppy's head image area is determined as the first framing weight area requiring high-definition processing in the current framing preview picture.
When the first framing weight area requiring high-definition processing in the current framing preview picture has been determined, the framing object corresponding to the first framing weight area is determined; for example, if the first framing weight area is a face image area, the corresponding framing object is determined to be a face. The photographing direction of the telephoto camera in the mobile terminal toward the framing object is then determined according to the first framing weight area; for example, the wide-angle camera cooperates with the telephoto camera to calculate the rotation direction and rotation angle of the telephoto camera for photographing the framing object.
Step S30, rotating a tele-camera of the mobile terminal according to the photographing direction of the view finding object, and calling the tele-camera to acquire a high-definition picture of the view finding object;
When the rotation direction and rotation angle of the telephoto camera of the mobile terminal have been determined, the telephoto camera is rotated according to that rotation direction and rotation angle, the depth of field for shooting the framing object is determined, the framing object is shot at that depth of field to obtain a high-definition picture of the framing object, and the high-definition picture is cached.
It should be noted that when a plurality of first framing weight areas are determined to exist in the current framing preview picture, each first framing weight area corresponds to one framing object; the telephoto camera of the mobile terminal must then be rotated according to the photographing direction of each framing object in turn, called to acquire a high-definition picture of each framing object, and each acquired high-definition picture must be cached.
The telephoto camera has a long focal length and a small angle of view and forms a large image on the film or sensor, so that at the same distance it can capture a larger image of the subject than a standard lens can; it is therefore suitable for shooting distant objects. Because its depth-of-field range is smaller than that of a standard lens, it can more effectively blur the background so that the photographed subject stands out, and when the subject is far from the camera, its perspective distortion is smaller and the subject appears more lifelike.
Step S40, when a photographing instruction triggered by a user is received, acquiring a current view-finding preview picture to obtain a view-finding picture, and acquiring a high-definition picture of a view-finding object;
in step S50, the first viewing weight region corresponding to the viewing screen is replaced with a high-definition screen of the viewing object, and a composite viewing screen is generated and output.
When a photographing instruction triggered by the user is received, the current framing preview picture is captured to obtain a framing picture. The first framing weight area of the framing picture is then replaced with the high-definition picture of the framing object to generate a synthesized framing picture, which is output for the user to view.
It can be understood that, if there are high-definition pictures of several framing objects, the high-definition picture of each framing object is replaced into its corresponding first framing weight area of the framing picture. For example, suppose the telephoto camera captures high-definition pictures of four framing objects A, B, C and D, corresponding respectively to the four first framing weight areas a, b, c and d of the current framing preview picture. When a photographing instruction triggered by the user is received, the current framing preview picture is captured to obtain the framing picture, and the high-definition pictures of A, B, C and D are replaced into areas a, b, c and d of the framing picture respectively, generating and outputting one frame of synthesized framing picture.
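The per-region replacement described above can be sketched as follows, with images simplified to 2-D grids of pixel values and each high-definition patch assumed to be already scaled to its target area (a hypothetical, minimal representation):

```python
def replace_regions(frame, patches):
    """Replace each first framing weight area in `frame` with its matching
    high-definition patch.

    frame   -- mutable 2-D grid (list of rows of pixel values)
    patches -- dict mapping (top, left) -> 2-D patch the same size as the
               region it replaces (hypothetical, simplified representation)
    """
    for (top, left), patch in patches.items():
        for r, row in enumerate(patch):
            # Overwrite the slice of the frame row covered by this patch row.
            frame[top + r][left:left + len(row)] = row
    return frame
```

For instance, two patches at (0, 0) and (3, 3) of a 6×6 frame play the roles of areas a and b; a real terminal would index the patches by their detected weight areas rather than by raw coordinates.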
With reference to fig. 4, 5 and 6, how the above-mentioned scheme is implemented is described below by a specific embodiment.
When it is detected that the user opens the camera APP on a mobile phone, the wide-angle camera and the telephoto camera of the phone are started. The wide-angle camera is called to capture a current framing preview picture 01, which is displayed on the phone's display interface. Face recognition determines that the current framing preview picture 01 has a person as its shooting subject, so the face image area 02 in picture 01 is determined as the first framing weight area. The rotation direction and rotation angle with which the phone's telephoto camera can shoot the face corresponding to area 02 are determined, the telephoto camera is rotated accordingly, and it is called to acquire a high-definition picture 03 of the face. When a photographing instruction triggered by the user is received, the current framing preview picture 01 is captured to obtain a framing picture, and the high-definition face picture 03 taken by the telephoto camera is replaced into the face image area 02 of the framing picture to generate one frame of synthesized framing picture 06.
In the dual-camera photographing method provided by this embodiment, when it is determined that a first view weight area requiring high-definition processing exists in the current view preview picture captured by the wide-angle camera, the telephoto camera is called to acquire a high-definition picture of the framing object corresponding to that area. When a photographing instruction triggered by the user is received, the current view preview picture is captured to obtain a view picture, the high-definition picture of the framing object is replaced into the corresponding first view weight area, and a synthesized view picture is generated and output. Because the wide-angle camera's image has a wide field of view while the telephoto camera's image is clear and finely detailed, the invention replaces the view weight area of the wide-angle view picture with the telephoto high-definition picture, so that the view weight area of the view picture output by the mobile terminal is very clear while the background field of view remains wide, thereby improving the visual effect of the view picture.
Further, please refer to fig. 11, which is a flowchart of the refinement of step S50 in the first embodiment of the present invention. The refinement of step S50 includes:
step S51, determining a second framing weight area in the high-definition picture of the framing object, and performing cutout processing on the high-definition picture of the framing object according to the second framing weight area to separate out a first cutout picture corresponding to the second framing weight area;
in the present embodiment, the image subject of the first viewing weight region and that of the second viewing weight region are identical: for example, if the image subject of the first viewing weight region is face A, the image subject of the second viewing weight region is also face A. When the telephoto camera obtains the high-definition picture of the framing object, the mobile terminal automatically identifies the second viewing weight area within it; for example, a face image area recognized by face recognition technology is taken as the second viewing weight area. The second viewing weight area is then extracted by matting to obtain the corresponding first matting picture; for example, the face image area is extracted to obtain the corresponding face picture.
In step S52, the first framing weight region corresponding to the framing picture is replaced by the first matting picture to generate a composite framing picture.
The first matting picture obtained by matting is replaced into the first framing weight area of the framing picture in one of two ways: either the first framing weight area is also cut out of the framing picture by matting and the first matting picture is inlaid (filled) into the resulting hole; or, without cutting the first framing weight area out of the framing picture, the first matting picture is directly overlaid on it. Either way, a composite framing picture is generated.
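The two strategies described here, cutting the first framing weight area out and filling it versus covering it directly, can be sketched on toy grayscale grids; when the matting picture exactly covers the area, both yield the same composite:

```python
def composite_by_overlay(frame, matte, top, left):
    """Directly cover the first framing weight area with the matting
    picture (no need to cut the area out of the framing picture first)."""
    out = [row[:] for row in frame]          # work on a copy of the frame
    for r, row in enumerate(matte):
        out[top + r][left:left + len(row)] = row
    return out

def composite_by_cut_and_fill(frame, matte, top, left):
    """Cut the first framing weight area out of the framing picture,
    then inlay (fill) the matting picture into the hole."""
    out = [row[:] for row in frame]
    for r in range(len(matte)):              # cut: blank the region first
        out[top + r][left:left + len(matte[r])] = [None] * len(matte[r])
    for r, row in enumerate(matte):          # fill: inlay the matting picture
        out[top + r][left:left + len(row)] = row
    return out
```

The grid representation and coordinates are illustrative; on a real terminal the direct overlay avoids the intermediate cut and so saves one pass over the region.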
How the above scheme is implemented is described below by a specific embodiment in conjunction with fig. 4 and 5.
After the telephoto camera of the mobile phone captures a high-definition picture 03 of a human face, the high-definition face image 04 within picture 03 is determined as the second framing weight area and is matted out of picture 03. The matted high-definition face image 04 then replaces the face image 02 in the framing picture to generate a synthesized framing picture; alternatively, the matted high-definition face image 04 is directly overlaid on the framing picture to generate the synthesized framing picture.
In this embodiment, the second framing weight region is matted out to obtain the corresponding first matting picture, which replaces the first framing weight area of the framing picture to generate a synthesized framing picture. Because the first matting picture is taken by the mobile terminal's telephoto camera and is clear and rich in detail, replacing it into the first framing weight area ensures the definition of the important area of the synthesized framing picture while preserving the visual quality of the image as a whole.
Further, please refer to fig. 12, which is a flowchart of the refinement of step S52 in the dual-camera photographing method of the present invention. The refinement of step S52 includes:
step S521, performing matting processing on the framing picture according to the first framing weight area to obtain a matted framing picture, and separating out a second matting picture corresponding to the first framing weight area;
step S522, adjusting the first matting picture according to the second matting picture so that the picture sizes of the first matting picture and the second matting picture are consistent;
step S523, inlaying the adjusted first matting picture into the first framing weight area corresponding to the matted framing picture to generate a synthesized framing picture.
In this embodiment, the first framing weight area is subjected to matting processing to obtain the second matting picture corresponding to the first framing weight area, together with one frame of matted framing picture (the framing picture with the first framing weight area cut out).
With reference to fig. 4 and 5, how the above method is implemented is described below by a specific embodiment.
The face image 02 is matted out of the framing picture; the high-definition face image 04 captured by the telephoto camera is adjusted against the face image 02 so that the sizes of image 04 and image 02 are consistent; and the adjusted high-definition face image 04 is inlaid into the area of the framing picture from which the face image was matted out.
In this embodiment, the first framing weight area is matted out of the framing picture to obtain the corresponding second matting picture and one frame of matted framing picture. After the first matting picture is adjusted to the same picture size as the second matting picture, the adjusted first matting picture is inlaid into the first framing weight area of the matted framing picture to generate the synthesized framing picture. Because the picture size of the first matting picture may be larger or smaller than that of the second matting picture, adjusting it to the same size prevents inconsistencies in the generated synthesized framing picture.
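The size adjustment of the first matting picture is unspecified in the patent; nearest-neighbour resampling, shown here in plain Python as a stand-in for whatever scaler the terminal actually uses, is one minimal way to bring the first matting picture to the second matting picture's size:

```python
def resize_nearest(src, dst_h, dst_w):
    """Resize a 2-D grid of pixel values to dst_h x dst_w using
    nearest-neighbour sampling (floor mapping of destination indices
    back onto the source grid)."""
    src_h, src_w = len(src), len(src[0])
    return [[src[r * src_h // dst_h][c * src_w // dst_w]
             for c in range(dst_w)]
            for r in range(dst_h)]
```

In practice the adjustment would use a higher-quality interpolation (bilinear or better), since the first matting picture is the high-definition content being showcased; nearest-neighbour is used here only to keep the sketch dependency-free.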
Further, please refer to fig. 13, which is a flowchart illustrating a second embodiment of the dual-camera photographing method according to the present invention. In the second embodiment, after step S523, the dual-camera photographing method further includes:
step S60, performing picture join processing on the edge of the first framing weight area embedded with the first matting picture to generate a composite framing picture.
In this embodiment, when the adjusted first matting picture is embedded into the first framing weight area of the matted framing picture, the edge of the first framing weight area is subjected to picture joining processing; for example, a gradual-change (feathering) process is applied to the edge.
How the above-described scheme is implemented is illustrated below by a specific embodiment.
When the adjusted high-definition face image is embedded into the region of the framing picture from which the face image was matted out, the join between the two may not be tight enough, leaving a visible seam along the edge of the region; the edge is then gradually blended to repair the seam. In this embodiment, the edge of the first framing weight area into which the first matting picture has been embedded is subjected to picture joining processing to generate the synthesized framing picture. Because of this edge processing, the first framing weight area joins naturally with the background of the framing picture, improving the visual effect of the synthesized framing picture.
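The "gradual change" at the edge is not defined precisely; one plausible reading is a linear feather that blends the matting picture and the background within a small margin of the seam, sketched here on grayscale grids:

```python
def paste_with_feathered_edge(frame, matte, top, left, margin=1):
    """Inlay `matte` into `frame` at (top, left), but within `margin`
    pixels of the matte's border blend linearly between matte and
    background so the seam is not visible.  The linear ramp is an
    assumption; the patent only says the edge is 'gradually changed'."""
    out = [row[:] for row in frame]
    h, w = len(matte), len(matte[0])
    for r in range(h):
        for c in range(w):
            # Distance (in pixels) from this matte pixel to the matte border.
            d = min(r, c, h - 1 - r, w - 1 - c)
            # Blend weight: partial at the border, 1.0 once past the margin.
            alpha = min(1.0, (d + 1) / (margin + 1))
            out[top + r][left + c] = round(alpha * matte[r][c]
                                           + (1 - alpha) * frame[top + r][left + c])
    return out
```

With margin=1, a border pixel of the matting picture is mixed 50/50 with the background underneath it while interior pixels keep the full high-definition value, which hides a hard seam at the cost of a one-pixel-wide transition band.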
Further, step S20 is specifically: determining the first view weight area requiring high-definition processing in the current view preview picture based on the user's selection, and/or determining a view area of the current view preview picture whose image parameters fall within a preset image parameter interval as the first view weight area requiring high-definition processing.
Determining the first framing weight area based on the user's selection works as follows. When a touch operation triggered by the user on the current framing preview picture is detected, the touch position is determined as the first framing weight area requiring high-definition processing. Alternatively, if the mobile terminal has already automatically identified a framing weight area in the current framing preview picture and a user-triggered touch operation is then detected, the touch position is determined as the first framing weight area requiring high-definition processing, and the automatically identified framing weight area is either discarded or retained, in the latter case also being treated as a framing weight area requiring high-definition processing.
Determining a viewing area whose image parameters fall within a preset image parameter interval as the first viewing weight area works as follows. The mobile terminal automatically identifies the first framing weight area requiring high-definition processing in the current framing preview picture: for example, if face recognition determines that facial features appear in the picture and the pixel density of the face image area is higher than that of the other framing areas, the face image area is determined as the first framing weight area requiring high-definition processing. Alternatively, when the image parameters of the whole current framing preview picture fall within the preset interval, the mobile terminal defaults a certain framing area as the first framing weight area requiring high-definition processing, for example the central area of the picture.
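The selection by image-parameter interval might be sketched as follows; the per-region score and interval values are assumptions, since the patent leaves the concrete image parameter open:

```python
def pick_weight_areas(regions, interval, default_region):
    """Select first framing weight areas automatically.

    regions        -- dict mapping a candidate region (top, left, h, w) to
                      its image-parameter score (e.g. a sharpness or pixel-
                      density measure; hypothetical)
    interval       -- (lo, hi) preset image parameter interval
    default_region -- fallback region (e.g. the central area of the preview)
    Any candidate whose score falls inside the interval is chosen; if none
    qualifies, the default region is used.
    """
    lo, hi = interval
    chosen = [box for box, score in regions.items() if lo <= score <= hi]
    return chosen or [default_region]
```

The fallback branch mirrors the patent's default of treating the middle area of the preview as the weight area when no specific region stands out.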
According to the method and the device, the first view weight area which needs high-definition processing in the current view preview picture is determined based on the selection of the user, and/or the view area of which the image parameters are in the preset image parameter interval in the current view preview picture is determined as the first view weight area which needs high-definition processing, so that the mode of determining the first view weight area is flexible and free, and the picture synthesis speed is improved.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (6)

1. A dual-camera photographing device, wherein the dual cameras comprise a wide-angle camera and a telephoto camera, the dual-camera photographing device comprising:
the capturing module is used for calling a wide-angle camera of the mobile terminal to capture a current viewing preview picture when the mobile terminal takes a picture;
the determining module is used for determining a first view-finding weight area which needs high-definition processing in the current view-finding preview picture, and determining a photographing direction of a corresponding view-finding real object according to the first view-finding weight area, wherein the photographing direction of the view-finding real object comprises a rotating direction and a rotating angle of a tele camera when the view-finding real object is photographed;
the first acquisition module is used for rotating a long-focus camera of the mobile terminal according to the photographing direction of the view finding object and calling the long-focus camera to acquire a high-definition picture of the view finding object;
the second acquisition module is used for acquiring the current view-finding preview picture to obtain a view-finding picture and acquiring a high-definition picture of the view-finding object when a photographing instruction triggered by a user is received;
the replacing module is used for replacing the first framing weight area corresponding to the framing picture with a high-definition picture of the framing object, and generating and outputting a synthesized framing picture;
wherein the replacement module comprises:
the matting unit is used for determining a second framing weight area in the high-definition picture of the framing object and performing matting processing on the high-definition picture of the framing object according to the second framing weight area so as to separate a first matting picture corresponding to the second framing weight area;
a replacing unit, configured to replace the first framing weight area corresponding to the framing picture with the first matting picture to generate a synthesized framing picture;
wherein the replacement unit includes:
the matting sub-unit is used for matting the framing picture according to the first framing weight area to obtain a matting framing picture and separating a second matting picture corresponding to the first framing weight area;
an adjusting subunit, configured to adjust the first matting picture according to the second matting picture so that the picture sizes of the first matting picture and the second matting picture are consistent;
and the replacing subunit is used for inlaying the adjusted first matting picture into the first framing weight area corresponding to the matting framing picture so as to generate a synthesized framing picture.
2. The dual-camera photographing device according to claim 1, wherein the dual-camera photographing device further comprises:
and the linking module is used for carrying out picture linking processing on the edge of the first framing weight area embedded with the first matting picture so as to generate a synthesized framing picture.
3. The dual-camera photographing device according to any one of claims 1-2, wherein the determining module is further configured to:
and determining a first viewing weight area which needs high-definition processing in the current viewing preview picture based on the selection of a user, and/or determining a viewing area of which the image parameters are in a preset image parameter interval in the current viewing preview picture as the first viewing weight area which needs high-definition processing.
4. A double-camera photographing method is characterized in that the double cameras comprise wide-angle cameras and long-focus cameras, and the double-camera photographing method comprises the following steps:
when the mobile terminal takes a picture, calling a wide-angle camera of the mobile terminal to capture a current viewing preview picture;
determining a first view-finding weight area which needs high-definition processing in the current view-finding preview picture, and determining a photographing direction corresponding to a view-finding object according to the first view-finding weight area, wherein the photographing direction of the view-finding object comprises a rotating direction and a rotating angle of a long-focus camera when the view-finding object is photographed;
rotating a long-focus camera of the mobile terminal according to the photographing direction of the view finding object, and calling the long-focus camera to acquire a high-definition picture of the view finding object;
when a photographing instruction triggered by a user is received, acquiring the current view-finding preview picture to obtain a view-finding picture, and acquiring a high-definition picture of the view-finding object;
replacing the first view weight area corresponding to the view picture with a high-definition picture of the view object, and generating and outputting a synthesized view picture;
wherein, the step of replacing the first view weight area corresponding to the view picture with the high definition picture of the view object, and generating and outputting a synthesized view picture comprises:
determining a second framing weight area in the high-definition picture of the framing real object, and performing cutout processing on the high-definition picture of the framing real object according to the second framing weight area to separate out a first cutout picture corresponding to the second framing weight area;
replacing the first framing weight area corresponding to the framing picture with the first matting picture to generate a synthesized framing picture;
wherein the step of replacing the first framing weight area corresponding to the framing picture with the first matting picture to generate a composite framing picture comprises:
performing cutout processing on the framing picture according to the first framing weight area to obtain a cutout framing picture, and separating a second cutout picture corresponding to the first framing weight area;
adjusting the first matting picture according to the second matting picture to make the picture sizes of the first matting picture and the second matting picture consistent;
and inlaying the adjusted first matting picture into the first framing weight area corresponding to the matting framed picture to generate a synthesized framed picture.
5. The dual-camera photographing method according to claim 4, wherein after the step of inlaying the adjusted first matting picture into the first framing weight area corresponding to the matted framing picture, the method further comprises:
and carrying out picture connection processing on the edge of the first framing weight area embedded with the first matting picture so as to generate a synthesized framing picture.
6. The dual-camera photographing method according to any one of claims 4 to 5, wherein the step of determining the first view weight region in the currently viewed preview picture that requires high definition processing comprises:
and determining a first viewing weight area which needs high-definition processing in the current viewing preview picture based on the selection of a user, and/or determining a viewing area of which the image parameters are in a preset image parameter interval in the current viewing preview picture as the first viewing weight area which needs high-definition processing.
CN201611040702.8A 2016-11-11 2016-11-11 Double-camera shooting method and device Active CN106454121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611040702.8A CN106454121B (en) 2016-11-11 2016-11-11 Double-camera shooting method and device

Publications (2)

Publication Number Publication Date
CN106454121A CN106454121A (en) 2017-02-22
CN106454121B true CN106454121B (en) 2020-02-07

Family

ID=58221639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611040702.8A Active CN106454121B (en) 2016-11-11 2016-11-11 Double-camera shooting method and device

Country Status (1)

Country Link
CN (1) CN106454121B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580687A (en) * 2013-10-11 2015-04-29 Lg电子株式会社 Mobile terminal and controlling method thereof
CN105847674A (en) * 2016-03-25 2016-08-10 维沃移动通信有限公司 Preview image processing method based on mobile terminal, and mobile terminal therein
CN105991930A (en) * 2016-07-19 2016-10-05 广东欧珀移动通信有限公司 Zoom processing method and device for dual cameras and mobile terminal


Also Published As

Publication number Publication date
CN106454121A (en) 2017-02-22


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant