CN109120858B - Image shooting method, device, equipment and storage medium

Publication number: CN109120858B (grant); earlier publication CN109120858A
Application number: CN201811278318.0A (filed by Nubia Technology Co Ltd)
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 马辉南
Original Assignee / Current Assignee: Nubia Technology Co Ltd
Prior art keywords: image, area, mode, images, region
Legal status: Active (granted)

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N23/60 — Control of cameras or camera modules
    • H04N23/62 — Control of parameters via user interfaces
    • H04N23/80 — Camera processing pipelines; components thereof

Abstract

The embodiment of the invention discloses an image shooting method, apparatus, device and storage medium. The method includes: acquiring a first image in a first mode using a first camera unit; acquiring a second image in a second mode using a second camera unit, wherein the first mode is different from the second mode; extracting a first area of the first image and a second area of the second image respectively, wherein the photographic subject of the first area is different from that of the second area; and performing image stitching processing on the first area and the second area to obtain a captured image.

Description

Image shooting method, device, equipment and storage medium
Technical Field
The present invention relates to the field of terminal technologies, and in particular, to, but not limited to, an image capturing method, apparatus, device, and storage medium.
Background
In daily photography, a person and a scene are usually photographed at the same time, and users generally want both the person and the scenery to be clear in the resulting picture. However, when the focus position is placed in the scenery area during shooting, the scenery is captured clearly but the portrait is not; conversely, when the focus position is placed in the portrait area, the portrait is captured clearly but the scenery is not. The user therefore has to shoot repeatedly, which reduces shooting efficiency and degrades the user experience.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image capturing method, an image capturing apparatus, an image capturing device, and a storage medium, which can greatly improve the quality of a picture and improve the capturing efficiency.
The technical scheme of the embodiment of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image capturing method, where the method includes:
acquiring a first image in a first mode using a first camera unit; acquiring a second image in a second mode using a second camera unit; wherein the first mode is different from the second mode;
extracting a first area of the first image and a second area of the second image respectively; wherein the photographic subject of the first area is different from the photographic subject of the second area;
and performing image stitching processing on the first area and the second area to obtain a captured image.
In a second aspect, an embodiment of the present invention provides an image capturing apparatus, including:
a first acquisition unit configured to acquire a first image in a first mode;
a second acquisition unit configured to acquire a second image in a second mode; wherein the first mode is different from the second mode;
an extraction unit configured to extract a first region of the first image and a second region of the second image, respectively; wherein the photographic subject of the first area is different from the photographic subject of the second area;
and an image stitching unit, configured to perform image stitching processing on the first area and the second area to obtain a captured image.
In a third aspect, an embodiment of the present invention provides an image capturing apparatus, including at least: a processor and a storage medium configured to store executable instructions, wherein: the processor is configured to execute stored executable instructions;
the executable instructions are configured to perform the image capture method described above.
In a fourth aspect, embodiments of the present invention provide a storage medium having stored therein computer-executable instructions configured to perform the above-described image capturing method.
The embodiment of the invention provides an image shooting method, apparatus, device and storage medium. The method includes: acquiring a first image in a first mode using a first camera unit; acquiring a second image in a second mode using a second camera unit, wherein the first mode is different from the second mode; extracting a first area of the first image and a second area of the second image respectively, wherein the photographic subject of the first area is different from that of the second area; and performing image stitching processing on the first area and the second area to obtain a captured image. Because the first area and the second area are each captured with good quality, stitching them together yields a captured image of high overall quality without requiring the user to shoot repeatedly, which improves shooting efficiency and greatly improves the user experience.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having different letter suffixes may represent different examples of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed herein.
Fig. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention;
fig. 2 is a diagram of a communication network system architecture according to an embodiment of the present invention;
Fig. 3 is a schematic flow chart of an implementation of the image capturing method according to Example One of the present invention;
Fig. 4 is a schematic view of an application scenario of the image capturing method according to an embodiment of the present invention;
Fig. 5 is a schematic flow chart of an implementation of the image capturing method according to Example Two of the present invention;
Fig. 6 is a schematic flow chart of an implementation of the image capturing method according to Example Three of the present invention;
Fig. 7 is a schematic diagram of the composition structure of the image capturing apparatus according to Example Four of the present invention;
Fig. 8 is a schematic structural diagram of the image capturing device according to Example Five of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description will be given by way of example of a mobile terminal, and it will be understood by those skilled in the art that the construction according to the embodiment of the present invention can be applied to a fixed type terminal, in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: a radio frequency (RF) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 does not limit the mobile terminal, which may include more or fewer components than those shown, combine some components, or arrange the components differently.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access 2000(Code Division Multiple Access 2000, CDMA2000), Wideband Code Division Multiple Access (WCDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), Frequency Division duplex Long Term Evolution (FDD-LTE), and Time Division duplex Long Term Evolution (TDD-LTE), etc.
WiFi is a short-distance wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user receive and send e-mails, browse web pages, access streaming media and so on, providing wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is not an essential component of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or video obtained by an image capturing device (such as a camera unit) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention. The communication network system is an LTE system of universal mobile telecommunications technology, and includes User Equipment (UE) 201, an Evolved UMTS Terrestrial Radio Access Network (E-UTRAN) 202, an Evolved Packet Core network (EPC) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Generally, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include a Mobility Management Entity (MME) 2031, a Home Subscriber Server (HSS) 2032, other MMEs 2033, a Serving Gateway (SGW) 2034, a PDN Gateway (PGW) 2035, a Policy and Charging Rules Function (PCRF) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203, and provides bearer and connection management. The HSS2032 provides registers such as a home location register (not shown) and holds subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW2034; the PGW2035 may provide IP address assignment for the UE201 and other functions; and the PCRF2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IP Multimedia Subsystem (IMS) or other IP services, and the like.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the following describes specific technical solutions of the present invention in further detail with reference to the accompanying drawings in the embodiments of the present invention. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Example One
The embodiment of the invention provides an image shooting method, which is applied to terminal equipment with an image shooting function, wherein the terminal equipment can be electronic equipment with a shooting function, such as a mobile phone, a tablet personal computer, a notebook computer, a desktop computer, a personal digital assistant and the like. The functions implemented by the image capturing method of the present embodiment may be implemented by calling a program code by a processor in the terminal device, and of course, the program code may be stored in a computer storage medium, and the terminal device at least includes a processor and a storage medium.
Fig. 3 is a schematic flow chart of an implementation of an image capturing method according to an embodiment of the present invention, as shown in fig. 3, the method includes the following steps:
Step S301, acquiring a first image in a first mode using a first camera unit, and acquiring a second image in a second mode using a second camera unit.
Here, the first imaging unit and the second imaging unit may be imaging units on the same terminal device, for example, cameras. The first camera unit and the second camera unit can be the same camera, or the first camera unit and the second camera unit can be different cameras.
When the first camera unit and the second camera unit are the same camera, the two acquisitions in step S301 are performed in sequence: the camera first acquires the first image in the first mode, and then acquires the second image in the second mode.
When the first camera unit and the second camera unit are different cameras, the two acquisitions in step S301 need not follow a sequence: the first camera unit acquires the first image in the first mode while the second camera unit acquires the second image in the second mode.
In this embodiment, the first mode and the second mode are different modes. For example, the first mode may be a portrait mode and the second mode may be a scenery mode. When shooting in the first mode, the focus position is set on the person, so that in the first image the portrait part is clear relative to the scenery part; when shooting in the second mode, the focus position is set on the scenery, so that in the second image the scenery part is clear relative to the portrait part. Of course, the first mode and the second mode may be other modes, set according to actual needs. In this embodiment, the user can set the first mode of the first camera unit and the second mode of the second camera unit in advance before shooting.
It should be noted that, when the first image capturing unit is used to capture the first image in the first mode and the second image capturing unit is used to capture the second image in the second mode, the first image capturing unit and the second image capturing unit capture the same target object.
For example, when a user photographs a friend at a certain place through the terminal device, the first camera unit and the second camera unit can shoot the friend and the current scene simultaneously, or shoot them separately at a preset time interval. The preset time is very short, on the order of milliseconds, which ensures that the acquired first image and second image are consistent as a whole, that is, the picture of the first image and the picture of the second image are the same.
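The patent does not tie the capture step to any particular API. Purely as an illustrative sketch, the following Python/OpenCV function grabs one frame from each of two camera devices, either back to back or a few milliseconds apart; the device indices 0 and 1 and the helper name capture_pair are assumptions, not details from the patent.

```python
import time
import cv2

def capture_pair(delay_ms=0):
    """Grab one frame from each of two cameras, optionally a few
    milliseconds apart (the patent's 'preset time'). Device indices
    0 and 1 are assumed stand-ins for the two camera units."""
    cam_a = cv2.VideoCapture(0)  # first camera unit (e.g., portrait mode)
    cam_b = cv2.VideoCapture(1)  # second camera unit (e.g., scenery mode)
    ok_a, first_image = cam_a.read()
    if delay_ms:
        time.sleep(delay_ms / 1000.0)
    ok_b, second_image = cam_b.read()
    cam_a.release()
    cam_b.release()
    if not (ok_a and ok_b):
        raise RuntimeError("camera read failed")
    return first_image, second_image
```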
Step S302, respectively extracting a first region of the first image and a second region of the second image.
Here, the photographic subject in the first area is different from the photographic subject in the second area. When the terminal device acquires the first image and the second image of the same picture, different subjects, for example both a portrait and a landscape, exist in the picture of each image; step S302 extracts these different subjects from the same picture.
In this embodiment, the terminal device needs to identify the photographic subjects in the first image and the second image, for example, recognize whether a subject is a person or scenery, and then distinguish the two. When the first region is the region formed by the person, extracting the first region of the first image means separating the person from the first image; that is, the first region can be extracted using a matting technique. When the second region is the region formed by the scenery, the second region of the second image can likewise be extracted by separating the scenery from the second image by matting.
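The patent does not prescribe a particular matting algorithm. As a minimal sketch under that caveat, the following Python/OpenCV function separates a rough person region with GrabCut; the bounding rectangle rect is an assumed input (for example, from a person detector), not something the patent specifies.

```python
import cv2
import numpy as np

def extract_person_region(image, rect):
    """Separate a rough person region from the rest of the frame.

    `rect` is a hypothetical (x, y, w, h) box around the person;
    GrabCut refines it into a per-pixel foreground mask."""
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    # Pixels marked definite or probable foreground form the person mask.
    person_mask = np.where(
        (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0
    ).astype(np.uint8)
    return person_mask
```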
Step S303, performing image stitching processing on the first area and the second area to obtain a captured image.
Here, performing image stitching processing on the first region and the second region means stitching the two extracted regions together: the second region of the second image is placed at the corresponding position in the first image, or the first region of the first image is placed at the corresponding position in the second image, and the two regions are then joined to form one complete picture, namely the captured image.
In this embodiment, the portion of the first image other than the first area corresponds to the second area in the second image, and the portion of the second image other than the second area corresponds to the first area in the first image. In this way, it can be ensured that a complete captured image can be obtained when the image stitching processing is performed in step S303.
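The stitching itself is left abstract in the patent. A minimal compositing sketch is given below, assuming the two frames are already aligned and a person mask is available (for example, from the matting sketch above); the feathered alpha blend is one plausible way to hide the seam, not the patent's prescribed method.

```python
import cv2
import numpy as np

def composite(person_img, scene_img, person_mask):
    """Paste the sharp person region over the sharp scenery frame.

    Assumes the frames share a viewpoint (near-simultaneous capture).
    Feathering the binary mask softens the seam between the regions."""
    mask_f = cv2.GaussianBlur(person_mask, (21, 21), 0)
    mask_f = (mask_f.astype(np.float32) / 255.0)[..., None]  # (H, W, 1)
    out = (person_img.astype(np.float32) * mask_f
           + scene_img.astype(np.float32) * (1.0 - mask_f))
    return out.astype(np.uint8)
```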
Fig. 4 is a schematic view of an application scenario of an image capturing method according to an embodiment of the present invention. As shown in fig. 4, a dual-screen mobile phone 40 has two cameras: a first camera 41 (the first camera unit) and a second camera 42 (the second camera unit). A first image captured by the first camera 41 may be displayed on a first display screen 411, and a second image captured by the second camera 42 may be displayed on a second display screen 421. When a user, while travelling, wants to take a picture of a person against a landscape with the dual-screen mobile phone 40, the user can shoot using the method provided by this embodiment.
When the user presses the shooting key on the dual-screen mobile phone 40, the first camera 41 acquires a first image in the portrait mode (the first mode), where the first image includes a first portrait 412 and a first scene image 413, and the first portrait 412 is clearer than the first scene image 413; the second camera 42 acquires a second image in the scenery mode (the second mode), where the second image includes a second portrait 422 and a second scene image 423, and the second scene image 423 is clearer than the second portrait 422. The first portrait 412 is extracted from the first image and the second scene image 423 is extracted from the second image; the first portrait 412 and the second scene image 423 are then stitched to obtain the final captured image, which is displayed on the first display screen 411 and/or the second display screen 421.
The image shooting method provided by this embodiment of the invention acquires a first image in a first mode using a first camera unit and a second image in a second mode using a second camera unit, where the first mode is different from the second mode; extracts a first area of the first image and a second area of the second image respectively, where the photographic subject of the first area is different from that of the second area; and performs image stitching processing on the first area and the second area to obtain a captured image. Because the first area and the second area are each captured with good quality, stitching them yields a captured image of high overall quality; the user does not need to shoot repeatedly and pick the best picture afterwards, which improves shooting efficiency and greatly improves the user experience.
Example Two
The embodiment of the invention provides an image shooting method, which is applied to terminal equipment with an image shooting function, wherein the terminal equipment can be electronic equipment with a shooting function, such as a mobile phone, a tablet personal computer, a notebook computer, a desktop computer, a personal digital assistant and the like. The functions implemented by the image capturing method of the present embodiment may be implemented by calling a program code by a processor in the terminal device, and of course, the program code may be stored in a computer storage medium, and the terminal device at least includes a processor and a storage medium.
Fig. 5 is a schematic flow chart of an implementation of a second image capturing method according to an embodiment of the present invention, and as shown in fig. 5, the method includes the following steps:
Step S501, acquiring a first image in a first mode using a first camera unit, and acquiring a second image in a second mode using a second camera unit.
Here, the first mode and the second mode are different modes. For example, the first mode may be a portrait mode and the second mode may be a landscape mode.
The first camera unit and the second camera unit may be camera units on the same terminal device, for example, cameras. The first camera unit and the second camera unit can be the same camera, or the first camera unit and the second camera unit can be different cameras.
In other embodiments, step S501 further includes the steps of:
in step S5011, the first camera unit and the second camera unit shoot a target object at the same time, or the first camera unit and the second camera unit respectively shoot the target object at a preset time interval to obtain the first image and the second image.
In this embodiment, when the first camera unit and the second camera unit shoot a target shooting object simultaneously, the first camera unit and the second camera unit are different camera units, and at this time, the terminal device may be a device having two camera units, for example, a single-screen mobile phone having two cameras, a front-back double-screen mobile phone having two cameras, a folding double-screen mobile phone having two cameras, and the like. Then, the first image capturing unit and the second image capturing unit may capture the same subject at the same time, resulting in the first image and the second image.
When the first camera unit and the second camera unit shoot the target photographic object at a preset time interval, they may be the same camera unit or different camera units, and the terminal device may accordingly have a single camera unit or two camera units, such as a mobile phone with a single camera or a mobile phone with dual cameras. When the first camera unit and the second camera unit are the same camera unit, that camera unit first acquires the first image and then, after the preset time, acquires the second image; when they are different camera units, the first camera unit first acquires the first image, and the second camera unit acquires the second image after the preset time interval.
Here, the preset time is very short and may be in milliseconds, so that it is ensured that the acquired first image and the acquired second image are consistent as a whole, that is, the picture of the first image and the picture of the second image are consistent. The preset time may be set by the terminal device when the terminal device leaves a factory, or may be preset by the user according to actual needs.
Step S502, performing sharpness processing on the first image and the second image to obtain a sharpness-processed first image and a sharpness-processed second image.
Here, performing sharpness processing on the first image and the second image improves their sharpness, compensating for image blurring introduced during shooting by hand shake and the like.
In an embodiment of the present invention, the performing the sharpness processing on the first image and the second image may be implemented by:
Step S5021, performing differential fusion processing on the first image and the second image according to a differential fusion algorithm, so as to improve the sharpness of the first image and the second image.
The first image and the second image are each subjected to differential fusion processing using an existing differential fusion algorithm. In a specific implementation, multiple first images can be acquired and differentially fused to obtain a first image of higher sharpness, and multiple second images can likewise be acquired and differentially fused to obtain a second image of higher sharpness.
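The differential fusion algorithm itself is not disclosed in the patent. As a hedged stand-in only, the sketch below fuses an aligned burst by keeping, for each pixel, the value from the locally sharpest frame (the largest smoothed Laplacian response); any multi-frame fusion with similar intent could be substituted.

```python
import cv2
import numpy as np

def fuse_frames(frames):
    """Fuse a burst of near-identical BGR frames into one cleaner frame.

    Keeps, per pixel, the value from the frame with the strongest local
    Laplacian response (a simple sharpness proxy). Frames are assumed
    to be aligned and of equal size."""
    stack = np.stack([f.astype(np.float32) for f in frames])   # (N, H, W, 3)
    sharpness = np.stack([
        cv2.GaussianBlur(
            np.abs(cv2.Laplacian(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY),
                                 cv2.CV_32F)),
            (15, 15), 0)
        for f in frames
    ])                                                          # (N, H, W)
    best = np.argmax(sharpness, axis=0)                         # (H, W)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols].astype(np.uint8)
```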
Step S503, respectively extracting a first region of the sharpness-processed first image and a second region of the sharpness-processed second image.
Here, the first image includes at least a first target photographic subject and a second target photographic subject, and the second image also includes at least the first target photographic subject and the second target photographic subject. The first image and the second image show the same picture, that is, they contain the same target photographic objects.
In this embodiment, extracting the first region of the sharpness-processed first image and the second region of the sharpness-processed second image in step S503 can be done in either of the following two ways:
in the first mode, the first region includes the first target object, and the second region includes the second target object.
Then, the step S503 of extracting the first region of the first image and the second region of the second image respectively can be implemented by the following steps:
step S5031 extracts a first region where the first target object in the first image is located and a second region where the second target object in the second image is located.
For example, if the first target photographic object is a person, the first area where it is located is a person area; if the second target photographic object is scenery, the second area where it is located is a scenery area. The person area where the person is located is extracted from the first image, and the scenery area where the scenery is located is extracted from the second image.
In the second way, the first area includes the second target photographic object, and the second area includes the first target photographic object.
Then, the step S503 of extracting the first region of the first image and the second region of the second image respectively can be implemented by the following steps:
step S5032 extracts a first region where the second target object in the first image is located and a second region where the first target object in the second image is located.
For example, if the first target photographic object is a person and the second target photographic object is scenery, then in this way the scenery area where the scenery is located is extracted from the first image, and the person area where the person is located is extracted from the second image.
Step S504, performing image stitching processing on the first area and the second area to obtain a captured image.
Here, performing image stitching processing on the first region and the second region means stitching the two extracted regions together: the second region of the second image is placed at the corresponding position in the first image, or the first region of the first image is placed at the corresponding position in the second image, and the two regions are then joined to form one complete picture, namely the captured image.
In this embodiment, the portion of the first image other than the first area corresponds to the second area in the second image, and the portion of the second image other than the second area corresponds to the first area in the first image. In this way, it can be ensured that a complete captured image can be obtained when the image stitching processing is performed in step S504.
In other embodiments, the method further comprises the steps of:
Step S511, acquiring at least two first images in the first mode using the first camera unit, and acquiring at least two second images in the second mode using the second camera unit.
Here, acquiring at least two first images in the first mode using the first camera unit may mean shooting continuously at high speed in the first mode to obtain the at least two first images; likewise, acquiring at least two second images in the second mode using the second camera unit may mean shooting continuously at high speed in the second mode. During high-speed continuous shooting, the time interval between every two shots is very short, on the order of milliseconds, which ensures that the multiple first images, or the multiple second images, acquired in a burst are almost identical.
In other embodiments, when the first imaging unit is used to obtain the at least two first images in the first mode, the first imaging unit may be used to obtain the at least two first images from different angles in the first mode; when the second imaging unit is used to obtain at least two second images in the second mode, the second imaging unit may be used to obtain at least two second images from different angles in the second mode.
Step S512, at least two first regions of the at least two first images and at least two second regions of the at least two second images are extracted respectively.
Performing first region extraction on at least two acquired first images, and extracting a first region in each first image; and performing second region extraction on the at least two acquired second images, and extracting a second region in each second image.
In this embodiment, at least two first regions extracted from the at least two first images include the same image, and at least two second regions extracted from the at least two second images include the same image.
Step S513, acquiring a first operation input through the terminal.
Here, the first operation is used to determine one first selection area among the at least two first areas, and one second selection area among the at least two second areas.
In a specific implementation, the first operation is input by the user through the terminal device in either of the following two ways:
in the first mode, the user views a plurality of first areas and a plurality of second areas, determines a first area with the best shooting effect or the highest definition as a first selection area in the plurality of first areas, determines a second area with the best shooting effect or the highest definition as a second selection area in the plurality of second areas, and then selects the first selection area and the second selection area by clicking a screen.
In the second way, the user views the multiple first images and the multiple second images, determines the first image with the best shooting effect or the highest definition as a first selected image and the second image with the best shooting effect or the highest definition as a second selected image, and selects them by tapping the screen. After the user selects the first selected image and the second selected image, the terminal device takes the first region extracted from the first selected image as the first selection area and the second region extracted from the second selected image as the second selection area.
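Where the patent has the user tap the clearest candidate, an automated analogue could rank candidates by a no-reference sharpness measure. The sketch below uses the variance of the Laplacian, a common proxy; treating it as a hypothetical substitute for the manual selection in step S513 is an assumption, not the patent's method.

```python
import cv2

def sharpness_score(image):
    """Variance of the Laplacian: a standard no-reference sharpness proxy."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def pick_sharpest(candidates):
    """Return the candidate frame or region crop with the highest score."""
    return max(candidates, key=sharpness_score)
```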
Step S514, performing image stitching processing on the first selection area and the second selection area to obtain the captured image.
Here, performing image stitching processing on the first selection area and the second selection area means stitching the two extracted selection areas together: the second selection area, chosen from the second areas of the multiple second images, is placed at the corresponding position in the first selected image; or the first selection area, chosen from the first areas of the multiple first images, is placed at the corresponding position in the second selected image. The two selection areas are thus joined to form one complete picture, namely the captured image.
In this embodiment, the portion of the first selected image other than the first selected region corresponds to the second selected region of the second selected image, and the portion of the second selected image other than the second selected region corresponds to the first selected region of the first selected image. In this way, it can be ensured that a complete captured image can be obtained when the image stitching processing is performed in step S514.
In any of the above embodiments of the present invention, the first mode may be a portrait mode, and accordingly, the first region is a portrait region in the first image; the second mode may be a scene mode, and accordingly, the second region is a scene region in the second image.
The image shooting method provided by this embodiment of the invention acquires a first image in a first mode using a first camera unit and a second image in a second mode using a second camera unit, where the first mode is different from the second mode; performs sharpness processing on the first image and the second image to obtain a sharpness-processed first image and a sharpness-processed second image; extracts a first region of the sharpness-processed first image and a second region of the sharpness-processed second image respectively; and performs image stitching processing on the first region and the second region to obtain a captured image. By performing sharpness processing on the first image and the second image, images of higher sharpness are obtained. Moreover, by stitching a first area and a second area each captured with good quality, or by selecting the best first and second areas from multiple first images and multiple second images, a captured image of high overall quality can be obtained, greatly improving the user experience.
Example Three
The embodiment of the invention provides an image shooting method, which is applied to terminal equipment with an image shooting function, wherein the terminal equipment can be electronic equipment with a shooting function, such as a mobile phone, a tablet personal computer, a notebook computer, a desktop computer, a personal digital assistant and the like. The functions implemented by the image capturing method of the present embodiment may be implemented by calling a program code by a processor in the terminal device, and of course, the program code may be stored in a computer storage medium, and the terminal device at least includes a processor and a storage medium.
Fig. 6 is a schematic flow chart of an implementation of a third image capturing method according to an embodiment of the present invention, and as shown in fig. 6, the method includes the following steps:
Step S601, human-scene separation processing.
In this step, the two cameras of a double-sided-screen phone shoot the target photographic object: one camera starts the portrait mode, blurring the background with depth of field and giving priority to the portrait effect; the other camera starts the scenery mode, blurring the person in the picture and shooting flat, with the background effect (clarity, a complete view, etc.) as the primary concern.
Each time the user presses the shutter once, the dual cameras automatically shoot multiple times, so the double-sided-screen phone obtains multiple portrait photos from camera A and multiple scenery photos from camera B, taken at different layers and angles.
In addition, the multiple photos in each scene are processed with an image differential fusion algorithm to obtain multiple automatically optimized portrait and scenery images.
Step S602, human-scene stitching processing.
In this step, key area images are extracted respectively to obtain multiple area images of the portrait and the background, which serve as candidate stitching materials. The user can select satisfactory local portrait and background images, and the terminal automatically stitches them to form the best photo effect, as sketched below.
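Tying the earlier sketches together, an end-to-end pipeline for this example might look as follows. fuse_frames, extract_person_region, and composite are the illustrative helpers defined in the previous examples, and person_rect is an assumed detector output; none of these names come from the patent.

```python
def shoot_composite_photo(portrait_burst, scene_burst, person_rect):
    """End-to-end sketch of Example Three's pipeline.

    `portrait_burst` / `scene_burst` are lists of aligned BGR frames
    from the portrait-mode and scenery-mode cameras; `person_rect` is
    a hypothetical (x, y, w, h) person box."""
    portrait = fuse_frames(portrait_burst)   # step S601: fuse each burst
    scene = fuse_frames(scene_burst)
    mask = extract_person_region(portrait, person_rect)  # separate person
    return composite(portrait, scene, mask)  # step S602: stitch person + scene
```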
The image shooting method provided by this embodiment of the invention adopts a separate-and-stitch dual-camera technique on a double-sided-screen phone, so the cameras on the two faces of the screen can be used effectively to shoot portraits and scenes from multiple angles. The fused photos achieve the best effect, the photo quality can be greatly improved, and the user experience is improved.
Example Four
An embodiment of the present invention provides an image capturing apparatus. Fig. 7 is a schematic diagram of the composition structure of the image capturing apparatus according to Example Four of the present invention. As shown in fig. 7, the image capturing apparatus 700 includes:
a first acquisition unit 701 for acquiring a first image in a first mode;
a second acquisition unit 702 for acquiring a second image in a second mode; wherein the first mode is different from the second mode;
an extracting unit 703, configured to extract a first region of the first image and a second region of the second image respectively; wherein the photographic subject of the first area is different from the photographic subject of the second area;
and an image stitching unit 704, configured to perform image stitching processing on the first area and the second area to obtain a captured image.
In other embodiments, the apparatus further comprises:
the sharpness processing unit is used for performing sharpness processing on the first image and the second image to obtain a first image subjected to sharpness processing and a second image subjected to sharpness processing;
accordingly, the extraction unit comprises:
and the first extraction module is used for respectively extracting the first region of the first image after the definition processing and the second region of the second image after the definition processing.
In other embodiments, the sharpness processing unit includes:
and a differential fusion processing module, configured to perform differential fusion processing on the first image and the second image according to a differential fusion algorithm, so as to improve the sharpness of the first image and the second image.
In other embodiments, the first image includes at least a first target photographic subject and a second target photographic subject, and the second image includes at least the first target photographic subject and the second target photographic subject;
when the first area includes the first target photographic subject, the second area includes the second target photographic subject;
accordingly, the extraction unit comprises:
the second extraction module is used for extracting a first area where a first target shooting object in the first image is located and a second area where a second target shooting object in the second image is located;
or,
when the first area includes the second target photographic subject, the second area includes the first target photographic subject;
accordingly, the extraction unit comprises:
and the third extraction module is used for extracting a first area where the second target shooting object in the first image is located and a second area where the first target shooting object in the second image is located.
In other embodiments, the apparatus further comprises:
a shooting unit, configured to shoot the target photographic object with both camera units simultaneously, or at a preset time interval, so as to acquire the first image and the second image.
In other embodiments, the apparatus further comprises:
the third acquisition unit is used for acquiring at least two first images in the first mode;
the fourth acquisition unit is used for acquiring at least two second images in a second mode;
a second extraction unit, configured to extract at least two first regions of the at least two first images and at least two second regions of the at least two second images, respectively;
a fifth acquiring unit, configured to acquire a first operation input by the terminal, where the first operation is used to determine one first selection area in the at least two first areas and determine one second selection area in the at least two second areas;
and the second image splicing unit is used for carrying out image splicing processing on the first selection area and the second selection area to obtain the shot image.
In other embodiments, the first mode is a portrait mode, and accordingly, the first region is a portrait region in the first image;
the second mode is a scene mode, and accordingly, the second area is a scene area in the second image.
It should be noted that the description of the apparatus of this embodiment is similar to the description of the method embodiment, and has similar beneficial effects to the method embodiment, and therefore, the description is not repeated. For technical details not disclosed in the embodiments of the apparatus, reference is made to the description of the embodiments of the method of the invention for understanding.
Example Five
An embodiment of the present invention provides an image capturing apparatus, fig. 8 is a schematic diagram of a composition structure of the image capturing apparatus according to the embodiment of the present invention, and as shown in fig. 8, the apparatus 800 at least includes: a processor 801 and a storage medium 802 configured to store executable instructions, wherein:
the processor 801 is configured to execute stored executable instructions configured to perform the image capture method provided in any of the embodiments described above.
It should be noted that the above description of the embodiment of the image capturing apparatus is similar to the description of the embodiment of the method described above, and has similar beneficial effects to the embodiment of the method, and therefore, the description is omitted here. For technical details not disclosed in the embodiments of the image capturing device of the present invention, reference is made to the description of the embodiments of the method of the present invention for understanding.
Correspondingly, the embodiment of the invention provides a computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium and are configured to execute the image shooting method provided by the other embodiment of the invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (8)

1. An image capturing method, characterized in that the method comprises:
acquiring at least two first images in a first mode by adopting a first camera unit; acquiring at least two second images in a second mode by adopting a second camera shooting unit; wherein the first mode is different from the second mode;
performing differential fusion processing on the at least two first images and the at least two second images according to a differential fusion algorithm, so as to improve the definition of the first image and the second image and obtain a definition-processed first image and a definition-processed second image;
respectively extracting a first area of the definition-processed first image and a second area of the definition-processed second image, wherein the photographic subject of the first area is different from the photographic subject of the second area;
and carrying out image splicing processing on the first area and the second area to obtain a shot image.
2. The method of claim 1, wherein the first image includes at least a first target subject and a second target subject, and the second image includes at least the first target subject and the second target subject;
when the first area includes the first target photographic subject, the second area includes the second target photographic subject;
accordingly, extracting a first region of the first image and a second region of the second image, respectively, comprises: extracting a first area where a first target shooting object in the first image is located and a second area where a second target shooting object in the second image is located;
alternatively,
when the first area includes the second target photographic subject, the second area includes the first target photographic subject;
accordingly, the separately extracting the first region of the first image and the second region of the second image comprises: and extracting a first area where a second target shooting object in the first image is located and a second area where the first target shooting object in the second image is located.
3. The method of claim 1, further comprising:
the first camera unit and the second camera unit shoot a target shooting object at the same time, or the first camera unit and the second camera unit respectively shoot the target shooting object at a preset time interval so as to acquire the first image and the second image.
4. The method of claim 1, further comprising:
respectively extracting at least two first areas of the at least two first images and at least two second areas of the at least two second images;
acquiring a first operation input by a terminal, wherein the first operation is used for determining a first selection area in the at least two first areas and determining a second selection area in the at least two second areas;
and carrying out image splicing processing on the first selection area and the second selection area to obtain the shot image.
5. The method according to any one of claims 1 to 4,
the first mode is a portrait mode, and correspondingly, the first area is a portrait area in the first image;
the second mode is a scene mode, and accordingly, the second area is a scene area in the second image.
6. An image capturing apparatus, characterized in that the apparatus comprises:
the first acquisition unit is used for acquiring at least two first images in a first mode;
the second acquisition unit is used for acquiring at least two second images in a second mode; wherein the first mode is different from the second mode;
the extraction unit is used for carrying out differential fusion processing on the at least two first images and the at least two second images according to a differential fusion algorithm so as to improve the definition of the first images and the definition of the second images and obtain the first images after definition processing and the second images after definition processing; respectively extracting a first region of the first image after the definition processing and a second region of the second image after the definition processing; wherein the photographic subject of the first area is different from the photographic subject of the second area;
and the image splicing unit is used for carrying out image splicing processing on the first area and the second area to obtain a shot image.
7. An image capturing apparatus, characterized in that the apparatus comprises at least: a processor and a storage medium configured to store executable instructions, wherein: the processor is configured to execute stored executable instructions;
the executable instructions are configured to perform the image capturing method as provided in any one of the preceding claims 1 to 5.
8. A storage medium having stored therein computer-executable instructions configured to perform the image capture method provided by any of claims 1 to 5.
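As an illustrative aside that is not part of the claims: claim 1 recites a differential fusion step that merges the at least two frames from each mode into a single sharper frame, but does not fix the algorithm. A minimal Python sketch, assuming the frames are already aligned and using a per-pixel sharpness vote, which is only one plausible reading of "differential fusion":

```python
import cv2
import numpy as np

def differential_fusion(frames):
    """Fuse several pre-aligned frames of the same scene into one
    sharper frame via a per-pixel sharpness vote.

    Each output pixel is taken from whichever input frame has the
    strongest local Laplacian response there. This is only one plausible
    reading of the claimed "differential fusion" step, not the patented
    algorithm itself.
    """
    stack = np.stack(frames)  # (N, H, W, 3), uint8 BGR assumed
    sharpness = np.stack([
        np.abs(cv2.Laplacian(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), cv2.CV_64F))
        for f in frames
    ])  # (N, H, W) per-pixel sharpness maps
    # Smooth each map so the per-pixel vote is less noise-driven.
    sharpness = np.stack([cv2.GaussianBlur(s, (9, 9), 0) for s in sharpness])
    best = np.argmax(sharpness, axis=0)  # (H, W) index of the sharpest frame
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]  # (H, W, 3) fused frame
```

Running differential_fusion once over the at-least-two first images and once over the at-least-two second images would yield the definition-processed first and second images on which the region extraction and image splicing of claim 1 then operate.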
CN201811278318.0A (priority 2018-10-30, filed 2018-10-30): Image shooting method, device, equipment and storage medium. Granted as CN109120858B (en); status: Active.

Priority Applications (1)

Application Number: CN201811278318.0A
Priority Date: 2018-10-30
Filing Date: 2018-10-30
Title: Image shooting method, device, equipment and storage medium

Publications (2)

CN109120858A (en), published 2019-01-01
CN109120858B (en), published 2021-01-15

Family ID: 64855647

Family Applications (1)

CN201811278318.0A (Active, granted as CN109120858B): Image shooting method, device, equipment and storage medium

Country Status (1)

CN: CN109120858B (en)

Families Citing this family (4)

(* cited by examiner, † cited by third party)

CN111526278B * (priority 2019-02-01, published 2021-08-24) Oppo广东移动通信有限公司: Image processing method, storage medium, and electronic device
CN111371978A * (priority 2020-03-24, published 2020-07-03) 合肥维信诺科技有限公司: Display terminal
CN111757003A * (priority 2020-07-01, published 2020-10-09) Oppo广东移动通信有限公司: Shooting processing method and device, mobile terminal and storage medium
CN111988540A * (priority 2020-08-20, published 2020-11-24) 合肥维信诺科技有限公司: Image acquisition method and system and display panel

Citations (1)

CN106709878A * (priority 2016-11-30, published 2017-05-24) 长沙全度影像科技有限公司: Rapid image fusion method

Family Cites Families (7)

CN105578045A * (priority 2015-12-23, published 2016-05-11) 努比亚技术有限公司: Terminal and shooting method of terminal
US10298864B2 * (priority 2016-06-10, published 2019-05-21) Apple Inc.: Mismatched foreign light detection and mitigation in the image fusion of a two-camera system
CN107133939A * (priority 2017-04-24, published 2017-09-05) 努比亚技术有限公司: Picture synthesis method, device, and computer-readable storage medium
CN107040723B * (priority 2017-04-28, published 2020-09-01) 努比亚技术有限公司: Imaging method based on double cameras, mobile terminal and storage medium
CN107959795B * (priority 2017-11-30, published 2020-07-14) 珠海大横琴科技发展有限公司: Information acquisition method, information acquisition equipment and computer readable storage medium
CN108154514B * (priority 2017-12-06, published 2021-08-13) Oppo广东移动通信有限公司: Image processing method, device and equipment
CN108650466A * (priority 2018-05-24, published 2018-10-12) 努比亚技术有限公司: Method and electronic device for improving photo dynamic range when shooting portraits in strong light or backlight



Similar Documents

Publication Title
CN106937039B (en) Imaging method based on double cameras, mobile terminal and storage medium
CN108900790B (en) Video image processing method, mobile terminal and computer readable storage medium
CN108259781B (en) Video synthesis method, terminal and computer-readable storage medium
CN107820014B (en) Shooting method, mobile terminal and computer storage medium
CN109120858B (en) Image shooting method, device, equipment and storage medium
CN107959795B (en) Information acquisition method, information acquisition equipment and computer readable storage medium
CN107995420B (en) Remote group photo control method, double-sided screen terminal and computer readable storage medium
CN107948530B (en) Image processing method, terminal and computer readable storage medium
CN111935402B (en) Picture shooting method, terminal device and computer readable storage medium
CN107566734B (en) Intelligent control method, terminal and computer readable storage medium for portrait photographing
CN107040723B (en) Imaging method based on double cameras, mobile terminal and storage medium
CN111885307B (en) Depth-of-field shooting method and device and computer readable storage medium
CN111327840A (en) Multi-frame special-effect video acquisition method, terminal and computer readable storage medium
CN112367443A (en) Photographing method, mobile terminal and computer-readable storage medium
CN112511741A (en) Image processing method, mobile terminal and computer storage medium
CN107241504B (en) Image processing method, mobile terminal and computer readable storage medium
CN107395971B (en) Image acquisition method, image acquisition equipment and computer-readable storage medium
CN111866388B (en) Multiple exposure shooting method, equipment and computer readable storage medium
CN107295262B (en) Image processing method, mobile terminal and computer storage medium
CN112135045A (en) Video processing method, mobile terminal and computer storage medium
CN111787234A (en) Shooting control method and device and computer readable storage medium
CN109510941B (en) Shooting processing method and device and computer readable storage medium
CN108600639B (en) Portrait image shooting method, terminal and computer readable storage medium
CN108495033B (en) Photographing regulation and control method and device and computer readable storage medium
CN112532838B (en) Image processing method, mobile terminal and computer storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant