CN109495689B - Shooting method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN109495689B
CN109495689B (granted from application CN201811641102.6A)
Authority
CN
China
Prior art keywords
camera
shooting
image
parameter values
shooting parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811641102.6A
Other languages
Chinese (zh)
Other versions
CN109495689A (en)
Inventor
Wang Tao (王涛)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd
Priority to CN201811641102.6A
Publication of CN109495689A
Application granted
Publication of CN109495689B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/62Control of parameters via user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time

Abstract

The invention provides a shooting method, a shooting device, an electronic device and a storage medium, which are used for quickly adjusting the shooting parameters of a camera when a region of interest is determined or switched in a preview image. The method comprises the following steps: when a target scene is previewed, acquiring the shooting parameter values of the first camera when the first camera takes each of N image areas contained in a preview image in turn as the focus area, obtaining N groups of shooting parameter values in total; determining N groups of target shooting parameter values of a second camera corresponding to the N groups of shooting parameter values according to a preset mapping relation between the shooting parameter values of the first camera and those of the second camera; and, according to the current attention area of the preview image, determining the group of optimal shooting parameter values corresponding to the current attention area as the first shooting parameter values, and determining the corresponding group of optimal target shooting parameter values as the second shooting parameter values.

Description

Shooting method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of information processing, and in particular, to a shooting method, an apparatus, an electronic device, and a storage medium.
Background
With the continuous development of technology, the functions of electronic devices have become increasingly powerful, and most electronic devices are equipped with a camera to provide a shooting function. When shooting with such a device, the user can determine the region of interest of the shot through the preview interface; for example, the user can select the region of interest by tapping any position on the preview interface.
In the prior art, in order to optimize the shooting effect for the region of interest, the shooting parameter values of the camera (such as the focus parameter value and the exposure parameter value) are calculated for that region, and the camera is configured with the calculated values. Thus, whenever the region of interest is switched, the camera's shooting parameter values must be recalculated and re-applied for the new region, which takes a long time. Moreover, the recalculation is directly visible to the user as jumps in the sharpness and brightness of the preview picture, which degrades the user experience.
Disclosure of Invention
The embodiment of the specification provides a shooting method, a shooting device, electronic equipment and a storage medium, which are used for quickly adjusting shooting parameters of a camera when a region of interest is determined or switched in a preview image.
In a first aspect, the present invention provides a shooting method applied to an electronic device, where the electronic device includes a first camera and a second camera, and the method includes:
when a target scene is previewed, acquiring the optimal shooting parameter values of the first camera when the first camera takes each of N image areas contained in a preview image as a focus area to shoot, and acquiring N groups of optimal shooting parameter values in total, wherein N is an integer greater than 1;
determining N groups of optimal target shooting parameter values of a second camera corresponding to the N groups of optimal shooting parameter values according to a preset mapping relation between the shooting parameter values of a first camera and the shooting parameter values of the second camera;
according to the current attention area of the preview image, determining a group of optimal shooting parameter values corresponding to the current attention area as first shooting parameter values in the N groups of optimal shooting parameter values, and determining a group of optimal target shooting parameter values corresponding to the current attention area as second shooting parameter values in the N groups of optimal target shooting parameter values, so that the first camera adopts the first shooting parameter values, and the second camera adopts the second shooting parameter values to shoot the target scene.
Optionally, when previewing the target scene, acquiring an optimal shooting parameter value when the first camera takes each of N image areas included in the preview image as a focus area for shooting respectively, includes:
based on the parameter value range of the shooting parameter of the first camera, performing parameter adjustment on the parameter value of the shooting parameter according to a preset mode, and acquiring M images of the target scene acquired by the first camera in the parameter adjustment process, wherein each image of the M images comprises the N image areas, and M is an integer greater than 1;
and determining each area of the N image areas as the optimal shooting parameter value when shooting the attention area according to the definition of the N image areas contained in each image.
Optionally, the determining, according to the sharpness of the N image regions included in each image, an optimal shooting parameter value when any one of the N image regions is taken as the attention region includes:
for each image region of the N image regions, comparing the definition of the image region in the M images;
and taking the shooting parameter value corresponding to the image with the highest definition in the image area as the optimal shooting parameter value taking the image area as the attention area.
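The procedure described above — sweep the shooting parameter over its range to collect M images, then for each of the N regions pick the parameter value whose image renders that region sharpest — can be sketched as follows. This is a minimal illustration rather than the patented implementation: the patent does not fix a sharpness metric, so a gradient-energy proxy is assumed here, and the N image areas are assumed to form a uniform grid over a grayscale frame.

```python
import numpy as np

def region_sharpness(img):
    # Gradient-energy proxy for sharpness (an assumption; the patent
    # only requires comparing "definition" between images).
    gy, gx = np.gradient(img.astype(float))
    return float((gx ** 2 + gy ** 2).mean())

def best_params_per_region(images, params, grid=(2, 2)):
    """images: M grayscale HxW arrays captured while sweeping `params`
    (len M). Returns an N-element list (N = rows*cols, row-major) giving,
    for each region, the parameter value of the image in which that
    region is sharpest."""
    rows, cols = grid
    h, w = images[0].shape
    best = []
    for r in range(rows):
        for c in range(cols):
            region = (slice(r * h // rows, (r + 1) * h // rows),
                      slice(c * w // cols, (c + 1) * w // cols))
            scores = [region_sharpness(img[region]) for img in images]
            best.append(params[int(np.argmax(scores))])
    return best
```

On a real device the M images would come from the first camera while the focus (or exposure) parameter is stepped through its range during preview.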
Optionally, the shooting parameters of the first camera are focusing parameters and/or exposure parameters, and the target shooting parameters of the second camera are also corresponding to the focusing parameters and/or the exposure parameters.
Optionally, when the shooting parameters of the first camera and the target shooting parameters of the second camera are the focusing parameters, the preset mapping relationship between the shooting parameter values of the first camera and the shooting parameter values of the second camera is obtained by the following method:
gradually adjusting the position of a target object based on the adjustment range of the shooting position of the target object, determining the focusing parameter value of the target object at each position shot by the first camera, and determining the focusing parameter value of the target object at each position shot by the second camera;
and establishing the mapping relation according to the focusing parameter value of the first camera and the focusing parameter value of the second camera at each position.
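A minimal sketch of building and querying such a focus mapping. All calibration numbers below are invented for illustration; a real device would record actual focus motor codes for the same object at each position, and the use of linear interpolation between recorded positions is an assumption the patent leaves open.

```python
import numpy as np

# Hypothetical calibration: focus values recorded for the same target
# object at each shooting position, one value per camera.
distances_cm = [10, 20, 40, 80, 160]
cam1_focus = [620, 480, 360, 270, 210]   # first camera's focus values
cam2_focus = [655, 505, 380, 285, 220]   # second camera's focus values

def map_focus(cam1_value):
    """Map a first-camera focus value to the corresponding second-camera
    value by linear interpolation over the calibration table."""
    order = np.argsort(cam1_focus)        # np.interp needs ascending x
    x = np.asarray(cam1_focus)[order]
    y = np.asarray(cam2_focus)[order]
    return float(np.interp(cam1_value, x, y))
```

At a recorded calibration point the mapping returns the paired value exactly; between points it interpolates.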
Optionally, when the shooting parameters of the first camera and the target shooting parameters of the second camera are the exposure parameters, the preset mapping relationship between the shooting parameter values of the first camera and the shooting parameter values of the second camera is obtained by:
gradually adjusting an exposure parameter value of the first camera and an exposure parameter value of the second camera so as to enable the image brightness of a plurality of images collected by the first camera to be the same as the image brightness of a plurality of images collected by the second camera;
and establishing the mapping relation according to the exposure parameter value of the image acquired by the first camera and the exposure parameter value of the image acquired by the second camera corresponding to each same image brightness.
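The brightness-matching calibration for exposure can be sketched as follows, under a deliberately simplified sensor model in which image brightness is linear in the exposure value with a per-camera gain; a real calibration would compare the brightness of actually captured frames rather than a model, and the gains, step size and tolerance below are invented for illustration.

```python
def build_exposure_map(levels, g1=1.0, g2=0.8, step=1, tol=0.5):
    """For each first-camera exposure level, gradually step the second
    camera's exposure up until the modelled brightnesses match, and
    record the resulting pair in the mapping."""
    mapping = {}
    for e1 in levels:
        target = g1 * e1                # brightness the first camera produces
        e2 = 0
        while g2 * e2 < target - tol:   # gradually adjust cam2's exposure
            e2 += step
        mapping[e1] = e2
    return mapping
```

The resulting dictionary plays the role of the preset mapping relation: given the first camera's exposure value, the second camera's value is a direct lookup.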
In a second aspect, the present invention provides a shooting device applied to an electronic device, where the electronic device includes a first camera and a second camera, and the device includes:
the first acquisition module is used for acquiring the optimal shooting parameter values of the first camera when the first camera respectively takes each of N image areas contained in a preview image as an attention area to shoot, obtaining N groups of optimal shooting parameter values in total, wherein N is an integer greater than 1;
the second acquisition module is used for determining N groups of optimal target shooting parameter values of the second camera corresponding to the N groups of optimal shooting parameter values according to a preset mapping relation between the shooting parameter values of the first camera and the shooting parameter values of the second camera;
and the processing module is used for determining a group of optimal shooting parameter values corresponding to the current attention area as first shooting parameter values in the N groups of optimal shooting parameter values according to the current attention area of the preview image, and determining a group of optimal target shooting parameter values corresponding to the current attention area as second shooting parameter values in the N groups of optimal target shooting parameter values, so that the first camera adopts the first shooting parameter values, and the second camera adopts the second shooting parameter values to shoot the target scene.
Optionally, the first obtaining module is specifically configured to:
based on the parameter value range of the shooting parameter of the first camera, performing parameter adjustment on the parameter value of the shooting parameter according to a preset mode, and acquiring M images of the target scene acquired by the first camera in the parameter adjustment process, wherein each image of the M images comprises the N image areas, and M is an integer greater than 1;
and determining each area of the N image areas as the optimal shooting parameter value when shooting the attention area according to the definition of the N image areas contained in each image.
Optionally, the first obtaining module is specifically configured to:
for each image region of the N image regions, comparing the definition of the image region in the M images;
and taking the shooting parameter value corresponding to the image with the highest definition in the image area as the optimal shooting parameter value taking the image area as the attention area.
Optionally, the shooting parameters of the first camera are focusing parameters and/or exposure parameters, and the target shooting parameters of the second camera are also corresponding to the focusing parameters and/or the exposure parameters.
Optionally, when the shooting parameters of the first camera and the target shooting parameters of the second camera are both focusing parameters, the preset mapping relationship between the shooting parameter values of the first camera and the shooting parameter values of the second camera is obtained by the following method:
gradually adjusting the position of a target object based on the adjustment range of the shooting position of the target object, determining the focusing parameter value of the target object at each position shot by the first camera, and determining the focusing parameter value of the target object at each position shot by the second camera;
and establishing the mapping relation according to the focusing parameter value of the first camera and the focusing parameter value of the second camera at each position.
Optionally, when the shooting parameters of the first camera and the target shooting parameters of the second camera are exposure parameters, the preset mapping relationship between the shooting parameter values of the first camera and the shooting parameter values of the second camera is obtained by:
gradually adjusting an exposure parameter value of the first camera and an exposure parameter value of the second camera so as to enable the image brightness of a plurality of images collected by the first camera to be the same as the image brightness of a plurality of images collected by the second camera;
and establishing the mapping relation according to the exposure parameter value of the image acquired by the first camera and the exposure parameter value of the image acquired by the second camera corresponding to each same image brightness.
In a third aspect, the present specification provides an electronic device, which includes a processor, and the processor is configured to implement the steps of the shooting method as described in the foregoing first aspect embodiment when executing a computer program stored in a memory.
In a fourth aspect, the present specification provides a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the shooting method as described in the foregoing first aspect embodiment.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
In the technical solution of the embodiment of the present specification, when a target scene is previewed, the optimal shooting parameter values of the first camera are obtained for each of the N image areas of the preview image taken as the attention area; N groups of optimal target shooting parameter values of the second camera are then determined according to a preset mapping relation between the shooting parameter values of the two cameras; and, according to the current attention area of the preview image, the first shooting parameter values of the first camera are selected from the N groups of optimal shooting parameter values and the second shooting parameter values of the second camera from the N groups of optimal target shooting parameter values. Because the optimal shooting parameter values of both cameras are computed in advance for every candidate attention area during preview, selecting or switching the attention area reduces to picking the corresponding precomputed values, which adjusts the cameras' shooting parameter values quickly and saves parameter-adjustment time. Furthermore, since the adjustment is fast, the jumps in preview sharpness and exposure caused by the lengthy parameter recalculation of the prior art do not occur, improving the user experience.
Drawings
Fig. 1 is a schematic diagram of a possible terminal system provided in an embodiment of the present disclosure;
fig. 2 is a flowchart of a shooting method provided in the first aspect in an embodiment of the present specification;
fig. 3 is a schematic diagram of a shooting device provided in a second aspect in an embodiment of the present specification.
Detailed Description
The embodiment of the specification provides a shooting method, a shooting device, electronic equipment and a storage medium, which are used for quickly adjusting shooting parameters of a camera when a region of interest is determined or switched in a preview image.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
when a target scene is previewed, the optimal shooting parameter values of the first camera are obtained for each of the N image areas of the preview image taken as the attention area; N groups of optimal target shooting parameter values of the second camera are then determined according to a preset mapping relation between the shooting parameter values of the two cameras; and, according to the current attention area of the preview image, the first shooting parameter values of the first camera are selected from the N groups of optimal shooting parameter values and the second shooting parameter values of the second camera from the N groups of optimal target shooting parameter values. Because the optimal shooting parameter values of both cameras are computed in advance for every candidate attention area during preview, selecting or switching the attention area reduces to picking the corresponding precomputed values, which adjusts the cameras' shooting parameter values quickly and saves parameter-adjustment time. Furthermore, since the adjustment is fast, the jumps in preview sharpness and exposure caused by the lengthy parameter recalculation of the prior art do not occur, improving the user experience.
The technical solutions of the present invention are described in detail below with reference to the drawings and specific embodiments, and it should be understood that the specific features in the embodiments and examples of the present invention are described in detail in the technical solutions of the present application, and are not limited to the technical solutions of the present application, and the technical features in the embodiments and examples of the present application may be combined with each other without conflict.
The term "and/or" herein merely describes an association between related objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
To facilitate the description of the technical solution in the embodiments of the present specification, a terminal system to which the shooting method in the embodiments of the present specification is applied is first described. Please refer to fig. 1, which is a schematic diagram of a possible terminal system. In fig. 1, a terminal system 100 is a system including a touch input device 101. However, it should be understood that the system may also include one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick. The operation platform of the terminal system 100 may be adapted to operate one or more operating systems, such as general operating systems, e.g., an Android operating system, a Windows operating system, an apple IOS operating system, a BlackBerry operating system, and a google Chrome operating system. However, in other embodiments, the terminal system 100 may run a dedicated operating system instead of a general-purpose operating system.
In some embodiments, the terminal system 100 may also support the running of one or more applications, including but not limited to one or more of the following: disk management applications, secure encryption applications, rights management applications, system setup applications, word processing applications, presentation slide applications, spreadsheet applications, database applications, gaming applications, telephone applications, video conferencing applications, email applications, instant messaging applications, photo management applications, digital camera applications, digital video camera applications, web browsing applications, digital music player applications, digital video player applications, and the like.
The operating system and various applications running on the terminal system may use the touch input device 101 as a physical input interface device for the user. The touch input device 101 has a touch surface as a user interface. Optionally, the touch surface of the touch input device 101 is a surface of the display screen 102, and the touch input device 101 and the display screen 102 together form the touch-sensitive display screen 120, however, in other embodiments, the touch input device 101 has a separate touch surface that is not shared with other device modules. The touch sensitive display screen still further includes one or more contact sensors 106 for detecting whether a contact has occurred on the touch input device 101.
The touch sensitive Display 120 may alternatively use LCD (Liquid Crystal Display) technology, LPD (light-emitting polymer Display) technology, or LED (light-emitting diode) technology, or any other technology that enables image Display. Touch-sensitive display screen 120 further may detect contact and any movement or breaking of contact using any of a variety of touch sensing technologies now known or later developed, such as capacitive sensing technologies or resistive sensing technologies. In some embodiments, touch-sensitive display screen 120 may detect a single point of contact or multiple points of contact and changes in their movement simultaneously.
In addition to the touch input device 101 and the optional display screen 102, the terminal system 100 can also include memory 103 (which optionally includes one or more computer-readable storage media), a memory controller 104, and one or more processors (processors) 105, which can communicate via one or more signal buses 107.
The memory 103 may include a cache and high-speed Random Access Memory (RAM), such as common double data rate synchronous dynamic random access memory (DDR SDRAM), and may also include non-volatile memory (NVRAM), such as one or more read-only memories (ROM), disk storage devices, flash memory devices, or other non-volatile solid-state memory devices such as compact disks (CD-ROM, DVD-ROM), floppy disks, or data tapes. The memory 103 may be used to store the aforementioned operating system and application software, as well as various types of data generated and received during system operation. The memory controller 104 may control access of other components of the system 100 to the memory 103.
The processor 105 is used to run or execute the operating system, various software programs, and its own instruction set stored in the internal memory 103, and is used to process data and instructions received from the touch input device 101 or from other external input pathways to implement various functions of the system 100. The processor 105 may include, but is not limited to, one or more of a Central Processing Unit (CPU), a general purpose image processor (GPU), a Microprocessor (MCU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), and an Application Specific Integrated Circuit (ASIC). In some embodiments, processor 105 and memory controller 104 may be implemented on a single chip. In some other embodiments, they may be implemented separately on separate chips from each other.
In fig. 1, a signal bus 107 is configured to connect the various components of the end system 100 for communication. It should be understood that the configuration and connection of the signal bus 107 shown in fig. 1 is exemplary and not limiting. Depending on the specific application environment and hardware configuration requirements, in other embodiments, the signal bus 107 may adopt other different connection manners, which are familiar to those skilled in the art, and conventional combinations or changes thereof, so as to realize the required signal connection among the various components.
Further, in some embodiments, the terminal system 100 may also include peripheral I/O interfaces 111, RF circuitry 112, audio circuitry 113, speakers 114, microphone 115, and camera module 116. The device 100 may also include one or more heterogeneous sensor modules 118.
RF (radio frequency) circuitry 112 is used to receive and transmit radio frequency signals to enable communication with other communication devices. The RF circuitry 112 may include, but is not limited to, an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF circuitry 112 optionally communicates wirelessly with networks, such as the internet (also known as the World Wide Web (WWW)), an intranet, and/or a wireless network (such as a cellular telephone network, a wireless Local Area Network (LAN), and/or a Metropolitan Area Network (MAN)), and with other devices. The RF circuitry 112 may also include circuitry for detecting Near Field Communication (NFC) fields. The wireless communication may employ one or more communication standards, protocols, and techniques including, but not limited to, Global System for Mobile communications (GSM), Enhanced Data GSM Environment (EDGE), High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Evolution-Data Only (EV-DO), HSPA+, Dual-Cell HSPA (DC-HSPA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Bluetooth Low Energy, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n and/or IEEE 802.11ac), Voice over Internet Protocol (VoIP), Wi-MAX, email protocols (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed at the filing date of this application.
Audio circuitry 113, speaker 114, and microphone 115 provide an audio interface between a user and end system 100. The audio circuit 113 receives audio data from the external I/O port 111, converts the audio data into an electric signal, and transmits the electric signal to the speaker 114. The speaker 114 converts the electrical signals into human-audible sound waves. The audio circuit 113 also receives electrical signals converted by the microphone 115 from sound waves. The audio circuit 113 may further convert the electrical signal to audio data and transmit the audio data to the external I/O port 111 for processing by an external device. The audio data may be transferred to the memory 103 and/or the RF circuitry 112 under the control of the processor 105 and the memory controller 104. In some implementations, the audio circuit 113 may also be connected to a headset interface.
The camera module 116 is used to take still images and video according to instructions from the processor 105. The camera module 116 may have a lens device 1161 and an image sensor 1162, and is capable of receiving an optical signal from the outside through the lens device 1161 and converting it into an electrical signal through the image sensor 1162, such as a complementary metal-oxide-semiconductor (CMOS) sensor or a Charge Coupled Device (CCD) sensor. The camera module 116 may further have an image signal processor (ISP) 1163 for processing and correcting the aforementioned electrical signals and converting them into specific image format files, such as JPEG (Joint Photographic Experts Group) image files, TIFF (Tagged Image File Format) image files, and the like. The image file may be sent to the memory 103 for storage or to the RF circuitry 112 for transmission to an external device, according to instructions from the processor 105 and the memory controller 104.
The external I/O port 111 provides an interface between the terminal system 100 and other external devices or surface physical input modules of the system. A surface physical input module may be a key, a keyboard, a dial, etc., such as a volume key, a power key, a return key, or a camera key. The interface provided by the external I/O port 111 may also include a Universal Serial Bus (USB) interface (which may include USB, Mini-USB, Micro-USB, USB Type-C, etc.), a Thunderbolt interface, a headset interface, a video transmission interface (e.g., a High-Definition Multimedia Interface (HDMI), a Mobile High-Definition Link (MHL) interface), an external storage interface (e.g., an external SD memory card interface), a subscriber identity module (SIM) card interface, and so forth.
The sensor module 118 may have one or more sensors or sensor arrays, including but not limited to:
    • a location sensor, such as a Global Positioning Satellite (GPS) sensor, a BeiDou satellite positioning sensor, or a GLONASS satellite positioning system sensor, for detecting the current geographical location of the device;
    • an acceleration sensor, a gravity sensor, and a gyroscope, for detecting the motion state of the device and assisting positioning;
    • a light sensor, for detecting external ambient light;
    • a distance sensor, for detecting the distance between an external object and the system;
    • a pressure sensor, for detecting the pressure of contact with the system;
    • a temperature and humidity sensor, for detecting ambient temperature and humidity.
The sensor module 118 may also include any other kind and number of sensors or sensor arrays as the application requires.
In some embodiments of the present invention, the photographing method of the present invention may be performed by the processor 105 calling the various components of the terminal system 100 through instructions. The program required for the processor 105 to execute the photographing method of the present invention is stored in the memory 103.
The above is an introduction to the terminal system to which the photographing method is applied; next, the photographing method itself will be described. Fig. 2 is a flowchart of a shooting method provided in an embodiment of the present disclosure. As shown in fig. 2, the method includes the following steps.
Step S11: when a target scene is previewed, acquiring the optimal shooting parameter values of the first camera when the first camera takes each of N image areas contained in a preview image as a focus area to shoot, and acquiring N groups of optimal shooting parameter values in total, wherein N is an integer greater than 1;
step S12: determining N groups of optimal target shooting parameter values of a second camera corresponding to the N groups of optimal shooting parameter values according to a preset mapping relation between the shooting parameter values of a first camera and the shooting parameter values of the second camera;
step S13: according to the current attention area of the preview image, determining a group of optimal shooting parameter values corresponding to the current attention area as first shooting parameter values in the N groups of optimal shooting parameter values, and determining a group of optimal target shooting parameter values corresponding to the current attention area as second shooting parameter values in the N groups of optimal target shooting parameter values, so that the first camera adopts the first shooting parameter values, and the second camera adopts the second shooting parameter values to shoot the target scene.
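The three steps above reduce, at shooting time, to a pair of table lookups. The following is a hypothetical sketch (all names are illustrative, not from the patent): `best_cam1` stands for the N groups of optimal values from step S11, `cam1_to_cam2` for the preset mapping of step S12, and the function performs the selection of step S13.

```python
# Hypothetical sketch of steps S11-S13; all names are illustrative.

def choose_shooting_params(best_cam1, cam1_to_cam2, current_region):
    """best_cam1: {region_id: optimal camera-1 value} (N groups, step S11).
    cam1_to_cam2: {camera-1 value: camera-2 value} (preset mapping, step S12).
    Returns the (first, second) shooting parameter values of step S13."""
    first = best_cam1[current_region]   # camera-1 value for the current attention area
    second = cam1_to_cam2[first]        # mapped camera-2 value
    return first, second
```

Switching the attention area is then a dictionary lookup rather than a fresh parameter calculation, which is the responsiveness gain the method claims.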
It should be noted that the electronic device in the embodiment of the present disclosure may be a smart phone, a tablet computer, a camera, or another device with cameras, and is not limited herein. The electronic device is provided with at least a first camera and a second camera, and may further be provided with a third camera or even more cameras. The method provided by this embodiment can also be used to set the shooting parameters of the third camera. The types of the cameras can be set according to actual needs; for example, the first camera and the second camera are both color (RGB) cameras, or the first camera is a color (RGB) camera and the second camera is a black-and-white (MONO) camera. The number and types of cameras on the imaging device are not limited herein.
Next, a shooting method provided in an embodiment of the present specification will be described by taking an example in which an electronic apparatus is provided with a first camera and a second camera.
First, in step S11, the electronic device is in a shooting preview state for a target scene, which may be any scene the user wants to shoot. In the embodiment of the invention, the first camera calculates the shooting parameters for the different areas in the background; therefore, to avoid interfering with the user's browsing of the preview, the preview image displayed in the shooting preview state is the target scene image acquired by the second camera.
It should be understood that, since the position of the first camera differs from that of the second camera, when images of the same target scene are acquired, there may be a certain offset between the image information acquired by the first camera and that acquired by the second camera. In general, since the distance between the first camera and the second camera is quite small, this offset can be ignored, i.e., the image information collected by the first camera is approximately the same as that collected by the second camera. In addition, when the images shot by the first camera and the second camera are synthesized, the influence of the field-of-view difference between the two cameras on the synthesized image can be eliminated through calculation.
In addition, in order to keep the image information acquired by the first camera completely identical to the image information acquired by the second camera, image registration may be performed. For example, when the image captured by the second camera is a preview image, the image captured by the first camera may be registered according to the preview image, so that the registered image is aligned with the preview image.
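As a toy illustration of the registration idea (hypothetical and one-dimensional; production implementations typically use feature matching or homography estimation on full 2-D images), an integer offset aligning one signal to another can be found by minimizing squared error over candidate shifts:

```python
def best_shift(ref, img, max_shift):
    """Estimate the integer offset aligning img to ref by minimizing
    mean squared error over the overlapping samples (1-D sketch)."""
    def err(s):
        pairs = [(ref[i], img[i + s])
                 for i in range(len(ref)) if 0 <= i + s < len(img)]
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=err)
```

The same exhaustive-search principle extends to 2-D block matching between the first camera's image and the preview image.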
In this embodiment of the present description, in order to realize that the shooting parameter values of each camera are quickly adjusted when selecting or switching the region of interest, the optimal shooting parameter values of each camera corresponding to each image region in the current preview image may be determined in the preview process. In this way, after the region of interest is determined, each camera can be set to the optimum shooting parameter value corresponding to the image region in which the region of interest is located.
Specifically, the preview image includes N image areas. The division of the N image areas may be set according to actual needs, and two ways of dividing the image areas are described below.
In the first mode, image recognition is performed on the preview image, N objects in the preview image are recognized, and the areas where the N objects are located are taken as the N image areas. By way of example, the target scene is a park scene that includes a plurality of objects, such as trees, lake water, and visitors. Correspondingly, the preview image is an acquired park scene image and likewise includes the trees, lake water, visitors, and so on. Feature recognition is performed on the image content of the preview image to recognize each object, i.e., the trees, lake water, visitors, and the like, and the area where each object is located is segmented out, yielding the segmented image areas.
In the second mode, the preview image is subjected to image segmentation according to a preset segmentation mode to obtain N image areas. The preset segmentation mode may be set according to actual needs, and in one embodiment, the preset segmentation mode is to segment the preview image into a plurality of rectangular regions with the same size, where each rectangular region corresponds to one image region. Of course, the preset segmentation method may be another segmentation method, for example, the preview image may be divided according to a preset size and a preset region shape.
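The second mode, splitting the preview into equal rectangles, can be sketched as follows (a minimal illustration; the function name and the (x, y, w, h) tuple convention are assumptions, not from the patent):

```python
def grid_regions(width, height, rows, cols):
    """Split a width x height preview into rows*cols equal rectangles.
    Each region is returned as an (x, y, w, h) tuple, row-major order."""
    w, h = width // cols, height // rows
    return [(c * w, r * h, w, h)
            for r in range(rows) for c in range(cols)]
```

Here N = rows * cols; the preset segmentation mode is just the choice of `rows` and `cols` (or, more generally, of region size and shape).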
It should be understood that each image area in the preview image may be the attention area of the user's shot. For example, still taking the park scene as the target scene, when the user wants to shoot the trees with emphasis, the image area where the trees are located is the attention area; when the user wants to shoot the visitors with emphasis, the image area where the visitors are located is the attention area. With the preview image unchanged, different attention areas require correspondingly adjusted shooting parameter values of the cameras, so that the definition of the image area corresponding to the attention area in the resulting shot image satisfies a preset condition. The preset condition can be set according to actual needs; for example, the preset condition is that the definition is greater than a threshold value.
In step S11, the optimal shooting parameter value of the first camera when each of the N image areas is used as the attention area can be determined. In a specific implementation process, step S11 may be implemented as follows:
based on the parameter value range of the shooting parameter of the first camera, performing parameter adjustment on the parameter value of the shooting parameter according to a preset mode, and acquiring M images of the target scene acquired by the first camera in the parameter adjustment process, wherein each image of the M images comprises the N image areas, and M is an integer greater than 1; and determining each area of the N image areas as the optimal shooting parameter value when shooting the attention area according to the definition of the N image areas contained in each image.
Specifically, when the second camera previews, the first camera shoots a target scene in the background, and the shooting process is realized by continuously adjusting the shooting parameter value of the first camera. It should be understood that the shooting parameters of the first camera are provided with parameter value ranges, and the parameter ranges are ranges from the minimum value of the parameters to the maximum value of the parameters. The preset mode for adjusting the parameters may be set according to actual needs, for example, the preset mode may be to start adjustment from a minimum value of the parameters, or may start adjustment from a maximum value of the parameters.
Taking the adjustment from the minimum value of the parameter as an example, the initial shooting parameter value is the minimum value of the parameter, the final shooting parameter value is the maximum value of the parameter, and the parameter values are gradually increased from the minimum value of the parameter until the shooting parameter value is adjusted to the maximum value of the parameter. In one embodiment, the parameter adjustment may be performed according to a preset step length, and the preset step length may be set according to actual needs. And acquiring one image through the first camera according to the adjustment step length every time the shooting parameter value is adjusted, and acquiring M images.
Taking the adjustment from the maximum value of the parameter as an example, the initial shooting parameter value is the maximum value of the parameter, the final shooting parameter value is the minimum value of the parameter, and the parameter values are gradually reduced from the maximum value of the parameter until the shooting parameter value is adjusted to the minimum value of the parameter. In one embodiment, the parameter adjustment may be performed according to a preset step length, and the preset step length may be set according to actual needs. And acquiring one image through the first camera according to the adjustment step length every time the shooting parameter value is adjusted, and acquiring M images.
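The sweep described above, in either direction, can be sketched generically (hypothetical names; `capture` stands in for triggering the first camera at the current parameter value):

```python
def sweep(p_min, p_max, step, capture):
    """Adjust a shooting parameter from p_min to p_max in fixed steps,
    capturing one image per value; returns [(value, image), ...] (M items)."""
    shots, p = [], p_min
    while p <= p_max:
        shots.append((p, capture(p)))
        p += step
    return shots
```

Sweeping from the maximum downward is the same loop with a negated step; M is determined by the parameter range and the preset step length.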
For the obtained M images, since they are all images obtained by the first camera shooting the same target scene, the image contents of the M images are almost identical. Each image includes the same N image areas as the preview image, and the image contents of the N image areas in each image are almost identical. Although the image contents of the M images are the same, their display effects differ because each image corresponds to different shooting parameter values; for example, for the same image area across the M images, the definition, brightness, etc. of that area may differ from image to image. Still taking the park scene as the target scene, the M images of the park scene are obtained in the above manner, and the definition and brightness of the visitor image area may differ across the M images.
Further, for the attention area, the definition of the attention area in the captured image needs to satisfy a preset condition. Therefore, in this embodiment of the present specification, the optimal shooting parameter value corresponding to each candidate attention area may be determined according to the definitions of the N image areas included in the M images.
In one embodiment, determining the optimal shooting parameter value corresponding to each region of interest may be implemented as follows: for each of the N image regions, comparing the sharpness of that image region in the M images; and taking the shooting parameter value corresponding to the image with the highest definition in the image area as the optimal shooting parameter value taking the image area as the attention area.
In the present embodiment, when a certain image region in the preview image is set as the region of interest, the definition of that region in the finally captured image is the evaluation criterion. Accordingly, the shooting parameter value under which the first camera captures the attention area with the highest definition may be used as the optimal shooting parameter value for that attention area. Specifically, after the first camera obtains the M images, the sharpness of each image area is compared across the M images. Still taking the park scene above as an example, the preview image includes 3 image areas: a visitor image area, a lake water image area, and a tree image area. First, for the visitor image area, the definition of the visitor image area in each of the M images is acquired, obtaining M definitions in total; the image corresponding to the highest of the M definitions is determined, and that image is the best image acquired by the first camera when the visitor image area is taken as the attention area. The shooting parameter value corresponding to that image is the optimal shooting parameter value of the first camera when the attention area is the visitor image area. In the same way, the optimal shooting parameter values of the first camera can be obtained when the attention area is the lake water image area or the tree image area.
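The per-region comparison can be sketched as follows. This is a hypothetical illustration: the patent does not specify a sharpness metric, so a crude mean-absolute-gradient proxy is used here (real pipelines often use variance of Laplacian or similar focus measures); images are plain 2-D lists and regions are (x, y, w, h) tuples.

```python
def sharpness(img, region):
    """Crude sharpness proxy: mean absolute horizontal gradient
    inside region (x, y, w, h) of a 2-D list-of-lists image."""
    x, y, w, h = region
    grads = [abs(img[r][c + 1] - img[r][c])
             for r in range(y, y + h) for c in range(x, x + w - 1)]
    return sum(grads) / len(grads)

def best_params(shots, regions):
    """shots: [(param_value, image), ...] from the parameter sweep.
    Returns, per region, the parameter value whose image is sharpest there."""
    return [max(shots, key=lambda s: sharpness(s[1], reg))[0]
            for reg in regions]
```

Running this once per sweep yields the N groups of optimal shooting parameter values described in the text.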
By the method, the optimal shooting parameter value when the first camera takes each image area as the attention area is obtained.
In an embodiment of the present specification, the shooting parameters of the first camera and the target shooting parameters of the second camera both include focusing parameters and/or exposure parameters. The focusing parameter can be represented by a position parameter of a camera motor, and the exposure parameter can be represented by an exposure time length. Specifically, when the shooting parameters of the first camera are focusing parameters, the shooting parameters of the second camera are also corresponding to the focusing parameters; when the shooting parameters of the first camera are exposure parameters, the shooting parameters of the second camera are correspondingly exposure parameters; when the shooting parameters of the first camera are focusing parameters and exposure parameters, the shooting parameters of the second camera are corresponding to the focusing parameters and the exposure parameters. Of course, the shooting parameters may be other parameters, which are not limited herein.
Next, taking the shooting parameters as the focus parameter and the exposure parameter as an example, a process of acquiring an optimal shooting parameter value when the first camera takes each image area as a region of interest will be described. It should be understood that the optimal focus parameter value and the optimal exposure parameter value may be obtained by two separate processes, or may be obtained simultaneously in the same process.
For the process of obtaining the optimal focusing parameter value, when the target scene is previewed, the focusing parameter of the first camera is gradually adjusted from the minimum focusing parameter value to the maximum (or from the maximum to the minimum) according to the range of the focusing parameter values of the first camera, and the image collected under each focusing parameter value as the first camera moves from near focus to far focus (or from far focus to near focus) is obtained, yielding M images. Further, by comparing the definitions of the N image regions included in each of the M images, the optimal focusing parameter value of the first camera when each image region serves as the region of interest can be determined, yielding N sets of optimal focusing parameter values in total; the specific determination process may refer to the above description and is not repeated here.
For the process of obtaining the optimal exposure parameter value, when the target scene is previewed, the exposure parameter of the first camera is gradually adjusted from the minimum exposure parameter value to the maximum (or from the maximum to the minimum) according to the range of the exposure parameter values of the first camera, and the image collected under each exposure parameter value as the first camera moves from underexposure to overexposure (or from overexposure to underexposure) is obtained, yielding M images. Further, by comparing the definitions of the N image regions included in each of the M images, the optimal exposure parameter value of the first camera when each image region serves as the attention region can be determined, yielding N sets of optimal exposure parameter values in total; the specific determination process may refer to the above description and is not repeated here.
Further, after the N sets of optimal photographing parameter values for the first camera to photograph with each image area as the attention area are determined, the N sets of optimal target photographing parameter values for the second camera to photograph with each image area as the attention area are determined through step S12. In this embodiment of the present specification, the preset mapping relationship between the shooting parameter value of the first camera and the shooting parameter value of the second camera may be a mapping relationship table, a mapping relationship curve, or other types of mapping relationships. The mapping relationship may be obtained in various ways, for example, the mapping relationship is obtained through empirical values, the mapping relationship is obtained through calculation and estimation of camera parameters provided by a camera manufacturer, or the mapping relationship is obtained through testing, which is not limited herein.
Taking the shooting parameters as the focusing parameters and the exposure parameters, and obtaining the mapping relationship in a test manner in a laboratory environment is taken as an example to describe the obtaining process of the mapping relationship. It is to be understood that the mapping of the focus parameter values and the mapping of the exposure parameter values may be obtained by two processes or the same process.
When the mapping relationship of the focusing parameter values is obtained, the mapping relationship may be obtained by: gradually adjusting the position of a target object based on the adjustment range of the shooting position of the target object, determining the focusing parameter value of the target object at each position shot by the first camera, and determining the focusing parameter value of the target object at each position shot by the second camera; and establishing the mapping relation according to the focusing parameter value of the first camera and the focusing parameter value of the second camera at each position.
Specifically, since the motor movement ranges of the first camera and the second camera are limited, only a subject within a certain range can be brought into focus; for example, the motor movement ranges of the first camera and the second camera may allow focusing on a subject 5 cm to 10 m from the camera.
In the process of acquiring the mapping relation of the focusing parameter values, the target object may be gradually moved from a position 5 cm from the cameras to a position 10 m from the cameras. The target object may move according to a preset moving step; for example, if the moving step is 5 cm, the positions of the target object are successively 5 cm, 10 cm, 15 cm, and so on from the cameras. During the test, the target object may be kept at the center position of the preview image throughout.
And at each position of the target object, determining an optimal focusing parameter value for shooting by the first camera with the area of the target object as a focus area, determining an optimal focusing parameter value for shooting by the second camera with the area of the target object as the focus area, and taking the two optimal focusing parameter values as a mapping relation. And generating a mapping relation based on the mapping relation corresponding to the plurality of positions of the target object.
For example, when the position of the target object is 5 cm from the cameras, the optimal focusing parameter value for shooting when the first camera takes the target object as the region of interest is determined to be a1, and that of the second camera is determined to be b1. When the position of the target object is 10 cm from the cameras, the optimal focusing parameter value of the first camera is determined to be a2, and that of the second camera is determined to be b2. Further, the optimal focusing parameter value a1 of the first camera and the optimal focusing parameter value b1 of the second camera are established as a first mapping entry, and a2 and b2 as a second mapping entry. All the entries are combined to generate the mapping relation of the focusing parameters.
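The calibrated pairs (a_i, b_i) form a lookup table. A minimal sketch of the lookup (hypothetical numbers and names; a real implementation might interpolate between the two nearest calibrated pairs rather than snap to the closest one):

```python
# Hypothetical calibration pairs: (camera-1 focus value, camera-2 focus value)
# measured at successive target-object distances during the lab test.
FOCUS_MAP = [(100, 120), (140, 165), (180, 210)]

def map_focus(a, table=FOCUS_MAP):
    """Look up the camera-2 focus value via the nearest calibrated
    camera-1 value."""
    return min(table, key=lambda pair: abs(pair[0] - a))[1]
```

Nearest-match lookup keeps step S12 a constant-time table query regardless of how the mapping was built.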
When the mapping relationship of the exposure parameter values is obtained, the mapping relationship may be obtained by: gradually adjusting an exposure parameter value of the first camera and an exposure parameter value of the second camera so as to enable the image brightness of a plurality of images collected by the first camera to be the same as the image brightness of a plurality of images collected by the second camera; and establishing the mapping relation according to the exposure parameter value of the image acquired by the first camera and the exposure parameter value of the image acquired by the second camera corresponding to each same image brightness.
During the acquisition of the exposure parameter value mapping relationship, the target object may be placed at a fixed position within the fields of view of the first and second cameras and kept at the center position of the preview image. The exposure parameter of the first camera is gradually adjusted from the minimum exposure parameter value to the maximum (or from the maximum to the minimum), and the brightness of the area where the target object is located is obtained in the image shot under each exposure parameter value. Then, according to the brightness of the area where the target object is located in each image shot by the first camera, the exposure parameter value of the second camera that yields the same brightness when shooting the target object is determined.
For example, when the exposure parameter value of the first camera is d1, the target object is photographed to obtain a photographed image, the brightness of an image area where the target object is located in the photographed image is L1, then the exposure parameter value of the second camera is adjusted to make the brightness of an image area corresponding to the target object in the photographed image of the second camera under the exposure parameter value also be L1, and the exposure parameter value e1 of the second camera at the time is recorded. Next, the exposure parameter value of the first camera may be adjusted to d2 according to the preset step length to obtain a captured image, where the brightness of the image region corresponding to the target object in the captured image is L2, the exposure parameter value of the second camera is correspondingly adjusted so that the brightness of the image region corresponding to the target object in the captured image of the second camera under the exposure parameter is also L2, and the exposure parameter value e2 of the second camera at this time is recorded. Further, an exposure parameter value d1 of the first camera and an exposure parameter value e1 of the second camera are established as a first mapping relation, and an exposure parameter value d2 of the first camera and an exposure parameter value e2 of the second camera are established as a second mapping relation. And synthesizing all the mapping relations to generate the mapping relation of the exposure parameters.
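The brightness-matching step for each pair (d_i, e_i) can be sketched as follows (hypothetical; in practice one would sweep camera 2's exposure and measure the target-object region's luma at each value):

```python
def match_exposure(target_luma, cam2_samples):
    """cam2_samples: [(exposure_value, measured_luma), ...] from sweeping
    camera 2. Returns the camera-2 exposure whose measured region
    brightness is closest to target_luma (the luma seen by camera 1)."""
    return min(cam2_samples, key=lambda s: abs(s[1] - target_luma))[0]
```

Repeating this for each camera-1 exposure value d_i produces the pairs (d_i, e_i) that make up the exposure mapping relation.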
In the embodiment of the description, the mapping relationship is obtained through experimental testing; compared with obtaining it from empirical values or from calculation and estimation, the resulting data are more accurate and better reflect the actual characteristics of each camera, which improves the accuracy of the second camera's shooting parameters obtained by table lookup and thus the quality of subsequently shot pictures. In addition, after the mapping relationship is obtained, it can be stored in the electronic device, so that the optimal target shooting parameter value of the second camera can be quickly determined by querying the mapping relationship with the optimal shooting parameter value of the first camera.
Through the above steps, when previewing an image, the optimal shooting parameter value of the first camera and the optimal target shooting parameter value of the second camera corresponding to each attention area can be obtained when each image area in the preview image is taken as the attention area.
It should be understood that the optimal shooting parameter value of the first camera and the optimal target shooting parameter value of the second camera corresponding to each region of interest may also constitute a correspondence relationship. In the correspondence, there are N groups of sub-correspondences, where each group of sub-correspondences is an optimal photographing parameter value of the first camera and an optimal photographing parameter value of the second camera corresponding to one of the N image regions in the preview image. When the user selects the attention area of the preview image or switches the attention area, the shooting parameter value of the first camera and the shooting parameter value of the second camera corresponding to the attention area can be quickly determined by searching the corresponding relation, and the shooting parameter value of the camera does not need to be recalculated each time the attention area is determined.
Further, in step S13, the current attention area in the preview image may be an image area determined by the second camera through auto-focusing on the target scene, or an image area selected by the user by tapping the display screen of the electronic device. After the current attention area is determined, the image area of the N image areas corresponding to the current attention area is further determined. The first shooting parameter value of the first camera and the second shooting parameter value of the second camera corresponding to that image area are then determined according to the correspondence relation. Alternatively, the first shooting parameter value corresponding to the image area is determined among the N groups of shooting parameter values of the first camera, and then the second shooting parameter value of the second camera corresponding to the first shooting parameter value is queried according to the preset mapping relationship between the shooting parameters of the first camera and those of the second camera. The first camera is set to the first shooting parameter value, the second camera is set to the second shooting parameter value, and the target scene is shot; the definition of the current attention area in the shot image obtained in this way can satisfy the preset condition.
In addition, when the preview image changes, for example, when the user moves the terminal so that the content of the acquired preview image changes, when the brightness of the scene corresponding to the preview image changes suddenly, or when a moving object appears in the preview image, the aforementioned steps S11 to S13 need to be executed again to determine the shooting parameter value of each camera corresponding to each image area in the changed preview image.
In the scheme of the embodiment of the present description, while the second camera previews the target scene, the first camera calculates in the background the optimal shooting parameter values corresponding to each image area in the current preview image. Therefore, selection or switching of the attention area in the current preview image can be responded to quickly; no shooting parameter calculation is needed after the attention area is re-determined, and the adjustment time of each camera is saved while the shooting effect is ensured. Furthermore, because the scheme adjusts the shooting parameter values quickly, the jumps in definition and exposure of the preview interface caused by long parameter adjustments in the prior art do not occur, which improves the user experience.
Based on the inventive concept similar to the shooting method in the foregoing embodiment, the second aspect of the present invention further provides a shooting apparatus applied to an electronic device, where the electronic device includes a first camera and a second camera, and as shown in fig. 3, the apparatus includes:
a first obtaining module 31, configured to, when a target scene is previewed, obtain the optimal shooting parameter values of the first camera when the first camera takes each of N image areas included in a preview image as a focus area, and obtain N groups of optimal shooting parameter values in total, where N is an integer greater than 1;
the second obtaining module 32 is configured to determine, according to a preset mapping relationship between shooting parameter values of the first camera and shooting parameter values of a second camera, N sets of optimal target shooting parameter values of the second camera corresponding to the N sets of optimal shooting parameter values;
the processing module 33 is configured to determine, according to the current attention area of the preview image, a group of optimal shooting parameter values corresponding to the current attention area as first shooting parameter values among the N groups of optimal shooting parameter values, and determine a group of optimal target shooting parameter values corresponding to the current attention area as second shooting parameter values among the N groups of optimal target shooting parameter values, so that the first camera uses the first shooting parameter values, and the second camera uses the second shooting parameter values to shoot the target scene.
As an optional embodiment, the first obtaining module 31 is specifically configured to:
based on the parameter value range of the shooting parameter of the first camera, performing parameter adjustment on the parameter value of the shooting parameter according to a preset mode, and acquiring M images of the target scene acquired by the first camera in the parameter adjustment process, wherein each image of the M images comprises the N image areas, and M is an integer greater than 1;
and determining, according to the sharpness of the N image areas contained in each image, the optimal shooting parameter value for shooting with each of the N image areas as the attention area.
As an optional embodiment, the first obtaining module 31 is specifically configured to:
for each image region of the N image regions, comparing the sharpness of the image region across the M images;
and taking the shooting parameter value corresponding to the image in which the image region has the highest sharpness as the optimal shooting parameter value for shooting with the image region as the attention area.
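The sweep-and-compare procedure can be sketched as below. The gradient-energy sharpness metric is an assumption for illustration only; the patent requires only that the sharpest image per region be identified, without fixing a metric.

```python
# Sketch of the sharpness sweep: for each of the N regions, compare its
# sharpness across the M images captured while the shooting parameter
# was stepped through its range, and keep the parameter value of the
# sharpest image.

def sharpness(region):
    # Horizontal gradient energy of a region given as a 2-D list of
    # gray levels (an assumed, simple sharpness measure).
    total = 0
    for row in region:
        for a, b in zip(row, row[1:]):
            total += (b - a) ** 2
    return total

def best_params_per_region(images, params):
    # images[m][n] is region n cropped from image m; params[m] is the
    # shooting parameter value used for image m. Returns N optimal values.
    n_regions = len(images[0])
    best = []
    for n in range(n_regions):
        scores = [sharpness(img[n]) for img in images]
        best.append(params[scores.index(max(scores))])
    return best

# M = 3 images at focus values 10/20/30, N = 2 regions (2x2 crops):
# region 0 is sharpest in the second image, region 1 in the third.
low, high, flat = [[0, 1], [1, 0]], [[0, 9], [9, 0]], [[0, 0], [0, 0]]
imgs = [[low, flat], [high, low], [low, high]]
best = best_params_per_region(imgs, [10, 20, 30])
```

Note that a single parameter sweep yields the optimal value for all N regions at once, which is what lets the method pre-compute every region's parameters during preview.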
As an optional embodiment, the shooting parameters of the first camera are focusing parameters and/or exposure parameters, and the target shooting parameters of the second camera are correspondingly focusing parameters and/or exposure parameters.
As an optional embodiment, when the shooting parameters of the first camera and the target shooting parameters of the second camera are both focus parameters, the preset mapping relationship between the shooting parameter values of the first camera and the shooting parameter values of the second camera is obtained by the following method:
gradually adjusting the position of a target object based on the adjustment range of the shooting position of the target object, determining the focusing parameter value of the target object at each position shot by the first camera, and determining the focusing parameter value of the target object at each position shot by the second camera;
and establishing the mapping relation according to the focusing parameter value of the first camera and the focusing parameter value of the second camera at each position.
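A sketch of this calibration procedure follows. The positions and focus values are invented calibration data, and the nearest-key fallback is an added assumption for values between calibrated points; the patent itself only requires pairing the two cameras' focus values measured at the same object positions.

```python
# Hypothetical focus calibration: step a target object through a range
# of positions, record the focus value each camera needs at every
# position, and keep the paired values as the mapping relation.

def map_focus(mapping, value):
    # Look up the second camera's focus value; fall back to the nearest
    # calibrated first-camera value when there is no exact match.
    if value in mapping:
        return mapping[value]
    nearest = min(mapping, key=lambda k: abs(k - value))
    return mapping[nearest]

positions = [30, 60, 120, 240]    # object distances in cm (made up)
focus1 = [480, 350, 260, 210]     # first-camera focus value per position
focus2 = [455, 332, 248, 202]     # second-camera focus value per position

# The mapping relation: first-camera focus value -> second-camera value.
mapping = dict(zip(focus1, focus2))
```

In practice a denser calibration grid or interpolation between neighboring entries would give a smoother mapping; the dictionary form is the simplest realization of the stored relation.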
As an optional embodiment, when the shooting parameters of the first camera and the target shooting parameters of the second camera are both exposure parameters, the preset mapping relationship between the shooting parameter values of the first camera and the shooting parameter values of the second camera is obtained by:
gradually adjusting an exposure parameter value of the first camera and an exposure parameter value of the second camera so as to enable the image brightness of a plurality of images collected by the first camera to be the same as the image brightness of a plurality of images collected by the second camera;
and establishing the mapping relation according to the exposure parameter value of the image acquired by the first camera and the exposure parameter value of the image acquired by the second camera corresponding to each same image brightness.
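The exposure calibration can be sketched the same way. The brightness levels and exposure times below are invented for illustration; the procedure only requires recording, for each matching image brightness, the exposure values that produced it on each camera.

```python
# Hypothetical exposure calibration: adjust both cameras until their
# captured images reach the same brightness, then record the exposure
# values that produced each matching brightness level.

def calibrate_exposure(samples):
    # samples: (brightness, exposure_cam1, exposure_cam2) triples taken
    # when both cameras produced the same image brightness. Returns the
    # first-to-second exposure mapping relation.
    return {e1: e2 for _, e1, e2 in samples}

samples = [
    (64,  1 / 250, 1 / 200),   # dark scene
    (128, 1 / 125, 1 / 100),
    (192, 1 / 60,  1 / 50),    # bright scene
]
exposure_map = calibrate_exposure(samples)
```

Because the two cameras generally differ in sensor sensitivity and aperture, equal brightness does not imply equal exposure values, which is precisely why the per-brightness pairing is stored rather than a single scale factor.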
With regard to the above apparatus, the specific functions of the respective modules have been described in detail in the embodiment of the shooting method provided by the embodiments of the present invention, and are not repeated here.
Based on the same inventive concept as the shooting method in the foregoing embodiment, a third embodiment of the present invention further provides a terminal system. Referring to fig. 1, the apparatus of this embodiment includes: a processor 105, a memory 103, and a computer program stored in the memory and executable on the processor, for example, a program corresponding to the shooting method in the first embodiment.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program in the computer apparatus.
For the description of the memory, the processor, and other structures of the terminal system, refer to the description above; it is not repeated here.
Further, the apparatus comprises a processor 105 having the following functions:
when a target scene is previewed, acquiring the optimal shooting parameter values of the first camera when the first camera shoots with each of N image areas contained in a preview image as the attention area, so as to acquire N groups of optimal shooting parameter values in total, wherein N is an integer greater than 1;
determining N groups of optimal target shooting parameter values of a second camera corresponding to the N groups of optimal shooting parameter values according to a preset mapping relation between the shooting parameter values of a first camera and the shooting parameter values of the second camera;
according to the current attention area of the preview image, determining a group of optimal shooting parameter values corresponding to the current attention area as first shooting parameter values in the N groups of optimal shooting parameter values, and determining a group of optimal target shooting parameter values corresponding to the current attention area as second shooting parameter values in the N groups of optimal target shooting parameter values, so that the first camera adopts the first shooting parameter values, and the second camera adopts the second shooting parameter values to shoot the target scene.
Further, the apparatus comprises a processor 105 having the following functions:
based on the parameter value range of the shooting parameter of the first camera, performing parameter adjustment on the parameter value of the shooting parameter according to a preset mode, and acquiring M images of the target scene acquired by the first camera in the parameter adjustment process, wherein each image of the M images comprises the N image areas, and M is an integer greater than 1;
and determining, according to the sharpness of the N image areas contained in each image, the optimal shooting parameter value for shooting with each of the N image areas as the attention area.
Further, the apparatus comprises a processor 105 having the following functions:
for each image region of the N image regions, comparing the sharpness of the image region across the M images;
and taking the shooting parameter value corresponding to the image in which the image region has the highest sharpness as the optimal shooting parameter value for shooting with the image region as the attention area.
Further, the apparatus comprises a processor 105 having the following functions:
when the shooting parameters of the first camera and the target shooting parameters of the second camera are both focusing parameters, gradually adjusting the position of a target object based on the adjustment range of the shooting position of the target object, determining the focusing parameter value of the target object at each position shot by the first camera, and determining the focusing parameter value of the target object at each position shot by the second camera;
and establishing the mapping relation according to the focusing parameter value of the first camera and the focusing parameter value of the second camera at each position.
Further, the apparatus comprises a processor 105 having the following functions:
when the shooting parameters of the first camera and the target shooting parameters of the second camera are exposure parameters, gradually adjusting the exposure parameter values of the first camera and the second camera so as to enable the image brightness of a plurality of images acquired by the first camera to be the same as the image brightness of a plurality of images acquired by the second camera;
and establishing the mapping relation according to the exposure parameter value of the image acquired by the first camera and the exposure parameter value of the image acquired by the second camera corresponding to each same image brightness.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A shooting method is applied to electronic equipment, and is characterized in that the electronic equipment comprises a first camera and a second camera, and the method comprises the following steps:
when a target scene is previewed, acquiring the optimal shooting parameter values of the first camera when the first camera shoots with each of N image areas contained in a preview image as the attention area, so as to acquire N groups of optimal shooting parameter values in total, wherein N is an integer greater than 1; the optimal shooting parameter value is the shooting parameter value corresponding to the image with the highest sharpness in the image area within the parameter value range of the shooting parameters of the first camera; and the shooting parameter values comprise focusing parameter values and exposure parameter values;
determining N groups of optimal target shooting parameter values of a second camera corresponding to the N groups of optimal shooting parameter values according to a preset mapping relation between the shooting parameter values of a first camera and the shooting parameter values of the second camera;
according to the current attention area of the preview image, determining a group of optimal shooting parameter values corresponding to the current attention area as first shooting parameter values in the N groups of optimal shooting parameter values, and determining a group of optimal target shooting parameter values corresponding to the current attention area as second shooting parameter values in the N groups of optimal target shooting parameter values, so that the first camera adopts the first shooting parameter values, and the second camera adopts the second shooting parameter values to shoot the target scene.
2. The method according to claim 1, wherein, when the target scene is previewed, acquiring the optimal shooting parameter values when the first camera shoots with each of N image areas included in the preview image as the attention area comprises:
based on the parameter value range of the shooting parameter of the first camera, performing parameter adjustment on the parameter value of the shooting parameter according to a preset mode, and acquiring M images of the target scene acquired by the first camera in the parameter adjustment process, wherein each image of the M images comprises the N image areas, and M is an integer greater than 1;
and determining, according to the sharpness of the N image areas contained in each image, the optimal shooting parameter value for shooting with each of the N image areas as the attention area.
3. The method according to claim 2, wherein determining, according to the sharpness of the N image areas contained in each image, the optimal shooting parameter value for shooting with each of the N image areas as the attention area comprises:
for each image region of the N image regions, comparing the sharpness of the image region across the M images;
and taking the shooting parameter value corresponding to the image in which the image region has the highest sharpness as the optimal shooting parameter value for shooting with the image region as the attention area.
4. The method according to any one of claims 1 to 3, wherein the shooting parameters of the first camera are focusing parameters and/or exposure parameters, and the target shooting parameters of the second camera are correspondingly focusing parameters and/or exposure parameters.
5. The method according to claim 4, wherein when the shooting parameters of the first camera and the target shooting parameters of the second camera are the focusing parameters, the preset mapping relationship between the shooting parameter values of the first camera and the shooting parameter values of the second camera is obtained by:
gradually adjusting the position of a target object based on the adjustment range of the shooting position of the target object, determining the focusing parameter value of the target object at each position shot by the first camera, and determining the focusing parameter value of the target object at each position shot by the second camera;
and establishing the mapping relation according to the focusing parameter value of the first camera and the focusing parameter value of the second camera at each position.
6. The method according to claim 4, wherein when the shooting parameters of the first camera and the target shooting parameters of the second camera are the exposure parameters, the preset mapping relationship between the shooting parameter values of the first camera and the shooting parameter values of the second camera is obtained by:
gradually adjusting an exposure parameter value of the first camera and an exposure parameter value of the second camera so as to enable the image brightness of a plurality of images collected by the first camera to be the same as the image brightness of a plurality of images collected by the second camera;
and establishing the mapping relation according to the exposure parameter value of the image acquired by the first camera and the exposure parameter value of the image acquired by the second camera corresponding to each same image brightness.
7. A shooting apparatus applied to an electronic device, wherein the electronic device comprises a first camera and a second camera, and the apparatus comprises:
the first acquisition module is used for acquiring the optimal shooting parameter values of the first camera when the first camera shoots with each of N image areas contained in a preview image as the attention area, so as to acquire N groups of optimal shooting parameter values in total, wherein N is an integer greater than 1; the optimal shooting parameter value is the shooting parameter value corresponding to the image with the highest sharpness in the image area within the parameter value range of the shooting parameters of the first camera; and the shooting parameter values comprise focusing parameter values and exposure parameter values;
the second acquisition module is used for determining N groups of optimal target shooting parameter values of the second camera corresponding to the N groups of optimal shooting parameter values according to a preset mapping relation between the shooting parameter values of the first camera and the shooting parameter values of the second camera;
and the processing module is used for determining a group of optimal shooting parameter values corresponding to the current attention area as first shooting parameter values in the N groups of optimal shooting parameter values according to the current attention area of the preview image, and determining a group of optimal target shooting parameter values corresponding to the current attention area as second shooting parameter values in the N groups of optimal target shooting parameter values, so that the first camera adopts the first shooting parameter values, and the second camera adopts the second shooting parameter values to shoot the target scene.
8. The apparatus of claim 7, wherein the first obtaining module is configured to:
based on the parameter value range of the shooting parameter of the first camera, performing parameter adjustment on the parameter value of the shooting parameter according to a preset mode, and acquiring M images of the target scene acquired by the first camera in the parameter adjustment process, wherein each image of the M images comprises the N image areas, and M is an integer greater than 1;
and determining, according to the sharpness of the N image areas contained in each image, the optimal shooting parameter value for shooting with each of the N image areas as the attention area.
9. An electronic device, comprising a processor and a memory:
the memory is used for storing a program for executing the method of any one of claims 1 to 6;
the processor is configured to execute the program stored in the memory.
10. A computer storage medium, characterized in that a computer program is stored thereon which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN201811641102.6A 2018-12-29 2018-12-29 Shooting method and device, electronic equipment and storage medium Active CN109495689B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811641102.6A CN109495689B (en) 2018-12-29 2018-12-29 Shooting method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811641102.6A CN109495689B (en) 2018-12-29 2018-12-29 Shooting method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109495689A CN109495689A (en) 2019-03-19
CN109495689B true CN109495689B (en) 2021-04-13

Family

ID=65713544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811641102.6A Active CN109495689B (en) 2018-12-29 2018-12-29 Shooting method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109495689B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110225248B (en) * 2019-05-29 2021-11-16 Oppo广东移动通信有限公司 Image acquisition method and device, electronic equipment and computer readable storage medium
CN112235563B (en) * 2019-07-15 2023-06-30 北京字节跳动网络技术有限公司 Focusing test method and device, computer equipment and storage medium
CN112748438A (en) * 2019-10-31 2021-05-04 傲基科技股份有限公司 Method suitable for accurate positioning of laser ranging device and laser ranging system and method
CN112770042B (en) * 2019-11-05 2022-11-15 RealMe重庆移动通信有限公司 Image processing method and device, computer readable medium, wireless communication terminal
CN110830717B (en) * 2019-11-12 2021-06-25 维沃移动通信有限公司 Parameter value acquisition method and electronic equipment
CN111654635A (en) * 2020-06-30 2020-09-11 维沃移动通信有限公司 Shooting parameter adjusting method and device and electronic equipment
CN113630558B (en) * 2021-07-13 2022-11-29 荣耀终端有限公司 Camera exposure method and electronic equipment
CN113592800A (en) * 2021-07-22 2021-11-02 梅卡曼德(北京)机器人科技有限公司 Image scanning method and device based on dynamic scanning parameters
CN114071024A (en) * 2021-11-26 2022-02-18 北京百度网讯科技有限公司 Image shooting method, neural network training method, device, equipment and medium

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101950063A (en) * 2009-07-10 2011-01-19 佛山普立华科技有限公司 Automatic focusing system and automatic focusing method
CN103905732A (en) * 2014-04-02 2014-07-02 深圳市中兴移动通信有限公司 Shooting method and shooting device
CN104917950A (en) * 2014-03-10 2015-09-16 联想(北京)有限公司 Information processing method and electronic equipment
CN105791674A (en) * 2016-02-05 2016-07-20 联想(北京)有限公司 Electronic device and focusing method
CN106572305A (en) * 2016-11-03 2017-04-19 乐视控股(北京)有限公司 Image shooting method, image processing method, apparatuses and electronic device
CN107580184A (en) * 2017-10-31 2018-01-12 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN108848294A (en) * 2018-08-15 2018-11-20 努比亚技术有限公司 A kind of shooting parameter adjustment method, terminal and computer readable storage medium

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
KR100557219B1 (en) * 2004-07-06 2006-03-07 삼성전자주식회사 Method and apparatus for compensating for automatic exposure
US9538065B2 (en) * 2014-04-03 2017-01-03 Qualcomm Incorporated System and method for multi-focus imaging
CN105100579B (en) * 2014-05-09 2018-12-07 华为技术有限公司 A kind of acquiring and processing method and relevant apparatus of image data
CN105847703B (en) * 2016-03-28 2019-04-26 联想(北京)有限公司 A kind of image processing method and electronic equipment
CN106060412A (en) * 2016-08-02 2016-10-26 乐视控股(北京)有限公司 Photographic processing method and device
CN106534696A (en) * 2016-11-29 2017-03-22 努比亚技术有限公司 Focusing apparatus and method
CN108322670B (en) * 2018-04-27 2019-05-28 Oppo广东移动通信有限公司 A kind of control method of multi-camera system, mobile terminal and storage medium
CN108377341A (en) * 2018-05-14 2018-08-07 Oppo广东移动通信有限公司 Photographic method, device, terminal and storage medium

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
CN101950063A (en) * 2009-07-10 2011-01-19 佛山普立华科技有限公司 Automatic focusing system and automatic focusing method
CN104917950A (en) * 2014-03-10 2015-09-16 联想(北京)有限公司 Information processing method and electronic equipment
CN103905732A (en) * 2014-04-02 2014-07-02 深圳市中兴移动通信有限公司 Shooting method and shooting device
CN105791674A (en) * 2016-02-05 2016-07-20 联想(北京)有限公司 Electronic device and focusing method
CN106572305A (en) * 2016-11-03 2017-04-19 乐视控股(北京)有限公司 Image shooting method, image processing method, apparatuses and electronic device
CN107580184A (en) * 2017-10-31 2018-01-12 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN108848294A (en) * 2018-08-15 2018-11-20 努比亚技术有限公司 A kind of shooting parameter adjustment method, terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN109495689A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN109495689B (en) Shooting method and device, electronic equipment and storage medium
KR102310430B1 (en) Filming method, apparatus and device
JP6803982B2 (en) Optical imaging method and equipment
CN108234880B (en) Image enhancement method and device
CN106687991B (en) System and method for setting focus of digital images based on social relationships
WO2017016030A1 (en) Image processing method and terminal
CN108234879B (en) Method and device for acquiring sliding zoom video
CN105874776B (en) Image processing apparatus and method
CN108419009B (en) Image definition enhancing method and device
US11470294B2 (en) Method, device, and storage medium for converting image from raw format to RGB format
CN110324532B (en) Image blurring method and device, storage medium and electronic equipment
WO2021143269A1 (en) Photographic method in long focal length scenario, and mobile terminal
CN109151318B (en) Image processing method and device and computer storage medium
JP2016531362A (en) Skin color adjustment method, skin color adjustment device, program, and recording medium
CN111953904B (en) Shooting method, shooting device, electronic equipment and storage medium
WO2021032117A1 (en) Photographing method and electronic device
CN111935370A (en) Camera module, shooting method and device
CN110766729B (en) Image processing method, device, storage medium and electronic equipment
KR102022892B1 (en) Apparatus and method for processing image of mobile terminal comprising camera
CN108389165B (en) Image denoising method, device, terminal system and memory
CN111953903A (en) Shooting method, shooting device, electronic equipment and storage medium
CN107483817B (en) Image processing method and device
CN109547703B (en) Shooting method and device of camera equipment, electronic equipment and medium
KR102082365B1 (en) Method for image processing and an electronic device thereof
WO2017076041A1 (en) Method and device for image-stabilized photograph capturing and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant