CN110213493B - Device imaging method and device, storage medium and electronic device - Google Patents

Device imaging method and device, storage medium and electronic device

Info

Publication number
CN110213493B
CN110213493B (application CN201910579742.7A)
Authority
CN
China
Prior art keywords
shooting
camera
image
gesture information
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910579742.7A
Other languages
Chinese (zh)
Other versions
CN110213493A (en
Inventor
李亮
占文喜
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910579742.7A priority Critical patent/CN110213493B/en
Publication of CN110213493A publication Critical patent/CN110213493A/en
Application granted granted Critical
Publication of CN110213493B publication Critical patent/CN110213493B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/113Recognition of static hand signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Abstract

The application discloses a device imaging method and apparatus, a storage medium, and an electronic device. The method comprises the following steps: acquiring gesture information when the electronic device is in the preview interface of a shooting application; if the gesture information matches preset gesture information, determining the user corresponding to the gesture information as the target shooting object; shooting the target shooting object through a first camera, and setting the first image obtained by shooting as the base image; shooting the target shooting object through a plurality of second cameras to obtain a plurality of second images; and performing image synthesis processing on the plurality of second images and the base image to obtain the imaging image. The method and the apparatus can improve the quality of the imaging image shot by the electronic device as a whole.

Description

Device imaging method and device, storage medium and electronic device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a device imaging method and apparatus, a storage medium, and an electronic device.
Background
At present, users generally capture images with camera-equipped electronic devices, which can record surrounding things, scenes, and the like anytime and anywhere. However, owing to hardware limitations of the camera, the in-focus area of an image shot by the camera is usually sharp while the other areas are relatively blurred, so the quality of the imaging image as a whole is poor.
Disclosure of Invention
The embodiments of the present application provide a device imaging method and apparatus, a storage medium, and an electronic device, which can improve the quality of the imaging image shot by the electronic device as a whole.
An embodiment of the present application provides a device imaging method applied to an electronic device, where the electronic device includes a first camera of a first type and a plurality of second cameras of a second type, and the shooting area of each second camera overlaps the shooting area of the first camera. The method comprises the following steps:
acquiring gesture information when the electronic device is in the preview interface of a shooting application;
if the gesture information matches preset gesture information, determining the user corresponding to the gesture information as the target shooting object;
shooting the target shooting object through the first camera, and setting the first image obtained by shooting as a base image;
shooting the target shooting object through the plurality of second cameras to obtain a plurality of second images;
and performing image synthesis processing on the plurality of second images and the base image to obtain an imaging image.
An embodiment of the present application provides a device imaging apparatus applied to an electronic device, where the electronic device includes a first camera of a first type and a plurality of second cameras of a second type, and the shooting area of each second camera overlaps the shooting area of the first camera. The apparatus includes:
a first acquisition module, configured to acquire gesture information when the electronic device is in the preview interface of a shooting application;
a determining module, configured to determine the user corresponding to the gesture information as the target shooting object if the gesture information matches preset gesture information;
a second acquisition module, configured to shoot the target shooting object through the first camera and set the first image obtained by shooting as a base image;
a third acquisition module, configured to shoot the target shooting object through the plurality of second cameras to obtain a plurality of second images;
and a synthesis module, configured to perform image synthesis processing on the plurality of second images and the base image to obtain an imaging image.
An embodiment of the present application provides a storage medium storing a computer program which, when executed on a computer, causes the computer to perform the flow of the device imaging method provided by the embodiments of the present application.
An embodiment of the present application further provides an electronic device, which includes a memory, a processor, a first camera of a first type, and a plurality of second cameras of a second type, where the shooting area of each second camera overlaps the shooting area of the first camera, and the processor is configured to perform the flow of the device imaging method provided by the embodiments of the present application by calling the computer program stored in the memory.
In the embodiments of the present application, the user corresponding to gesture information that matches preset gesture information is determined as the target shooting object; the target shooting object is shot through the first camera, and the first image obtained by shooting is set as the base image; the target shooting object is shot through the plurality of second cameras to obtain a plurality of second images; and image synthesis processing is performed on the plurality of second images and the base image to obtain the imaging image. Therefore, in the finally obtained imaging image, the definition of the areas outside the area where the target shooting object is located is also improved, and the quality of the imaging image as a whole is improved.
Drawings
The technical solutions and advantages of the present application will become apparent from the following detailed description of specific embodiments of the present application when taken in conjunction with the accompanying drawings.
Fig. 1 is a first schematic flowchart of a device imaging method provided in an embodiment of the present application.
Fig. 2 is a second schematic flowchart of a device imaging method provided in an embodiment of the present application.
Fig. 3 is a schematic diagram of the target photographic subject located in the central area of the shooting area of the first camera in an embodiment of the present application.
Fig. 4 is a schematic diagram of a first arrangement of the first camera and the second cameras in an embodiment of the present application.
Fig. 5 is a schematic diagram of the shooting area of a second camera overlapping an edge portion of the shooting area of the first camera in an embodiment of the present application.
Fig. 6 is a schematic diagram of the overlap area m in which the shooting areas of all the second cameras overlap the shooting area of the first camera simultaneously in an embodiment of the present application.
Fig. 7 is a schematic diagram comparing the image content of a second image and the base image in an embodiment of the present application.
Fig. 8 is a schematic view of the overlap area m1 where all the second images and the base image overlap simultaneously, the overlap areas m2, m3, m4, m5 where two adjacent second images and the base image overlap simultaneously, and the overlap areas m6, m7, m8, m9 where two adjacent second images overlap, in an embodiment of the present application.
Fig. 9 is a schematic view of the overlap area where all the second images and the base image overlap simultaneously in an embodiment of the present application.
Fig. 10 is a schematic diagram of a second arrangement of the first camera and the second cameras in an embodiment of the present application.
Fig. 11 is a schematic diagram of an image sensor shared by the first camera and a second camera in an embodiment of the present application.
Fig. 12 is a third schematic flowchart of a device imaging method provided in an embodiment of the present application.
Fig. 13 is a schematic view of an area m11 of the shooting area of the first camera in which the target photographic subject is located, in an embodiment of the present application.
Fig. 14 is a first schematic structural diagram of a device imaging apparatus provided in an embodiment of the present application.
Fig. 15 is a second schematic structural diagram of a device imaging apparatus provided in an embodiment of the present application.
Fig. 16 is a first schematic structural diagram of an electronic device provided in an embodiment of the present application.
Fig. 17 is a second schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
It is understood that the execution subject of the embodiment of the present application may be an electronic device such as a smart phone or a tablet computer.
Referring to fig. 1, fig. 1 is a first schematic flowchart of a device imaging method provided in an embodiment of the present application. The flow may include:
101. When the electronic device is in the preview interface of the shooting application, gesture information is acquired.
In the embodiment of the present application, the electronic device includes a first camera of a first type and a plurality of second cameras of a second type. The shooting area of the first camera is larger than that of each second camera; the shooting area of each second camera overlaps the shooting area of the first camera (an overlap area); the shooting areas of any two second cameras overlap each other; and that pairwise overlap itself overlaps the shooting area of the first camera. In other words, there is an overlap area in which the shooting areas of all the second cameras and the shooting area of the first camera overlap simultaneously.
In the embodiment of the present application, when a user operates the electronic device to start a shooting application (such as the system application "camera" of the electronic device), the electronic device enters the preview interface of the shooting application. When the electronic device is in this preview interface, it may acquire gesture information.
For example, the electronic device may employ image recognition techniques to detect whether gesture information exists in the preview interface. If the gesture information exists in the preview interface, the electronic device may acquire the gesture information.
For another example, the electronic device may also detect whether gesture information exists in the shooting scene by using an image recognition technology. When gesture information exists in a shooting scene, the electronic device can acquire the gesture information.
The gesture information may include, for example: a V-sign gesture; five fingers spread upward, downward, leftward, or rightward; one, two, three, or four raised fingers; and so on.
After a shooting application (such as the system application "camera" of the electronic device) is started by a user operation, the scene at which the camera of the electronic device is aimed is the shooting scene. For example, after the user taps the icon of the "camera" application on the electronic device to start it, if the user aims the camera of the electronic device at a scene including an XX object, the scene including the XX object is the shooting scene. From the above description, those skilled in the art will understand that the shooting scene is not one specific fixed scene, but the scene tracked in real time by the orientation of the camera.
102. And if the gesture information is matched with the preset gesture information, determining the user corresponding to the gesture information as the target shooting object.
For example, after the gesture information is acquired, the electronic device may detect whether the gesture information matches preset gesture information. And if the gesture information is matched with the preset gesture information, determining the user corresponding to the gesture information as the target shooting object. For example, if the user corresponding to the gesture information is the user U1, the user U1 may be determined as the target photographic object. It is understood that the user U1 is in the shooting scene.
Wherein the preset gesture information can be set by a user. For example, the user may set gesture information in which five fingers are opened upward as preset gesture information, or the user may set gesture information in which 1 finger is raised as preset gesture information, and so on. The method is not particularly limited, and is subject to practical requirements.
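The matching in step 102 can be sketched as follows. This is an illustrative assumption only: the gesture labels, the detection dictionaries, and the helper names are invented for the sketch, and the patent does not prescribe any data format or recognizer.

```python
# Hypothetical sketch of step 102: match recognized gesture information
# against user-configured preset gestures, and pick the matching user as
# the target shooting object. Labels and structures are illustrative.

PRESET_GESTURES = {"five_fingers_open_up", "one_finger_up"}  # set by the user

def match_gesture(gesture_label: str, presets: set) -> bool:
    """Return True when the recognized gesture matches a preset gesture."""
    return gesture_label in presets

def select_target_subject(detections: list, presets: set):
    """Each detection pairs a recognized gesture with the user who made it.
    The first user whose gesture matches a preset becomes the target subject."""
    for d in detections:
        if match_gesture(d["gesture"], presets):
            return d["user"]
    return None  # no matching gesture in the shooting scene
```

For example, with detections `[{"gesture": "v_sign", "user": "U2"}, {"gesture": "one_finger_up", "user": "U1"}]`, user U1 is selected, matching the U1 example in the text.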
103. The target shooting object is shot through the first camera, and a first image obtained through shooting is set as a base image.
For example, after determining that the user U1 is the target shooting object, the electronic device may shoot the user U1 once by using the first camera, and record an image shot by the first camera as a first image, and set the first image as a base image.
In some embodiments, the first image may include only the target shooting object, for example only the user U1. In other embodiments, the first image may also include other people, objects, or scenery besides the target shooting object (the user U1); this is not limited here.
104. And shooting the target shooting object through a plurality of second cameras to obtain a plurality of second images.
In the embodiment of the application, the electronic device further shoots a target shooting object through a plurality of second cameras arranged on the electronic device, a plurality of images are correspondingly obtained, and the images shot by the second cameras are recorded as second images, namely, the plurality of second images are obtained through shooting.
It should be noted that when the target shooting object is shot by the plurality of second cameras, the second cameras use the same imaging parameters (such as contrast and brightness) as the first camera, so that, apart from differing in image size, the first image and the second images obtained by the first and second cameras have a consistent image effect.
It should be noted that, in this embodiment of the present application, the execution order of 103 and 104 is not limited, and 104 may be executed after 103 is executed, 103 may be executed after 104 is executed, or 103 and 104 may be executed simultaneously.
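Since 103 and 104 may also run simultaneously, the concurrent case can be sketched with threads as below. The capture callables are hypothetical stand-ins for the real camera calls, which the patent does not specify.

```python
# Minimal sketch of running step 103 (first camera) and step 104 (all
# second cameras) at the same time. capture_first / capture_seconds are
# hypothetical placeholders for the actual camera-driver calls.
import threading

def capture_all(capture_first, capture_seconds):
    """Run the first-camera capture and every second-camera capture in
    parallel. Returns (base_image, [second_images...])."""
    results = {}

    def run(key, fn):
        results[key] = fn()

    threads = [threading.Thread(target=run, args=("base", capture_first))]
    threads += [threading.Thread(target=run, args=(i, fn))
                for i, fn in enumerate(capture_seconds)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait until every camera has delivered its image
    return results["base"], [results[i] for i in range(len(capture_seconds))]
```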
105. And performing image synthesis processing on the plurality of second images and the base image to obtain an imaging image.
In the embodiment of the present application, after the electronic device obtains the base image through the first camera and the plurality of second images through the plurality of second cameras, the shot second images are aligned with the base image.
Based on the aligned base image and second images, the average pixel value of each overlapping pixel point is calculated for the portion where the base image and the second images overlap. For example, suppose the electronic device obtains the base image through the first camera and four second images through four second cameras. If, within the overlap area, the pixel values of the pixel point at a certain position in the five images (the base image and the four second images) are 0.8, 0.9, 1.1, 1.2, and 1, then the average pixel value of the pixel point at that position is calculated to be 1.
Then a composite image is obtained from the average pixel values of the corresponding pixel points in the base image. For example, the pixel values of the pixel points of the base image may be adjusted to the calculated average pixel values to obtain the imaging image; alternatively, a new image, i.e., the imaging image, may be generated from the calculated average pixel values.
In the embodiment of the present application, the electronic device obtains the imaging image after performing image synthesis processing on the plurality of shot second images and the base image; at this point, the electronic device has completed one complete shooting operation.
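The averaging step of 105 can be sketched in NumPy as follows, assuming the second images have already been aligned and cropped to the same overlap region as the base image (the alignment itself is out of scope for this sketch):

```python
# Sketch of the pixel-averaging synthesis in step 105, mirroring the
# 0.8 / 0.9 / 1.1 / 1.2 / 1.0 -> 1.0 example in the text. All inputs are
# assumed to be aligned arrays of identical shape covering the overlap area.
import numpy as np

def synthesize(base: np.ndarray, seconds: list) -> np.ndarray:
    """Average the base image with the aligned second images pixel by pixel."""
    stack = np.stack([base] + list(seconds), axis=0).astype(np.float64)
    return stack.mean(axis=0)  # per-pixel average over all five images
```

With a one-pixel base image of 0.8 and second-image values 0.9, 1.1, 1.2, and 1.0, the result is exactly 1.0, as in the example above.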
In the embodiment of the present application, the target shooting object is located in the overlap area where the shooting areas of the plurality of second cameras and the shooting area of the first camera all overlap simultaneously. Assume the electronic device includes four second cameras: second camera A, second camera B, second camera C, and second camera D. The shooting area of second camera A overlaps the shooting area of the first camera; call this overlap area m12. Overlap area m12 overlaps the shooting area of second camera B; call this overlap area m13. Overlap area m13 overlaps the shooting area of second camera C; call this overlap area m14. Overlap area m14 overlaps the shooting area of second camera D; call this overlap area m15. The target shooting object is located in overlap area m15.
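The chained overlap areas m12 through m15 can be sketched by modelling each shooting area as an axis-aligned rectangle `(x0, y0, x1, y1)`. The concrete coordinates below are illustrative assumptions, not values from the patent.

```python
# Sketch of the overlap chain m12..m15: repeatedly intersect the first
# camera's area with each second camera's area. Coordinates are invented.
def intersect(a, b):
    """Intersection of two rectangles (x0, y0, x1, y1), or None if disjoint."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

first = (0, 0, 10, 10)   # shooting area of the first camera (largest)
cam_a = (1, 1, 9, 9)     # shooting areas of second cameras A..D
cam_b = (2, 2, 9, 9)
cam_c = (1, 1, 8, 8)
cam_d = (2, 2, 8, 8)

m12 = intersect(first, cam_a)  # first camera with second camera A
m13 = intersect(m12, cam_b)    # ...then with second camera B
m14 = intersect(m13, cam_c)    # ...then with second camera C
m15 = intersect(m14, cam_d)    # area where all five shooting areas overlap
```

Here m15 is the region in which the target shooting object must lie for every camera to capture it.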
In the embodiment of the present application, the user corresponding to gesture information that matches preset gesture information is determined as the target shooting object; the target shooting object is shot through the first camera, and the first image obtained by shooting is set as the base image; the target shooting object is shot through the plurality of second cameras to obtain a plurality of second images; and image synthesis processing is performed on the plurality of second images and the base image to obtain the imaging image. Because the shooting area of each second camera overlaps the shooting area of the first camera, the shooting areas of any two second cameras overlap each other, that pairwise overlap itself overlaps the shooting area of the first camera, and the target shooting object is located in the area where the shooting areas of the plurality of second cameras and the shooting area of the first camera all overlap, the definition of the area where the target shooting object is located in the finally obtained imaging image is improved, the definition of the areas outside it is also improved, and the quality of the imaging image as a whole is improved.
Referring to fig. 2, fig. 2 is a second schematic flowchart of a device imaging method provided in an embodiment of the present application. The flow may include:
201. When the electronic device is in the preview interface of the shooting application, the electronic device acquires gesture information.
In the embodiment of the present application, the electronic device includes a first camera of a first type and a plurality of second cameras of a second type. The shooting area of the first camera is larger than that of each second camera; the shooting area of each second camera overlaps the shooting area of the first camera (an overlap area); the shooting areas of any two second cameras overlap each other; and that pairwise overlap itself overlaps the shooting area of the first camera. In other words, there is an overlap area in which the shooting areas of all the second cameras and the shooting area of the first camera overlap simultaneously.
In the embodiment of the present application, when a user operates the electronic device to start a shooting application (such as the system application "camera" of the electronic device), the electronic device enters the preview interface of the shooting application. When the electronic device is in this preview interface, it may acquire gesture information.
For example, the electronic device may employ image recognition techniques to detect whether gesture information exists in the preview interface. If the gesture information exists in the preview interface, the electronic device may acquire the gesture information.
For another example, the electronic device may also detect whether gesture information exists in the shooting scene by using an image recognition technology. When gesture information exists in a shooting scene, the electronic device can acquire the gesture information.
The gesture information may include, for example: a V-sign gesture; five fingers spread upward, downward, leftward, or rightward; and one, two, three, or four raised fingers.
After a shooting application (such as the system application "camera" of the electronic device) is started by a user operation, the scene at which the camera of the electronic device is aimed is the shooting scene. For example, after the user taps the icon of the "camera" application on the electronic device to start it, if the user aims the camera of the electronic device at a scene including an XX object, the scene including the XX object is the shooting scene. From the above description, those skilled in the art will understand that the shooting scene is not one specific fixed scene, but the scene tracked in real time by the orientation of the camera.
202. And if the gesture information is matched with the preset gesture information, the electronic equipment determines the user corresponding to the gesture information as the target shooting object.
For example, after the gesture information is acquired, the electronic device may detect whether the gesture information matches preset gesture information. And if the gesture information is matched with the preset gesture information, determining the user corresponding to the gesture information as the target shooting object. For example, if the user corresponding to the gesture information is the user U1, the user U1 may be determined as the target photographic object. It is understood that the user U1 is in the shooting scene.
Wherein the preset gesture information can be set by a user. For example, the user may set gesture information in which five fingers are opened upward as preset gesture information, or the user may set gesture information in which 1 finger is raised as preset gesture information, and so on. The method is not particularly limited, and is subject to practical requirements.
203. The electronic device determines a relative position of the target photographic subject and the electronic device.
In the embodiment of the present application, in order to determine the relative position of the target shooting object and the electronic device, after determining the target shooting object the electronic device may generate a voice prompt asking the target shooting object to make a sound, for example to say a few words, so that the electronic device can receive the sound information of the target shooting object with a microphone array, i.e., at least three microphones. The electronic device may then record the reception time at which each microphone receives the sound information of the target shooting object, and from these reception times obtain the relative position of the target shooting object with respect to each microphone. From the relative positions of the target shooting object and the microphones, the electronic device can determine the relative position of the target shooting object and the device itself, for example by computing the position of the sound source with a plane-geometry algorithm. In the same plane, a point is uniquely determined by its distances to three fixed points; accordingly, in this embodiment the number of microphones is at least three.
The relative position may be a relative position in the same plane, comprising a horizontal direction and a horizontal distance. Because the microphones are fixed on the electronic device, their positions on the device are known, the distances between them are known, and the propagation speed of sound in air is known. From these known conditions and the reception times at which the microphones receive the sound of the target shooting object, the relative position of the sound source, i.e., of the target shooting object, with respect to the electronic device can be calculated. For example, since the relative positions of the microphones are known, the electronic device first calculates the relative position of the target shooting object with respect to each microphone from the reception times, then selects a reference point on the electronic device; from the relative positions of the microphones and the reference point, the relative position of the target shooting object and the reference point, and hence of the target shooting object and the electronic device, is obtained.
The relative position may also be a relative position in three-dimensional space, comprising a spatial direction and a spatial distance. Since in three-dimensional space a point is uniquely determined by its distances to four fixed points, any point in space can be located from four known points; in this embodiment the number of microphones may then be at least four.
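The planar case above can be sketched as a small trilateration: the distances would come from the reception times multiplied by the speed of sound, and the subject's position follows from its distances to three fixed microphones. The microphone coordinates and distances below are illustrative assumptions.

```python
# Sketch of locating the target shooting object in a plane (step 203) from
# its distances r1..r3 to three fixed microphones p1..p3. Subtracting the
# first circle equation from the other two linearizes the system.
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve (x - px)^2 + (y - py)^2 = r^2 for the three circles and
    return the unique (x, y) intersection."""
    ax, ay = 2 * (p2[0] - p1[0]), 2 * (p2[1] - p1[1])
    bx, by = 2 * (p3[0] - p1[0]), 2 * (p3[1] - p1[1])
    c1 = r1**2 - r2**2 + p2[0]**2 - p1[0]**2 + p2[1]**2 - p1[1]**2
    c2 = r1**2 - r3**2 + p3[0]**2 - p1[0]**2 + p3[1]**2 - p1[1]**2
    det = ax * by - ay * bx  # nonzero when the mics are not collinear
    return ((c1 * by - c2 * ay) / det, (ax * c2 - bx * c1) / det)
```

For example, with microphones at (0, 0), (4, 0), and (0, 4) and a subject at (1, 2), feeding in the three true distances recovers (1, 2), illustrating why three non-collinear microphones suffice in the plane.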
It should be noted that the relative position between the target photographic object and the electronic device may also be determined in a manner not listed in the embodiment of the present application, which is not specifically limited by the embodiment of the present application.
204. The electronic equipment adjusts the shooting angle of the first camera according to the relative position so that the target shooting object is located in the central area of the shooting area of the first camera.
After the relative position of the target shooting object and the electronic device is determined, the electronic device can adjust the shooting angle of the first camera according to the relative position. For example, the electronic device may calculate the angle and direction through which the first camera needs to rotate according to the relative position, and rotate the first camera accordingly, thereby adjusting the shooting angle of the first camera so that the target shooting object is located in the central area of the shooting area of the first camera, as shown in fig. 3.
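The angle-and-direction computation in step 204 can be sketched as a pan/tilt calculation from the subject's offset relative to the camera. The coordinate convention (+z straight ahead, +x to the right, +y up) is an assumption for the sketch; the patent does not fix one.

```python
# Sketch of step 204: compute the pan (around the vertical axis) and tilt
# (around the horizontal axis), in degrees, that point the first camera's
# optical axis at a subject offset (dx, dy, dz) from the camera, where +z
# is straight ahead. The convention is an illustrative assumption.
import math

def rotation_to_center(dx: float, dy: float, dz: float):
    """Return (pan_deg, tilt_deg) that centers the subject in the frame."""
    pan = math.degrees(math.atan2(dx, dz))
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return pan, tilt
```

For instance, a subject one meter to the right and one meter ahead requires a 45-degree pan and no tilt; the same computation, applied per camera, would also serve step 206 below.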
205. The electronic device shoots the target shooting object through the first camera, and sets the shot first image as a base image.
For example, when the electronic device determines that the user U1 is the target shooting object, the electronic device may shoot the user U1 once by the first camera, mark the image shot by the first camera as a first image, and set the first image as a base image. Wherein in this first image the user U1 is in the central area.
In some embodiments, the first image may include only the target photographic subject, i.e., only the user U1. In other embodiments, the first image may also include other people, objects, scenery, etc. besides the target shooting object, depending on the actual shooting scene; this is not limited here.
206. The electronic equipment adjusts the shooting angle of each second camera according to the relative position, so that the shooting area of each second camera is partially overlapped with the edge of the shooting area of the first camera.
After determining the relative position between the target shooting object and the electronic device, the electronic device may adjust the shooting angle of each second camera according to the relative position. For example, the electronic device may calculate the angle and direction through which each second camera needs to rotate according to the relative position, and rotate each second camera accordingly, thereby adjusting its shooting angle so that the shooting area of each second camera partially overlaps the edge of the shooting area of the first camera.
For example, the first camera is a standard camera, or a camera with a field of view of about 45 degrees, and each second camera is a telephoto camera, or a camera with a field of view of less than 40 degrees. The electronic device includes one first camera and four second cameras, namely second camera A, second camera B, second camera C and second camera D. Referring to fig. 4 and 5 in combination, after the shooting angles of the first camera and the second cameras are adjusted according to the relative position of the electronic device and the target shooting object, the axes of the second cameras tilt toward, and intersect, the axis of the first camera. As a result, the target shooting object is located in the central area of the shooting area of the first camera; shooting area a of second camera A corresponds to the upper left corner of the shooting area of the first camera, shooting area b of second camera B to the upper right corner, shooting area c of second camera C to the lower left corner, and shooting area d of second camera D to the lower right corner. In this way, the shooting area of each second camera overlaps the edge of the shooting area of the first camera, the overlapping portion of any two second cameras' shooting areas overlaps the shooting area of the first camera, and the shooting areas of all four second cameras jointly overlap the central area of the shooting area of the first camera. Since the target shooting object is located in the central area of the shooting area of the first camera, it can be determined that the target shooting object is located in the area where the shooting area of the first camera and the shooting areas of the second cameras all overlap.
For example, as shown in fig. 6, assuming that a region where the shooting region of the first camera overlaps with the shooting regions of the second cameras is an overlap region m, the target object is in the overlap region m.
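The overlap reasoning above can be sketched as simple rectangle intersection, treating each shooting area as an axis-aligned rectangle; the coordinates below are invented, and real shooting areas are projections of view frustums rather than rectangles:

```python
def intersection(rects):
    """Axis-aligned intersection of shooting areas given as
    (left, top, right, bottom) tuples; None if they do not all overlap."""
    left = max(r[0] for r in rects)
    top = max(r[1] for r in rects)
    right = min(r[2] for r in rects)
    bottom = min(r[3] for r in rects)
    if left < right and top < bottom:
        return (left, top, right, bottom)
    return None

def contains(rect, point):
    """True when the point lies inside the (closed) rectangle."""
    x, y = point
    return rect[0] <= x <= rect[2] and rect[1] <= y <= rect[3]
```

With the first camera's area and the four corner areas fed in together, `intersection` yields the common region corresponding to the overlap region m of fig. 6.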
207. The electronic equipment shoots the target shooting object through the second cameras to obtain a plurality of second images.
In the embodiment of the application, after the shooting angle of each second camera is adjusted according to the relative position, the electronic device can shoot the target shooting object through the plurality of second cameras to obtain a corresponding plurality of images. The images shot by the second cameras are recorded as second images; that is, a plurality of second images are obtained.
It should be noted that when the target shooting object is shot by the plurality of second cameras, the second cameras use the same image parameters (such as contrast and brightness) as the first camera, so that although the shooting area of each second camera is only a part of the shooting area of the first camera, the second images have the same shooting effect as the first image.
For example, the electronic device includes four second cameras, namely second camera A, second camera B, second camera C and second camera D, whose shooting areas correspond to the upper left, upper right, lower left and lower right corners of the shooting area of the first camera, respectively, and four second images are obtained by shooting through the four second cameras. As shown in fig. 7, the image content of the second image G1 shot by second camera A corresponds to the image content of the upper left corner of the base image, that of the second image G2 shot by second camera B to the upper right corner, that of the second image G3 shot by second camera C to the lower left corner, and that of the second image G4 shot by second camera D to the lower right corner. In this way, the image contents of the different second images cover different positions of the edge area of the base image, and the image content of each second image includes the target photographic subject.
It should be noted that, in the embodiment of the present application, 205 can be executed only after 204 is completed, and 207 only after 206 is completed. As for the order of 205 and 207, 205 may be executed before 207, 207 may be executed before 205, or the two may be executed simultaneously.
208. And the electronic equipment carries out image synthesis processing on the plurality of second images and the substrate image to obtain an imaging image.
In the embodiment of the application, after the electronic device obtains the base image through the shooting of the first camera and obtains the plurality of second images through the shooting of the plurality of second cameras, the plurality of shot second images are aligned with the base image.
Based on the aligned base image and second images, the average pixel value of each overlapping pixel point is calculated for the portions where the base image and the second images overlap. For example, besides obtaining the base image through the first camera, the electronic device obtains four second images through the four second cameras. Assume the four second images are second image G1, second image G2, second image G3 and second image G4.
Referring to fig. 8, the overlapping area m1 where every second image and the base image all overlap is located in the central area of the base image, i.e., the area where the target photographic object is located. That is, the area where the second image G1, the second image G2, the second image G3, the second image G4 and the base image all overlap, the overlapping area m1, is located in the central area of the base image. Thus, for the overlapping area m1 shown in fig. 8, if the pixel values of the pixel point at a certain position in the five images (i.e., the base image and the four second images) are 0.8, 0.9, 1.1, 1.2 and 1, respectively, then the average pixel value of the pixel point at that position can be calculated to be 1.
With continued reference to fig. 8, the overlapping area of the second image G1, the second image G2 and the base image is overlapping area m2, the overlapping area of the second image G1, the second image G3 and the base image is overlapping area m3, the overlapping area of the second image G2, the second image G4 and the base image is overlapping area m4, and the overlapping area of the second image G3, the second image G4 and the base image is overlapping area m5. For the overlapping area m2 shown in fig. 8, if the pixel values of the pixel point at a certain position in the second image G1, the second image G2 and the base image are 0.8, 0.9 and 1, respectively, then the average pixel value of the pixel point at that position can be calculated to be 0.9.
With continued reference to fig. 8, the overlapping area of the second image G1 and the base image is overlapping area m6, the overlapping area of the second image G2 and the base image is overlapping area m7, the overlapping area of the second image G3 and the base image is overlapping area m8, and the overlapping area of the second image G4 and the base image is overlapping area m9. For the overlapping area m6 shown in fig. 8, if the pixel values of the pixel point at a certain position in the second image G1 and the base image are 0.8 and 1, respectively, the average pixel value of the pixel point at that position can be calculated to be 0.9.
Then, the electronic device obtains the imaging image according to the average pixel values calculated for the corresponding pixel points in the base image. For example, the electronic device may adjust the pixel values of the pixel points of the base image to the calculated average pixel values to obtain the imaging image; for another example, it may generate a new image, the imaging image, from the calculated average pixel values. In the imaging image, the definition of the central area, i.e. the area where the target shooting object is located, is highest; the definition of the areas where two second images overlap each other and the base image is second; and the definition of the areas where only one second image overlaps the base image is lowest. Overall, however, the definition of the imaging image is higher than that of the base image.
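The averaging scheme just described can be sketched as follows, representing images as 2-D lists of pixel values and each second image as a patch aligned at a (row, column) offset into the base image; this simplification omits the real alignment step, and the names are invented:

```python
def synthesize(base, patches):
    """Average each base-image pixel with every aligned patch that
    covers it; `patches` maps (row_off, col_off) -> 2-D pixel list."""
    h, w = len(base), len(base[0])
    acc = [[base[r][c] for c in range(w)] for r in range(h)]   # running sums
    cnt = [[1] * w for _ in range(h)]                          # base counts once
    for (r0, c0), patch in patches.items():
        for r, row in enumerate(patch):
            for c, v in enumerate(row):
                acc[r0 + r][c0 + c] += v
                cnt[r0 + r][c0 + c] += 1
    return [[acc[r][c] / cnt[r][c] for c in range(w)] for r in range(h)]
```

Pixels covered by more images are averaged over more samples, which mirrors why the central area m1, covered by all five images, ends up with the best noise behaviour.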
In the embodiment of the application, the electronic device obtains the imaging image after performing image synthesis processing on the plurality of second images obtained by shooting and the substrate image, and thus, the electronic device completes one complete shooting operation.
For example, referring to fig. 9, fig. 9 shows the change in definition from the base image to the imaging image. The X axis represents the position from one edge area of the image through the central area to the opposite edge area, and the Y axis represents the definition at each position. It can be seen that in the base image the definition of the central area is highest, and as the position moves from the central area toward the edge areas the definition decreases sharply. In the imaging image the definition of the central area is likewise highest, but compared with the base image the definition of the edge areas is improved overall, and although the definition still decreases from the central area toward the edge areas, the change is smoother. The overall image quality of the imaging image is therefore improved.
In some embodiments, flow 202 may include:
if the gesture information is matched with the preset gesture information, the electronic equipment detects whether at least two users exist in the shooting scene;
if at least two users exist in the shooting scene, the electronic equipment performs face image acquisition operation on the at least two users to obtain at least two face images;
the electronic equipment determines a target face image corresponding to the gesture information from the at least two face images, and determines a user corresponding to the target face image as a target shooting object.
In the embodiment of the application, the electronic device may pre-establish a mapping relationship between the gesture information and the face image. For example, the electronic device may perform a face image acquisition operation on users U1, U2, U3, etc. to obtain a plurality of face images, which are denoted as face images K1, K2, K3. Next, the electronic device may obtain a plurality of gesture information, which are denoted as gesture information P1, P2, P3. Subsequently, the electronic device may correspond K1 to P1, K2 to P2, and K3 to P3, so as to establish a mapping relationship between the gesture information and the face image. Meanwhile, the electronic device may set the gesture information P1, P2, P3 as the preset gesture information.
After the gesture information is acquired, the electronic device may detect whether the gesture information matches preset gesture information. If the gesture information matches the preset gesture information, the electronic device can detect whether at least two users exist in the shooting scene. If at least two users exist in the shooting scene, the electronic device can perform a face image acquisition operation on the at least two users to obtain at least two face images. Then, the electronic device may obtain the face image corresponding to the gesture information, which is assumed to be K1. If the face image K1 also exists among the at least two face images, the electronic device may determine the user corresponding to the face image K1 as the target photographic object.
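A minimal sketch of this lookup, with the mapping from gesture information to face images modelled as a dictionary; the P/K identifiers follow the example above, while the function name and return convention are invented:

```python
def pick_target(gesture, gesture_to_face, faces_in_scene):
    """Return the face image bound to the matched gesture, provided
    that face is also among those collected from the current scene."""
    face = gesture_to_face.get(gesture)
    return face if face in faces_in_scene else None
```

Returning `None` when the bound face is absent models the case where the gesture matches but its registered user is not actually in the shot.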
In other embodiments, flow 202 may include:
and if the gesture information is matched with the preset gesture information, the electronic equipment determines the user making the gesture as the target shooting object.
Assuming that the user U1 makes a palm-up gesture in the shooting scene, the electronic device may determine the user U1 as a target shooting object upon detecting that the gesture information matches preset gesture information.
In some embodiments, if the gesture information matches the preset gesture information, the electronic device may detect whether a plurality of users exist in the shooting scene, standing side by side in order from left to right. If the electronic device detects that a plurality of users exist in the shooting scene and stand side by side from left to right, it may recognize the gesture corresponding to the gesture information by using an image recognition technology, and determine the user corresponding to the gesture as the target shooting object. For example, if the gesture is recognized as holding up 1 finger, the electronic device may determine the leftmost user in the shooting scene as the target shooting object; if the gesture is recognized as holding up 2 fingers, the second user from the left; if the gesture is recognized as holding up 3 fingers, the third user from the left, and so on.
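The finger-count rule can be sketched in a few lines, assuming the users detected in the scene are already ordered from left to right; the function name and the out-of-range behaviour are assumptions:

```python
def target_by_finger_count(users_left_to_right, fingers_up):
    """Select the n-th user from the left for a gesture holding up
    n fingers; None when the count has no matching user."""
    if 1 <= fingers_up <= len(users_left_to_right):
        return users_left_to_right[fingers_up - 1]
    return None
```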
In one embodiment, the electronic device includes two first cameras, and the electronic device adjusts a shooting angle of the first cameras according to the relative position so that a target shooting object is located in a central area of a shooting area of the first cameras, including:
the electronic equipment adjusts the shooting angles of the two first cameras according to the relative positions so that the target shooting object is located in the central area of the shooting areas of the two first cameras.
The electronic device shoots a target shooting object through a first camera, and sets a shot first image as a base image, and the electronic device comprises:
(1) the electronic equipment shoots a target shooting object through two first cameras to obtain at least two first images;
(2) the electronic device performs image synthesis processing on at least two first images, and sets the synthesized images as base images.
In an embodiment of the application, the electronic device includes two first cameras of the standard type. For example, referring to fig. 10, the electronic device includes two first cameras, namely first camera E and first camera F, and first camera E is surrounded by four second cameras.
After the shooting angles of the two first cameras are adjusted according to the relative positions so that the target shooting object is located in the central area of the shooting areas of the two first cameras, the electronic equipment can shoot the target shooting object through the two first cameras to obtain at least two first images with the same image content. And then, carrying out image synthesis processing on at least two first images, and setting the synthesized image as a base image.
When the electronic device performs image synthesis processing on the at least two first images, it aligns the at least two first images, calculates the average pixel value of each pixel point where they overlap, obtains a composite image of the at least two first images according to the calculated average pixel values, and sets the composite image as the base image.
Compared with the method that the first image shot by the first camera is directly set as the base image, the base image with higher definition can be obtained in the embodiment of the application, so that the finally obtained imaging image also has higher definition.
In one embodiment, the "electronic device captures a target capture object through a first camera, and sets a captured first image as a base image", including:
(1) the electronic equipment continuously shoots a target shooting object through a first camera to obtain a plurality of first images;
(2) the electronic device performs image synthesis processing on the plurality of first images, and sets the synthesized image as a base image.
In the embodiment of the application, after the shooting angle of the first camera is adjusted according to the relative position so that the target shooting object is located in the central area of the shooting area of the first camera, the electronic device can continuously shoot the target shooting object through the first camera to obtain a plurality of first images. The electronic equipment can shoot the target shooting object through the first camera within unit time according to the set shooting frame rate, so that continuous shooting of the target shooting object is achieved. For example, assuming that the shooting frame rate of the first camera is 15FPS, the electronic device will shoot 15 images of the target shooting object within 1 second of the unit time, and since the images all correspond to the same target shooting object and the interval of the shooting time between the images is small, the image contents of the images can be regarded as the same.
After the plurality of first images of the target shooting object are obtained by shooting, the electronic device selects the first image with the highest definition from them, aligns the other first images with it, calculates the average pixel value of each overlapping pixel point, obtains a composite image of the plurality of first images according to the calculated average pixel values, and sets the composite image as the base image.
Compared with the method that the first image shot by the first camera is directly set as the base image, the base image with higher definition can be obtained in the embodiment of the application, so that the finally obtained imaging image also has higher definition.
Optionally, after the electronic device continuously captures a target capture object through the first camera to obtain a plurality of first images, the method further includes:
the electronic equipment selects an image with the highest definition from the plurality of first images obtained by shooting as a base image, and the base image is used for carrying out image synthesis processing with a second image obtained by shooting by a second camera so as to obtain an imaging image.
Generally, the sharper the image, the higher its contrast. Therefore, the contrast of an image can be used to measure the sharpness of the image.
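A sketch of using contrast as the sharpness measure, here RMS contrast (the standard deviation of pixel intensities); the text does not specify which contrast definition is meant, so this particular choice, and the function names, are assumptions:

```python
def rms_contrast(gray):
    """RMS contrast of a grayscale image (2-D list of intensities):
    the standard deviation of all pixel values."""
    pixels = [v for row in gray for v in row]
    mean = sum(pixels) / len(pixels)
    return (sum((v - mean) ** 2 for v in pixels) / len(pixels)) ** 0.5

def sharpest(frames):
    """Pick the frame with the highest RMS contrast as the base image."""
    return max(frames, key=rms_contrast)
```

A blurred frame averages neighbouring intensities together, lowering the spread of pixel values, which is why higher contrast is a usable proxy for definition here.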
In an embodiment, the electronic device further includes an electrochromic component covering the first camera and/or the second camera, and before the electronic device acquires at least one piece of sound information while in the preview interface of the shooting application, the method further includes:
the electronic device switches the electrochromic component to a transparent state;
after the electronic device performs image synthesis processing on the plurality of second images and the base image to obtain the imaging image, the method further includes:
the electronic device switches the electrochromic assembly to a colored state to hide the first camera and/or the second camera.
In the embodiment of the application, in order to improve the visual integrity of the electronic device, an electrochromic component covers the first camera and/or the second camera, so that the cameras can be hidden by the electrochromic component when needed.
The operating principle of the electrochromic assembly will first be briefly described below.
Electrochromism refers to the phenomenon that the color/transparency of a material is changed stably and reversibly under the action of an applied electric field. Materials with electrochromic properties may be referred to as electrochromic materials. The electrochromic component in the embodiment of the present application is made of electrochromic materials.
The electrochromic assembly can comprise two transparent conductive layers arranged in a stack, with a color-changing layer, an electrolyte layer and an ion storage layer arranged between them. For example, when no voltage (0 V) is applied across the two transparent conductive layers, the electrochromic assembly is in a transparent state; when the voltage applied between the two layers changes from 0 V to 3 V, the assembly turns black; when the voltage changes from 3 V to -3 V, the assembly changes from black back to transparent; and so on.
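The voltage-driven behaviour described above can be modelled as a toy state machine; the +/-3 V levels are the example values from the text, while the class name, state labels and threshold semantics are invented for illustration:

```python
class ElectrochromicCover:
    """Toy model of an electrochromic assembly switched by voltage."""

    def __init__(self):
        self.state = "transparent"  # 0 V applied initially

    def apply_voltage(self, volts):
        if volts >= 3:
            self.state = "colored"      # e.g. black: hides the cameras
        elif volts <= -3:
            self.state = "transparent"  # cameras can shoot through it
        # voltages between the thresholds leave the state unchanged,
        # reflecting that the color change is stable once driven
        return self.state
```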
In this way, the first camera and/or the second camera can be hidden by utilizing the characteristic of adjustable color of the electrochromic assembly.
In the embodiment of the application, the electronic device can switch the electrochromic assembly covering the first camera and/or the second camera to a transparent state when the shooting type application is started, so that the first camera and the second camera can shoot a target shooting object.
After the base image is acquired through the first camera, the plurality of second images are obtained through the plurality of second cameras, the imaging image is finally synthesized, and the started shooting application exits, the electronic device switches the electrochromic assembly to a colored state, so that the first camera and/or the second camera are hidden.
For example, the electronic device is provided with an electrochromic component that covers the first camera and all the second cameras simultaneously, and the side of the electronic device on which the cameras are arranged is black. When no shooting application is started, the electrochromic component is kept in a black colored state, so that the first camera and the second cameras are hidden. When a shooting application is started, the electrochromic component is switched to a transparent state, so that the electronic device can shoot through the first camera and the second cameras. After the imaging image is finally synthesized and the shooting application exits, the electronic device switches the electrochromic component back to the black colored state, so that the first camera and the second cameras are hidden again.
In an embodiment, before the step of the electronic device shooting a target shooting object through a first camera and setting a first shot image as a base image, the method further includes:
(1) the electronic equipment detects whether the electronic equipment is in a shaking state currently;
(2) if the electronic equipment is not in the shake state at present, shooting a target shooting object through the first camera according to an image shooting request, and setting a first image obtained through shooting as a base image.
According to the description in the above embodiments, the images captured by the different cameras are finally combined to obtain the imaging image. If the electronic device is in a shake state during shooting, the image contents of the images captured by the different cameras will differ noticeably, which affects the synthesis effect of the imaging image.
Therefore, in the embodiment of the present application, before the electronic device shoots the target shooting object through the first camera and sets the shot first image as the base image, it first determines whether it is currently in a shake state. The electronic device may determine the shake state in a number of different ways. For example, it may judge whether its current speed in each direction is smaller than a preset speed; if so, it determines that it is not in a shake state (i.e., it is in a stable state), and if not, that it is in a shake state. For another example, it may judge whether its current displacement in each direction is smaller than a preset displacement, with the same conclusions. In addition, the shake state may also be judged in a manner not listed in the embodiments of the present application, which is not specifically limited here.
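The per-axis threshold checks described above can be sketched as follows; the threshold values are invented, and a real implementation would derive the speeds and displacements from accelerometer or gyroscope readings rather than take ready-made lists:

```python
def is_shaking(speeds, displacements, max_speed=0.02, max_shift=0.001):
    """The device counts as shaking if the speed or displacement along
    any axis reaches its preset threshold (thresholds illustrative)."""
    return (any(abs(v) >= max_speed for v in speeds) or
            any(abs(d) >= max_shift for d in displacements))
```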
When it is determined that the electronic device is not currently in a shake state, the electronic device captures the target shooting object through the first camera, and sets the captured first image as the base image for synthesizing the imaging image.
In an embodiment, before the step of the electronic device shooting a target shooting object through a first camera and setting a first shot image as a base image, the method further includes:
(1) if the electronic device is not in a shake state, the electronic device detects whether the target shooting object is in a static state;
(2) if the target shooting object is in a static state, the electronic equipment shoots the target shooting object through the first camera, and a first image obtained through shooting is set as a base image.
From the above description, it can be understood by those skilled in the art that, in the case that the electronic device is not in a shake state, if the target photographic object is not in a still state (for example, the target photographic object includes a moving object), the image content of the image obtained by the electronic device through the first camera and the second camera may have a large difference.
Therefore, in this embodiment of the application, when the electronic device determines that it is not currently in a shake state, it does not immediately shoot the target shooting object through the first camera, but further detects whether the target shooting object is in a static state. If the target shooting object is detected to be in a static state, the electronic device shoots it through the first camera according to the image shooting request, and sets the shot first image as the base image for synthesizing the imaging image; reference may be made to the related description in the above embodiments, which is not repeated here.
In this embodiment, a person skilled in the art can select an appropriate manner to determine whether the target photographic object is in the still state according to actual needs, which is not specifically limited in this application, for example, an optical flow method, a residual method, or the like can be used to determine whether the target photographic object is in the still state.
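As one illustrative possibility for the residual method mentioned here, stillness can be judged from the mean absolute difference between two consecutive frames; the threshold value and function name are assumptions, and a real system would first compensate for camera motion:

```python
def is_still(frame_a, frame_b, threshold=0.01):
    """A miniature residual check: the subject counts as static when
    the mean absolute difference between frames stays below threshold."""
    diffs = [abs(a - b)
             for row_a, row_b in zip(frame_a, frame_b)
             for a, b in zip(row_a, row_b)]
    return sum(diffs) / len(diffs) < threshold
```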
In one embodiment, the first camera and the second camera share an image sensor.
For example, referring to fig. 11, the first camera and the second camera share the same image sensor, and the first camera (lens portion) and the second camera (lens portion) can project external light to different portions of the image sensor in a time-sharing manner, so as to capture an external object.
Compared with the prior art, in which each of multiple cameras uses an independent image sensor, sharing one image sensor among multiple cameras in the embodiment of the present application can reduce the space occupied.
Referring to fig. 12, fig. 12 is a schematic diagram of a third flowchart of an apparatus imaging method according to an embodiment of the present application, where the flowchart may include:
301. when the electronic equipment is in a preview interface of the shooting application, the electronic equipment acquires gesture information.
302. And if the gesture information is matched with the preset gesture information, the electronic equipment determines the user corresponding to the gesture information as the target shooting object.
303. The electronic equipment shoots a target shooting object through the first camera, and a first image obtained through shooting is set as a base image.
The processes 301 to 303 are the same as or corresponding to the processes 101 to 103, and are not described herein again.
304. The electronic device determines a relative position of the target photographic subject and the electronic device.
The process 304 is the same as or corresponding to the process 203 described above, and is not described herein again.
305. The electronic equipment adjusts the shooting angle of each second camera according to the relative position so that the target shooting object is in an overlapping area of the shooting areas of the plurality of second cameras.
After determining the relative position between the target shooting object and the electronic device, the electronic device may adjust the shooting angle of each second camera according to the relative position. For example, the electronic device may calculate an angle and a direction that each second camera needs to rotate according to the relative position, so as to rotate each second camera according to the angle and the direction that each second camera needs to rotate, so as to adjust a shooting angle of each second camera, so that the target shooting object is located in an overlapping area of shooting areas of the plurality of second cameras.
For example, as shown in fig. 13, assume that the target photographic subject is in area m11, and that the overlapping area of the shooting areas of the plurality of second cameras is area m11. It can be understood that the shooting area of the first camera also includes area m11, i.e., the target photographic subject is in the shooting area of the first camera, and that the area where the shooting areas of the plurality of second cameras overlap the shooting area of the first camera is likewise area m11. That is, the area where the target photographic subject is located is the overlapping area of the shooting area of the first camera and the shooting areas of the plurality of second cameras, namely area m11.
306. The electronic equipment shoots the target shooting object through the second cameras to obtain a plurality of second images.
307. The electronic device performs image synthesis processing on the plurality of second images and the base image to obtain an imaging image.
Steps 306 and 307 are the same as, or correspond to, steps 104 and 105 described above and are not repeated here.
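Step 307 (fusing the second images with the base image) might be sketched as below. This is a strong simplification, assuming the placement offset of each second image within the base image is already known from the camera geometry; real synthesis would also involve alignment, scaling, and blending.

```python
import numpy as np

def synthesize(base, patches):
    """Paste each higher-detail patch (image, (row, col) offset) onto a
    copy of the base image; overlapping pixels are averaged."""
    out = base.astype(np.float32).copy()
    weight = np.ones(base.shape[:2], np.float32)
    for img, (r, c) in patches:
        h, w = img.shape[:2]
        out[r:r + h, c:c + w] += img.astype(np.float32)
        weight[r:r + h, c:c + w] += 1.0
    return (out / weight[..., None]).astype(base.dtype)

# Toy example: one 2x2 patch pasted onto a 4x4 black base image.
base = np.zeros((4, 4, 3), np.uint8)
patch = np.full((2, 2, 3), 200, np.uint8)
result = synthesize(base, [(patch, (0, 0))])
```

Averaging (rather than overwriting) is one simple way to handle the overlapping portions that the patent requires between shooting areas.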
Referring to fig. 14, fig. 14 is a schematic structural diagram of a device imaging apparatus according to an embodiment of the present disclosure. The device imaging apparatus is applied to an electronic device that includes a first camera of a first type and a plurality of second cameras of a second type, where the shooting area of each second camera overlaps the shooting area of the first camera. The device imaging apparatus 400 includes: a first obtaining module 401, a determining module 402, a second obtaining module 403, a third obtaining module 404, and a synthesizing module 405.
The first obtaining module 401 is configured to obtain gesture information when the device is in a preview interface of a shooting application.
A determining module 402, configured to determine, if the gesture information matches preset gesture information, that the user corresponding to the gesture information is a target shooting object.
A second obtaining module 403, configured to take a picture of the target shooting object through the first camera, and set a first image obtained through shooting as a base image.
A third obtaining module 404, configured to take a picture of the target photographic object through the plurality of second cameras to obtain a plurality of second images.
A synthesizing module 405, configured to perform image synthesis processing on the plurality of second images and the base image to obtain an imaging image.
Referring also to fig. 15, in some embodiments, the determining module 402 may include:
The detection submodule 4021 is configured to detect whether at least two users exist in a shooting scene if the gesture information matches preset gesture information.
The obtaining sub-module 4022 is configured to, if at least two users exist in the shooting scene, perform face image obtaining operation on the at least two users to obtain at least two face images.
The determining sub-module 4023 is configured to determine a target face image corresponding to the gesture information from the at least two face images, and determine a user corresponding to the target face image as a target shooting object.
In some embodiments, the determining module 402 may be configured to: and if the gesture information is matched with preset gesture information, determining the user making the gesture as a target shooting object.
In some embodiments, the second obtaining module 403 may be configured to: determining the relative position of the target shooting object and the electronic equipment; adjusting the shooting angle of the first camera according to the relative position so as to enable the target shooting object to be located in the central area of the shooting area of the first camera; and shooting the target shooting object through the first camera, and setting a first shot image as a base image.
The third obtaining module 404 may be configured to: adjusting the shooting angle of each second camera according to the relative position so as to enable the shooting area of each second camera to be overlapped with the edge part of the shooting area of the first camera; and shooting the target shooting object through the plurality of second cameras to obtain a plurality of second images.
In some embodiments, the third obtaining module 404 may be configured to: determining the relative position of the target shooting object and the electronic equipment; adjusting the shooting angle of each second camera according to the relative position so that the target shooting object is in an overlapping area of the shooting areas of the plurality of second cameras; and shooting the target shooting object through the plurality of second cameras to obtain a plurality of second images.
An embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed on a computer, causes the computer to perform the flow of the device imaging method provided by the embodiments.
An embodiment of the present application further provides an electronic device including a memory and a processor, where the processor executes the flow of the device imaging method provided in the embodiments by calling the computer program stored in the memory.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 16, fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Referring to fig. 16, the electronic device includes a processor 501, a memory 502, a first camera 503 of a first type, and a plurality of second cameras 504 of a second type. The processor 501 is electrically connected to the memory 502, the first camera 503 and the second camera 504.
The processor 501 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by running or loading a computer program stored in the memory 502, and calling data stored in the memory 502.
The memory 502 may be used to store software programs and modules, and the processor 501 executes various functional applications and data processing by running the computer programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, a computer program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 501 with access to the memory 502.
The first camera 503 is a standard camera, that is, a camera with a field angle of about 45 degrees.
The second camera 504 is a telephoto camera, that is, a camera with a field angle of 40 degrees or less.
In this embodiment of the present application, the processor 501 in the electronic device loads instructions corresponding to one or more processes of the computer program into the memory 502 according to the following steps, and the processor 501 runs the computer program stored in the memory 502, so as to implement various functions, as follows:
when the mobile terminal is in a preview interface of a shooting application, acquiring gesture information;
if the gesture information is matched with preset gesture information, determining a user corresponding to the gesture information as a target shooting object;
shooting the target shooting object through the first camera 503, and setting a first shot image as a base image;
shooting the target shooting object through the plurality of second cameras 504 to obtain a plurality of second images;
and performing image synthesis processing on the plurality of second images and the base image to obtain an imaging image.
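The five steps loaded and executed by processor 501 can be summarized in a hypothetical orchestration sketch. All class and method names below are illustrative stand-ins, not APIs from the patent or any real camera framework:

```python
from dataclasses import dataclass

class Camera:
    """Stand-in for a physical camera; shoot() returns a tagged frame."""
    def __init__(self, label):
        self.label = label
    def shoot(self, target):
        return f"{self.label}:{target}"

@dataclass
class Device:
    first_camera: Camera
    second_cameras: list
    gesture: str = "wave"
    def get_gesture_info(self):
        return self.gesture
    def resolve_user(self, gesture):
        return "user-A"                        # user who made the gesture
    def synthesize(self, base, seconds):
        return (base, tuple(seconds))          # placeholder for image fusion

def capture(device, preset_gesture):
    gesture = device.get_gesture_info()        # step 1: gesture in preview UI
    if gesture != preset_gesture:              # step 2: match against preset
        return None
    target = device.resolve_user(gesture)      # target shooting object
    base = device.first_camera.shoot(target)   # step 3: first image -> base image
    seconds = [c.shoot(target) for c in device.second_cameras]  # step 4
    return device.synthesize(base, seconds)    # step 5: imaging image
```

Returning `None` when the gesture does not match mirrors the conditional "if the gesture information is matched" in the described flow.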
Referring to fig. 17, fig. 17 is a second schematic structural diagram of an electronic device according to an embodiment of the present disclosure, and the difference between the second schematic structural diagram and the electronic device shown in fig. 16 is that the electronic device further includes components such as an input unit 505 and an output unit 506.
The input unit 505 may be used to receive input numbers, character information, or user characteristic information (such as a fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The output unit 506 may be used to display information input by the user or information provided to the user, such as a screen.
In this embodiment of the present application, the processor 501 in the electronic device loads instructions corresponding to one or more processes of the computer program into the memory 502 according to the following steps, and the processor 501 runs the computer program stored in the memory 502, so as to implement various functions, as follows:
when the mobile terminal is in a preview interface of a shooting application, acquiring gesture information;
if the gesture information is matched with preset gesture information, determining a user corresponding to the gesture information as a target shooting object;
shooting the target shooting object through the first camera 503, and setting a first shot image as a base image;
shooting the target shooting object through the plurality of second cameras 504 to obtain a plurality of second images;
and performing image synthesis processing on the plurality of second images and the base image to obtain an imaging image.
In an embodiment, if the gesture information matches preset gesture information, when the user corresponding to the gesture information is determined as the target shooting object, the processor 501 executes: if the gesture information is matched with preset gesture information, detecting whether at least two users exist in a shooting scene; if at least two users exist in the shooting scene, performing face image acquisition operation on the at least two users to obtain at least two face images; and determining a target face image corresponding to the gesture information from the at least two face images, and determining a user corresponding to the target face image as a target shooting object.
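Selecting the target face image corresponding to the gesture, when at least two users are present, could be sketched as picking the detected face closest to the gesture. This is only one plausible heuristic, with hypothetical helper names; the patent does not specify the matching criterion.

```python
def pick_target_face(gesture_box, face_boxes):
    """Return the index of the face whose horizontal centre is closest
    to the detected gesture; all boxes are (x, y, w, h) in pixels."""
    gesture_cx = gesture_box[0] + gesture_box[2] / 2
    def distance(box):
        return abs(box[0] + box[2] / 2 - gesture_cx)
    return min(range(len(face_boxes)), key=lambda i: distance(face_boxes[i]))

# A raised hand at x-centre 110 selects the nearer face (centre 100)
# over the farther one (centre 20).
idx = pick_target_face((100, 200, 20, 20), [(10, 40, 20, 20), (90, 40, 20, 20)])
```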
In an embodiment, when the user corresponding to the gesture information is determined as the target shooting object if the gesture information matches preset gesture information, the processor 501 executes: and if the gesture information is matched with preset gesture information, determining the user making the gesture as a target shooting object.
In an embodiment, when the target object is captured by the first camera 503 and a captured first image is set as a base image, the processor 501 executes: determining the relative position of the target shooting object and the electronic equipment; adjusting the shooting angle of the first camera 503 according to the relative position, so that the target shooting object is located in the central area of the shooting area of the first camera 503; shooting the target shooting object through the first camera 503, and setting a first shot image as a base image; when the second cameras 504 capture the target object to obtain a plurality of second images, the processor 501 performs: adjusting the shooting angle of each second camera 504 according to the relative position, so that the shooting area of each second camera 504 is partially overlapped with the edge of the shooting area of the first camera 503; the target photographic subject is photographed by the plurality of second cameras 504, and a plurality of second images are obtained.
In an embodiment, when the second cameras 504 capture the target object to obtain a plurality of second images, the processor 501 performs: determining the relative position of the target shooting object and the electronic equipment; adjusting the shooting angle of each second camera 504 according to the relative position so that the target shooting object is in an overlapping area of the shooting areas of the plurality of second cameras 504; the target photographic subject is photographed by the plurality of second cameras 504, and a plurality of second images are obtained.
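Checking that the target lies in the overlapping area of the second cameras' shooting areas can be illustrated with axis-aligned regions. This is a deliberate simplification (real shooting areas are 3D view frustums), and the helper names are hypothetical:

```python
def fov_intersection(rects):
    """Intersection of shooting areas given as (x1, y1, x2, y2) rectangles,
    or None if they do not all overlap."""
    x1 = max(r[0] for r in rects)
    y1 = max(r[1] for r in rects)
    x2 = min(r[2] for r in rects)
    y2 = min(r[3] for r in rects)
    return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

def target_in_overlap(point, rects):
    """True if the target point lies inside the common overlap region."""
    box = fov_intersection(rects)
    return (box is not None
            and box[0] <= point[0] <= box[2]
            and box[1] <= point[1] <= box[3])
```

If the check fails, the adjustment loop of step 305 would rotate the second cameras and re-test until the target falls inside the intersection.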
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the device imaging method, and are not described herein again.
The device imaging apparatus provided in the embodiment of the present application and the device imaging method in the above embodiments belong to the same concept, and any method provided in the device imaging method embodiment may be run on the device imaging apparatus, and a specific implementation process thereof is described in the device imaging method embodiment in detail, and is not described herein again.
It should be noted that, for the device imaging method described in the embodiments of the present application, those skilled in the art will understand that all or part of the flow of the method may be completed by controlling relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor; during execution, the flow of the device imaging method embodiments may be included. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
For the device imaging apparatus of the present application, the functional modules may be integrated into one processing chip, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The device imaging method, device imaging apparatus, storage medium, and electronic device provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the above descriptions of the embodiments are only intended to help understand the method and its core ideas. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (9)

1. A device imaging method, applied to an electronic device, wherein the electronic device comprises two first cameras of a first type, one of which is surrounded by four second cameras of a second type; a shooting area of the first camera is larger than a shooting area of the second camera; the shooting area of the second camera overlaps the shooting area of the first camera; the first camera and the second cameras share an image sensor and project external light onto different portions of the image sensor in a time-sharing manner to photograph a target shooting object; and the first camera and the second cameras use the same image parameters; the method comprising:
when the mobile terminal is in a preview interface of a shooting application, acquiring gesture information;
if the gesture information is matched with preset gesture information, determining a user corresponding to the gesture information as a target shooting object;
determining the relative position of the target shooting object and the electronic equipment;
adjusting the shooting angle of the first camera according to the relative position so as to enable the target shooting object to be located in the central area of the shooting area of the first camera;
when the target shooting object is not in a shaking state and is in a static state at present, shooting the target shooting object through the two first cameras to obtain at least two first images;
performing image synthesis processing on the at least two first images, and setting the synthesized images as base images;
adjusting the shooting angle of each second camera according to the relative position so that the shooting area of each second camera is partially overlapped with different edges of the shooting area of the first camera, the shooting areas of any two second cameras have overlapped parts, and partial areas of the shooting area of each second camera are overlapped with the central area of the shooting area of the first camera;
shooting the target shooting object through the plurality of second cameras to obtain a plurality of second images;
and performing image synthesis processing on the plurality of second images and the base image to obtain an imaging image.
2. The device imaging method according to claim 1, wherein if the gesture information matches preset gesture information, determining a user corresponding to the gesture information as a target shooting object includes:
if the gesture information is matched with preset gesture information, detecting whether at least two users exist in a shooting scene;
if at least two users exist in the shooting scene, performing face image acquisition operation on the at least two users to obtain at least two face images;
and determining a target face image corresponding to the gesture information from the at least two face images, and determining a user corresponding to the target face image as a target shooting object.
3. The device imaging method according to claim 1, wherein if the gesture information matches preset gesture information, determining a user corresponding to the gesture information as a target shooting object includes:
and if the gesture information is matched with preset gesture information, determining the user making the gesture as a target shooting object.
4. The device imaging method according to claim 1, wherein said capturing the target photographic subject by the plurality of second cameras resulting in a plurality of second images comprises:
determining the relative position of the target shooting object and the electronic equipment;
adjusting the shooting angle of each second camera according to the relative position so as to enable the target shooting object to be in an overlapping area of shooting areas of the plurality of second cameras;
and shooting the target shooting object through the plurality of second cameras to obtain a plurality of second images.
5. A device imaging apparatus, applied to an electronic device, wherein the electronic device comprises two first cameras of a first type, one of which is surrounded by four second cameras of a second type; a shooting area of the first camera is larger than a shooting area of the second camera; the shooting area of the second camera overlaps the shooting area of the first camera; the first camera and the second cameras share an image sensor and project external light onto different portions of the image sensor in a time-sharing manner to photograph a target shooting object; and the first camera and the second cameras use the same image parameters; the apparatus comprising:
the first acquisition module is used for acquiring gesture information when the mobile terminal is in a preview interface of a shooting application;
the determining module is used for determining a user corresponding to the gesture information as a target shooting object if the gesture information is matched with preset gesture information;
the second acquisition module is used for determining the relative position of the target shooting object and the electronic equipment; adjusting the shooting angle of the first camera according to the relative position so as to enable the target shooting object to be located in the central area of the shooting area of the first camera; when the target shooting object is not in a shaking state and is in a static state at present, shooting the target shooting object through the first camera to obtain at least two first images; performing image synthesis processing on the at least two first images, and setting the synthesized images as base images;
the third acquisition module is used for adjusting the shooting angle of each second camera according to the relative position so that the shooting area of each second camera is partially overlapped with different edges of the shooting area of the first camera, the shooting areas of any two second cameras have overlapped parts, and partial areas of the shooting area of each second camera are overlapped with the central area of the shooting area of the first camera; shooting the target shooting object through the plurality of second cameras to obtain a plurality of second images;
and the synthesis module is used for performing image synthesis processing on the plurality of second images and the base image to obtain an imaging image.
6. The device imaging apparatus of claim 5, wherein the determining module comprises:
the detection submodule is used for detecting whether at least two users exist in a shooting scene or not if the gesture information is matched with preset gesture information;
the acquisition sub-module is used for carrying out face image acquisition operation on at least two users to obtain at least two face images if the at least two users exist in the shooting scene;
and the determining submodule is used for determining a target face image corresponding to the gesture information from the at least two face images and determining a user corresponding to the target face image as a target shooting object.
7. The device imaging apparatus of claim 5, wherein the determination module is specifically configured to: and if the gesture information is matched with preset gesture information, determining the user making the gesture as a target shooting object.
8. A storage medium having stored therein a computer program which, when run on a computer, causes the computer to execute the apparatus imaging method of any one of claims 1 to 4.
9. An electronic device, characterized in that the electronic device comprises a processor and a memory, a first camera of a first type and a plurality of second cameras of a second type, the shooting area of the second cameras and the shooting area of the first cameras have overlapping parts, a computer program is stored in the memory, and the processor is used for executing the device imaging method according to any one of claims 1 to 4 by calling the computer program stored in the memory.
CN201910579742.7A 2019-06-28 2019-06-28 Device imaging method and device, storage medium and electronic device Active CN110213493B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910579742.7A CN110213493B (en) 2019-06-28 2019-06-28 Device imaging method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910579742.7A CN110213493B (en) 2019-06-28 2019-06-28 Device imaging method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN110213493A CN110213493A (en) 2019-09-06
CN110213493B true CN110213493B (en) 2021-03-02

Family

ID=67795449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910579742.7A Active CN110213493B (en) 2019-06-28 2019-06-28 Device imaging method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN110213493B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110290324B (en) * 2019-06-28 2021-02-02 Oppo广东移动通信有限公司 Device imaging method and device, storage medium and electronic device
CN111126279B (en) * 2019-12-24 2024-04-16 深圳市优必选科技股份有限公司 Gesture interaction method and gesture interaction device
CN111901527B (en) * 2020-08-05 2022-03-18 深圳市浩瀚卓越科技有限公司 Tracking control method, tracking control device, object tracking unit, and storage medium
CN112511743B (en) * 2020-11-25 2022-07-22 南京维沃软件技术有限公司 Video shooting method and device
CN112565602A (en) * 2020-11-30 2021-03-26 北京地平线信息技术有限公司 Method and apparatus for controlling image photographing apparatus, and computer-readable storage medium
CN113031464B (en) * 2021-03-22 2022-11-22 北京市商汤科技开发有限公司 Device control method, device, electronic device and storage medium
CN115297315A (en) * 2022-07-18 2022-11-04 北京城市网邻信息技术有限公司 Correction method and device for shooting central point in circular shooting and electronic equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010273280A (en) * 2009-05-25 2010-12-02 Nikon Corp Imaging apparatus
US8417058B2 (en) * 2010-09-15 2013-04-09 Microsoft Corporation Array of scanning sensors
JP5725975B2 (en) * 2011-05-27 2015-05-27 キヤノン株式会社 Imaging apparatus and imaging method
JP6416740B2 (en) * 2015-11-27 2018-10-31 日本電信電話株式会社 Image processing apparatus, image processing method, and computer program
JP2018014699A (en) * 2016-07-23 2018-01-25 キヤノン株式会社 Imaging device and method for controlling imaging device
JP7043219B2 (en) * 2017-10-26 2022-03-29 キヤノン株式会社 Image pickup device, control method of image pickup device, and program
JP7059576B2 (en) * 2017-11-13 2022-04-26 大日本印刷株式会社 Automatic shooting system and automatic shooting method
CN108282617A (en) * 2018-01-31 2018-07-13 努比亚技术有限公司 Mobile terminal image pickup method, mobile terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN110213493A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110213493B (en) Device imaging method and device, storage medium and electronic device
CN110290324B (en) Device imaging method and device, storage medium and electronic device
CN110213492B (en) Device imaging method and device, storage medium and electronic device
KR102187146B1 (en) Dual-aperture zoom digital camera with automatic adjustable tele field of view
CN110225256B (en) Device imaging method and device, storage medium and electronic device
US9036072B2 (en) Image processing apparatus and image processing method
CN110166680B (en) Device imaging method and device, storage medium and electronic device
JP2017505004A (en) Image generation method and dual lens apparatus
CN110290299B (en) Imaging method, imaging device, storage medium and electronic equipment
CN113875220B (en) Shooting anti-shake method, shooting anti-shake device, terminal and storage medium
CN110677621B (en) Camera calling method and device, storage medium and electronic equipment
CN111770273B (en) Image shooting method and device, electronic equipment and readable storage medium
CN110636276B (en) Video shooting method and device, storage medium and electronic equipment
CN113329172B (en) Shooting method and device and electronic equipment
US20220329729A1 (en) Photographing method, storage medium and electronic device
CN110312075B (en) Device imaging method and device, storage medium and electronic device
US11431923B2 (en) Method of imaging by multiple cameras, storage medium, and electronic device
CN114363522A (en) Photographing method and related device
CN110430375B (en) Imaging method, imaging device, storage medium and electronic equipment
CN110545375B (en) Image processing method, image processing device, storage medium and electronic equipment
CN110012208B (en) Photographing focusing method and device, storage medium and electronic equipment
CN110191274A (en) Imaging method, device, storage medium and electronic equipment
CN114866680B (en) Image processing method, device, storage medium and electronic equipment
CN117729418A (en) Character framing method and device based on picture display and terminal equipment
CN116261043A (en) Focusing distance determining method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant