CN110213492B - Device imaging method and device, storage medium and electronic device - Google Patents

Device imaging method and device, storage medium and electronic device

Info

Publication number
CN110213492B
Authority
CN
China
Prior art keywords
shooting
camera
target
image
cameras
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910578474.7A
Other languages
Chinese (zh)
Other versions
CN110213492A (en)
Inventor
李亮
占文喜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910578474.7A priority Critical patent/CN110213492B/en
Publication of CN110213492A publication Critical patent/CN110213492A/en
Application granted granted Critical
Publication of CN110213492B publication Critical patent/CN110213492B/en
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/695: Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio

Abstract

The application discloses a device imaging method and apparatus, a storage medium and an electronic device. The method comprises the following steps: acquiring at least one piece of voice information when the electronic device is in a preview interface of a shooting application; determining, from the at least one piece of voice information, target voice information containing a preset keyword, and determining the user corresponding to the target voice information as a target shooting object; shooting the target shooting object through a first camera, and setting the captured first image as a base image; shooting the target shooting object through a plurality of second cameras to obtain a plurality of second images; and performing image synthesis processing on the plurality of second images and the base image to obtain an imaged image. The method and the device can improve the quality of the overall imaged image captured by the electronic device.

Description

Device imaging method and device, storage medium and electronic device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an apparatus imaging method and apparatus, a storage medium, and an electronic device.
Background
At present, users generally capture images with electronic devices equipped with cameras, so that surrounding objects and scenes can be recorded anytime and anywhere. However, due to hardware limitations of the camera, only the area around the focus point of a captured image is usually sharp, while other areas are relatively blurred, so the quality of the overall imaged image is poor.
Disclosure of Invention
The embodiments of the application provide a device imaging method and apparatus, a storage medium and an electronic device, which can improve the quality of the overall imaged image captured by the electronic device.
An embodiment of the application provides a device imaging method applied to an electronic device, wherein the electronic device comprises a first camera of a first type and a plurality of second cameras of a second type, and the shooting area of each second camera overlaps the shooting area of the first camera. The method comprises the following steps:
when the electronic device is in a preview interface of a shooting application, acquiring at least one piece of voice information;
determining target voice information with preset keywords from the at least one voice information, and determining a user corresponding to the target voice information as a target shooting object;
shooting the target shooting object through the first camera, and setting a first shot image as a base image;
shooting the target shooting object through the plurality of second cameras to obtain a plurality of second images;
and carrying out image synthesis processing on the plurality of second images and the base image to obtain an imaged image.
An embodiment of the application provides a device imaging apparatus applied to an electronic device, wherein the electronic device includes a first camera of a first type and a plurality of second cameras of a second type, and the shooting area of each second camera overlaps the shooting area of the first camera. The apparatus includes:
a first acquisition module, configured to acquire at least one piece of voice information when the device is in a preview interface of a shooting application;
a determining module, configured to determine target voice information containing a preset keyword from the at least one piece of voice information, and to determine the user corresponding to the target voice information as a target shooting object;
a second acquisition module, configured to shoot the target shooting object through the first camera and set the captured first image as a base image;
a third acquisition module, configured to shoot the target shooting object through the plurality of second cameras to obtain a plurality of second images; and
a synthesis module, configured to perform image synthesis processing on the plurality of second images and the base image to obtain an imaged image.
An embodiment of the application provides a storage medium storing a computer program which, when executed on a computer, causes the computer to execute the flow in the device imaging method provided by the embodiments of the application.
The embodiment of the application further provides an electronic device, which comprises a memory, a processor, a first camera of a first type and a plurality of second cameras of a second type, wherein an overlapping part exists between a shooting area of the second camera and a shooting area of the first camera, and the processor is used for executing the flow in the device imaging method provided by the embodiment of the application by calling the computer program stored in the memory.
In the embodiments of the application, target voice information containing a preset keyword is determined from at least one piece of voice information, and the user corresponding to the target voice information is determined as the target shooting object; the target shooting object is shot by the first camera, and the captured first image is set as the base image; the target shooting object is shot by the plurality of second cameras to obtain a plurality of second images; and image synthesis processing is performed on the plurality of second images and the base image to obtain the imaged image. Therefore, in the finally obtained imaged image, the sharpness of the areas outside the area where the target shooting object is located is also improved, and the quality of the overall imaged image is improved.
Drawings
The technical solutions and advantages of the present application will become apparent from the following detailed description of specific embodiments of the present application when taken in conjunction with the accompanying drawings.
Fig. 1 is a first flowchart of an imaging method of an apparatus provided in an embodiment of the present application.
Fig. 2 is a second flowchart of an imaging method of the apparatus provided in the embodiment of the present application.
Fig. 3 is a schematic diagram of a central area of a target photographic object in a photographic area of a first camera in the embodiment of the present application.
Fig. 4 is a schematic diagram of a first arrangement manner of a first camera and a second camera in the embodiment of the present application.
Fig. 5 is a schematic diagram of an edge portion overlapping of a shooting area of a second camera and a shooting area of a first camera in the embodiment of the present application.
Fig. 6 is a schematic diagram of an overlapping area m in which the shooting areas of all the second cameras overlap with the shooting area of the first camera at the same time in the embodiment of the present application.
Fig. 7 is a schematic diagram of image content comparison of a second image and a base image in an embodiment of the present application.
Fig. 8 is a schematic view of an overlapping region m1 where all the second images and the base image overlap simultaneously, overlapping regions m2, m3, m4, m5 where two adjacent second images and the base image overlap simultaneously, and overlapping regions m6, m7, m8, m9 where two adjacent second images overlap in the embodiment of the present application.
Fig. 9 is a schematic view of an overlapping area where all the second images and the base image overlap simultaneously in the embodiment of the present application.
Fig. 10 is a third schematic flowchart of an imaging method of an apparatus provided in an embodiment of the present application.
Fig. 11 is a schematic diagram of a second arrangement manner of the first camera and the second camera in the embodiment of the present application.
Fig. 12 is a schematic diagram of an image sensor shared by a first camera and a second camera in an embodiment of the present application.
Fig. 13 is a fourth flowchart illustrating an imaging method of an apparatus according to an embodiment of the present disclosure.
Fig. 14 is a schematic view of a certain area m11 where a target photographic object is in the shooting area of the first camera in the embodiment of the present application.
Fig. 15 is a schematic structural diagram of a first imaging device of the apparatus provided in the embodiment of the present application.
Fig. 16 is a schematic diagram of a second structure of an imaging device of an apparatus provided in the embodiment of the present application.
Fig. 17 is a schematic structural diagram of a third imaging device of the apparatus provided in the embodiment of the present application.
Fig. 18 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Fig. 19 is a second structural schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
It is understood that the execution subject of the embodiment of the present application may be an electronic device such as a smart phone or a tablet computer.
Referring to fig. 1, fig. 1 is a schematic flow chart of an imaging method of an apparatus according to an embodiment of the present disclosure, where the flow chart may include:
101. when the device is in a preview interface of the shooting application, at least one voice message is acquired.
In the embodiments of the application, the electronic device includes a first camera of a first type and a plurality of second cameras of a second type. The shooting area of the first camera is larger than that of each second camera. The shooting area of each second camera overlaps the shooting area of the first camera, the shooting areas of any two second cameras overlap each other, and the area where the shooting areas of any two second cameras overlap also overlaps the shooting area of the first camera. In other words, there is an overlapping area where the shooting areas of all the second cameras and the shooting area of the first camera overlap at the same time.
For example, when a user operates the electronic device to start a shooting class application (e.g., a system application "camera" of the electronic device), the electronic device enters a preview interface of the shooting class application. The electronic device may obtain at least one voice message when the electronic device is at a preview interface of a camera-like application.
The electronic device can acquire one or more voice messages of a user through the microphone. The electronic device may also obtain one or more voice messages from multiple users via the microphone.
For example, when the user U1 says "please help me take a photo" in the microphone pick-up range of the electronic device, the electronic device obtains a voice message through the microphone. When the user U2 says "today's weather is really good" in the microphone pick-up range of the electronic device, the electronic device obtains a voice message through the microphone.
102. And determining target voice information with preset keywords from at least one voice information, and determining a user corresponding to the target voice information as a target shooting object.
For example, after at least one piece of voice information is acquired, the electronic device may detect whether a preset keyword exists in each piece of voice information. For example, the electronic device may perform speech recognition on each piece of voice information to convert it into corresponding text, and detect whether the preset keyword exists in the text. When a preset keyword is detected in a certain piece of voice information, the electronic device may determine that voice information as the target voice information, and determine the user corresponding to the target voice information as the target shooting object.
For example, assume that the electronic device acquires voice information from user U1, user U2 and user U3. The voice information of user U1 is "please help me take a photo", the voice information of user U2 is "today's weather is really good", and the voice information of user U3 is "Sun Hai Yan". The preset keyword is "photo". Since the preset keyword "photo" exists in the voice information of user U1, the electronic device may determine the voice information of user U1 as the target voice information, and determine the corresponding user, that is, user U1, as the target shooting object.
In some embodiments, each time the electronic device acquires one voice message, it may be detected whether a preset keyword exists in the voice message. If the electronic device detects that the preset keyword exists in the voice message, the electronic device may determine the voice message as the target voice message, determine the user corresponding to the target voice message as the target shooting object, and enter the process 103. If the electronic device detects that the preset keyword does not exist in the voice information, the electronic device can continue to execute the process of acquiring the voice information.
The preset keywords may be set by a user, or may be automatically generated by the electronic device, and so on. The method is not particularly limited, and is subject to practical requirements.
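To illustrate the keyword-screening logic of flow 102, the following is a minimal Python sketch, assuming a hypothetical speech-to-text helper (recognize_speech) and example keyword values; the patent does not specify a particular speech-recognition engine or keyword set.

```python
# Minimal sketch of flow 102: find the voice information that contains a
# preset keyword and map it back to the user who spoke it.
# recognize_speech() is a hypothetical speech-to-text helper (assumption);
# any ASR engine could be substituted here.

PRESET_KEYWORDS = {"photo", "picture"}  # assumed example keywords

def find_target_speaker(voice_messages):
    """voice_messages: list of (user_id, audio_clip) tuples.

    Returns the user_id of the first message containing a preset keyword
    (the target shooting object), or None if no message matches.
    """
    for user_id, audio_clip in voice_messages:
        text = recognize_speech(audio_clip)                  # speech -> text
        if any(kw in text.lower() for kw in PRESET_KEYWORDS):
            return user_id
    return None

def recognize_speech(audio_clip):
    # Placeholder: a real implementation would call a speech-recognition engine.
    raise NotImplementedError
```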
103. The target shooting object is shot through the first camera, and a first image obtained through shooting is set as a base image.
For example, after determining that the user U1 is the target shooting object, the electronic device may shoot the user U1 once by using the first camera, and record an image shot by the first camera as a first image, and set the first image as a base image.
In some embodiments, only the target photographic subject may be included in the first image, i.e., only the user U1. In other embodiments, the first image may include other people, objects, scenes, etc. besides the target shooting object, i.e. the user U1, and is not limited herein.
104. And shooting the target shooting object through a plurality of second cameras to obtain a plurality of second images.
In the embodiment of the application, the electronic device further shoots a target shooting object through a plurality of second cameras arranged on the electronic device, a plurality of images are correspondingly obtained, and the images shot by the second cameras are recorded as second images, namely, the plurality of second images are obtained through shooting.
It should be noted that when the target shooting object is shot by the plurality of second cameras, the second cameras and the first camera shoot with the same image parameters (such as contrast and brightness), so that although the first image and the second images cover differently sized areas, they have the same imaging effect.
It should be noted that, in this embodiment of the present application, the execution order of 103 and 104 is not limited, and 104 may be executed after 103 is executed, 103 may be executed after 104 is executed, or 103 and 104 may be executed simultaneously.
105. And carrying out image synthesis processing on the plurality of second images and the base image to obtain an imaged image.
In the embodiment of the application, after the electronic device obtains the base image through the shooting of the first camera and obtains the plurality of second images through the shooting of the plurality of second cameras, the plurality of shot second images are aligned with the base image.
Based on the aligned base image and second images, for the portion where the base image and the second images overlap, the electronic device calculates an average pixel value for each overlapped pixel point. For example, suppose the electronic device obtains the base image through the first camera and four second images through four second cameras; for a pixel point at a certain position in the overlapping area, if its pixel values in the five images (i.e., the base image and the four second images) are 0.8, 0.9, 1.1, 1.2 and 1 respectively, the average pixel value of the pixel point at that position is calculated to be 1.
Then, a synthesized image is obtained from the average pixel values calculated for the corresponding pixel points in the base image. For example, the pixel values of the base image may be adjusted to the calculated average pixel values to obtain the imaged image; alternatively, a new image, i.e., the imaged image, may be generated from the calculated average pixel values.
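The per-pixel averaging described above can be sketched as follows. This is only an illustration, assuming the second images have already been aligned into the base image's coordinate system and that a boolean mask records where each second image covers the base image; the patent does not prescribe a particular implementation.

```python
import numpy as np

def synthesize(base, second_images, masks):
    """Minimal sketch of flow 105, assuming every second image has already
    been warped into the base image's coordinates.

    base          : HxW float array (the base image from the first camera)
    second_images : list of HxW arrays (aligned second images)
    masks         : list of HxW bool arrays, True where the corresponding
                    second image actually covers the base image
    Returns an imaged image in which each pixel is the average of the base
    image and every second image that covers that pixel.
    """
    acc = base.astype(np.float64).copy()   # running sum of pixel values
    count = np.ones_like(acc)              # the base image covers every pixel
    for img, mask in zip(second_images, masks):
        acc[mask] += img[mask]
        count[mask] += 1
    return acc / count                     # per-pixel average
```

With the pixel values from the example above (0.8, 0.9, 1.1, 1.2 and 1 at a position covered by all four second images), the function returns 1 at that pixel.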
In the embodiments of the application, after the electronic device performs image synthesis processing on the plurality of captured second images and the base image to obtain the imaged image, the electronic device has completed one complete shooting operation.
In the embodiments of the present application, the target shooting object is located in the overlapping area where the shooting areas of the plurality of second cameras and the shooting area of the first camera all overlap, that is, in the area where they overlap at the same time. Assume that the electronic device includes four second cameras: second camera A, second camera B, second camera C and second camera D. The area where the shooting area of second camera A overlaps the shooting area of the first camera is referred to as overlapping area m12. The area where overlapping area m12 overlaps the shooting area of second camera B is referred to as overlapping area m13. The area where overlapping area m13 overlaps the shooting area of second camera C is referred to as overlapping area m14. The area where overlapping area m14 overlaps the shooting area of second camera D is referred to as overlapping area m15. The target shooting object is located in overlapping area m15.
In the embodiments of the application, target voice information containing a preset keyword is determined from at least one piece of voice information, and the user corresponding to the target voice information is determined as the target shooting object; the target shooting object is shot by the first camera, and the captured first image is set as the base image; the target shooting object is shot by the plurality of second cameras to obtain a plurality of second images; and image synthesis processing is performed on the plurality of second images and the base image to obtain the imaged image. Because the shooting area of each second camera overlaps the shooting area of the first camera, the shooting areas of any two second cameras overlap, the area where the shooting areas of any two second cameras overlap also overlaps the shooting area of the first camera, and the target shooting object lies in the area where the shooting areas of the plurality of second cameras and the shooting area of the first camera all overlap, the sharpness of the area where the target shooting object is located in the finally obtained imaged image is improved, the sharpness of the areas outside that area is also improved, and the quality of the overall imaged image is improved.
Referring to fig. 2, fig. 2 is a schematic flow chart of an imaging method of an apparatus according to an embodiment of the present application, where the flow chart may include:
201. when the electronic equipment is in a preview interface of the shooting application, the electronic equipment acquires at least one voice message.
In the embodiments of the application, the electronic device includes a first camera of a first type and a plurality of second cameras of a second type. The shooting area of the first camera is larger than that of each second camera. The shooting area of each second camera overlaps the shooting area of the first camera, the shooting areas of any two second cameras overlap each other, and the area where the shooting areas of any two second cameras overlap also overlaps the shooting area of the first camera. In other words, there is an overlapping area where the shooting areas of all the second cameras and the shooting area of the first camera overlap at the same time.
For example, when a user operates the electronic device to start a shooting class application (e.g., a system application "camera" of the electronic device), the electronic device enters a preview interface of the shooting class application. The electronic device may obtain at least one voice message when the electronic device is at a preview interface of a camera-like application.
The electronic equipment can acquire at least one voice message of at least one user through the microphone. For example, the electronic device may obtain one voice message of one user through the microphone, and may also obtain a plurality of voice messages of one user. The electronic equipment can acquire voice information of a plurality of users through the microphone. The electronic device may acquire one voice message of each of the plurality of users, or may acquire a plurality of voice messages of each of the plurality of users.
For example, when the user U1 says "please help me take a photo" in the microphone pick-up range of the electronic device, the electronic device obtains a voice message through the microphone, and the voice message corresponds to the user U1. When the user U2 says "today is very good" in the microphone pickup range of the electronic device, the electronic device obtains a voice message through the microphone, and the voice message corresponds to the user U2.
202. The electronic equipment determines target voice information with preset keywords from at least one voice information.
For example, if there are multiple users in the shooting scene, the electronic device may acquire one or more pieces of voice information of the multiple users at the same time. In order to improve the shooting efficiency, the electronic device may simultaneously detect whether a preset keyword exists in one or more voice messages of multiple users. For example, the electronic device may perform voice recognition on each voice message to convert the voice message into corresponding text message, and detect whether a preset keyword exists in the text message. Since there are a plurality of users in the shooting scene, there is a case where the plurality of users all say the preset keyword. For the above situation, that is, if the electronic device detects that there are at least two target voice messages with preset keywords, the electronic device may enter the process 203.
After a shooting application program (such as a system application "camera" of the electronic device) is started according to a user operation, a scene aimed at by a camera of the electronic device is a shooting scene. For example, after the user clicks an icon of a "camera" application on the electronic device with a finger to start the "camera application", if the user uses a camera of the electronic device to align a scene including an XX object, the scene including the XX object is a shooting scene. From the above description, it will be understood by those skilled in the art that the shooting scene is not specific to a particular scene, but is a scene aligned in real time following the orientation of the camera.
203. And if at least two target voice messages with preset keywords exist, the electronic equipment determines the user corresponding to each target voice message to obtain at least two users.
When the electronic device detects that there are at least two pieces of target voice information containing the preset keyword, there are two situations. The first is: the at least two pieces of target voice information containing the preset keyword correspond to the same user, that is, the same user spoke the preset keyword at least twice. The second is: each piece of target voice information containing the preset keyword corresponds to a different user, that is, each target voice containing the preset keyword was spoken by a different user.
The embodiment of the application is mainly directed to the second situation, that is, if at least two target voice messages with preset keywords exist and each target voice message with the preset keywords corresponds to a different user, the electronic device may determine the user corresponding to each target voice message to obtain at least two users.
204. The electronic device selects a target photographic subject from at least two users.
After determining the user corresponding to each target voice message and obtaining at least two users, the electronic device may select a target photographic object from the at least two users.
For example, the electronic device may determine the user closest to the electronic device as the target shooting object. Alternatively, the electronic device may determine the user farthest from the electronic device as the target shooting object.
For another example, the electronic device may obtain a plurality of face photos from its photo library and classify them according to the feature information extracted from each photo. For example, assume there are 20 face photos, of which 10 yield the same first feature information, another 7 yield the same second feature information, and the remaining 3 yield the same third feature information. That is, the first feature information corresponds to 10 face photos, the second to 7 and the third to 3: the number of face photos with the first feature information is the largest, the number with the second feature information is the next largest, and the number with the third feature information is the smallest. Assume the electronic device needs to select the target shooting object from 3 users. The electronic device may first perform image acquisition on the 3 users to obtain 3 face images, then extract the feature information of each face image to obtain 3 pieces of feature information, recorded as first, second and third target feature information. Next, the electronic device may detect whether each piece of target feature information matches one of the first, second and third feature information. If the first target feature information matches the first feature information, the second target feature information matches the second feature information and the third target feature information matches the third feature information, the electronic device may determine the user corresponding to the first target feature information as the target shooting object. If the first target feature information matches the first feature information while the second and third target feature information match none of the first, second and third feature information, the user corresponding to the first target feature information is likewise determined as the target shooting object. If at least two of the 3 pieces of target feature information, for example 2 of them, match the first, second or third feature information, the target shooting object is determined from the users corresponding to those 2 pieces of feature information, for example by taking any one of them as the target shooting object.
For another example, the electronic device may display a selection frame on the display screen to allow a user (a user holding the electronic device) to select which user is determined as the target photographic subject.
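A simplified sketch of the gallery-based selection strategy described above is given below, assuming a hypothetical face-embedding helper (extract_face_features) and an illustrative cosine-similarity match; the patent does not prescribe a specific face-recognition algorithm or matching threshold.

```python
import numpy as np

def select_target_by_gallery(candidate_faces, gallery_clusters):
    """Minimal sketch of flow 204's gallery-based selection.

    candidate_faces  : {user_id: face_image} for the users who spoke the keyword
    gallery_clusters : list of (cluster_feature, photo_count) pairs built from
                       the photo library, e.g. [(f1, 10), (f2, 7), (f3, 3)]
    Returns the candidate whose face matches the cluster with the most photos,
    or None if no candidate matches any cluster.
    """
    best_user, best_count = None, -1
    for user_id, face in candidate_faces.items():
        feat = extract_face_features(face)            # hypothetical embedding
        for cluster_feature, count in gallery_clusters:
            if is_match(feat, cluster_feature) and count > best_count:
                best_user, best_count = user_id, count
    return best_user

def is_match(feature_a, feature_b, threshold=0.6):
    """Cosine-similarity test between two feature vectors (illustrative
    threshold; the patent does not specify a matching criterion)."""
    a, b = np.asarray(feature_a, float), np.asarray(feature_b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))) >= threshold

def extract_face_features(face_image):
    # Placeholder for a face-embedding model (assumption).
    raise NotImplementedError
```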
205. The electronic device determines a relative position of the target photographic subject and the electronic device.
In the embodiments of the application, in order to determine the relative position of the target shooting object and the electronic device, the electronic device may receive the voice information of at least one user with a microphone array, that is, at least three microphones, when acquiring the at least one piece of voice information. After the electronic device determines the target shooting object, it may obtain the time at which each microphone received the target voice information corresponding to the target shooting object, and derive from these reception times the relative position of the target shooting object with respect to each microphone. The electronic device can then determine the relative position of the target shooting object and the device from its relative positions to the microphones, for example by calculating the position of the target voice relative to the electronic device with a plane-geometry algorithm. In the same plane, the set of distances from a point to three fixed points determines that point uniquely, and in this embodiment the number of microphones is three or more.
The relative position may be a relative position in the same plane, comprising a horizontal direction and a horizontal distance. Because the microphones are fixed on the electronic device, their positions on the device are known, the distances between them are known, and the propagation speed of the target voice in air is known; from these known conditions and the times at which the microphones received the target voice, the relative position of the target voice with respect to the electronic device, i.e., the relative position of the target shooting object and the electronic device, can be calculated. For example, since the relative positions of the microphones are known, the electronic device first calculates the relative position of the target shooting object with respect to each microphone from the reception times, then selects a reference point on the electronic device and, using the relative positions of the microphones and the reference point, obtains the relative position of the target shooting object with respect to that reference point, i.e., with respect to the electronic device.
The relative position may also be a relative position in three-dimensional space, comprising a spatial direction and a spatial distance. Since, in three-dimensional space, the set of distances from a point to four fixed points determines that point uniquely, a point in space can be located from any four fixed points. In this embodiment, the number of microphones may then be four or more.
It should be noted that the relative position between the target photographic object and the electronic device may also be determined in a manner not listed in the embodiment of the present application, which is not specifically limited by the embodiment of the present application.
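One common way to realize the geometric calculation described above is a time-of-arrival least-squares fit. The sketch below assumes SciPy is available and treats the emission time of the voice as an additional unknown; it is an illustration of this idea rather than the patent's exact algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s, assumed propagation speed of the voice in air

def locate_speaker(mic_positions, arrival_times):
    """Minimal sketch of flow 205: estimate the speaker's position in the
    plane from the times at which each microphone received the target voice.

    mic_positions : (N, 2) array of known microphone coordinates, N >= 3
    arrival_times : (N,) array of reception times for the target voice
    Returns the estimated (x, y) position relative to the device.

    Each microphone contributes the constraint
        arrival_time_i = t0 + ||p - mic_i|| / SPEED_OF_SOUND,
    where p is the speaker position and t0 the unknown emission time.
    """
    mic_positions = np.asarray(mic_positions, dtype=float)
    arrival_times = np.asarray(arrival_times, dtype=float)

    def residuals(params):
        x, y, t0 = params
        dists = np.linalg.norm(mic_positions - np.array([x, y]), axis=1)
        return t0 + dists / SPEED_OF_SOUND - arrival_times

    # Initial guess: centroid of the microphones, emission slightly before
    # the earliest reception.
    x0 = [*mic_positions.mean(axis=0), arrival_times.min() - 0.01]
    result = least_squares(residuals, x0)
    return result.x[:2]
```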
206. The electronic equipment adjusts the shooting angle of the first camera according to the relative position so that the target shooting object is located in the central area of the shooting area of the first camera.
After the relative position of the target shooting object and the electronic device is determined, the electronic device can adjust the shooting angle of the first camera according to the relative position. For example, the electronic device may calculate the angle and direction by which the first camera needs to rotate according to the relative position, and rotate the first camera accordingly to adjust its shooting angle, so that the target shooting object is located in the central area of the shooting area of the first camera, as shown in fig. 3.
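As an illustration of turning the relative position into a camera rotation, the following sketch computes pan and tilt angles from an assumed (x, y, z) relative position expressed in the camera's coordinate frame; how these angles drive the actual rotation mechanism is device-specific and not described by the patent.

```python
import math

def pan_tilt_towards(relative_position):
    """Minimal sketch of flow 206: derive the pan and tilt angles that point
    the first camera at the target shooting object, assuming
    relative_position = (x, y, z) with z pointing straight out of the camera,
    x to the right and y upward.

    Returns (pan_degrees, tilt_degrees).
    """
    x, y, z = relative_position
    pan = math.degrees(math.atan2(x, z))                  # rotate left/right
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))  # rotate up/down
    return pan, tilt

# Example: a subject 1 m to the right and 2 m in front needs roughly a
# 26.6 degree pan and no tilt.
print(pan_tilt_towards((1.0, 0.0, 2.0)))
```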
207. The electronic equipment shoots the target shooting object through the first camera, and sets the captured first image as the base image.
For example, when the electronic device determines that the user U1 is the target shooting object, the electronic device may shoot the user U1 once by the first camera, mark the image shot by the first camera as a first image, and set the first image as a base image.
In some embodiments, only the target photographic subject may be included in the first image, i.e., only the user U1. In other embodiments, the first image may include other people, objects, scenes, etc. besides the target shooting object, i.e. the user U1, and is not limited herein.
208. The electronic equipment adjusts the shooting angle of each second camera according to the relative position, so that the shooting area of each second camera is partially overlapped with the edge of the shooting area of the first camera.
After determining the relative position between the target shooting object and the electronic device, the electronic device may adjust the shooting angle of each second camera according to the relative position. For example, the electronic device may calculate an angle and a direction that each second camera needs to rotate according to the relative position, so as to rotate each second camera according to the angle and the direction that each second camera needs to rotate, so as to adjust a shooting angle of each second camera, so that a shooting area of each second camera overlaps with an edge portion of a shooting area of the first camera.
For example, the first camera is a standard-type camera, or a camera with a field of view of about 45 degrees, and the second cameras are telephoto-type cameras, or cameras with a field of view of less than 40 degrees. The electronic device includes one first camera and four second cameras: second camera A, second camera B, second camera C and second camera D. Referring to fig. 4 and fig. 5, after the shooting angles of the first camera and the second cameras are adjusted according to the relative position of the electronic device and the target shooting object, the axes of the second cameras tilt towards, and intersect with, the axis of the first camera, so that the target shooting object is located in the central area of the shooting area of the first camera, the shooting area a of second camera A corresponds to the upper left corner of the shooting area of the first camera, the shooting area b of second camera B corresponds to the upper right corner, the shooting area c of second camera C corresponds to the lower left corner, and the shooting area d of second camera D corresponds to the lower right corner. In this way, the shooting area of each second camera overlaps the edge of the shooting area of the first camera, the area where the shooting areas of any two second cameras overlap also overlaps the shooting area of the first camera, and the shooting areas of the four second cameras all overlap the central area of the shooting area of the first camera. Since the target shooting object is located in the central area of the shooting area of the first camera, it can be determined that the target shooting object is located in the area where the shooting area of the first camera and the shooting areas of the second cameras all overlap.
For example, as shown in fig. 6, assuming that a region where the shooting region of the first camera overlaps with the shooting regions of the second cameras is an overlap region m, the target object is in the overlap region m.
209. The electronic equipment shoots the target shooting object through the second cameras to obtain a plurality of second images.
In the embodiment of the application, after the shooting angle of each of the second cameras is adjusted according to the relative position, the electronic device can shoot a target shooting object through the second cameras to obtain a plurality of images correspondingly, and the images shot by the second cameras are recorded as second images, that is, a plurality of second images are obtained through shooting.
It should be noted that when the target shooting object is shot by the plurality of second cameras, the second cameras and the first camera shoot with the same image parameters (such as contrast and brightness), so that although the shooting area of each second camera is only part of the shooting area of the first camera, the second images have the same imaging effect as the first image.
For example, the electronic device includes four second cameras: second camera A, second camera B, second camera C and second camera D, whose shooting areas correspond to the upper left, upper right, lower left and lower right corners of the shooting area of the first camera respectively, and four second images are obtained by shooting with the four second cameras. As shown in fig. 7, the image content of the second image G1 shot by second camera A corresponds to the upper left corner of the base image, the image content of the second image G2 shot by second camera B corresponds to the upper right corner, the image content of the second image G3 shot by second camera C corresponds to the lower left corner, and the image content of the second image G4 shot by second camera D corresponds to the lower right corner. In this way, the image contents of the different second images cover different positions of the edge area of the base image, and the image content of each second image includes the target shooting object.
It should be noted that, in this embodiment of the present application, the order in which the first camera and the second cameras are adjusted and used for shooting (flows 206-207 and 208-209) is not limited: 206-207 may be executed first and then 208-209, 208-209 may be executed first and then 206-207, or the two groups may be executed simultaneously.
210. And the electronic equipment carries out image synthesis processing on the plurality of second images and the base image to obtain an imaged image.
In the embodiment of the application, after the electronic device obtains the base image through the shooting of the first camera and obtains the plurality of second images through the shooting of the plurality of second cameras, the plurality of shot second images are aligned with the base image.
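The alignment step itself is not detailed in the patent; one common approach is intensity-based (ECC) registration, sketched below with OpenCV (assumed available) using an affine motion model. The sketch assumes the second image has already been roughly mapped to the region of the base image it covers, so that only a small residual correction remains; a production implementation might use a different motion model or feature-based matching.

```python
import cv2
import numpy as np

def align_to_base(base_gray, second_gray):
    """Minimal sketch of aligning a second image to the base image before
    synthesis (flow 210).

    base_gray, second_gray : single-channel float32 arrays of the same size,
                             covering roughly the same scene region.
    Returns the second image warped into the base image's coordinates.
    """
    warp = np.eye(2, 3, dtype=np.float32)                       # initial guess
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(base_gray, second_gray, warp,
                                   cv2.MOTION_AFFINE, criteria)
    h, w = base_gray.shape
    return cv2.warpAffine(second_gray, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
```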
And calculating the average pixel value of each overlapped pixel point for the overlapped part of the base image and the second image based on the aligned base image and the second image. For example, the electronic device obtains four second images through four second cameras in addition to the base image through the first camera. Assume that the four second images are a second image G1, a second image G2, a second image G3, and a second image G4.
Referring to fig. 8, an overlapping area m1 where each second image and the base image overlap is located in a central area of the base image, i.e., an area where the target photographic object is located. That is, the second image G1, the second image G2, the second image G3, the second image G4, and the overlapping area m1 of the base image are located at the central area of the base image. Thus, for the overlapping area m1 shown in fig. 8, the pixel values of the pixel point at a certain position in the five images (i.e., the base image and the four second images) are 0.8, 0.9, 1.1, 1.2, and 1, respectively, and then the average pixel value of the pixel point at the position can be calculated to be 1.
With continued reference to fig. 8, the overlapping area of the second image G1, the second image G2, and the base image is an overlapping area m2, the overlapping area of the second image G1, the second image G3, and the base image is an overlapping area m3, the overlapping area of the second image G2, the second image G4, and the base image is an overlapping area m4, and the overlapping area of the second image G3, the second image G4, and the base image is an overlapping area m 5. For the overlapping area m2 shown in fig. 8, the pixel values of the pixel point at a certain position in the second image G1, the second image G2 and the base image are 0.8, 0.9 and 1, respectively, and then the average pixel value of the pixel point at the position can be calculated to be 0.9.
With continued reference to FIG. 8, the overlapping area of the second image G1 and the base image is overlapping area m6, the overlapping area of the second image G2 and the base image is overlapping area m7, the overlapping area of the second image G3 and the base image is overlapping area m8, and the overlapping area of the second image G4 and the base image is overlapping area m 9. For the overlap area m6 shown in fig. 8, if the pixel values of the pixel point at a certain position in the second image G1 and the base image are 0.8 and 1, respectively, the average pixel value of the pixel point at the position can be calculated to be 0.9.
Then, the electronic device obtains a synthesized image from the average pixel values calculated for the corresponding pixel points in the base image. For example, the electronic device may adjust the pixel values of the base image to the calculated average pixel values to obtain the imaged image; alternatively, the electronic device may generate a new image, i.e., the imaged image, from the calculated average pixel values. In the imaged image, the sharpness of the central area, i.e., the area where the target shooting object is located, is the highest; the sharpness of the areas where two second images overlap each other and the base image is the next highest; and the sharpness of the areas where only one second image overlaps the base image is the lowest. Nevertheless, the overall sharpness of the imaged image is higher than that of the base image.
In the embodiments of the application, after the electronic device performs image synthesis processing on the plurality of captured second images and the base image to obtain the imaged image, the electronic device has completed one complete shooting operation.
For example, referring to fig. 9, fig. 9 shows how sharpness changes from the base image to the imaged image. The X axis represents the position from the edge area of the image to the central area and back to the edge area, and the Y axis represents the sharpness at that position. It can be seen that in the base image the sharpness of the central area is the highest and, moving from the central area towards the edges, the sharpness decreases gradually and falls off sharply. In the imaged image the sharpness of the central area is also the highest, but compared with the base image the sharpness of the edge areas is improved overall, and although the sharpness still decreases from the centre towards the edges, the change is smoother, so that the overall image quality of the imaged image is improved.
As shown in fig. 10, in some embodiments, flow 204 may include:
2041. the electronic equipment performs image acquisition operation on each user to obtain at least two face images, wherein each face image corresponds to one user.
2042. The electronic equipment extracts the feature information of each face image to obtain at least two pieces of feature information.
2043. The electronic equipment determines target characteristic information matched with the preset characteristic information from the at least two pieces of characteristic information.
2044. And the electronic equipment determines the user corresponding to the target characteristic information as a target shooting object.
After determining the user corresponding to each target voice message and obtaining at least two users, the electronic device may perform image acquisition operation on each user to obtain at least two face images. Each face image corresponds to a user. Then, the electronic device may extract feature information of each face image to obtain at least two pieces of feature information. Then, the electronic device detects whether each piece of feature information matches preset feature information, and determines the feature information matching the preset feature information as target feature information. And finally, the electronic equipment determines the user corresponding to the target characteristic information as a target shooting object.
For example, there may be other unrelated people or objects in the shooting scene, and if the other unrelated people in the shooting scene happen to speak the preset keyword together with the target shooting object, the electronic device may be caused to erroneously determine the other people as the target shooting object. In order to avoid the situation, the user can store the feature information corresponding to the face photos of the user or other friends in advance, and set the feature information as the preset feature information, so that when other unrelated people happen to speak the preset keywords together with the target shooting object in the shooting scene, the situation that the electronic equipment mistakenly determines other unrelated people as the target shooting object can be avoided.
It can be understood that, in the embodiment of the present application, when a preset keyword is detected in the acquired voice information, the electronic device receives an image capturing request for a target capturing object while determining that a user corresponding to the voice information is the target capturing object. That is, when the preset keyword is detected in the acquired voice information, the electronic device performs one-time shooting on the target shooting object. Therefore, it is important to avoid a situation where the electronic device erroneously determines other unrelated persons as the target photographic subject. Based on this, in the embodiment of the application, when the preset keyword is detected in the acquired voice information, the electronic device may determine the user corresponding to the voice information. Then, the electronic equipment performs image acquisition operation on the user to obtain a face image. Then, the electronic equipment extracts the feature information of the face image. Subsequently, the electronic device detects whether the feature information matches preset feature information. Finally, if the characteristic information is matched with the preset characteristic information, the electronic equipment can determine the user as the target shooting object.
In one embodiment, the electronic device includes two first cameras, and the electronic device adjusts a shooting angle of the first cameras according to the relative position so that a target shooting object is located in a central area of a shooting area of the first cameras, including:
the electronic equipment adjusts the shooting angles of the two first cameras according to the relative positions so that the target shooting object is located in the central area of the shooting areas of the two first cameras.
The electronic device shoots a target shooting object through a first camera, and sets a shot first image as a base image, and the electronic device comprises:
(1) the electronic equipment shoots a target shooting object through two first cameras to obtain at least two first images;
(2) the electronic device performs image synthesis processing on at least two first images, and sets the synthesized images as base images.
In an embodiment of the application, the electronic device includes two first cameras of the standard type. For example, referring to fig. 11, the electronic device includes two first cameras, namely a first camera E and a first camera F, and the first camera E is surrounded by four second cameras.
After the shooting angles of the two first cameras are adjusted according to the relative positions so that the target shooting object is located in the central area of the shooting areas of the two first cameras, the electronic equipment can shoot the target shooting object through the two first cameras to obtain at least two first images with the same image content. And then, carrying out image synthesis processing on at least two first images, and setting the synthesized image as a base image.
When the electronic device performs image synthesis processing on the at least two first images, it aligns the at least two first images, calculates the average pixel value of each pixel point where the at least two first images overlap, obtains a synthesized image of the at least two first images from the calculated average pixel values, and sets the synthesized image as the base image.
Compared with directly setting a first image shot by the first camera as the base image, this embodiment of the application obtains a base image with higher sharpness, so that the finally obtained imaged image also has higher sharpness.
In one embodiment, the "electronic device captures a target capture object through a first camera, and sets a captured first image as a base image", including:
(1) the electronic equipment continuously shoots a target shooting object through a first camera to obtain a plurality of first images;
(2) the electronic device performs image synthesis processing on the plurality of first images, and sets the synthesized image as a base image.
In the embodiment of the application, after the shooting angle of the first camera is adjusted according to the relative position so that the target shooting object is located in the central area of the shooting area of the first camera, the electronic device can continuously shoot the target shooting object through the first camera to obtain a plurality of first images. The electronic equipment can shoot the target shooting object through the first camera within unit time according to the set shooting frame rate, so that continuous shooting of the target shooting object is achieved. For example, assuming that the shooting frame rate of the first camera is 15FPS, the electronic device will shoot 15 images of the target shooting object within 1 second of the unit time, and since the images all correspond to the same target shooting object and the interval of the shooting time between the images is small, the image contents of the images can be regarded as the same.
After the plurality of first images of the target shooting object are obtained, the electronic equipment selects the first image with the highest sharpness among them, aligns the other first images with that image, calculates the average pixel value of each pixel point where the first images overlap, obtains a synthesized image of the plurality of first images from the calculated average pixel values, and sets the synthesized image as the base image.
Compared with directly setting a single first image shot by the first camera as the base image, the embodiment of the application obtains a base image with higher definition, so that the finally obtained imaging image also has higher definition.
Optionally, after the electronic device continuously captures a target capture object through the first camera to obtain a plurality of first images, the method further includes:
the electronic equipment selects an image with the highest definition from the plurality of first images obtained by shooting as a base image, and the base image is used for carrying out image synthesis processing with a second image obtained by shooting by a second camera so as to obtain an imaging image.
Generally, the sharper the image, the higher its contrast. Therefore, the contrast of an image can be used to measure the sharpness of the image.
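The patent does not name a particular contrast metric; one common choice is RMS contrast, the standard deviation of the normalized intensities. A minimal sketch, assuming the continuously captured frames are available as grayscale NumPy arrays:

```python
import numpy as np

def rms_contrast(gray):
    """RMS contrast: standard deviation of intensities normalized to [0, 1]."""
    return float((gray.astype(np.float32) / 255.0).std())

def pick_sharpest(gray_frames):
    """Return the index of the frame with the highest contrast,
    used here as a stand-in for definition (sharpness)."""
    return int(np.argmax([rms_contrast(f) for f in gray_frames]))
```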
In an embodiment, the electronic device further includes an electrochromic assembly covering the first camera and/or the second camera; before the electronic device acquires the at least one piece of voice information while in the preview interface of the shooting application, the method further includes:
the electronic device switches the electrochromic component to a transparent state;
after the electronic device performs image synthesis processing on the plurality of second images and the base image to obtain the imaging image, the method further includes:
the electronic device switches the electrochromic assembly to a colored state to hide the first camera and/or the second camera.
In the embodiment of the application, in order to improve the integrated appearance of the electronic device, the electrochromic assembly covers the first camera and/or the second camera, so that the cameras can be hidden by the electrochromic assembly when needed.
The operating principle of the electrochromic assembly will first be briefly described below.
Electrochromism refers to the phenomenon that the color/transparency of a material is changed stably and reversibly under the action of an applied electric field. Materials with electrochromic properties may be referred to as electrochromic materials. The electrochromic component in the embodiment of the present application is made of electrochromic materials.
The electrochromic assembly can include two conductive layers arranged in a stacked manner, and a color-changing layer, an electrolyte layer and an ion storage layer arranged between the two conductive layers. For example, when no voltage (0 V) is applied to the two transparent conductive layers of the electrochromic assembly, the assembly is in a transparent state; when the voltage applied between the two transparent conductive layers is changed from 0 V to 3 V, the assembly turns black; when the voltage applied between the two transparent conductive layers is changed from 3 V to -3 V, the assembly changes from black back to a transparent state; and so on.
In this way, the first camera and/or the second camera can be hidden by utilizing the characteristic of adjustable color of the electrochromic assembly.
In the embodiment of the application, the electronic device can switch the electrochromic assembly covering the first camera and/or the second camera to a transparent state when the shooting type application is started, so that the first camera and the second camera can shoot a target shooting object.
After the base image is acquired through the first camera, the plurality of second images are obtained through the plurality of second cameras and finally synthesized to obtain the imaging image, and the started shooting application exits, the electronic device switches the electrochromic assembly to a colored state, so that the first camera and/or the second camera are hidden.
For example, the electronic device is provided with an electrochromic assembly that covers both the first camera and the second cameras, and the side of the electronic device on which the first camera and the second cameras are arranged is black. When the electronic device has not started the shooting application, the electrochromic assembly is kept in a black colored state, so that the first camera and the second cameras are hidden; when the shooting application is started, the electrochromic assembly is switched to a transparent state, so that the electronic device can shoot through the first camera and the second cameras; and after the imaging image is finally synthesized and the started shooting application exits, the electronic device switches the electrochromic assembly back to the black colored state, so that the first camera and the second cameras are hidden again.
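The following sketch ties the two electrochromic states to the shooting-application lifecycle. The voltages follow the illustrative 0 V / 3 V example above, and set_drive_voltage is a hypothetical stand-in for the device-specific drive electronics, not an API defined by the patent:

```python
TRANSPARENT_V = 0.0  # illustrative voltage for the transparent state
COLORED_V = 3.0      # illustrative voltage for the black (colored) state

class ElectrochromicCover:
    """Hypothetical controller for the assembly covering the cameras."""

    def __init__(self, set_drive_voltage):
        # set_drive_voltage(volts) is assumed to be supplied by the
        # device driver; it is a placeholder, not part of the patent.
        self._set_voltage = set_drive_voltage

    def on_camera_app_started(self):
        self._set_voltage(TRANSPARENT_V)  # reveal the cameras for shooting

    def on_camera_app_exited(self):
        self._set_voltage(COLORED_V)      # hide the cameras again
```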
In an embodiment, before the step of the electronic device shooting a target shooting object through a first camera and setting a first shot image as a base image, the method further includes:
(1) the electronic equipment detects whether the electronic equipment is in a shaking state currently;
(2) if the electronic equipment is not in the shake state at present, shooting a target shooting object through the first camera according to an image shooting request, and setting a first image obtained through shooting as a base image.
According to the description in the above embodiments, the images captured by different cameras are finally combined to obtain the imaged image, and if the electronic device is in a shake state during the shooting process, the image contents of the images captured by different cameras are obviously different, which affects the combining effect of the imaged images.
Therefore, in the embodiment of the present application, before the electronic device captures the target shooting object through the first camera and sets the captured first image as the base image, it first determines whether the electronic device is currently in a shake state. The electronic device may determine the shake state in a plurality of different manners. For example, the electronic device may determine whether its current speed in each direction is smaller than a preset speed; if so, it determines that it is not in a shake state (that is, it is in a stable state), and if not, it determines that it is in a shake state. For another example, the electronic device may determine whether its current displacement in each direction is smaller than a preset displacement; if so, it determines that it is not in a shake state, and if not, it determines that it is in a shake state. In addition, the shake state may also be determined in manners not listed in the embodiments of the present application, which is not specifically limited by the embodiments of the present application.
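A minimal sketch of the threshold comparison described above, assuming per-axis speed and displacement readings are already available (for example from the device's motion sensors); the threshold values below are placeholders, not values from the patent:

```python
def is_shaking(speeds, displacements, max_speed=0.02, max_displacement=0.001):
    """Treat the device as shaking if the speed or displacement along
    any axis reaches its preset threshold (placeholder units and values)."""
    if any(abs(v) >= max_speed for v in speeds):
        return True
    if any(abs(d) >= max_displacement for d in displacements):
        return True
    return False
```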
When it is determined that the electronic device is not in a shake state, the electronic device captures the target shooting object through the first camera, and sets the captured first image as the base image used for synthesizing the imaging image.
In an embodiment, before the step of the electronic device shooting a target shooting object through a first camera and setting a first shot image as a base image, the method further includes:
(1) if the electronic device is not currently in the shake state, the electronic device detects whether the target shooting object is in a still state;
(2) if the target shooting object is in a static state, the electronic equipment shoots the target shooting object through the first camera, and a first image obtained through shooting is set as a base image.
From the above description, it can be understood by those skilled in the art that, in the case that the electronic device is not in a shake state, if the target photographic object is not in a still state (for example, the target photographic object includes a moving object), the image content of the image obtained by the electronic device through the first camera and the second camera may have a large difference.
Therefore, in this embodiment of the application, when determining that the electronic device is not in the shake state at present, the electronic device does not immediately shoot the target photographic object through the first camera, but further detects whether the target photographic object is in the still state, and if it is detected that the target photographic object is in the still state, shoots the target photographic object through the first camera according to the image shooting request, and sets the shot first image as the base image to synthesize the obtained imaging image, which may specifically refer to the related description in the above embodiment, and is not described herein again.
In this embodiment, a person skilled in the art can select an appropriate manner to determine whether the target photographic object is in the still state according to actual needs, which is not specifically limited in this application, for example, an optical flow method, a residual method, or the like can be used to determine whether the target photographic object is in the still state.
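As one possible realization of the optical flow approach mentioned above (a sketch only, with a placeholder threshold), two consecutive grayscale preview frames can be compared using OpenCV's dense Farneback optical flow:

```python
import cv2
import numpy as np

def subject_is_still(prev_gray, curr_gray, flow_threshold=0.5):
    """Treat the scene as still if the mean dense-optical-flow magnitude
    between two consecutive preview frames is below a small threshold."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    return float(magnitude.mean()) < flow_threshold
```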
In one embodiment, the first camera and the second camera share an image sensor.
For example, referring to fig. 12, the first camera and the second camera share the same image sensor, and the first camera (lens portion) and the second camera (lens portion) can project external light to different portions of the image sensor in a time-sharing manner, so as to capture an external object.
Compared with the prior art in which each of multiple cameras uses its own image sensor, sharing one image sensor among multiple cameras in the embodiment of the application reduces the space occupied.
Referring to fig. 13, fig. 13 is a schematic diagram of a third flowchart of an apparatus imaging method according to an embodiment of the present application, where the flowchart may include:
301. when the electronic equipment is in a preview interface of the shooting application, the electronic equipment acquires at least one voice message.
302. The electronic equipment determines target voice information with preset keywords from at least one voice information, and determines a user corresponding to the target voice information as a target shooting object.
303. The electronic equipment shoots a target shooting object through the first camera, and a first image obtained through shooting is set as a base image.
The processes 301 to 303 are the same as or corresponding to the processes 101 to 103, and are not described herein again.
304. The electronic device determines a relative position of the target photographic subject and the electronic device.
The process 304 is the same as or corresponding to the process 205, and is not described herein again.
305. The electronic equipment adjusts the shooting angle of each second camera according to the relative position so that the target shooting object is in an overlapping area of the shooting areas of the plurality of second cameras.
After determining the relative position between the target shooting object and the electronic device, the electronic device may adjust the shooting angle of each second camera according to the relative position. For example, the electronic device may calculate an angle and a direction that each second camera needs to rotate according to the relative position, so as to rotate each second camera according to the angle and the direction that each second camera needs to rotate, so as to adjust a shooting angle of each second camera, so that the target shooting object is located in an overlapping area of shooting areas of the plurality of second cameras.
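A simplified geometric sketch of how the rotation angle and direction might be derived from the relative position, assuming the subject and camera positions are expressed as (x, y, z) coordinates in a shared device frame; the patent does not fix a particular formula:

```python
import math

def pan_tilt_towards(subject_pos, camera_pos):
    """Return (pan, tilt) in degrees that point the camera at the subject."""
    dx = subject_pos[0] - camera_pos[0]
    dy = subject_pos[1] - camera_pos[1]
    dz = subject_pos[2] - camera_pos[2]
    pan = math.degrees(math.atan2(dx, dz))                   # left/right rotation
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # up/down rotation
    return pan, tilt
```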
For example, as shown in fig. 14, it is assumed that the target photographic subject is in the area m11, and the overlapping area of the shooting areas of the plurality of second cameras is the area m11. It is understood that the shooting area of the first camera also includes the area m11, i.e., the target photographic subject is in the shooting area of the first camera. That is, the region where the target photographic subject is located, namely the area m11, is the region where the shooting area of the first camera and the shooting areas of the plurality of second cameras overlap.
306. The electronic equipment shoots the target shooting object through the second cameras to obtain a plurality of second images.
307. And the electronic equipment carries out image synthesis processing on the plurality of second images and the substrate image to obtain an imaging image.
The processes 306 and 307 are the same as or corresponding to the processes 104 and 105 described above, and are not described herein again.
Referring to fig. 15, fig. 15 is a schematic structural diagram of an imaging device according to an embodiment of the present disclosure. The equipment imaging device is applied to electronic equipment, and the electronic equipment comprises a first camera of a first type and a plurality of second cameras of a second type, wherein the shooting area of each second camera and the shooting area of each first camera have an overlapping part. The apparatus imaging device 400 includes: a first obtaining module 401, a determining module 402, a second obtaining module 403, a third obtaining module 404 and a synthesizing module 405.
The first obtaining module 401 is configured to obtain at least one piece of voice information when the device is in a preview interface of a shooting application.
A determining module 402, configured to determine, from the at least one piece of voice information, target voice information with a preset keyword, and determine a user corresponding to the target voice information as a target shooting object.
A second obtaining module 403, configured to take a picture of the target shooting object through the first camera, and set a first image obtained through shooting as a base image.
A third obtaining module 404, configured to take a picture of the target photographic object through the plurality of second cameras to obtain a plurality of second images.
And a synthesizing module 405, configured to perform image synthesis processing on the plurality of second images and the base image to obtain an imaging image.
Referring to fig. 16, in some embodiments, the second obtaining module 403 may include:
a first determining submodule 4031 configured to determine a relative position of the target photographic object and the electronic device.
A first adjusting submodule 4032, configured to adjust a shooting angle of the first camera according to the relative position, so that the target shooting object is located in a central area of a shooting area of the first camera.
The first obtaining submodule 4033 is configured to capture the target capture object by using the first camera, and set a first image obtained by the capture as a base image.
The third obtaining module 404 may include:
a second adjusting submodule 4041, configured to adjust a shooting angle of each second camera according to the relative position, so that a shooting area of each second camera overlaps with an edge portion of a shooting area of the first camera.
The second obtaining sub-module 4042 is configured to capture the target capture object by the plurality of second cameras to obtain a plurality of second images.
Referring also to fig. 17, in some embodiments, the third obtaining module 404 may include:
a second determining sub-module 4043, configured to determine a relative position between the target photographic object and the electronic device.
The second adjusting submodule 4044 is configured to adjust the shooting angle of each second camera according to the relative position, so that the target shooting object is located in an overlapping area of the shooting areas of the multiple second cameras.
The third obtaining sub-module 4045 is configured to capture the target capture object by the plurality of second cameras to obtain a plurality of second images.
In some embodiments, the determining module 402 may be configured to: determining target voice information with preset keywords from the at least one voice information; if at least two target voice messages with preset keywords exist, determining a user corresponding to each target voice message to obtain at least two users; a target photographic subject is selected from at least two users.
In some embodiments, the determining module 402 may be configured to: performing image acquisition operation on each user to obtain at least two face images, wherein each face image corresponds to one user; extracting the feature information of each face image to obtain at least two pieces of feature information; determining target characteristic information matched with preset characteristic information from the at least two pieces of characteristic information; and determining the user corresponding to the target characteristic information as a target shooting object.
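A hedged sketch of the matching step performed by the determining module: the face-recognition model, the feature vectors, and the similarity threshold below are all assumptions for illustration, since the patent only states that target feature information is matched against preset feature information:

```python
import numpy as np

def select_target_user(user_features, preset_feature, threshold=0.6):
    """user_features: dict mapping each candidate user to a feature vector
    extracted from that user's face image. Returns the user whose features
    best match the preset features, or None if no similarity exceeds the
    (placeholder) threshold."""
    best_user, best_score = None, -1.0
    preset = np.asarray(preset_feature, dtype=np.float32)
    for user, feat in user_features.items():
        feat = np.asarray(feat, dtype=np.float32)
        score = float(np.dot(feat, preset) /
                      (np.linalg.norm(feat) * np.linalg.norm(preset)))
        if score > best_score:
            best_user, best_score = user, score
    return best_user if best_score >= threshold else None
```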
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed on a computer, the computer is caused to execute the flow in the imaging method of the apparatus provided by the embodiment.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the processor is configured to execute the flow in the device imaging method provided in this embodiment by calling the computer program stored in the memory.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 18, fig. 18 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Referring to fig. 18, the electronic device includes a processor 501, a memory 502, a first camera 503 of a first type, and a plurality of second cameras 504 of a second type. The processor 501 is electrically connected to the memory 502, the first camera 503 and the second camera 504.
The processor 501 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by running or loading a computer program stored in the memory 502, and calling data stored in the memory 502.
The memory 502 may be used to store software programs and modules, and the processor 501 executes various functional applications and data processing by running the computer programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, a computer program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 501 with access to the memory 502.
The first camera 503 is a standard type camera, or a camera with a field angle of about 45 degrees.
The second camera 504 is a telephoto type camera, or a camera with a field angle of 40 degrees or less.
In this embodiment of the present application, the processor 501 in the electronic device loads instructions corresponding to one or more processes of the computer program into the memory 502 according to the following steps, and the processor 501 runs the computer program stored in the memory 502, so as to implement various functions, as follows:
when the mobile terminal is in a preview interface of a shooting application, acquiring at least one voice message;
determining target voice information with preset keywords from the at least one voice information, and determining a user corresponding to the target voice information as a target shooting object;
shooting the target shooting object through the first camera 503, and setting a first shot image as a base image;
shooting the target shooting object through the plurality of second cameras 504 to obtain a plurality of second images;
and carrying out image synthesis processing on the plurality of second images and the substrate image to obtain an imaging image.
Referring to fig. 19, fig. 19 is a second schematic structural diagram of an electronic device according to an embodiment of the present disclosure, and the difference between the second schematic structural diagram and the electronic device shown in fig. 18 is that the electronic device further includes components such as an input unit 505 and an output unit 506.
The input unit 505 may be used to receive input numbers, character information, or user characteristic information (such as a fingerprint), and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The output unit 506, such as a screen, may be used to display information input by the user or information provided to the user.
In this embodiment of the present application, the processor 501 in the electronic device loads instructions corresponding to one or more processes of the computer program into the memory 502 according to the following steps, and the processor 501 runs the computer program stored in the memory 502, so as to implement various functions, as follows:
when the mobile terminal is in a preview interface of a shooting application, acquiring at least one voice message;
determining target voice information with preset keywords from the at least one voice information, and determining a user corresponding to the target voice information as a target shooting object;
shooting the target shooting object through the first camera 503, and setting a first shot image as a base image;
shooting the target shooting object through the plurality of second cameras 504 to obtain a plurality of second images;
and carrying out image synthesis processing on the plurality of second images and the substrate image to obtain an imaging image.
In an embodiment, when the target object is captured by the first camera 503 and a captured first image is set as a base image, the processor 501 executes: determining the relative position of the target shooting object and the electronic equipment; adjusting the shooting angle of the first camera 503 according to the relative position, so that the target shooting object is located in the central area of the shooting area of the first camera 503; shooting the target shooting object through the first camera 503, and setting a first shot image as a base image; when the second cameras 504 capture the target object to obtain a plurality of second images, the processor 501 performs: adjusting the shooting angle of each second camera 504 according to the relative position, so that the shooting area of each second camera 504 is partially overlapped with the edge of the shooting area of the first camera 503; the target photographic subject is photographed by the plurality of second cameras 504, and a plurality of second images are obtained.
In an embodiment, when the second cameras 504 capture the target object to obtain a plurality of second images, the processor 501 performs: determining the relative position of the target shooting object and the electronic equipment; adjusting the shooting angle of each second camera 504 according to the relative position so that the target shooting object is in an overlapping area of the shooting areas of the plurality of second cameras 504; the target photographic subject is photographed by the plurality of second cameras 504, and a plurality of second images are obtained.
In an embodiment, when determining, from the voice information of the at least one user, that target voice information with a preset keyword exists, and determining a user corresponding to the target voice information as a target shooting object, the processor 501 executes: determining target voice information with preset keywords from the at least one voice information; if at least two target voice messages with preset keywords exist, determining a user corresponding to each target voice message to obtain at least two users; a target photographic subject is selected from at least two users.
In one embodiment, when selecting a target photographic subject from at least two users, the processor 501 performs: performing image acquisition operation on each user to obtain at least two face images, wherein each face image corresponds to one user; extracting the feature information of each face image to obtain at least two pieces of feature information; determining target characteristic information matched with preset characteristic information from the at least two pieces of characteristic information; and determining the user corresponding to the target characteristic information as a target shooting object.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the device imaging method, and are not described herein again.
The device imaging apparatus provided in the embodiment of the present application and the device imaging method in the above embodiments belong to the same concept, and any method provided in the device imaging method embodiment may be run on the device imaging apparatus, and a specific implementation process thereof is described in the device imaging method embodiment in detail, and is not described herein again.
It should be noted that, for the apparatus imaging method described in the embodiment of the present application, it can be understood by those skilled in the art that all or part of the process of implementing the apparatus imaging method described in the embodiment of the present application may be completed by controlling the relevant hardware through a computer program, where the computer program may be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor, and during the execution, the process of the embodiment of the apparatus imaging method may be included. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
For the imaging device of the present application, each functional module may be integrated into one processing chip, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The above detailed description is provided for the imaging method, the imaging device, the storage medium, and the electronic device of the device provided in the embodiments of the present application, and specific examples are applied herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only used to help understand the method and the core ideas of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (8)

1. A device imaging method, applied to an electronic device, characterized in that the electronic device includes two first cameras of a first type, one of the first cameras being surrounded by four second cameras of a second type, a shooting area of the first camera is larger than a shooting area of the second camera, an overlapping portion exists between the shooting area of the second camera and the shooting area of the first camera, the first camera and the second camera share an image sensor, the first camera and the second camera project external light to different portions of the image sensor in a time-sharing manner to realize shooting of a target shooting object, and the first camera and the second camera adopt the same image parameters, the method comprising:
when the mobile terminal is in a preview interface of a shooting application, acquiring at least one voice message;
determining target voice information with preset keywords from the at least one voice information, and determining a user corresponding to the target voice information as a target shooting object;
determining the relative position of the target shooting object and the electronic equipment;
adjusting the shooting angle of the first camera according to the relative position so as to enable the target shooting object to be located in the central area of the shooting area of the first camera;
when the electronic device is not currently in a shake state and the target shooting object is in a still state, shooting the target shooting object through the two first cameras to obtain at least two first images;
performing image synthesis processing on the at least two first images, and setting the synthesized images as base images;
adjusting the shooting angle of each second camera according to the relative position so that the shooting area of each second camera is partially overlapped with different edges of the shooting area of the first camera, the shooting areas of any two second cameras have overlapped parts, and partial areas of the shooting area of each second camera are overlapped with the central area of the shooting area of the first camera;
shooting the target shooting object through the plurality of second cameras to obtain a plurality of second images;
and carrying out image synthesis processing on the plurality of second images and the substrate image to obtain an imaging image.
2. The device imaging method according to claim 1, wherein said capturing the target photographic subject by the plurality of second cameras resulting in a plurality of second images comprises:
determining the relative position of the target shooting object and the electronic equipment;
adjusting the shooting angle of each second camera according to the relative position so as to enable the target shooting object to be in an overlapping area of shooting areas of the plurality of second cameras;
and shooting the target shooting object through the plurality of second cameras to obtain a plurality of second images.
3. The device imaging method according to claim 1, wherein the determining, from the at least one voice message, a target voice message with a preset keyword, and determining a user corresponding to the target voice message as a target shooting object comprises:
determining target voice information with preset keywords from the at least one voice information;
if at least two target voice messages with preset keywords exist, determining a user corresponding to each target voice message to obtain at least two users;
a target photographic subject is selected from at least two users.
4. The device imaging method according to claim 3, wherein said selecting a target photographic subject from at least two users comprises:
performing image acquisition operation on each user to obtain at least two face images, wherein each face image corresponds to one user;
extracting the feature information of each face image to obtain at least two pieces of feature information;
determining target characteristic information matched with preset characteristic information from the at least two pieces of characteristic information;
and determining the user corresponding to the target characteristic information as a target shooting object.
5. A device imaging apparatus, applied to an electronic device, characterized in that the electronic device includes two first cameras of a first type, one of the first cameras being surrounded by four second cameras of a second type, a shooting area of the first camera is larger than a shooting area of the second camera, an overlapping portion exists between the shooting area of the second camera and the shooting area of the first camera, the first camera and the second camera share an image sensor, the first camera and the second camera project external light to different portions of the image sensor in a time-sharing manner to realize shooting of a target shooting object, and the first camera and the second camera adopt the same image parameters, the apparatus comprising:
the device comprises a first acquisition module, a second acquisition module and a display module, wherein the first acquisition module is used for acquiring at least one voice message when the device is in a preview interface of a shooting application;
the determining module is used for determining target voice information with preset keywords from the at least one voice information and determining a user corresponding to the target voice information as a target shooting object;
a second acquisition module comprising:
the first determining submodule is used for determining the relative position of the target shooting object and the electronic equipment;
the first adjusting submodule is used for adjusting the shooting angle of the first camera according to the relative position so as to enable the target shooting object to be located in the central area of the shooting area of the first camera;
the first acquisition sub-module is used for shooting the target shooting object through the two first cameras to obtain at least two first images when the electronic device is not currently in a shake state and the target shooting object is in a still state; and performing image synthesis processing on the at least two first images, and setting the synthesized image as a base image;
a third acquisition module comprising:
the second adjusting submodule is used for adjusting the shooting angle of each second camera according to the relative position so that the shooting area of each second camera is partially overlapped with the edge of the shooting area of the first camera, the shooting areas of any two second cameras have overlapped parts, and the partial area of the shooting area of each second camera is overlapped with the central area of the shooting area of the first camera;
the second acquisition sub-module is used for shooting the target shooting object through the plurality of second cameras to obtain a plurality of second images;
and the synthesis module is used for carrying out image synthesis processing on the plurality of second images and the substrate image to obtain an imaging image.
6. The device imaging apparatus of claim 5, wherein the third acquisition module comprises:
the second determining submodule is used for determining the relative position of the target shooting object and the electronic equipment;
the second adjusting submodule is used for adjusting the shooting angle of each second camera according to the relative position so as to enable the target shooting object to be in the overlapping area of the shooting areas of the plurality of second cameras;
and the third acquisition sub-module is used for shooting the target shooting object through the plurality of second cameras to obtain a plurality of second images.
7. A storage medium having stored therein a computer program which, when run on a computer, causes the computer to execute the apparatus imaging method of any one of claims 1 to 4.
8. An electronic device, characterized in that the electronic device comprises a processor and a memory, a first camera of a first type and a plurality of second cameras of a second type, the shooting area of the second cameras and the shooting area of the first cameras have overlapping parts, a computer program is stored in the memory, and the processor is used for executing the device imaging method according to any one of claims 1 to 4 by calling the computer program stored in the memory.
CN201910578474.7A 2019-06-28 2019-06-28 Device imaging method and device, storage medium and electronic device Active CN110213492B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910578474.7A CN110213492B (en) 2019-06-28 2019-06-28 Device imaging method and device, storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910578474.7A CN110213492B (en) 2019-06-28 2019-06-28 Device imaging method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN110213492A CN110213492A (en) 2019-09-06
CN110213492B true CN110213492B (en) 2021-03-02

Family

ID=67795506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910578474.7A Active CN110213492B (en) 2019-06-28 2019-06-28 Device imaging method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN110213492B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110290324B (en) * 2019-06-28 2021-02-02 Oppo广东移动通信有限公司 Device imaging method and device, storage medium and electronic device
CN110740259B (en) * 2019-10-21 2021-06-25 维沃移动通信有限公司 Video processing method and electronic equipment
CN114374815B (en) * 2020-10-15 2023-04-11 北京字节跳动网络技术有限公司 Image acquisition method, device, terminal and storage medium
CN112637489A (en) * 2020-12-18 2021-04-09 努比亚技术有限公司 Image shooting method, terminal and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012249070A (en) * 2011-05-27 2012-12-13 Canon Inc Imaging apparatus and imaging method
CN108737719A (en) * 2018-04-04 2018-11-02 深圳市冠旭电子股份有限公司 Camera filming control method, device, smart machine and storage medium
JP2019029978A (en) * 2017-08-04 2019-02-21 株式会社カーメイト Camera, imaging system, and image processing method
JP2019080243A (en) * 2017-10-26 2019-05-23 キヤノン株式会社 Imaging apparatus, method for controlling imaging apparatus, and program

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107509017A (en) * 2017-09-19 2017-12-22 信利光电股份有限公司 A kind of multi-cam module


Also Published As

Publication number Publication date
CN110213492A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110213493B (en) Device imaging method and device, storage medium and electronic device
CN110213492B (en) Device imaging method and device, storage medium and electronic device
CN110290324B (en) Device imaging method and device, storage medium and electronic device
KR102187146B1 (en) Dual-aperture zoom digital camera with automatic adjustable tele field of view
CN107580178B (en) Image processing method and device
US9036072B2 (en) Image processing apparatus and image processing method
CN110225256B (en) Device imaging method and device, storage medium and electronic device
CN109313799B (en) Image processing method and apparatus
CN110290299B (en) Imaging method, imaging device, storage medium and electronic equipment
CN110166680B (en) Device imaging method and device, storage medium and electronic device
WO2020259445A1 (en) Device imaging method and apparatus, storage medium, and electronic device
CN110636276B (en) Video shooting method and device, storage medium and electronic equipment
US20210127059A1 (en) Camera having vertically biased field of view
CN112887609B (en) Shooting method and device, electronic equipment and storage medium
CN113973190A (en) Video virtual background image processing method and device and computer equipment
CN105578023A (en) Image quick photographing method and device
WO2012163370A1 (en) Image processing method and device
US9167150B2 (en) Apparatus and method for processing image in mobile terminal having camera
CN113840070A (en) Shooting method, shooting device, electronic equipment and medium
US20220329729A1 (en) Photographing method, storage medium and electronic device
KR20210101009A (en) Method for Recording Video using a plurality of Cameras and Device thereof
CN114363522A (en) Photographing method and related device
CN110312075B (en) Device imaging method and device, storage medium and electronic device
CN110430375B (en) Imaging method, imaging device, storage medium and electronic equipment
US11431923B2 (en) Method of imaging by multiple cameras, storage medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant