WO2020259401A1 - Device imaging method and apparatus, storage medium, and electronic device - Google Patents

Device imaging method and apparatus, storage medium, and electronic device

Info

Publication number
WO2020259401A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
scene
camera
shooting
focus area
Application number
PCT/CN2020/097044
Other languages
English (en)
French (fr)
Inventor
占文喜
李亮
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Publication of WO2020259401A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/675 Focus control based on electronic image sensor signals comprising setting of focusing regions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing

Definitions

  • This application relates to the field of image processing technology, and in particular to a device imaging method, device, storage medium and electronic equipment.
  • The embodiments of the present application provide a device imaging method and apparatus, a storage medium, and an electronic device, which can improve the quality of the entire imaged image captured by the electronic device.
  • an embodiment of the present application provides a device imaging method, which is applied to an electronic device.
  • the electronic device includes a first camera of a first type and a plurality of second cameras of a second type.
  • The device imaging method includes: when an image shooting request of a scene to be shot is received, identifying a target focus area of the scene to be shot; focusing and shooting the scene to be shot based on the target focus area through the first camera, and setting the first image obtained by shooting as a base image; adjusting the shooting angles of the plurality of second cameras so that the target focus area is located within the shooting area where all the second cameras and the first camera overlap simultaneously; focusing and shooting the scene to be shot based on the target focus area through the plurality of second cameras to obtain a plurality of second images; and performing image synthesis processing on the plurality of second images and the base image, and setting the synthesized image as the imaged image of the image shooting request.
  • an embodiment of the present application provides a device imaging device applied to an electronic device.
  • the electronic device includes a first camera of a first type and a plurality of second cameras of a second type.
  • the device imaging device includes :
  • an area recognition module, configured to recognize a target focus area of a scene to be shot when an image shooting request of the scene to be shot is received;
  • a base acquisition module, configured to focus and shoot the scene to be shot based on the target focus area through the first camera, and to set the first image obtained by shooting as a base image;
  • an angle adjustment module, configured to adjust the shooting angles of the plurality of second cameras so that the target focus area is located within the shooting area where all the second cameras overlap simultaneously;
  • an auxiliary image acquisition module, configured to focus and shoot the scene to be shot based on the target focus area through the plurality of second cameras to obtain a plurality of second images; and
  • an image imaging module, configured to perform image synthesis processing on the plurality of second images and the base image, and to set the synthesized image as the imaged image of the image shooting request.
  • an embodiment of the present application provides a storage medium on which a computer program is stored.
  • When the computer program is run on a computer including a first camera of a first type and a plurality of second cameras of a second type, the computer is caused to execute: identifying the target focus area of a scene to be shot when an image shooting request of the scene to be shot is received; focusing and shooting the scene to be shot based on the target focus area through the first camera, and setting the first image obtained by shooting as a base image; adjusting the shooting angles of the plurality of second cameras so that the target focus area is located within the shooting area where all the second cameras and the first camera overlap simultaneously; focusing and shooting the scene to be shot based on the target focus area through the plurality of second cameras to obtain a plurality of second images; and performing image synthesis processing on the plurality of second images and the base image, and setting the synthesized image as the imaged image of the image shooting request.
  • an embodiment of the present application provides an electronic device, including a processor, a memory, a first camera of a first type, and a plurality of second cameras of a second type.
  • The memory stores a computer program, and the processor, by invoking the computer program, is configured to execute: identifying the target focus area of a scene to be shot when an image shooting request of the scene to be shot is received; focusing and shooting the scene to be shot based on the target focus area through the first camera, and setting the first image obtained by shooting as a base image; adjusting the shooting angles of the plurality of second cameras so that the target focus area is located within the shooting area where all the second cameras and the first camera overlap simultaneously; focusing and shooting the scene to be shot based on the target focus area through the plurality of second cameras to obtain a plurality of second images; and performing image synthesis processing on the plurality of second images and the base image, and setting the synthesized image as the imaged image of the image shooting request.
  • FIG. 1 is a schematic flowchart of a device imaging method provided by an embodiment of the present application.
  • Fig. 2 is a schematic diagram of an arrangement of the first camera and the second camera in an embodiment of the present application.
  • Fig. 3 is a schematic diagram of an operation for triggering an input image shooting request in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a comparison between the shooting areas of the second camera and the first camera in an embodiment of the present application.
  • Fig. 5 is a schematic diagram of image content comparison between a second image and a base image in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of the overlapping area where all the second images and the base image overlap at the same time in the embodiment of the present application.
  • Fig. 7 is a schematic diagram of another setting manner of the first camera and the second camera in an embodiment of the present application.
  • FIG. 8 is another schematic flowchart of the device imaging method provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a device imaging device provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of another structure of an electronic device provided by an embodiment of the present application.
  • the embodiment of the present application first provides a device imaging method, which is applied to an electronic device.
  • the execution subject of the device imaging method may be the device imaging device provided in the embodiment of the application, or an electronic device integrated with the device imaging device.
  • The device imaging device may be implemented in hardware or software, and the electronic device may be a device that is equipped with a processor and has processing capability, such as a smartphone, tablet computer, handheld computer, notebook computer, or desktop computer.
  • the present application provides a device imaging method applied to an electronic device, wherein the electronic device includes a first camera of a first type and a plurality of second cameras of a second type, and the device imaging method includes:
  • The method includes: when an image shooting request of a scene to be shot is received, identifying a target focus area of the scene to be shot; focusing and shooting the scene to be shot based on the target focus area through the first camera, and setting the first image obtained by shooting as a base image; adjusting the shooting angles of the plurality of second cameras so that the target focus area is located within the shooting area where all the second cameras and the first camera overlap simultaneously; focusing and shooting the scene to be shot based on the target focus area through the plurality of second cameras to obtain a plurality of second images; and performing image synthesis processing on the plurality of second images and the base image, and setting the synthesized image as the imaged image of the image shooting request.
  • In an embodiment, identifying the target focus area of the scene to be shot includes: obtaining a preview image of the scene to be shot, and inputting the preview image into a pre-trained visual attention model to recognize a visually salient area;
  • the recognized visually salient area is set as the target focus area of the scene to be shot.
  • In an embodiment, obtaining the preview image of the scene to be shot includes: extracting, from a preset image cache queue, the preview image that was cached before the image shooting request was received and is closest to the moment the image shooting request was received.
  • In an embodiment, identifying the target focus area of the scene to be shot includes: obtaining focus area selection information carried in the image shooting request;
  • the area indicated by the focus area selection information is set as the target focus area of the scene to be shot.
  • In an embodiment, the electronic device includes two first cameras, and focusing and shooting the scene to be shot based on the target focus area through the first camera and setting the first image obtained by shooting as a base image includes: focusing and shooting the scene to be shot based on the target focus area through the two first cameras to obtain at least two first images; and performing image synthesis processing on the at least two first images, and setting the synthesized image as the base image.
  • In an embodiment, the electronic device further includes an electrochromic component covering the first camera and/or the second camera, and before identifying the target focus area of the scene to be shot when the image shooting request of the scene to be shot is received, the method further includes: switching the electrochromic component to a transparent state.
  • After performing image synthesis processing on the plurality of second images and the base image and setting the synthesized image as the imaged image of the image shooting request, the method further includes:
  • switching the electrochromic component to a colored state to hide the first camera and/or the second camera.
  • In an embodiment, before focusing and shooting the scene to be shot based on the target focus area through the first camera and setting the first image obtained by shooting as a base image, the method further includes: detecting whether the electronic device is currently in a shaking state;
  • if the electronic device is not currently in a shaking state, the first camera is used to focus and shoot the scene to be shot based on the target focus area, and the first image obtained by shooting is set as the base image.
  • In an embodiment, before focusing and shooting the scene to be shot based on the target focus area through the first camera and setting the first image obtained by shooting as a base image, the method further includes: if the electronic device is not currently in a shaking state, detecting whether the scene to be shot is in a static state;
  • if the scene to be shot is in a static state, the first camera is used to focus and shoot the scene to be shot based on the target focus area, and the first image obtained by shooting is set as the base image.
  • In an embodiment, focusing and shooting the scene to be shot based on the target focus area through the first camera and setting the first image obtained by shooting as a base image includes: continuously focusing and shooting the scene to be shot based on the target focus area through the first camera to obtain a plurality of first images;
  • image synthesis processing is performed on the plurality of first images, and the synthesized image is set as the base image.
  • FIG. 1 is a schematic flowchart of a device imaging method according to an embodiment of the application.
  • the device imaging method is applied to the electronic device provided in the embodiment of the present application.
  • the process of the device imaging method provided in the embodiment of the present application may be as follows:
  • In 101, when an image shooting request of the scene to be shot is received, the target focus area of the scene to be shot is identified.
  • the electronic device includes a first camera of a first type and a plurality of second cameras of a second type.
  • The first camera is a standard camera, that is, a camera with a field of view of about 45 degrees;
  • the second camera is a telephoto camera, that is, a camera with a field of view of less than 40 degrees.
  • the electronic device first receives the input image shooting request, where the image shooting request can be directly input by the user to instruct the electronic device to shoot the scene to be shot.
  • the scene to be photographed is the scene that the first camera is aimed at when the electronic device receives the input image photographing request, which includes but is not limited to people, objects, and scenery.
  • For example, after the user operates the electronic device to start a shooting application (such as the system application "Camera" of the electronic device) and moves the electronic device so that the first camera and the second cameras are aimed at the scene to be shot, the user can tap the "Photograph" button (a virtual button) provided on the "Camera" preview interface to input an image shooting request to the electronic device, as shown in Figure 3.
  • As another example, after the user operates the electronic device to start a shooting application and moves the electronic device so that the first camera and the second cameras of the electronic device are aimed at the scene to be shot, the user can speak the voice command "photograph" to input an image shooting request to the electronic device.
  • Accordingly, after receiving the input image shooting request, the electronic device determines that the user currently needs to shoot the scene to be shot, and then identifies the target focus area of the scene to be shot, where the target focus area is the focus area used when the electronic device performs the image shooting operation in response to the received image shooting request.
  • In 102, the first camera is used to focus and shoot the scene to be shot based on the target focus area, and the first image obtained by shooting is set as the base image.
  • After identifying the target focus area of the scene to be shot, the electronic device uses the first camera to focus and shoot the scene to be shot based on the target focus area, records the image captured at this time as the first image, and sets the first image as the base image.
  • In 103, the shooting angles of the second cameras are adjusted so that the target focus area is located within the shooting area where all the second cameras and the first camera overlap simultaneously.
  • After the electronic device recognizes the target focus area of the scene to be shot, in addition to obtaining the base image through the first camera, it also adjusts the shooting angle of each second camera, according to the recognized target focus area, by means of the micro gimbal on which each second camera is mounted, so that the target focus area is located within the shooting area where all the second cameras and the first camera overlap simultaneously.
  • the electronic device includes a first camera and four second cameras, namely the second camera A, the second camera B, the second camera C, and the second camera D.
  • Suppose the identified target focus area is the middle area of the scene to be shot (corresponding to the middle area of the shooting area of the first camera).
  • The electronic device adjusts the shooting angle of each second camera according to the identified target focus area, so that the shooting area a of the second camera A corresponds to the upper left corner of the shooting area of the first camera, the shooting area b of the second camera B corresponds to the upper right corner, the shooting area c of the second camera C corresponds to the lower left corner, and the shooting area d of the second camera D corresponds to the lower right corner; in this way, the target focus area is located within the shooting area where all the second cameras and the first camera overlap simultaneously, that is, the middle area of the shooting area of the first camera.
  • the shooting area of each second camera also partially overlaps the edge of the shooting area of the first camera.
  • It should be noted that the execution order of 102 and 103 is not limited in the embodiments of this application: 103 may be executed after 102 is completed, 102 may be executed after 103 is completed, or 102 and 103 may be executed simultaneously.
  • In 104, all the second cameras are used to focus and shoot the scene to be shot based on the target focus area to obtain a plurality of second images.
  • the electronic device After adjusting the shooting angle of the second camera so that the target focus area is within the shooting area where all the second cameras and the first camera overlap at the same time, the electronic device further uses all the second cameras to focus and shoot the scene to be shot based on the target focus area.
  • The image captured by each second camera is recorded as a second image, and a plurality of second images are thus obtained.
  • It should be noted that when shooting the scene to be shot through all the second cameras, the second cameras use the same shooting parameters (such as contrast and brightness) as the first camera, so the second cameras can obtain the same shooting effect as the first camera.
  • For example, the electronic device includes four second cameras, namely the second camera A, the second camera B, the second camera C, and the second camera D, and the electronic device adjusts the shooting angles of these four second cameras so that their shooting areas correspond to the upper left corner, the upper right corner, the lower left corner, and the lower right corner of the shooting area of the first camera, respectively.
  • Four second images will therefore be captured by the four second cameras, as shown in Figure 5.
  • The image content of the second image A captured by the second camera A corresponds to the image content in the upper left corner of the base image, the image content of the second image B captured by the second camera B corresponds to the image content in the upper right corner of the base image, the image content of the second image C captured by the second camera C corresponds to the image content in the lower left corner of the base image, and the image content of the second image D captured by the second camera D corresponds to the image content in the lower right corner of the base image. In this way, all the second images and the first image cover the image content of the target focus area at the same time, and the different second images cover different positions of the edge area of the base image.
  • In 105, image synthesis processing is performed on the plurality of second images and the base image, and the synthesized image is set as the imaged image of the image shooting request.
  • In the embodiments of the present application, after obtaining the base image through the first camera and the plurality of second images through the plurality of second cameras, the electronic device aligns the plurality of second images obtained by shooting with the base image.
  • Based on the aligned base image and second images, for the portions where the base image and the second images overlap, the electronic device calculates the average pixel value of each overlapping pixel.
  • For example, in addition to the base image obtained through the first camera, the electronic device also captures four second images through the four second cameras. Please refer to Figure 6: the overlapping area common to all the second images and the base image is located in the middle area of the base image (corresponding to the target focus area of the scene to be shot).
  • For the overlapping area shown in Figure 6, if the pixel values of a pixel at a certain position in the five images (that is, the base image and the four second images) are 0.8, 0.9, 1.1, 1.2, and 1, respectively, the average pixel value of the pixel at that position is calculated to be 1.
  • Afterwards, the composite image is obtained according to the average pixel value calculated for each pixel position in the base image.
  • For example, the pixel value of each pixel of the base image can be adjusted to the calculated average pixel value to obtain the composite image; alternatively, a new image, that is, the composite image, can be generated from the calculated average pixel values.
  • The synthesized image obtained in this way is set as the imaged image of the image shooting request. At this point, the electronic device has completed a complete shooting operation in response to the received image shooting request.
  • For example, please continue to refer to Figure 6, which also illustrates the sharpness change from the base image to the imaged image, where the X axis represents the position change from the image edge area to the center area (corresponding to the target focus area of the scene to be shot) and then from the center area back to the edge area, and the Y axis represents the sharpness at each position along the X axis.
  • It can be seen that in the base image, the sharpness of the center area is the highest, and as the position moves from the center area toward the edge area, the sharpness decreases gradually and changes sharply.
  • In the imaged image, the sharpness of the center area is also the highest, and compared with the base image, the sharpness of the edge area of the imaged image is improved as a whole; although the sharpness still decreases gradually from the center area toward the edge area, the change is smoother, which improves the overall image quality of the imaged image.
  • It can be seen from the above that, in the embodiments of the present application, the electronic device includes a first camera of a first type and a plurality of second cameras of a second type. The electronic device recognizes the target focus area of the scene to be shot according to the received image shooting request, then focuses and shoots the scene to be shot based on the target focus area through the first camera and sets the first image obtained by shooting as the base image, and adjusts the shooting angles of the second cameras so that the target focus area is located within the shooting area where all the second cameras and the first camera overlap simultaneously.
  • All the second cameras then focus and shoot the scene to be shot based on the target focus area to obtain a plurality of second images.
  • Finally, image synthesis processing is performed on the plurality of second images and the base image, and the synthesized image is set as the imaged image of the image shooting request. As a result, the sharpness of different areas of the finally obtained imaged image is improved, the sharpness of the image as a whole changes more gently, and the quality of the entire imaged image is improved.
  • In an embodiment, identifying the target focus area of the scene to be shot includes: (1) obtaining a preview image of the scene to be shot, and inputting the preview image into a pre-trained visual attention model to recognize a visually salient area; (2) setting the recognized visually salient area as the target focus area of the scene to be shot.
  • It should be noted that, in the embodiments of the present application, an image cache queue is preset in the electronic device.
  • The image cache queue may be a fixed-length queue or a variable-length queue.
  • For example, the image cache queue is a fixed-length queue that can buffer 8 images collected in real time by the first camera, and these images are used as preview images of the scene to be shot for display.
  • a visual attention model is pre-trained, and the visual attention model is a deep learning network model.
  • the visual attention model is obtained by model training using a sample image set pre-marked with salient regions, so that the visual attention model is used to simulate the visual features of the human eye, and the visually salient regions in the image can be identified.
  • For example, a person is generally considered to be more salient than the sky, grass, or buildings; when an image includes a person, the sky, grass, and buildings, the area corresponding to the person will usually be recognized by the visual attention model as the visually salient area.
  • Those of ordinary skill in the art will understand that users usually prefer to focus on the visually salient area of a scene. Therefore, when the electronic device recognizes the target focus area of the scene to be shot, it can first obtain a preview image of the scene to be shot from the image cache queue, for example, by extracting the preview image that was cached before the image shooting request was received and is closest to the moment the image shooting request was received; then, the preview image is input into the pre-trained visual attention model to recognize the visually salient area, and the recognized visually salient area is set as the target focus area of the scene to be shot.
  • In an embodiment, identifying the target focus area of the scene to be shot includes: (1) obtaining the focus area selection information carried in the image shooting request; (2) setting the area indicated by the focus area selection information as the target focus area of the scene to be shot.
  • The embodiments of the present application thus provide another optional way of identifying the target focus area of the scene to be shot, in which the input image shooting request also carries focus area selection information.
  • For example, when the electronic device receives an image shooting request input by the user on the preview interface, it can receive the user's touch operation on the preview interface and take the area corresponding to the position where the touch operation is received as the area on which the user expects to focus; accordingly, the position information of the touch operation is taken as the focus area selection information.
  • Thus, after the electronic device receives the user's touch operation on the preview interface and takes the position information of the touch operation as the focus area selection information, if it then receives a tap on the "Photograph" button in the preview interface, it generates an image shooting request carrying the aforementioned focus area selection information.
  • Accordingly, when the electronic device identifies the target focus area of the scene to be shot, it first obtains the focus area selection information carried in the image shooting request, and then sets the area indicated by the focus area selection information (that is, the area on which the user expects to focus) as the target focus area of the scene to be shot.
  • In an embodiment, the electronic device includes two first cameras, and "using the first camera to focus and shoot the scene to be shot based on the target focus area, and setting the first image obtained as a base image" includes: focusing and shooting the scene to be shot based on the target focus area through the two first cameras to obtain at least two first images; and performing image synthesis processing on the at least two first images, and setting the synthesized image as the base image.
  • the electronic device includes two standard types of first cameras.
  • the electronic device includes two first cameras, a first camera E and a first camera F respectively, and the first camera E is surrounded by four second cameras.
  • When shooting, the electronic device can use the two first cameras with the same shooting parameters to focus and shoot the scene to be shot based on the target focus area, obtaining at least two first images with the same image content. Then, image synthesis processing is performed on the at least two first images obtained by shooting, and the synthesized image is set as the base image.
  • When the electronic device performs image synthesis processing on the at least two captured first images, it first aligns the at least two captured first images, then calculates the average pixel value of each overlapping pixel of the at least two first images, and finally obtains a composite image of the at least two first images according to the calculated average pixel values and sets the composite image as the base image.
  • In this way, a base image with higher definition can be obtained in the embodiments of the present application, so that the finally obtained imaged image also has higher definition.
  • "using the first camera to focus and shoot the scene to be shot based on the target focus area, and set the first image obtained as a base image” includes:
  • When the first camera is used to focus and shoot the scene to be shot based on the target focus area and the first image obtained is set as the base image, the electronic device can continuously focus and shoot the scene to be shot based on the target focus area through the first camera to obtain a plurality of first images.
  • For example, the electronic device may use the first camera to shoot the scene to be shot within a unit of time according to a set shooting frame rate, thereby continuously shooting the scene to be shot.
  • In this way the electronic device captures, for example, 15 images of the scene to be shot; because these images all correspond to the same scene to be shot and the interval between their shooting moments is small, the image content of these images can be regarded as the same.
  • The electronic device then selects the first image with the highest definition from them, aligns the other first images with the first image with the highest definition, calculates the average pixel value of each overlapping pixel of the plurality of first images, and finally obtains a composite image of the plurality of first images according to the calculated average pixel values, setting the composite image as the base image.
  • In this way, a base image with higher definition can be obtained in the embodiments of the present application, so that the finally obtained imaged image also has higher definition.
  • In an embodiment, after the plurality of first images are obtained by continuous shooting, the method further includes: selecting the image with the highest definition from the plurality of first images obtained by shooting as the base image, and performing image synthesis processing on it together with the second images obtained by the second cameras to obtain the imaged image.
  • For example, the contrast of an image can be used to measure the sharpness of the image (a minimal sketch of such a contrast-based selection is given at the end of this section).
  • In an embodiment, the electronic device further includes an electrochromic component covering the first camera and/or the second camera.
  • In order to improve the integrated appearance of the electronic device, the electrochromic component covers the first camera and/or the second camera, so that the electrochromic component can be used to hide the camera when needed.
  • Electrochromism refers to the phenomenon that the color/transparency of a material undergoes a stable and reversible change under the action of an external electric field.
  • Materials with electrochromic properties can be called electrochromic materials.
  • the electrochromic components in the embodiments of this application are made of electrochromic materials.
  • the electrochromic component may include two conductive layers arranged in a stack, and a color changing layer, an electrolyte layer, and an ion storage layer located between the two conductive layers.
  • For example, when no voltage is applied (or the voltage is 0 V) across the two transparent conductive layers of the electrochromic component, the electrochromic component is in a transparent state; when the voltage applied between the two transparent conductive layers changes from 0 V to 3 V, the electrochromic component turns black; when the voltage applied between its two transparent conductive layers changes from 3 V to -3 V, the electrochromic component changes from black back to transparent; and so on.
  • the first camera and/or the second camera can be concealed by using the color-adjustable feature of the electrochromic component.
  • In this way, the electronic device can switch the electrochromic component covering the first camera and/or the second camera to a transparent state when starting the shooting application, so that the first camera and the second cameras can shoot the scene to be shot.
  • For example, when the electronic device has not started a shooting application, the electrochromic component is kept in the black colored state so that the first camera and the second cameras are hidden; when a shooting application is started, the electrochromic component is switched to the transparent state synchronously, so that the electronic device can shoot through the first camera and the second cameras; and after the imaged image is finally synthesized and the shooting application is exited, the electronic device switches the electrochromic component back to the black colored state, so that the first camera and the second cameras are hidden again.
  • the method before “focusing and shooting the scene to be shot based on the target focus area by the first camera, and setting the first image obtained as a base image", the method further includes:
  • the embodiments of the present application use images captured by different cameras to finally synthesize an imaged image. If the electronic device is in a jitter state during the shooting process, the image content of the images captured by different cameras will be significantly different. , Affect the synthesis effect of imaging images.
  • the electronic device when it recognizes the target focus area of the scene to be photographed, it first determines whether it is currently in a shake state.
  • the electronic device can judge the jitter state in many different ways. For example, the electronic device can determine whether the current speed in each direction is greater than the preset speed, if it is, it is judged that it is in the jitter state, if not, it is judged that the current Not in a jitter state (or a stable state); for another example, the electronic device can determine whether the current displacement in each direction is greater than the preset displacement, if it is, it determines that it is currently in a jitter state, if not, it determines that it is not currently in a jitter status.
  • the jitter state can also be determined in a manner not listed in the embodiment of the present application, which is not specifically limited in the embodiment of the present application.
  • the electronic device uses the first camera to focus and shoot the scene to be shot based on the target focus area according to the identified target focus area of the scene to be shot, and set the first image obtained by shooting as the base image.
  • the relevant description in the above embodiment which will not be repeated here.
  • the method before “focusing and shooting the scene to be shot based on the target focus area by the first camera, and setting the first image obtained as a base image", the method further includes:
  • the first camera is used to focus and shoot the scene to be shot based on the target focus area, and the first image obtained by shooting is set as the base image.
  • the electronic device when the electronic device is not in a shaking state, if the scene to be photographed is not in a static state (for example, the scene to be photographed includes a moving object), the electronic device passes through the first
  • the image content of the image captured by the camera and the second camera may also be quite different.
  • the electronic device determines that it is not currently in a shaking state, it does not immediately use the first camera to shoot the scene to be photographed, but further detects whether the scene to be photographed is in a static state.
  • the scene to be shot is in a static state, and then according to the identified target focus area of the scene to be shot, the first camera is used to focus and shoot the scene to be shot based on the target focus area, and the first image obtained is set as the base image.
  • Please refer to FIG. 8, which is another schematic flowchart of the device imaging method provided by an embodiment of this application.
  • The device imaging method is applied to the electronic device provided by the embodiments of this application.
  • The electronic device includes two first cameras of a first type and four second cameras of a second type, and the flow of the device imaging method may include:
  • the electronic device when receiving an image shooting request of the scene to be shot, the electronic device recognizes the target focus area of the scene to be shot.
  • the electronic device includes a first camera of a first type and a plurality of second cameras of a second type.
  • The first camera is a standard camera, that is, a camera with a field of view of about 45 degrees;
  • the second camera is a telephoto camera, that is, a camera with a field of view of less than 40 degrees.
  • The electronic device includes two first cameras and four second cameras, namely a first camera E, a first camera F, a second camera A, a second camera B, a second camera C, and a second camera D, and each second camera is mounted on the electronic device via a micro gimbal (not shown in the figure), so that the shooting angle of each second camera is adjustable.
  • the electronic device first receives the input image shooting request, where the image shooting request can be directly input by the user to instruct the electronic device to shoot the scene to be shot.
  • the scene to be photographed is the scene that the first camera is aimed at when the electronic device receives the input image photographing request, which includes but is not limited to people, objects, and scenery.
  • For example, after the user operates the electronic device to start a shooting application (such as the system application "Camera" of the electronic device) and moves the electronic device so that the first cameras and the second cameras are aimed at the scene to be shot, the user can tap the "Photograph" button (a virtual button) provided on the "Camera" preview interface to input an image shooting request to the electronic device, as shown in Figure 3.
  • As another example, after the user operates the electronic device to start a shooting application and moves the electronic device so that the first cameras and the second cameras of the electronic device are aimed at the scene to be shot, the user can speak the voice command "photograph" to input an image shooting request to the electronic device.
  • Accordingly, after receiving the input image shooting request, the electronic device determines that the user currently needs to shoot the scene to be shot, and then identifies the target focus area of the scene to be shot, where the target focus area is the focus area used when the electronic device performs the image shooting operation in response to the received image shooting request.
  • the electronic device uses the two first cameras to focus and shoot the scene to be shot based on the target focus area to obtain two first images.
  • the electronic device After the electronic device recognizes the target focus area of the scene to be shot, the electronic device can use the two first cameras to focus and shoot the scene to be shot based on the target focus area according to the same shooting parameters to obtain two first images with the same image content.
  • the electronic device performs image synthesis processing on the two first images obtained by shooting, and sets the synthesized image as a base image.
  • When the electronic device performs image synthesis processing on the two captured first images, it first aligns the two captured first images, then calculates the average pixel value of each overlapping pixel of the two first images, and finally obtains a composite image of the two first images according to the calculated average pixel values, setting the composite image as the base image.
  • the electronic device adjusts the shooting angles of the four second cameras so that the target focus area is located within the shooting area where the four second cameras and the two first cameras overlap simultaneously.
  • After the electronic device recognizes the target focus area of the scene to be shot, in addition to obtaining the base image through the first cameras, it also adjusts the shooting angle of each second camera, according to the recognized target focus area, by means of the micro gimbal on which each second camera is mounted, so that the target focus area is located within the shooting area where the four second cameras and the two first cameras overlap simultaneously.
  • For example, the electronic device includes two first cameras and four second cameras, namely a first camera E, a first camera F, a second camera A, a second camera B, a second camera C, and a second camera D, where the shooting areas of the first camera E and the first camera F are the same, and the figure shows this common shooting area as the shooting area of the first camera.
  • The electronic device adjusts the shooting angle of each second camera according to the identified target focus area, so that the shooting area a of the second camera A corresponds to the upper left corner of the shooting area of the first camera E/F, the shooting area b of the second camera B corresponds to the upper right corner, the shooting area c of the second camera C corresponds to the lower left corner, and the shooting area d of the second camera D corresponds to the lower right corner; in this way, the target focus area is located within the shooting area where the four second cameras and the two first cameras overlap simultaneously, that is, the middle area of the shooting area of the first camera E/F.
  • In addition, the shooting area of each second camera also partially overlaps the edge of the shooting area of the first camera E/F.
  • the electronic device uses four second cameras to focus and shoot the scene to be shot based on the target focus area to obtain four second images.
  • After adjusting the shooting angles of the four second cameras so that the target focus area is within the shooting area where the four second cameras and the two first cameras overlap simultaneously, the electronic device further uses the four second cameras to focus and shoot the scene to be shot based on the target focus area, records the images captured by the second cameras as second images, and thus obtains four second images.
  • It should be noted that when shooting the scene to be shot through the four second cameras, each second camera uses the same shooting parameters (such as contrast and brightness) as the first cameras, so that the four second cameras can achieve the same shooting effect as the two first cameras.
  • For example, the electronic device includes four second cameras, namely the second camera A, the second camera B, the second camera C, and the second camera D, and the electronic device adjusts the shooting angles of these four second cameras so that their shooting areas correspond to the upper left corner, the upper right corner, the lower left corner, and the lower right corner of the shooting area of the first camera E/F, respectively.
  • Four second images will therefore be captured by these four second cameras, as shown in Figure 5: the image content of the second image A captured by the second camera A corresponds to the image content in the upper left corner of the base image, the image content of the second image B captured by the second camera B corresponds to the image content in the upper right corner of the base image, and so on for the second images C and D.
  • the electronic device performs image synthesis processing on the four second images and the base image, and sets the synthesized image as the imaged image requested for image shooting.
  • After obtaining the base image through the first cameras and the four second images through the four second cameras, the electronic device aligns the four second images obtained by shooting with the base image.
  • The overlapping area common to all the second images and the base image is located in the middle area of the base image (corresponding to the target focus area of the scene to be shot).
  • For example, if the pixel values of a pixel at a certain position in the five images (that is, the base image and the four second images) are 0.8, 0.9, 1.1, 1.2, and 1, respectively, the average pixel value of the pixel at that position is calculated to be 1.
  • Afterwards, the composite image is obtained according to the average pixel value calculated for each pixel position in the base image.
  • For example, the pixel value of each pixel of the base image can be adjusted to the calculated average pixel value to obtain the composite image.
  • The synthesized image is set as the imaged image of the image shooting request. At this point, the electronic device has completed a complete shooting operation in response to the received image shooting request.
  • For example, please continue to refer to Figure 6, which also illustrates the sharpness change from the base image to the imaged image, where the X axis represents the position change from the image edge area to the center area (corresponding to the target focus area of the scene to be shot) and then from the center area back to the edge area, and the Y axis represents the sharpness at each position along the X axis.
  • It can be seen that in the base image, the sharpness of the center area is the highest, and as the position moves from the center area toward the edge area, the sharpness decreases gradually and changes sharply.
  • In the imaged image, the sharpness of the center area is also the highest, and compared with the base image, the sharpness of the edge area of the imaged image is improved as a whole; although the sharpness still decreases gradually from the center area toward the edge area, the change is smoother, which improves the overall image quality of the imaged image.
  • An embodiment of the present application also provides a device imaging device.
  • Please refer to FIG. 9, which is a schematic structural diagram of the device imaging device provided by an embodiment of the application.
  • The device imaging device is applied to an electronic device, and the electronic device includes a first camera of a first type and a plurality of second cameras of a second type.
  • The device imaging device includes an area recognition module 301, a base acquisition module 302, an angle adjustment module 303, an auxiliary image acquisition module 304, and an image imaging module 305, as follows:
  • the area identification module 301 is configured to identify the target focus area of the scene to be shot when an image shooting request of the scene to be shot is received.
  • The base acquisition module 302 is configured to focus and shoot the scene to be shot based on the target focus area through the first camera, and to set the first image obtained by shooting as the base image.
  • The angle adjustment module 303 is configured to adjust the shooting angles of the plurality of second cameras so that the target focus area is located within the shooting area where all the second cameras overlap simultaneously.
  • the auxiliary image acquisition module 304 is configured to focus and shoot the scene to be shot based on the target focus area through all the second cameras to obtain multiple second images.
  • the image imaging module 305 is configured to perform image synthesis processing on a plurality of second images and a base image, and set the synthesized image as the imaging image requested by the image shooting.
  • In an embodiment, when recognizing the target focus area of the scene to be shot, the area recognition module 301 is configured to: obtain a preview image of the scene to be shot, input the preview image into a pre-trained visual attention model to recognize a visually salient area, and set the recognized visually salient area as the target focus area of the scene to be shot.
  • In an embodiment, when identifying the target focus area of the scene to be shot, the area recognition module 301 is further configured to: obtain the focus area selection information carried in the image shooting request, and set the area indicated by the focus area selection information as the target focus area of the scene to be shot.
  • In an embodiment, the electronic device includes two first cameras, and the base acquisition module 302 is configured to: focus and shoot the scene to be shot based on the target focus area through the two first cameras to obtain at least two first images, perform image synthesis processing on the at least two first images, and set the synthesized image as the base image.
  • In an embodiment, the electronic device further includes an electrochromic component covering the first camera and/or the second camera, and the device imaging device further includes an electrochromic module configured to: switch the electrochromic component to a transparent state before the area recognition module 301 identifies the target focus area of the scene to be shot when the image shooting request of the scene to be shot is received; and, after the imaged image is obtained, switch the electrochromic component to a colored state to hide the first camera and/or the second camera.
  • In an embodiment, the base acquisition module 302 is further configured to: detect whether the electronic device is currently in a shaking state; and, if the electronic device is not currently in a shaking state, focus and shoot the scene to be shot based on the target focus area through the first camera and set the first image obtained by shooting as the base image.
  • In an embodiment, the base acquisition module 302 is further configured to: if the electronic device is not currently in a shaking state, detect whether the scene to be shot is in a static state; and, if the scene to be shot is in a static state, focus and shoot the scene to be shot based on the target focus area through the first camera and set the first image obtained by shooting as the base image.
  • The device imaging device provided in the embodiments of the application belongs to the same concept as the device imaging method in the above embodiments. Any method provided in the device imaging method embodiments can be run on the device imaging device; for details of the specific implementation process, refer to the embodiments of the device imaging method, which will not be repeated here.
  • the embodiment of the present application provides a computer-readable storage medium on which a computer program is stored.
  • the computer is caused to execute the steps in the device imaging method provided in the embodiment of the present application.
  • The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.
  • the electronic device includes a processor 401, a memory 402, a first camera 403 of a first type, and a plurality of second cameras 404 of a second type.
  • the processor 401 is electrically connected to the memory 402, the first camera 403, and the second camera 404.
  • the memory 402 can be used to store software programs and modules.
  • the processor 401 executes various functional applications and data processing by running the computer programs and modules stored in the memory 402.
  • the memory 402 may mainly include a storage program area and a storage data area.
  • The storage program area may store an operating system, a computer program required by at least one function (such as a sound playback function or an image playback function), and the like; the storage data area may store data created according to the use of the electronic device, and the like.
  • The memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
  • the first camera 403 is a standard type of camera, or a camera with a field of view of about 45 degrees.
  • the second camera 404 is a telephoto camera, or a camera with a field of view angle of less than 40 degrees.
  • In the embodiment of the present application, the processor 401 in the electronic device loads instructions corresponding to the processes of one or more computer programs into the memory 402, and the processor 401 runs the computer programs stored in the memory 402 to implement various functions, as follows:
  • when an image shooting request of a scene to be shot is received, identify the target focus area of the scene to be shot;
  • use the first camera 403 to focus and shoot the scene to be shot based on the target focus area, and set the first image obtained by shooting as the base image;
  • adjust the shooting angles of the plurality of second cameras 404 so that the target focus area is located within the shooting area where all the second cameras 404 and the first camera 403 overlap simultaneously;
  • use all the second cameras 404 to focus and shoot the scene to be shot based on the target focus area to obtain a plurality of second images;
  • perform image synthesis processing on the plurality of second images and the base image, and set the synthesized image as the imaged image of the image shooting request.
  • FIG. 11 is another schematic structural diagram of the electronic device provided by an embodiment of the application. The difference from the electronic device shown in FIG. 10 is that the electronic device further includes components such as an input unit 405 and an output unit 406.
  • Specifically, the input unit 405 may be used to receive input numbers, character information, or user characteristic information (such as fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control.
  • the output unit 406 may be used to display information input by the user or information provided to the user, such as a screen.
  • In this embodiment, the processor 401 runs the computer programs stored in the memory 402 to implement various functions, as follows:
  • when an image shooting request of a scene to be shot is received, identify the target focus area of the scene to be shot;
  • use the first camera 403 to focus and shoot the scene to be shot based on the target focus area, and set the first image obtained by shooting as the base image;
  • adjust the shooting angles of the plurality of second cameras 404 so that the target focus area is located within the shooting area where all the second cameras 404 and the first camera 403 overlap simultaneously;
  • use all the second cameras 404 to focus and shoot the scene to be shot based on the target focus area to obtain a plurality of second images;
  • perform image synthesis processing on the plurality of second images and the base image, and set the synthesized image as the imaged image of the image shooting request.
  • In an embodiment, when identifying the target focus area of the scene to be shot, the processor 401 executes: obtaining a preview image of the scene to be shot, inputting the preview image into a pre-trained visual attention model to recognize a visually salient area, and setting the recognized visually salient area as the target focus area of the scene to be shot.
  • In an embodiment, when identifying the target focus area of the scene to be shot, the processor 401 may further execute: obtaining the focus area selection information carried in the image shooting request, and setting the area indicated by the focus area selection information as the target focus area of the scene to be shot.
  • In an embodiment, the electronic device includes two first cameras 403, and when focusing and shooting the scene to be shot based on the target focus area through the first camera 403 and setting the first image obtained by shooting as the base image, the processor 401 executes: focusing and shooting the scene to be shot based on the target focus area through the two first cameras 403 to obtain at least two first images, performing image synthesis processing on the at least two first images, and setting the synthesized image as the base image.
  • In an embodiment, the electronic device further includes an electrochromic component covering the first camera 403 and/or the second camera 404, and before identifying the target focus area of the scene to be shot when the image shooting request of the scene to be shot is received, the processor 401 executes: switching the electrochromic component to a transparent state.
  • After performing image synthesis processing on the plurality of second images and the base image and setting the synthesized image as the imaged image of the image shooting request, the processor 401 further executes: switching the electrochromic component to a colored state to hide the first camera 403 and/or the second camera 404.
  • In an embodiment, before the first camera 403 is used to focus and shoot the scene to be shot based on the target focus area and before the first image obtained by shooting is set as the base image, the processor 401 further executes: detecting whether the electronic device is currently in a shaking state; if the electronic device is not currently in a shaking state, the first camera 403 is used to focus and shoot the scene to be shot based on the target focus area, and the first image obtained by shooting is set as the base image.
  • In an embodiment, before the first camera 403 is used to focus and shoot the scene to be shot based on the target focus area and before the first image obtained by shooting is set as the base image, the processor 401 further executes: if the electronic device is not currently in a shaking state, detecting whether the scene to be shot is in a static state; if the scene to be shot is in a static state, the first camera 403 is used to focus and shoot the scene to be shot based on the target focus area, and the first image obtained by shooting is set as the base image.
  • The electronic device provided in the embodiments of this application belongs to the same concept as the device imaging method in the above embodiments. Any method provided in the device imaging method embodiments can be run on the electronic device; for details of the specific implementation process, refer to the embodiments of the device imaging method, which will not be repeated here.
  • For the device imaging method of the embodiments of the present application, those of ordinary skill in the art can understand that all or part of the flow of implementing the device imaging method of the embodiments of the present application can be completed by controlling relevant hardware through a computer program.
  • The computer program may be stored in a computer-readable storage medium, such as the memory of an electronic device, and executed by at least one processor in the electronic device.
  • The execution process may include the flow of an embodiment of the device imaging method.
  • The storage medium may be a magnetic disk, an optical disc, a read-only memory, a random access memory, etc.
  • For the device imaging device of the embodiments of the present application, its functional modules may be integrated into one processing chip, or each module may exist physically alone, or two or more modules may be integrated into one module.
  • The above-mentioned integrated modules can be implemented in the form of hardware or in the form of software functional modules. If an integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
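As referenced above, the following is a minimal sketch, in Python with NumPy, of selecting the sharpest frame from a burst of first images by contrast. The function names and the use of RMS contrast (the standard deviation of gray levels) as the sharpness proxy are illustrative assumptions, not details from the patent disclosure:

```python
import numpy as np


def rms_contrast(gray: np.ndarray) -> float:
    """RMS contrast of a grayscale frame, i.e. the standard deviation of its
    pixel intensities, used here as a simple proxy for sharpness."""
    return float(gray.astype(np.float64).std())


def pick_sharpest(frames: list) -> np.ndarray:
    """Return the frame with the highest contrast from a burst of first images.

    `frames` is a non-empty list of single-channel (grayscale) arrays of the
    same scene; the selected frame would serve as the base image.
    """
    return max(frames, key=rms_contrast)
```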
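Likewise, a minimal sketch of the threshold-based shake check described above. The data structure, the threshold values, and the assumption that per-axis speed and displacement come from integrated motion-sensor readings are hypothetical and only for illustration:

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class MotionEstimate:
    # Per-axis speed (m/s) and displacement (m) over a short window, e.g.
    # integrated from the device's accelerometer/gyroscope readings.
    speed: Tuple[float, float, float]
    displacement: Tuple[float, float, float]


# Hypothetical thresholds; a real device would calibrate these empirically.
SPEED_THRESHOLD = 0.05          # metres per second
DISPLACEMENT_THRESHOLD = 0.002  # metres


def is_shaking(motion: MotionEstimate) -> bool:
    """Treat the device as being in a jitter state if any axis exceeds either
    the preset speed threshold or the preset displacement threshold."""
    return (any(v > SPEED_THRESHOLD for v in motion.speed)
            or any(d > DISPLACEMENT_THRESHOLD for d in motion.displacement))
```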

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the present application disclose a device imaging method and apparatus, a storage medium, and an electronic device. The electronic device includes a first camera of a first type and a plurality of second cameras of a second type. The first camera and the second cameras shoot cooperatively, so that the target focus area of the first camera is located within the shooting area where all the second cameras and the first camera overlap simultaneously; image synthesis is then performed on the captured images, and the synthesized image is used as the imaged image of the image shooting request.

Description

Device imaging method and apparatus, storage medium, and electronic device
This application claims priority to the Chinese patent application No. 201910579757.3, entitled "Device imaging method and apparatus, storage medium, and electronic device", filed with the Chinese Patent Office on June 28, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing technology, and in particular to a device imaging method and apparatus, a storage medium, and an electronic device.
Background
At present, users usually shoot images with electronic devices equipped with cameras, and can use these devices to record, anytime and anywhere, the things happening around them and the scenery they see. However, due to hardware limitations of the camera itself, the images it captures are often relatively clear in the middle area while relatively blurry in the edge areas.
Summary
The embodiments of the present application provide a device imaging method and apparatus, a storage medium, and an electronic device, which can improve the quality of the entire imaged image captured by the electronic device.
In a first aspect, an embodiment of the present application provides a device imaging method applied to an electronic device, where the electronic device includes a first camera of a first type and a plurality of second cameras of a second type, and the device imaging method includes:
when an image shooting request of a scene to be shot is received, identifying a target focus area of the scene to be shot;
focusing and shooting the scene to be shot based on the target focus area through the first camera, and setting the first image obtained by shooting as a base image;
adjusting the shooting angles of the plurality of second cameras so that the target focus area is located within the shooting area where all the second cameras and the first camera overlap simultaneously;
focusing and shooting the scene to be shot based on the target focus area through the plurality of second cameras to obtain a plurality of second images;
performing image synthesis processing on the plurality of second images and the base image, and setting the synthesized image as the imaged image of the image shooting request.
In a second aspect, an embodiment of the present application provides a device imaging device applied to an electronic device, where the electronic device includes a first camera of a first type and a plurality of second cameras of a second type, and the device imaging device includes:
an area recognition module, configured to recognize a target focus area of a scene to be shot when an image shooting request of the scene to be shot is received;
a base acquisition module, configured to focus and shoot the scene to be shot based on the target focus area through the first camera, and to set the first image obtained by shooting as a base image;
an angle adjustment module, configured to adjust the shooting angles of the plurality of second cameras so that the target focus area is located within the shooting area where all the second cameras overlap simultaneously;
an auxiliary image acquisition module, configured to focus and shoot the scene to be shot based on the target focus area through the plurality of second cameras to obtain a plurality of second images;
an image imaging module, configured to perform image synthesis processing on the plurality of second images and the base image, and to set the synthesized image as the imaged image of the image shooting request.
In a third aspect, an embodiment of the present application provides a storage medium on which a computer program is stored; when the computer program runs on a computer including a first camera of a first type and a plurality of second cameras of a second type, the computer is caused to execute:
when an image shooting request of a scene to be shot is received, identifying a target focus area of the scene to be shot;
focusing and shooting the scene to be shot based on the target focus area through the first camera, and setting the first image obtained by shooting as a base image;
adjusting the shooting angles of the plurality of second cameras so that the target focus area is located within the shooting area where all the second cameras and the first camera overlap simultaneously;
focusing and shooting the scene to be shot based on the target focus area through the plurality of second cameras to obtain a plurality of second images;
performing image synthesis processing on the plurality of second images and the base image, and setting the synthesized image as the imaged image of the image shooting request.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a first camera of a first type, and a plurality of second cameras of a second type, where the memory stores a computer program, and the processor, by invoking the computer program, is configured to execute:
when an image shooting request of a scene to be shot is received, identifying a target focus area of the scene to be shot;
focusing and shooting the scene to be shot based on the target focus area through the first camera, and setting the first image obtained by shooting as a base image;
adjusting the shooting angles of the plurality of second cameras so that the target focus area is located within the shooting area where all the second cameras and the first camera overlap simultaneously;
focusing and shooting the scene to be shot based on the target focus area through the plurality of second cameras to obtain a plurality of second images;
performing image synthesis processing on the plurality of second images and the base image, and setting the synthesized image as the imaged image of the image shooting request.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of the device imaging method provided by an embodiment of the present application.
FIG. 2 is a schematic diagram of an arrangement of the first camera and the second cameras in an embodiment of the present application.
FIG. 3 is a schematic diagram of an operation for triggering the input of an image shooting request in an embodiment of the present application.
FIG. 4 is a schematic diagram comparing the shooting areas of the second cameras and the first camera in an embodiment of the present application.
FIG. 5 is a schematic diagram comparing the image content of the second images and the base image in an embodiment of the present application.
FIG. 6 is a schematic diagram of the overlapping area where all the second images and the base image overlap simultaneously in an embodiment of the present application.
FIG. 7 is a schematic diagram of another arrangement of the first cameras and the second cameras in an embodiment of the present application.
FIG. 8 is another schematic flowchart of the device imaging method provided by an embodiment of the present application.
FIG. 9 is a schematic structural diagram of the device imaging device provided by an embodiment of the present application.
FIG. 10 is a schematic structural diagram of the electronic device provided by an embodiment of the present application.
FIG. 11 is another schematic structural diagram of the electronic device provided by an embodiment of the present application.
Detailed Description
Reference is made to the drawings, in which the same reference numerals represent the same components. The principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the present application and should not be regarded as limiting other specific embodiments not detailed herein.
The embodiments of the present application first provide a device imaging method applied to an electronic device. The execution subject of the device imaging method may be the device imaging device provided by the embodiments of the present application, or an electronic device integrated with the device imaging device. The device imaging device may be implemented in hardware or software, and the electronic device may be a device that is equipped with a processor and has processing capability, such as a smartphone, tablet computer, handheld computer, notebook computer, or desktop computer.
The present application provides a device imaging method applied to an electronic device, where the electronic device includes a first camera of a first type and a plurality of second cameras of a second type, and the device imaging method includes:
when an image shooting request of a scene to be shot is received, identifying a target focus area of the scene to be shot;
focusing and shooting the scene to be shot based on the target focus area through the first camera, and setting the first image obtained by shooting as a base image;
adjusting the shooting angles of the plurality of second cameras so that the target focus area is located within the shooting area where all the second cameras and the first camera overlap simultaneously;
focusing and shooting the scene to be shot based on the target focus area through the plurality of second cameras to obtain a plurality of second images;
performing image synthesis processing on the plurality of second images and the base image, and setting the synthesized image as the imaged image of the image shooting request.
In an embodiment, identifying the target focus area of the scene to be shot includes:
obtaining a preview image of the scene to be shot, and inputting the preview image into a pre-trained visual attention model to recognize a visually salient area;
setting the recognized visually salient area as the target focus area of the scene to be shot.
In an embodiment, obtaining the preview image of the scene to be shot includes:
extracting, from a preset image cache queue, the preview image that was cached before the image shooting request was received and is closest to the moment the image shooting request was received.
In an embodiment, identifying the target focus area of the scene to be shot includes:
obtaining focus area selection information carried in the image shooting request;
setting the area indicated by the focus area selection information as the target focus area of the scene to be shot.
In an embodiment, the electronic device includes two first cameras, and focusing and shooting the scene to be shot based on the target focus area through the first camera and setting the first image obtained by shooting as a base image includes:
focusing and shooting the scene to be shot based on the target focus area through the two first cameras to obtain at least two first images;
performing image synthesis processing on the at least two first images, and setting the synthesized image as the base image.
In an embodiment, the electronic device further includes an electrochromic component covering the first camera and/or the second camera, and before identifying the target focus area of the scene to be shot when the image shooting request of the scene to be shot is received, the method further includes:
switching the electrochromic component to a transparent state;
after performing image synthesis processing on the plurality of second images and the base image and setting the synthesized image as the imaged image of the image shooting request, the method further includes:
switching the electrochromic component to a colored state to hide the first camera and/or the second camera.
In an embodiment, before focusing and shooting the scene to be shot based on the target focus area through the first camera and setting the first image obtained by shooting as a base image, the method further includes:
detecting whether the electronic device is currently in a shaking state;
if the electronic device is not currently in a shaking state, focusing and shooting the scene to be shot based on the target focus area through the first camera, and setting the first image obtained by shooting as the base image.
In an embodiment, before focusing and shooting the scene to be shot based on the target focus area through the first camera and setting the first image obtained by shooting as a base image, the method further includes:
if the electronic device is not currently in a shaking state, detecting whether the scene to be shot is in a static state;
if the scene to be shot is in a static state, focusing and shooting the scene to be shot based on the target focus area through the first camera, and setting the first image obtained by shooting as the base image.
In an embodiment, focusing and shooting the scene to be shot based on the target focus area through the first camera and setting the first image obtained by shooting as a base image includes:
continuously focusing and shooting the scene to be shot based on the target focus area through the first camera to obtain a plurality of first images;
performing image synthesis processing on the plurality of first images, and setting the synthesized image as the base image.
请参照图1,图1为本申请实施例提供的设备成像方法的流程示意图。该设备成像方法应用于本申请实施例提供的电子设备,如图1所示,本申请实施例提供的设备成像方法的流程可以如下:
在101中,当接收到对待拍摄场景的图像拍摄请求时,识别待拍摄场景的目标对焦区域。
应当说明的是,在本申请实施例中,电子设备包括第一类型的第一摄像头和多个第二类型的第二摄像头。其中,第一摄像头为标准类型的摄像头,或者说视场角为45度左右的摄像头,第二摄像头为长焦类型的摄像头,或者说视场角为40度以内的摄像头。
比如,请参照图2,电子设备包括一个第一摄像头和四个第二摄像头,分别为第二摄像头A、第二摄像头B、第二摄像头C以及第二摄像头D,各第二摄像头分别通过微云台(图中未示出)设置在电子设备上,使得各第二摄像头的拍摄角度可调。
本申请实施例中,电子设备首先接收输入的图像拍摄请求,其中,图像拍摄请求可由用户直接输入, 用于指示电子设备对待拍摄场景进行拍摄。其中,待拍摄场景即电子设备在接收到输入的图像拍摄请求时,第一摄像头所对准的场景,其中包括但不限于人、物以及景等。
比如,用户在操作电子设备启动拍摄类应用(比如电子设备的系统应用“相机”),并通过移动电子设备,使得电子设备的第一摄像头以及第二摄像头对准待拍摄场景之后,可以通过点击“相机”预览界面提供的“拍照”按键(为虚拟按键),向电子设备输入图像拍摄请求,如图3所示。
又比如,用户在操作电子设备启动拍摄类应用,并通过移动电子设备,使得电子设备的第一摄像头以及第二摄像头对准待拍摄场景之后,可以说出语音指令“拍照”,向电子设备输入图像拍摄请求。
相应的,电子设备在接收到输入的图像拍摄请求之后,判定用户当前存在对待拍摄场景的拍摄需求,此时识别待拍摄场景的目标对焦区域,其中,目标对焦区域即电子设备响应于接收到的图像拍摄请求,执行图像拍摄操作时的对焦区域。
在102中,通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像。
在识别出待拍摄场景的目标对焦区域之后,电子设备即通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将此时拍摄得到的图像记为第一图像,并将该第一图像设为基底图像。
在103中,调整第二摄像头的拍摄角度,使得目标对焦区域位于所有第二摄像头与第一摄像头同时重叠的拍摄区域之内。
其中,电子设备在识别出待拍摄场景的目标对焦区域之后,除了通过第一摄像头拍摄得到基底图像之外,还根据识别出的目标对焦区域,利用各第二摄像头设置的微云台,对各第二摄像头的拍摄角度进行调整,使得目标对焦区域位于所有第二摄像头与第一摄像头同时重叠的拍摄区域之内。
比如,请参照图4,电子设备包括一个第一摄像头和四个第二摄像头,分别为第二摄像头A、第二摄像头B、第二摄像头C以及第二摄像头D,假设识别出待拍摄场景的目标对焦区域为待拍摄场景的中间区域(对应于第一摄像头的拍摄区域的中间区域),电子设备根据识别出的目标对焦区域对各第二摄像头的拍摄角度进行调整,使得第二摄像头A的拍摄区域a对应第一摄像头的拍摄区域的左上角,第二摄像头B的拍摄区域b对应第一摄像头的拍摄区域的右上角,第二摄像头C的拍摄区域c对应第一摄像头的拍摄区域的左下角,第二摄像头D的拍摄区域d对应第一摄像头的拍摄区域的右下角,由此,使得目标对焦区域位于所有第二摄像头与第一摄像头同时重叠的拍摄区域,即第一摄像头的拍摄区域的中间区域。此外,如图4所示,各第二摄像头的拍摄区域还与第一摄像头的拍摄区域的边缘部分重叠。
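作为上述拍摄角度调整过程的一个示意，下面给出一段简化的 Python 草图：在小角度近似下，把分配给某个第二摄像头的角落区域中心相对第一摄像头画面中心的像素偏移，按第一摄像头的视场角折算为该第二摄像头微云台的偏转角。其中的函数名、视场角取值以及线性折算方式均为说明用的假设，并非本申请实施例的正式实现。

```python
def gimbal_angles_for_corner(corner_center, frame_w, frame_h,
                             first_fov_h=45.0, first_fov_v=34.0):
    """估算某个第二摄像头微云台所需的水平/垂直偏转角（度），使其对准分配到的角落区域。

    corner_center: 该角落区域中心在第一摄像头画面中的像素坐标 (cx, cy)；
    first_fov_h / first_fov_v: 第一摄像头的水平/垂直视场角，示意性取值。
    """
    cx, cy = corner_center
    dx = cx - frame_w / 2.0                 # 相对第一摄像头画面中心的横向像素偏移
    dy = cy - frame_h / 2.0                 # 相对画面中心的纵向像素偏移
    pan = dx / frame_w * first_fov_h        # 小角度近似：按视场角比例折算水平偏转角
    tilt = dy / frame_h * first_fov_v       # 同理折算垂直偏转角
    return pan, tilt
```

例如，图4中负责左上角的第二摄像头A，可将其分配区域的中心坐标代入上述函数，得到微云台需要偏转的大致角度。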
应当说明的是,本申请实施例中对102和103的执行顺序不做限制,可以是在执行完成102后再执行103,可以是在执行完成103之后再执行102,还可以是同时执行102和103。
在104中,通过所有第二摄像头基于目标对焦区域对待拍摄场景对焦拍摄,得到多个第二图像。
在完成对第二摄像头拍摄角度调整,使得目标对焦区域位于所有第二摄像头与第一摄像头同时重叠的拍摄区域之内后,电子设备进一步通过所有第二摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将第二摄像头拍摄得到的图像记为第二图像,由此拍摄得到多个第二图像。
应当说明的是，在通过所有第二摄像头对待拍摄场景进行拍摄时，第二摄像头与第一摄像头采用相同的拍摄参数（比如对比度、亮度）进行拍摄，由此，第二摄像头能够获得与第一摄像头相同的拍摄效果。
比如,电子设备包括四个第二摄像头,分别为第二摄像头A、第二摄像头B、第二摄像头C以及第二摄像头D,且电子设备通过调整这四个第二摄像头的拍摄角度,使得这四个第二摄像头的拍摄区域分别对应第一摄像头拍摄区域的左上角、右上角、左下角以及右下角,通过这四个第二摄像头将拍摄得到四个第二图像,如图5所示,第二摄像头A拍摄的第二图像A的图像内容对应基底图像左上角的图像内容,第二摄像头B拍摄的第二图像B的图像内容对应基底图像右上角的图像内容,第二摄像头C拍摄的第二图像C的图像内容对应基底图像左下角的图像内容,第二摄像头D拍摄的第二图像D的图像内容对应基底图像右下角的图像内容,这样,所有第二图像以及第一图像均同时覆盖了目标对焦区域的图像内容,且不同第二图像的图像内容即覆盖了基底图像中边缘区域的不同位置。
在105中,对多个第二图像与基底图像进行图像合成处理,将合成得到的图像设为图像拍摄请求的成像图像。
本申请实施例中,电子设备在通过第一摄像头拍摄得到基底图像,以及通过多个第二摄像头拍摄得到多个第二图像之后,将拍摄得到的多个第二图像与基底图像对齐。
基于对齐后的基底图像和第二图像,对于基底图像和第二图像的重叠部分,计算重叠的各像素点的平均像素值,比如,电子设备除了通过第一摄像头拍摄得到基底图像之外,还通过四个第二摄像头拍摄得到四个第二图像,请参照图6,所有第二图像与基底图像共同的重叠区域位于基底图像的中间区域(对应于待拍摄场景的目标对焦区域),这样,对于图6所示的重叠区域,某位置的像素点在五个图像(即基底图像和四个第二图像)中的像素值分别为0.8、0.9、1.1、1.2和1,则可计算得到该位置的像素点的平均像素值为1。
之后,根据基底图像中对应的各位置像素点所得到的平均像素值得到合成图像,比如,可以将基底图像的各像素点的像素值相应调整为计算得到的各平均像素值,从而得到合成图像;又比如,还可以根据计算得到各平均像素值,生成一幅新的图像,即合成图像。
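下面给出上述“对齐后按重叠像素取平均”这一合成过程的一个可运行的 Python/NumPy 草图。这里假设各第二图像已经与基底图像对齐，并且每个第二图像在基底图像坐标系中的放置位置（左上角偏移）已知；函数名与参数均为说明用的假设，并非本申请实施例的正式实现。

```python
import numpy as np

def synthesize_with_average(base, placed_second_images):
    """将多个已对齐的第二图像与基底图像按重叠像素取平均进行合成。

    base:                  基底图像，形状为 (H, W) 或 (H, W, C) 的数组。
    placed_second_images:  列表，元素为 (img, (y, x))，其中 (y, x) 为该第二图像
                           对齐后在基底图像坐标系中的左上角位置（示意性假设）。
    """
    acc = base.astype(np.float64)                      # 像素值累加器，初始为基底图像
    cnt = np.ones(base.shape[:2], dtype=np.float64)    # 每个位置参与平均的图像数量

    for img, (y, x) in placed_second_images:
        h, w = img.shape[:2]
        acc[y:y + h, x:x + w] += img                   # 重叠区域像素值累加
        cnt[y:y + h, x:x + w] += 1                     # 对应位置的计数加一

    if acc.ndim == 3:                                  # 彩色图像时扩展计数的通道维
        cnt = cnt[..., None]
    return acc / cnt                                   # 重叠像素取平均，得到合成图像
```

按照图6的例子，若四个第二图像分别覆盖基底图像的四个角落，则中间重叠区域的每个像素由五个图像取平均（如0.8、0.9、1.1、1.2和1的平均值为1），边缘区域的像素则由基底图像与对应的第二图像取平均。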
本申请实施例中,电子设备在对拍摄得到的多个第二图像与基底图像进行图像合成处理之后,将合成得到的合成图像设为图像拍摄请求的成像图像,至此,电子设备即完成了响应接收到的图像拍摄请求的一次完整拍摄操作。
比如，请继续参照图6，图6还示出了基底图像到成像图像的清晰度变化，其中，X轴表示由图像边缘区域到中心区域（对应于待拍摄场景的目标对焦区域），再由中心区域到边缘区域的位置变化，Y轴表示跟随X轴变化的清晰度。可以看出，在基底图像中，中心区域的清晰度最高，随着中心区域向边缘区域的扩散，清晰度逐渐降低，且变化较为剧烈；而在成像图像中，中心区域的清晰度最高，且相较于基底图像，成像图像的边缘区域的清晰度被整体提高，随着中心区域向边缘区域的扩散，清晰度虽然也逐渐降低，但变化更为平滑，使得成像图像的整体图像质量得以提高。
由上可知,本申请实施例中,电子设备包括第一类型的第一摄像头和多个第二类型的第二摄像头,其中,电子设备根据接收到的图像拍摄请求,识别出待拍摄场景的目标对焦区域,再通过第一摄像头基 于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像,以及调整第二摄像头的拍摄角度,使得目标对焦区域位于所有第二摄像头与第一摄像头同时重叠的拍摄区域之内,再通过所有第二摄像头基于目标对焦区域对待拍摄场景对焦拍摄,得到多个第二图像,最后对多个第二图像与基底图像进行图像合成处理,将合成得到的图像设为图像拍摄请求的成像图像。由此,最终得到的成像图像不同区域的清晰度都得以提高,且图像整体的清晰度变化更为平缓,提高了整张成像图像的质量。
在一实施例中,“识别待拍摄场景的目标对焦区域”,包括:
(1)获取待拍摄场景的预览图像,将预览图像输入到预先训练的视觉注意模型进行视觉显著区域的识别;
(2)将识别出的视觉显著区域设为待拍摄场景的目标对焦区域。
应当说明的是，本申请实施例中，电子设备预先设置有图像缓存队列，该图像缓存队列可以为定长队列，也可以为变长队列。比如，该图像缓存队列为定长队列时，能够缓存第一摄像头实时采集的8个图像，用作待拍摄场景的预览图像进行展示。
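以下是该定长图像缓存队列的一个极简 Python 草图（假设定长为8帧，函数名为说明用的假设）：

```python
from collections import deque

preview_cache = deque(maxlen=8)   # 定长队列：只保留第一摄像头最近采集的8帧预览图像

def on_preview_frame(frame):
    """第一摄像头每产生一帧预览图像时调用；队列满时最旧的一帧自动被挤出。"""
    preview_cache.append(frame)

def latest_preview():
    """返回距离图像拍摄请求接收时刻最近的一帧预览图像（队列为空时返回 None）。"""
    return preview_cache[-1] if preview_cache else None
```

其中 latest_preview 即对应于“接收到图像拍摄请求之前且距离接收时刻最近的预览图像”。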
此外，电子设备还预先训练有视觉注意模型，该视觉注意模型为深度学习网络模型，利用预先标记有显著区域的样本图像集合训练得到，从而能够模拟人眼视觉特征，对图像中的视觉显著区域进行识别。比如，通常认为人物要比天空、草地以及建筑物的显著性更高，当一张图像中包括人物、天空、草地以及建筑物时，通常该人物对应的区域将被视觉注意模型识别为视觉显著区域。
本领域普通技术人员可以理解的是,用户通常更愿意将场景中的视觉显著区域作为对焦区域进行对焦,因此,电子设备在识别待拍摄场景的目标对焦区域时,可以首先从图像缓存队列中获取到待拍摄场景的预览图像,比如,从图像缓存队列中提取接收到图像拍摄请求之前且距离图像拍摄请求接收时刻最近的预览图像;然后,将该预览图像输入到预先训练的视觉注意模型进行视觉显著区域的识别,并将识别出的视觉显著区域设为待拍摄场景的目标对焦区域。
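本申请实施例使用的是预先训练的深度学习视觉注意模型，具体网络结构未在文中给出。作为一个可运行的替代性示意，下面用经典的谱残差显著性方法近似“识别视觉显著区域并将其设为目标对焦区域”的过程；其中的阈值、平滑参数与函数名均为假设，并非本申请实施例的正式实现。

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def salient_focus_region(gray, thresh_ratio=0.6):
    """用谱残差法近似视觉显著性，返回显著区域的外接矩形 (x, y, w, h)。

    gray: 预览图像的灰度形式（二维浮点数组）；thresh_ratio 为示意性阈值。
    """
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)                        # 对数幅度谱
    phase = np.angle(f)                                       # 相位谱
    residual = log_amp - uniform_filter(log_amp, size=3)      # 谱残差
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=3)             # 平滑显著图
    saliency /= saliency.max() + 1e-8                         # 归一化到 [0, 1]

    mask = saliency > thresh_ratio                            # 取显著性较高的像素
    ys, xs = np.nonzero(mask)
    if xs.size == 0:                                          # 无显著区域时退化为整幅图
        h, w = gray.shape
        return 0, 0, w, h
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
```

实际实现中，可将该函数输出的矩形直接作为目标对焦区域，供第一摄像头和各第二摄像头对焦使用。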
在一实施例中,“识别待拍摄场景的目标对焦区域”,包括:
(1)获取图像拍摄请求中携带的对焦区域选择信息;
(2)将对焦区域选择信息所指示的区域设为待拍摄场景的目标对焦区域。
本申请实施例还提供另外一种可选的识别待拍摄场景的目标对焦区域的方案。其中，输入的图像拍摄请求还携带有对焦区域选择信息。比如，电子设备在基于预览界面接收用户输入的图像拍摄请求时，可以接收用户基于预览界面的触摸操作，将该触摸操作的接收位置所对应的区域作为用户期望对焦的区域，相应的，将该触摸操作的位置信息作为对焦区域选择信息。由此，当电子设备接收到用户对预览界面的触摸操作，并将触摸操作的位置信息作为对焦区域选择信息之后，若接收到对预览界面中“拍照”按键的点击操作，则生成携带前述对焦区域选择信息的图像拍摄请求。
这样,电子设备在识别待拍摄场景的目标对焦区域时,首先获取到图像拍摄请求中携带的对焦区域选择信息,然后将该对焦区域选择信息所指示的区域(也即是用户期望对焦的区域)设为待拍摄场景的目标对焦区域。
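该方案的核心是把触摸位置映射为一个对焦矩形，下面是一个极简的 Python 示意（对焦框边长 box 等参数均为说明用的假设）：

```python
def focus_region_from_touch(tx, ty, frame_w, frame_h, box=200):
    """将预览界面上的触摸位置 (tx, ty) 映射为目标对焦区域矩形 (x, y, w, h)。

    box 为示意性的对焦框边长，坐标均以预览画面像素为单位。
    """
    box = min(box, frame_w, frame_h)                # 对焦框不超过画面尺寸
    x = min(max(tx - box // 2, 0), frame_w - box)   # 以触摸点为中心，并裁剪到画面范围内
    y = min(max(ty - box // 2, 0), frame_h - box)
    return x, y, box, box
```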
在一实施例中,电子设备包括两个第一摄像头,“通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像”,包括:
(1)通过两个第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,得到至少两个第一图像;
(2)对至少两个第一图像进行图像合成处理,将合成得到的图像设为基底图像。
本申请实施例中,电子设备包括两个标准类型的第一摄像头。比如,请参照图7,电子设备包括两个第一摄像头,分别为第一摄像头E和第一摄像头F,且第一摄像头E环绕设置有四个第二摄像头。
在通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像时,电子设备可以按照相同的拍摄参数,通过两个第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,得到图像内容相同的至少两个第一图像。然后再对拍摄得到的至少两个第一图像进行图像合成处理,将合成得到的图像设为基底图像。
其中，电子设备在对拍摄得到的至少两个第一图像进行图像合成处理时，首先将拍摄得到的至少两个第一图像对齐，再计算拍摄得到的至少两个第一图像重叠的各像素点的平均像素值，最后根据计算得到的各平均像素值得到拍摄得到的至少两个第一图像的合成图像，将该合成图像设为基底图像。
相较于直接将第一摄像头拍摄得到的第一图像设为基底图像,本申请实施例中能够获得更高清晰度的基底图像,使得最终得到的成像图像也具有更高的清晰度。
在一实施例中,“通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像”,包括:
(1)通过第一摄像头基于目标对焦区域对待拍摄场景连续对焦拍摄,得到多个第一图像;
(2)对多个第一图像进行图像合成处理,将合成得到的图像设为基底图像。
本申请实施例中,在通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像时,电子设备可以通过第一摄像头基于目标对焦区域对待拍摄场景连续对焦拍摄,得到多个第一图像。其中,电子设备可以按照设定的拍摄帧率,在单位时长内通过第一摄像头对待拍摄场景进行拍摄,从而实现对待拍摄场景的连续拍摄。比如,假设第一摄像头的拍摄帧率为15FPS,则在单位时长1秒内,电子设备将拍摄得到待拍摄场景的15个图像,由于这些图像均对应于同一待拍摄场景,且各图像间的拍摄时刻的间隔较小,可以将这些图像的图像内容看做相同。
在拍摄得到多个待拍摄场景的第一图像之后,电子设备从中选取出清晰度最高的第一图像,将其它第一图像与该清晰度最高的第一图像对齐,再计算多个第一图像重叠的各像素点的平均像素值,最后根据计算得到的各平均像素值得到多个第一图像的合成图像,将该合成图像设为基底图像。
相较于直接将第一摄像头拍摄得到的第一图像设为基底图像,本申请实施例中能够获得更高清晰度的基底图像,使得最终得到的成像图像也具有更高的清晰度。
可选的,在通过第一摄像头基于目标对焦区域对待拍摄场景连续对焦拍摄,得到多个第一图像之后,还包括:
从拍摄得到的多个第一图像中选取清晰度最高的图像作为基底图像,用作与第二摄像头拍摄得到的第二图像进行图像合成处理以得到成像图像。
通常来说,图像越清晰,其对比度越高。因此,可以使用图像的对比度来衡量图像的清晰度。
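据此，可以用如下示意代码从连拍得到的多个第一图像中选出对比度（清晰度）最高的一帧作为基底图像；以灰度标准差衡量对比度只是其中一种简单做法，函数名均为说明用的假设。

```python
import numpy as np

def contrast(gray):
    """以灰度标准差作为对比度的一种简单度量：图像越清晰，对比度通常越高。"""
    return float(np.std(gray))

def pick_sharpest(first_images):
    """从连拍得到的多个第一图像（灰度数组列表）中选出对比度最高的一帧作为基底图像。"""
    return max(first_images, key=contrast)
```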
在一实施例中,电子设备还包括覆盖第一摄像头和/或第二摄像头的电致变色组件,“当接收到对待拍摄场景的图像拍摄请求时,识别待拍摄场景的目标对焦区域”之前,还包括:
切换电致变色组件至透明状态;
“对多个第二图像与基底图像进行图像合成处理,将合成得到的图像设为图像拍摄请求的成像图像”之后,还包括:
将电致变色组件切换至着色状态,以隐藏第一摄像头和/或第二摄像头。
本申请实施例中,为了提升电子设备的完整性,在第一摄像头和/或第二摄像头之上覆盖电致变色组件,从而在需要时利用电致变色组件对摄像头进行隐藏。
以下首先对电致变色组件的工作原理进行简单介绍。
电致变色是指材料的颜色/透明度在外加电场的作用下发生稳定、可逆的变化的现象。具有电致变色性能的材料可以称为电致变色材料。而本申请实施例中的电致变色组件,就是利用电致变色材料制成。
其中,电致变色组件可以包括层叠设置的两层导电层,以及位于两个导电层之间的变色层、电解质层、离子存储层。比如,电致变色组件的两个透明导电层之上未施加电压(或者说,电压为0V)时,该电致变色组件将呈透明状态,而当施加在其两个透明导电层之间的电压由0V变为3V时,该电致变色组件将呈黑色,当施加在其两个透明导电层之间的电压由3V变为-3V时,该电致变色组件将由黑色变为透明,等等。
这样,利用电致变色组件颜色可调的特性,可以对第一摄像头和/或第二摄像头进行隐藏。
本申请实施例中,电子设备可以在启动拍摄类应用的同时,将覆盖于第一摄像头和/或第二摄像头之上的电致变色组件切换至透明状态,使得第一摄像头和第二摄像头能够对待拍摄场景进行拍摄。
而在通过第一摄像头获取到基底图像,以及通过多个第二摄像头拍摄得到多个第二图像,并最终合成得到成像图像且退出启动的拍摄类应用之后,电子设备即将电致变色组件切换至着色状态,从而隐藏第一摄像头和/或第二摄像头。
比如,电子设备设置有同时覆盖全部第一摄像头和第二摄像头的电致变色组件,且电子设备设置有第一摄像头和第二摄像头的一面的颜色为黑色,则电子设备在未启动拍摄类应用时,保持电致变色组件处于黑色的着色状态,使得第一摄像头和第二摄像头被隐藏;而在启动拍摄类应用时,同步将电致变色组件切换至透明状态,使得电子设备能够通过第一摄像头和第二摄像头进行拍摄;而在最终合成得到成像图像且退出启动的拍摄类应用之后,电子设备将电致变色组件切换至黑色的着色状态,使得第一摄像头和第二摄像头再次被隐藏。
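电致变色组件的切换逻辑可以用如下示意代码表达；其中 set_voltage 表示假设存在的底层驱动接口，电压取值沿用上文示例中的 3V/-3V，均为说明用的假设，并非某一实际驱动的 API。

```python
TRANSPARENT, COLORED = "transparent", "colored"

class ElectrochromicCover:
    """覆盖摄像头的电致变色组件的示意控制器；set_voltage 为假设存在的底层驱动回调。"""

    def __init__(self, set_voltage):
        self._set_voltage = set_voltage   # 假设：向组件两个透明导电层之间施加电压的驱动函数
        self.state = COLORED              # 初始保持着色状态，隐藏摄像头

    def show_cameras(self):
        """启动拍摄类应用时调用：切换至透明状态，使摄像头可以拍摄。"""
        self._set_voltage(-3.0)           # 示意电压，对应文中由着色（黑色）变为透明
        self.state = TRANSPARENT

    def hide_cameras(self):
        """合成得到成像图像并退出拍摄类应用后调用：切换回着色状态，隐藏摄像头。"""
        self._set_voltage(3.0)            # 示意电压，对应文中由透明变为黑色
        self.state = COLORED
```

启动拍摄类应用时调用 show_cameras，合成得到成像图像并退出应用后调用 hide_cameras，即对应上文描述的切换时机。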
在一实施例中,“通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像”之前,还包括:
(1)检测当前是否处于抖动状态;
(2)若当前不处于抖动状态,则通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像。
根据以上实施例中的相关描述，本申请实施例利用不同摄像头拍摄得到的图像来最终合成得到成像图像，若在拍摄过程中电子设备处于抖动状态，将导致不同摄像头拍摄得到的图像的图像内容存在明显差异，影响成像图像的合成效果。
因此，在本申请实施例中，电子设备在识别出待拍摄场景的目标对焦区域时，首先判断自身当前是否处于抖动状态。其中，电子设备可以通过多种不同方式进行抖动状态的判断，比如，电子设备可以判断当前在各方向的速度是否均大于预设速度，若是，则判定当前处于抖动状态，若否，则判定当前不处于抖动状态（或者说，稳定状态）；又比如，电子设备可以判断当前在各方向的位移是否均大于预设位移，若是，则判定当前处于抖动状态，若否，则判定当前不处于抖动状态。此外，还可以通过本申请实施例未列出的方式进行抖动状态的判断，本申请实施例对此不做具体限制。
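以下给出“各方向速度/位移是否均大于预设阈值”这一判据的一个简化 Python 示意：对加速度采样做两次简单积分来估计速度与位移，阈值、采样间隔以及积分方式均为说明用的假设，实际产品通常直接使用系统提供的运动传感器接口。

```python
def is_shaking(accel_samples, dt=0.01, speed_threshold=0.05, shift_threshold=0.02):
    """根据一段时间内的加速度采样，粗略判断电子设备当前是否处于抖动状态。

    accel_samples: 形如 [(ax, ay, az), ...] 的各方向加速度采样序列（示意性假设）；
    dt 为采样间隔，两个阈值分别对应文中的预设速度与预设位移，均为假设取值。
    """
    vx = vy = vz = 0.0      # 对加速度做一次积分，估计各方向速度
    sx = sy = sz = 0.0      # 再积分一次，估计各方向位移
    for ax, ay, az in accel_samples:
        vx += ax * dt
        vy += ay * dt
        vz += az * dt
        sx += vx * dt
        sy += vy * dt
        sz += vz * dt

    speeds_exceed = all(abs(v) > speed_threshold for v in (vx, vy, vz))
    shifts_exceed = all(abs(s) > shift_threshold for s in (sx, sy, sz))
    # 对应文中两种示例判据：各方向速度均大于预设速度，或各方向位移均大于预设位移
    return speeds_exceed or shifts_exceed
```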
在判定当前不处于抖动状态时,电子设备即根据识别出的待拍摄场景的目标对焦区域,通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像,具体可参照以上实施例中的相关描述,此处不再赘述。
在一实施例中,“通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像”之前,还包括:
(1)若当前不处于抖动状态,则检测待拍摄场景是否处于静止状态;
(2)若待拍摄场景处于静止状态,则通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像。
根据以上相关描述，本领域普通技术人员可以理解的是，在电子设备不处于抖动状态的情况下，若待拍摄场景不处于静止状态（比如，待拍摄场景包括运动物体），电子设备通过第一摄像头和第二摄像头拍摄得到的图像的图像内容也可能存在较大的差异。
因此,在本申请实施例中,电子设备在判定其自身当前不处于抖动状态时,并不立即通过第一摄像头对待拍摄场景进行拍摄,而是进一步检测待拍摄场景是否处于静止状态,若检测到待拍摄场景处于静止状态,再根据识别出的待拍摄场景的目标对焦区域,通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像,具体可参照以上实施例中的相关描述,此处不再赘述。
其中，对于如何判断待拍摄场景是否处于静止状态，可由本领域技术人员根据实际需要选取合适的方式进行判断，本申请实施例对此不做具体限制，比如，可以采用光流法、残差法等来判断待拍摄场景是否处于静止状态。
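下面以残差法（帧间差分）为例，给出判断待拍摄场景是否处于静止状态的一个简化 Python 示意；两个阈值均为说明用的假设，也可以改用光流法统计运动矢量的幅值。

```python
import numpy as np

def scene_is_static(prev_frame, curr_frame, pixel_thresh=10, ratio_thresh=0.02):
    """用相邻两帧预览图像的残差（帧间差分）粗略判断待拍摄场景是否处于静止状态。

    prev_frame / curr_frame: 已对齐的灰度帧（如 uint8 数组）；两个阈值均为假设取值。
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving_ratio = float(np.mean(diff > pixel_thresh))   # 发生明显变化的像素所占比例
    return moving_ratio < ratio_thresh                    # 变化比例足够小则认为场景静止
```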
请参照图8，图8为本申请实施例提供的设备成像方法的另一种流程示意图，该设备成像方法应用于本申请实施例提供的电子设备，例如该电子设备包括两个第一类型的第一摄像头以及四个第二类型的第二摄像头，该设备成像方法的流程可以包括：
在201中,当接收到对待拍摄场景的图像拍摄请求时,电子设备识别待拍摄场景的目标对焦区域。
应当说明的是，在本申请实施例中，电子设备包括第一类型的第一摄像头和多个第二类型的第二摄像头。其中，第一摄像头为标准类型的摄像头，或者说视场角为45度左右的摄像头，第二摄像头为长焦类型的摄像头，或者说视场角为40度以内的摄像头。请参照图7，电子设备包括两个第一摄像头和四个第二摄像头，分别为第一摄像头E、第一摄像头F、第二摄像头A、第二摄像头B、第二摄像头C以及第二摄像头D，且各第二摄像头分别通过微云台（图中未示出）设置在电子设备上，使得各第二摄像头的拍摄角度可调。
本申请实施例中,电子设备首先接收输入的图像拍摄请求,其中,图像拍摄请求可由用户直接输入,用于指示电子设备对待拍摄场景进行拍摄。其中,待拍摄场景即电子设备在接收到输入的图像拍摄请求时,第一摄像头所对准的场景,其中包括但不限于人、物以及景等。
比如,用户在操作电子设备启动拍摄类应用(比如电子设备的系统应用“相机”),并通过移动电子设备,使得电子设备的第一摄像头以及第二摄像头对准待拍摄场景之后,可以通过点击“相机”预览界面提供的“拍照”按键(为虚拟按键),向电子设备输入图像拍摄请求,如图3所示。
又比如,用户在操作电子设备启动拍摄类应用,并通过移动电子设备,使得电子设备的第一摄像头以及第二摄像头对准待拍摄场景之后,可以说出语音指令“拍照”,向电子设备输入图像拍摄请求。
相应的,电子设备在接收到输入的图像拍摄请求之后,判定用户当前存在对待拍摄场景的拍摄需求,此时识别待拍摄场景的目标对焦区域,其中,目标对焦区域即电子设备响应于接收到的图像拍摄请求,执行图像拍摄操作时的对焦区域。
在202中,电子设备通过两个第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,得到两个第一图像。
电子设备在识别出待拍摄场景的目标对焦区域之后，可以按照相同的拍摄参数，通过两个第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄，得到图像内容相同的两个第一图像。
在203中,电子设备对拍摄得到的两个第一图像进行图像合成处理,将合成得到的图像设为基底图像。
其中，电子设备在对拍摄得到的两个第一图像进行图像合成处理时，首先将拍摄得到的两个第一图像对齐，再计算拍摄得到的两个第一图像重叠的各像素点的平均像素值，最后根据计算得到的各平均像素值得到拍摄得到的两个第一图像的合成图像，将该合成图像设为基底图像。
在204中,电子设备调整四个第二摄像头的拍摄角度,使得目标对焦区域位于四个第二摄像头与两个第一摄像头同时重叠的拍摄区域之内。
其中,电子设备在识别出待拍摄场景的目标对焦区域之后,除了通过第一摄像头拍摄得到基底图像之外,还根据识别出的目标对焦区域,利用各第二摄像头设置的微云台,对各第二摄像头的拍摄角度进行调整,使得目标对焦区域位于四个第二摄像头与两个第一摄像头同时重叠的拍摄区域之内。
比如，请参照图4，电子设备包括两个第一摄像头和四个第二摄像头，分别为第一摄像头E、第一摄像头F、第二摄像头A、第二摄像头B、第二摄像头C以及第二摄像头D，其中，第一摄像头E和第一摄像头F的拍摄区域相同，图中示为第一摄像头的拍摄区域，假设识别出待拍摄场景的目标对焦区域为待拍摄场景的中间区域（对应于第一摄像头E和第一摄像头F的拍摄区域的中间区域），电子设备根据识别出的目标对焦区域对各第二摄像头的拍摄角度进行调整，使得第二摄像头A的拍摄区域a对应第一摄像头E/F的拍摄区域的左上角，第二摄像头B的拍摄区域b对应第一摄像头E/F的拍摄区域的右上角，第二摄像头C的拍摄区域c对应第一摄像头E/F的拍摄区域的左下角，第二摄像头D的拍摄区域d对应第一摄像头E/F的拍摄区域的右下角，由此，使得目标对焦区域位于四个第二摄像头与两个第一摄像头同时重叠的拍摄区域，即第一摄像头E/F的拍摄区域的中间区域。此外，如图4所示，各第二摄像头的拍摄区域还与第一摄像头E/F的拍摄区域的边缘部分重叠。
在205中,电子设备通过四个第二摄像头基于目标对焦区域对待拍摄场景对焦拍摄,得到四个第二图像。
在完成对四个第二摄像头拍摄角度调整,使得目标对焦区域位于四个第二摄像头与两个第一摄像头同时重叠的拍摄区域之内后,电子设备进一步通过四个第二摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将第二摄像头拍摄得到的图像记为第二图像,由此拍摄得到四个第二图像。
应当说明的是,在通过四个第二摄像头对待拍摄场景进行拍摄时,各第二摄像头与第一摄像头采用相同的拍摄参数(比如对比度、亮度)进行拍摄,由此,使得四个第二摄像头能够获得与两个第一摄像头相同的拍摄效果。
比如,电子设备包括四个第二摄像头,分别为第二摄像头A、第二摄像头B、第二摄像头C以及第二摄像头D,且电子设备通过调整这四个第二摄像头的拍摄角度,使得这四个第二摄像头的拍摄区域分别对应第一摄像头E/F拍摄区域的左上角、右上角、左下角以及右下角,通过这四个第二摄像头将拍摄得到四个第二图像,如图5所示,第二摄像头A拍摄的第二图像A的图像内容对应基底图像左上角的图像内容,第二摄像头B拍摄的第二图像B的图像内容对应基底图像右上角的图像内容,第二摄像头C拍摄的第二图像C的图像内容对应基底图像左下角的图像内容,第二摄像头D拍摄的第二图像D的图像内容对应基底图像右下角的图像内容,这样,所有第二图像以及第一图像均同时覆盖了目标对焦区域的图像内容,且不同第二图像的图像内容即覆盖了基底图像中边缘区域的不同位置。
在206中,电子设备对四个第二图像与基底图像进行图像合成处理,将合成得到的图像设为图像拍摄请求的成像图像。
本申请实施例中,电子设备在通过第一摄像头拍摄得到基底图像,以及通过四个第二摄像头拍摄得到四个第二图像之后,将拍摄得到的四个第二图像与基底图像对齐。
基于对齐后的基底图像和第二图像,对于基底图像和第二图像的重叠部分,计算重叠的各像素点的平均像素值,比如,请参照图6,各第二图像与基底图像同时重叠的重叠区域位于基底图像的中间区域,这样,对于图6所示的重叠区域,某位置的像素点在五个图像(即基底图像和四个第二图像)中的像素值分别为0.8、0.9、1.1、1.2和1,则可计算得到该位置的像素点的平均像素值为1。
之后,根据基底图像中对应的各位置像素点所得到的平均像素值得到合成图像,比如,可以将基底图像的各像素点的像素值相应调整为计算得到的各平均像素值,从而得到合成图像;又比如,还可以根据计算得到各平均像素值,生成一幅新的图像,即合成图像。
本申请实施例中，电子设备在对拍摄得到的四个第二图像与基底图像进行图像合成处理之后，将合成得到的合成图像设为图像拍摄请求的成像图像，至此，电子设备即完成了响应接收到的图像拍摄请求的一次完整拍摄操作。
比如，请继续参照图6，图6还示出了基底图像到成像图像的清晰度变化，其中，X轴表示由图像边缘区域到中心区域（对应于待拍摄场景的目标对焦区域），再由中心区域到边缘区域的位置变化，Y轴表示跟随X轴变化的清晰度。可以看出，在基底图像中，中心区域的清晰度最高，随着中心区域向边缘区域的扩散，清晰度逐渐降低，且变化较为剧烈；而在成像图像中，中心区域的清晰度最高，且相较于基底图像，成像图像的边缘区域的清晰度被整体提高，随着中心区域向边缘区域的扩散，清晰度虽然也逐渐降低，但变化更为平滑，使得成像图像的整体图像质量得以提高。
本申请实施例还提供一种设备成像装置。请参照图9,图9为本申请实施例提供的设备成像装置的结构示意图。其中该设备成像装置应用于电子设备,该电子设备包括第一类型的第一摄像头和多个第二类型的第二摄像头,该设备成像装置包括区域识别模块301、基底获取模块302、角度调整模块303、辅图获取模块304以及图像成像模块305,如下:
区域识别模块301,用于当接收到对待拍摄场景的图像拍摄请求时,识别待拍摄场景的目标对焦区域。
基底获取模块302,用于通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像。
角度调整模块303,用于调整第二摄像头的拍摄角度,使得目标对焦区域位于所有第二摄像头与第一摄像头同时重叠的拍摄区域之内。
辅图获取模块304,用于通过所有第二摄像头基于目标对焦区域对待拍摄场景对焦拍摄,得到多个第二图像。
图像成像模块305,用于对多个第二图像与基底图像进行图像合成处理,将合成得到的图像设为图像拍摄请求的成像图像。
在一实施例中,在识别待拍摄场景的目标对焦区域时,区域识别模块301用于:
获取待拍摄场景的预览图像,将预览图像输入到预先训练的视觉注意模型进行视觉显著区域的识别;
将识别出的视觉显著区域设为待拍摄场景的目标对焦区域。
在一实施例中,在识别待拍摄场景的目标对焦区域时,区域识别模块301还用于:
获取图像拍摄请求中携带的对焦区域选择信息;
将对焦区域选择信息所指示的区域设为待拍摄场景的目标对焦区域。
在一实施例中,电子设备包括两个第一摄像头,在通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像时,基底获取模块302用于:
通过两个第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,得到至少两个第一图像;
对至少两个第一图像进行图像合成处理,将合成得到的图像设为基底图像。
在一实施例中,电子设备还包括覆盖第一摄像头和/或第二摄像头的电致变色组件,设备成像装置还包括电致变色模块,用于在区域识别模块301当接收到对待拍摄场景的图像拍摄请求时,识别待拍摄场景的目标对焦区域之前,切换电致变色组件至透明状态;
以及在图像成像模块305对多个第二图像与基底图像进行图像合成处理,将合成得到的图像设为图像拍摄请求的成像图像之后,将电致变色组件切换至着色状态,以隐藏第一摄像头和/或第二摄像头。
在一实施例中,在通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像之前,基底获取模块302还用于:
检测当前是否处于抖动状态;
若当前不处于抖动状态,则通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像。
在一实施例中,在通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像之前,基底获取模块302还用于:
若当前不处于抖动状态,则检测待拍摄场景是否处于静止状态;
若待拍摄场景处于静止状态,则通过第一摄像头基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像。
应当说明的是,本申请实施例提供的设备成像装置与上文实施例中的设备成像方法属于同一构思,在设备成像装置上可以运行设备成像方法实施例中提供的任一方法,其具体实现过程详见设备成像方法实施例,此处不再赘述。
本申请实施例提供一种计算机可读的存储介质，其上存储有计算机程序，当其存储的计算机程序在计算机上执行时，使得计算机执行如本申请实施例提供的设备成像方法中的步骤。其中，存储介质可以是磁碟、光盘、只读存储器(Read Only Memory, ROM)或者随机存取存储器(Random Access Memory, RAM)等。
本申请实施例还提供一种电子设备,请参照图10,电子设备包括处理器401、存储器402、第一类型的第一摄像头403以及多个第二类型的第二摄像头404。其中,处理器401与存储器402、第一摄像头403以及第二摄像头404电性连接。
处理器401是电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或加载存储在存储器402内的计算机程序,以及调用存储在存储器402内的数据,执行电子设备的各种功能并处理数据。
存储器402可用于存储软件程序以及模块，处理器401通过运行存储在存储器402的计算机程序以及模块，从而执行各种功能应用以及数据处理。存储器402可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的计算机程序（比如声音播放功能、图像播放功能等）等；存储数据区可存储根据电子设备的使用所创建的数据等。此外，存储器402可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。相应地，存储器402还可以包括存储器控制器，以提供处理器401对存储器402的访问。
第一摄像头403为标准类型的摄像头,或者说视场角为45度左右的摄像头。
第二摄像头404为长焦类型的摄像头,或者说视场角为40度以内的摄像头。
在本申请实施例中，电子设备中的处理器401会按照如下的步骤，将一个或一个以上的计算机程序的进程对应的指令加载到存储器402中，并由处理器401运行存储在存储器402中的计算机程序，从而实现各种功能，如下：
当接收到对待拍摄场景的图像拍摄请求时,识别待拍摄场景的目标对焦区域;
通过第一摄像头403基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像;
调整第二摄像头404的拍摄角度,使得目标对焦区域位于所有第二摄像头404与第一摄像头403同时重叠的拍摄区域之内;
通过所有第二摄像头404基于目标对焦区域对待拍摄场景对焦拍摄,得到多个第二图像;
对多个第二图像与基底图像进行图像合成处理,将合成得到的图像设为图像拍摄请求的成像图像。
请参照图11,图11为本申请实施例提供的电子设备的另一结构示意图,与图10所示电子设备的区别在于,电子设备还包括输入单元405和输出单元406等组件。
其中,输入单元405可用于接收输入的数字、字符信息或用户特征信息(比如指纹),以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入等。
输出单元406可用于显示由用户输入的信息或提供给用户的信息,如屏幕。
在本申请实施例中,电子设备中的处理器401会按照如下的步骤,将一个或一个以上的计算机程序的进程对应的指令加载到存储器402中,并由处理器401运行存储在存储器402中的计算机程序,从而实现各种功能,如下:
当接收到对待拍摄场景的图像拍摄请求时,识别待拍摄场景的目标对焦区域;
通过第一摄像头403基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像;
调整第二摄像头404的拍摄角度,使得目标对焦区域位于所有第二摄像头404与第一摄像头403同时重叠的拍摄区域之内;
通过所有第二摄像头404基于目标对焦区域对待拍摄场景对焦拍摄,得到多个第二图像;
对多个第二图像与基底图像进行图像合成处理,将合成得到的图像设为图像拍摄请求的成像图像。
在一实施例中,在识别待拍摄场景的目标对焦区域时,处理器401执行:
获取待拍摄场景的预览图像,将预览图像输入到预先训练的视觉注意模型进行视觉显著区域的识别;
将识别出的视觉显著区域设为待拍摄场景的目标对焦区域。
在一实施例中,在识别待拍摄场景的目标对焦区域时,处理器401还可以执行:
获取图像拍摄请求中携带的对焦区域选择信息;
将对焦区域选择信息所指示的区域设为待拍摄场景的目标对焦区域。
在一实施例中,电子设备包括两个第一摄像头403,在通过第一摄像头403基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像时,处理器401执行:
通过两个第一摄像头403基于目标对焦区域对待拍摄场景对焦拍摄,得到至少两个第一图像;
对至少两个第一图像进行图像合成处理,将合成得到的图像设为基底图像。
在一实施例中,电子设备还包括覆盖第一摄像头403和/或第二摄像头404的电致变色组件,当接收到对待拍摄场景的图像拍摄请求时,识别待拍摄场景的目标对焦区域之前,处理器401还执行:
切换电致变色组件至透明状态;
而在对多个第二图像与基底图像进行图像合成处理,将合成得到的图像设为图像拍摄请求的成像图像之后,处理器401还执行:
将电致变色组件切换至着色状态,以隐藏第一摄像头403和/或第二摄像头404。
在一实施例中,在通过第一摄像头403基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像之前,处理器401还执行:
检测当前是否处于抖动状态;
若当前不处于抖动状态,则通过第一摄像头403基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像。
在一实施例中,在通过第一摄像头403基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像之前,处理器401还执行:
若当前不处于抖动状态,则检测待拍摄场景是否处于静止状态;
若待拍摄场景处于静止状态,则通过第一摄像头403基于目标对焦区域对待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像。
应当说明的是，本申请实施例提供的电子设备与上文实施例中的设备成像方法属于同一构思，在电子设备上可以运行设备成像方法实施例中提供的任一方法，其具体实现过程详见设备成像方法实施例，此处不再赘述。
需要说明的是，对本申请实施例的设备成像方法而言，本领域普通技术人员可以理解，实现本申请实施例的设备成像方法的全部或部分流程，可以通过计算机程序来控制相关的硬件来完成，所述计算机程序可存储于一计算机可读取存储介质中，如存储在电子设备的存储器中，并被该电子设备内的至少一个处理器执行，在执行过程中可包括如设备成像方法的实施例的流程。其中，所述的存储介质可为磁碟、光盘、只读存储器、随机存取存储器等。
对本申请实施例的设备成像装置而言,其各功能模块可以集成在一个处理芯片中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中,所述存储介质譬如为只读存储器,磁盘或光盘等。
以上对本申请实施例所提供的一种设备成像方法、装置、存储介质及电子设备进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (20)

  1. 一种设备成像方法,应用于电子设备,其中,所述电子设备包括第一类型的第一摄像头和多个第二类型的第二摄像头,所述设备成像方法包括:
    当接收到对待拍摄场景的图像拍摄请求时,识别所述待拍摄场景的目标对焦区域;
    通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像;
    调整所述多个第二摄像头的拍摄角度,使得所述目标对焦区域位于所有第二摄像头与所述第一摄像头同时重叠的拍摄区域之内;
    通过所述多个第二摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,得到多个第二图像;
    对所述多个第二图像与所述基底图像进行图像合成处理,将合成得到的图像设为所述图像拍摄请求的成像图像。
  2. 根据权利要求1所述的设备成像方法,其中,所述识别所述待拍摄场景的目标对焦区域,包括:
    获取所述待拍摄场景的预览图像,将所述预览图像输入到预先训练的视觉注意模型进行视觉显著区域的识别;
    将识别出的视觉显著区域设为所述待拍摄场景的目标对焦区域。
  3. 根据权利要求2所述的设备成像方法,其中,所述获取所述待拍摄场景的预览图像,包括:
    从预设的图像缓存队列中提取接收到所述图像拍摄请求之前且距离图像拍摄请求接收时刻最近的预览图像。
  4. 根据权利要求1所述的设备成像方法,其中,所述识别所述待拍摄场景的目标对焦区域,包括:
    获取所述图像拍摄请求中携带的对焦区域选择信息;
    将所述对焦区域选择信息所指示的区域设为所述待拍摄场景的目标对焦区域。
  5. 根据权利要求1所述的设备成像方法,其中,所述电子设备包括两个第一摄像头,所述通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像,包括:
    通过所述两个第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,得到至少两个第一图像;
    对所述至少两个第一图像进行图像合成处理,将合成得到的图像设为所述基底图像。
  6. 根据权利要求1所述的设备成像方法,其中,所述电子设备还包括覆盖所述第一摄像头和/或所述第二摄像头的电致变色组件,所述当接收到对待拍摄场景的图像拍摄请求时,识别所述待拍摄场景的目标对焦区域之前,还包括:
    切换所述电致变色组件至透明状态;
    所述对所述多个第二图像与所述基底图像进行图像合成处理,将合成得到的图像设为所述图像拍摄请求的成像图像之后,还包括:
    将所述电致变色组件切换至着色状态,以隐藏所述第一摄像头和/或所述第二摄像头。
  7. 根据权利要求1所述的设备成像方法,其中,所述通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像之前,还包括:
    检测当前是否处于抖动状态;
    若当前不处于抖动状态,则通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像。
  8. 根据权利要求7所述的设备成像方法,其中,所述通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像之前,还包括:
    若当前不处于抖动状态,则检测所述待拍摄场景是否处于静止状态;
    若所述待拍摄场景处于静止状态,则通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像。
  9. 根据权利要求1所述的设备成像方法,其中,所述通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像,包括:
    通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景连续对焦拍摄,得到多个第一图像;
    对所述多个第一图像进行图像合成处理,将合成得到的图像设为所述基底图像。
  10. 一种设备成像装置,应用于电子设备,其中,所述电子设备包括第一类型的第一摄像头和多个第二类型的第二摄像头,所述设备成像装置包括:
    区域识别模块,用于当接收到对待拍摄场景的图像拍摄请求时,识别所述待拍摄场景的目标对焦区域;
    基底获取模块,用于通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像;
    角度调整模块,用于调整所述多个第二摄像头的拍摄角度,使得所述目标对焦区域位于所有第二摄像头同时重叠的拍摄区域之内;
    辅图获取模块,用于通过所述多个第二摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,得到多个第二图像;
    图像成像模块,用于对所述多个第二图像与所述基底图像进行图像合成处理,将合成得到的图像设为所述图像拍摄请求的成像图像。
  11. 一种存储介质,其上存储有计算机程序,其中,当所述计算机程序在包括第一类型的第一摄像头和多个第二类型的第二摄像头的计算机上运行时,使得所述计算机执行:
    当接收到对待拍摄场景的图像拍摄请求时,识别所述待拍摄场景的目标对焦区域;
    通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像;
    调整所述多个第二摄像头的拍摄角度,使得所述目标对焦区域位于所有第二摄像头与所述第一摄像头同时重叠的拍摄区域之内;
    通过所述多个第二摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,得到多个第二图像;
    对所述多个第二图像与所述基底图像进行图像合成处理,将合成得到的图像设为所述图像拍摄请求的成像图像。
  12. 一种电子设备,包括处理器、存储器、第一类型的第一摄像头和多个第二类型的第二摄像头,所述存储器储存有计算机程序,其中,所述处理器通过调用所述计算机程序,用于执行:
    当接收到对待拍摄场景的图像拍摄请求时,识别所述待拍摄场景的目标对焦区域;
    通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像;
    调整所述多个第二摄像头的拍摄角度,使得所述目标对焦区域位于所有第二摄像头与所述第一摄像头同时重叠的拍摄区域之内;
    通过所述多个第二摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,得到多个第二图像;
    对所述多个第二图像与所述基底图像进行图像合成处理,将合成得到的图像设为所述图像拍摄请求的成像图像。
  13. 根据权利要求12所述的电子设备,其中,在识别所述待拍摄场景的目标对焦区域时,所述处理器用于执行:
    获取所述待拍摄场景的预览图像,将所述预览图像输入到预先训练的视觉注意模型进行视觉显著区域的识别;
    将识别出的视觉显著区域设为所述待拍摄场景的目标对焦区域。
  14. 根据权利要求13所述的电子设备,其中,在获取所述待拍摄场景的预览图像时,所述处理器用于执行:
    从预设的图像缓存队列中提取接收到所述图像拍摄请求之前且距离图像拍摄请求接收时刻最近的预览图像。
  15. 根据权利要求12所述的电子设备,其中,在识别所述待拍摄场景的目标对焦区域时,所述处理器用于执行:
    获取所述图像拍摄请求中携带的对焦区域选择信息;
    将所述对焦区域选择信息所指示的区域设为所述待拍摄场景的目标对焦区域。
  16. 根据权利要求12所述的电子设备,其中,所述电子设备包括两个第一摄像头,在通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像时,所述处理器用于执行:
    通过所述两个第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,得到至少两个第一图像;
    对所述至少两个第一图像进行图像合成处理,将合成得到的图像设为所述基底图像。
  17. 根据权利要求12所述的电子设备,其中,所述电子设备还包括覆盖所述第一摄像头和/或所述第二摄像头的电致变色组件,当接收到对待拍摄场景的图像拍摄请求时,识别所述待拍摄场景的目标对焦区域之前,所述处理器还用于执行:
    切换所述电致变色组件至透明状态;
    所述对所述多个第二图像与所述基底图像进行图像合成处理,将合成得到的图像设为所述图像拍摄请求的成像图像之后,还包括:
    将所述电致变色组件切换至着色状态,以隐藏所述第一摄像头和/或所述第二摄像头。
  18. 根据权利要求12所述的电子设备,其中,在通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像之前,所述处理器还用于执行:
    检测当前是否处于抖动状态;
    若当前不处于抖动状态,则通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像。
  19. 根据权利要求18所述的电子设备,其中,在通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像之前,所述处理器还用于执行:
    若当前不处于抖动状态,则检测所述待拍摄场景是否处于静止状态;
    若所述待拍摄场景处于静止状态,则通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像。
  20. 根据权利要求12所述的电子设备,其中,在通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景对焦拍摄,将拍摄得到的第一图像设为基底图像时,所述处理器用于执行:
    通过所述第一摄像头基于所述目标对焦区域对所述待拍摄场景连续对焦拍摄,得到多个第一图像;
    对所述多个第一图像进行图像合成处理,将合成得到的图像设为基底图像。
