CN113965687A - Shooting method and device and electronic equipment - Google Patents

Shooting method and device and electronic equipment

Info

Publication number
CN113965687A
CN113965687A (application CN202111452041.0A)
Authority
CN
China
Prior art keywords
input
area
image
pixel
photosensitive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111452041.0A
Other languages
Chinese (zh)
Other versions
CN113965687B (en)
Inventor
卢培锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202111452041.0A priority Critical patent/CN113965687B/en
Priority claimed from CN202111452041.0A external-priority patent/CN113965687B/en
Publication of CN113965687A publication Critical patent/CN113965687A/en
Application granted granted Critical
Publication of CN113965687B publication Critical patent/CN113965687B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/55Optical parts specially adapted for electronic image sensors; Mounting thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a shooting method, a shooting device, and an electronic device, belonging to the field of sensors. The method comprises: receiving a first input from a user on a shooting preview interface; in response to the first input, switching on the pixel units in a first photosensitive area of the pixel array, switching off the pixel units in a second photosensitive area of the pixel array, and outputting a second image. The first photosensitive area corresponds to a first image area in a first image displayed on the shooting preview interface, and the second photosensitive area is the part of the pixel array's photosensitive area outside the first photosensitive area; the first image area is determined based on the first input; the second image corresponds to the first image area of the first image.

Description

Shooting method and device and electronic equipment
Technical Field
The application belongs to the field of sensors, and particularly relates to a shooting method, a shooting device and electronic equipment.
Background
With the rapid development of terminal technology, more and more terminals offer a shooting function. To showcase a terminal's shooting capability, multiple cameras are generally mounted on it to support several kinds of shooting, such as high-definition, ultra-wide-angle, and zoom shooting. However, more cameras mean higher terminal cost.
To meet users' needs at lower cost, some of these functions are implemented in software. For example, the dual-focal-length portrait mode is realized without carrying a dual-optical portrait lens by enlarging and cropping a high-resolution captured picture, yielding a field of view different from the normal one. However, this approach degrades image sharpness, resulting in poor image quality.
Disclosure of Invention
The embodiments of the present application aim to provide a shooting method, a shooting device, and an electronic device that allow one image sensor to support multiple shooting modes, achieve a better digital zoom effect, and produce higher-quality images.
In a first aspect, an embodiment of the present application provides a shooting method applied to a shooting device, where the shooting device includes an image sensor, the image sensor includes a pixel array, and the pixel array includes a first number of first pixel units and a second number of second pixel units; wherein the first pixel unit includes photosensitive elements for acquiring at least two color signals arranged in a stack, and the second pixel unit includes photosensitive elements for acquiring a single color signal, the method comprising:
receiving a first input of a user to a shooting preview interface;
in response to the first input, controlling the pixel units in a first photosensitive area of the pixel array to be switched on and the pixel units in a second photosensitive area of the pixel array to be switched off, and outputting a second image;
the first photosensitive area is an area corresponding to a first image area in a first image displayed on the shooting preview interface, and the second photosensitive area is an area except the first photosensitive area in the photosensitive area of the pixel array; the first image region is determined based on the first input; the second image is an image corresponding to the first image area in the first image.
In a second aspect, an embodiment of the present application provides a shooting apparatus, including:
an image sensor comprising a pixel array comprising a first number of first pixel cells and a second number of second pixel cells; the first pixel unit comprises photosensitive elements which are arranged in a stacked mode and used for acquiring at least two color signals, and the second pixel unit comprises photosensitive elements used for acquiring a single color signal;
the input module is used for receiving first input of a user to the shooting preview interface;
the execution module is used for responding to the first input, controlling the pixel units in a first photosensitive area in the pixel array to be switched on, controlling the pixel units in a second photosensitive area in the pixel array to be switched off, and outputting a second image;
the first photosensitive area is an area corresponding to a first image area in a first image displayed on the shooting preview interface, and the second photosensitive area is an area except the first photosensitive area in the photosensitive area of the pixel array; the first image region is determined based on the first input; the second image is an image corresponding to the first image area in the first image.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a photographing device including an image sensor including a pixel array including a first number of first pixel cells and a second number of second pixel cells; the first pixel unit comprises photosensitive elements which are arranged in a stacked mode and used for acquiring at least two color signals, and the second pixel unit comprises photosensitive elements used for acquiring a single color signal;
a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, a first input from a user on the shooting preview interface is received; in response to the first input, the pixel units in the first photosensitive area of the pixel array are switched on, the pixel units in the second photosensitive area are switched off, and a second image is output. In this way, the image sensor can support multiple shooting modes, a better digital zoom effect is obtained, higher-quality images are produced, and cost is saved by switching off the unneeded pixel units.
Drawings
Fig. 1 is a schematic flowchart of a shooting method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a first pixel unit according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of an image sensor according to an embodiment of the present application;
FIG. 4 is a second schematic structural diagram of an image sensor according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the light sensing of the image sensor according to the embodiment of the present application;
fig. 6 is a first schematic diagram of the shooting preview interface according to an embodiment of the present application;
fig. 7 is a second schematic diagram of the light sensing of the image sensor according to the embodiment of the present application;
fig. 8 is a second schematic diagram of the shooting preview interface according to an embodiment of the present application;
fig. 9 is a third schematic diagram of the shooting preview interface according to an embodiment of the present application;
fig. 10 is a fourth schematic diagram of the shooting preview interface according to an embodiment of the present application;
FIG. 11 is a third schematic structural diagram of an image sensor according to an embodiment of the present application;
FIG. 12 is a fourth schematic structural diagram of an image sensor according to an embodiment of the present application;
fig. 13 is a third schematic diagram of the light sensing of the image sensor according to the embodiment of the present application;
fig. 14 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 16 is a hardware configuration diagram of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one kind, and their number is not limited; for example, the first object may be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The shooting method, the shooting device and the electronic device provided by the embodiment of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an embodiment of the present application provides a photographing method, which may be applied to a photographing apparatus including an image sensor including a pixel array including a first number of first pixel units and a second number of second pixel units; the first pixel unit comprises photosensitive elements which are arranged in a stacked mode and used for acquiring at least two color signals, and the second pixel unit comprises photosensitive elements used for acquiring a single color signal.
Each second Pixel unit 102 is formed by a micro-lens, a color filter, a photodiode, and the like, and can collect a light signal of one color. In one embodiment, the second pixel units 102 are divided into second pixel units that collect light signals in the blue band, second pixel units that collect light signals in the green band, and second pixel units that collect light signals in the red band; such a unit can be denoted an RGB Pixel.
In one embodiment, as shown in fig. 2, the first Pixel unit 101 includes a first photosensitive element 1011, a second photosensitive element 1012, and a third photosensitive element 1013 that are stacked. The first photosensitive element 1011 collects light signals in the blue band, the second photosensitive element 1012 collects light signals in the green band, and the third photosensitive element 1013 collects light signals in the red band. Exploiting the fact that different wavelengths of visible light have different penetrating power, the first pixel unit 101 can thus collect light signals of three different wavelengths simultaneously; such a unit can be denoted an X3 Pixel. The first pixel unit 101 packs more pixels into a limited space and can effectively mitigate problems such as false color at high pixel counts.
The image sensor further comprises a control circuit that controls the operating state of each first pixel unit 101 and each second pixel unit 102: turning a pixel unit on makes it operate, and turning it off makes it stop operating. In one embodiment, the on/off state of a pixel unit is controlled through the switches of a row-column selector.
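The on/off control described here can be sketched as a boolean grid driven by region selection. This is a minimal illustrative model, not the patent's actual circuit; the class and method names (`PixelArrayController`, `keep_only`) are assumptions for illustration.

```python
class PixelArrayController:
    """Toy model of the control circuit: a boolean grid marks which
    pixel units are switched on (True) or off (False)."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.active = [[True] * cols for _ in range(rows)]  # all on by default

    def keep_only(self, top, left, height, width):
        """Switch off every pixel unit outside the given photosensitive area."""
        for r in range(self.rows):
            for c in range(self.cols):
                inside = top <= r < top + height and left <= c < left + width
                self.active[r][c] = self.active[r][c] and inside

    def count_on(self):
        return sum(row.count(True) for row in self.active)

ctrl = PixelArrayController(8, 8)
ctrl.keep_only(2, 2, 4, 4)  # first photosensitive area: middle 4x4 block
```

In a real sensor the same effect would be achieved by gating row and column select lines rather than iterating over units.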
As shown in fig. 3 to 4 (solid lines denote pixel units that are on, broken lines those that are off), in fig. 3 all of the first pixel units 101 and second pixel units 102 in the pixel array are turned on, while in fig. 4 the first pixel units 101 in the middle area are turned on and the second pixel units 102 in the edge area are turned off. A turned-on pixel unit operates, and its output signal passes through the signal amplifier and the analog-to-digital conversion circuit 104 to produce an output image. The output image then undergoes image signal processing (ISP) such as white balance or Gamma processing, and a final image is output.
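The ISP stage mentioned here can be illustrated with two textbook operations — a gray-world white balance and gamma encoding. This is a generic sketch of those operations, not the device's actual pipeline.

```python
def gamma_encode(value, gamma=2.2):
    """Apply gamma encoding to a linear intensity in [0, 1]."""
    return value ** (1.0 / gamma)

def gray_world_white_balance(pixels):
    """Gray-world white balance: scale each channel so its mean matches
    the overall mean. `pixels` is a list of (r, g, b) tuples in [0, 1]."""
    n = len(pixels)
    means = [sum(p[ch] for p in pixels) / n for ch in range(3)]
    overall = sum(means) / 3
    gains = [overall / m if m else 1.0 for m in means]
    return [tuple(p[ch] * gains[ch] for ch in range(3)) for p in pixels]

balanced = gray_world_white_balance([(0.8, 0.4, 0.2), (0.4, 0.2, 0.1)])
```

After balancing, all three channel means coincide, removing a uniform color cast before gamma encoding maps linear sensor values to display values.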
The shooting method comprises the following steps:
and step 110, receiving a first input of a user to the shooting preview interface.
It should be understood that the shooting preview interface is an interface that displays the preview image captured by the shooting device. It may be a human-computer interaction interface, such as the program interface of a camera application installed on the shooting device, which receives the first input by monitoring the user's operation on the interface. For example, while the camera application is running, the shooting device is in the shooting preview state. As shown in fig. 5, in this state each first pixel unit 101 and second pixel unit 102 in the pixel array of the image sensor operates, collecting light signals of the corresponding colors from light passing through a lens and an infrared filter (IR Filter); a first image is output and displayed as the preview image on the shooting preview interface 200, as shown in fig. 6. The user can then perform a first input on the first image displayed on the shooting preview interface 200 as needed.
It should be understood that the first input may be a touch operation on the shooting preview interface, such as a click, a press, or a sliding input, or may be given by voice or gesture input; this is not limited here.
Step 120, in response to the first input, controlling a pixel unit in a first photosensitive area in the pixel array to be turned on, and controlling a pixel unit in a second photosensitive area in the pixel array to be turned off, and outputting a second image;
the first photosensitive area is an area corresponding to a first image area in a first image displayed on the shooting preview interface, and the second photosensitive area is an area except the first photosensitive area in the photosensitive area of the pixel array; the first image region is determined based on the first input; the second image is an image corresponding to the first image area in the first image.
According to the user's first input, a first image area 201 is determined in the first image displayed on the shooting preview interface 200; that is, the first image area is selected by the user through the first input. A first photosensitive area corresponding to the first image area is determined in the pixel array, the pixel units in the first photosensitive area are controlled to operate (i.e., turned on), and the pixel units outside it are controlled not to operate (i.e., turned off), as shown in fig. 7. The image sensor then outputs a second image corresponding to the first image area 201, as shown in fig. 8. Note that the second image is not a partial enlargement cut out of the first image, but an image obtained by re-shooting the subject corresponding to the first image area of the first image.
The correspondence between the first image area and the first photosensitive area may be established as needed. For example, the position of the first photosensitive area in the pixel array may be determined from the position of the first image area in the first image; alternatively, using the correspondence between each pixel point of the first image and each pixel unit of the pixel array, the pixel units corresponding to the pixel points in the first image area may be located and controlled.
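The first option — deriving the photosensitive area's position from the image area's position — amounts to proportional coordinate scaling, sketched below under the assumption of a simple axis-aligned mapping between preview and sensor; the function name is illustrative.

```python
def image_rect_to_sensor_rect(rect, image_size, sensor_size):
    """Map a rectangle selected in the preview image to the corresponding
    photosensitive area on the pixel array by proportional scaling.
    rect = (x, y, w, h); sizes are (width, height)."""
    sx = sensor_size[0] / image_size[0]
    sy = sensor_size[1] / image_size[1]
    x, y, w, h = rect
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# A 100x100-pixel selection in a 1000x800 preview maps onto a 4000x3200 sensor:
area = image_rect_to_sensor_rect((200, 100, 100, 100), (1000, 800), (4000, 3200))
```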
Further, the manner of determining the first image area from the received first input may be set according to actual requirements and the input manner of the first input; the embodiments below are given only as examples.
In one embodiment, prior to step 110, the method further comprises: dividing an image area of the first image into N sub-areas, wherein N is an integer greater than 1; as shown in fig. 9, N sub-areas are divided on the photographing preview interface.
In a case that the first input is a touch input, after step 110, the method further includes:
determining a sub-area corresponding to the input area according to the input area of the touch input;
and determining a sub-area corresponding to the input area as a first image area.
The touch input may be a touch operation such as a click or press on the shooting preview interface. As shown in fig. 9, if the user wants to shoot the plant in the first image, the two sub-areas covering the plant may be clicked and determined as the first image area.
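Mapping a touch point to its sub-area in the grid of N sub-areas can be sketched as follows, assuming the preview is divided into a uniform grid; the function name and grid shape are illustrative assumptions.

```python
def touched_subarea(touch_x, touch_y, image_w, image_h, grid_cols, grid_rows):
    """Return the (column, row) index of the sub-area containing a touch
    point, assuming the preview image is divided into a uniform grid."""
    col = min(int(touch_x * grid_cols / image_w), grid_cols - 1)
    row = min(int(touch_y * grid_rows / image_h), grid_rows - 1)
    return col, row

# Touching at (650, 120) in a 1200x800 preview split into a 4x4 grid:
idx = touched_subarea(650, 120, 1200, 800, 4, 4)
```

Clicking several sub-areas (as with the two covering the plant) would simply collect several such indices into the first image area.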
In another embodiment, as shown in fig. 10, in the case that the first input is a slide input, after step 110, the method further comprises:
acquiring a sliding distance of the sliding input; the sliding input can be obtained by a user through sliding operation of two or more fingers on the shooting preview interface;
determining a zooming ratio according to the sliding distance of the sliding input;
and determining a first image area according to the zooming magnification.
As shown in fig. 10, with a slide input the first image is enlarged as the user's slide progresses. Specifically, taking the user's initial contact point on the shooting preview interface as the center, the first image is enlarged until the user stops sliding; at that moment, the area of the first image corresponding to what is displayed on the shooting preview interface is the first image area.
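The slide-to-zoom logic above can be sketched as: derive a ratio from the slide distance, then crop a region around the initial contact point. The distance-to-ratio mapping below is an illustrative assumption, not specified by the application.

```python
def first_image_area_from_slide(center, slide_distance, image_size,
                                pixels_per_zoom_step=100, zoom_per_step=0.5):
    """Derive a zoom ratio from the slide distance, then compute the region
    of the first image displayed around the initial contact point.
    center = (x, y); image_size = (width, height)."""
    ratio = 1.0 + (slide_distance / pixels_per_zoom_step) * zoom_per_step
    w = image_size[0] / ratio
    h = image_size[1] / ratio
    # Clamp so the region stays inside the image.
    x = min(max(center[0] - w / 2, 0), image_size[0] - w)
    y = min(max(center[1] - h / 2, 0), image_size[1] - h)
    return ratio, (x, y, w, h)

ratio, region = first_image_area_from_slide((600, 400), 200, (1200, 800))
```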
In another embodiment, in the case that the first input is a touch input, after step 110, the method further includes:
acquiring a touch area of the touch input; the determination mode of the touch area can be set according to actual requirements, can be determined according to a contact point when a user performs touch operation, and can also be determined according to a range defined by touch input of the user;
performing object recognition on the image of the touch area; wherein the object recognition can be realized by an image recognition technology, and the recognizable object can be a person or an object in the image, such as a plant or a person as shown in fig. 6;
determining an object display area of the recognized object as a first image area; the object display area of the object may be a regular area or an irregular area including the object and its edge.
If the user wants to capture an object in the first image, for example, a plant as shown in fig. 6, the user may obtain a touch area of the touch input by clicking a position where the plant is located, identify the plant in the touch area by using technologies such as image recognition, and determine the first image area according to a display position of the plant in the first image.
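Once an object is recognized, its display area can be taken as the detector's bounding box expanded by a margin so that it includes the object and its edge, clamped to the image. The object detector itself (any image-recognition model) is assumed here; only the area computation is sketched, with illustrative names.

```python
def object_display_area(bbox, image_size, margin_ratio=0.1):
    """Expand a detector-reported bounding box (x, y, w, h) by a margin so
    the first image area includes the object and its edge, clamped to the
    image bounds. image_size = (width, height)."""
    x, y, w, h = bbox
    mx, my = w * margin_ratio, h * margin_ratio
    x0 = max(x - mx, 0)
    y0 = max(y - my, 0)
    x1 = min(x + w + mx, image_size[0])
    y1 = min(y + h + my, image_size[1])
    return (x0, y0, x1 - x0, y1 - y0)

# A plant detected at (100, 100) with size 200x300 in a 1200x800 preview:
area = object_display_area((100, 100, 200, 300), (1200, 800))
```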
Since a second pixel unit 102 can capture only one color, presenting full color requires filtering and interpolation, which easily introduces false color and reduces image resolution. In contrast, each first pixel unit 101 can output color signals without referring to the signals of adjacent pixel units or interpolating; this distinguishes the first pixel unit 101 from the second pixel unit 102. To exploit this, based on the above embodiment, step 120 further includes:
and responding to the first input, controlling the first pixel unit in the first photosensitive area in the pixel array to be conducted, and outputting a second image. The pixel units in the first photosensitive area in the pixel array are all first pixel units.
In one embodiment, prior to step 110, the method further comprises:
acquiring a shooting mode;
and controlling a first pixel unit in the pixel array to be conducted and a second pixel unit in the pixel array to be interrupted when the shooting mode is a first shooting mode. The first photographing mode may be a preset photographing mode, such as a zoom mode, or a plurality of preset photographing modes, such as photographing modes other than a wide-angle mode.
And outputting a first image to be displayed on a shooting preview interface according to the optical signal collected after each first pixel unit is conducted. Then, step 110-120 is performed.
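The mode-dependent control above can be sketched as a small dispatch: the wide-angle mode turns everything on, while the first shooting mode (the zoom mode in this embodiment) keeps only the first pixel units. The mode names and return structure are illustrative assumptions.

```python
def configure_pixels_for_mode(mode):
    """Return which pixel groups operate for a given shooting mode, per the
    arrangement where first pixel units fill the middle area."""
    if mode == "wide_angle":
        return {"first_pixel_units": True, "second_pixel_units": True}
    if mode == "zoom":  # the first shooting mode in this embodiment
        return {"first_pixel_units": True, "second_pixel_units": False}
    raise ValueError(f"unknown shooting mode: {mode}")

cfg = configure_pixels_for_mode("zoom")
```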
In order to adapt to various shooting modes, the arrangement mode of each first pixel unit and each second pixel unit in the pixel array can be set according to actual needs. The examples of the present application are given by way of illustration only.
In one embodiment, as shown in fig. 11, the first number of first pixel units 101 is disposed in a middle region of the pixel array, and the second number of second pixel units 102 is disposed in an edge region of the pixel array.
The image sensor may be adapted to a wide-angle mode and a zoom mode, the first photographing mode being set to the zoom mode.
The wide-angle mode may be used to capture a landscape, where it is desirable to capture as much of the scene as possible.
The zoom mode can be used for portrait photos, with an adjustable zoom ratio. Zoomed imaging has a space-compression characteristic: it compresses the depth of field, magnifies the distant view, and changes the apparent spatial relationship between subjects at different distances so as to combine the elements of the picture. When shooting an outdoor portrait, this characteristic gives the viewer a sense that the scenery is close to the person, making the picture look compact and full and yielding a photo closer to human vision, while rendering the figure sharp against a blurred background. The zoom mode is also suitable for shooting scenes that call for a local close-up.
When shooting, if the shooting mode is determined to be the wide-angle mode, as shown in fig. 3, all the first pixel units and second pixel units in the pixel array are controlled to operate simultaneously, collecting and outputting light signals, so that a landscape photo with more detail and a larger field of view is captured in one shot.
If the shooting mode is determined to be the zoom mode, as shown in fig. 4, the first pixel units in the middle area of the pixel array are controlled to operate and the second pixel units in the edge area do not operate; the characteristics of the first pixel units thus yield an image with low crosstalk and high resolution, achieving a better zoom effect.
In one embodiment, when the shooting device is turned on, the wide-angle mode is used by default: the first pixel units in the middle area and the second pixel units in the edge area operate simultaneously for full output, and an initial image is displayed on the shooting preview interface.
When the user switches the shooting mode to the zoom mode, the second pixel units in the edge area are turned off and only the first pixel units in the middle area keep operating; the image now displayed on the shooting preview interface is the first image.
The steps 110-120 shown in fig. 1 are performed:
receiving a first input of a user to the shooting preview interface;
and in response to the first input, interrupting the first pixel unit in the second photosensitive area in the pixel array, only remaining the first pixel unit in the first photosensitive area, and outputting a second image.
The second image then undergoes ISP processing, and the final image is output after white balance, Gamma, and other processing.
In another embodiment, steps 110-120 may also be performed by taking as the first image the initial image output while all the first pixel units and second pixel units in the pixel array operate simultaneously.
In another embodiment, as shown in fig. 12, first pixel units may be disposed in the middle region of the pixel array, while first pixel units and second pixel units are disposed alternately (for example, in spaced groups) in the edge region. The embodiment of the present application provides only one specific implementation: as shown in fig. 12, the first number of first pixel units 101 are disposed in the middle region and in the four corner sub-regions of the edge region of the pixel array, and the second number of second pixel units 102 are disposed in a predetermined sub-region of the edge region, namely the part of the edge region other than the four corner sub-regions.
This image sensor supports the wide-angle mode and the zoom mode, and additionally a zoom shooting mode for the edge area of the preview image: when the user is interested in an object located in the edge area of the preview image, the user can select the area where the object is located, and the first pixel units corresponding to that area realize zoom shooting of it. Here the first shooting mode is set to the zoom mode together with the zoom shooting mode for the edge area of the preview image.
When shooting is carried out, if the shooting mode is determined to be the wide-angle mode, controlling all first pixel units and all second pixel units in the pixel array to work simultaneously, collecting optical signals and outputting initial images;
and if the shooting mode is determined to be the zoom mode or the zoom shooting mode for the edge area in the preview image, controlling a first pixel unit positioned in the pixel array to work or only making the first pixel unit positioned in the edge area work, and outputting a first image.
The steps 110-120 shown in fig. 1 are performed:
receiving a first input of a user to the shooting preview interface;
and in response to the first input, interrupting the first pixel unit in the second photosensitive area in the pixel array, only remaining the first pixel unit in the first photosensitive area, and outputting a second image. For example, if the user wants to capture a person in the first image as shown in fig. 6, a first image area corresponding to the person may be selected through a first input, a first photosensitive area corresponding to the first image area is determined, only first pixel cells in the first photosensitive area remain, and the second image is output as shown in fig. 13.
When the first photosensitive area corresponding to the first image area is determined, it may happen that not all pixel units in it are first pixel units, some being second pixel units. In that case the first photosensitive area can be adjusted so that all of its pixel units are first pixel units, for example by adjusting the light entrance angle.
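As a simplified geometric analogue of this adjustment (the application achieves it optically, e.g. via the light entrance angle), the requested region can be shifted until it lies wholly inside the block occupied by first pixel units. This is an illustrative sketch, not the patented mechanism.

```python
def snap_to_first_pixel_block(rect, block):
    """Shift a requested photosensitive area so it lies entirely inside the
    block occupied by first pixel units (e.g. the middle area).
    rect and block are (x, y, w, h); the region size is preserved."""
    x, y, w, h = rect
    bx, by, bw, bh = block
    if w > bw or h > bh:
        raise ValueError("region larger than the first-pixel-unit block")
    x = min(max(x, bx), bx + bw - w)
    y = min(max(y, by), by + bh - h)
    return (x, y, w, h)

# A region sticking out left of the 4000x3000 first-pixel block at (1000, 500):
adjusted = snap_to_first_pixel_block((600, 600, 800, 800), (1000, 500, 4000, 3000))
```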
According to the technical solution provided by the embodiments, a first input from the user on the shooting preview interface is received; in response to the first input, the pixel units in the first photosensitive area of the pixel array are switched on, the pixel units in the second photosensitive area are switched off, and a second image is output. In this way, the image sensor can support multiple shooting modes, a better digital zoom effect is obtained, higher-quality images are produced, and cost is saved by switching off the unneeded pixel units.
Based on the above embodiment, after step 120, the method further includes:
receiving a second input of the user to the shooting preview interface; the second input may be a touch operation corresponding to the first input (for example, if the first input is an outward slide in two directions, the second input may likewise be a slide in two directions), or it may be a voice input or a gesture input, which is not limited herein.
And responding to the second input, controlling the pixel units in the second photosensitive area to be conducted, and outputting a third image.
After the user finishes shooting the second image, the pixel units in the second photosensitive area can be turned on again through the second input, so that the pixel units in the first photosensitive area and the second photosensitive area work together to output a third image.
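The first-input/second-input flow above can be sketched as a small controller. This is an illustrative sketch only; the class name `SensorController`, its methods, and the returned labels are assumptions, not part of the embodiment:

```python
class SensorController:
    """Tracks which photosensitive regions of the pixel array are active."""

    def __init__(self):
        self.second_region_on = True  # full-array capture by default

    def on_first_input(self):
        # Turn off the second photosensitive area; only the first remains.
        self.second_region_on = False
        return "second image"

    def on_second_input(self):
        # Re-enable the second area; both regions work together again.
        self.second_region_on = True
        return "third image"

ctrl = SensorController()
img2 = ctrl.on_first_input()   # second region off, second image output
img3 = ctrl.on_second_input()  # second region back on, third image output
```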
As can be seen from the above, in the embodiments of the present invention, a second input of the user to the shooting preview interface is received, the pixel units in the second photosensitive area are controlled to be turned on in response to the second input, and a third image is output. By flexibly adjusting the working state of each pixel unit in the pixel array of the image sensor, the image sensor can quickly respond to switching among multiple shooting modes and obtain higher-quality images.
According to the shooting method provided by the embodiments of the present application, the execution subject may be a shooting device. In the embodiments of the present application, a shooting device executing the shooting method is taken as an example to describe the shooting device provided herein.
The photographing apparatus 1400 includes: an image sensor 1401, an input module 1402, and an execution module 1403. The image sensor 1401 comprises a pixel array, the pixel array comprising a first number of first pixel units and a second number of second pixel units; the first pixel unit comprises photosensitive elements arranged in a stack for acquiring at least two color signals, and the second pixel unit comprises photosensitive elements for acquiring a single color signal. The input module 1402 is configured to receive a first input of a user to a shooting preview interface. The execution module 1403 is configured to, in response to the first input, control the pixel units in a first photosensitive area in the pixel array to be turned on and the pixel units in a second photosensitive area in the pixel array to be turned off, and output a second image; the first photosensitive area is an area corresponding to a first image area in a first image displayed on the shooting preview interface, and the second photosensitive area is the area of the photosensitive area of the pixel array other than the first photosensitive area; the first image area is determined based on the first input; and the second image is an image corresponding to the first image area in the first image.
Further, the input module 1402 is further configured to divide the image area of the first image into N sub-areas, where N is an integer greater than 1;
in a case that the first input is a touch input, the execution module is further configured to:
determining a sub-area corresponding to the input area according to the input area of the touch input;
and determining a sub-area corresponding to the input area as a first image area.
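The sub-area lookup in the two steps above can be sketched as follows. This is a hedged illustration assuming the first image is split into a regular rows-by-columns grid of sub-areas; the function name and the row-major numbering are assumptions:

```python
def sub_area_for_touch(touch_x, touch_y, img_w, img_h, rows, cols):
    """Map a touch coordinate to the index of the sub-area containing it."""
    col = min(cols - 1, int(touch_x * cols / img_w))
    row = min(rows - 1, int(touch_y * rows / img_h))
    return row * cols + col  # sub-areas numbered row-major, 0-based

# A touch near the centre of a 1920x1080 image split into a 3x3 grid
idx = sub_area_for_touch(960, 540, 1920, 1080, 3, 3)  # -> 4 (middle cell)
```

The selected sub-area would then be taken as the first image area, and the corresponding first photosensitive area on the sensor determined from it.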
Further, in a case that the first input is a sliding input, the execution module 1403 is further configured to:
acquiring a sliding distance of the sliding input;
determining a zoom magnification according to the sliding distance of the sliding input;
and determining a first image area according to the zoom magnification.
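The slide-distance-to-zoom mapping above can be sketched as follows. This is an illustrative sketch: the linear mapping, the step size, the zoom limits, and the centered-crop definition of the first image area are all assumptions, since the embodiment does not fix a particular formula:

```python
def zoom_from_slide(distance_px, px_per_zoom_step=100.0, base=1.0, max_zoom=8.0):
    """Map a slide distance (pixels) to a zoom magnification; linear mapping assumed."""
    zoom = base + distance_px / px_per_zoom_step
    return min(max(zoom, base), max_zoom)  # clamp to the supported zoom range

def first_image_area(img_w, img_h, zoom):
    """Centered crop whose size shrinks as the zoom magnification grows."""
    w, h = int(img_w / zoom), int(img_h / zoom)
    x0, y0 = (img_w - w) // 2, (img_h - h) // 2
    return (x0, y0, x0 + w, y0 + h)

zoom = zoom_from_slide(300)                # 300 px slide -> 4.0x
area = first_image_area(1920, 1080, zoom)  # -> (720, 405, 1200, 675)
```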
Further, in a case that the first input is a touch input, the executing module 1403 is further configured to:
acquiring a touch area of the touch input;
performing object recognition on the image of the touch area;
and determining an object display area of the recognized object as a first image area.
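The object-selection step above can be sketched as follows. This is a hedged illustration: it assumes an object detector is available and returns labeled bounding boxes, and simply picks the detection whose box contains the touch point; the function name and data shapes are assumptions:

```python
def pick_object_area(touch_point, detections):
    """detections: list of (label, (x0, y0, x1, y1)) boxes from some object
    detector. Return the display area of the object containing the touch."""
    tx, ty = touch_point
    for label, (x0, y0, x1, y1) in detections:
        if x0 <= tx <= x1 and y0 <= ty <= y1:
            return label, (x0, y0, x1, y1)
    return None  # touch did not land on any recognized object

dets = [("person", (400, 100, 900, 1000)), ("tree", (1200, 0, 1900, 1080))]
hit = pick_object_area((600, 500), dets)  # -> ("person", (400, 100, 900, 1000))
```

The returned display area would then serve as the first image area from which the first photosensitive area is derived.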
According to the technical solution provided by the embodiment of the invention, a first input of the user to the shooting preview interface is received; in response to the first input, the pixel units in a first photosensitive area in the pixel array are controlled to be turned on, the pixel units in a second photosensitive area in the pixel array are controlled to be turned off, and a second image is output. In this way, the image sensor can be adapted to various shooting modes, a better zoom effect and higher-quality images are obtained, and at the same time cost is saved by turning off the unneeded pixel units.
Based on the above embodiment, further, the input module is further configured to receive a second input of the shooting preview interface from the user;
the execution module is further used for responding to the second input, controlling the pixel units in the second photosensitive area to be conducted and outputting a third image.
As can be seen from the above, in the embodiments of the present invention, a second input of the user to the shooting preview interface is received, the pixel units in the second photosensitive area are controlled to be turned on in response to the second input, and a third image is output. By flexibly adjusting the working state of each pixel unit in the pixel array of the image sensor, the image sensor can quickly respond to switching among multiple shooting modes and obtain higher-quality images.
The shooting device in the embodiment of the present application may be an electronic device, or may be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not particularly limited.
The photographing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The shooting device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to 13, and is not described here again to avoid repetition.
Optionally, as shown in fig. 15, an embodiment of the present application further provides an electronic device 1500, including a camera 1503, a processor 1501 and a memory 1502, where the camera 1503 includes an image sensor that includes a pixel array including a first number of first pixel units and a second number of second pixel units; the first pixel unit includes stacked photosensitive elements for acquiring at least two color signals, the second pixel unit includes photosensitive elements for acquiring a single color signal, and the memory 1502 stores a program or an instruction that can be executed on the processor 1501, and when the program or the instruction is executed by the processor 1501, the steps of the above-described embodiment of the shooting method are implemented, and the same technical effects can be achieved, and therefore, the description is omitted here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 16 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1600 includes, but is not limited to: radio frequency unit 1601, network module 1602, audio output unit 1603, input unit 1604, sensor 1605, display unit 1606, user input unit 1607, interface unit 1608, memory 1609, and processor 1610.
Those skilled in the art will appreciate that the electronic device 1600 may further include a power supply (e.g., a battery) for supplying power to the various components, which may be logically coupled to the processor 1610 via a power management system so that charging, discharging, and power consumption management are handled by the power management system. The electronic device structure shown in fig. 16 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine some components, or arrange components differently, which is not described again here.
The user input unit 1607 is configured to receive a first input of the shooting preview interface from a user;
a processor 1610, configured to, in response to the first input, control a pixel unit in a first photosensitive region in the pixel array to be turned on, and control a pixel unit in a second photosensitive region in the pixel array to be turned off, and output a second image;
the first photosensitive area is an area corresponding to a first image area in a first image displayed on the shooting preview interface, and the second photosensitive area is an area except the first photosensitive area in the photosensitive area of the pixel array; the first image region is determined based on the first input; the second image is an image corresponding to the first image area in the first image.
Further, the processor 1610 is further configured to divide an image area of the first image into N sub-areas, where N is an integer greater than 1;
in the case that the first input is a touch input, the processor 1610 is further configured to:
determining a sub-area corresponding to the input area according to the input area of the touch input;
and determining a sub-area corresponding to the input area as a first image area.
Further, in a case where the first input is a slide input, the user input unit 1607 is further configured to acquire a sliding distance of the slide input; the processor 1610 is further configured to determine a zoom magnification according to the sliding distance of the slide input, and determine a first image area according to the zoom magnification.
Further, in a case that the first input is a touch input, the user input unit 1607 is further configured to acquire a touch area of the touch input; the processor 1610 is further configured to perform object recognition on the image of the touch area, and determine an object display area of the recognized object as a first image area.
According to the embodiment of the invention, the image sensor can be adapted to various shooting modes, a better zoom effect and higher-quality images are obtained, and at the same time cost is saved by turning off the unneeded pixel units.
Further, the user input unit 1607 is further configured to: receiving a second input of the shooting preview interface by the user;
processor 1610 is further configured to: and responding to the second input, controlling the pixel units in the second photosensitive area to be conducted, and outputting a third image.
According to the embodiment of the application, the working state of each pixel unit in the pixel array of the image sensor can be flexibly adjusted, so that the image sensor can rapidly respond to switching among multiple shooting modes to obtain images with higher quality.
It should be understood that in the embodiment of the present application, the input unit 1604 may include a graphics processing unit (GPU) 16041 and a microphone 16042. The graphics processor 16041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in video capture mode or image capture mode. The camera includes an image sensor comprising a pixel array, the pixel array including a first number of first pixel units and a second number of second pixel units; the first pixel unit includes photosensitive elements arranged in a stack for acquiring at least two color signals, and the second pixel unit includes photosensitive elements for acquiring a single color signal. The display unit 1606 may include a display panel 16061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1607 includes a touch panel 16071 (also referred to as a touch screen) and at least one other input device 16072. The touch panel 16071 may include a touch detection device and a touch controller. The other input devices 16072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 1609 may be used to store software programs as well as various data. The memory 1609 may mainly include a first storage area storing programs or instructions and a second storage area storing data, wherein the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function or an image playing function). Further, the memory 1609 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), or direct rambus RAM (DRRAM). The memory 1609 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1610 may include one or more processing units; optionally, processor 1610 integrates an application processor, which primarily handles operations involving the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It is to be appreciated that the modem processor described above may not be integrated into processor 1610.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing shooting method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A shooting method is applied to a shooting device, and the shooting device comprises an image sensor, wherein the image sensor comprises a pixel array, and the pixel array comprises a first number of first pixel units and a second number of second pixel units; wherein the first pixel unit includes photosensitive elements arranged in a stack for acquiring at least two color signals, and the second pixel unit includes photosensitive elements for acquiring a single color signal, the method comprising:
receiving a first input of a user to a shooting preview interface;
responding to the first input, controlling the pixel units in a first photosensitive area in the pixel array to be switched on, and controlling the pixel units in a second photosensitive area in the pixel array to be switched off, and outputting a second image;
the first photosensitive area is an area corresponding to a first image area in a first image displayed on the shooting preview interface, and the second photosensitive area is an area except the first photosensitive area in the photosensitive area of the pixel array; the first image region is determined based on the first input; the second image is an image corresponding to the first image area in the first image.
2. The method of claim 1, wherein before the receiving a first input of a user to a shooting preview interface, the method further comprises:
dividing an image area of the first image into N sub-areas, wherein N is an integer greater than 1;
wherein, in a case that the first input is a touch input, after the receiving the first input of the user to the shooting preview interface, the method further comprises:
determining a sub-area corresponding to the input area according to the input area of the touch input;
and determining a sub-area corresponding to the input area as a first image area.
3. The method of claim 1, wherein, in a case that the first input is a sliding input, after the receiving the first input of the user to the shooting preview interface, the method further comprises:
acquiring a sliding distance of the sliding input;
determining a zoom magnification according to the sliding distance of the sliding input;
and determining a first image area according to the zoom magnification.
4. The method of claim 1, wherein in a case that the first input is a touch input, after receiving the first input of the shooting preview interface by the user, further comprising:
acquiring a touch area of the touch input;
performing object recognition on the image of the touch area;
and determining an object display area of the recognized object as a first image area.
5. The method of claim 1, wherein after the responding to the first input, controlling the pixel units in a first photosensitive area in the pixel array to be turned on, controlling the pixel units in a second photosensitive area in the pixel array to be turned off, and outputting a second image, the method further comprises:
receiving a second input of the shooting preview interface by the user;
and responding to the second input, controlling the pixel units in the second photosensitive area to be conducted, and outputting a third image.
6. A photographing apparatus, characterized by comprising:
an image sensor comprising a pixel array, the pixel array comprising a first number of first pixel units and a second number of second pixel units; the first pixel unit comprises photosensitive elements which are arranged in a stacked mode and used for acquiring at least two color signals, and the second pixel unit comprises photosensitive elements used for acquiring a single color signal;
the input module is used for receiving first input of a user to the shooting preview interface;
the execution module is used for responding to the first input, controlling the pixel units in a first photosensitive area in the pixel array to be switched on, controlling the pixel units in a second photosensitive area in the pixel array to be switched off, and outputting a second image;
the first photosensitive area is an area corresponding to a first image area in a first image displayed on the shooting preview interface, and the second photosensitive area is an area except the first photosensitive area in the photosensitive area of the pixel array; the first image region is determined based on the first input; the second image is an image corresponding to the first image area in the first image.
7. The apparatus of claim 6, wherein the input module is further configured to divide the image area of the first image into N sub-areas, N being an integer greater than 1;
in a case that the first input is a touch input, the execution module is further configured to:
determining a sub-area corresponding to the input area according to the input area of the touch input;
and determining a sub-area corresponding to the input area as a first image area.
8. The apparatus of claim 6, wherein in the case that the first input is a slide input, the execution module is further configured to:
acquiring a sliding distance of the sliding input;
determining a zoom magnification according to the sliding distance of the sliding input;
and determining a first image area according to the zoom magnification.
9. The apparatus of claim 6, wherein if the first input is a touch input, the execution module is further configured to:
acquiring a touch area of the touch input;
performing object recognition on the image of the touch area;
and determining an object display area of the recognized object as a first image area.
10. The apparatus of claim 6, wherein the input module is further configured to receive a second input from the user to the capture preview interface;
the execution module is further used for responding to the second input, controlling the pixel units in the second photosensitive area to be conducted and outputting a third image.
11. An electronic device, comprising:
a camera including an image sensor, the image sensor including a pixel array including a first number of first pixel units and a second number of second pixel units; the first pixel unit comprises photosensitive elements which are arranged in a stacked mode and used for acquiring at least two color signals, and the second pixel unit comprises photosensitive elements used for acquiring a single color signal;
a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions, when executed by the processor, implementing the steps of the method of any one of claims 1-5.
12. A readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the photographing method according to any one of claims 1 to 5.
CN202111452041.0A 2021-11-30 Shooting method and device and electronic equipment Active CN113965687B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111452041.0A CN113965687B (en) 2021-11-30 Shooting method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111452041.0A CN113965687B (en) 2021-11-30 Shooting method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113965687A true CN113965687A (en) 2022-01-21
CN113965687B CN113965687B (en) 2024-07-05


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115379091A (en) * 2022-08-12 2022-11-22 维沃移动通信有限公司 Photosensitive chip, electronic equipment and control method and device of electronic equipment
CN115379091B (en) * 2022-08-12 2024-07-12 维沃移动通信有限公司 Photosensitive chip, electronic equipment, and control method and device thereof

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62242475A (en) * 1986-04-15 1987-10-23 Canon Inc Image pickup device
JP2004172844A (en) * 2002-11-19 2004-06-17 Canon Inc Imaging sensor
US20190174084A1 (en) * 2017-12-04 2019-06-06 Pixart Imaging Inc. Image sensor capable of averaging pixel data
CN109889712A (en) * 2019-03-11 2019-06-14 维沃移动通信(杭州)有限公司 A kind of control method of pixel circuit, imaging sensor, terminal device and signal
CN110959287A (en) * 2017-07-27 2020-04-03 麦克赛尔株式会社 Image pickup element, image pickup apparatus, and method for acquiring range image
CN111669506A (en) * 2020-07-01 2020-09-15 维沃移动通信有限公司 Photographing method and device and electronic equipment
KR20200138639A (en) * 2019-05-31 2020-12-10 삼성전자주식회사 Operating method of electronic device including image sensor
CN112312035A (en) * 2020-10-29 2021-02-02 维沃移动通信有限公司 Image sensor, exposure parameter adjustment method, and electronic apparatus
CN113676625A (en) * 2021-08-04 2021-11-19 Oppo广东移动通信有限公司 Image sensor, camera assembly and mobile terminal


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
REN HANG; ZHANG TAO: "Research on Noise Reduction Technology of Scientific-Grade CCD Cameras", Microcomputer Information, no. 04, 5 February 2009 (2009-02-05) *


Similar Documents

Publication Publication Date Title
US11758265B2 (en) Image processing method and mobile terminal
US10311649B2 (en) Systems and method for performing depth based image editing
US9313400B2 (en) Linking-up photographing system and control method for linked-up cameras thereof
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN112954196B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN111885285B (en) Image shooting method and electronic equipment
WO2022161260A1 (en) Focusing method and apparatus, electronic device, and medium
CN114125179B (en) Shooting method and device
CN113329172B (en) Shooting method and device and electronic equipment
CN113923350A (en) Video shooting method and device, electronic equipment and readable storage medium
CN113840070A (en) Shooting method, shooting device, electronic equipment and medium
US9172860B2 (en) Computational camera and method for setting multiple focus planes in a captured image
CN112929563B (en) Focusing method and device and electronic equipment
WO2023098638A1 (en) Image sensor, photographic module, electronic device and photographing method
CN111131714A (en) Image acquisition control method and device and electronic equipment
CN113965687B (en) Shooting method and device and electronic equipment
KR101812585B1 (en) Method for providing User Interface and image photographing apparatus thereof
CN113965687A (en) Shooting method and device and electronic equipment
CN115499589A (en) Shooting method, shooting device, electronic equipment and medium
CN112153291B (en) Photographing method and electronic equipment
CN112887603B (en) Shooting preview method and device and electronic equipment
RU2792413C1 (en) Image processing method and mobile terminal
WO2021135487A1 (en) Electronic apparatus having optical zoom camera, camera optical zoom method, unit, and memory
CN114157814B (en) Display method, terminal and storage medium of light field photo
CN114866680B (en) Image processing method, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant