WO2022141349A1 - 图像处理管道、图像处理方法、摄像头组件和电子设备 - Google Patents

图像处理管道、图像处理方法、摄像头组件和电子设备 (Image processing pipeline, image processing method, camera assembly, and electronic device)

Info

Publication number
WO2022141349A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
image
image processing
pixels
identification
Prior art date
Application number
PCT/CN2020/141968
Other languages
English (en)
French (fr)
Inventor
罗俊
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to CN202080104988.XA priority Critical patent/CN116114264A/zh
Priority to PCT/CN2020/141968 priority patent/WO2022141349A1/zh
Publication of WO2022141349A1 publication Critical patent/WO2022141349A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70: SSIS architectures; Circuits associated therewith
    • H04N25/71: Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors

Definitions

  • the present application relates to the field of consumer electronic products, and more particularly, to an image processing pipeline, an image processing method, a camera assembly and an electronic device.
  • In the related art, in order to realize face recognition, gesture recognition, posture recognition, etc. in real time, the electronic device needs to be equipped with a camera, which occupies the internal space of the electronic device and increases its manufacturing cost.
  • Embodiments of the present application provide an image processing pipeline, an image processing method, a camera assembly, and an electronic device.
  • The image processing pipeline of the embodiment of the present application is used for an image sensor. The image sensor includes a pixel array, the pixel array includes imaging pixels and identification pixels, the imaging pixels are used to output a scene image, and the scene image is used to characterize scene information. The image processing pipeline is used to obtain an image to be recognized according to the output signal of the identification pixels, determine whether preset information exists in the image to be recognized, and, when the preset information exists in the image to be recognized, control an electronic device to perform a corresponding operation according to the preset information.
  • The image processing method of the embodiment of the present application is applied to an image sensor. The image sensor includes a pixel array, the pixel array includes imaging pixels and identification pixels, the imaging pixels are used to output a scene image, and the scene image is used to characterize scene information. The image processing method includes: obtaining an image to be recognized according to an output signal of the identification pixels; and determining whether preset information exists in the image to be recognized and, when the preset information exists in the image to be recognized, controlling an electronic device to perform a corresponding operation according to the preset information.
  • the camera assembly of the embodiment of the present application includes an image sensor and the above-mentioned image processing pipeline, and the image processing pipeline is used for processing the image output by the image sensor.
  • the electronic device of the embodiment of the present application includes a casing and the above-mentioned camera assembly, and the camera assembly is placed on the casing.
  • The image processing pipeline, image processing method, camera assembly, and electronic device of the embodiments of the present application can obtain both the scene image and the image to be recognized through one image sensor, determine whether preset information exists in the image to be recognized, and, when preset information exists in the image to be recognized, control the electronic device to perform the corresponding operation according to the preset information, thereby realizing the always-on (AON) function.
  • FIG. 1 is a schematic diagram of a camera assembly according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an image processing pipeline according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a pixel array according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a pixel array according to an embodiment of the present application.
  • FIG. 6 is a schematic cross-sectional view of a photosensitive pixel according to an embodiment of the present application.
  • FIG. 7 is a pixel circuit diagram of a photosensitive pixel according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an image processing method according to an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • FIGS. 10 and 11 are schematic diagrams of pixel arrays according to embodiments of the present application.
  • FIG. 12 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an image processing method according to an embodiment of the present application.
  • FIG. 14 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 15 is a schematic diagram of an image processing method according to an embodiment of the present application.
  • FIG. 16 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • FIGS. 17 and 18 are schematic diagrams of pixel arrays according to embodiments of the present application.
  • FIG. 19 is a schematic diagram of an electronic device according to an embodiment of the present application.
  • the camera assembly 500 includes an image sensor 200 and an image processing pipeline 100 .
  • Image sensor 200 includes pixel array 201 including imaging pixels 202 and identification pixels 204 .
  • the imaging pixel 202 is used to output a scene image
  • the scene image is used to represent the scene information
  • The image processing pipeline 100 is used to: obtain the image to be recognized according to the output signal of the identification pixels 204, determine whether preset information exists in the image to be recognized, and, when preset information exists in the image to be recognized, control the electronic device 1000 (refer to FIG. 19) to perform the corresponding operation according to the preset information.
  • the present application also discloses an image processing method.
  • the image processing method is used in the image sensor 200 .
  • the image sensor 200 includes a pixel array 201 , and the pixel array 201 includes imaging pixels 202 and identification pixels 204 .
  • the imaging pixels 202 are used to output a scene image, and the scene image is used to represent scene information.
  • The image processing method may be implemented by the image processing pipeline 100 of the embodiment of the present application. Referring to FIG. 4, the image processing method includes:
  • 01: obtaining the image to be recognized according to the output signal of the identification pixels 204;
  • 02: determining whether preset information exists in the image to be recognized and, when preset information exists in the image to be recognized, controlling the electronic device 1000 to perform the corresponding operation according to the preset information.
  • The image processing method of the embodiment of the present application may be implemented by the image processing pipeline 100 of the embodiment of the present application, wherein both steps 01 and 02 may be implemented by the image processing pipeline 100. That is to say, the image processing pipeline 100 is used for: obtaining the image to be recognized according to the output signal of the identification pixels 204; and determining whether preset information exists in the image to be recognized and, when preset information exists in the image to be recognized, controlling the electronic device 1000 to perform the corresponding operation according to the preset information.
  • In this way, an image to be recognized can be obtained through one image sensor 200, whether preset information exists in the image to be recognized is determined, and, when preset information exists in the image to be recognized, the electronic device 1000 is controlled to perform the corresponding operation according to the preset information, thereby realizing the AON function.
  • It is worth mentioning that, based on the image to be recognized obtained from the output signal of the identification pixels 204, the context-aware application functions that can be implemented include at least one of the following: privacy protection, air (touch-free) operation, keeping the screen on while it is being watched, and not rotating the screen while the user is lying down.
  • Privacy protection: for example, a social application receives a new message from a girlfriend, or a bank sends a new text message about a salary arriving in the account, and the user does not want others to see this private information; through the identification pixels 204, the terminal can detect that a stranger's eyes are gazing at the owner's phone screen and black out the screen, and so on.
  • Air operation: for example, when a user is cooking and has put the mobile phone aside to view a recipe, an important call comes in, and the user's hands are covered in oil, making it inconvenient to operate the phone directly; through the identification pixels 204, the terminal can detect the user's air gesture and perform the operation corresponding to that gesture. Keeping the screen on while watched: for example, when reading a recipe or an e-book, there is often a page that the user reads carefully and repeatedly, and after a while the screen is about to turn off automatically; through the identification pixels 204, the terminal can detect that the owner is still gazing at the screen and does not enable the automatic screen-off function.
  • No rotation while lying down: for example, when the user lies down and the orientation of the electronic device 1000 changes, such as from vertical to horizontal, the electronic device 1000 can detect through the identification pixels 204 that the direction of the user's gaze has changed accordingly, and the screen does not rotate.
  • In the related art, in order to realize face recognition, gesture recognition, posture recognition, and the above-mentioned context-aware application functions in real time, the electronic device needs to be equipped with an always-on (AON) camera, which occupies the internal space of the electronic device and increases its manufacturing cost.
  • the camera assembly 500 , the image processing pipeline 100 , and the image processing method in the embodiments of the present application can obtain both a scene image and an image to be recognized through one image sensor 200 and determine whether there is preset information according to the image to be recognized.
  • Since the image to be recognized is obtained from the output signals of the identification pixels 204, and the identification pixels 204 receive light at a preset period, whether the preset information exists can be detected periodically to realize the AON function.
  • the preset period can be a default setting value or can be set according to user input.
  • In one embodiment, the preset period is short, for example 10 seconds, 1 second, 300 milliseconds, 100 milliseconds, or 10 milliseconds, so that the identification pixels 204 can sense light at a relatively high frequency and changes in the optical signal can be detected periodically, in order to obtain the image to be recognized, determine whether preset information exists in it, and, when preset information exists, control the electronic device 1000 to perform the corresponding operation according to the preset information.
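To make the periodic sensing concrete, the loop below sketches one way an always-on detector driven by the identification pixels might be scheduled. It is only a sketch: the function names and the dummy frame/detector bodies are our own placeholders, not APIs from the disclosure.

```python
import time
import numpy as np

PRESET_PERIOD_S = 0.3  # e.g. 300 ms; 10 s, 1 s, 100 ms or 10 ms are equally plausible


def read_identification_pixels():
    """Hypothetical stand-in for reading out only the identification pixels 204."""
    return np.random.randint(0, 256, size=(60, 80), dtype=np.uint8)


def detect_preset_info(image_to_recognize):
    """Hypothetical stand-in for the preset face/gesture/posture detector."""
    return None  # this dummy version never finds preset information


def aon_loop(perform_action, iterations=3):
    for _ in range(iterations):  # on a device this would run indefinitely
        frame = read_identification_pixels()  # low-resolution image to be recognized
        info = detect_preset_info(frame)
        if info is not None:
            perform_action(info)              # e.g. black out the screen, keep it on, ...
        time.sleep(PRESET_PERIOD_S)           # wake up again after the preset period


aon_loop(lambda info: print("detected:", info))
```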
  • the image sensor 200 is provided in the camera assembly 500 .
  • the image sensor 200 may use a complementary metal oxide semiconductor (CMOS, Complementary Metal Oxide Semiconductor) photosensitive element or a charge coupled device (CCD, Charge-coupled Device) photosensitive element.
  • the camera assembly 500 of the embodiment of the present application is exposed through the pixel array 201 to obtain a scene image and an image to be recognized.
  • the image sensor 200 further includes a vertical driving unit 22 , a control unit 23 , a column processing unit 24 and a horizontal driving unit 25 .
  • the image sensor 200 may use a complementary metal oxide semiconductor (CMOS, Complementary Metal Oxide Semiconductor) photosensitive element or a charge coupled device (CCD, Charge-coupled Device) photosensitive element.
  • the pixel array 201 includes a plurality of photosensitive pixels 2011 (shown in FIG. 6 ) arranged two-dimensionally in an array form (ie, arranged in a two-dimensional matrix form), and each photosensitive pixel 2011 includes a photoelectric conversion element 2012 (shown in FIG. 7 ) .
  • Each photosensitive pixel 2011 converts light into electric charge according to the intensity of light incident thereon.
  • the vertical driving unit 22 includes a shift register and an address decoder.
  • the vertical driving unit 22 includes readout scan and reset scan functions.
  • the readout scanning refers to sequentially scanning the unit photosensitive pixels 2011 row by row, and reading signals from these unit photosensitive pixels 2011 row by row.
  • the signal output by each photosensitive pixel 2011 in the selected and scanned photosensitive pixel row is transmitted to the column processing unit 24 .
  • the reset scan is used to reset the charges, and the photocharges of the photoelectric conversion element 2012 are discarded, so that the accumulation of new photocharges can be started.
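The row-by-row readout and reset scans described above can be pictured roughly as follows; the array size and function names are invented purely for illustration.

```python
import numpy as np

pixel_array = np.random.randint(0, 1024, size=(6, 8))  # pretend accumulated photocharge


def readout_scan(array):
    """Select and scan the photosensitive pixel rows one by one."""
    for r in range(array.shape[0]):
        yield r, array[r, :].copy()  # the row's signals go to the column processing unit


def reset_scan(array, r):
    """Discard the photocharge of row r so accumulation of new photocharge can start."""
    array[r, :] = 0


for r, row_signals in readout_scan(pixel_array):
    reset_scan(pixel_array, r)
```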
  • the signal processing performed by the column processing unit 24 is correlated double sampling (CDS) processing.
  • the reset level and the signal level output from each photosensitive pixel 2011 in the selected photosensitive pixel row are taken out, and the level difference is calculated.
  • the signals of the photosensitive pixels 2011 in one row are obtained.
  • the column processing unit 24 may have an analog-to-digital (A/D) conversion function for converting the analog pixel signal into a digital format.
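As an arithmetic illustration of CDS, the column processing unit takes two samples per pixel and keeps their difference, which cancels the per-pixel reset offset. The ADC codes below are invented.

```python
import numpy as np

# Two samples per pixel of one selected row (arbitrary ADC codes).
reset_level  = np.array([3012, 3010, 3015, 3009], dtype=np.int32)  # after FD reset
signal_level = np.array([1800, 2950, 1200, 3006], dtype=np.int32)  # after charge transfer

# Correlated double sampling: the level difference is the useful pixel signal.
pixel_signal = reset_level - signal_level
print(pixel_signal)  # -> [1212   60 1815    3]
```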
  • the horizontal driving unit 25 includes a shift register and an address decoder.
  • the horizontal driving unit 25 sequentially scans the pixel array 201 column by column. Through the selective scanning operation performed by the horizontal driving unit 25, each photosensitive pixel column is sequentially processed by the column processing unit 24 and sequentially output.
  • control unit 23 configures timing signals according to the operation mode, and uses various timing signals to control the vertical driving unit 22 , the column processing unit 24 and the horizontal driving unit 25 to work together.
  • The photosensitive pixel 2011 includes a pixel circuit 211, a filter 212, and a microlens 213; along the light-receiving direction of the photosensitive pixel 2011, the microlens 213, the filter 212, and the pixel circuit 211 are arranged in sequence.
  • the microlens 213 is used to condense the light
  • the filter 212 is used to pass the light of a certain wavelength band and filter out the light of the other wavelength bands.
  • the pixel circuit 211 is used to convert the received light into electrical signals, and provide the generated electrical signals to the column processing unit 24 shown in FIG. 5 .
  • the pixel circuit 211 can be applied to each photosensitive pixel 2011 (shown in FIG. 6 ) in the pixel array 201 shown in FIG. 5 .
  • the working principle of the pixel circuit 211 will be described below with reference to FIGS. 2 to 4 .
  • The pixel circuit 211 includes a photoelectric conversion element 2012 (e.g., a photodiode), an exposure control circuit (e.g., a transfer transistor 2112), a reset circuit (e.g., a reset transistor 2113), an amplifier circuit (e.g., an amplifier transistor 2114), and a selection circuit (e.g., a selection transistor 2115).
  • the transfer transistor 2112 , the reset transistor 2113 , the amplifying transistor 2114 and the selection transistor 2115 are, for example, MOS transistors, but are not limited thereto.
  • the photoelectric conversion element 2012 includes a photodiode, and the anode of the photodiode is connected to the ground, for example.
  • Photodiodes convert received light into electrical charges.
  • the cathode of the photodiode is connected to the floating diffusion unit FD via an exposure control circuit (eg, transfer transistor 2112).
  • the floating diffusion unit FD is connected to the gate of the amplifier transistor 2114 and the source of the reset transistor 2113 .
  • the exposure control circuit is the transfer transistor 2112
  • the control terminal TG of the exposure control circuit is the gate of the transfer transistor 2112 .
  • When a pulse of an active level (e.g., the VPIX level) is transmitted to the gate of the transfer transistor 2112 through the exposure control line, the transfer transistor 2112 is turned on.
  • the transfer transistor 2112 transfers the charges photoelectrically converted by the photodiode to the floating diffusion unit FD.
  • the drain of the reset transistor 2113 is connected to the pixel power supply VPIX.
  • The source of the reset transistor 2113 is connected to the floating diffusion unit FD.
  • Before the charge is transferred from the photodiode to the floating diffusion unit FD, a pulse of an effective reset level is transmitted to the gate of the reset transistor 2113 via the reset line, and the reset transistor 2113 is turned on.
  • The reset transistor 2113 resets the floating diffusion unit FD to the pixel power supply VPIX.
  • the gate of the amplification transistor 2114 is connected to the floating diffusion unit FD.
  • the drain of the amplification transistor 2114 is connected to the pixel power supply VPIX.
  • After the floating diffusion unit FD is reset by the reset transistor 2113, the amplifying transistor 2114 outputs the reset level through the output terminal OUT via the selection transistor 2115.
  • After the charge of the photodiode is transferred by the transfer transistor 2112, the amplifying transistor 2114 outputs the signal level through the output terminal OUT via the selection transistor 2115.
  • the drain of the selection transistor 2115 is connected to the source of the amplification transistor 2114.
  • the source of the selection transistor 2115 is connected to the column processing unit 24 in FIG. 5 through the output terminal OUT.
  • When a pulse of an active level is transmitted to the gate of the selection transistor 2115 through the selection line, the selection transistor 2115 is turned on.
  • the signal output by the amplification transistor 2114 is transmitted to the column processing unit 24 through the selection transistor 2115 .
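Tying the transistor-level description together, the readout of a single photosensitive pixel can be summarised as the sequence below. This is a behavioural sketch with arbitrary numbers, not a circuit simulation.

```python
def read_one_pixel(accumulated_charge, reset_voltage=3000.0, gain=1.0):
    # 1. Reset: the reset transistor puts the floating diffusion FD at the pixel supply level.
    fd = reset_voltage
    reset_level = gain * fd   # first sample on OUT via the amplifying/selection transistors

    # 2. Transfer: the TG pulse moves the photodiode charge onto FD, lowering its level.
    fd = reset_voltage - accumulated_charge
    signal_level = gain * fd  # second sample on OUT

    # 3. CDS in the column processing unit: the level difference is the pixel value.
    return reset_level - signal_level


print(read_one_pixel(accumulated_charge=1234.0))  # -> 1234.0
```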
  • the pixel structure of the pixel circuit 211 in the embodiment of the present application is not limited to the structure shown in FIG. 7 .
  • the pixel circuit 211 may also have a three-transistor pixel structure, in which the functions of the amplification transistor 2114 and the selection transistor 2115 are performed by one transistor.
  • the exposure control circuit is not limited to the mode of a single transfer transistor 2112, and other electronic devices or structures with the function of controlling the conduction of the control terminal can be used as the exposure control circuit in the embodiments of the present application.
  • The single transfer transistor 2112 implementation in the embodiments of the present application is simple, low in cost, and easy to control.
  • The image processing pipeline 100 includes a first image processing pipeline 10 and a second image processing pipeline 20. The first image processing pipeline 10 is used to obtain the scene image according to the output signal of the imaging pixels 202, and the second image processing pipeline 20 is configured to obtain the image to be recognized according to the output signal of the identification pixels 204, determine whether preset information exists in the image to be recognized, and, when preset information exists in the image to be recognized, control the electronic device 1000 to perform the corresponding operation according to the preset information.
  • Referring to FIG. 9, in some embodiments, the image processing method includes:
  • 03: obtaining the scene image according to the output signal of the imaging pixels 202.
  • the image processing pipeline 100 includes a first image processing pipeline 10 and a second image processing pipeline 20 .
  • Step 03 may be implemented by the first image processing pipeline 10 . That is to say, the first image processing pipeline 10 is used to obtain a scene image according to the output signal of the imaging pixel 202 .
  • In this way, both the scene image and the image to be recognized can be obtained through the image sensor 200, whether preset information exists in the image to be recognized is determined, and, when preset information exists in the image to be recognized, the electronic device 1000 is controlled to perform the corresponding operation according to the preset information, thereby realizing the AON function.
  • Specifically, the image processing method can be implemented by the image processing pipeline 100. The image processing pipeline 100 includes a first image processing pipeline 10 and a second image processing pipeline 20. The first image processing pipeline 10 can obtain a scene image according to the output signals of the imaging pixels 202; the scene image is used to represent scene information, the scene information includes color information, brightness information, and the like, and the scene information is processed to realize the imaging function of the camera assembly and obtain an image.
  • The first image processing pipeline 10 may also include image processing functions such as black level correction, lens attenuation correction, white balance processing, image correction and adjustment, advanced noise reduction, temporal filtering, chromatic aberration correction, color space conversion, local tone mapping, color correction, gamma correction, color adjustment, chroma enhancement, and chroma suppression.
  • the user can activate an application on the electronic device 1000 to obtain a scene image, such as a photographing application, a video recording application, and the like.
  • The second image processing pipeline 20 can obtain the image to be recognized according to the output signal of the identification pixels 204, determine whether preset information exists in the image to be recognized, and, when preset information exists in the image to be recognized, control the electronic device 1000 to perform the corresponding operation according to the preset information.
  • Through the second image processing pipeline 20, the user can implement context-aware application functions such as privacy protection, air operation, keeping the screen on while it is being watched, and not rotating the screen while lying down.
  • The amount of processing performed by the second image processing pipeline 20 on each frame of the image to be recognized is lower than the amount of processing performed by the first image processing pipeline 10 on the scene image. For example, the second image processing pipeline 20 does not need to perform image processing such as chromatic aberration correction, color space conversion, and color correction, so that the workload of the second image processing pipeline 20 is relatively low, which reduces its power consumption and facilitates realization of the AON function.
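A rough sketch of the two-path split follows, with the function bodies reduced to trivial placeholders: the real first pipeline would run the full ISP chain listed above, and the real second pipeline would run an actual detector.

```python
import numpy as np


def first_image_processing_pipeline(imaging_signals: np.ndarray) -> np.ndarray:
    """Heavy path: produce the scene image (black level, white balance, noise reduction,
    tone mapping, gamma, ... are all collapsed into a toy rescaling here)."""
    return (imaging_signals.astype(np.float32) / imaging_signals.max() * 255).astype(np.uint8)


def second_image_processing_pipeline(identification_signals: np.ndarray) -> bool:
    """Light path: form the image to be recognized and check for preset information.
    The mean-brightness test is only a placeholder for the real detection step."""
    image_to_recognize = identification_signals.astype(np.float32)
    return bool(image_to_recognize.mean() > 512)  # 10-bit data assumed


raw = np.random.randint(0, 1024, size=(16, 16))
scene_image = first_image_processing_pipeline(raw)                  # from the imaging pixels
has_preset_info = second_image_processing_pipeline(raw[::4, ::4])   # sparse identification pixels
```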
  • the scene image may be obtained according to the output signal of the imaging pixel 202 , or the scene image may be obtained according to the output signal of the imaging pixel 202 and the output signal of the identification pixel 204 .
  • the proportion of the number of pixels of the identification pixels 204 in the pixel array 201 is less than 5%. In an example, the proportion of the number of pixels of the identification pixels 204 in the pixel array 201 may be 2.5%. In this way, the number of pixels of the identification pixels 204 is small, so that the power consumption required to obtain the to-be-recognized image can be reduced, and the workload required to process the to-be-recognized image can be reduced. In addition, the number of pixels of the identification pixels 204 is small, and the influence of the identification pixels 204 on the scene image can also be reduced.
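For a sense of scale, a 2.5% share of identification pixels in a hypothetical 12-megapixel array works out as follows; the array size and the regular placement rule are invented for illustration.

```python
import numpy as np

height, width = 3000, 4000           # hypothetical 12 MP pixel array 201
share = 0.025                        # identification pixels 204: 2.5 % of all pixels
print(int(height * width * share))   # -> 300000 identification pixels

# Mark one identification pixel per block whose pitch roughly matches that share.
pitch = round(1 / share ** 0.5)      # ~6-pixel pitch -> about 1/36, i.e. roughly 2.8 %
mask = np.zeros((height, width), dtype=bool)
mask[::pitch, ::pitch] = True
print(round(mask.mean(), 4))         # actual fraction marked as identification pixels
```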
  • the camera assembly 500 may further include a lens, an imaging device, a lens barrel, and the like.
  • The lens may include lens elements and multiple lens groups, and the multiple lens groups can achieve zooming to any focal length within any focal range, so as to ensure the clarity of the image to be recognized.
  • the imaging device includes a voice coil motor, an infrared cut filter, and the like.
  • the voice coil motor may be located above the image sensor 200, and the voice coil motor includes a magnetic shield, an upper fixing ring, a pressure plate, an upper spring, a lens holder, a coil, a magnet and a magnet holder, a lower spring, a bottom holder, and the like.
  • the voice coil motor can convert electrical energy into mechanical energy, and the lens can realize the automatic focusing function through the voice coil motor, and the voice coil motor can adjust the position for focusing to present a clear image to be recognized.
  • The infrared cut filter may be a filter that blocks the infrared band, which prevents infrared rays passing through the lens from causing image distortion.
  • the camera assembly 500 can be applied to an electronic device having the functions of taking pictures and cameras, for example, the electronic device includes a smart phone, a tablet computer, and the like.
  • the image sensor 200 includes a pixel array 201 , and the pixel array 201 includes imaging pixels 202 and identification pixels 204 .
  • In one embodiment, the pixel array 201 includes imaging pixels 202 and identification pixels 204 arranged two-dimensionally in an array (i.e., arranged in a two-dimensional matrix), as shown in FIG. 3, and the imaging pixels 202 and the identification pixels 204 may be arranged in a Bayer array.
  • In another embodiment, the image sensor 200 includes a pixel array 201, the pixel array 201 includes a plurality of pixel units, each pixel unit includes a plurality of photosensitive pixels, the photosensitive pixels of the same pixel unit cover the same color channel, and the plurality of pixel units are arranged in a Bayer array.
  • the pixel array 201 includes imaging pixels 202 and identification pixels 204 .
  • the pixel array 201 further includes a first type of pixel unit UR, a second type of pixel unit UG and a third type of pixel unit UB.
  • The first type pixel unit UR includes a plurality of first color photosensitive pixels R
  • the second type pixel unit UG includes a plurality of second color photosensitive pixels G
  • the third type pixel unit UB includes a plurality of third color photosensitive pixels B.
  • a plurality of first type pixel units UR, a plurality of second type pixel units UG and a plurality of third type pixel units UB are arranged in a Bayer array.
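The unit arrangement described above (cf. FIG. 10) can be sketched as follows: each pixel unit is a small block covering one colour channel, and the units themselves follow a Bayer pattern. The 2×2 unit size below is purely illustrative.

```python
import numpy as np

unit = 2                                   # each pixel unit is a unit x unit block
bayer_units = np.array([["R", "G"],
                        ["G", "B"]])       # first/second/third type pixel units

# Expand every unit into a block of photosensitive pixels of the same colour channel.
pixel_array = np.repeat(np.repeat(bayer_units, unit, axis=0), unit, axis=1)
print(pixel_array)
# [['R' 'R' 'G' 'G']
#  ['R' 'R' 'G' 'G']
#  ['G' 'G' 'B' 'B']
#  ['G' 'G' 'B' 'B']]
```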
  • The image sensor 200 includes a pixel array 201, the pixel array 201 further includes a plurality of pixel units, each pixel unit includes at least one color photosensitive pixel and at least one panchromatic photosensitive pixel W, and the color photosensitive pixel has a narrower spectral response than the panchromatic photosensitive pixel W.
  • the pixel array 201 includes imaging pixels 202 and identification pixels 204 .
  • the pixel array 201 further includes a first type of pixel unit UR, a second type of pixel unit UG and a third type of pixel unit UB.
  • The first type pixel unit UR includes a plurality of first color photosensitive pixels R and a plurality of panchromatic photosensitive pixels W, the second type pixel unit UG includes a plurality of second color photosensitive pixels G and a plurality of panchromatic photosensitive pixels W, and the third type pixel unit UB includes a plurality of third color photosensitive pixels B and a plurality of panchromatic photosensitive pixels W. Since the color photosensitive pixels have a narrower spectral response than the panchromatic photosensitive pixels W, the signal-to-noise ratio of the image can be improved, so that the image is clearer.
  • a plurality of identification pixels 204 form a combination of identification pixels 205, and the plurality of identification pixels 204 of each combination of identification pixels 205 cover multiple color channels.
  • Specifically, in one example, the image sensor may be provided with a filter array arranged in the form of a Bayer array, so that the imaging pixels 202 and the identification pixels 204 in the image sensor can all receive light passing through the corresponding filters, thereby generating pixel signals of different color channels.
  • As shown in FIG. 3, the identification pixels 204 may include pixel signals of the three RGB color channels, where R is the pixel signal of the red channel, G is the pixel signal of the green channel, and B is the pixel signal of the blue channel.
  • In this way, since the identification pixels 204 cover multiple channels, compared with identification pixels 204 that cover only one channel, the image to be recognized obtained from the output signals of the identification pixels 204 includes information of multiple color channels. This makes the image to be recognized formed by the identification pixels 204 more accurate, and recognition errors caused by color restrictions can be avoided when processing the image to be recognized, so that the recognition result of the image to be recognized is also more accurate.
  • Referring to FIG. 12, in some embodiments, the plurality of identification pixels 204 of each identification pixel combination 205 are arranged adjacently in the image sensor 200, and processing the image to be recognized to determine whether preset information exists includes:
  • 021: combining the output signals of the plurality of identification pixels 204 of each identification pixel combination 205 to obtain a combined pixel 206;
  • 022: determining, according to the plurality of combined pixels 206, whether preset information exists in the image to be recognized.
  • In some embodiments, the plurality of identification pixels 204 of each identification pixel combination 205 are arranged adjacently in the image sensor 200, and both steps 021 and 022 may be implemented by the second image processing pipeline 20. That is to say, the second image processing pipeline 20 is used for: combining the output signals of the plurality of identification pixels 204 of each identification pixel combination 205 to obtain a combined pixel 206; and determining, according to the plurality of combined pixels 206, whether preset information exists in the image to be recognized.
  • Referring to FIGS. 3 and 13, the plurality of identification pixels 204 of each identification pixel combination 205 are arranged adjacently in the image sensor 200; adjacently arranged identification pixels 204 have a high degree of correlation and can be used to represent the same point in the scene, which facilitates binning them to obtain the combined pixel 206.
  • the pixel data of the merged pixel 206 may be the sum or weighted average of the pixel data of the plurality of identified pixels 204 .
  • the identification pixel combination 205 includes four identification pixels 204 , and the sum or weighted average of the pixel data of the four identification pixels 204 can be used as the pixel data of the combined pixel 206 .
  • the merged pixel 206 includes a variety of pixel information, and the pixel data of the merged pixel 206 is more accurate.
  • In one embodiment, the combined pixel 206 may be used to represent both the brightness information and the color information of the object, so that in the process of determining whether the preset information exists in the image to be recognized, the brightness information and the color information of the object can be identified and judged more accurately.
  • In another embodiment, as shown in FIG. 13, the combined pixel 206 may be used to represent only the brightness information of the object and not its color information, so that the process of determining whether the preset information exists in the image to be recognized does not need to involve identifying and judging color information; determining whether the preset information exists in the image to be recognized according to the plurality of combined pixels 206 in this way can reduce the amount of computation in subsequent processing to realize the AON function.
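Numerically, the binning step is just a 2×2 sum or average over each identification pixel combination; the values in this small numpy sketch are invented.

```python
import numpy as np

ident = np.array([[10, 12, 200, 198],
                  [11,  9, 202, 196],
                  [90, 92,  50,  52],
                  [88, 94,  48,  46]], dtype=np.float32)  # identification-pixel signals

h, w = ident.shape
blocks = ident.reshape(h // 2, 2, w // 2, 2)

binned_sum  = blocks.sum(axis=(1, 3))    # combined pixel data as the sum of the four pixels
binned_mean = blocks.mean(axis=(1, 3))   # or as a (weighted) average
print(binned_mean)
# [[ 10.5 199. ]
#  [ 91.   49. ]]
```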
  • Referring to FIG. 14, in some embodiments, processing the image to be recognized to determine whether preset information exists further includes:
  • 023: converting the first pixel bit depth of the plurality of combined pixels 206 into a second pixel bit depth, the second pixel bit depth being smaller than the first pixel bit depth;
  • and determining, according to the plurality of combined pixels 206, whether the preset information exists includes:
  • 0221: determining, according to the plurality of combined pixels 206 of the second pixel bit depth, whether preset information exists in the image to be recognized.
  • In some embodiments, both steps 023 and 0221 may be implemented by the second image processing pipeline 20. That is to say, the second image processing pipeline 20 is used to: convert the first pixel bit depth of the plurality of combined pixels 206 into a second pixel bit depth, the second pixel bit depth being smaller than the first pixel bit depth; and determine, according to the plurality of combined pixels 206 of the second pixel bit depth, whether preset information exists in the image to be recognized.
  • Referring to FIG. 15, in one example, the first pixel bit depth of the plurality of combined pixels 206 is converted into a second pixel bit depth, and the second pixel bit depth is smaller than the first pixel bit depth; for example, the first pixel bit depth of the combined pixels 206 may be 10 bits and the second pixel bit depth may be 8 bits. Whether the preset information exists is then determined according to the plurality of combined pixels 206 of the second pixel bit depth (i.e., 8 bits). In this way, the amount of data to be processed can be reduced, and recognition and detection can be performed more quickly.
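The bit-depth conversion in this example amounts to dropping the two least significant bits (or an equivalent rescaling); a minimal sketch:

```python
import numpy as np

merged_10bit = np.array([0, 100, 512, 1023], dtype=np.uint16)  # first pixel bit depth: 10 bit

# Convert to the second pixel bit depth (8 bit) by discarding the two lowest bits.
merged_8bit = (merged_10bit >> 2).astype(np.uint8)
print(merged_8bit)  # -> [  0  25 128 255]
```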
  • Referring to FIG. 16, in some embodiments, the preset information includes a preset face, a preset gesture, or a preset human posture, and the image processing method includes:
  • 024: determining whether a preset face, a preset gesture, or a preset human posture exists in the image to be recognized;
  • 025: controlling the electronic device 1000 to perform the corresponding operation when the preset face, the preset gesture, or the preset human posture exists in the image to be recognized.
  • In some embodiments, both steps 024 and 025 may be implemented by the second image processing pipeline 20. That is to say, the second image processing pipeline 20 is used for: determining whether a preset face, a preset gesture, or a preset human posture exists in the image to be recognized; and controlling the electronic device 1000 to perform the corresponding operation when the preset face, the preset gesture, or the preset human posture exists in the image to be recognized.
  • the second image processing pipeline 20 can obtain the to-be-recognized image according to the output signal of the recognition pixel 204 and determine whether preset information exists in the to-be-recognized image, and the preset information includes a preset face, a preset gesture or a preset human posture .
  • In one example, the preset information may be a preset face used for unlocking. After the second image processing pipeline 20 obtains the image to be recognized, the image to be recognized may be processed; if it is determined that the preset face exists in the image to be recognized, the corresponding preset information is determined to be present, so that the device can be unlocked.
  • In another example, the preset information may be a preset gesture for operating the electronic device, for example, identifying whether the user's gesture is a preset screen-wake gesture and controlling the electronic device 1000 to light up the screen when the recognized gesture is the preset screen-wake gesture.
  • In yet another example, the preset information may be a preset human posture for operating the electronic device 1000, for example, identifying whether the user's hand-waving motion is a preset screenshot gesture and controlling the electronic device 1000 to capture the screen image when the waving motion is recognized as the preset screenshot gesture.
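Putting the three examples together, the control step can be pictured as a simple mapping from the recognised preset information to a device operation. The keys and action functions below are illustrative placeholders of ours, not part of the disclosure.

```python
def unlock_device():       print("unlock the device")
def light_up_screen():     print("light up the screen")
def capture_screenshot():  print("capture the screen image")


ACTIONS = {
    "preset_face":    unlock_device,       # e.g. the enrolled face used for unlocking
    "preset_gesture": light_up_screen,     # e.g. a screen-wake gesture
    "preset_posture": capture_screenshot,  # e.g. a wave recognised as a screenshot gesture
}


def control_electronic_device(preset_info):
    action = ACTIONS.get(preset_info)
    if action is not None:  # only act when preset information was actually found
        action()


control_electronic_device("preset_gesture")  # -> light up the screen
```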
  • the spectral response of the identification pixels 204 is narrower than the spectral response of the imaging pixels 202 .
  • In some embodiments, the bands of the spectral response of the identification pixels 204 may be designed in advance for different countries of sale and user populations. A spectral response of the identification pixels 204 that is narrower than that of the imaging pixels 202 makes the light received by the identification pixels 204 more selective, avoiding the influence of other light (stray light), thereby improving the signal-to-noise ratio and the accuracy of recognizing people, objects, and so on.
  • the spectral response of identification pixels 204 is wider than the spectral response of imaging pixels 202 .
  • In some embodiments, the bands of the spectral response of the identification pixels 204 may be designed in advance for different common application scenarios. A spectral response of the identification pixels 204 that is wider than that of the imaging pixels 202 increases the amount of incoming light, improving recognition accuracy in dark environments.
  • the identification pixels 204 are uniformly distributed in the image sensor 200 .
  • the to-be-recognized image obtained by evenly distributing the identification pixels 204 in the image sensor 200 can more comprehensively identify whether there are preset faces, preset gestures, and preset human postures in the shooting range, so as to avoid omissions.
  • the image sensor 200 includes a central region 207 and an edge region 208 surrounding the central region 207 , and the density of the identification pixels 204 in the central region 207 is greater than the density of the identification pixels 204 in the edge region 208 .
  • the image sensor 200 may include a center point 209, and the center area 207 may refer to an area whose distance from the center point 209 is less than a preset distance, wherein the center area 207 may be a circular area or a square area.
  • Edge regions 208 may be other regions of image sensor 200 than central region 207 .
  • The density of the identification pixels 204 in the central region 207 is greater than their density in the edge region 208. In this way, the central region 207 can be regarded as the region of interest, the region of interest serves as the key region of the image to be recognized, and whether preset information exists is periodically detected in this key region to realize the AON function.
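One toy way to build such a distribution, packing identification pixels four times more densely inside a circular central region than in the edge region (all sizes and pitches are invented):

```python
import numpy as np

h, w = 240, 320
yy, xx = np.mgrid[0:h, 0:w]
cy, cx = h // 2, w // 2
preset_distance = 60  # radius of the central region 207

in_center = (yy - cy) ** 2 + (xx - cx) ** 2 < preset_distance ** 2

sparse = np.zeros((h, w), dtype=bool)
sparse[::8, ::8] = True                 # edge region 208: one identification pixel per 8x8
dense = np.zeros((h, w), dtype=bool)
dense[::4, ::4] = True                  # central region 207: one per 4x4 (4x the density)

mask = np.where(in_center, dense, sparse)
print(mask[in_center].mean() > mask[~in_center].mean())  # -> True
```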
  • the filter array of the embodiments of the present application may employ filters with other arrangements.
  • In the example shown in FIG. 3, the filter is a Bayer array filter in the form of "R, G, G, B"; in other embodiments, filters in the form of "R, G, B, W" and the like may also be used, which is not specifically limited here.
  • the present application further discloses an electronic device 1000 , the electronic device 1000 includes a housing 600 and the above-mentioned camera assembly 500 .
  • the electronic device 1000 is a terminal device configured with the camera assembly 500 .
  • the electronic device 1000 may include a smart phone, a tablet computer, or other terminal devices configured with the camera assembly 500 .
  • The electronic device 1000 can obtain the image to be recognized through one image sensor 200, determine whether preset information exists in the image to be recognized, and, when preset information exists in the image to be recognized, control the electronic device 1000 to perform the corresponding operation according to the preset information, thereby realizing the AON function.
  • Any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing a specified logical function or step of the process. The scope of the preferred embodiments of the present application includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

An image processing pipeline (100), an image processing method, a camera assembly (500), and an electronic device (1000). An image sensor (200) includes a pixel array (201); the pixel array (201) includes imaging pixels (202) and identification pixels (204); the imaging pixels (202) are used to output a scene image, and the scene image is used to characterize scene information. The image processing pipeline (100) is used to obtain an image to be recognized according to an output signal of the identification pixels (204), determine whether preset information exists in the image to be recognized, and, when preset information exists in the image to be recognized, control the electronic device (1000) to perform a corresponding operation according to the preset information.

Description

图像处理管道、图像处理方法、摄像头组件和电子设备 技术领域
本申请涉及消费性电子产品领域,更具体而言,特别涉及一种图像处理管道、图像处理方法、摄像头组件和电子设备。
背景技术
在相关技术中,为了实时实现人脸识别、手势识别、姿态识别等,电子装置需要配备一个摄像头,如此会占用电子装置的内部空间,并且会增加电子装置的制造成本。
发明内容
本申请的实施方式提供一种图像处理管道、图像处理方法、摄像头组件和电子设备。
本申请的实施方式的图像处理管道用于图像传感器,所述图像传感器包括像素阵列,所述像素阵列包括成像像素和识别像素,所述成像像素用于输出场景图像,所述场景图像用于表征场景信息,所述图像处理管道用于根据所述识别像素的输出信号获得待识别图像、判断所述待识别图像中是否存在预设信息并在所述待识别图像中存在所述预设信息时根据所述预设信息控制电子设备执行对应的操作。
本申请的实施方式的图像处理方法,用于图像传感器,所述图像传感器包括像素阵列,所述像素阵列包括成像像素和识别像素,所述成像像素用于输出场景图像,所述场景图像用于表征场景信息,所述图像处理方法包括:根据所述识别像素的输出信号获得待识别图像;判断所述待识别图像中是否存在预设信息并在所述待识别图像中存在所述预设信息时根据所述预设信息控制电子设备执行对应的操作。
本申请的实施方式的摄像头组件包括图像传感器和上述图像处理管道,所述图像处理管道用于处理所述图像传感器输出的图像。
本申请的实施方式的电子设备包括壳体和上述的摄像头组件,所述摄像头组件置在所述壳体上。
本申请实施方式的电子设备的图像处理管道、图像处理方法、摄像头组件和电子设备通过一个图像传感器既可以获得场景图像,也可以获得待识别图像,判断待识别图像中是否存在预设信息,并在待识别图像中存在预设信息时根据预设信息控制电子设备执行对应的操作,从而实现AON(alwayson)功能。
本申请的附加方面和优点将在下面的描述中部分给出,部分将从下面的描述中变 得明显,或通过本申请的实践了解到。
附图说明
本申请的上述和/或附加的方面和优点从结合下面附图对实施方式的描述中将变得明显和容易理解,其中:
图1是本申请实施方式的摄像头组件的示意图;
图2是本申请实施方式的图像处理管道的示意图;
图3是本申请实施方式的像素阵列的示意图;
图4是本申请实施方式的图像处理方法的流程示意图;
图5是本申请实施方式的像素阵列的示意图;
图6是本申请实施方式的感光像素的截面示意图;
图7是本申请实施方式的感光像素的像素电路图;
图8是本申请实施方式的图像处理方法的示意图;
图9是本申请实施方式的图像处理方法的流程示意图;
图10和图11是本申请实施方式的像素阵列的示意图;
图12是本申请实施方式的图像处理方法的流程示意图;
图13是本申请实施方式的图像处理方法的示意图;
图14是本申请实施方式的图像处理方法的流程示意图;
图15是本申请实施方式的图像处理方法的示意图;
图16是本申请实施方式的图像处理方法的流程示意图;
图17和图18是本申请实施方式的像素阵列的示意图;
图19是本申请实施方式的电子设备的示意图。
具体实施方式
下面详细描述本申请的实施方式,所述实施方式的实施方式在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施方式是示例性的,仅用于解释本申请,而不能理解为对本申请的限制。
请一并参阅图1、图2和图3,本申请实施方式的摄像头组件500包括图像传感器200和图像处理管道100。图像传感器200包括像素阵列201,像素阵列201包括成像像素202和识别像素204。成像像素202用于输出场景图像,场景图像用于表征场景信息,图像处理管道100用于:根据识别像素204的输出信号获得待识别图像、判断待 识别图像中是否存在预设信息并在待识别图像中存在预设信息时根据预设信息控制电子设备1000(请参阅图19)执行对应的操作。
本申请还公开一种图像处理方法,图像处理方法用于图像传感器200,图像传感器200包括像素阵列201,像素阵列201包括成像像素202和识别像素204。成像像素202用于输出场景图像,场景图像用于表征场景信息。图像处理方法可以由本申请实施方式的图像处理管道100实现。请参阅图4,图像处理方法包括:
01:根据识别像素204的输出信号获得待识别图像;
02:判断待识别图像中是否存在预设信息并在待识别图像中存在预设信息时根据预设信息控制电子设备1000执行对应的操作。
本申请实施方式的图像处理方法可以由本申请实施方式的图像处理管道100实现,其中,步骤01和步骤02均可以由图像处理管道100实现。也即是说,图像处理管道100用于:根据识别像素204的输出信号获得待识别图像;判断待识别图像中是否存在预设信息并在待识别图像中存在预设信息时根据预设信息控制电子设备1000执行对应的操作。
如此,通过一个图像传感器200可以获得待识别图像,判断待识别图像中是否存在预设信息,并在待识别图像中存在预设信息时根据预设信息控制电子设备1000执行对应的操作,从而实现AON功能。
值得一提的是,根据识别像素204的输出信号所获得的待识别图像,能够实现的基于情景感知的应用功能包括以下至少一种:隐私保护、隔空操作、注视不灭屏和躺卧不旋转。具体地,隐私保护:例如社交应用程序APP来了女朋友的新消息,银行发来工资到账新短信,其中的隐私信息不希望他人看到,终端通过识别像素204能够检测到陌生人的人眼注视户主手机屏幕时黑屏等。隔空操作:例如用户正在做饭,将手机放在一旁查看菜谱,这时有重要电话打入,而用户手上满是油污,不便直接操作手机,终端通过识别像素204能够检测到用户的隔空手势并执行该隔空手势对应的操作。注视不灭屏:例如看菜谱或看电子书,常有一页会让人认真地反复阅读,不一会就到了快要自动灭屏的时间,终端通过识别像素204能够检测到户主用户仍然注视着屏幕,则不启用自动灭屏功能。躺卧不旋转:例如用户躺卧导致电子设备1000的屏幕方向发生变化时,如由竖直方向变为水平方向,电子设备1000通过识别像素204能够检测到用户人眼注视方向跟随变化,则屏幕不发生旋转。
在相关技术中,为了实时实现人脸识别、手势识别、姿态识别和上述情景感知的应用功能等,电子装置需要配备一个常开启(alwayson,AON)摄像头,如此会占用电子装置的内部空间,并且会增加电子装置的制造成本。
本申请实施方式的摄像头组件500、图像处理管道100和图像处理方法通过一个图像传感器200既可以获得场景图像,也可以获得待识别图像并根据待识别图像确定是否存在预设信息,其中,由于待识别图像是由识别像素204的输出信号获得的,而识别像素204以预设周期接收光线,因此,能够周期性地检测是否存在预设信息以实现AON功能。预设周期可以是默认设置值,也可以根据用户输入进行设定。在一个实施例中,预设周期的设置周期时间短暂,预设周期例如为10秒、1秒、300毫秒、100毫秒、10毫秒等,如此可以使得识别像素204能够以较高频率进行感光从而能够周期性地检测光信号的变化,以获得待识别图像、判断待识别图像中是否存在预设信息并在待识别图像中存在预设信息时根据预设信息控制电子设备1000执行对应的操作。
摄像头组件500中设置有图像传感器200。图像传感器200可以采用互补金属氧化物半导体(CMOS,Complementary Metal Oxide Semiconductor)感光元件或者电荷耦合元件(CCD,Charge-coupled Device)感光元件。本申请实施方式的摄像头组件500通过像素阵列201曝光以获得场景图像和待识别图像。
请参阅图5,值得一提的是,图像传感器200还包括垂直驱动单元22、控制单元23、列处理单元24和水平驱动单元25。
例如,图像传感器200可以采用互补金属氧化物半导体(CMOS,Complementary Metal Oxide Semiconductor)感光元件或者电荷耦合元件(CCD,Charge-coupled Device)感光元件。
例如,像素阵列201包括以阵列形式二维排列(即二维矩阵形式排布)的多个感光像素2011(图6所示),每个感光像素2011包括光电转换元件2012(图7所示)。每个感光像素2011根据入射在其上的光的强度将光转换为电荷。
例如,垂直驱动单元22包括移位寄存器和地址译码器。垂直驱动单元22包括读出扫描和复位扫描功能。读出扫描是指顺序地逐行扫描单位感光像素2011,从这些单位感光像素2011逐行地读取信号。例如,被选择并被扫描的感光像素行中的每一感光像素2011输出的信号被传输到列处理单元24。复位扫描用于复位电荷,光电转换元件2012的光电荷被丢弃,从而可以开始新的光电荷的积累。
例如,由列处理单元24执行的信号处理是相关双采样(CDS)处理。在CDS处理中,取出从所选感光像素行中的每一感光像素2011输出的复位电平和信号电平,并且计算电平差。因而,获得了一行中的感光像素2011的信号。列处理单元24可以具有用于将模拟像素信号转换为数字格式的模数(A/D)转换功能。
例如,水平驱动单元25包括移位寄存器和地址译码器。水平驱动单元25顺序逐列扫描像素阵列201。通过水平驱动单元25执行的选择扫描操作,每一感光像素列被 列处理单元24顺序地处理,并且被顺序输出。
例如,控制单元23根据操作模式配置时序信号,利用多种时序信号来控制垂直驱动单元22、列处理单元24和水平驱动单元25协同工作。
请参阅图6,感光像素2011包括像素电路211、滤光片212、及微透镜213。沿感光像素2011的收光方向,微透镜213、滤光片212、及像素电路211依次设置。微透镜213用于汇聚光线,滤光片212用于供某一波段的光线通过并过滤掉其余波段的光线。像素电路211用于将接收到的光线转换为电信号,并将生成的电信号提供给图5所示的列处理单元24。
请参阅图7,像素电路211可应用在图5所示的像素阵列201内的每个感光像素2011(图6所示)中。下面结合图2至图4对像素电路211的工作原理进行说明。
如图7所示,像素电路211包括光电转换元件2012(例如,光电二极管)、曝光控制电路(例如,转移晶体管2112)、复位电路(例如,复位晶体管2113)、放大电路(例如,放大晶体管2114)和选择电路(例如,选择晶体管2115)。在本申请的实施例中,转移晶体管2112、复位晶体管2113、放大晶体管2114和选择晶体管2115例如是MOS管,但不限于此。
例如,光电转换元件2012包括光电二极管,光电二极管的阳极例如连接到地。光电二极管将所接收的光转换为电荷。光电二极管的阴极经由曝光控制电路(例如,转移晶体管2112)连接到浮动扩散单元FD。浮动扩散单元FD与放大晶体管2114的栅极、复位晶体管2113的源极连接。
例如,曝光控制电路为转移晶体管2112,曝光控制电路的控制端TG为转移晶体管2112的栅极。当有效电平(例如,VPIX电平)的脉冲通过曝光控制线传输到转移晶体管2112的栅极时,转移晶体管2112导通。转移晶体管2112将光电二极管光电转换的电荷传输到浮动扩散单元FD。
例如,复位晶体管2113的漏极连接到像素电源VPIX。复位晶体管113的源极连接到浮动扩散单元FD。在电荷被从光电二极管转移到浮动扩散单元FD之前,有效复位电平的脉冲经由复位线传输到复位晶体管113的栅极,复位晶体管113导通。复位晶体管113将浮动扩散单元FD复位到像素电源VPIX。
例如,放大晶体管2114的栅极连接到浮动扩散单元FD。放大晶体管2114的漏极连接到像素电源VPIX。在浮动扩散单元FD被复位晶体管2113复位之后,放大晶体管2114经由选择晶体管2115通过输出端OUT输出复位电平。在光电二极管的电荷被转移晶体管2112转移之后,放大晶体管2114经由选择晶体管2115通过输出端OUT输出信号电平。
例如,选择晶体管2115的漏极连接到放大晶体管2114的源极。选择晶体管2115的源极通过输出端OUT连接到图5中的列处理单元24。当有效电平的脉冲通过选择线被传输到选择晶体管2115的栅极时,选择晶体管2115导通。放大晶体管2114输出的信号通过选择晶体管2115传输到列处理单元24。
需要说明的是,本申请实施例中像素电路211的像素结构并不限于图7所示的结构。例如,像素电路211也可以具有三晶体管像素结构,其中放大晶体管2114和选择晶体管2115的功能由一个晶体管完成。例如,曝光控制电路也不局限于单个转移晶体管2112的方式,其它具有控制端控制导通功能的电子器件或结构均可以作为本申请实施例中的曝光控制电路,本申请实施方式中的单个转移晶体管2112的实施方式简单、成本低、易于控制。
请参阅图8,在某些实施方式中,图像处理管道100包括第一图像处理管道10和第二图像处理管道20,第一图像处理管道10用于根据成像像素202的输出信号获得场景图像,第二图像处理管道20用于根据识别像素204的输出信号获得待识别图像、判断待识别图像中是否存在预设信息并在待识别图像中存在预设信息时根据预设信息控制电子设备1000执行对应的操作。
请参阅图9,在某些实施方式中,图像处理方法包括:
03:根据成像像素202的输出信号获得场景图像。
在某些实施方式中,图像处理管道100包括第一图像处理管道10和第二图像处理管道20。步骤03可以由第一图像处理管道10实现。也即是说,第一图像处理管道10用于:根据成像像素202的输出信号获得场景图像。
如此,通过图像传感器200既可以获得场景图像,也可以获得待识别图像,判断待识别图像中是否存在预设信息,并在待识别图像中存在预设信息时根据预设信息控制电子设备1000执行对应的操作,从而实现AON功能。
具体地,图像处理方法可以通过图像处理管道100(pipeline)实现,图像处理管道100包括第一图像处理管道10和第二图像处理管道20,第一图像处理管道10可以根据成像像素202的输出信号获得场景图像,场景图像用于表征场景信息,场景信息包括颜色信息、亮度信息等,场景信息经过处理后以实现摄像头组件的成像功能,获得图像。第一图像处理管道10还可以包括黑电平校正、镜头衰减、白平衡处理、图像校正和调整、高级降噪、时域滤波、色彩失常校正、色彩空间转换、局部色调映射、色彩校正、伽玛校正、色彩调整和色度增强、色度抑制等图像处理功能。用户可以启用电子设备1000上的应用程序以获得场景图像,例如:拍照应用程序、录像应用程序等。第二图像处理管道20可以根据识别像素204的输出信号获得待识别图像、判断待 识别图像中是否存在预设信息并在待识别图像中存在预设信息时根据预设信息控制电子设备1000执行对应的操作。用户可以通过第二图像处理管道20实现隐私保护、隔空操作、注视不灭屏和躺卧不旋转等基于情景感知的应用功能。第二图像处理管道20对于每一帧待识别图像的处理量低于第一图像处理管道20对场景图像的处理量,例如第二图像处理管道20可以不用进行色彩失常校正、色彩空间转换、色彩校正等图像处理,如此,能够使得第二图像处理管道20的工作量比较低,从而降低第二图像处理管道20的功耗,便于实现AON功能。
在某些实施方式中,可以根据成像像素202的输出信号获得场景图像,也可以根据成像像素202的输出信号和识别像素204的输出信号获得场景图像。识别像素204的像素数量在像素阵列201中的占比小于5%,在一个例子中,识别像素204的像素数量在像素阵列201中的占比可以为2.5%。如此,识别像素204的像素数量较少,从而可以减少获得待识别图像所需的功耗,并且降低待识别图像处理所需的工作量。另外,识别像素204的像素数量较少,还可以降低识别像素204对场景图像的影响。
值得一提的是,摄像头组件500还可以包括镜头、成像装置、镜筒等。具体地,镜头可以包括镜片和多个透镜组,多个透镜组可以在任意焦段范围内实现任意焦距的变焦,以保证待识别图像的清晰度。成像装置包括音圈马达、红外截止滤光片等。音圈马达可以位于图像传感器200上方,音圈马达包括防磁罩、上固定圈、压板、上弹簧、镜头固定架、线圈、磁铁与磁铁架、下弹簧和底固定座等。音圈马达可以将电能转化为机械能,镜头可以通过音圈马达实现自动对焦功能,音圈马达可以调节位置用于对焦呈现清晰的待识别图像。红外截止滤光片可以是过滤红外波段的滤镜,过滤红外波段的滤镜可以阻止红外线穿过的镜头造成成像失真。摄像头组件500可以应用在具有拍照、摄像功能的电子设备上,例如:电子设备包括智能手机、平板电脑等。
在本申请的实施方式中,图像传感器200包括像素阵列201,像素阵列201包括成像像素202和识别像素204。在一个实施例中,像素阵列201包括以阵列形式二维排列(即二维矩阵形式排布)的成像像素202和识别像素204(如图3所示),成像像素202和识别像素204可以拜耳(Bayer)阵列形式排布。
在另一个实施例中,图像传感器200包括像素阵列201,像素阵列201包括多个像素单元,每个像素单元包括多个感光像素,同一像素单元的多个感光像素覆盖相同颜色通道,多个像素单元呈拜耳阵列排布。
请参阅图10,具体地,像素阵列201像素阵列201包括成像像素202和识别像素204。像素阵列201还包括第一类像素单元UR、第二类像素单元UG和第三类像素单元UB。第一类像素单元UA包括多个第一颜色感光像素R,第二类像素单元UG包括 多个第二颜色感光像素G,第三类像素单元UB包括多个第三颜色感光像素B。多个第一类像素单元UR、多个第二类像素单元UG和多个第三类像素单元UB呈拜耳阵列排布。
在又一个实施例中,图像传感器200包括像素阵列201,像素阵列201还包括多个像素单元,每个像素单元包括至少一个彩色感光像素和至少一个全色感光像素W,彩色感光像素具有比全色感光像素W更窄的光谱响应。
请参阅图11,具体地,像素阵列201像素阵列201包括成像像素202和识别像素204。像素阵列201还包括第一类像素单元UR、第二类像素单元UG和第三类像素单元UB。第一类像素单元UA包括多个第一颜色感光像素R和多个全色感光像素W,第二类像素单元UG包括多个第二颜色感光像素G和多个全色感光像素W,第三类像素单元UB包括多个第三颜色感光像素B和多个全色感光像素W。由于彩色感光像素具有比全色感光像素W更窄的光谱响应,因此可以提高图像的信噪比,使得图像的清晰度更高。
请再次参阅图3,在某些实施方式中,多个识别像素204形成识别像素组合205,每个识别像素组合205的多个识别像素204覆盖多个颜色通道。
具体地,在一个例子中,图像传感器可以是设置以拜耳(Bayer)阵列形式排布的滤光片阵列,以使得图像传感器中的多个成像像素202和识别像素204均能够接收穿过对应的滤光片的光线,从而生成具有不同颜色通道的像素信号。如图3所示,识别像素204可以包括RGB三个颜色通道的像素信号。其中,RGB分别为是:R为红色通道的像素信号、G为绿色通道的像素信号、B为蓝色通道的像素信号。如此,由于识别像素204能够覆盖多个通道,相对于识别像素204只覆盖一个通道来说,识别像素204的输出信号获得的待识别图像包括多个颜色通道的信息,能够使得识别像素204形成的待识别图像更加准确,可以在待识别图像的过程中避免颜色限制而造成的识别错误,从而使得待识别图像的识别结果也更加精准。
请参阅图12,在某些实施方式中,每个识别像素组合205的多个识别像素204在图像传感器200中相邻设置,处理待识别图像以确定是否存在预设信息,包括:
021:合并每个识别像素组合205的多个识别像素204的输出信号以获得合并像素206;
022:根据多个合并像素206判断待识别图像中是否存在预设信息。
在某些实施方式中,每个识别像素组合205的多个识别像素204在图像传感器200中相邻设置,步骤021和步骤022均可以由第二图像处理管道20实现。也即是说,第二图像处理管道20用于:合并每个识别像素组合205的多个识别像素204的输出信号 以获得合并像素206;根据多个合并像素206判断待识别图像中是否存在预设信息。
请一并参阅图3和图13,每个识别像素组合205的多个识别像素204在图像传感器200中相邻设置,相邻设置的识别像素204相关度较高,可以用于表示场景中的同一个点,因此便于合并以获得合并像素206。合并像素206的像素数据可以是多个识别像素204像素数据的和值或加权平均值。以图13为例,识别像素组合205包括4个识别像素204,4个识别像素204的像素数据的和值或加权平均值即可作为合并像素206的像素数据。合并像素206包括多种像素信息,合并像素206的像素数据更加精准。在一个实施例中,合并像素206可以用于表征物体的亮度信息和物体的颜色信息,如此在判断待识别图像中是否存在所述预设信息的过程中,可以更准确的对物体的亮度信息和颜色信息进行识别和判断。在又一个实施例中,如图13所示,合并像素206可以用于表征物体的亮度信息,而不包括物体的颜色信息,判断待识别图像中是否存在所述预设信息的过程中不需要涉及颜色信息的识别和判断,如此根据多个合并像素206判断待识别图像中是否存在所述预设信息,可以减少后续处理的计算量以实现AON功能。
请参阅图14,在某些实施方式中,处理待识别图像以确定是否存在预设信息,还包括:
023:将多个合并像素206的第一像素位深转换成第二像素位深,第二像素位深小于第一像素位深;
根据多个合并像素206确定是否存在预设信息,包括:
0221:根据第二像素位深的多个合并像素206判断待识别图像中是否存在预设信息。
在某些实施方式中,步骤023和步骤0221均可以由第二图像处理管道20实现。也即是说,第二图像处理管道20用于:将多个合并像素206的第一像素位深转换成第二像素位深,第二像素位深小于第一像素位深;根据第二像素位深的多个合并像素206判断待识别图像中是否存在预设信息。
请参阅图15,在一个例子中,将多个合并像素206的第一像素位深转换成第二像素位深,第二像素位深小于第一像素位深,合并像素206的第一像素位深可以是10bit,第二像素位深可以是8bit。根据第二像素位深(即8bit)的多个合并像素206确定是否存在预设信息。如此,可以减少数据的处理量,可以更快速的进行识别检测。
请参阅图16,在某些实施方式中,预设信息包括预设人脸、预设手势或预设人体姿态,图像处理方法包括:
024:判断待识别图像中是否存在预设人脸、预设手势或预设人体姿态;
025:在待识别图像中存在预设人脸、预设手势或预设人体姿态时控制电子设备1000执行对应的操作。
在某些实施方式中,步骤024和步骤025均可以由第二图像处理管道20实现。也即是说,第二图像处理管道20用于:判断待识别图像中是否存在预设人脸、预设手势或预设人体姿态;在待识别图像中存在预设人脸、预设手势或预设人体姿态时控制电子设备1000执行对应的操作。
具体地,第二图像处理管道20可以根据识别像素204的输出信号获得待识别图像并判断待识别图像中是否存在预设信息,预设信息包括预设人脸、预设手势或预设人体姿态。在一个例子中,预设信息可以是预设人脸进行解锁,当第二图像处理管道20获得待识别图像后可以对待识别图像进行处理,若确定待识别图像中存在预设人脸时确定对应的预设信息,如此便可以进行解锁。在另外一个例子中,预设信息可以是预设手势对电子设备进行操作,例如识别用户的手势是否为预设亮屏手势,在识别到的手势为预设亮屏手势时控制电子设备1000进行亮屏。在又一个例子中,预设信息可以是预设人体姿态对电子设备1000进行操作,例如识别用户的挥手动作是否为预设截取屏幕画面手势,在识别到挥手动作的为预设截取屏幕画面手势时控制电子设备1000进行截取屏幕画面。
在某些实施方式中,识别像素204的光谱响应比成像像素202的光谱响应窄。
在某些实施方式中,可以针对不同的出售国家和使用人群,提前设计识别像素204的光谱响应的波段。识别像素204的光谱响应比成像像素202的光谱响应窄,可以使得识别像素204所接收的光线更有倾向性,从而避免其他光线(杂光)的影响,从而提高信噪比,以提高识别人种、识别物体等精确度。
在某些实施方式中,识别像素204的光谱响应比成像像素202的光谱响应宽。
在某些实施方式中,可以针对不同常用的应用场景,提前设计识别像素204的光谱响应的波段。识别像素204的光谱响应比成像像素202的光谱响应宽可以增强进光量,以增加在黑暗环境下的识别精准度。
请参阅图17,在某些实施方式中,识别像素204在图像传感器200中均匀分布。
具体地,识别像素204在图像传感器200中均匀分布所获得的待识别图像能够更加全面地识别拍摄范围内是否存在预设人脸、预设手势和预设人体姿态等,避免发生遗漏的情况。
请参阅图18,在某些实施方式中,图像传感器200包括中心区域207和环绕中心区域207的边缘区域208,识别像素204在中心区域207的密度大于识别像素204在边缘区域208的密度。
具体地,图像传感器200可包括中心点209,中心区域207可以是指与中心点209的距离小于预设距离的区域,其中,中心区域207可以是圆形区域或方形区域等。边缘区域208可以是图像传感器200除中心区域207外的其他区域。中心区域207的密度大于识别像素204在边缘区域208的密度,如此,可以认为中心区域207为感兴趣区域,感兴趣区域会作为待识别图像的重点区域,在重点区域内周期性地检测是否存在预设信息以实现AON功能。
值得一提的是,本申请实施方式的滤光片阵列可以采用其他排布方式的滤光片。在图3所示的例子中,滤光片为“R、G、G、B”形式的拜尔阵列滤光片,在其他实施方式中也可以包括“R、G、B、W”形式的滤光片等,此处不作具体限定。
请参阅图19,本申请还公开一种电子设备1000,电子设备1000包括壳体600和上述摄像头组件500。
具体地,电子设备1000以是配置有摄像头组件500终端设备。例如,电子设备1000可以包括智能手机、平板电脑或其他配置有摄像头组件500的终端设备。电子设备1000通过一个图像传感器200可以获得待识别图像,判断待识别图像中是否存在预设信息,并在待识别图像中存在预设信息时根据预设信息控制电子设备1000执行对应的操作,从而实现AON功能。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。
尽管上面已经示出和描述了本申请的实施方式,可以理解的是,上述实施方式是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施方式进行变化、修改、替换和变型。

Claims (28)

  1. An image processing pipeline for an image sensor, wherein the image sensor comprises a pixel array, the pixel array comprises imaging pixels and identification pixels, the imaging pixels are configured to output a scene image, and the scene image is used to characterize scene information; the image processing pipeline is configured to obtain an image to be recognized according to an output signal of the identification pixels, determine whether preset information exists in the image to be recognized, and, when the preset information exists in the image to be recognized, control an electronic device to perform a corresponding operation according to the preset information.
  2. The image processing pipeline according to claim 1, wherein the image processing pipeline comprises a first image processing pipeline and a second image processing pipeline, the first image processing pipeline is configured to obtain the scene image according to an output signal of the imaging pixels, and the second image processing pipeline is configured to obtain the image to be recognized according to the output signal of the identification pixels, determine whether the preset information exists in the image to be recognized, and, when the preset information exists in the image to be recognized, control the electronic device to perform the corresponding operation according to the preset information.
  3. The image processing pipeline according to claim 1, wherein a plurality of the identification pixels form an identification pixel combination, and the plurality of identification pixels of each identification pixel combination cover multiple color channels.
  4. The image processing pipeline according to claim 3, wherein the plurality of identification pixels of each identification pixel combination are arranged adjacently in the image sensor, and the image processing pipeline is configured to: combine output signals of the plurality of identification pixels of each identification pixel combination to obtain a combined pixel; and determine, according to a plurality of the combined pixels, whether the preset information exists in the image to be recognized.
  5. The image processing pipeline according to claim 4, wherein the image processing pipeline is further configured to: convert a first pixel bit depth of the plurality of combined pixels into a second pixel bit depth, the second pixel bit depth being smaller than the first pixel bit depth; and determine, according to the plurality of combined pixels of the second pixel bit depth, whether the preset information exists in the image to be recognized.
  6. The image processing pipeline according to claim 1, wherein the preset information comprises a preset face, a preset gesture, or a preset human posture, and the image processing pipeline is further configured to: determine whether a preset face, a preset gesture, or a preset human posture exists in the image to be recognized, and control the electronic device to perform the corresponding operation when the preset face, the preset gesture, or the preset human posture exists in the image to be recognized.
  7. The image processing pipeline according to claim 1, wherein a spectral response of the identification pixels is narrower than a spectral response of the imaging pixels.
  8. The image processing pipeline according to claim 1, wherein a spectral response of the identification pixels is wider than a spectral response of the imaging pixels.
  9. The image processing pipeline according to claim 1, wherein the identification pixels are uniformly distributed in the image sensor.
  10. The image processing pipeline according to claim 1, wherein the image sensor comprises a central region and an edge region surrounding the central region, and a density of the identification pixels in the central region is greater than a density of the identification pixels in the edge region.
  11. The image processing pipeline according to claim 1, wherein the pixel array comprises a plurality of photosensitive pixels arranged in a Bayer array.
  12. The image processing pipeline according to claim 1, wherein the pixel array comprises a plurality of pixel units, each pixel unit comprises a plurality of photosensitive pixels, the photosensitive pixels of a same pixel unit cover a same color channel, and the plurality of pixel units are arranged in a Bayer array.
  13. The image processing pipeline according to claim 1, wherein the pixel array comprises a plurality of pixel units, each pixel unit comprises at least one color photosensitive pixel and at least one panchromatic photosensitive pixel, and the color photosensitive pixel has a narrower spectral response than the panchromatic photosensitive pixel.
  14. An image processing method for an image sensor, wherein the image sensor comprises a pixel array, the pixel array comprises imaging pixels and identification pixels, the imaging pixels are configured to output a scene image, and the scene image is used to characterize scene information; the image processing method comprises:
    obtaining an image to be recognized according to an output signal of the identification pixels; and
    determining whether preset information exists in the image to be recognized and, when the preset information exists in the image to be recognized, controlling an electronic device to perform a corresponding operation according to the preset information.
  15. The image processing method according to claim 14, wherein the image processing method comprises:
    obtaining the scene image according to an output signal of the imaging pixels.
  16. The image processing method according to claim 14, wherein a plurality of the identification pixels form an identification pixel combination, and the plurality of identification pixels of each identification pixel combination cover multiple color channels.
  17. The image processing method according to claim 16, wherein the plurality of identification pixels of each identification pixel combination are arranged adjacently in the image sensor, and the determining whether preset information exists in the image to be recognized comprises:
    combining output signals of the plurality of identification pixels of each identification pixel combination to obtain a combined pixel; and
    determining, according to a plurality of the combined pixels, whether the preset information exists in the image to be recognized.
  18. The image processing method according to claim 17, wherein the determining whether preset information exists in the image to be recognized comprises:
    converting a first pixel bit depth of the plurality of combined pixels into a second pixel bit depth, the second pixel bit depth being smaller than the first pixel bit depth;
    and the determining, according to the plurality of combined pixels, whether the preset information exists comprises:
    determining, according to the plurality of combined pixels of the second pixel bit depth, whether the preset information exists in the image to be recognized.
  19. The image processing method according to claim 14, wherein the preset information comprises a preset face, a preset gesture, or a preset human posture, and the image processing method comprises:
    determining whether a preset face, a preset gesture, or a preset human posture exists in the image to be recognized; and
    controlling the electronic device to perform the corresponding operation when the preset face, the preset gesture, or the preset human posture exists in the image to be recognized.
  20. The image processing method according to claim 14, wherein a spectral response of the identification pixels is narrower than a spectral response of the imaging pixels.
  21. The image processing method according to claim 14, wherein a spectral response of the identification pixels is wider than a spectral response of the imaging pixels.
  22. The image processing method according to claim 14, wherein the identification pixels are uniformly distributed in the image sensor.
  23. The image processing method according to claim 14, wherein the image sensor comprises a central region and an edge region surrounding the central region, and a density of the identification pixels in the central region is greater than a density of the identification pixels in the edge region.
  24. The image processing method according to claim 14, wherein the pixel array comprises a plurality of photosensitive pixels arranged in a Bayer array.
  25. The image processing method according to claim 14, wherein the pixel array comprises a plurality of pixel units, each pixel unit comprises a plurality of photosensitive pixels, the photosensitive pixels of a same pixel unit cover a same color channel, and the plurality of pixel units are arranged in a Bayer array.
  26. The image processing method according to claim 14, wherein the pixel array comprises a plurality of pixel units, each pixel unit comprises at least one color photosensitive pixel and at least one panchromatic photosensitive pixel, and the color photosensitive pixel has a narrower spectral response than the panchromatic photosensitive pixel.
  27. A camera assembly, comprising:
    an image sensor, and
    the image processing pipeline according to any one of claims 1-13, the image processing pipeline being configured to process an image output by the image sensor.
  28. An electronic device, comprising a casing and the camera assembly according to claim 27, the camera assembly being disposed on the casing.
PCT/CN2020/141968 2020-12-31 2020-12-31 图像处理管道、图像处理方法、摄像头组件和电子设备 WO2022141349A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080104988.XA CN116114264A (zh) 2020-12-31 2020-12-31 图像处理管道、图像处理方法、摄像头组件和电子设备
PCT/CN2020/141968 WO2022141349A1 (zh) 2020-12-31 2020-12-31 图像处理管道、图像处理方法、摄像头组件和电子设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/141968 WO2022141349A1 (zh) 2020-12-31 2020-12-31 图像处理管道、图像处理方法、摄像头组件和电子设备

Publications (1)

Publication Number Publication Date
WO2022141349A1 true WO2022141349A1 (zh) 2022-07-07

Family

ID=82258819

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/141968 WO2022141349A1 (zh) 2020-12-31 2020-12-31 图像处理管道、图像处理方法、摄像头组件和电子设备

Country Status (2)

Country Link
CN (1) CN116114264A (zh)
WO (1) WO2022141349A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190230339A1 (en) * 2018-01-22 2019-07-25 Kuan-Yu Lu Image sensor capable of enhancing image recognition and application of the same
CN110462630A (zh) * 2019-05-27 2019-11-15 深圳市汇顶科技股份有限公司 用于人脸识别的光学传感器、装置、方法和电子设备
CN111586323A (zh) * 2020-05-07 2020-08-25 Oppo广东移动通信有限公司 图像传感器、控制方法、摄像头组件和移动终端
CN111814745A (zh) * 2020-07-31 2020-10-23 Oppo广东移动通信有限公司 手势识别方法、装置、电子设备及存储介质
CN111860530A (zh) * 2020-07-31 2020-10-30 Oppo广东移动通信有限公司 电子设备、数据处理方法及相关装置


Also Published As

Publication number Publication date
CN116114264A (zh) 2023-05-12


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20967688

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20967688

Country of ref document: EP

Kind code of ref document: A1