CN117808688A - High-resolution high-frame-rate image pickup method and image processing apparatus


Info

Publication number
CN117808688A
Authority
CN
China
Prior art keywords
photosensitive
image
exposure
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211187446.0A
Other languages
Chinese (zh)
Inventor
王海军
刘远通
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202211187446.0A
Priority to PCT/CN2023/120896 (WO2024067428A1)
Publication of CN117808688A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses a high-resolution high-frame-rate image pickup method and an image processing apparatus, wherein the method comprises the following steps: acquiring N photosensitive images, where the N photosensitive images are respectively obtained by carrying out N consecutive exposures on a photosensitive element; a first photosensitive image among the N photosensitive images comprises P pieces of photosensitive data and a second photosensitive image comprises Q pieces of photosensitive data, the P pieces and Q pieces of photosensitive data being obtained by exposing P photosensitive points and Q photosensitive points respectively, where the positions of the P photosensitive points and the Q photosensitive points on the photosensitive element are different; and performing fusion processing on the N photosensitive images to obtain a third photosensitive image, where the third photosensitive image comprises the P pieces of photosensitive data respectively corresponding to the P photosensitive points and the Q pieces of photosensitive data respectively corresponding to the Q photosensitive points. According to the method and the apparatus, a high-resolution high-frame-rate photosensitive image can be obtained without increasing the data transmission bandwidth, improving the output image quality of the electronic device.

Description

High-resolution high-frame-rate image pickup method and image processing apparatus
Technical Field
The present disclosure relates to the field of video data acquisition and processing technologies, and in particular, to a high-resolution high-frame-rate image capturing method and an image processing apparatus.
Background
With the popularization of mobile devices and the rise of the short-video industry, the specifications for everyday video recording have grown from 1080P (progressive scan) at 30 frames per second (FPS) to 4K (kilo) at 30 FPS, or even 8K at 30 FPS, and the demand for imaging quality keeps increasing.
The process of video data acquisition on a mobile device is typically as follows: the raw optical-signal data is collected by the photosensitive element of the camera (e.g., a sensor), transmitted through a data bus to an image processing module (e.g., an image signal processor (ISP)) to be processed into normal color image data, and finally delivered to the device for storage or real-time preview.
Because the transmission bandwidth and transmission power consumption of the data bus from the photosensitive element to the image processing module are limited, the total amount of raw optical-signal data that the photosensitive element can output per unit time is limited. That is, without special processing, the photosensitive image sequence output by the photosensitive element cannot meet high-resolution and high-frame-rate requirements at the same time.
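As a rough illustration of this bandwidth constraint, consider the following back-of-the-envelope sketch (the resolution, bit depth, and frame rate are assumed example figures, not values from this disclosure):

```python
# Illustrative RAW bandwidth estimate; all numbers are assumptions.
width, height = 3840, 2160        # 4K photosensitive element
bits_per_photosite = 10           # assumed RAW bit depth
fps = 30

bandwidth_bps = width * height * bits_per_photosite * fps
print(f"full readout: {bandwidth_bps / 1e9:.2f} Gbit/s")        # ~2.49 Gbit/s

# Exposing only 1 of every C photosites per frame divides the per-frame
# data volume by C, which is what makes a higher frame rate affordable:
C = 4
print(f"sparse readout: {bandwidth_bps / C / 1e9:.2f} Gbit/s")  # ~0.62 Gbit/s
```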
The prior art generally converts an acquired low-resolution high-frame-rate photosensitive image into a high-resolution high-frame-rate one in two ways: first, acquiring additional photosensitive data through a binocular camera to achieve high resolution; second, acquiring a low-resolution high-frame-rate photosensitive image and then performing image processing (e.g., up-sampling) with a trained neural network model to obtain the high-resolution photosensitive image.
However, the former relies on specific hardware (the binocular camera), so its application range is narrow and its cost is high; the latter relies on the prior information baked into the model, so its generalization is uncertain, and it requires an additional neural network processing unit in hardware.
Disclosure of Invention
The embodiments of the application provide a high-resolution high-frame-rate image capturing method and an image processing apparatus, which can fuse low-resolution photosensitive images by utilizing inter-frame information, without increasing the data transmission bandwidth, to obtain corresponding high-resolution high-frame-rate photosensitive images, thereby effectively improving the output image quality of electronic equipment.
In a first aspect, the present application provides a high-resolution high-frame-rate imaging method, the method comprising: acquiring N photosensitive images, wherein the N photosensitive images are respectively obtained by carrying out N consecutive exposures on the photosensitive element; the N photosensitive images comprise a first photosensitive image and a second photosensitive image, the first photosensitive image comprises P pieces of photosensitive data obtained by exposing P photosensitive points on the photosensitive element in one exposure process, the second photosensitive image comprises Q pieces of photosensitive data obtained by exposing Q photosensitive points on the photosensitive element in one exposure process, the positions of the P photosensitive points and the Q photosensitive points on the photosensitive element are different, P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2; and performing fusion processing on the N photosensitive images to obtain a third photosensitive image; the third photosensitive image comprises the P pieces of photosensitive data respectively corresponding to the P photosensitive points and the Q pieces of photosensitive data respectively corresponding to the Q photosensitive points.
In terms of technical effect, the application obtains multiple frames of low-resolution original photosensitive images by exposing different photosensitive points on the photosensitive element over a continuous span of time, where the exposed photosensitive points contained in any two photosensitive images differ (this exposure mode requires no more data-bus transmission bandwidth than acquiring a low-resolution high-frame-rate photosensitive image). The photosensitive data corresponding to all the exposed photosensitive points are then combined to obtain a third photosensitive image, so that the third photosensitive image contains the photosensitive data corresponding to all or most of the photosensitive points, yielding a photosensitive image containing high-resolution information. In summary, without increasing hardware overhead or data transmission bandwidth, the high-resolution photosensitive image (i.e., the third photosensitive image) corresponding to the low-resolution photosensitive images can be obtained through the cooperation of the specific exposure mode and the fusion mode, effectively improving the quality of the video or image output by the terminal device. In addition, because adjacent frames in the time domain are used for fusion, the temporal and spatial correlation is stronger, so the photosensitive data in the fused high-resolution photosensitive image are closer to reality and the output image/video effect is better.
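A minimal sketch of this fusion step, assuming each low-resolution frame arrives as a full-size array in which exposed photosites are marked by a boolean mask (the function name, the mask convention, and the 2D layout are illustrative assumptions, not the disclosed implementation):

```python
import numpy as np

def fuse_frames(frames: list[np.ndarray], masks: list[np.ndarray]) -> np.ndarray:
    """Combine N sparsely exposed RAW frames into one high-resolution frame.

    frames[i] holds the photosensitive data of the i-th exposure at full
    sensor size; masks[i] is True where a photosite was actually exposed.
    Because any two exposures hit photosites at different positions, each
    position is written at most once within the window.
    """
    fused = np.zeros_like(frames[0])
    filled = np.zeros(frames[0].shape, dtype=bool)
    for frame, mask in zip(frames, masks):
        write = mask & ~filled        # keep the first sample per photosite
        fused[write] = frame[write]
        filled |= write
    return fused                      # photosites never exposed remain zero
```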
In a possible implementation manner, the N photosensitive images are N temporally consecutive images among K×E photosensitive images, the K×E photosensitive images are obtained by exposing sequentially over K consecutive exposure periods, and in each of the K exposure periods the photosensitive element is exposed E consecutive times; the photosensitive element comprises M photosensitive units, each of the M photosensitive units comprises C photosensitive points, and in each exposure period all the photosensitive points in each photosensitive unit are sequentially exposed once, where C is a positive integer greater than or equal to 2.
In terms of technical effect, since the photosensitive points in each photosensitive unit are sequentially exposed once within one exposure period, any N consecutive photosensitive images selected from the acquisition time domain contain photosensitive data obtained by exposing different photosensitive points; fusing them therefore yields a high-resolution photosensitive image containing the photosensitive data corresponding to most or all of the photosensitive points. In addition, only N consecutive photosensitive images need to be selected each time, i.e., the terminal device only needs to keep at least the most recently acquired N photosensitive images, which reduces the buffering overhead of the terminal device.
In a possible embodiment, in each exposure within one exposure period, one photosensitive point within each photosensitive unit is exposed, and N is equal to C.
In terms of technical effect, when only one photosensitive point per unit is exposed in each exposure within one exposure period and N equals the number of exposures in one exposure period, the photosensitive data contained in the selected N photosensitive images are exactly the result of exposing each photosensitive point on the photosensitive element once. Direct fusion then produces a third photosensitive image that contains the photosensitive data corresponding to every photosensitive point on the photosensitive element, i.e., a photosensitive image containing high-resolution information.
In a possible embodiment, the M photosensitive units include a first photosensitive unit, and the method further includes: when a first photosensitive point in the first photosensitive unit is not exposed in the N times of exposure, interpolating based on photosensitive data corresponding to the exposed photosensitive point in the N times of exposure of the first photosensitive unit, and calculating to obtain photosensitive data corresponding to the first photosensitive point; the first photosites are any one of the photosites which are not exposed in the first photosites, and the first photosites are any one of the M photosites; the third photosensitive image further includes photosensitive data corresponding to the first photosensitive point.
In terms of technical effect, when some photosensitive points on the photosensitive element are not exposed within the time span corresponding to the N photosensitive images, i.e., the photosensitive data contained in the acquired N photosensitive images do not include photosensitive data for those unexposed points, spatial interpolation can be performed, for each unexposed photosensitive point in a photosensitive unit, over the photosensitive data of the points in that unit that were exposed. The interpolated values are used as photosensitive data of the third photosensitive image, so that the third photosensitive image contains photosensitive data corresponding to every photosensitive point, yielding a photosensitive image containing high-resolution information.
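A hedged sketch of this spatial interpolation, assuming square photosensitive units and a plain average over the exposed photosites of the same unit (both are illustrative choices; the disclosure does not fix an interpolation formula):

```python
import numpy as np

def fill_by_unit_interpolation(fused: np.ndarray, filled: np.ndarray,
                               unit: int = 2) -> np.ndarray:
    """Fill photosites never exposed in the window by averaging the exposed
    photosites of the same unit-by-unit block."""
    out = fused.astype(np.float32)    # avoid integer truncation of averages
    h, w = fused.shape
    for y in range(0, h, unit):
        for x in range(0, w, unit):
            valid = filled[y:y+unit, x:x+unit]
            if valid.any() and not valid.all():
                block = out[y:y+unit, x:x+unit]      # view into `out`
                block[~valid] = block[valid].mean()  # average exposed points
    return out
```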
In a possible embodiment, the M photosensitive units include a first photosensitive unit, and the method further includes: when a first photosensitive point in the first photosensitive unit is not exposed in N times of exposure corresponding to the N Zhang Ganguang image, calculating a motion vector based on photosensitive data contained in the N Zhang Ganguang image, and calculating photosensitive data corresponding to the first photosensitive point based on the motion vector; the first photosites are any one of the photosites which are not exposed in the first photosites, and the first photosites are any one of the M photosites; the third photosensitive image further includes photosensitive data corresponding to the first photosensitive point.
In terms of technical effect, besides calculating the photosensitive data of an unexposed photosensitive point by interpolation as above, the edge feature points of objects across the multiple photosensitive images can be matched to calculate the motion vector (i.e., the moving direction and speed) of each pixel point over time, and thereby determine which exposed photosensitive point observed the same position of the object as the unexposed photosensitive point; the photosensitive data of that exposed point is then used as the photosensitive data of the unexposed point. Determining the photosensitive data of all unexposed photosensitive points through motion vectors in this way makes the third photosensitive image contain photosensitive data corresponding to every photosensitive point, yielding a photosensitive image containing high-resolution information.
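Motion estimation can be realized in many ways; the sketch below is a minimal block-matching variant by sum of absolute differences (block size, search radius, and the SAD criterion are assumptions for illustration, not the disclosed algorithm):

```python
import numpy as np

def estimate_motion(prev: np.ndarray, cur: np.ndarray, y: int, x: int,
                    block: int = 8, radius: int = 4) -> tuple[int, int]:
    """Return the (dy, dx) displacement that best matches the block at (y, x)
    in `cur` against `prev`, by sum of absolute differences (SAD)."""
    ref = cur[y:y+block, x:x+block].astype(np.int32)
    best_sad, best_dyx = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > prev.shape[0] \
                    or xx + block > prev.shape[1]:
                continue                      # candidate block out of bounds
            cand = prev[yy:yy+block, xx:xx+block].astype(np.int32)
            sad = int(np.abs(ref - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_dyx = sad, (dy, dx)
    return best_dyx   # an unexposed photosite can copy data along this vector
```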
In a possible embodiment, the method further comprises: carrying out fusion processing on N-1 photosensitive images and a fourth photosensitive image which are continuous in the time domain in the N Zhang Ganguang image to obtain a fifth photosensitive image adjacent to the third photosensitive image; wherein the fourth photosensitive image is the next photosensitive image adjacent to the N Zhang Ganguang image in the time domain, and the fourth photosensitive image is adjacent to the N-1 photosensitive image in the time domain.
In terms of technical effect, for photosensitive images exposed continuously in the time domain, fusion uses a sliding window of N low-resolution photosensitive images in temporal order, fusing as soon as each new frame is collected; at least only N photosensitive images need to be cached, which effectively reduces the caching overhead and the user's preview delay.
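A sliding-window sketch of this collect-and-fuse pipeline, reusing the assumed fuse_frames helper from the earlier sketch (the generator interface is an illustrative choice; the point is that only the most recent N frames stay buffered):

```python
from collections import deque

def fuse_stream(raw_stream, mask_stream, n: int):
    """Yield one fused high-resolution frame per new exposure once the first
    N exposures have arrived; only the most recent N frames are cached."""
    frames, masks = deque(maxlen=n), deque(maxlen=n)
    for frame, mask in zip(raw_stream, mask_stream):
        frames.append(frame)
        masks.append(mask)
        if len(frames) == n:                  # window full: fuse and emit
            yield fuse_frames(list(frames), list(masks))
```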
In a possible embodiment, the method further comprises: processing each piece of photosensitive data contained in the third photosensitive image to obtain the pixel value corresponding to each piece of photosensitive data on the third photosensitive image.
Each photosensitive point on the photosensitive element corresponds to a pixel point on the photosensitive image; the photosensitive data corresponding to each photosensitive point is the photosensitive data contained in the pixel point at the corresponding position on the photosensitive image, and this photosensitive data is used to calculate the pixel value of that pixel point.
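As a minimal, assumed sketch of how photosensitive data could be turned into pixel values (the black level, white level, and gamma are placeholder numbers; a real ISP pipeline additionally performs demosaicing, white balance, noise reduction, and more):

```python
import numpy as np

def raw_to_pixel(raw: np.ndarray, black_level: int = 64,
                 white_level: int = 1023, gamma: float = 2.2) -> np.ndarray:
    """Map 10-bit RAW photosensitive data to 8-bit pixel values."""
    x = (raw.astype(np.float32) - black_level) / (white_level - black_level)
    x = np.clip(x, 0.0, 1.0) ** (1.0 / gamma)   # simple gamma encoding
    return (x * 255.0 + 0.5).astype(np.uint8)
```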
In a second aspect, the present application provides an image processing apparatus, the apparatus comprising: an acquisition unit configured to acquire N photosensitive images, the N photosensitive images being respectively obtained by carrying out N consecutive exposures on the photosensitive element; the N photosensitive images comprise a first photosensitive image and a second photosensitive image, the first photosensitive image comprises P pieces of photosensitive data obtained by exposing P photosensitive points on the photosensitive element in one exposure process, the second photosensitive image comprises Q pieces of photosensitive data obtained by exposing Q photosensitive points on the photosensitive element in one exposure process, the positions of the P photosensitive points and the Q photosensitive points on the photosensitive element are different, P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2; and a fusion unit configured to perform fusion processing on the N photosensitive images to obtain a third photosensitive image; the third photosensitive image comprises the P pieces of photosensitive data respectively corresponding to the P photosensitive points and the Q pieces of photosensitive data respectively corresponding to the Q photosensitive points.
In a possible implementation manner, the N photosensitive images are N temporally consecutive images among K×E photosensitive images, the K×E photosensitive images are obtained by exposing sequentially over K consecutive exposure periods, in each of the K exposure periods the photosensitive element is sequentially exposed E times, and K is a positive integer greater than or equal to 1; the photosensitive element comprises M photosensitive units, each of the M photosensitive units comprises C photosensitive points, all the photosensitive points in each photosensitive unit are sequentially exposed once in each exposure period, and C and M are positive integers greater than or equal to 2.
In a possible embodiment, in each exposure within one exposure period, one photosensitive point within each photosensitive unit is exposed, and N is equal to C.
In a possible embodiment, the M photosensitive units include a first photosensitive unit, and the fusion unit is further configured to: when a first photosensitive point in the first photosensitive unit is not exposed in the N times of exposure, interpolating based on photosensitive data corresponding to the exposed photosensitive point in the N times of exposure of the first photosensitive unit, and calculating to obtain photosensitive data corresponding to the first photosensitive point; the first photosites are any one of the photosites which are not exposed in the first photosites, and the first photosites are any one of the M photosites; the third photosensitive image further includes photosensitive data corresponding to the first photosensitive point.
In a possible embodiment, the M photosensitive units include a first photosensitive unit, and the fusion unit is further configured to: when a first photosensitive point in the first photosensitive unit is not exposed in N times of exposure corresponding to the N Zhang Ganguang image, calculating a motion vector based on photosensitive data contained in the N Zhang Ganguang image, and calculating photosensitive data corresponding to the first photosensitive point based on the motion vector; the first photosites are any one of the photosites which are not exposed in the first photosites, and the first photosites are any one of the M photosites; the third photosensitive image further includes photosensitive data corresponding to the first photosensitive point.
In a possible embodiment, the fusion unit is further configured to: carrying out fusion processing on N-1 photosensitive images and a fourth photosensitive image which are continuous in the time domain in the N Zhang Ganguang image to obtain a fifth photosensitive image adjacent to the third photosensitive image; wherein the fourth photosensitive image is the next photosensitive image adjacent to the N Zhang Ganguang image in the time domain, and the fourth photosensitive image is adjacent to the N-1 photosensitive image in the time domain.
In a possible embodiment, the apparatus further comprises: an image signal processing unit configured to process each piece of photosensitive data contained in the third photosensitive image to obtain the pixel value corresponding to each piece of photosensitive data on the third photosensitive image.
In a third aspect, the present application provides an electronic device, where the electronic device includes at least one processor, a memory, and a communication interface, the memory, the communication interface, and the at least one processor are interconnected by lines, and instructions are stored in the memory; when the instructions are executed by the processor, the method of any one of the above first aspects is implemented.
In a fourth aspect, embodiments of the present application provide a chip system, where the chip system includes at least one processor, a memory, and a communication interface, the memory, the communication interface, and the at least one processor are interconnected by lines, and instructions are stored in the memory; when the instructions are executed by the processor, the method of any one of the above first aspects is implemented.
In a fifth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program, where the method according to any one of the first aspects is implemented when the computer program is executed.
In a sixth aspect, embodiments of the present application provide a computer program comprising instructions which, when executed, implement a method according to any one of the first aspects above.
Drawings
The drawings used in the embodiments of the present application are described below.
Fig. 1 is a network environment of an electronic device according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 3 is a flowchart of a high resolution high frame rate image capturing method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a data acquisition mode according to an embodiment of the present application;
FIG. 5 is a pipeline architecture for generating a high resolution high frame rate video sequence according to an embodiment of the present application;
fig. 6 is a schematic diagram of an implementation process of a high-resolution high-frame-rate image capturing method according to an embodiment of the present application;
fig. 7 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic hardware structure of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings. In the description of the embodiments of the present application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" merely describes an association relation between the associated objects and indicates that three relations may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "plural" means two or more.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the drawings, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus. Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The following describes the terminology involved in this application:
(1) Photosensitive element: the photosensitive element is core equipment of a camera on the electronic device, and converts optical signals into electric signals, and then converts the electric signals into digital signals, and the digital signals can be converted into pixel values of images through subsequent processing of an image processing module such as an ISP (Internet service provider). Specifically, the photosensitive element comprises a plurality of photosensitive points, the number of the photosensitive points on the photosensitive element is the same as that of the pixel points on the photosensitive image, and the positions are in one-to-one correspondence. The photosensitive data output by each photosensitive point participating in exposure is the photosensitive data contained in the pixel point at the corresponding position on the photosensitive image. Typically, during an exposure, one photosite may or may not participate in the photosoutput.
The image processing apparatus provided by the embodiments of the present invention may be an electronic device as described below, configured to perform the high-resolution high-frame-rate image capturing method provided by the embodiments; the electronic device may be a device with a communication function. For example, the electronic device may include at least one of: a terminal, a smart phone, a tablet personal computer (PC), a mobile phone, a video phone, an electronic book reader, a desktop PC, a laptop PC, a netbook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a Moving Picture Experts Group (MPEG-1 or MPEG-2) Audio Layer 3 (MP3) player, an ambulatory medical device, a camera, or a wearable device (e.g., a head-mounted device (HMD) such as electronic glasses, electronic apparel, an electronic bracelet, an electronic necklace, an electronic application accessory, an electronic tattoo, a smartwatch, and the like).
It should be understood that the electronic device in the embodiments of the present application may be various devices having image capturing or video recording functions. It is therefore obvious to a person skilled in the art that the electronic device is not limited to the above-described device.
Hereinafter, an electronic device will be described with reference to the accompanying drawings. The term "user" as used in various embodiments disclosed herein may refer to a person using an electronic device or a device using the electronic device (e.g., an artificial intelligence electronic device).
Fig. 1 is a network environment of an electronic device according to an embodiment of the present invention.
Referring to fig. 1, an electronic device 101 may include a bus 110, a processor 120, a memory 130, an input/output (I/O) interface 140, a display 150, a communication interface 160, a camera 1 (170), a camera 2 (171), and the like. The camera 1 (170) and the camera 2 (171) may be variously referred to as a first camera module and a second camera module, a first image capturing module and a second image capturing module, or the like. It should be understood that the electronic device 101 may also include only the camera 1 (170), not the camera 2 (171).
Camera 1 (170) may be a front camera capturing the scene on the display 150 side, and camera 2 (171) may be a rear camera capturing the scene behind the device; both may cooperate with processor 120. Bus 110 may be a circuit that connects the elements described above to each other and transmits communications (e.g., control messages) between those elements. As another implementation, camera 1 (170) and camera 2 (171) may both be rear-facing cameras and may cooperate with processor 120.
Processor 120 may receive, for example, instructions from the other elements described above (e.g., memory 130, I/O interface 140, display 150, communication interface 160, etc.) via bus 110, interpret the received instructions, and perform operations or data processing corresponding to the interpreted instructions. Processor 120 may include at least one of a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), and an image signal processor (ISP); for example, it may include a CPU, GPU, DSP, and ISP.
Memory 130 may store instructions or data received from processor 120 or other elements (e.g., I/O interface 140, display 150, communication interface 160, etc.) or generated by processor 120 or other elements. Memory 130 may include, for example, programming modules such as a kernel 131, middleware 132, application programming interfaces (Application Programming Interface, APIs) 133, applications 134, and the like. The programming modules may each be configured using software, firmware, hardware, or a combination of two or more of software, firmware, and hardware.
The kernel 131 may control or manage system resources (e.g., the bus 110, the processor 120, or the memory 130, etc.) for performing operations or functions implemented in the remaining programming modules (e.g., the middleware 132, the API 133, or the application 134). In addition, the kernel 131 may provide interfaces that allow the middleware 132, the API 133, or the application 134 to access and control or manage the various elements of the electronic device 101.
Middleware 132 may perform intermediation so that API 133 or application 134 may communicate with kernel 131 to provide and retrieve data. Further, in association with task requests received from applications 134, middleware 132 may perform control (e.g., scheduling or load balancing) of the task requests using a method of assigning priorities to at least one of applications 134 that may use system resources (e.g., bus 110, processor 120, or memory 130, etc.) of the electronic device.
The API 133 is an interface that allows the application 134 to control functions provided by the kernel 131 or the middleware 132, and may include at least one interface or function (e.g., instructions) for file control, window control, image processing, character control, or the like.
According to various embodiments disclosed herein, applications 134 may include a Short Message Service (SMS)/Multimedia Message Service (MMS) application, an email application, a calendar application, an alarm clock application, a health care application (e.g., an application for measuring amount of movement or blood glucose, etc.), or an environmental information application (e.g., an application providing barometric pressure, humidity, or temperature information, etc.). Additionally or alternatively, the application 134 may be an application related to the exchange of information between the electronic device 101 and an external electronic device (e.g., the electronic device 104). Applications related to this information exchange may include, for example, a notification relay application for transmitting specific information to an external electronic device or a device management application for managing the external electronic device.
For example, the notification relay application may include functionality for transmitting notification information generated from a different application (e.g., SMS/MMS application, email application, healthcare application, or environmental information application) of the electronic device 101 to an external electronic device (e.g., electronic device 104). Additionally or alternatively, for example, the notification relay application may receive notification information from an external electronic device (e.g., electronic device 104) and provide the notification information to the user. The device management application may manage (e.g., install, delete, or update) functions running in the external electronic device (e.g., on or off of the external electronic device itself (or some constituent components) or brightness (or resolution) control of the display) and applications or services (e.g., communication services or messaging services) provided by the external electronic device.
According to various embodiments of the present disclosure, the applications 134 may include specified applications according to properties of an external electronic device (e.g., the type of electronic device) such as the electronic device 104. For example, in the case where the external electronic device is an MP3 player, the application 134 may include an application related to music reproduction. Similarly, where the external electronic device is an ambulatory medical healthcare device, the applications 134 may include healthcare-related applications. According to embodiments of the present disclosure, the applications 134 may include at least one of applications specified in the electronic device 101 and applications received from an external electronic device (e.g., the server 106 or the electronic device 104).
The I/O interface 140 may transmit instructions or data entered by a user via an I/O unit (e.g., sensor, keyboard, or touch screen) to the processor 120, memory 130, and communication interface 160 via, for example, the bus 110. For example, the I/O interface 140 may provide data to the processor 120 regarding user touches entered via a touch screen. Further, for example, the I/O interface 140 may output instructions or data received from the processor 120, memory 130, and communication interface 160 via the bus 110 via an I/O unit (e.g., a speaker or display). For example, the I/O interface 140 may output voice data processed by the processor 120 to a user via a speaker.
The display 150 may display various information (e.g., multimedia data or text data, etc.) to a user. The communication interface 160 may connect communication between the electronic device 101 and an external device (e.g., the electronic device 104 or the server 106). For example, the communication interface 160 may be connected with the network 162 via wireless communication or wired communication to communicate with external devices. The wireless communication may include, for example, at least one of wireless fidelity (Wi-Fi), bluetooth (BT), near Field Communication (NFC), GPS, or cellular communication (e.g., long Term Evolution (LTE), LTE-advanced (LTE-a), code Division Multiple Access (CDMA), wideband CDMA (WCDMA), universal Mobile Telecommunications System (UMTS), wireless broadband (WiBro), or global system for mobile communications (GSM), etc.). The wired communication may include at least one of Universal Serial Bus (USB), high Definition Multimedia Interface (HDMI), recommended standard 232 (RS-232), and Plain Old Telephone Service (POTS).
According to embodiments disclosed herein, the network 162 may be a telecommunications network. The telecommunications network may comprise at least one of a computer network, the internet of things, and a telephone network. According to embodiments of the present disclosure, a protocol (e.g., a transport layer protocol, a data link layer protocol, or a physical layer protocol) for communication between the electronic device 101 and an external device may be supported by at least one of the application 134, the application programming interface (Application Programming Interface, API) 133, the middleware 132, the kernel 131, or the communication interface 160.
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. For example, the electronic device may configure all or a portion of the electronic device 101 shown in fig. 1.
Referring to fig. 2, the electronic device 201 may include one or more Application Processors (APs) 210, a communication module 220, a Subscriber Identity Module (SIM) card 224, a memory 230, a sensor module 240, an input device 250, a display 260, an interface 270, an audio module 280, a camera module 290, a camera module 291, a power management module 295, a battery 296, an indicator 297, and a motor 298.
The AP 210 may drive an Operating System (OS) or an application to control a plurality of hardware or software elements connected to the AP 210 and perform various data processing and operations including multimedia data. For example, the AP 210 may be implemented as a system on chip (SoC). According to embodiments of the present disclosure, the AP 210 may further include at least one of a Graphics Processing Unit (GPU) and a DSP (not shown).
The communication module 220 (e.g., the communication interface 160) may perform data transmission/reception in communication between the electronic device 201 (e.g., the electronic device 101) and other electronic devices (e.g., the electronic device 104 or the server 106) connected via a network. According to embodiments of the present disclosure, the communication module 220 may include a cellular module 221, a Wi-Fi module 223, a BT module 225, a GPS module 227, an NFC module 228, and a Radio Frequency (RF) module 229.
The cellular module 221 may provide voice communication, image communication, short message service, internet service, or the like via a communication network (e.g., LTE-A, CDMA, WCDMA, UMTS, wiBro, GSM, or the like). Further, the cellular module 221 may perform identification and authentication of electronic devices within the communication network using, for example, a subscriber identification module (e.g., SIM card 224). According to embodiments of the present disclosure, cellular module 221 may perform at least a portion of the functionality that AP 210 may provide. For example, the cellular module 221 may perform at least a portion of the multimedia control functions.
According to embodiments of the present disclosure, the cellular module 221 may include a Communication Processor (CP). In addition, for example, the cellular module 221 may be implemented as a SoC. Although elements such as cellular module 221 (e.g., communication processor), memory 230, power management module 295, etc. are shown in fig. 2 as being separate from elements of AP 210, AP 210 may be implemented to include at least a portion of the elements described above (e.g., cellular module 221).
According to embodiments disclosed herein, the AP 210 or cellular module 221 (e.g., a communication processor) may load instructions or data received from at least one of the nonvolatile memory and other elements connected thereto onto the volatile memory and process it. Further, the AP 210 or the cellular module 221 may store data received from or generated by at least one of the other elements in the nonvolatile memory.
Wi-Fi module 223, BT module 225, GPS module 227, or NFC module 228 may each include, for example, a processor for processing data sent/received via the relevant module. Although the cellular module 221, wi-Fi module 223, BT module 225, GPS module 227, or NFC module 228 are shown as separate blocks in fig. 2, at least a portion (e.g., two or more elements) of the cellular module 221, wi-Fi module 223, BT module 225, GPS module 227, or NFC module 228 may be included in one Integrated Circuit (IC) or IC package. For example, at least a portion of the processors corresponding to each of the cellular module 221, wi-Fi module 223, BT module 225, GPS module 227, or NFC module 228 (e.g., the communication processor corresponding to cellular module 221 and the Wi-Fi processor corresponding to Wi-Fi module 223) may be implemented as one SoC.
The RF module 229 may perform transmission/reception of data, for example, transmission/reception of an RF signal. Although not shown, the RF module 229 may include, for example, a transceiver, a Power Amplifier Module (PAM), a frequency filter, a Low Noise Amplifier (LNA), or the like. In addition, the RF module 229 may further include a means (e.g., a conductor, a wire, etc.) for transmitting/receiving electromagnetic waves through free space in wireless communication. Although fig. 2 shows that the cellular module 221, the Wi-Fi module 223, the BT module 225, the GPS module 227, and the NFC module 228 share one RF module 229, at least one of the cellular module 221, the Wi-Fi module 223, the BT module 225, the GPS module 227, or the NFC module 228 may perform transmission/reception of RF signals via a separate RF module.
The SIM card 224 may be a card including a subscriber identity module and may be inserted into a slot formed in a specific location of the electronic device. The SIM card 224 may include unique identification information (e.g., an integrated circuit card identification code (ICCID)) or subscriber information (e.g., an international mobile subscriber identification code (IMSI)).
Memory 230 (e.g., memory 130) may include internal memory 232 (alternatively referred to as embedded memory) or external memory 234. The internal memory 232 may include, for example, at least one of volatile memory (e.g., dynamic random access memory (DRAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM)) and nonvolatile memory (e.g., one-time programmable read-only memory (OTPROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), mask ROM, flash ROM, NAND flash, NOR flash, etc.).
According to embodiments of the present disclosure, the internal memory 232 may be a solid state drive (SSD). The external memory 234 may also include a flash drive (e.g., compact flash (CF), secure digital (SD), micro secure digital (Micro-SD), mini secure digital (Mini-SD), extreme digital (xD), or a memory stick). The external memory 234 may be functionally connected with the electronic device 201 via various interfaces. According to an embodiment of the present disclosure, the electronic device 201 may further include a storage device (or storage medium), such as a hard disk drive.
The sensor module 240 may measure a physical quantity or detect an operation state of the electronic device 201 and convert the measured or detected information into an electrical signal. The sensor module 240 may include, for example, at least one of the following: gesture sensor 240A, gyroscope sensor 240B, barometric pressure sensor 240C, magnetic sensor 240D, acceleration sensor 240E, grip sensor 240F, proximity sensor 240G, color sensor 240H (e.g., red Green Blue (RGB) sensor), biological sensor 240I, temperature/humidity sensor 240J, light sensor 240K, or Ultraviolet (UV) sensor 240M. Additionally or alternatively, the sensor module 240 may include, for example, an electronic nose sensor (not shown), an Electromyography (EMG) sensor (not shown), an electroencephalogram (EEG) sensor (not shown), an Electrocardiogram (ECG) sensor (not shown), an Infrared (IR) sensor (not shown), an iris sensor (not shown), or a fingerprint sensor (not shown), among others. The sensor module 240 may further include a control circuit for controlling at least one sensor belonging thereto.
The input device 250 may include a touch panel 252, (digital) pen sensor 254, keys 256, or an ultrasonic input device 258. Touch panel 252 may detect touch input using at least one of capacitive, resistive, infrared, or ultrasonic methods. In addition, the touch panel 252 may further include a control circuit. The capacitive touch panel may perform physical contact detection or proximity detection. The touch panel 252 may further include a tactile layer. In this case, the touch panel 252 may provide a tactile response to the user.
For example, the (digital) pen sensor 254 may be implemented using the same or similar method as that of receiving a user's touch input or using a panel for detection alone. Keys 256 may include, for example, physical buttons, optical keys, or a keypad. The ultrasonic input device 258 is a unit that recognizes data by detecting sound waves using a microphone (e.g., microphone 288) in the electronic device 201 via an input tool that generates ultrasonic signals, and is capable of wireless detection. According to embodiments of the present disclosure, the electronic device 201 may receive user input using the communication module 220 from an external device (e.g., a computer or server) connected to the communication module 220.
Display 260 (e.g., display 150) may include a panel 262, a hologram device 264, or a projector 266. The panel 262 may be, for example, a Liquid Crystal Display (LCD) or an active matrix organic light emitting diode (AM-OLED), or the like. For example, the panel 262 may be implemented as flexible, transparent, or wearable. The panel 262 together with the touch panel 252 may be configured as one module. Hologram device 264 can use the interference of light to display a three-dimensional image in air. The projector 266 may project light onto a screen to display an image. For example, the screen may be located inside or outside the electronic device 201. According to embodiments of the present disclosure, the display 260 may further include control circuitry for controlling the panel 262, the hologram device 264, or the projector 266.
The interface 270 may include, for example, an HDMI 272, a USB 274, an optical interface 276, or a D-subminiature (D-sub) connector 278. The interface 270 may be included in, for example, the communication interface 160 shown in fig. 1. Additionally or alternatively, the interface 270 may include a mobile high-definition link (MHL) interface, an SD card/multimedia card (MMC) interface, or an infrared data association (IrDA) standard interface.
The audio module 280 may bi-directionally convert sound and electrical signals. At least a portion of audio module 280 may be included in I/O interface 140, for example, shown in fig. 1. The audio module 280 may process sound information input or output via, for example, a speaker 282, a receiver 284, headphones 286, or a microphone 288.
The camera module 290 and the camera module 291 are devices that can take still images and moving pictures, and can be manufactured as one module, which can be the camera 1 (170) and the camera 2 (171) in fig. 1, respectively. According to embodiments disclosed herein, the camera module 290 and the camera module 291 may include one or more image sensors (e.g., front sensor or back sensor), lenses (not shown), an Image Signal Processor (ISP) (not shown), a DSP (not shown), or a flash (e.g., LED or xenon lamp). The ISP or DSP may be separate from the elements of AP 210, but AP 210 may be implemented to include at least one of the ISP or DSP.
The power management module 295 may manage power of the electronic device 201. Although not shown, the power management module 295 may include, for example, a Power Management Integrated Circuit (PMIC), a charger Integrated Circuit (IC), or a battery gauge or fuel gauge.
For example, the PMIC may be mounted inside an integrated circuit or SoC semiconductor. The charging method may be classified into a wired charging method and a wireless charging method. The charging IC may charge the battery and may prevent the introduction of an overvoltage or an overcurrent from the charger.
According to embodiments disclosed herein, the charging IC may include a charging IC for at least one of a wired charging method and a wireless charging method. The wireless charging method may be, for example, a magnetic resonance method, a magnetic induction method, or an electromagnetic wave method, etc., and may additionally include additional circuitry for wireless charging, for example, circuitry (such as a coil loop, a resonant circuit, or a rectifier, etc.).
The battery gauge may measure, for example, the remaining amount of the battery 296 as well as the voltage, current, or temperature at the time of charging. The battery 296 may store or generate electricity and use the stored or generated electricity to power the electronic device 201. The battery 296 may include, for example, a rechargeable battery or a solar cell.
The indicator 297 may display a particular state of the electronic device 201 or a portion thereof (e.g., AP 210), such as a startup state, a message state, or a charging state, etc. The motor 298 may convert the electrical signal into mechanical vibration. Although not shown, the electronic device 201 may include a processor (e.g., GPU) for supporting mobile TV. The processor for supporting the mobile TV may process media data corresponding to standards such as Digital Multimedia Broadcasting (DMB), digital Video Broadcasting (DVB), media streaming, etc., for example.
Each of the above-described elements of the electronic device may be configured using one or more components, and the names of the related elements may vary depending on the type of the electronic device. The electronic device may include at least one of the above-described elements, and some of the elements may be omitted, or additional other elements may be included. Furthermore, some of the elements of the electronic device may be combined to form one entity and likewise perform the functions of the related elements prior to the combination.
It should be noted that the above camera or camera module may also be referred to as a video camera, a lens module, or a lens, where the camera or camera module may further include at least one of a focusing motor or an anti-shake motor.
Referring to fig. 3, fig. 3 is a flowchart of a high-resolution high-frame-rate image capturing method according to an embodiment of the present application. The method may be performed by an image processing apparatus (e.g., an electronic device) in the context of the present application, where the image processing apparatus includes a camera. The photosensitive element in the camera can adopt a periodic exposure mode to convert the captured optical signals into corresponding digital signals, thereby obtaining corresponding photosensitive images. As shown in fig. 3, the method includes step S310 and step S320.
Step S310: acquiring N photosensitive images, wherein the N photosensitive images are respectively obtained by carrying out N consecutive exposures on the photosensitive element; the N photosensitive images comprise a first photosensitive image and a second photosensitive image, the first photosensitive image comprises P pieces of photosensitive data obtained by exposing P photosensitive points on the photosensitive element in one exposure process, the second photosensitive image comprises Q pieces of photosensitive data obtained by exposing Q photosensitive points on the photosensitive element in one exposure process, the positions of the P photosensitive points and the Q photosensitive points on the photosensitive element are different, P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2.
The photosensitive points on the photosensitive element and the pixel points on the photosensitive image correspond one-to-one, i.e., the number of photosensitive points on the photosensitive element is the same as the number of pixel points on the photosensitive image, and their positions correspond one-to-one. Specifically, in each exposure process, the photosensitive data corresponding to each photosensitive point (also referred to as the photosensitive data output by the exposure of that photosensitive point) is the photosensitive data contained in the corresponding pixel point of the obtained photosensitive image.
Here, each photosensitive image in the present application is the raw photosensitive data generated by one exposure of the photosensitive element, generally referred to as RAW data.
The first photosensitive image and the second photosensitive image are any two of the N photosensitive images. The first photosensitive image is obtained by exposing P photosensitive points on the photosensitive element in one exposure process. The P photosensitive points correspond one-to-one with P pixel points on the first photosensitive image, and each of the P pixel points contains the photosensitive data output by the exposure of its corresponding photosensitive point. Similarly, each of the Q pixel points on the second photosensitive image contains the photosensitive data output by exposing its corresponding photosensitive point.
In each of the N exposures, the exposed photosensitive points on the photosensitive element are different; that is, the photosensitive data contained in each of the N photosensitive images obtained by the N exposures are obtained by exposing photosensitive points at different positions.
The following describes the photosensitive mode/exposure mode of the terminal device in the time domain and the space domain in detail.
Specifically, the terminal device performs data acquisition through periodic exposure (i.e., the K exposure periods in this application); the photosensitive element is exposed the same number of times in each exposure period, and each exposure of the photosensitive element generates one photosensitive image. The photosensitive element is then exposed over consecutive exposure periods to obtain multiple photosensitive images.
Optionally, the N photosensitive images are N temporally consecutive images among K×E photosensitive images, the K×E photosensitive images are obtained by exposing sequentially over K consecutive exposure periods, in each of the K exposure periods the photosensitive element is exposed E consecutive times, and K is a positive integer greater than or equal to 1.
Specifically, the photosensitive element is exposed through K consecutive exposure periods, with E exposures in each period, to obtain K×E temporally consecutive photosensitive images. The N photosensitive images are N temporally consecutive images among these K×E photosensitive images.
The specific exposure pattern in each exposure period is as follows:
Optionally, the photosensitive element includes M photosensitive units, each of the M photosensitive units includes C photosensitive points, and in each exposure period all the photosensitive points in each photosensitive unit are sequentially exposed once, where C and M are positive integers greater than or equal to 2.
Specifically, the photosensitive element is divided into at least one photosensitive unit, and each photosensitive unit comprises at least one photosensitive point. For the E exposures in one exposure period, at least one photosensitive point in each photosensitive unit is exposed in each exposure process, and the photosensitive points exposed within the same photosensitive unit are different in any two exposure processes.
Further, after E exposures in one exposure period, all photosites in each photosite are exposed once. And in the E exposure process in one photosensitive period, the photosensitive points in each photosensitive unit are sequentially exposed according to a preset sequence.
For example, suppose each photosensitive unit contains 4 photosites: photosite 1, photosite 2, photosite 3, and photosite 4. If each photosensitive period includes 4 exposures and each exposure exposes one photosite per unit, the exposure order of the 4 photosites in each unit may be: the first exposure exposes photosite 3, the second photosite 2, the third photosite 4, and the fourth photosite 1; or the first exposure exposes photosite 4, the second photosite 1, the third photosite 2, and the fourth photosite 3.
For another example, suppose each photosensitive unit again contains 4 photosites: photosite 1, photosite 2, photosite 3, and photosite 4. If each photosensitive period includes 3 exposures and each exposure exposes one or two photosites per unit, the exposure order of the 4 photosites in each unit may be: the first exposure exposes photosite 3, the second photosites 2 and 4, and the third photosite 1; or the first exposure exposes photosites 2 and 4, the second photosite 1, and the third photosite 3. A minimal sketch of such schedules follows.
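To make the schedule concrete, the sketch below (illustrative only; the schedule values and function names are assumptions, not the patent's implementation) represents one photosensitive period as a list of exposure steps and checks that every photosite in a unit is exposed exactly once per period:

```python
# Each schedule covers one photosensitive period for a 4-photosite unit;
# every step lists the photosites of each unit exposed in that exposure.
SCHEDULE_4_EXPOSURES = [{3}, {2}, {4}, {1}]   # one photosite per exposure
SCHEDULE_3_EXPOSURES = [{3}, {2, 4}, {1}]     # one or two per exposure

def covers_all_once(schedule, photosites=frozenset({1, 2, 3, 4})):
    """True if every photosite is exposed exactly once in the period."""
    exposed = [p for step in schedule for p in step]
    return len(exposed) == len(set(exposed)) and set(exposed) == set(photosites)

assert covers_all_once(SCHEDULE_4_EXPOSURES)
assert covers_all_once(SCHEDULE_3_EXPOSURES)
```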
Referring to fig. 4, fig. 4 is a schematic diagram of a data acquisition manner provided in an embodiment of the present application. It shows 4 consecutive exposures of the photosensitive element within one photosensitive period along the time axis t; the four exposures respectively produce four photosensitive images, photosensitive images 1-4, which are 4 consecutive images in the low-resolution high-frame-rate photosensitive image sequence.
In fig. 4, the plane of the photosensitive element is represented by an X-0-Y coordinate system, and each photosite has a length of 1 in both the X and Y directions. The photosensitive element contains 16 photosites, divided into 4 photosensitive units: photosensitive units 1-4, each containing 4 photosites. The coordinates of the 4 photosites in photosensitive unit 1 are (1, 1), (2, 1), (1, 2), and (2, 2); those of photosensitive unit 2 are (3, 1), (3, 2), (4, 1), and (4, 2); those of photosensitive unit 3 are (1, 3), (2, 3), (1, 4), and (2, 4); and those of photosensitive unit 4 are (3, 3), (4, 3), (3, 4), and (4, 4).
As shown in fig. 4, the photosites in each photosensitive unit are numbered 1, 2, 3, and 4 from left to right and from top to bottom.
In each exposure within the photosensitive period shown in fig. 4, the shaded photosites are those not exposed, and the blank photosites are those exposed. As can be seen, the first exposure in fig. 4 exposes photosite 1 in each photosensitive unit, the second exposes photosite 2, the third exposes photosite 4, and the fourth exposes photosite 3.
It can be seen that, with this exposure pattern, each photosite on the photosensitive element is exposed once during a photosensitive period.
After the above 4 exposures, the 4 photosensitive images shown in fig. 4 are obtained: photosensitive images 1-4. Each cell on a photosensitive image is a pixel point; that is, the number of pixel points on the photosensitive image equals the number of photosites on the photosensitive element, with positions in one-to-one correspondence. For example, the photosensitive data produced by exposing the photosite at coordinates (1, 1) on the photosensitive element is the photosensitive data contained in the pixel point at coordinates (1, 1) on the photosensitive image.
In each photosensitive image, the shaded pixel points contain photosensitive data and the blank pixel points do not. In photosensitive image 1, the pixel points at coordinates (1, 1), (3, 1), (1, 3), and (3, 3) contain photosensitive data, obtained by exposing the photosites at the corresponding coordinates on the photosensitive element during the first exposure; the remaining pixel points of photosensitive image 1 contain no photosensitive data.
Similarly, photosensitive image 2 obtained by the second exposure, photosensitive image 3 obtained by the third exposure, and photosensitive image 4 obtained by the fourth exposure are formed in the same way as photosensitive image 1 obtained by the first exposure, and are not described again.
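The capture of fig. 4 can be simulated with the toy sketch below (the 4×4 layout, offsets, and names are illustrative assumptions; the patent does not prescribe an implementation). Each exposure fills one photosite per 2×2 unit and leaves the remaining pixels empty (NaN):

```python
import numpy as np

H, W = 4, 4
# (row, col) offset within each 2x2 unit, in the order exposed in fig. 4:
# photosite 1, then 2, then 4, then 3
OFFSETS = [(0, 0), (0, 1), (1, 1), (1, 0)]

def capture(scene, exposure_idx):
    """Return one sparse photosensitive image for the given exposure."""
    img = np.full((H, W), np.nan)            # blank pixels hold no data
    dy, dx = OFFSETS[exposure_idx]
    img[dy::2, dx::2] = scene[dy::2, dx::2]  # expose one photosite per unit
    return img

scene = np.arange(16, dtype=float).reshape(H, W)
sparse_images = [capture(scene, k) for k in range(4)]  # images 1-4
```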
Step S320: performing fusion processing on the N Zhang Ganguang image to obtain a third photosensitive image; the third photosensitive image comprises the P photosensitive data respectively corresponding to the P photosensitive points and the Q photosensitive data respectively corresponding to the Q photosensitive points.
The N photosensitive images and the third photosensitive image have the same size, and the number of the contained pixels is equal. The pixel points contained in each photosensitive image in the N Zhang Ganguang image are in one-to-one correspondence with the photosensitive points contained in the photosensitive elements, and similarly, the pixel points in the third photosensitive image are in one-to-one correspondence with the photosensitive points in the photosensitive elements.
Specifically, the one-to-one correspondence process described above is described below by taking the second photosites on the photosites as an example: the positions of the second photosites in the first photosites corresponding to the pixels in the second photosites and the positions of the second photosites in the third photosites corresponding to the pixels in the first photosites are the same.
Specifically, the process of obtaining the third photosensitive image through the fusion process specifically includes: taking the first photosensitive image as an example, the P photosensitive points correspond to P pixel points on the first photosensitive image, the P pixel points on the first photosensitive image correspond to P photosensitive data output by exposing the P photosensitive points, and each photosensitive data in the P photosensitive data is used for describing pixel information of the corresponding pixel point. Similarly, it can be known that the P photosites also correspond to the P pixels on the third photosite, and the positions of the P pixels on the third photosite and the positions of the P pixels on the first photosite are the same. The process of fusing the first photosensitive image specifically includes: taking a first pixel point of the P pixel points on the first photosensitive image as an example, taking photosensitive data contained in the first pixel point on the first photosensitive image as photosensitive data contained in the first pixel point in the third photosensitive image. The first pixel point is any one of P pixel points. By fusing the N Zhang Ganguang images in the fusing manner, each pixel point on the third photosensitive image can contain photosensitive data which is output by exposure of the corresponding photosensitive point on the photosensitive element.
The above process is a process of simply fusing the acquired N photosensitive images.
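A minimal sketch of this simple fusion (function and variable names are assumptions): each pixel of the fused image takes its datum from whichever sparse image contains data at that position.

```python
import numpy as np

def simple_fusion(images):
    """Merge N sparse photosensitive images into one full image."""
    fused = np.full_like(images[0], np.nan)
    for img in images:
        mask = ~np.isnan(img)       # pixel points that contain data
        fused[mask] = img[mask]
    return fused

# Usage on four 2x2 sparse images, each holding one datum:
a = np.array([[1.0, np.nan], [np.nan, np.nan]])
b = np.array([[np.nan, 2.0], [np.nan, np.nan]])
c = np.array([[np.nan, np.nan], [3.0, np.nan]])
d = np.array([[np.nan, np.nan], [np.nan, 4.0]])
print(simple_fusion([a, b, c, d]))  # -> [[1. 2.] [3. 4.]]
```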
Further, depending on the relationship between the number of selected photosensitive images (i.e., N) and the number of exposures in each photosensitive period, different fusion methods may be adopted to obtain the corresponding third photosensitive image. Based on this relationship, the fusion process in the present application corresponds to two scenarios, each described in detail below.
Scenario one: N equals the number of exposures in each photosensitive period
Since the photosites in each photosensitive unit are exposed sequentially within each photosensitive period, when N temporally consecutive photosensitive images are selected and N equals the number of exposures in one photosensitive period, the total number of photosensitive data contained in the N photosensitive images exactly equals the number of photosites on the photosensitive element, each datum being obtained by exposing one photosite on the photosensitive element exactly once. In this scenario, the N photosensitive images can be fused directly by the simple fusion method of the foregoing embodiment, and each pixel point of the resulting third photosensitive image contains one photosensitive datum output by exposing the corresponding photosite.
Within one photosensitive period in this scenario, different exposure modes may be adopted:
(1) Mode one: in each exposure within one photosensitive period, one photosite in each photosensitive unit is exposed, and N equals C. With N equal to C under this exposure mode, the photosensitive data jointly contained in the N photosensitive images is obtained by exposing each photosite on the photosensitive element exactly once.
(2) Mode two: in each exposure within one photosensitive period, one or more photosites in each photosensitive unit are exposed, and N equals the number of exposures in one photosensitive period. With N equal to the number of exposures under this exposure mode, the photosensitive data jointly contained in the N photosensitive images is likewise obtained by exposing each photosite on the photosensitive element exactly once.
Scenario two: N is smaller than the number of exposures in each photosensitive period
When N is smaller than the number of exposures in each photosensitive period, some photosites on the photosensitive element are not exposed during the continuous time span corresponding to the N photosensitive images. That is, the photosensitive data contained in the N photosensitive images does not include data for these photosites, so after the N photosensitive images are fused by the simple fusion method, their data is missing from the third photosensitive image. In this case, the photosensitive data of the photosites not exposed during this time span may be computed by spatial interpolation or by motion-vector calculation; both methods are discussed in detail below.
(1) Spatial interpolation, described below taking the first photosensitive unit among the M photosensitive units as an example:
when a first photosite in the first photosensitive unit is not exposed during the N exposures, interpolation is performed based on the photosensitive data of the photosites of the first photosensitive unit that were exposed during the N exposures, to calculate the photosensitive data corresponding to the first photosite; the first photosite is any unexposed photosite in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; the third photosensitive image further includes the photosensitive data corresponding to the first photosite.
Specifically, the interpolation may be nearest-neighbor interpolation, bilinear interpolation, higher-order interpolation, or another feasible method, which is not limited in this application.
After interpolating the photosensitive data within each photosensitive unit, the photosensitive data of the photosites not exposed during the continuous time span (i.e., the N exposures) is obtained for each unit and taken as the photosensitive data of the corresponding pixel points in the third photosensitive image, so that every pixel point of the third photosensitive image contains one photosensitive datum. A sketch follows.
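The sketch below (an assumption: plain neighbor averaging, one of the feasible interpolation choices named above) fills the pixels still empty after simple fusion from their exposed horizontal and vertical neighbors:

```python
import numpy as np

def fill_by_interpolation(fused):
    """Fill empty (NaN) pixels with the mean of their exposed 4-neighbors."""
    out = fused.copy()
    h, w = fused.shape
    for y, x in zip(*np.where(np.isnan(fused))):
        vals = [fused[j, i]
                for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= j < h and 0 <= i < w and not np.isnan(fused[j, i])]
        if vals:                        # average the exposed neighbors
            out[y, x] = sum(vals) / len(vals)
    return out
```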
(2) Motion-vector calculation, described below taking the first photosensitive unit among the M photosensitive units as an example:
when a first photosite in the first photosensitive unit is not exposed during the N exposures corresponding to the N photosensitive images, a motion vector is calculated based on the photosensitive data contained in the N photosensitive images, and the photosensitive data corresponding to the first photosite is calculated based on the motion vector; the first photosite is any unexposed photosite in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; the third photosensitive image further includes the photosensitive data corresponding to the first photosite.
Specifically, the H photosensitive images immediately preceding the N photosensitive images in the time domain are acquired, and the H photosensitive images are fused by the simple fusion method to obtain a sixth photosensitive image. Here H equals the number of exposures in each photosensitive period, so every pixel point of the sixth photosensitive image contains photosensitive data; in this case, H is greater than N. Meanwhile, the N photosensitive images are fused by the simple fusion method to obtain the third photosensitive image.
Further, optionally, feature-point matching is performed between the third photosensitive image and the sixth photosensitive image (for example, by matching feature points on object edges). Specifically, the positions on the third and sixth photosensitive images of the pixel points representing the same part of an object are determined, along with the corresponding displacement; the motion direction and speed of the pixel point, that is, of the object in the photosensitive image (also called the motion vector), are then calculated from this displacement and the time difference between the third and sixth photosensitive images.
Finally, the photosensitive data of the photosites of the first photosensitive unit not exposed during the N exposures is determined from the calculated motion vector. Specifically, taking a first photosite in the first photosensitive unit as an example: the first photosite corresponds to a first pixel point on the third photosensitive image; through the calculated motion vector, the first pixel point is mapped to a second pixel point on the sixth photosensitive image, the second pixel point representing the same object part as the first pixel point. The photosensitive data contained in the second pixel point is then taken as the photosensitive data of the first pixel point, that is, the photosensitive data corresponding to the first photosite. Following these steps, the photosensitive data of every unexposed photosite in each of the M photosensitive units is calculated and taken as the photosensitive data of the corresponding pixel point on the third photosensitive image, so that every pixel point of the third photosensitive image contains one photosensitive datum.
The time difference between the third and sixth photosensitive images is the difference between their corresponding times. The time corresponding to the third photosensitive image may be the middle instant of its N exposures in the time domain, and the time corresponding to the sixth photosensitive image is computed in the same way.
A feature point may be the same size as a pixel point. A simplified sketch of this fill-in follows.
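The sketch below is highly simplified; its assumptions (mine, not the patent's) are that a single global motion vector has already been estimated from the feature-point matching above, and that it maps each empty pixel of the third image to the pixel of the fully populated sixth image showing the same object part.

```python
import numpy as np

def fill_by_motion(third, sixth, motion):
    """Fill empty pixels of `third` from the displaced position in `sixth`.

    motion: (vy, vx), the estimated displacement from sixth to third, in pixels.
    """
    vy, vx = motion
    out = third.copy()
    h, w = out.shape
    for y, x in zip(*np.where(np.isnan(third))):
        j, i = y - vy, x - vx           # same object part in the sixth image
        if 0 <= j < h and 0 <= i < w:
            out[y, x] = sixth[j, i]     # take its photosensitive data
    return out
```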
In summary, after fusion under either of the two scenarios above, every pixel point of the fused third photosensitive image contains one photosensitive datum, and a high-resolution photosensitive image is thereby obtained.
The above embodiment describes fusing N temporally consecutive photosensitive images among the K×E photosensitive images into one corresponding high-resolution photosensitive image. The following describes how to generate the corresponding high-resolution high-frame-rate photosensitive image sequence from the K×E photosensitive images (i.e., from the low-resolution high-frame-rate photosensitive image sequence).
Specifically, the K×E photosensitive images are acquired continuously over K photosensitive periods, and each time a group of temporally consecutive photosensitive images (i.e., N of them) is selected from the K×E photosensitive images for the fusion processing, one high-resolution photosensitive image is obtained. For two groups selected from the K×E photosensitive images, if the two groups share N-1 photosensitive images in common, the two photosensitive images obtained by fusing them respectively are adjacent in the high-resolution high-frame-rate photosensitive image sequence.
Optionally, a pipelined architecture may be adopted for the fusion to obtain the high-resolution high-frame-rate photosensitive image sequence; that is, groups of N temporally adjacent photosensitive images are selected sequentially for fusion in the acquisition order of the low-resolution high-frame-rate sequence, with a step of one photosensitive image.
For example, the 1st to Nth photosensitive images among the K×E photosensitive images are selected first and fused as described above to obtain one high-resolution photosensitive image; the 2nd to (N+1)th photosensitive images are selected next and fused to obtain another high-resolution photosensitive image; and so on.
The following describes how, after the third photosensitive image is obtained, the next photosensitive image adjacent to it in the high-resolution high-frame-rate sequence is obtained: fusion processing is performed on the last N-1 temporally consecutive photosensitive images among the N photosensitive images together with a fourth photosensitive image, to obtain a fifth photosensitive image adjacent to the third photosensitive image; the fourth photosensitive image is the next photosensitive image temporally adjacent to the N photosensitive images, and is temporally adjacent to the N-1 photosensitive images.
Referring to fig. 5, fig. 5 is a pipeline architecture for generating a high resolution high frame rate video sequence according to an embodiment of the present application.
As shown in fig. 5, the high-frame-rate low-resolution photosensitive image sequence collected by the photosensitive element includes photosensitive images 1-6, where photosensitive image 1 is the image obtained by the first exposure. The high-frame-rate low-resolution photosensitive image sequence is stored in a storage unit. The numbers on the data streams denote the transmission of the corresponding photosensitive images.
The pipelined architecture fuses images as soon as they are acquired. Assuming four temporally consecutive photosensitive images are selected for each fusion, the pipeline operates as follows: in the first fusion, the fusion unit fetches photosensitive images 1-4 from the storage unit and fuses them into photosensitive image 7; in the second fusion, it fetches photosensitive images 2-5 and fuses them into photosensitive image 8; in the third fusion, it fetches photosensitive images 3-6 and fuses them into photosensitive image 9. Fusion then continues in this order, finally yielding the high-frame-rate high-resolution photosensitive image sequence.
It can be seen that under this pipelined architecture, at least 4 acquired photosensitive images must be held in the storage unit: photosensitive images 1-4 are stored before the first fusion, and after the first fusion completes, the stored images are updated to photosensitive images 2-5, and so on. The pipelined architecture effectively reduces the number of low-resolution photosensitive images that must be buffered, lowering storage overhead; fusing images as they arrive also effectively reduces preview latency. A sketch of this sliding-window pipeline follows.
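The sketch below (names and structure are illustrative assumptions) implements the sliding window of the pipeline: it buffers at most N frames and emits one fused high-resolution frame per incoming low-resolution frame once the window is full.

```python
from collections import deque

def pipeline(frames, n, fuse):
    """Fuse every window of n consecutive frames, stepping by one frame.

    frames: iterable of low-resolution photosensitive images, in capture order
    fuse:   fusion function, e.g. the simple_fusion sketched earlier
    """
    window = deque(maxlen=n)            # storage unit holds at most n frames
    for frame in frames:
        window.append(frame)            # oldest frame is evicted automatically
        if len(window) == n:
            yield fuse(list(window))    # one high-resolution frame per step
```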
Optionally, the method further comprises: processing each piece of photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each piece of photosensitive data on the third photosensitive image.
The above processing of photosensitive data into pixel values may be implemented by a feasible module such as an image signal processing (ISP) module on the terminal device.
The pixel value corresponding to each photosensitive datum is the pixel value of the pixel point where the datum is located; the pixel value obtained by processing each datum may be in RGB format, YUV format, or another feasible format, which is not limited in this application.
After the above processing is performed on the high-resolution high-frame-rate photosensitive image sequence, a high-resolution high-frame-rate video or image sequence which can be directly displayed to a user can be obtained.
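As a toy illustration of this final step (heavily simplified; a real ISP performs demosaicing, white balance, gamma correction, and more, and the 10-bit raw depth here is my assumption), the sketch below merely normalizes raw data to 8-bit display values:

```python
import numpy as np

def raw_to_pixel_values(fused, raw_max=1023):
    """Map raw photosensitive data (assumed 10-bit) to 8-bit pixel values."""
    return np.clip(fused / raw_max * 255.0, 0, 255).astype(np.uint8)
```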
Referring to fig. 6, fig. 6 is a schematic diagram illustrating an implementation process of a high-resolution high-frame-rate image capturing method according to an embodiment of the present application. The method may be performed by an imaging module as shown in fig. 6, which may be the camera of the embodiment of fig. 1.
The method comprises the following steps: the high-frame-rate low-resolution photosensitive image sequence is obtained by periodically exposing the photosensitive element (i.e., over a plurality of consecutive photosensitive periods) and is buffered in the storage unit; the fusion unit then fuses the high-frame-rate low-resolution sequence in time-domain order using the pipelined architecture to obtain the high-resolution high-frame-rate photosensitive image sequence. Finally, an image signal processing unit (e.g., an ISP) processes the photosensitive data contained on each photosensitive image of the high-resolution high-frame-rate sequence to obtain the corresponding pixel values, that is, the high-resolution high-frame-rate image sequence/video for display to the user.
The details of this execution process are described in the foregoing embodiments and are not repeated here.
It should be understood that the modules included in the camera module of fig. 6 are only one example and are not intended to limit how the modules are integrated. That is, the storage unit, the fusion unit, and the image signal processing unit may be separate modules or may be integrated into one or more modules.
Referring to fig. 7, fig. 7 is an image processing apparatus according to an embodiment of the present application. As shown in fig. 7, the apparatus includes an acquisition unit 701 and a fusion unit 702; wherein,
an acquiring unit 701, configured to acquire N photosensitive images, the N photosensitive images being obtained by exposing the photosensitive element N consecutive times; the N photosensitive images include a first photosensitive image and a second photosensitive image, the first photosensitive image contains P photosensitive data obtained by exposing P photosites on the photosensitive element in one exposure process, the second photosensitive image contains Q photosensitive data obtained by exposing Q photosites on the photosensitive element in one exposure process, the positions of the P photosites and the Q photosites on the photosensitive element are different, P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2; and a fusion unit 702, configured to perform fusion processing on the N photosensitive images to obtain a third photosensitive image, the third photosensitive image containing the P photosensitive data respectively corresponding to the P photosites and the Q photosensitive data respectively corresponding to the Q photosites.
The image processing apparatus further includes a storage unit (not shown in fig. 7), configured to store the N photosensitive images, or to store the N photosensitive images together with the H photosensitive images immediately preceding them in the time domain, H being equal to the number of exposures in one photosensitive period.
In a possible implementation, the N photosensitive images are N temporally consecutive images among K×E photosensitive images, the K×E photosensitive images being obtained by sequential exposure over K consecutive photosensitive periods, the photosensitive element being exposed E consecutive times in each of the K photosensitive periods, where K is a positive integer greater than or equal to 1; the photosensitive element includes M photosensitive units, each of the M photosensitive units includes C photosites, all the photosites in each photosensitive unit are sequentially exposed once in each photosensitive period, and C and M are positive integers greater than or equal to 2.
In a possible implementation, in each exposure within one photosensitive period, one photosite within each photosensitive unit is exposed, and N is equal to C.
In a possible implementation, the M photosensitive units include a first photosensitive unit, and the fusion unit 702 is further configured to: when a first photosite in the first photosensitive unit is not exposed during the N exposures, interpolate based on the photosensitive data of the photosites of the first photosensitive unit exposed during the N exposures, to calculate the photosensitive data corresponding to the first photosite; the first photosite is any unexposed photosite in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; the third photosensitive image further includes the photosensitive data corresponding to the first photosite.
In a possible implementation, the M photosensitive units include a first photosensitive unit, and the fusion unit 702 is further configured to: when a first photosite in the first photosensitive unit is not exposed during the N exposures corresponding to the N photosensitive images, calculate a motion vector based on the photosensitive data contained in the N photosensitive images, and calculate the photosensitive data corresponding to the first photosite based on the motion vector; the first photosite is any unexposed photosite in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; the third photosensitive image further includes the photosensitive data corresponding to the first photosite.
In a possible implementation, the fusion unit 702 is further configured to: perform fusion processing on N-1 temporally consecutive photosensitive images among the N photosensitive images together with a fourth photosensitive image, to obtain a fifth photosensitive image adjacent to the third photosensitive image; the fourth photosensitive image is the next photosensitive image temporally adjacent to the N photosensitive images, and is temporally adjacent to the N-1 photosensitive images.
In a possible implementation, the apparatus further comprises an image signal processing unit, configured to process each piece of photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each piece of photosensitive data on the third photosensitive image.
Specifically, the specific execution process of the image processing apparatus 700 may refer to the execution flow of the high resolution high frame rate image capturing method described in the embodiment of fig. 3 in the foregoing embodiment, which is not described herein.
Referring to fig. 8, fig. 8 is a schematic hardware structure of an image processing apparatus according to an embodiment of the present invention, where an image processing apparatus 800, as an electronic apparatus, may include all or part of elements or modules in the electronic apparatus 101 and the electronic apparatus 201. As shown in fig. 8, the image processing apparatus 800 may be an implementation of the image processing apparatus 700, and the image processing apparatus 800 includes a processor 802, a memory 804, an input/output interface 806, a communication interface 808, and a bus 810. Wherein the processor 802, the memory 804, the input/output interface 806, and the communication interface 808 are communicatively coupled to one another via a bus 810.
The processor 802 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for executing related programs, so as to implement the functions required of the units included in the image processing apparatus 800 provided in the embodiments of the present invention, or to perform the high-resolution high-frame-rate image capturing method provided in the method embodiments and the summary of the invention. The processor 802 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above methods may be completed by integrated logic circuits in hardware or by software instructions in the processor 802. The processor 802 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor or any conventional processor. The software modules may reside in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory 804; the processor 802 reads the information in the memory 804 and, in combination with its hardware, performs the steps of the above method embodiments.
The memory 804 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM), and may store an operating system and other application programs. When the functions of the units included in the image processing apparatus 700, or the high-resolution high-frame-rate image capturing method provided in the embodiments and the summary of the invention, are implemented by software or firmware, the program code implementing the technical solutions of the embodiments is stored in the memory 804, and the processor 802 executes the operations of the units of the image processing apparatus 700 or performs the high-resolution high-frame-rate image capturing method.
The input/output interface 806 is used to receive input data and information, and output data such as operation results.
The communication interface 808 enables communication between the image processing apparatus 800 and other devices or communication networks using a transceiver apparatus such as, but not limited to, a transceiver.
Bus 810 may include a path for transferring information between various components of image processing device 800, such as processor 802, memory 804, input/output interface 806, and communication interface 808.
It should be noted that although the image processing apparatus 800 shown in fig. 8 only shows the processor 802, the memory 804, the input/output interface 806, the communication interface 808, and the bus 810, those skilled in the art will appreciate that in a specific implementation, the image processing apparatus 800 also contains other devices necessary to achieve normal operation, such as a display, a camera. Also, it will be appreciated by those skilled in the art that the image processing apparatus 800 may also include hardware devices that perform other additional functions, as desired. Furthermore, it will be appreciated by those skilled in the art that the image processing apparatus 800 may also contain only the necessary components to implement the embodiments of the present invention, and not necessarily all of the components shown in fig. 8.
It is to be understood that the further execution operations of the image processing apparatus 800 of this embodiment may refer to the above embodiments and the related descriptions in the summary of the invention, and are not repeated here.
The embodiment of the application provides a chip system, which comprises at least one processor, a memory, and a communication interface, wherein the memory, the communication interface, and the at least one processor are interconnected by lines, and instructions are stored in the memory; when executed by the processor, the instructions implement some or all of the steps recited in any of the method embodiments described above.
The present application further provides a computer storage medium storing a computer program which, when executed, causes some or all of the steps of any one of the method embodiments described above to be implemented.
The present embodiments provide a computer program comprising instructions which, when executed by a processor, cause some or all of the steps of any one of the method embodiments described above to be implemented.
The descriptions of the foregoing embodiments each have their own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments. It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as series of action combinations, but those skilled in the art should understand that the present application is not limited by the order of actions described, as some steps may be performed in another order or simultaneously. Further, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into units is merely a division by logical function, and other divisions are possible in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are merely for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (18)

1. A high resolution high frame rate image capturing method, the method comprising:
acquiring N photosensitive images, wherein the N photosensitive images are obtained by exposing a photosensitive element N consecutive times; the N photosensitive images comprise a first photosensitive image and a second photosensitive image, the first photosensitive image comprises P photosensitive data obtained by exposing P photosites on the photosensitive element in one exposure process, the second photosensitive image comprises Q photosensitive data obtained by exposing Q photosites on the photosensitive element in one exposure process, positions of the P photosites and the Q photosites on the photosensitive element are different, P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2; and
performing fusion processing on the N photosensitive images to obtain a third photosensitive image, wherein the third photosensitive image comprises the P photosensitive data respectively corresponding to the P photosites and the Q photosensitive data respectively corresponding to the Q photosites.
2. The method of claim 1, wherein:
the N photosensitive images are N temporally consecutive images among K×E photosensitive images, the K×E photosensitive images are obtained by sequential exposure over K consecutive photosensitive periods, the photosensitive element is exposed E consecutive times in each of the K photosensitive periods, and K is a positive integer greater than or equal to 1; and
the photosensitive element comprises M photosensitive units, each of the M photosensitive units comprises C photosites, all the photosites in each photosensitive unit are sequentially exposed once in each photosensitive period, and C and M are positive integers greater than or equal to 2.
3. The method of claim 2, wherein:
in each exposure within one photosensitive period, one photosite within each photosensitive unit is exposed, and the N is equal to the C.
4. The method of claim 2, wherein a first photosensitive unit is included in the M photosensitive units, the method further comprising:
when a first photosite in the first photosensitive unit is not exposed during the N exposures, interpolating based on photosensitive data corresponding to the photosites of the first photosensitive unit exposed during the N exposures, to calculate photosensitive data corresponding to the first photosite; wherein the first photosite is any unexposed photosite in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; and
the third photosensitive image further comprises the photosensitive data corresponding to the first photosite.
5. The method of claim 2, wherein a first photosensitive unit is included in the M photosensitive units, the method further comprising:
when a first photosite in the first photosensitive unit is not exposed during the N exposures corresponding to the N photosensitive images, calculating a motion vector based on photosensitive data contained in the N photosensitive images, and calculating photosensitive data corresponding to the first photosite based on the motion vector; wherein the first photosite is any unexposed photosite in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; and
the third photosensitive image further comprises the photosensitive data corresponding to the first photosite.
6. The method according to any one of claims 1-5, further comprising:
performing fusion processing on N-1 temporally consecutive photosensitive images among the N photosensitive images and a fourth photosensitive image, to obtain a fifth photosensitive image adjacent to the third photosensitive image;
wherein the fourth photosensitive image is the next photosensitive image temporally adjacent to the N photosensitive images, and the fourth photosensitive image is temporally adjacent to the N-1 photosensitive images.
7. The method according to any one of claims 3-6, further comprising:
and processing each piece of photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each piece of photosensitive data on the third photosensitive image.
8. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition unit, configured to acquire N photosensitive images, the N photosensitive images being obtained by exposing a photosensitive element N consecutive times; wherein the N photosensitive images comprise a first photosensitive image and a second photosensitive image, the first photosensitive image comprises P photosensitive data obtained by exposing P photosites on the photosensitive element in one exposure process, the second photosensitive image comprises Q photosensitive data obtained by exposing Q photosites on the photosensitive element in one exposure process, positions of the P photosites and the Q photosites on the photosensitive element are different, P and Q are integers greater than or equal to 1, and N is an integer greater than or equal to 2; and
a fusion unit, configured to perform fusion processing on the N photosensitive images to obtain a third photosensitive image, wherein the third photosensitive image comprises the P photosensitive data respectively corresponding to the P photosites and the Q photosensitive data respectively corresponding to the Q photosites.
9. The apparatus of claim 8, wherein:
the N photosensitive images are N temporally consecutive images among K×E photosensitive images, the K×E photosensitive images are obtained by sequential exposure over K consecutive photosensitive periods, the photosensitive element is exposed E consecutive times in each of the K photosensitive periods, and K is a positive integer greater than or equal to 1; and
the photosensitive element comprises M photosensitive units, each of the M photosensitive units comprises C photosites, all the photosites in each photosensitive unit are sequentially exposed once in each photosensitive period, and C and M are positive integers greater than or equal to 2.
10. The apparatus of claim 9, wherein:
in each exposure within one photosensitive period, one photosite within each photosensitive unit is exposed, and the N is equal to the C.
11. The apparatus of claim 9, wherein the M photosensitive units include a first photosensitive unit, and the fusion unit is further configured to:
when a first photosite in the first photosensitive unit is not exposed during the N exposures, interpolate based on photosensitive data corresponding to the photosites of the first photosensitive unit exposed during the N exposures, to calculate photosensitive data corresponding to the first photosite; wherein the first photosite is any unexposed photosite in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; and
the third photosensitive image further comprises the photosensitive data corresponding to the first photosite.
12. The apparatus of claim 9, wherein the M photosensitive units include a first photosensitive unit, and the fusion unit is further configured to:
when a first photosite in the first photosensitive unit is not exposed during the N exposures corresponding to the N photosensitive images, calculate a motion vector based on photosensitive data contained in the N photosensitive images, and calculate photosensitive data corresponding to the first photosite based on the motion vector; wherein the first photosite is any unexposed photosite in the first photosensitive unit, and the first photosensitive unit is any one of the M photosensitive units; and
the third photosensitive image further comprises the photosensitive data corresponding to the first photosite.
13. The apparatus according to any one of claims 8-12, wherein the fusion unit is further configured to:
perform fusion processing on N-1 temporally consecutive photosensitive images among the N photosensitive images and a fourth photosensitive image, to obtain a fifth photosensitive image adjacent to the third photosensitive image;
wherein the fourth photosensitive image is the next photosensitive image temporally adjacent to the N photosensitive images, and the fourth photosensitive image is temporally adjacent to the N-1 photosensitive images.
14. The apparatus according to any one of claims 10-13, wherein the apparatus further comprises:
an image signal processing unit, configured to process each piece of photosensitive data contained in the third photosensitive image to obtain a pixel value corresponding to each piece of photosensitive data on the third photosensitive image.
15. A chip system, comprising at least one processor, a memory, and a communication interface, wherein the memory, the communication interface, and the at least one processor are interconnected by lines, and instructions are stored in the memory; the method of any one of claims 1-7 is implemented when the instructions are executed by the processor.
16. An electronic device, comprising at least one processor, a memory, and a communication interface, wherein the memory, the communication interface, and the at least one processor are interconnected by lines, and instructions are stored in the memory; the method of any one of claims 1-7 is implemented when the instructions are executed by the processor.
17. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed, implements the method of any of claims 1-7.
18. A computer program, characterized in that the computer program comprises instructions which, when the computer program is executed, implement the method of any one of claims 1-7.