CN113329146A - Pulse camera simulation method and device - Google Patents

Pulse camera simulation method and device

Info

Publication number
CN113329146A
CN113329146A (application CN202110447689.2A)
Authority
CN
China
Prior art keywords
pixel
pulse
sensor
electrons
sequence
Prior art date
Legal status
Granted
Application number
CN202110447689.2A
Other languages
Chinese (zh)
Other versions
CN113329146B (en)
Inventor
熊瑞勤 (Ruiqin Xiong)
赵菁 (Jing Zhao)
黄铁军 (Tiejun Huang)
Current Assignee
Peking University
Original Assignee
Peking University
Priority date
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN202110447689.2A priority Critical patent/CN113329146B/en
Publication of CN113329146A publication Critical patent/CN113329146A/en
Application granted granted Critical
Publication of CN113329146B publication Critical patent/CN113329146B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/41Extracting pixel data from a plurality of image sensors simultaneously picking up an image, e.g. for increasing the field of view by combining the outputs of a plurality of sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera

Abstract

The present application relates to the technical field of computational imaging, and in particular to a pulse camera simulation method and device. The method comprises the following steps: determining the relative motion velocity of each pixel point within the sampling period to be simulated, and initializing the sensor pixel size; calculating the photon accumulation of each pixel within the simulated sampling period and converting it into an electron accumulation; updating the total accumulated quantity of electrons on each sensor pixel according to the electron accumulation; comparing the total accumulated quantity of electrons with a preset threshold and, if it reaches or exceeds the threshold, performing pulse delivery and generating the corresponding pulse sequence; and acquiring a reference map sequence corresponding to the pulse sequence. The method and device can simultaneously generate a pulse sequence and the reference image sequence corresponding to it, thereby providing data support for pulse camera technology in algorithm prototyping, deep learning, basic algorithm testing and the like.

Description

Pulse camera simulation method and device
Technical Field
The present application relates to the field of computational imaging technologies, and more particularly, to a method and an apparatus for pulse camera simulation.
Background
Conventional digital cameras typically image at a fixed frame rate, with each frame generated as follows: within a certain exposure time window, each pixel of the image sensor performs photoelectric conversion and charge accumulation on the incident light, and the total illumination of the pixel is obtained by analog-to-digital conversion after the exposure ends. Because the information within the exposure window is superimposed during recording, this mode cannot image high-speed objects effectively and often blurs fast-moving objects.
In recent years, the neural connection structure of the biological retinal fovea and the integrate-and-fire model of ganglion cells have provided new ideas for visual sampling. Through simulation and abstraction of the retinal fovea, a pulse camera comprising photoreceptors, integrators and threshold comparators has been proposed. The pulse camera represents visual information as a pulse array and can continuously record changes in light intensity; having no concept of an exposure time window, it breaks through the limitations of conventional cameras and can capture and record high-speed motion. Pulse cameras have therefore attracted much attention, for example in pulse-camera-based high-speed scene reconstruction, high-speed moving object recognition and high-speed moving object tracking.
However, at the current stage pulse camera equipment is scarce and the acquisition of pulse data is time-consuming, so researchers cannot easily obtain the pulse data they need; on the other hand, the pulse data disclosed so far is relatively limited, covers only a small range of scenes, and provides no ground-truth (Ground Truth) images of the photographed scene.
The present application therefore proposes an improved method and apparatus to at least partially solve the above technical problem.
Disclosure of Invention
In order to achieve the above technical object, the present application provides a pulse camera simulation method, including the following steps:
determining the relative motion speed of each pixel point in a sampling time period to be simulated, and initializing the size of a sensor pixel;
calculating photon cumulant of each pixel in the simulated sampling time period, and converting the photon cumulant into electron cumulant;
updating the total accumulated quantity of electrons on each pixel of the sensor according to the accumulated quantity of electrons;
comparing the total accumulated quantity of electrons with a preset threshold value, and if the total accumulated quantity of electrons reaches or exceeds the threshold value, performing pulse delivery and generating a corresponding pulse sequence;
a sequence of reference maps corresponding to the pulse sequence is acquired.
Specifically, the determining a relative motion velocity of each pixel point in a sampling time period to be simulated and initializing a size of a sensor pixel includes:
determining the relative motion speed of each pixel point in a sampling time period to be simulated according to the adopted motion model and parameters;
the sensor pixel size, i.e. the ratio of the side length of a sensor pixel to that of a pixel in the image or video sequence representing the optical scene, is initialized according to the required level of detail of scene capture.
Preferably, the motion model is translation, rotation or scaling.
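As an illustration of this initialization, the following sketch builds the per-pixel velocity field (u, v) for the three motion models named above; the parameter names (`u`, `v`, `omega`, `rate`, `center`) and the exact parameterization are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def velocity_field(model, rows, cols, **params):
    """Per-pixel relative motion velocity (u, v) for one sampling period.

    model: 'translation' (params: u, v), 'rotation' (params: omega, center),
    or 'scaling' (params: rate, center).  Units: scene pixels per period.
    """
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    if model == "translation":
        u = np.full((rows, cols), float(params["u"]))
        v = np.full((rows, cols), float(params["v"]))
    elif model == "rotation":
        cy, cx = params["center"]
        omega = params["omega"]      # radians per sampling period
        u = -omega * (r - cy)        # horizontal component of omega x radius
        v = omega * (c - cx)         # vertical component
    elif model == "scaling":
        cy, cx = params["center"]
        rate = params["rate"]        # relative expansion per sampling period
        u = rate * (c - cx)
        v = rate * (r - cy)
    else:
        raise ValueError(f"unknown motion model: {model}")
    return u, v
```

For translation the field is constant; for rotation and scaling it grows with distance from the chosen center, which is why each pixel point needs its own velocity.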
Further, the calculating of the photon accumulation of each pixel within the simulated sampling period and converting it into an electron accumulation comprises:
calculating the action duration during which each point of the optical scene acts on each sensor pixel within the kth light accumulation period, i.e. the length of time t for which
r·s ≤ x + u·t ≤ (r+1)·s,
c·s ≤ y + v·t ≤ (c+1)·s,
t_{k-1} ≤ t ≤ t_k,
wherein (x, y) are the coordinates of a point in the optical scene, (r, c) are the coordinates of a sensor pixel, (u, v) is the relative motion velocity of the pixel point, s is the sensor pixel size, and t_k(x, y; r, c) denotes the action duration;
converting the photon accumulation into the electron accumulation
Δ(r, c) = α ∬ I_{x,y} · t_k(x, y; r, c) dx dy,
wherein α is the photoelectric conversion rate, I_{x,y} is the luminance value at point (x, y) of the optical scene, and the area of integration corresponds to the area of one pixel on the sensor.
Further, the updating of the total accumulated quantity of electrons on each sensor pixel according to the electron accumulation comprises:
e_k(r, c) = e_{k-1}(r, c) + Δ(r, c).
Further, the comparing of the total accumulated quantity of electrons with a preset threshold value, and performing pulse delivery if the threshold is reached or exceeded, comprises:
if e_k(r, c) ≥ 1, the corresponding pixel of the pulse frame is set to 1 and the integrator is emptied;
if e_k(r, c) < 1, the corresponding pixel of the pulse frame is set to 0 and the integrator state remains unchanged.
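The threshold comparison above (with the threshold normalized to 1) is a single integrate-and-fire step; a minimal sketch, assuming the integrator state is held in a NumPy array:

```python
import numpy as np

def fire_and_reset(e, threshold=1.0):
    """One threshold-comparison step of the simulated sensor: pixels whose
    accumulated electron quantity reaches the threshold output a pulse (1)
    and have their integrator emptied; all others output 0 and keep their
    accumulated state.  `e` is modified in place."""
    pulse = (e >= threshold).astype(np.uint8)
    e[pulse == 1] = 0.0  # empty the integrator where a pulse fired
    return pulse
```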
Further, the acquiring of a reference map sequence corresponding to the pulse sequence includes: calculating the area of the input image covered by each sensor pixel, integrating the pixels of that area, and calculating the light-intensity true value corresponding to each pixel to form the reference image sequence.
In particular, the integration performs pixel-weighted summation according to the sensing area.
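As a sketch of this reference-map computation, the following assumes an integer sensor-to-scene pixel ratio s, for which the area-weighted integration reduces to a plain block mean (a fractional ratio would need true per-pixel area weights):

```python
import numpy as np

def reference_frame(scene, s):
    """Ground-truth light intensity per sensor pixel: the mean of the
    s-by-s block of scene pixels covered by each sensor pixel."""
    h, w = scene.shape
    rows, cols = h // s, w // s
    blocks = scene[:rows * s, :cols * s].reshape(rows, s, cols, s)
    return blocks.mean(axis=(1, 3))
```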
A second aspect of the present application provides a pulse camera simulation apparatus, which performs the steps of:
determining the relative motion speed of each pixel point in a sampling time period to be simulated, and initializing the size of a sensor pixel;
calculating photon cumulant of each pixel in the simulated sampling time period, and converting the photon cumulant into electron cumulant;
updating the total accumulated quantity of electrons on each pixel of the sensor according to the accumulated quantity of electrons;
comparing the total accumulated quantity of electrons with a preset threshold value, and if the total accumulated quantity of electrons reaches or exceeds the threshold value, performing pulse delivery and generating a corresponding pulse sequence;
a sequence of reference maps corresponding to the pulse sequence is acquired.
Further, the device simulates a pulse camera sensor, including a photoreceptor, an integrator, and a threshold comparator.
The beneficial effects of the present application are as follows: the pulse camera simulation method and device can simultaneously generate a pulse sequence and the reference image sequence corresponding to it, thereby providing data support for pulse camera technology in algorithm prototyping, deep learning, basic algorithm testing and the like.
Drawings
FIG. 1 shows a schematic flow chart of the method of embodiment 1 of the present application;
FIG. 2 is a diagram showing an effect of the method of embodiment 2 of the present application;
FIG. 3 is a diagram showing another effect of the method of embodiment 2 of the present application;
FIG. 4 shows a schematic view of the apparatus of example 3 of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 6 shows a schematic diagram of a storage medium provided in an embodiment of the present application.
Detailed Description
Hereinafter, embodiments of the present application will be described with reference to the accompanying drawings. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present application. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present application. It will be apparent to one skilled in the art that the present application may be practiced without one or more of these details. In other instances, well-known features of the art have not been described in order to avoid obscuring the present application.
It should be noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments in accordance with the application. As used herein, the singular is intended to include the plural unless the context clearly dictates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Exemplary embodiments according to the present application will now be described in more detail with reference to the accompanying drawings. These exemplary embodiments may, however, be embodied in many different forms and should not be construed as limited to only the embodiments set forth herein. The figures are not drawn to scale, wherein certain details may be exaggerated and omitted for clarity. The shapes of various regions, layers, and relative sizes and positional relationships therebetween shown in the drawings are merely exemplary, and deviations may occur in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions, as actually required.
Example 1:
the embodiment implements a pulse camera simulation method, as shown in fig. 1, including the following steps:
S1, determining the relative motion velocity of each pixel point within the sampling period to be simulated, and initializing the sensor pixel size;
S2, calculating the photon accumulation of each pixel within the simulated sampling period, and converting it into an electron accumulation;
S3, updating the total accumulated quantity of electrons on each sensor pixel according to the electron accumulation;
S4, comparing the total accumulated quantity of electrons with a preset threshold and, if it reaches or exceeds the threshold, performing pulse delivery and generating the corresponding pulse sequence;
S5, acquiring a reference map sequence corresponding to the pulse sequence.
Determining the relative motion speed of each pixel point in a sampling time period to be simulated, and initializing the size of a sensor pixel, wherein the method comprises the following steps:
determining the relative motion speed of each pixel point in a sampling time period to be simulated according to the adopted motion model and parameters;
the sensor pixel size, i.e. the ratio of the side length of a sensor pixel to that of a pixel in the image or video sequence representing the optical scene, is initialized according to the required level of detail of scene capture.
Preferably, the motion model is translation, rotation or scaling.
Further, calculating the photon accumulation of each pixel within the simulated sampling period and converting it into an electron accumulation comprises:
calculating the action duration during which each point of the optical scene acts on each sensor pixel within the kth light accumulation period, i.e. the length of time t for which
r·s ≤ x + u·t ≤ (r+1)·s,
c·s ≤ y + v·t ≤ (c+1)·s,
t_{k-1} ≤ t ≤ t_k,
wherein (x, y) are the coordinates of a point in the optical scene, (r, c) are the coordinates of a sensor pixel, (u, v) is the relative motion velocity of the pixel point, s is the sensor pixel size, and t_k(x, y; r, c) denotes the action duration;
converting the photon accumulation into the electron accumulation
Δ(r, c) = α ∬ I_{x,y} · t_k(x, y; r, c) dx dy,
wherein α is the photoelectric conversion rate, I_{x,y} is the luminance value at point (x, y) of the optical scene, and the area of integration corresponds to the area of one pixel on the sensor.
Updating the total accumulated quantity of electrons on each sensor pixel according to the electron accumulation comprises:
e_k(r, c) = e_{k-1}(r, c) + Δ(r, c).
Further, comparing the total accumulated quantity of electrons with a preset threshold, and performing pulse delivery if the threshold is reached or exceeded, comprises:
if e_k(r, c) ≥ 1, the corresponding pixel of the pulse frame is set to 1 and the integrator is emptied;
if e_k(r, c) < 1, the corresponding pixel of the pulse frame is set to 0 and the integrator state remains unchanged.
Further, acquiring the reference map sequence corresponding to the pulse sequence includes: calculating the area of the input image covered by each sensor pixel, integrating the pixels of that area (performing pixel-weighted summation according to the sensing area), and calculating the light-intensity true value corresponding to each pixel to form the reference map sequence.
Example 2:
the embodiment implements a pulse camera simulation method, which specifically executes the following steps:
Step one: initialize parameters. The relative motion velocity (u, v) of each pixel point within a (very short) sampling period is determined based on the motion model used (translation, rotation, scaling, etc.) and its parameters. The sensor pixel size s, i.e. the ratio of the side length of a sensor pixel to that of a pixel in the image or video sequence representing the optical scene, is initialized according to the required level of fineness of scene capture.
Step two: calculate the photon accumulation of each pixel within the kth sampling period of the current simulation. First, calculate the action duration during which each point of the optical scene acts on each sensor pixel within the kth light accumulation period, i.e. the length of time t satisfying the inequalities
r·s ≤ x + u·t ≤ (r+1)·s,
c·s ≤ y + v·t ≤ (c+1)·s,
t_{k-1} ≤ t ≤ t_k,
wherein (x, y) are the coordinates of a point in the optical scene, (r, c) are the coordinates of a sensor pixel, (u, v) is the relative motion velocity of the pixel point, s is the sensor pixel size, and t_k(x, y; r, c) denotes the action duration;
converting the photon accumulation into the electron accumulation
Δ(r, c) = α ∬ I_{x,y} · t_k(x, y; r, c) dx dy,
wherein α is the photoelectric conversion rate, I_{x,y} is the luminance value at point (x, y) of the optical scene, and the area of integration corresponds to the area of one pixel on the sensor.
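The action-duration integral of step two can be approximated numerically. The sketch below is an assumed discretization, not the patent's exact procedure: it sub-samples the accumulation period, shifts the scene by the per-period motion at each sub-instant, and applies the photoelectric conversion rate α, with the sensor pixel size taken equal to one scene pixel for simplicity.

```python
import numpy as np

def electron_increment(scene, u, v, alpha, dt, substeps=8):
    """Approximate electron accumulation per sensor pixel over one sampling
    period of length dt.  The scene is sampled at `substeps` sub-instants,
    shifted by the motion (u, v) accumulated up to that instant (nearest-
    neighbour), and the sampled luminance is converted to electrons."""
    acc = np.zeros_like(scene, dtype=float)
    for i in range(substeps):
        t = (i + 0.5) / substeps * dt
        dr, dc = int(round(v * t)), int(round(u * t))
        shifted = np.roll(np.roll(scene, dr, axis=0), dc, axis=1)
        acc += shifted * (dt / substeps)   # luminance x sub-interval length
    return alpha * acc
```

For a static scene (u = v = 0) this reduces to Δ = α·I·dt, matching the integral above with a full-period action duration.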
Step three: simulating electron accumulation. Updating the total accumulated quantity of electrons on each pixel of the sensor according to the accumulated quantity of electrons in the current sampling time period, wherein the method comprises the following steps:
e_k(r, c) = e_{k-1}(r, c) + Δ(r, c).
step four: threshold comparison and pulse delivery. The current electrical signal is compared with the preset threshold: if e_k(r, c) ≥ 1, the corresponding pixel of the pulse frame is set to 1, i.e. M_k(r, c) = 1, indicating that a pulse is generated, and the integrator is emptied, i.e. e_k(r, c) = 0; if e_k(r, c) < 1, the corresponding pixel of the pulse frame is set to 0 and the integrator state remains unchanged.
Step five: and extracting illumination information covered by the sensor. The area covered by each pixel of the current sensor on the optical scene image is determined, and the average brightness information of the image in the area is output (pixel weighting according to the area) to form a reference image (Ground Truth). Specifically, the position and sensing window of the sensor pixel corresponding to the input image are calculated, and the light intensity true value corresponding to each pixel can be calculated by integrating the pixels in the area (performing pixel weighted summation according to the sensing area), so as to form a true value reference image.
Images with different contents are input as external optical scenes to test the performance of the simulator, which generates the corresponding pulse image sequences. Fig. 2 shows the pulse sequence and reference image sequence generated by the present invention with a "Window" image as the optical scene. Fig. 3 shows the pulse sequence and reference image sequence generated by the present invention with a "Motorbike" image as the optical scene. The experimental results show that a relatively realistic pulse sequence, together with its corresponding reference image sequence, can be generated from the input image, thereby providing data support for pulse camera technology in algorithm prototyping, deep learning, basic algorithm testing and the like.
Example 3:
this embodiment implements a pulse camera simulation apparatus, as shown in fig. 4, which simulates a pulse camera sensor including a photoreceptor, an integrator, and a threshold comparator. As shown in fig. 4, the pulse camera simulation apparatus performs the following steps:
capturing an optical signal, calculating the contribution time of each point on a scene image to each pixel point on a sensor, and converting the optical signal into an electric signal according to the photoelectric conversion rate;
continuously accumulating the electrical signals;
and checking whether the accumulated electron quantity of each sensor pixel reaches the preset threshold; if so, recording 1 for that pixel in the pulse frame, emptying the integrator and emitting a pulse.
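Putting the photoreceptor, integrator and threshold comparator together, a minimal end-to-end sketch for a static scene (so each period's electron increment is simply α·I·Δt and the reference frame is the scene itself) might look like:

```python
import numpy as np

def simulate(scene, alpha, dt, n_periods, threshold=1.0):
    """Minimal sketch of the simulated sensor for a static scene:
    photoreceptor (luminance -> electrons), integrator (running sum) and
    threshold comparator (pulse + reset).  Returns the pulse frames and
    the matching ground-truth reference frames."""
    e = np.zeros_like(scene, dtype=float)      # integrator state
    pulses, refs = [], []
    for _ in range(n_periods):
        e += alpha * scene * dt                # photoelectric conversion + integration
        frame = (e >= threshold).astype(np.uint8)
        e[frame == 1] = 0.0                    # integrator emptied on firing
        pulses.append(frame)
        refs.append(scene.copy())              # static scene: reference = scene
    return pulses, refs
```

Brighter pixels accumulate electrons faster and therefore fire pulses at a higher rate, which is exactly the light-intensity encoding the pulse camera relies on.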
Please refer to fig. 5, which illustrates a schematic diagram of an electronic device according to some embodiments of the present application. As shown in fig. 5, the electronic device 2 includes: the system comprises a processor 200, a memory 201, a bus 202 and a communication interface 203, wherein the processor 200, the communication interface 203 and the memory 201 are connected through the bus 202; the memory 201 stores a computer program that can be executed on the processor 200, and the processor 200 executes the pulse camera simulation method provided by any one of the foregoing embodiments when executing the computer program.
The Memory 201 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 203 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, and the like can be used.
Bus 202 can be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 201 is used for storing a program, and the processor 200 executes the program after receiving an execution instruction, and the pulse camera simulation method disclosed in any of the foregoing embodiments of the present application may be applied to the processor 200, or implemented by the processor 200.
The processor 200 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 200. The Processor 200 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 201, and the processor 200 reads the information in the memory 201 and completes the steps of the method in combination with the hardware thereof.
The electronic device provided by the embodiment of the application and the pulse camera simulation method provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as the method adopted, operated or realized by the electronic device.
Referring to fig. 6, the computer readable storage medium is an optical disc 30, and a computer program (i.e., a program product) is stored thereon, and when being executed by a processor, the computer program executes the pulse camera simulation method according to any of the foregoing embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above embodiment of the present application shares the same inventive concept as the pulse camera simulation method provided by the embodiments of the present application, and has the same beneficial effects as the method adopted, run or implemented by the application program stored on it.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. In addition, this application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the present application.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in the creation apparatus of a virtual machine according to embodiments of the present application. The present application may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A pulse camera simulation method is characterized by comprising the following steps:
determining the relative motion speed of each pixel point in a sampling time period to be simulated, and initializing the size of a sensor pixel;
calculating photon cumulant of each pixel in the simulated sampling time period, and converting the photon cumulant into electron cumulant;
updating the total accumulated quantity of electrons on each pixel of the sensor according to the accumulated quantity of electrons;
comparing the total accumulated quantity of electrons with a preset threshold value, and if the total accumulated quantity of electrons reaches or exceeds the threshold value, performing pulse distribution and generating a corresponding pulse sequence;
acquiring a reference map sequence corresponding to the pulse sequence.
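The five steps of claim 1 describe an integrate-and-fire loop: accumulate electrons per pixel, compare against a threshold, fire a pulse and reset where the threshold is reached. A minimal Python sketch of that loop; the `frames`, `alpha`, and `dt` names, and the use of a per-period luminance array as a stand-in for the optical scene, are illustrative assumptions, not from the patent:

```python
import numpy as np

def simulate_pulse_camera(frames, threshold=1.0, alpha=0.5, dt=1.0):
    """Integrate-and-fire sketch of the steps of claim 1.

    frames    : (K, H, W) array of luminance values, one per light
                accumulation period (stand-in for the optical scene).
    threshold : preset pulse-delivery threshold.
    alpha     : assumed photoelectric conversion rate.
    dt        : duration of one light accumulation period.
    Returns a (K, H, W) binary pulse sequence.
    """
    K, H, W = frames.shape
    e = np.zeros((H, W))                 # integrator: total accumulated electrons
    pulses = np.zeros((K, H, W), dtype=np.uint8)
    for k in range(K):
        e += alpha * frames[k] * dt      # photon accumulation -> electron accumulation
        fired = e >= threshold           # compare with the preset threshold
        pulses[k][fired] = 1             # pulse delivery
        e[fired] = 0.0                   # empty the integrator where a pulse fired
    return pulses
```

With a constant input of 0.6 electrons per period and a threshold of 1, every second period fires, so the per-pixel pulse sequence alternates 0, 1, 0, 1, ...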
2. The method of claim 1, wherein the determining the relative motion velocity of each pixel within a sampling period to be simulated and initializing the size of the sensor pixel comprises:
determining the relative motion speed of each pixel point in a sampling time period to be simulated according to the adopted motion model and parameters;
initializing the size of the sensor pixel, defined as the ratio of the side length of a sensor pixel to the side length of a pixel in the image or video sequence representing the optical scene, according to the level of detail required for scene capture.
3. The pulse camera simulation method according to claim 2, wherein the motion model is translation, rotation or scaling.
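For the motion models of claims 2–3, the relative motion velocity (u, v) at a sensor pixel is uniform under translation but position-dependent under rotation and scaling. A hypothetical sketch; the parameter names and the (u, v) axis conventions (u along columns, v along rows) are illustrative assumptions, not taken from the patent:

```python
def pixel_velocity(r, c, model, params):
    """Relative motion velocity (u, v) at sensor pixel (r, c)
    for the motion models of claim 3 (assumed conventions)."""
    if model == "translation":            # uniform velocity everywhere
        return params["u"], params["v"]
    if model == "rotation":               # angular rate w about centre (cy, cx)
        w, cy, cx = params["w"], params["cy"], params["cx"]
        return -w * (r - cy), w * (c - cx)
    if model == "scaling":                # zoom rate s about centre (cy, cx)
        s, cy, cx = params["s"], params["cy"], params["cx"]
        return s * (c - cx), s * (r - cy)
    raise ValueError(f"unknown motion model: {model}")
```

At the centre of rotation or scaling the velocity is zero, which is why those models produce spatially varying pulse densities while translation does not.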
4. The pulse camera simulation method according to claim 1, wherein the calculating of the photon accumulation amount of each pixel in the simulated sampling period and the converting of the photon accumulation amount into an electron accumulation amount comprise:
calculating the action duration τ^k_{(x,y)→(r,c)} during which each point of the optical scene acts on each pixel point of the sensor in the k-th light accumulation period [the explicit formulas are rendered as equation images in the original and are not reproduced here], wherein (x, y) denotes the coordinates of a point in the optical scene, (r, c) denotes the coordinates of a sensor pixel, (u, v) is the relative motion velocity of each pixel point, and τ^k_{(x,y)→(r,c)} is the action duration;
converting the photon accumulation into the electron accumulation
Δ(r, c) = α ∬ I_{x,y} · τ^k_{(x,y)→(r,c)} dx dy,
wherein α is the photoelectric conversion rate, I_{x,y} is the luminance value at point (x, y) in the optical scene, and the area of integration corresponds to the area of one pixel on the sensor.
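For the translation model, the action duration of claim 4 can be computed by intersecting the time interval during which the moving scene point lies inside the sensor pixel with the k-th accumulation period. A sketch under assumed conventions: pixel (r, c) of size s is taken to cover the region [c·s, (c+1)·s) × [r·s, (r+1)·s), T is the period length, and the function name is illustrative:

```python
def action_duration(x, y, u, v, r, c, s, T, k):
    """Time within the k-th period [k*T, (k+1)*T) during which scene
    point (x, y), translating at (u, v), lies inside sensor pixel
    (r, c) of size s (assumed coverage convention)."""
    def inside_window(p0, vel, lo, hi):
        # time interval during which lo <= p0 + vel * t < hi
        if vel == 0:
            return (0.0, float("inf")) if lo <= p0 < hi else (0.0, 0.0)
        t1, t2 = (lo - p0) / vel, (hi - p0) / vel
        return (min(t1, t2), max(t1, t2))
    ax, bx = inside_window(x, u, c * s, (c + 1) * s)   # horizontal overlap
    ay, by = inside_window(y, v, r * s, (r + 1) * s)   # vertical overlap
    lo = max(ax, ay, k * T)
    hi = min(bx, by, (k + 1) * T)
    return max(0.0, hi - lo)
```

A static point inside the pixel contributes the full period; a point entering the pixel halfway through contributes half of it.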
5. The pulse camera simulation method according to claim 4, wherein the total accumulated amount of electrons at each pixel of the sensor is updated according to the accumulated amount of electrons by:
e_k(r, c) = e_{k-1}(r, c) + Δ(r, c).
6. The pulse camera simulation method according to claim 4, wherein the comparing of the total accumulated amount of electrons with a preset threshold value, and the performing of pulse delivery if the threshold value is reached or exceeded, comprise:
if e_k(r, c) ≥ 1, the corresponding pixel of the pulse frame is set to 1, and the integrator is emptied;
if e_k(r, c) < 1, the corresponding pixel of the pulse frame is set to 0, and the integrator state remains unchanged.
7. The pulse camera simulation method according to claim 1, wherein the acquiring of a reference map sequence corresponding to the pulse sequence comprises: calculating the area of the input image corresponding to each sensor pixel, integrating the pixels of that area, and calculating the true light intensity value corresponding to each pixel to form the reference map sequence.
8. The pulse camera simulation method of claim 7, wherein the integration is a pixel-weighted summation based on the sensing area.
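The area-weighted summation of claims 7–8 weights each input-image pixel by the fraction of its area covered by the sensor pixel. A sketch; the function name and the convention that sensor pixel (r, c) of size s covers the image region [r·s, (r+1)·s) × [c·s, (c+1)·s) are assumptions for illustration:

```python
import math

def reference_value(image, r, c, s):
    """True light intensity for sensor pixel (r, c): pixel-weighted
    summation of the input-image pixels it covers, weights being
    overlap areas.  s is the sensor pixel size of claim 2 (ratio of
    sensor-pixel to image-pixel side length); assumed conventions."""
    y0, y1 = r * s, (r + 1) * s
    x0, x1 = c * s, (c + 1) * s
    total = area = 0.0
    for i in range(math.floor(y0), math.ceil(y1)):
        for j in range(math.floor(x0), math.ceil(x1)):
            # overlap area of image pixel (i, j) with the sensor pixel
            w = max(0.0, min(y1, i + 1) - max(y0, i)) * \
                max(0.0, min(x1, j + 1) - max(x0, j))
            total += w * image[i][j]
            area += w
    return total / area
```

For an integer size s the weights are all 1 and the result reduces to the mean of the covered image pixels.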
9. A pulse camera simulation apparatus, characterized in that the apparatus performs the method of any of claims 1 to 8.
10. The pulse camera simulation device of claim 9, wherein the device simulates a pulse camera sensor comprising a photoreceptor, an integrator, and a threshold comparator.
CN202110447689.2A 2021-04-25 2021-04-25 Pulse camera simulation method and device Active CN113329146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110447689.2A CN113329146B (en) 2021-04-25 2021-04-25 Pulse camera simulation method and device

Publications (2)

Publication Number Publication Date
CN113329146A 2021-08-31
CN113329146B 2022-06-03

Family

ID=77413684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110447689.2A Active CN113329146B (en) 2021-04-25 2021-04-25 Pulse camera simulation method and device

Country Status (1)

Country Link
CN (1) CN113329146B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010084493A1 (en) * 2009-01-26 2010-07-29 Elbit Systems Ltd. Optical pixel and image sensor
US20150092098A1 (en) * 2013-09-27 2015-04-02 Canon Kabushiki Kaisha Image capturing apparatus and control method thereof
CN105093206A (en) * 2014-05-19 2015-11-25 洛克威尔自动控制技术股份有限公司 Waveform reconstruction in a time-of-flight sensor
CN205300683U (en) * 2015-12-24 2016-06-08 河南华润电力首阳山有限公司 Power generating equipment and cumulant measuring device thereof
CN109803096A (en) * 2019-01-11 2019-05-24 北京大学 A kind of display methods and system based on pulse signal
US20200099871A1 (en) * 2018-09-24 2020-03-26 Robert Bosch Gmbh Image sensor element for outputting an image signal, and method for manufacturing an image sensor element for outputting an image signal
US20200128245A1 (en) * 2016-01-22 2020-04-23 Peking University Imaging method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114584713A (en) * 2022-04-29 2022-06-03 北京大学 Pulse camera simulation method and device, control equipment and readable storage medium
CN114584713B (en) * 2022-04-29 2022-09-20 北京大学 Pulse camera simulation method and device, control equipment and readable storage medium
CN116389912A (en) * 2023-04-24 2023-07-04 北京大学 Method for reconstructing high-frame-rate high-dynamic-range video by fusing pulse camera with common camera
CN116389912B (en) * 2023-04-24 2023-10-10 北京大学 Method for reconstructing high-frame-rate high-dynamic-range video by fusing pulse camera with common camera
CN116482398A (en) * 2023-06-26 2023-07-25 北京大学 Method and system for determining moving speed of pulse imaging
CN116482398B (en) * 2023-06-26 2023-11-03 北京大学 Method and system for determining moving speed of pulse imaging

Also Published As

Publication number Publication date
CN113329146B (en) 2022-06-03

Similar Documents

Publication Publication Date Title
CN113329146B (en) Pulse camera simulation method and device
CN110910486B (en) Indoor scene illumination estimation model, method and device, storage medium and rendering method
CN110276767B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109815843B (en) Image processing method and related product
Tang et al. Joint implicit image function for guided depth super-resolution
CN111402130B (en) Data processing method and data processing device
CN113067979A (en) Imaging method, device, equipment and storage medium based on bionic pulse camera
US10839222B2 (en) Video data processing
Mei et al. Waymo open dataset: Panoramic video panoptic segmentation
Rodriguez-Vazquez et al. CMOS vision sensors: Embedding computer vision at imaging front-ends
CN111401215B (en) Multi-class target detection method and system
CN112927279A (en) Image depth information generation method, device and storage medium
CN115861380B (en) Method and device for tracking visual target of end-to-end unmanned aerial vehicle under foggy low-illumination scene
CN114584703A (en) Imaging method, device, equipment and storage medium of bionic pulse camera
CN108734712B (en) Background segmentation method and device and computer storage medium
CN113159229A (en) Image fusion method, electronic equipment and related product
CN116612103B (en) Intelligent detection method and system for building structure cracks based on machine vision
CN112132753A (en) Infrared image super-resolution method and system for multi-scale structure guide image
US20190251695A1 (en) Foreground and background detection method
CN115546681A (en) Asynchronous feature tracking method and system based on events and frames
CN115249269A (en) Object detection method, computer program product, storage medium, and electronic device
CN116091337A (en) Image enhancement method and device based on event signal nerve coding mode
CN114708143A (en) HDR image generation method, equipment, product and medium
Nguyen et al. Joint image deblurring and binarization for license plate images using deep generative adversarial networks
CN111160255B (en) Fishing behavior identification method and system based on three-dimensional convolution network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant