CN114584713B - Pulse camera simulation method and device, control equipment and readable storage medium - Google Patents
Pulse camera simulation method and device, control equipment and readable storage medium Download PDFInfo
- Publication number
- CN114584713B (application CN202210466831.2A)
- Authority
- CN
- China
- Prior art keywords
- pulse
- sequence
- simulation
- model
- intensity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N23/70 — Cameras or camera modules comprising electronic image sensors; control thereof: circuitry for compensating brightness variation in the scene
- H04N23/80 — Cameras or camera modules comprising electronic image sensors; control thereof: camera processing pipelines; components thereof
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
The application provides a pulse camera simulation method and device, control equipment, and a readable storage medium. The method comprises the following steps: extracting a key frame sequence of a video to be simulated, and converting the key frame sequence into an intensity map sequence; increasing the frame rate of the intensity map sequence according to the operating clock frequency of the pulse camera; acquiring an integrate-and-fire model, and simulating the intensity map sequence according to the integrate-and-fire model; and outputting the simulated pulse data. The imaging process of the pulse camera is thereby simulated, existing image and video data are converted into pulse data, and the severe shortage of real pulse data is alleviated.
Description
Technical Field
The application relates to the field of computer technology, and in particular to a pulse camera simulation method and device, control equipment, and a readable storage medium.
Background
The pulse camera adopts a sensing principle inspired by biological vision. With its ultra-high temporal resolution, it breaks through the fixed-exposure-time limit of conventional cameras and has a stronger capability for detecting high-speed moving targets. These features give the pulse camera great advantages in high-speed motion scenarios such as autonomous driving.
However, in practical applications the pulse camera does not directly output a final result as a standalone device; rather, as a provider of high-speed images, it supplies pulse data for other modules to consume. Therefore, to put a pulse camera into practical use, a large amount of real pulse data must be available. Real pulse data is currently in severe shortage, so developing a pulse camera simulation algorithm that converts existing image and video data into pulse data is an urgent problem.
Disclosure of Invention
The problem addressed by this application is how to convert existing image and video data into pulse data, thereby alleviating the shortage of real pulse data.
In order to solve the above problem, a first aspect of the present application provides a pulse camera simulation method, including:
extracting a key frame sequence of a video to be simulated, and converting the key frame sequence into an intensity map sequence;
raising the frame rate of the intensity map sequence according to the working clock frequency of the pulse camera;
acquiring an integrate-and-fire model, and simulating the intensity map sequence according to the integrate-and-fire model;
and outputting the simulated pulse data.
Preferably, after the increasing the frame rate of the intensity map sequence according to the operating clock frequency of the pulse camera, the method further comprises:
and multiplying the intensity map sequence after the frame rate is improved by a transformation coefficient to perform brightness adjustment.
Preferably, the value range of the transformation coefficient is (1–3)×10⁻⁴.
Preferably, in the simulation of the intensity map sequence according to the integrate-and-fire model, a simulation time step and an excitation threshold are determined; each simulation time step is integrated by the integrate-and-fire model, a pulse is fired once the integration result exceeds the excitation threshold, and the simulation is completed based on the fired pulses.
Preferably, in the step of converting the key frame sequence into the intensity map sequence, in the case that the key frame sequence is a color image sequence, the key frame sequence is converted into the intensity map sequences of an R channel, a G channel and a B channel; and converting the key frame sequence into a single-channel intensity map sequence under the condition that the key frame sequence is a gray image sequence.
Preferably, the integrate-and-fire model is:

$\Delta V(t) = \dfrac{\eta}{C_{pd}}\left(\dfrac{G(t)}{\lambda} + N_1(t)\right)\delta_t$

where $C_{pd}$ is the photosensitive-circuit capacitance, $\lambda$ is the transformation coefficient, $N_1(t)$ is the Poisson noise model, $G(t)$ is the brightness intensity of the intensity map sequence, $t$ is the time step, and $\eta$ is a linear coefficient.
Preferably, in the output of the simulated pulse data, the pulse data is encoded into binary pulse data and then output.
The present application provides in a second aspect a pulse camera simulation apparatus, comprising:
the frame extraction module is used for extracting a key frame sequence of a video to be simulated and converting the key frame sequence into an intensity map sequence;
the frame rate increasing module is used for increasing the frame rate of the intensity map sequence according to the working clock frequency of the pulse camera;
the simulation module is used for acquiring an integrate-and-fire model and simulating the intensity map sequence according to the integrate-and-fire model;
and the output module is used for outputting the simulated pulse data.
A third aspect of the present application provides an electronic device, comprising a computer-readable storage medium storing a computer program and a processor, wherein the computer program, when read and executed by the processor, implements the pulse camera simulation method as described above.
A fourth aspect of the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is read and executed by a processor, the computer program implements the pulse camera simulation method as described above.
A fifth aspect of the application provides a computer program product comprising a computer program, wherein the computer program is executed by a processor to implement the pulse camera simulation method as described above.
In the application, the imaging process of the pulse camera is simulated, the existing image and video data are converted into pulse data, and the problem that the real pulse data are seriously deficient is solved.
In the application, the brightness value of the intensity map sequence can be amplified by introducing the transformation coefficient, so that the problem of insufficient brightness of the video to be simulated is solved.
In the application, a Poisson model is introduced into the integrate-and-fire model to perform the noise-addition operation, so that the noise caused by scattered light is loaded and the simulation accuracy is improved.
Drawings
FIG. 1 is an exemplary diagram of a control group image and an experimental group image according to one embodiment of the present application;
FIG. 2 is an exemplary graph of a control group image and an experimental group image according to another embodiment of the present application;
FIG. 3 is a block diagram of a pulse camera simulation apparatus according to an embodiment of the present application;
FIG. 4 is a block diagram of a pulse camera simulation apparatus according to another embodiment of the present application;
fig. 5 is a block diagram of a control device according to an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanying the present application are described in detail below. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which this application belongs.
Although the field of machine vision has achieved great success, many challenges remain in high-speed scenarios. For example, during automatic driving, a conventional camera produces imaging blur and a data delay of several tens of milliseconds for a suddenly appearing moving object, because of its fixed-exposure-interval imaging mode. For vehicles traveling at high speed, this problem can have very dangerous consequences.
The pulse camera employs a sensing principle inspired by biological vision: each pixel senses the light intensity independently and converts it into accumulated charge. When the accumulated charge exceeds the trigger threshold, the pixel fires a pulse and immediately resets its voltage; the fired pulses are read out by a high-speed scan clock. With its ultra-high temporal resolution and biologically inspired sensing mechanism, the pulse camera breaks through the fixed-exposure-time limit of conventional cameras and therefore has a stronger capability for detecting high-speed moving targets. This imaging principle brings a great improvement in temporal resolution and dynamic range, solving many problems that conventional cameras face in high-speed motion scenes.
However, in a specific application, the pulse camera is not used as a separate device to directly output a final result, but is used as a provider of high-speed images to provide pulse data for other modules to call. For example, after pulse data is output, the control module or the calculation module analyzes the pulse data through a trained deep neural network model, and outputs corresponding features, so that automatic driving decision is made.
This way of use means that, to put the pulse camera into practical application, a large amount of real pulse data must be provided for training the deep neural network model. However, because pulse cameras are novel and costly to manufacture, it is currently difficult to deploy them at scale, which in turn results in a severe shortage of real pulse data. Moreover, existing large-scale image and video data are completely different in format from pulse data, which greatly limits research and applications based on pulse cameras.
Under these circumstances, developing a pulse camera simulation algorithm that converts existing image and video data into pulse data is an urgent problem.
Aiming at the problems, the application provides a new pulse camera simulation scheme, which can convert the existing image and video data into pulse data and solve the problem that the real pulse data is seriously deficient.
The embodiment of the application provides a pulse camera simulation method, which can be executed by a pulse camera simulation apparatus; the apparatus can be integrated in electronic equipment such as a tablet, a computer, a server cluster, or a data center. Fig. 1 is a flowchart of a pulse camera simulation method according to an embodiment of the present application. The pulse camera simulation method comprises the following steps:
s100, extracting a key frame sequence of a video to be simulated, and converting the key frame sequence into an intensity map sequence;
the video to be simulated may be a given video or a given image sequence.
In one embodiment, the video to be simulated is an image taken with an imaging device; or key frame images extracted from the video; or an image generated from synthesis software.
In one embodiment, in the case that the video to be simulated is in a video format, an image sequence is obtained by extracting key frames of the video to be simulated, and the image sequence is a sequence of key frames.
In one embodiment, the sequence of keyframes is a sequence of images of 60 frames per second.
In the case that the video to be simulated is in a video format, the video is typically a 60FPS video, and thus, all frames of the video are directly extracted as a sequence of key frames.
In one embodiment, in the case that the video to be simulated is a given image sequence, the image sequence is directly used as the key frame sequence; in case the image sequence fails to reach 60 frames per second, a sequence of key frames is obtained by copying the image sequence.
For example, if the image sequence is a single image, a key frame sequence of 60 frames per second is generated by duplication.
When the video to be simulated is in a video format, if the video is 120FPS video, the key frame may be extracted in an interval extraction manner. In the present application, the setting or extraction method of the key frame is not limited.
In one embodiment, in the case where the sequence of key frames is a sequence of color images, the sequence of key frames is converted into a sequence of intensity maps for the R, G, and B channels; and converting the key frame sequence into a single-channel intensity map sequence under the condition that the key frame sequence is a gray image sequence.
When the key frame sequence is a color image sequence, the key frame sequence is converted into three intensity map sequences which are respectively intensity map sequences of an R channel, a G channel and a B channel. In the case where the sequence of keyframes is a sequence of grayscale images, the sequence of keyframes is converted into a sequence of intensity maps of a single channel.
For the color image, each pixel value has RGB values of three color channels, and values of an R channel, a G channel and a B channel are respectively reserved to obtain intensity map sequences of the R channel, the G channel and the B channel.
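As an illustration (not from the patent), the channel split described above can be sketched in Python, representing a frame as nested lists of pixels; the function name and data layout are assumptions:

```python
def to_intensity_maps(frame):
    """Split a color frame into R/G/B intensity maps, or pass a
    grayscale frame through as a single-channel map."""
    if isinstance(frame[0][0], (list, tuple)):   # color: each pixel is (R, G, B)
        return {ch: [[px[i] for px in row] for row in frame]
                for i, ch in enumerate("RGB")}
    return {"I": [row[:] for row in frame]}      # grayscale: one channel
```

In practice the frames would be arrays from an image library, but the per-channel separation is the same idea.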
S200, improving the frame rate of the intensity chart sequence according to the working clock frequency of the pulse camera;
the converted intensity map sequence needs to increase the frame rate according to the working clock frequency of the pulse camera. If the working clock of the pulse camera is 1MHz, the intensity map sequence is interpolated to 1000000 FPS. The gray value of each frame intensity image is used as the equivalent light brightness of the pulse simulation.
For example, for an intensity map sequence of 60 frames per second and a Vidar-I operating clock frequency of 10 MHz, about 166,666 frames (10,000,000/60 − 1) need to be inserted between adjacent key frames.
And through frame interpolation, the frame rate is improved to the working clock frequency of the pulse camera.
In one embodiment, the frame interpolation method employs bilinear interpolation.
In one embodiment, the frame interpolation method is a linear interpolation algorithm; or an algorithm based on a deep neural network; or an optical flow based algorithm.
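For intuition, linear interpolation between key frames can be sketched as follows (shown for a single pixel value per frame; in the real pipeline this runs per pixel, and the factor would be the clock-to-frame-rate ratio such as 10,000,000/60):

```python
def upsample_linear(keyframes, factor):
    """Insert linearly interpolated values so that each key-frame
    interval contributes `factor` frames; the last key frame is kept."""
    out = []
    for a, b in zip(keyframes, keyframes[1:]):
        for k in range(factor):
            t = k / factor                 # position inside the interval
            out.append(a * (1 - t) + b * t)
    out.append(keyframes[-1])
    return out
```

Deep-network or optical-flow interpolation would replace the inner loop, but the frame-count bookkeeping is the same.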
S400, acquiring an integrate-and-fire model, and simulating the intensity map sequence according to the integrate-and-fire model;
and S500, outputting the simulated pulse data.
In the application, the imaging process of the pulse camera is simulated, the existing image and video data are converted into pulse data, and the problem that the real pulse data are seriously deficient is solved.
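As a toy end-to-end sketch of the four steps for a single-pixel grayscale input (the function name, the repetition-based frame-rate increase, and all numeric values are illustrative assumptions, not the patent's implementation):

```python
def simulate(keyframes, upsample_factor, coeff, theta):
    """Toy single-pixel pipeline: S200 frame-rate increase (by repetition),
    S300 brightness adjustment, S400 integrate-and-fire, S500 binary output."""
    seq = [g for g in keyframes for _ in range(upsample_factor)]  # S200
    seq = [g * coeff for g in seq]                                # S300
    v, pulses = 0.0, []
    for g in seq:                                                 # S400
        v += g
        if v >= theta:          # threshold crossed: fire and reset
            pulses.append(1)
            v = 0.0
        else:
            pulses.append(0)
    return pulses                                                 # S500
```

With a constant input, the output pulse train fires at a rate proportional to the input brightness, which is the behavior the patent relies on.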
In one embodiment, as shown in fig. 2, after the step S200 of increasing the frame rate of the intensity map sequence according to the operating clock frequency of the pulse camera, the method further includes:
and S300, multiplying the intensity map sequence after the frame rate is improved by a transformation coefficient to adjust the brightness.
Based on the principle of the pulse camera, a pixel outputs a pulse only after its accumulated value exceeds a threshold; if the pixel values in the intensity map sequence are too low, the resulting pulses are sparse. Natural scenes have very high luminance, but once captured as a photo or video the corresponding brightness is greatly reduced by the imaging process, so simulating such data directly yields overly sparse, distorted pulse trains.
To solve this problem, the intensity map sequence needs to be adjusted in brightness, i.e. the corresponding brightness is increased. Each pixel value on the intensity map represents the brightness intensity of light in the pulse camera equivalent field of view; by introducing the transformation coefficient, the brightness value of the intensity map sequence can be amplified, so that the problem of insufficient brightness of the video to be simulated is solved.
The specific value of the transform coefficient can be obtained according to actual conditions or multiple measurements. For example, images of the same scene are acquired through a pulse camera and a normal camera, then the images of the normal camera are used as a video to be simulated to simulate based on a preset transformation coefficient, and the transformation coefficient is adjusted based on a simulation result until the simulation result meets requirements. The transform coefficients may also be determined in other ways, such as historical data or the like.
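A minimal sketch of the brightness adjustment, with frames as flat pixel lists; the default coefficient is the assumed midpoint of the (1–3)×10⁻⁴ range stated in the claims, and the interpretation of the product as equivalent luminance follows the text:

```python
LAMBDA = 2e-4  # assumed midpoint of the (1-3)e-4 range from the claims

def adjust_brightness(intensity_seq, coeff=LAMBDA):
    """Multiply every pixel of every frame by the transformation coefficient."""
    return [[v * coeff for v in frame] for frame in intensity_seq]
```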
In one embodiment, the integrate-and-fire model is:

$\Delta V(t) = \dfrac{\eta}{C_{pd}}\left(\dfrac{G(t)}{\lambda} + N_1(t)\right)\delta_t$

where $C_{pd}$ is the photosensitive-circuit capacitance, $\lambda$ is the transformation coefficient, $N_1(t)$ is the Poisson noise model, $G(t)$ is the brightness intensity of the intensity map sequence, $t$ is the time step, and $\eta$ is a linear coefficient.
The integrate-and-fire model introduces a Poisson model to perform the noise-addition operation, so that the noise caused by scattered light is loaded and the simulation accuracy is improved.
In one embodiment, in step S400, an integrate-and-fire model is acquired; in simulating the intensity map sequence according to this model, a simulation time step and an excitation threshold are determined, each simulation time step is integrated by the integrate-and-fire model, a pulse is fired once the integration result exceeds the excitation threshold, and the simulation is completed based on the fired pulses.
The total simulation time T and the simulation time step are determined, and each simulation time step is integrated by the integrate-and-fire model. After each time step ends, the integration result is checked: if it exceeds the threshold, a pulse is fired and the integration result is reset; otherwise the integration result is kept. The next time step then proceeds, until the total simulation time is reached.
Specifically: the total simulation time T is determined, and pulse simulation is performed with the brightness intensity map sequence. Each pixel value on the brightness intensity map represents the brightness intensity of light in the pulse camera's equivalent field of view. The brightness intensity is substituted into the integrate-and-fire model to perform the integration and noise-addition operations.
The generation of the pulse signal is described by taking one pixel p as an example. The brightness intensity at pixel p is $G_p$; within a simulation time step $\delta_t$, under the influence of scattering noise, the actual brightness intensity received at pixel p of the photosensitive chip is $G_p + N_1(t)$.
Each pixel of the pulse camera can be regarded as an integrate-and-fire neuron, and the change of the neuron membrane potential in each time step is:

$\Delta V(t) = \dfrac{\eta}{C_{pd}}\left(\dfrac{G(t)}{\lambda} + N_1(t)\right)\delta_t$

where $\eta$ is, for each pixel, the linear coefficient between the pulse firing frequency and the light intensity, $\lambda$ is the linear coefficient between the pixel gray value and the brightness intensity, and $C_{pd}$ is the photosensitive-circuit capacitance parameter of each pixel.
After the neuron membrane potential is updated, judging whether the current membrane potential exceeds an excitation threshold value theta, if so, resetting the membrane potential and issuing a pulse; if not, the accumulation process continues.
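The per-pixel loop just described can be sketched as follows. The membrane-potential increment mirrors the symbols in the text (η, λ, C_pd); the default values are placeholders, not the measured parameters:

```python
def simulate_pixel(luminance, theta, eta=1.0, lam=1.0, c_pd=1.0, noise=None):
    """Integrate-and-fire for one pixel: accumulate each time step,
    fire when the excitation threshold theta is crossed, then reset.

    luminance: per-time-step brightness G(t); noise: optional N1(t) per step."""
    v, pulses = 0.0, []
    for t, g in enumerate(luminance):
        n1 = noise[t] if noise is not None else 0.0
        v += eta * (g / lam + n1) / c_pd    # membrane-potential increment
        if v >= theta:                      # threshold crossed: fire and reset
            pulses.append(1)
            v = 0.0
        else:
            pulses.append(0)
    return pulses
```

Note how adding scattering noise shifts individual firing times without changing the long-run rate much, which is the intended noise behavior.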
The simulation duration T determines how long a pulse sequence is generated. T is divided into N simulation time units of length $T_r$, so that $T = N \cdot T_r$. Each simulation time unit is further divided into K simulation time steps; a time step is the minimum time scale of the simulation process, its length is twice the reciprocal of the pulse camera's operating clock frequency, and it is denoted $\delta_t$.
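A quick sanity check of this discretization, using the 10 MHz clock and 40 kHz readout figures that appear elsewhere in the text (pairing these two particular numbers is an assumption for illustration):

```python
CLK = 10_000_000            # operating clock frequency (Hz), 10 MHz example
F_READOUT = 40_000          # readout clock frequency (Hz)

delta_t = 2 / CLK           # time step = twice the clock reciprocal -> 2e-7 s
T_r = 1 / F_READOUT         # simulation time unit = minimum firing interval
K = round(T_r / delta_t)    # time steps per simulation time unit
```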
To improve the realism of the simulated pulse data, noise is added during the simulation. According to its source, the noise is divided into scattering noise (denoted $N_1$) and intrinsic noise (denoted $N_2$). Scattering noise is caused by diffusely reflected light in natural scenes and is generally modeled with a Poisson model. Intrinsic noise is caused by the manufacturing process of the pulse camera: even with no optical signal input, pulse signals are still generated, so this type of noise is independent of the optical signal. During the simulation, the distribution of this noise is obtained by statistical methods and then modeled.
In one embodiment, when the pulse readout frequency of the pulse camera is lower than the operating frequency, if multiple pulses are fired within one simulation time unit, only one is read out.
For example, the pulse readout frequency of a pulse camera is 40000 Hz, so the fired pulses cannot all be read out in real time; even if multiple pulses are fired within one simulation time unit, only one can be read out.
In one embodiment, intrinsic noise is introduced: the intrinsic noise is initialized in advance as a noise matrix, and after the integration process of each time unit is completed, the fired pulse is determined together with the noise matrix.
For example, for simulation time T, the intrinsic noise is initialized in advance as a noise matrix $N_2$, in which 1 represents the presence of noise and 0 its absence. After the integration process of each time unit finishes, the value at the corresponding position in $N_2$ is read: if it is 1, one pulse is read out at that moment regardless of whether a fired pulse exists; if it is 0, the simulation result at the current moment is unaffected.
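The read-out rule just described (a 1 in N₂ forces a pulse, a 0 leaves the result alone) amounts to an element-wise OR; a one-pixel sketch with flat lists standing in for the matrices:

```python
def apply_intrinsic_noise(pulses, n2):
    """OR the pre-initialized intrinsic-noise matrix N2 into the readout:
    a 1 in N2 yields a pulse regardless of the simulated value."""
    return [p | q for p, q in zip(pulses, n2)]
```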
In one embodiment, in the outputting the simulated pulse data, the pulse data is encoded into binary pulse data and then output.
In one embodiment, the output is a direct readout of the pulse signal in the simulated pulse data.
In one embodiment, the output simulated pulse data is read after a pulse signal is encoded, and the encoding is quaternary encoding, octal encoding or hexadecimal encoding.
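Binary pulse output is typically packed bit-wise for storage; a sketch of such packing (MSB-first ordering is an assumption here, not specified by the text):

```python
def pack_pulses(bits):
    """Pack a binary pulse train into bytes, 8 pulses per byte, MSB first;
    the final byte is zero-padded when len(bits) is not a multiple of 8."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        chunk = bits[i:i + 8]
        chunk = chunk + [0] * (8 - len(chunk))  # pad the last group
        byte = 0
        for b in chunk:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)
```

The hexadecimal view of the packed bytes corresponds to the hexadecimal encoding option mentioned above.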
For ease of understanding, the principle of the simulation method is explained here. Suppose a visual scene is recorded simultaneously by a pulse camera and a conventional camera, with $S$ denoting the pulse sequence and $G$ the image sequence. The goal of this application is to generate a simulated pulse sequence Ŝ from the video sequence $G$ while minimizing the difference between Ŝ and the real pulse sequence $S$. This difference is measured both by the pulse firing frequency and by the distribution of the pulse sequence.
To generate the simulated pulse sequence Ŝ from the video sequence $G$, a relationship between image gray scale and pulse frequency must first be established. Specifically, the relationship between image gray scale $G$ and brightness intensity $I$ is established first; the relationship between brightness intensity $I$ and pulse firing frequency $f_s$ is then derived. In a conventional CCD camera, the brightness intensity of the scene is generally in a positive linear relationship with the image gray scale, so the relationship between the imaging result $G$ of a conventional camera and the scene brightness intensity $I$ is written $G = \lambda I$, where $\lambda$ is a constant.
The pulse firing principle of the pulse camera is as follows. Each pixel of the sensing chip consists of three circuit modules: an integration circuit, a reset circuit, and a readout circuit. Correspondingly, pulse generation has three states: integration, reset, and readout. In the integration state, the photodiode converts the optical signal into a photocurrent $I_{ph}$. As photoelectric conversion proceeds, the voltage on the capacitor $C_{pd}$ drops continuously while the transistor node voltage $V_{tr}$ gradually rises. When the node voltage exceeds the trigger threshold θ, the comparator flips. Once the reset circuit detects the flip signal, the integration circuit immediately enters the reset state: the reset circuit generates a reset signal that is fed to the transistor, resetting the photodiode and re-entering a new integration state. Meanwhile, the flip signal stored in the reset circuit is read out by the readout circuit, and the information in the reset circuit is cleared. The clock frequency of the readout circuit is 40 kHz, which therefore limits the maximum pulse firing frequency of the pulse chip to 40 kHz and determines the 25 µs temporal resolution of the pulse camera.
Based on the above analysis, the condition under which a pixel triggers a pulse is:

$\dfrac{1}{C_{pd}}\displaystyle\int_{\Delta t} I_{ph}\,dt \geq \theta$

where $\Delta t$ is the integration time and the trigger threshold is $\theta = V_{dd} - V_{ref}$, with $V_{dd}$ the supply voltage and $V_{ref}$ the reference voltage. From the fired pulses it can be seen that a higher light intensity causes a higher pulse firing frequency. Assuming that the light intensity perceived by a pixel is stable, the firing frequency of the pixel is:

$f_s = \dfrac{I_{ph}}{C_{pd}\,\theta}$

and the pulse excitation condition becomes:

$\dfrac{I_{ph}\,\Delta t}{C_{pd}} \geq \theta$

According to the sensing principle of the pulse camera, the integration time required for a pixel to fire a pulse is:

$\Delta t = \dfrac{C_{pd}\,\theta}{I_{ph}}$
based on the pulse-emitting principle of the pulse camera, the pulse-emitting frequencyf s And intensity of photocurrentI ph Have a linear relationship therebetween. Further assuming the photocurrent intensityI ph The relationship with the illumination intensity isI ph =R(I) Then can deducef s AndIthe relationship between them is:
since the intensity of photocurrent is directly measuredI ph And intensity of lightIExperiments on the relationship are very difficult, and the method adopts the principle that the illumination intensity is measuredIAnd pulse delivery frequencyf s The relationship between the two, and then the photocurrent intensity is estimated from the experimental resultsI ph And intensity of lightIThe relationship (2) of (c). The detailed experimental procedures are not described herein.
The experimental results show that when the illumination intensity I is less than the turning point I_S (i.e., 11000 Lux), the pulse firing frequency and the illumination intensity are approximately in a linear positive correlation; when the illumination intensity exceeds 11000 Lux, the pulse firing frequency reaches the maximum excitation frequency f_max = 1/T_r = 40 kHz and no longer changes even as the light intensity increases. From these results it can be inferred that, when the light intensity is below 11000 Lux, the photocurrent intensity I_ph and the light intensity I are also linearly positively correlated, and the linear coefficient η between I and f_s is measured to be 1.09e-13. Combining this with the relationship between image gray scale and light intensity, the relationship between the image gray value G and the pulse firing frequency f_s can be deduced as:
thus, a gray value of a pixel is givenGBased on this formula, the pulse excitation frequency of each pixel can be simulatedf s . However, the pulse generated by the model simulation is data in an ideal case, and the influence of noise is not considered.
For this purpose, it is also necessary to perform noise analysis and modeling:
in an actual scene, the light sensing process is influenced by diffuse reflection, and the pulse interval generated in simulation under an ideal model is further influenced. Reach the photosensitive core in a period of timeThe number of scattered photons of the sheet is random. Poisson distributions are commonly used to model the distribution of random events and are applied in modeling image noise. Thus, a poisson model is employed herein to model scattering noise in the simulation process. Suppose that at unit time delta t (δ t =2/CLK) The probability of the number of scattered photons arriving internally at the pixel being k is gamma, then at a minimum burst intervalT r The probability distribution function of the poisson noise model can be expressed as:
in addition, the inherent noise causes the pulse camera to be excited even in the case where no light is incident
And (2) pulse sending, which is determined by experiments (the specific experimental mode is not described herein) that the pulse sending interval approximately follows Gaussian distribution. Therefore, a gaussian model is proposed herein based on experimental results to simulate the intrinsic noise in the simulation process, as shown in the following formula:
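Sampling dark-pulse intervals from this Gaussian model can be sketched as follows. The mean and standard deviation are assumed placeholders, since the experimentally fitted parameters are not given in the text; the clip at one clock tick reflects that no interval can be shorter than the readout period.

```python
import numpy as np

rng = np.random.default_rng(1)
MU, SIGMA = 2000.0, 300.0       # assumed mean/std of dark-pulse interval (clock ticks)

def dark_pulse_intervals(n):
    """Draw n dark-pulse intervals from the Gaussian intrinsic-noise model."""
    # intervals shorter than one readout period are physically impossible
    return np.clip(rng.normal(MU, SIGMA, size=n), 1, None)

ivals = dark_pulse_intervals(5000)
print(ivals.mean())
```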
in the application, in order to verify the simulation effect of the simulation method and obtain the pulse sequence of the real scene recorded by the pulse camera, firstly, the pulse sequence is restored into a gray image by using the pulse data through a reconstruction algorithm (TFW), and the gray image is simulated as a video to be simulated through the simulation method to obtain the simulated pulse sequence. And calculating the similarity of the true pulse sequence and the simulation pulse sequence.
In the present application, two metrics are used to calculate the similarity between the real pulse sequence and the simulated pulse sequence.
In the first metric, each pulse sequence is treated as an N × M matrix (M = W × H, where W and H are the spatial resolution of the pulse camera), and the pulse excitation frequency similarity of each pixel is calculated.
The second metric is the KL divergence, which evaluates how similar the real and simulated pulse sequences are in distribution.
In the calculation results, the similarities of scene 1, scene 2, scene 3, and scene 4 under the first metric are 0.991, 0.993, 0.994, and 0.993, respectively; under the second metric, the results for scene 1, scene 2, scene 3, and scene 4 are 0.081, 0.079, 0.057, and 0.044, respectively. These results show that the pulse data generated by the simulation method are very close to the real pulse data, and the simulation accuracy is greatly improved.
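The two metrics can be sketched as below, under assumed array shapes (each pulse sequence is an N × M binary matrix of N time steps by M = W × H pixels). The exact published formulas are not reproduced in the text, so both functions are illustrative implementations of the stated ideas: per-pixel firing-frequency agreement, and KL divergence between firing-rate distributions.

```python
import numpy as np

def frequency_similarity(real, sim):
    """Per-pixel firing-frequency similarity, averaged over pixels (1 = identical)."""
    f_real = real.mean(axis=0)           # firing rate of each pixel
    f_sim = sim.mean(axis=0)
    return 1.0 - np.abs(f_real - f_sim).mean()

def kl_divergence(real, sim, eps=1e-9):
    """KL divergence between normalized firing-rate distributions (lower = closer)."""
    p = real.mean(axis=0) + eps
    q = sim.mean(axis=0) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(2)
real = (rng.random((1000, 64)) < 0.3).astype(np.uint8)
sim = (rng.random((1000, 64)) < 0.3).astype(np.uint8)
print(frequency_similarity(real, sim), kl_divergence(real, real))
```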
An embodiment of the present application provides a pulse camera simulation apparatus for executing the pulse camera simulation method described above; the pulse camera simulation apparatus is described in detail below.
As shown in fig. 3, the pulse camera simulation apparatus includes:
the frame extraction module 101 is configured to extract a key frame sequence of a video to be simulated, and convert the key frame sequence into an intensity map sequence;
a frame rate increasing module 102, configured to increase a frame rate of the intensity map sequence according to a working clock frequency of the pulse camera;
the simulation module 104 is configured to obtain an integral distribution model, and simulate the intensity map sequence according to the integral distribution model;
and the output module 105 is used for outputting the simulated pulse data.
In one embodiment, as shown in fig. 4, the apparatus further comprises:
and a brightness adjustment module 103, configured to adjust the brightness of the frame-rate-raised intensity map sequence by multiplying it by a transformation coefficient.
In one embodiment, the simulation module 104 is further configured to:
and determining simulation time steps and an excitation threshold value, integrating each simulation time step through the integral issuing model, issuing a pulse after an integration result exceeds the excitation threshold value, and finishing the simulation based on the issued pulse.
In one embodiment, the frame extraction module 101 is further configured to:
converting the key frame sequence into an intensity map sequence of an R channel, a G channel and a B channel under the condition that the key frame sequence is a color image sequence; and converting the key frame sequence into a single-channel intensity map sequence under the condition that the key frame sequence is a gray image sequence.
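The channel-conversion rule above can be sketched as follows; channel ordering and array layout are assumptions:

```python
import numpy as np

def to_intensity_maps(frames):
    """frames: (T, H, W) grayscale or (T, H, W, 3) color key-frame sequence.
    Color input yields per-channel R/G/B intensity map sequences; grayscale
    input yields a single-channel sequence."""
    if frames.ndim == 4 and frames.shape[-1] == 3:
        r, g, b = frames[..., 0], frames[..., 1], frames[..., 2]
        return {"R": r, "G": g, "B": b}
    return {"gray": frames}

color = np.zeros((5, 8, 8, 3))
gray = np.zeros((5, 8, 8))
print(sorted(to_intensity_maps(color)), sorted(to_intensity_maps(gray)))
```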
In one embodiment, the integral distribution model is:
In the model, C_pd is the conversion capacitance, λ is the transformation coefficient, N_1(t) is the Poisson model, G(t) is the luminance intensity of the intensity map sequence, t is the time step, and η is the linear coefficient.
In one embodiment, the output module 105 is further configured to:
and in the output of the simulated pulse data, the pulse data is encoded into binary pulse data and then output.
The pulse camera simulation device provided by the above embodiment of the present application and the pulse camera simulation method provided by the embodiment of the present application have the same inventive concept and have the same beneficial effects as the method adopted, operated or implemented by the application program stored in the pulse camera simulation device.
Having described the internal functions and structure of the pulse camera simulation apparatus, in practice the apparatus can be implemented as a control device, as shown in fig. 5, including: a memory 301 and a processor 303.
A memory 301, which may be configured to store a program.
In addition, the memory 301 may also be configured to store other various data to support operations on the control device. Examples of such data include instructions for any application or method operating on the control device, contact data, phonebook data, messages, pictures, videos, and the like.
The memory 301 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A processor 303, coupled to the memory 301, for executing programs in the memory 301 for:
extracting a key frame sequence of a video to be simulated, and converting the key frame sequence into an intensity map sequence;
increasing the frame rate of the intensity map sequence according to the working clock frequency of the pulse camera;
acquiring an integral distribution model, and simulating the intensity map sequence according to the integral distribution model;
and outputting the simulated pulse data.
In one embodiment, the processor 303 is specifically configured to:
and multiplying the intensity map sequence after the frame rate is improved by a transformation coefficient to perform brightness adjustment.
In one embodiment, the processor 303 is specifically configured to:
and determining simulation time steps and an excitation threshold value, integrating each simulation time step through the integral issuing model, issuing a pulse after an integration result exceeds the excitation threshold value, and finishing the simulation based on the issued pulse.
In one embodiment, the processor 303 is specifically configured to:
converting the key frame sequence into an intensity map sequence of an R channel, a G channel and a B channel under the condition that the key frame sequence is a color image sequence; and converting the key frame sequence into a single-channel intensity map sequence under the condition that the key frame sequence is a gray image sequence.
In one embodiment, the integral distribution model is:
In the model, C_pd is the conversion capacitance, λ is the transformation coefficient, N_1(t) is the Poisson model, G(t) is the luminance intensity of the intensity map sequence, t is the time step, and η is the linear coefficient.
In one embodiment, the processor 303 is specifically configured to:
and encoding the pulse data into binary pulse data and outputting the binary pulse data.
In the present application, only some of the components are schematically shown in fig. 5, and it is not intended that the control apparatus includes only the components shown in fig. 5.
The control device provided by the embodiment of the present application and the pulse camera simulation method provided by the embodiment of the present application have the same beneficial effects as the method adopted, operated or implemented by the application program stored in the control device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
The present application further provides a computer-readable storage medium corresponding to the pulse camera simulation method provided in the foregoing embodiments, and a computer program (i.e., a program product) is stored thereon, and when being executed by a processor, the computer program will execute the pulse camera simulation method provided in any of the foregoing embodiments.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
The computer-readable storage medium provided by the above-mentioned embodiment of the present application and the pulse camera simulation method provided by the embodiment of the present application have the same beneficial effects as the method adopted, run or implemented by the application program stored in the computer-readable storage medium.
It should be noted that in the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (7)
1. A pulse camera simulation method is characterized by comprising the following steps:
extracting a key frame sequence of a video to be simulated, and converting the key frame sequence into an intensity map sequence;
increasing the frame rate of the intensity map sequence according to the working clock frequency of the pulse camera;
acquiring an integral distribution model, and simulating the intensity map sequence according to the integral distribution model;
outputting the simulated pulse data;
wherein the integral distribution model is:
in the model, C_pd is the conversion capacitance, λ is the transformation coefficient, N_1(t) is the Poisson model, G(t) is the luminance intensity of the intensity map sequence, t is the time step, and η is the linear coefficient;
and in the simulation of the intensity map sequence according to the integral issuing model, determining simulation time steps and an excitation threshold value, integrating each simulation time step through the integral issuing model, issuing a pulse after an integration result exceeds the excitation threshold value, and finishing the simulation based on the issued pulse.
2. The method of claim 1, wherein after increasing the frame rate of the sequence of intensity maps according to the operating clock frequency of the pulse camera, the method further comprises:
and multiplying the intensity map sequence after the frame rate is improved by a transformation coefficient to perform brightness adjustment.
3. The method according to any of claims 1-2, wherein the converting the sequence of key frames into a sequence of intensity maps is performed by converting the sequence of key frames into a sequence of intensity maps for an R-channel, a G-channel and a B-channel in case the sequence of key frames is a sequence of color images; and converting the key frame sequence into a single-channel intensity map sequence under the condition that the key frame sequence is a gray image sequence.
4. The method according to any one of claims 1-2, wherein in outputting the simulated pulse data, the pulse data are encoded into binary pulse data and then output.
5. A pulse camera simulation apparatus, comprising:
the frame extraction module is used for extracting a key frame sequence of a video to be simulated and converting the key frame sequence into an intensity map sequence;
the frame rate increasing module is used for increasing the frame rate of the intensity map sequence according to the working clock frequency of the pulse camera;
the simulation module is used for acquiring an integral distribution model and simulating the intensity map sequence according to the integral distribution model;
wherein the integral distribution model is:
in the model, C_pd is the conversion capacitance, λ is the transformation coefficient, N_1(t) is the Poisson model, G(t) is the luminance intensity of the intensity map sequence, t is the time step, and η is the linear coefficient;
in simulating the intensity map sequence according to the integral distribution model, a simulation time step and an excitation threshold are determined, integration is performed at each simulation time step through the integral distribution model, a pulse is issued once the integration result exceeds the excitation threshold, and the simulation is completed based on the issued pulses;
and the output module is used for outputting the simulated pulse data.
6. An electronic device, comprising a computer-readable storage medium storing a computer program and a processor, the computer program, when read and executed by the processor, implementing the method according to any one of claims 1-4.
7. A computer-readable storage medium, characterized in that it stores a computer program which, when read and executed by a processor, implements the method according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210466831.2A CN114584713B (en) | 2022-04-29 | 2022-04-29 | Pulse camera simulation method and device, control equipment and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210466831.2A CN114584713B (en) | 2022-04-29 | 2022-04-29 | Pulse camera simulation method and device, control equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114584713A CN114584713A (en) | 2022-06-03 |
CN114584713B true CN114584713B (en) | 2022-09-20 |
Family
ID=81784812
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210466831.2A Active CN114584713B (en) | 2022-04-29 | 2022-04-29 | Pulse camera simulation method and device, control equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114584713B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112435279A (en) * | 2019-08-26 | 2021-03-02 | 天津大学青岛海洋技术研究院 | Optical flow conversion method based on bionic pulse type high-speed camera |
EP3806413A1 (en) * | 2019-10-07 | 2021-04-14 | FRAUNHOFER-GESELLSCHAFT zur Förderung der angewandten Forschung e.V. | Method for the acquisition of impulse responses, e.g. for ultra-wideband systems |
CN113067979A (en) * | 2021-03-04 | 2021-07-02 | 北京大学 | Imaging method, device, equipment and storage medium based on bionic pulse camera |
CN113329146A (en) * | 2021-04-25 | 2021-08-31 | 北京大学 | Pulse camera simulation method and device |
CN114118268A (en) * | 2021-11-25 | 2022-03-01 | 福州大学 | Antagonistic attack method and system for generating uniformly distributed disturbance by taking pulse as probability |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11228758B2 (en) * | 2016-01-22 | 2022-01-18 | Peking University | Imaging method and device |
-
2022
- 2022-04-29 CN CN202210466831.2A patent/CN114584713B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112435279A (en) * | 2019-08-26 | 2021-03-02 | 天津大学青岛海洋技术研究院 | Optical flow conversion method based on bionic pulse type high-speed camera |
EP3806413A1 (en) * | 2019-10-07 | 2021-04-14 | FRAUNHOFER-GESELLSCHAFT zur Förderung der angewandten Forschung e.V. | Method for the acquisition of impulse responses, e.g. for ultra-wideband systems |
CN113067979A (en) * | 2021-03-04 | 2021-07-02 | 北京大学 | Imaging method, device, equipment and storage medium based on bionic pulse camera |
CN113329146A (en) * | 2021-04-25 | 2021-08-31 | 北京大学 | Pulse camera simulation method and device |
CN114118268A (en) * | 2021-11-25 | 2022-03-01 | 福州大学 | Antagonistic attack method and system for generating uniformly distributed disturbance by taking pulse as probability |
Non-Patent Citations (2)
Title |
---|
High-Speed Motion Scene Reconstruction for Spike Camera via Motion Aligned Filtering;Jing Zhao 等;《2020 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS)》;20201014;全文 * |
Hybrid Coding of Spatiotemporal Spike Data for a Bio-Inspired Camera;Lin Zhu 等;《IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY》;20210731;第31卷(第7期);全文 * |
Also Published As
Publication number | Publication date |
---|---|
CN114584713A (en) | 2022-06-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Gehrig et al. | Combining events and frames using recurrent asynchronous multimodal networks for monocular depth prediction | |
US8547442B2 (en) | Method and apparatus for motion blur and ghosting prevention in imaging system | |
US20190164257A1 (en) | Image processing method, apparatus and device | |
US8311385B2 (en) | Method and device for controlling video recordation property of camera module according to velocity of object | |
US9432589B2 (en) | Systems and methods for generating high dynamic range images | |
CN108419028B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
US11539896B2 (en) | Method and apparatus for dynamic image capturing based on motion information in image | |
CN102356631A (en) | Image processing device, signal processing device, and program | |
CN109300151B (en) | Image processing method and device and electronic equipment | |
CN111753869A (en) | Image processing method, image processing apparatus, storage medium, image processing system, and learned model manufacturing method | |
CN103020927A (en) | Image processing apparatus and image processing method | |
US20220198625A1 (en) | High-dynamic-range image generation with pre-combination denoising | |
CN105141853A (en) | Image processing method and electronic device | |
Duan et al. | Guided event filtering: Synergy between intensity images and neuromorphic events for high performance imaging | |
CN115601403A (en) | Event camera optical flow estimation method and device based on self-attention mechanism | |
CN107211092A (en) | Image capture with improved temporal resolution and perceptual image definition | |
CN109325905B (en) | Image processing method, image processing device, computer readable storage medium and electronic apparatus | |
CN114584713B (en) | Pulse camera simulation method and device, control equipment and readable storage medium | |
CN111798484A (en) | Continuous dense optical flow estimation method and system based on event camera | |
CN117036442A (en) | Robust monocular depth completion method, system and storage medium | |
US20230224599A1 (en) | Systems, methods, and media for high dynamic range imaging using single-photon and conventional image sensor data | |
EP3844945B1 (en) | Method and apparatus for dynamic image capturing based on motion information in image | |
CN114331900A (en) | Video denoising method and video denoising device | |
EP3979648A1 (en) | Device for compensating the movement of an event sensor and associated observation system and method | |
KR20210133844A (en) | Systems and methods of motion estimation using monocular event-based sensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |