CN113744355B - Pulse signal processing method, device and equipment

Info

Publication number: CN113744355B
Application number: CN202010476161.3A
Authority: CN (China)
Prior art keywords: target, pulse, pulse signal, pixel position, original
Legal status: Active (application granted)
Other languages: Chinese (zh)
Other versions: CN113744355A
Inventors: 唐超影, 肖飞
Assignee: Hangzhou Hikvision Digital Technology Co Ltd
Application filed by Hangzhou Hikvision Digital Technology Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation

Abstract

The application provides a method, a device and equipment for processing pulse signals, wherein the method comprises the following steps: acquiring an original pulse signal output by a pulse sensor; preprocessing the original pulse signal to obtain a preprocessed target pulse signal; performing image reconstruction based on the target pulse signal to obtain a two-dimensional image; and carrying out post-processing on the two-dimensional image to obtain a target image. By the technical scheme, the pulse signal can be reconstructed into the target image.

Description

Pulse signal processing method, device and equipment
Technical Field
The present application relates to the field of image processing, and in particular, to a method, an apparatus, and a device for processing a pulse signal.
Background
A traditional image sensor obtains scene light intensity information by light integration and is widely used in fields such as photography and image recognition. The image sensor samples the scene at a fixed frequency in units of frames to obtain multi-frame images. However, real-world time is continuous and has no concept of frames, so the image sensor loses part of the time domain information and has difficulty reflecting high-speed dynamic changes in the scene.
Inspired by the pulse vision mechanism of the biological retina, many types of pulse sensors have emerged in recent years, which respond to light intensity or changes in light intensity and output pulse signals. Compared with the image sensor, the pulse sensor has characteristics such as high time resolution, high dynamic range, and asynchronous output, and can acquire more time domain information, so the pulse sensor is widely used.
Since the pulse sensor outputs a pulse signal, the pulse signal needs to be reconstructed into a target image. However, there is currently no efficient implementation that is capable of reconstructing a pulse signal into a target image.
Disclosure of Invention
The application provides a processing method of a pulse signal, which comprises the following steps:
acquiring an original pulse signal output by a pulse sensor;
preprocessing the original pulse signal to obtain a preprocessed target pulse signal;
performing image reconstruction based on the target pulse signal to obtain a two-dimensional image;
and carrying out post-processing on the two-dimensional image to obtain a target image.
The application provides a pulse signal processing device, which comprises:
the acquisition module is used for acquiring an original pulse signal output by the pulse sensor;
The processing module is used for preprocessing the original pulse signal to obtain a preprocessed target pulse signal;
the reconstruction module is used for reconstructing an image based on the target pulse signal to obtain a two-dimensional image;
and the generating module is used for carrying out post-processing on the two-dimensional image to obtain a target image.
The present application provides a processing apparatus of a pulse signal, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to perform the steps of:
acquiring an original pulse signal output by a pulse sensor;
preprocessing the original pulse signal to obtain a preprocessed target pulse signal;
performing image reconstruction based on the target pulse signal to obtain a two-dimensional image;
and carrying out post-processing on the two-dimensional image to obtain a target image.
As can be seen from the above technical solutions, in the embodiments of the present application, an original pulse signal output by a pulse sensor can be obtained, the original pulse signal is preprocessed to obtain a preprocessed target pulse signal, and a target image is obtained based on the target pulse signal, so that the pulse signal can be reconstructed into the target image. In this way, signal processing can be performed on the pulse signal to output both a conventional image signal (i.e. the target image) and a pulse signal (i.e. the target pulse signal), providing a complete pulse processing framework for the pulse signal, wherein the conventional image signal is used for human observation and conventional image applications, and the pulse signal is used for pulse-based applications such as spiking neural networks.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments of the present application or the prior art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments described in the present application, and a person of ordinary skill in the art may obtain other drawings from these drawings.
FIG. 1 is a flow chart of a method of processing a pulse signal in one embodiment of the application;
FIG. 2 is a schematic diagram of a 3D noise reduction process in one embodiment of the application;
FIG. 3 is a schematic diagram of a plurality of pulse transmit instants in one embodiment of the application;
FIGS. 4A-4C are schematic diagrams of application scenarios in an embodiment of the present application;
fig. 5 is a block diagram of a pulse signal processing apparatus according to an embodiment of the present application;
fig. 6 is a block diagram of a pulse signal processing apparatus according to an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to any or all possible combinations including one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the application. Furthermore, depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining".
Inspired by the pulse vision mechanism of the biological retina, many types of pulse sensors (which may also be referred to as pulse cameras) have emerged in recent years; they can respond to light intensity or changes in light intensity and output pulse signals. Compared with an image sensor, the pulse sensor has characteristics such as high time resolution, high dynamic range, and asynchronous output, and can acquire more time domain information. In order to reconstruct the pulse signal output by the pulse sensor into a target image, the embodiments of the present application provide a pulse signal processing method, which can preprocess the original pulse signal to obtain a preprocessed target pulse signal and obtain the target image based on the target pulse signal, so that the pulse signal can be reconstructed into the target image.
The method for processing a pulse signal according to the embodiment of the present application will be described below with reference to specific embodiments.
Referring to fig. 1, a flowchart of a method for processing a pulse signal is shown, where the method may include:
step 101, acquiring an original pulse signal output by a pulse sensor.
For example, the pulse sensor may output a pulse train, which may include a plurality of pulse signals, and the pulse signals output by the pulse sensor are referred to as raw pulse signals for convenience of description.
By way of example, the pulse sensor may include, but is not limited to, a retinal camera, the pulse sensor outputting a continuous raw pulse signal, a collection of which is referred to as a pulse train.
For example, the original pulse signal may be expressed as (x, y, t), (x, y) being used to represent the spatial coordinates of the original pulse signal, and (x, y) being referred to herein as the pixel position of the original pulse signal. t represents a time stamp generated by the original pulse signal, that is, a time when the pulse sensor outputs the original pulse signal, which may be referred to as a pulse transmission time, and the pulse transmission time is subsequently referred to as the original pulse transmission time for convenience of distinction.
For the pixel position (x1, y1), the light intensity is accumulated from time t0; if the accumulated light intensity reaches a threshold value at time t11, the pulse sensor outputs an original pulse signal for the pixel position (x1, y1), which is denoted (x1, y1, t11). From time t11, light intensity accumulation is resumed; if the accumulated light intensity reaches the threshold value at time t12, the pulse sensor outputs an original pulse signal for the pixel position (x1, y1), which is denoted (x1, y1, t12), and so on.
Similarly, for the pixel position (x2, y2), the light intensity is accumulated from time t0; if the accumulated light intensity reaches the threshold value at time t21, the pulse sensor outputs an original pulse signal for the pixel position (x2, y2), which is denoted (x2, y2, t21), and so on. The implementation for the other pixel positions is similar to that for the pixel position (x1, y1), and the detailed description is not repeated here.
In summary, in step 101, an original pulse signal output by the pulse sensor may be obtained, where the original pulse signal is expressed as (x, y, t), (x, y) represents a pixel position of the original pulse signal, and t represents an original pulse transmitting time of the original pulse signal. Obviously, when the light intensity of the pixel position (x, y) is stronger, the transmission frequency of the original pulse signal of the pixel position (x, y) is higher, and the original pulse signal is output in a shorter time.
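As an illustration of the integrate-and-fire behaviour described above, the following Python sketch emits (x, y, t) events by accumulating per-pixel light intensity until a threshold is reached; the function name, array shapes, and parameter values are illustrative assumptions and not part of the patent.

```python
import numpy as np

def simulate_pulse_sensor(intensity, threshold=1.0, dt=1e-4):
    """Illustrative integrate-and-fire model of a pulse sensor.
    intensity: array of shape (steps, H, W), one light-intensity sample per pixel per step.
    Returns a list of original pulse signals (x, y, t)."""
    steps, height, width = intensity.shape
    accumulator = np.zeros((height, width))
    events = []
    for step in range(steps):
        accumulator += intensity[step] * dt            # accumulate light intensity
        fired = accumulator >= threshold               # pixels whose accumulation reached the threshold
        ys, xs = np.nonzero(fired)
        t = (step + 1) * dt                            # pulse transmission time
        events.extend((int(x), int(y), t) for x, y in zip(xs, ys))
        accumulator[fired] = 0.0                       # restart accumulation for fired pixels
    return events
```

Brighter pixels reach the threshold more often, so their events appear at a higher frequency, matching the behaviour described above.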
Step 102, preprocessing the original pulse signal to obtain a preprocessed target pulse signal.
For example, the original pulse signal may be subjected to preprocessing such as noise reduction and/or frequency adjustment, so as to obtain a preprocessed target pulse signal. Of course, the noise reduction and the frequency adjustment are only two examples of the preprocessing method, and the preprocessing method is not limited thereto, and the noise reduction and the frequency adjustment are described as examples in this embodiment.
In one possible implementation manner, the original pulse signal can be subjected to noise reduction to obtain a target pulse signal after noise reduction; or, the frequency of the original pulse signal can be adjusted to obtain a target pulse signal after the frequency adjustment; or, the original pulse signal can be firstly subjected to noise reduction to obtain a noise-reduced pulse signal, and then the noise-reduced pulse signal is subjected to frequency adjustment to obtain a frequency-adjusted target pulse signal; or, the original pulse signal may be subjected to frequency adjustment to obtain a pulse signal after frequency adjustment, and then the pulse signal after frequency adjustment is subjected to noise reduction to obtain a target pulse signal after noise reduction.
The preprocessing process of the original pulse signal will be described below in connection with the specific case.
In the first case, noise reduction is carried out on the original pulse signal, and a target pulse signal after noise reduction is obtained.
Illustratively, the pulse sensor is responsive to the light intensity, and outputs the original pulse signal when the light intensity is accumulated beyond a threshold value, i.e., the threshold value affects the frequency of the original pulse signal, however, due to process errors, differences in the threshold values at different positions, etc., noise may occur in the original pulse signal. On the other hand, photon poisson noise, device thermal noise, etc. can reduce the signal-to-noise ratio of the pulse signal, and can also cause noise to exist in the pulse signal. In summary, the noise reduction processing is performed on the original pulse signal, so that the accuracy of the pulse signal can be improved.
In one possible implementation, the noise reduction may include, but is not limited to, at least one of the following: time domain noise reduction, space domain noise reduction and 3D noise reduction. For example, the original pulse signal may be time-domain noise reduced; alternatively, spatial domain noise reduction can be performed on the original pulse signal; alternatively, the original pulse signal may be subjected to 3D noise reduction; alternatively, time domain noise reduction and spatial domain noise reduction can be performed on the original pulse signal; alternatively, time domain noise reduction and 3D noise reduction may be performed on the original pulse signal; alternatively, spatial domain noise reduction and 3D noise reduction may be performed on the original pulse signal; alternatively, the original pulse signal may be subjected to time domain noise reduction, spatial domain noise reduction, and 3D noise reduction.
Of course, the above-described procedure is just a few examples of the noise reduction method, and the noise reduction method is not limited thereto. The noise reduction process of the original pulse signal is described below in connection with several embodiments.
In the first mode, according to the pulse period of the target pixel position, time domain noise reduction is performed on the original pulse signal of the target pixel position, so as to obtain the target pulse signal of the target pixel position. In one possible implementation, the kth weighting period for the target pixel location may be determined based on the kth pulse period for the target pixel location, the (k-1) th pulse period for the target pixel location, and the (k-1) th weighting period for the target pixel location; k is a positive integer greater than 1. Then, a kth target pulse signal for the target pixel position is determined based on the kth weighted period for the target pixel position and the (k-1) th original pulse signal for the target pixel position. Illustratively, the pulse period is the interval between two adjacent raw pulse signals output by the pulse sensor.
The temporal noise reduction process is described below in connection with a specific application scenario, which is, of course, only an example of temporal noise reduction, and is not limited thereto, so long as the temporal noise reduction can be performed on the original pulse signal of the target pixel position according to the pulse period of the target pixel position, so as to obtain the target pulse signal.
Illustratively, take the pixel position (x, y) as the target pixel position. The kth original pulse signal of the target pixel position is (x, y, t_k), where t_k is the original pulse transmission time of the kth original pulse signal, and the (k-1)th original pulse signal of the target pixel position is (x, y, t_{k-1}), where t_{k-1} is the original pulse transmission time of the (k-1)th original pulse signal. On this basis, the kth pulse period of the target pixel position is T_k = t_k - t_{k-1}, and the (k-1)th pulse period of the target pixel position is T_{k-1} = t_{k-1} - t_{k-2}, where k is the serial number of the original pulse signal, t is the original pulse transmission time, and T is the pulse period.
Illustratively, the kth weighting period of the target pixel position may be determined according to the following formula:

T*_k = ω · T̂_{k-1} + (1 - ω) · T_k

In the above formula, T*_k represents the kth weighting period, T_k represents the kth pulse period, T̂_{k-1} represents the (k-1)th actual pulse emission period, and ω is a weight that may be configured empirically; the value of ω is not limited, for example, ω may be a value between 0 and 1, such as 14/16 or 15/16.
By way of example only, and not by way of limitation, T̂_{k-1} may be determined from the (k-1)th pulse period T_{k-1} of the target pixel position and the (k-1)th weighting period T*_{k-1}, for example:

T̂_{k-1} = T_{k-1}, if d_T > thre1;  T̂_{k-1} = T*_{k-1}, if d_T ≤ thre1

In the above formula, thre1 is a difference threshold, which can be configured empirically and is not limited; for example, thre1 may be related to the pulse period, such as thre1 = 0.15·T_k. Of course, this is only an example, and thre1 may be configured arbitrarily. d_T represents the difference between the pulse period and the weighting period; when calculating T̂_{k-1}, d_T is the absolute value of the difference between the (k-1)th pulse period and the (k-1)th weighting period, i.e. d_T = |T_{k-1} - T*_{k-1}|.
In summary, T̂_{k-1} can be obtained from T_{k-1} and T*_{k-1}, and T*_k can then be obtained from T̂_{k-1} and T_k. T_{k-1} and T*_{k-1} are values determined in the previous period, while T_k is the value determined in the current period.

Illustratively, the kth actual pulse emission period T̂_k can be determined from the kth pulse period T_k and the kth weighting period T*_k in the same way as the above formula. For example, when d_T is greater than thre1, T̂_k may be T_k; when d_T is not greater than thre1, T̂_k may be T*_k. Here, when calculating T̂_k, d_T may be the absolute value of the difference between the kth pulse period T_k and the kth weighting period T*_k, i.e. d_T = |T_k - T*_k|.

The kth original pulse transmission time t_k of the target pixel position can then be adjusted according to the kth actual pulse emission period T̂_k. For example, if T̂_k is T_k, the original pulse transmission time t_k is kept unchanged; if T̂_k is T*_k, the sum of the original pulse transmission time t_{k-1} and T*_k is taken as the adjusted pulse transmission time t_k.

In summary, the kth weighting period T*_k of the target pixel position can be determined based on the kth pulse period T_k of the target pixel position, the (k-1)th pulse period T_{k-1} of the target pixel position, and the (k-1)th weighting period T*_{k-1} of the target pixel position. Then, the kth target pulse signal (x, y, t_k) of the target pixel position is determined based on the kth weighting period T*_k of the target pixel position and the (k-1)th original pulse signal (x, y, t_{k-1}) of the target pixel position. For example, if T̂_k is T_k, t_k in the kth target pulse signal may be t_k in the kth original pulse signal; if T̂_k is T*_k, t_k in the kth target pulse signal may be the sum of t_{k-1} in the (k-1)th original pulse signal and T*_k.
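As a concrete illustration of the temporal noise reduction above, here is a minimal Python sketch for a single pixel position. The weighted-period recursion T*_k = ω·T̂_{k-1} + (1 - ω)·T_k and the defaults ω = 15/16 and thre1 = 0.15·T_k are assumptions based on the description, not the authoritative implementation.

```python
def temporal_denoise(times, omega=15 / 16, thre_ratio=0.15):
    """Illustrative temporal noise reduction for one pixel position.
    times: original pulse transmission times t_0, t_1, ... of that pixel.
    Returns the (possibly adjusted) target pulse transmission times."""
    out = list(times)
    if len(times) < 3:
        return out
    periods = [times[i] - times[i - 1] for i in range(1, len(times))]  # T_k = t_k - t_{k-1}
    t_hat = periods[0]                                   # previous actual pulse emission period
    for k in range(1, len(periods)):
        T_k = periods[k]
        t_star = omega * t_hat + (1 - omega) * T_k       # k-th weighting period T*_k (assumed recursion)
        thre1 = thre_ratio * T_k                         # difference threshold, e.g. 0.15 * T_k
        if abs(T_k - t_star) > thre1:
            t_hat = T_k                                  # large change: keep the measured period and t_k
        else:
            t_hat = t_star
            out[k + 1] = times[k] + t_star               # adjusted t_k = t_{k-1} + T*_k
    return out
```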
And secondly, performing spatial domain noise reduction on the original pulse signal of the target pixel position according to the pulse period of the adjacent pixel position of the target pixel position to obtain the target pulse signal of the target pixel position. In one possible implementation, the pulse period variance may be determined based on pulse periods of adjacent pixel locations of the target pixel location; if the pulse period variance is smaller than the variance threshold, determining a pulse period mean value according to the pulse period of the target pixel position and the pulse period of the adjacent pixel position; a target pulse signal for the target pixel location is determined based on the pulse period average and the original pulse signal for the target pixel location. Illustratively, the pulse period is the interval between two adjacent raw pulse signals output by the pulse sensor.
The spatial domain noise reduction process is described below with reference to a specific application scenario, however, this is only an example of spatial domain noise reduction, and is not limited thereto, so long as the spatial domain noise reduction can be performed on the original pulse signal according to the pulse periods of the adjacent pixel positions of the target pixel position, so as to obtain the target pulse signal.
Illustratively, assume that the pixel position (x, y) is taken as the target pixel position, and the kth original pulse signal of the target pixel position is (x, y, t_k), where t_k is the original pulse transmission time of the kth original pulse signal.
An M×M spatial region is selected with the target pixel position as the center; that is, the spatial region may include M×M pixel positions, among which the pixel position located at the center is the target pixel position and the pixel positions not located at the center are the adjacent pixel positions of the target pixel position. The M×M pixel positions are ordered, with the ordering result 1, 2, …, M^2, and the target pixel position is the (M^2+1)/2-th pixel position.
The pulse period of the 1st pixel position is denoted T_1, which may be taken as the difference between the original pulse transmission times of the two adjacent original pulse signals of the 1st pixel position, and so on; the pulse period of the (M^2+1)/2-th pixel position (i.e. the target pixel position) is denoted T_{(M^2+1)/2}, which may be taken as the difference between the original pulse transmission times of the two adjacent original pulse signals of the target pixel position, and so on. In summary, the pulse period T_n of the most recent original pulse signal at each pixel position can be obtained, n = 1, ..., M^2, where the most recent original pulse signal is the original pulse signal whose transmission time is closest to the original pulse transmission time t_k of the target pixel position.
For example, assume that the original pulse transmission time of the kth original pulse signal of the target pixel position is t_k and spatial noise reduction is required for this kth original pulse signal; then T_{(M^2+1)/2} is the difference between t_k and t_{k-1}. For the 1st pixel position, the original pulse signal whose transmission time is closest to t_k is determined; assuming that transmission time is t_a (which may be located before t_k), T_1 may be the difference between t_a and t_{a-1}. For the 2nd pixel position, the original pulse signal whose transmission time is closest to t_k is determined; assuming that transmission time is t_b (which may be located before t_k), T_2 may be the difference between t_b and t_{b-1}, and so on.
Based on the pulse periods of the adjacent pixel positions of the target pixel position, i.e. all pulse periods except T_{(M^2+1)/2}, the pulse period variance δ^2 can be calculated using the following formula, where μ' is the mean of the pulse periods of the adjacent pixel positions:

δ^2 = (1 / (M^2 - 1)) · Σ_{n ≠ (M^2+1)/2} (T_n - μ')^2
Based on the variance δ^2 and the pulse period T_{(M^2+1)/2} of the target pixel position, the actual pulse emission period T̂ can be obtained, where the mean of the pulse periods of all pixel positions may be used; the specific way can be seen in the following formula:

T̂ = (1 / M^2) · Σ_{n=1}^{M^2} T_n, if δ^2 < thre2;  T̂ = T_{(M^2+1)/2}, if δ^2 ≥ thre2

As can be seen from the above formula, when δ^2 is smaller than the variance threshold thre2, the actual pulse emission period T̂ is the mean of the pulse periods of all pixel positions; when δ^2 is not smaller than thre2, the actual pulse emission period T̂ is the pulse period T_{(M^2+1)/2} of the target pixel position. Here thre2 is a variance threshold, which can be configured empirically without limitation. For example, thre2 may increase as scene illumination decreases, or may be set manually.
Then, the original pulse transmission time t_k of the target pixel position is adjusted according to the actual pulse emission period T̂. For example, if T̂ is the mean of the pulse periods of all pixel positions, the sum of the original pulse transmission time t_{k-1} and T̂ is taken as the adjusted pulse transmission time t_k. If T̂ is the pulse period T_{(M^2+1)/2} of the target pixel position, the original pulse transmission time t_k is kept unchanged.
In summary, the pulse period variance δ^2 can be determined based on the pulse periods of the adjacent pixel positions of the target pixel position. If δ^2 is smaller than the variance threshold thre2, the pulse period mean is determined as the actual pulse emission period T̂ based on the pulse period of the target pixel position and the pulse period of each adjacent pixel position, and t_k in the kth target pulse signal of the target pixel position may be the sum of t_{k-1} in the (k-1)th original pulse signal and T̂. If δ^2 is not smaller than the variance threshold thre2, T̂ is the pulse period of the target pixel position, and t_k in the kth target pulse signal may be t_k in the kth original pulse signal.
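A minimal Python sketch of the spatial-domain noise reduction above for one target pixel follows; how the M×M window of most-recent pulse periods is gathered, and the function signature, are illustrative assumptions.

```python
import numpy as np

def spatial_denoise_pixel(periods_window, t_prev, t_k, thre2):
    """Illustrative spatial-domain noise reduction for one target pixel.
    periods_window: M x M array of the most recent pulse period at every pixel in the
    window centred on the target pixel (centre entry = target pixel's period).
    t_prev, t_k: previous and current original pulse transmission times of the target pixel.
    Returns the (possibly adjusted) pulse transmission time."""
    m = periods_window.shape[0]
    centre = (m * m + 1) // 2 - 1                        # 0-based index of the target pixel
    flat = periods_window.reshape(-1)
    neighbours = np.delete(flat, centre)                 # pulse periods of the adjacent pixel positions
    if np.var(neighbours) < thre2:                       # pulse period variance below the threshold
        t_hat = flat.mean()                              # actual period = mean over all M*M positions
        return t_prev + t_hat                            # adjusted t_k = t_{k-1} + mean period
    return t_k                                           # keep the original transmission time
```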
And thirdly, performing 3D noise reduction on the original pulse signal of the target pixel position according to the pulse period of the target pixel position and the pulse period of the adjacent pixel position of the target pixel position to obtain the target pulse signal of the target pixel position. In one possible embodiment, the pulse period difference value of the target pixel position may be determined according to the pulse period of the target pixel position, and the pulse period difference value of the target pixel position may be a difference value between two adjacent pulse periods of the target pixel position; the pulse period difference value of the adjacent pixel position is determined according to the pulse period of the adjacent pixel position, and the pulse period difference value of the adjacent pixel position may be a difference value between two adjacent pulse periods of the adjacent pixel position. Then, determining whether the target pixel position is an isolated pixel position according to the pulse period difference value of the target pixel position and the pulse period difference value of the adjacent pixel position; if so, determining a kth target pulse signal of the target pixel position based on the (k-1) th pulse period of the target pixel position and the (k-1) th original pulse signal of the target pixel position. Illustratively, the pulse period is the interval between two adjacent raw pulse signals output by the pulse sensor.
The following describes the 3D noise reduction process with reference to a specific application scenario, however, only an example of 3D noise reduction is provided herein, and the 3D noise reduction is not limited thereto, so long as the original pulse signal can be subjected to 3D noise reduction according to the pulse period of the target pixel position and the pulse period of the adjacent pixel position, so as to obtain the target pulse signal.
For example, referring to fig. 2, a spatio-temporal region of M×M×L is selected with the target pixel position as the center, where M is the spatial extent in pixel positions and L is the time-domain interval; for example, M is 3 and L is 10 ms, which is only an example, and the values of M and L are not limited. The spatio-temporal region may include M×M pixel positions, the pixel position located at the center being the target pixel position and the pixel positions not located at the center being the adjacent pixel positions of the target pixel position. The M×M pixel positions are ordered, with the ordering result 1, 2, …, M^2, and the target pixel position is the (M^2+1)/2-th pixel position.
Referring to FIG. 2, for each pixel position n, the spatio-temporal region may also include a plurality of pulse periods of that pixel position within the time-domain interval L; these pulse periods are denoted T_n^1, T_n^2, …, T_n^{K_n}, where K_n represents the total number of pulse periods of pixel position n within the time-domain interval L and n can take the values 1, 2, …, M^2. The last two pulse periods of pixel position n within the time-domain interval L are T_n^{K_n-1} and T_n^{K_n}.
As shown in fig. 2, when n is 1, T_1^k represents a pulse period of the 1st pixel position; for example, when k is 1, T_1^1 represents the 1st pulse period of the 1st pixel position within the time-domain interval L, and so on, and when k is K_n, T_1^{K_n} represents the K_n-th pulse period of the 1st pixel position within the time-domain interval L. When n is (M^2+1)/2, T_{(M^2+1)/2}^k represents a pulse period of the (M^2+1)/2-th pixel position (i.e., the target pixel position); for example, when k is 1, it represents the 1st pulse period of the target pixel position within the time-domain interval L, and so on, and when k is K_n, it represents the K_n-th pulse period of the target pixel position within the time-domain interval L. The other pixel positions are handled in the same way and are not described further.
For example, the pulse period may be the difference between the times of transmission of the adjacent two original pulses. For example, the 1 st pulse period in which the 1 st pixel position is within the time domain section L means: the difference between the 2 nd and 1 st original pulse transmission timings at the 1 st pixel position in the time domain section L. For another example, the 1 st pulse period of the target pixel position within the time domain interval L means: the difference between the 2 nd and 1 st original pulse transmission moments of the target pixel position in the time domain interval L.
In summary, for the target pixel position, the K_n pulse periods of the target pixel position within the time-domain interval L can be obtained. In addition, the K_n pulse periods of each adjacent pixel position of the target pixel position within the time-domain interval L can also be obtained.
For example, for pixel position n, the pulse period difference value D_abs of pixel position n (which may also be referred to as an absolute difference map of pulse periods) can be determined from the last two pulse periods T_n^{K_n-1} and T_n^{K_n} of pixel position n within the time-domain interval L. For example, D_abs of pixel position n may be determined by the following formula; of course, other formulas may also be used, and there is no limitation in this regard:

D_abs(n) = 1, if |T_n^{K_n} - T_n^{K_n-1}| > thre3;  D_abs(n) = 0, otherwise

When n is (M^2+1)/2, D_abs is the pulse period difference value of the target pixel position; when n is not (M^2+1)/2, D_abs is the pulse period difference value of an adjacent pixel position of the target pixel position.
In the above formula, thre3 may be a period difference threshold; thre3 may be configured empirically and is not limited. For example, thre3 may be 10% of the maximum pulse period. Referring to the above embodiment, for a pixel position n, the total number of pulse periods of pixel position n within the time-domain interval L is K_n; thus, the maximum pulse period may be the largest of these K_n pulse periods.
After the pulse period difference value for each pixel position n is obtained, it can be determined from these pulse period difference values whether the target pixel position is an isolated pixel position. For example, assuming that the pulse period difference value of the target pixel position is a first value (e.g., 0 or 1), the total number of pulse period difference values of adjacent pixel positions that are not the first value is counted. When the total number is greater than or equal to the preset threshold, then the target pixel position may be determined to be an isolated pixel position, otherwise, the target pixel position is determined not to be an isolated pixel position.
For example, if the pulse period difference value of the target pixel position is 0, the number of adjacent pixel positions whose pulse period difference value is 1 is counted; if the pulse period difference values of 6 adjacent pixel positions are 1, the total number is 6. If the pulse period difference value of the target pixel position is 1, the number of adjacent pixel positions whose pulse period difference value is 0 is counted; if the pulse period difference values of 3 adjacent pixel positions are 0, the total number is 3.
For example, the preset threshold may be configured empirically, without limitation, and may be, for example, 1/2, 2/3, or all of the total number of adjacent pixel positions. Referring to the above embodiment, when M is 3, the total number of adjacent pixel positions may be 8, and the preset threshold is taken as 8 as an example.
Based on this, when the pulse period difference values of all pixel positions satisfy D_abs((M^2+1)/2) = 0 and D_abs(n) = 1 for every adjacent pixel position n, the target pixel position is considered to be an isolated pixel position; that is, the pulse period difference value of the target pixel position is 0 and the pulse period difference values of all adjacent pixel positions of the target pixel position are 1.

For another example, when the pulse period difference values of all pixel positions satisfy D_abs((M^2+1)/2) = 1 and D_abs(n) = 0 for every adjacent pixel position n, the target pixel position is also considered to be an isolated pixel position; that is, the pulse period difference value of the target pixel position is 1 and the pulse period difference values of all adjacent pixel positions of the target pixel position are 0.
Illustratively, when the target pixel position is an isolated pixel position, the actual pulse emission period T̂ may be the (K_n-1)-th pulse period of the target pixel position within the time-domain interval L, i.e. T^{K_n-1} of the target pixel position. The original pulse transmission time t_k of the target pixel position is then adjusted according to T̂, for example, the sum of t_{k-1} and T̂ is taken as the adjusted pulse transmission time t_k. When the target pixel position is not an isolated pixel position, T̂ may be the K_n-th pulse period of the target pixel position within the time-domain interval L, i.e. the original pulse transmission time t_k is kept unchanged.
In summary, the pulse period difference value of the target pixel position can be determined from the last two pulse periods of the target pixel position within the time-domain interval L, and the pulse period difference value of each adjacent pixel position can be determined from the last two pulse periods of that adjacent pixel position within the time-domain interval L. Whether the target pixel position is an isolated pixel position is then determined from the pulse period difference value of the target pixel position and the pulse period difference values of the adjacent pixel positions. If yes, the actual pulse emission period T̂ is determined from the (K_n-1)-th (i.e. (k-1)th) pulse period of the target pixel position within the time-domain interval L; based on this, t_k in the kth target pulse signal of the target pixel position may be the sum of t_{k-1} in the (k-1)th original pulse signal and T̂. If not, the actual pulse emission period T̂ is determined from the K_n-th (i.e. kth) pulse period of the target pixel position within the time-domain interval L; based on this, t_k in the kth target pulse signal may be t_k in the kth original pulse signal, i.e. the target pulse signal is identical to the original pulse signal.
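The isolated-pixel test of the 3D noise reduction above can be sketched as follows; the binary difference map and the neighbour count follow the description, while the function signature and the default count threshold (8, i.e. all neighbours when M is 3) are illustrative assumptions.

```python
import numpy as np

def denoise_3d_pixel(last_two_periods, t_prev, t_k, thre3, count_thresh=8):
    """Illustrative 3D noise reduction for one target pixel.
    last_two_periods: M x M x 2 array holding, for every pixel of the window, its last
    two pulse periods within the time-domain interval L (centre entry = target pixel).
    Returns the (possibly adjusted) pulse transmission time t_k."""
    m = last_two_periods.shape[0]
    diff = np.abs(last_two_periods[..., 1] - last_two_periods[..., 0])
    d_abs = (diff > thre3).astype(int)                   # binary pulse period difference map D_abs
    centre = (m * m + 1) // 2 - 1                        # 0-based index of the target pixel
    flat = d_abs.reshape(-1)
    target_val = flat[centre]
    neighbours = np.delete(flat, centre)
    # isolated pixel: enough adjacent positions carry the opposite difference value
    if np.count_nonzero(neighbours != target_val) >= count_thresh:
        t_hat = last_two_periods.reshape(-1, 2)[centre, 0]   # (K_n - 1)-th pulse period of the target
        return t_prev + t_hat                                # adjusted t_k = t_{k-1} + that period
    return t_k                                               # not isolated: keep t_k unchanged
```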
And fourthly, firstly performing time domain noise reduction on the original pulse signal of the target pixel position according to the pulse period of the target pixel position to obtain a pulse signal after time domain noise reduction, wherein the implementation mode is referred to as a first mode. And then, according to the pulse period of the adjacent pixel position of the target pixel position, performing spatial domain noise reduction on the pulse signal subjected to time domain noise reduction to obtain a target pulse signal of the target pixel position, wherein the implementation mode II is referred to.
And fifthly, performing time domain noise reduction on the original pulse signal of the target pixel position according to the pulse period of the target pixel position to obtain a pulse signal after time domain noise reduction. And then, according to the pulse period of the target pixel position and the pulse period of the adjacent pixel position of the target pixel position, performing 3D noise reduction on the pulse signal subjected to time-domain noise reduction to obtain a target pulse signal of the target pixel position, wherein the implementation mode is referred to as a third mode.
And sixthly, performing spatial domain noise reduction on the original pulse signal of the target pixel position according to the pulse period of the adjacent pixel position of the target pixel position to obtain a pulse signal after spatial domain noise reduction. And then, according to the pulse period of the target pixel position and the pulse period of the adjacent pixel position of the target pixel position, performing 3D noise reduction on the pulse signal after space domain noise reduction to obtain a target pulse signal of the target pixel position.
And in a seventh mode, time domain noise reduction can be performed on the original pulse signal of the target pixel position according to the pulse period of the target pixel position, so as to obtain a pulse signal after time domain noise reduction. Then, the spatial domain noise reduction can be performed on the pulse signal after the time domain noise reduction according to the pulse period of the adjacent pixel position of the target pixel position, so as to obtain the pulse signal after the spatial domain noise reduction. Then, the pulse signal after space domain noise reduction can be subjected to 3D noise reduction processing according to the pulse period of the target pixel position and the pulse period of the adjacent pixel position of the target pixel position, so as to obtain the target pulse signal of the target pixel position.
In the above embodiment, the noise reduction is described on the original pulse signal, and in practical application, the frequency adjustment may also be performed on the original pulse signal, and the frequency adjustment process will be described below.
In the second case, frequency adjustment is performed on the original pulse signal of the target pixel position, and the pulse signal after frequency adjustment is denoted as the target pulse signal. For example, the original pulse frequency of the target pixel position is determined according to the original pulse period of the target pixel position (the original pulse period is the interval between two adjacent original pulse signals output by the pulse sensor), and the original pulse frequency is converted to obtain the target pulse frequency. A target pulse period is then determined according to the target pulse frequency, and the original pulse signal of the target pixel position is adjusted according to the target pulse period to obtain an adjusted pulse signal, i.e. the target pulse signal of the target pixel position.
For example, take the pixel position (x, y) as the target pixel position; the kth original pulse signal of the target pixel position is (x, y, t_k), where t_k is the original pulse transmission time of the kth original pulse signal, and the (k-1)th original pulse signal of the target pixel position is (x, y, t_{k-1}), where t_{k-1} is the original pulse transmission time of the (k-1)th original pulse signal. On this basis, when frequency adjustment is performed on the kth original pulse signal, the original pulse period is T_k = t_k - t_{k-1}. The reciprocal of T_k is taken as the original pulse frequency of the target pixel position, denoted f_k. f_k is converted to obtain the target pulse frequency, denoted f_k'. The reciprocal of f_k' is taken as the target pulse period, denoted T_k'. The original pulse transmission time t_k in the original pulse signal is adjusted according to T_k'; for example, the sum of t_{k-1} and T_k' is taken as the adjusted pulse transmission time, and the kth target pulse signal of the target pixel position is (x, y, t_k'), where t_k' is the adjusted pulse transmission time.
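A minimal Python sketch of this per-pulse frequency adjustment is shown below; the convert callback stands in for whichever conversion is chosen (the mapping relationship and/or preset gain value described next), and the function name is an assumption.

```python
def adjust_frequency(t_prev, t_k, convert):
    """Illustrative frequency adjustment of one original pulse signal (x, y, t_k).
    t_prev, t_k: previous and current original pulse transmission times.
    convert: maps the original pulse frequency to the target pulse frequency."""
    T_k = t_k - t_prev          # original pulse period T_k = t_k - t_{k-1}
    f_k = 1.0 / T_k             # original pulse frequency
    f_out = convert(f_k)        # target pulse frequency f_k'
    T_out = 1.0 / f_out         # target pulse period T_k'
    return t_prev + T_out       # adjusted pulse transmission time t_k'
```

For example, adjust_frequency(t_prev, t_k, lambda f: 2.0 * f) doubles the pulse frequency, i.e. halves the pulse period.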
In one possible implementation, the original pulse frequency is converted to a target pulse frequency, which may include, but is not limited to: converting the original pulse frequency based on a mapping relation to obtain a target pulse frequency, wherein the mapping relation is a conversion relation between the original pulse frequency and the target pulse frequency which are configured in advance; or converting the original pulse frequency based on a preset gain value to obtain a target pulse frequency; or converting the original pulse frequency based on the mapping relation to obtain a converted pulse frequency, and then converting the converted pulse frequency based on a preset gain value to obtain a target pulse frequency. Of course, the above are just a few examples of converting the original pulse frequency, and the conversion method is not limited.
For example, the pulse frequency of the pulse signal and the light intensity are generally in a linear or logarithmic relationship, while the pulse frequency used for imaging does not necessarily need to follow the original light intensity-frequency relationship; therefore, a mapping relationship between light intensity and frequency may be preconfigured, and the original pulse frequency converted based on that mapping relationship.
Illustratively, the information of the pulse signal is stored in the pulse frequency, the time domain information can contain more light intensity information, and a higher image dynamic range can be obtained through frequency adjustment. On the other hand, the frequencies of the different channels may also be different due to the transmittance of the filter, and frequency adjustment is also required, based on which the original pulse frequency may be converted by a preset gain value (i.e., a gain value that is configured in advance).
The frequency conversion process of the original pulse frequency is described below with reference to the embodiment.
Mode 1: the original pulse frequency is converted based on a mapping relationship to obtain the target pulse frequency. The mapping relationship may be configured empirically, such as a light intensity-frequency relationship, without limitation. See the following formula for an example of the conversion process, where map() is the mapping relationship, which may be an antilog function, a gamma function, or the like, and is not limited; f_in represents the original pulse frequency and f_out represents the target pulse frequency. Clearly, after the original pulse frequency f_in is converted based on the mapping relationship map(), the target pulse frequency f_out is obtained:

f_out = map(f_in)
By way of example, the original pulse frequency is converted through the mapping relationship, so that the output frequency (i.e. the target pulse frequency) can better meet the requirement of subsequent application, for example, the frequency nonlinear mapping can improve the dynamic range of the reconstructed image, and finally, the original pulse sending time is adjusted according to the mapped target pulse frequency.
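As one hypothetical instance of such a non-linear mapping, a gamma-style curve could be used to compress bright regions and lift dark ones; the parameters f_max and gamma below are illustrative and are not specified by the patent.

```python
def gamma_map(f_in, f_max=1.0e5, gamma=0.5):
    """Illustrative non-linear frequency mapping: compresses high frequencies
    and expands low ones, which tends to raise the dynamic range of the
    reconstructed image."""
    return f_max * (f_in / f_max) ** gamma
```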
And 2, converting the original pulse frequency based on a preset gain value to obtain a target pulse frequency. For example, the product of the preset gain value and the original pulse frequency may be determined as the target pulse frequency. Illustratively, the preset gain value may be an empirically configured gain value, such as a white balance gain value, or the like.
For example, taking the Bayer-format color array "RGGB" as an example, the following formula may be adopted, and the original pulse frequency is amplified according to the gain of the corresponding color channel to obtain the target pulse frequency:

f_R' = gain_R · f_R,  f_G' = gain_G · f_G,  f_B' = gain_B · f_B

Here f_R, f_G, f_B are the original pulse frequencies corresponding to the different color channels, and gain_R, gain_G, gain_B are the white balance gains of the different channels, i.e. the preset gain values of the different channels, which may be obtained through manual configuration or other algorithms and are not limited. f_R', f_G', f_B' are the target pulse frequencies corresponding to the different color channels, i.e. the adjusted output pulse frequencies. Obviously, since the pulse frequency is the reciprocal of the pulse period, frequency amplification can also be expressed as period reduction.
In the above embodiment, the description is given of the related content of frequency adjustment of the original pulse signal.
And thirdly, noise reduction can be performed on the original pulse signal at the target pixel position, frequency adjustment is performed on the pulse signal after noise reduction, and the pulse signal after frequency adjustment can be recorded as a target pulse signal.
And fourthly, the original pulse signal at the target pixel position can be subjected to frequency adjustment, the pulse signal after frequency adjustment is subjected to noise reduction, and the pulse signal after noise reduction can be recorded as a target pulse signal.
Through the above processing, a target pulse signal for the target pixel position can be obtained.
And 103, reconstructing an image based on the target pulse signal to obtain a two-dimensional image.
For example, a target reconstruction time of the two-dimensional image may be determined, and for each pixel position of the two-dimensional image, a pixel value of the pixel position at the target reconstruction time may be determined from the target pulse signal of the pixel position. Then, a two-dimensional image at the target reconstruction time is generated based on the pixel values of all the pixel positions of the two-dimensional image, for example, the pixel values of all the pixel positions are combined together to obtain the two-dimensional image.
In one possible implementation, a two-dimensional image may be obtained using the following steps:
step 1031, determining a target reconstruction time of the two-dimensional image.
The target reconstruction time is the time for which the two-dimensional image is reconstructed based on the target pulse signal of each pixel position. The target reconstruction time is not limited and may be configured arbitrarily; for example, the target reconstruction times may be a0, a1, a2, and so on in sequence, and the values of a0, a1, a2 are not limited.
For example, the frequency of the pulse signal may be very high, e.g., tens of thousands of pulse signals may be generated per second, while the two-dimensional image does not require such a high frame rate, and thus, the reconstructed image frame rate of the two-dimensional image may be determined first, which may indicate how long to reconstruct one frame of the two-dimensional image, and then the target reconstruction timing of the two-dimensional image is determined from the reconstructed image frame rate. For example, the target reconstruction time of the first two-dimensional image is a0, and the frame rate of the reconstructed image indicates that one frame of image is reconstructed every x time periods, then the target reconstruction time of the second two-dimensional image may be a1, a1 is the sum of a0 and x, the target reconstruction time of the third two-dimensional image may be a2, a2 is the sum of a1 and x, and so on.
Regarding the reconstructed image frame rate, the reconstructed image frame rate may be empirically configured, or may be determined by using a certain algorithm, and the reconstructed image frame rate is not limited as long as the reconstructed image frame rate can be obtained. In one possible implementation, the reconstructed image frame rate may be determined as follows:
for each pixel position of the two-dimensional image, determining a pulse period variance of the pixel position according to the pulse period of the pixel position and the pulse periods of adjacent pixel positions of the pixel position; determining a variance variation value according to the pulse period variance of each pixel position; and determining the reconstructed image frame rate according to the variance change value.
For example, for each pixel position of the two-dimensional image, all pulse periods of the pixel position within the time length L may be determined, and the manner of determining the pulse periods is referred to in step 102, which is not described herein. For each pixel location, the last pulse period of that pixel location within the length of time L may be selected and, in a subsequent process, processed using the last pulse period of the pixel location within the length of time L.
For each pixel position (e.g., pixel position a) of the two-dimensional image, M×M pixel positions may be selected with pixel position a as the center, the pixel position located at the center being pixel position a and the pixel positions not located at the center being adjacent pixel positions of pixel position a. Then, based on the pulse period of pixel position a and the pulse periods of all adjacent pixel positions of pixel position a, the mean μ and the variance δ^2 of the pulse periods are determined (the determination method is not repeated), and the variance δ^2 is taken as the pulse period variance of pixel position a. Obviously, for each pixel position of the two-dimensional image, the pulse period variance of that pixel position can be obtained.
The pulse period variances of all pixel positions are sorted, for example in descending order, and the top-ranked R pulse period variances are selected. The value of R can be configured empirically and is not limited; for example, R may be one ten-thousandth, one thousandth, or one hundredth of the total number of pulse period variances.
The selected R pulse period variances are averaged to obtain the variance change value of the current scene, i.e. the variance change value is the mean of the R pulse period variances. Based on this, the reconstructed image frame rate may be determined from the variance change value. For example, when the variance change value is larger than a preset threshold (configured empirically), the variance change value is large, meaning there is a fast-moving object in the scene, so a higher first reconstructed image frame rate may be used; when the variance change value is not larger than the preset threshold, the variance change value is small, meaning there is no fast-moving object in the scene, i.e. objects move slowly, so a lower second reconstructed image frame rate may be used. Obviously, the first reconstructed image frame rate may be greater than the second reconstructed image frame rate.
For another example, a mapping relationship between the variance change value (or the variance change value interval) and the reconstructed image frame rate may be preconfigured, and in the mapping relationship, the larger the variance change value is, the larger the reconstructed image frame rate corresponding to the variance change value is, and the mapping relationship is not limited. Based on the above, after the variance change value is obtained, the mapping relation can be directly queried, and the reconstructed image frame rate corresponding to the variance change value is obtained. Of course, the above is just two examples, and is not limited thereto, as long as the reconstructed image frame rate can be obtained.
In summary, the variance variation value may be determined according to the pulse period variance of each pixel position, the reconstructed image frame rate may be determined according to the variance variation value, and then the target reconstruction time may be determined according to the reconstructed image frame rate.
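A minimal Python sketch of this variance-driven frame-rate selection is shown below; the retained ratio R, the variance threshold, and the two candidate frame rates are assumed example values.

```python
import numpy as np

def reconstruction_frame_rate(period_variance_map, ratio=0.001,
                              var_thresh=1.0e-6, fps_fast=1000.0, fps_slow=100.0):
    """Illustrative selection of the reconstructed image frame rate.
    period_variance_map: per-pixel pulse period variances of the two-dimensional image."""
    variances = np.sort(period_variance_map.reshape(-1))[::-1]   # descending order
    top_r = max(1, int(len(variances) * ratio))                  # top-ranked R variances
    change = variances[:top_r].mean()                            # variance change value of the scene
    return fps_fast if change > var_thresh else fps_slow         # fast motion -> higher frame rate
```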
Step 1032 selects a first target pulse signal from the plurality of target pulse signals that is located before the target reconstruction time and a second target pulse signal from the plurality of target pulse signals that is located after the target reconstruction time. For example, the pulse transmission time of the first target pulse signal may coincide with the target reconstruction time or be the first pulse transmission time preceding the target reconstruction time, and for convenience of distinction, the pulse transmission time of the first target pulse signal is referred to as the first pulse transmission time. The pulse transmission time of the second target pulse signal may coincide with the target reconstruction time or be the first pulse transmission time located after the target reconstruction time, and for convenience of distinction, the pulse transmission time of the second target pulse signal is denoted as the second pulse transmission time.
For each pixel position of the two-dimensional image, a plurality of pulse transmission timings can be obtained based on a plurality of target pulse signals for the pixel position, for example. Referring to fig. 3, which is a schematic diagram of a plurality of pulse transmission timings for a pixel position, 5 pulse transmission timings, namely, timing 2, timing 4, timing 6, timing 11, and timing 16 are shown in fig. 3. Of course, these pulse transmission timings are merely illustrative.
Illustratively, in step 1031, one or more of the moments between moment 3 and moment 17 may be determined as target reconstruction moments. Assuming that the time 3 is the target reconstruction time, the pulse transmission time of the first target pulse signal is time 2, and the pulse transmission time of the second target pulse signal is time 4. Assuming that the time 9 is the target reconstruction time, the pulse transmission time of the first target pulse signal is time 6, and the pulse transmission time of the second target pulse signal is time 11. Assuming that the time 10 is taken as the target reconstruction time, the pulse transmission time of the first target pulse signal is time 6, the pulse transmission time of the second target pulse signal is time 11, and so on, and no further description is given for other target reconstruction times.
Step 1033, determining a time interval between the first target pulse signal and the second target pulse signal (i.e., a time interval between the pulse transmission time of the first target pulse signal and the pulse transmission time of the second target pulse signal), and determining the two-dimensional image at the target reconstruction time according to the time interval, the bit width of the two-dimensional image, and the pre-configured normalization coefficient. Of course, the above manner is merely an example, and other manners of determining the two-dimensional image of the target reconstruction time may be used, and the determination manner of the two-dimensional image is not limited.
For example, the pixel value of each pixel position of the two-dimensional image may be determined using the following formula:

I_ij = floor( (2^N - 1) / (α · Δt) )

In the above formula, I_ij represents the pixel value of the pixel position, floor() is a downward rounding function, N is the bit width of the two-dimensional image, and α is a normalization coefficient related to scene illumination, which can be configured empirically and can also be the reciprocal of the selected minimum time interval (i.e. the minimum time interval supported by the device). Δt is the time interval between the first target pulse signal and the second target pulse signal, e.g. the difference between the pulse transmission time of the first target pulse signal and the pulse transmission time of the second target pulse signal.
After obtaining the pixel value of each pixel position of the two-dimensional image, that is, the pixel value of each pixel position at the target reconstruction time, the pixel values may be combined together to obtain the two-dimensional image at the target reconstruction time.
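A minimal sketch of steps 1031 to 1033 for one reconstruction time is given below; the function and variable names are illustrative, and the pixel-value mapping follows the assumed formula above rather than a confirmed expression from the original.

```python
# Hypothetical reconstruction sketch: build the two-dimensional image at one
# target reconstruction time t_rec from per-pixel pulse transmission times.
import bisect
import numpy as np

def reconstruct_image(pulse_times, t_rec, bit_width=8, alpha=1.0):
    """pulse_times[i][j]: sorted pulse transmission times of pixel (i, j)."""
    h, w = len(pulse_times), len(pulse_times[0])
    max_val = (1 << bit_width) - 1
    img = np.zeros((h, w), dtype=np.uint16)
    for i in range(h):
        for j in range(w):
            times = pulse_times[i][j]
            k = bisect.bisect_left(times, t_rec)   # first pulse at or after t_rec
            if k == 0 or k == len(times):
                continue                           # no pulse on one side: leave 0
            t_first, t_second = times[k - 1], times[k]
            dt = t_second - t_first                # interval between the two pulses
            # assumed mapping: shorter interval -> brighter pixel (see formula above)
            img[i, j] = min(max_val, int(max_val // (alpha * dt)))
    return img
```

For the example of fig. 3 with pulse times [2, 4, 6, 11, 16] and t_rec = 9, the selected pulses are at times 6 and 11, so dt = 5.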
Step 104, performing post-processing on the two-dimensional image to obtain a target image.
For example, after obtaining the two-dimensional image at the target reconstruction time, the two-dimensional image may be post-processed to obtain the target image at the target reconstruction time, where the post-processing manner may include, but is not limited to, at least one of the following: image interpolation processing, color correction processing, sharpening enhancement processing.
For example, performing image interpolation processing on a two-dimensional image to obtain a target image; or performing color correction processing on the two-dimensional image to obtain a target image; or, carrying out sharpening enhancement treatment on the two-dimensional image to obtain a target image; or, performing image interpolation processing on the two-dimensional image, and performing color correction processing on the image subjected to the interpolation processing to obtain a target image; or, performing image interpolation processing on the two-dimensional image, and performing sharpening enhancement processing on the image subjected to the interpolation processing to obtain a target image; or performing color correction processing on the two-dimensional image, and performing sharpening enhancement processing on the image subjected to the color correction processing to obtain a target image; or, performing image interpolation processing on the two-dimensional image, performing color correction processing on the image subjected to the interpolation processing, and performing sharpening enhancement processing on the image subjected to the color correction processing to obtain a target image.
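As a minimal sketch, the optional steps above can be chained by supplying them as an ordered list of callables; the combinations enumerated in the previous paragraph then correspond to different step lists (names here are illustrative):

```python
# Hypothetical post-processing chain: applies an ordered subset of image
# interpolation, color correction and sharpening to the two-dimensional image.
def post_process(image, steps):
    """steps: list of callables, e.g. [demosaic], or [demosaic, color_correct, sharpen]."""
    out = image
    for step in steps:
        out = step(out)   # each step takes an image and returns the processed image
    return out
```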
Illustratively, the image interpolation processing of the image may include, but is not limited to, the following: the image is interpolated into a color image according to the color channel format of the sensor, and the interpolation mode is not limited. The color image may be, for example, an RGB image, and this image interpolation may also be referred to as demosaicing.
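As a sketch only: with OpenCV, and assuming a BGGR Bayer layout (the actual pattern depends on the sensor's color channel format), the interpolation could be written as follows.

```python
# Hypothetical demosaicing sketch; cv2.COLOR_BayerBG2BGR assumes a BGGR layout
# and must be changed to match the sensor's real color filter arrangement.
import cv2
import numpy as np

def demosaic(raw: np.ndarray) -> np.ndarray:
    """Interpolate a single-channel Bayer image (uint8/uint16) into a 3-channel color image."""
    return cv2.cvtColor(raw, cv2.COLOR_BayerBG2BGR)
```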
Illustratively, the color correction processing of the image may include, but is not limited to, the following: the image color is corrected with a 3×3 color correction matrix, and the color-corrected pixel values are obtained by multiplying this matrix with the original pixel values, where r_out, g_out, b_out respectively represent the color-corrected values of the three channels of a pixel, and r_in, g_in, b_in respectively represent the original values of the three channels of the same pixel (i.e., the pixel values before color correction).
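A sketch of the standard matrix form implied by this description, with generic entries c_11 … c_33 standing in for the configured coefficients:

```latex
\begin{bmatrix} r_{\mathrm{out}} \\ g_{\mathrm{out}} \\ b_{\mathrm{out}} \end{bmatrix}
=
\begin{bmatrix}
c_{11} & c_{12} & c_{13} \\
c_{21} & c_{22} & c_{23} \\
c_{31} & c_{32} & c_{33}
\end{bmatrix}
\begin{bmatrix} r_{\mathrm{in}} \\ g_{\mathrm{in}} \\ b_{\mathrm{in}} \end{bmatrix}
```

Each corrected channel is thus a weighted combination of the three original channels of the same pixel.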
Illustratively, the sharpening enhancement processing of the image may include, but is not limited to, the following. One embodiment of image sharpening is USM (Unsharp Mask) sharpening: for the image to be sharpened, each of the three channels I_c, c ∈ {R, G, B}, is subjected to Gaussian high-pass filtering to obtain the high-frequency part I_h_c of that channel, where the kernel is a two-dimensional Gaussian kernel, e.g., with radius 3 and standard deviation 1. The high-frequency part I_h_c is then thresholded according to a preset threshold thre4 (which can be configured empirically and is not limited; e.g., the threshold can be set to 10 for 8-bit images): values of I_h_c below thre4 are set to zero, yielding the thresholded high frequency I_h_c'.
The thresholded high frequency is multiplied by a gain A and superposed onto the image to be sharpened, and the result is taken as the sharpened image, as shown in the following formula: I_enh_c = I_c + A·I_h_c'. The gain A may be configured empirically and is not limited.
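A minimal sketch of this sharpening step, assuming SciPy is available and an 8-bit image; implementing the Gaussian high-pass as the channel minus its Gaussian-blurred version, thresholding on the absolute high-frequency value, and the final clipping are assumptions of this sketch:

```python
# Hypothetical USM sharpening sketch: per-channel Gaussian low-pass, subtract to
# get the high-frequency part, zero out small responses, add back with gain A.
import numpy as np
from scipy.ndimage import gaussian_filter

def usm_sharpen(image, thre4=10.0, gain_a=1.5):
    """image: H x W x 3 array (channels R, G, B); returns the sharpened image."""
    img = image.astype(np.float32)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        low = gaussian_filter(img[..., c], sigma=1.0, truncate=3.0)  # radius ~3, std 1
        high = img[..., c] - low                   # high-frequency part I_h_c
        high[np.abs(high) < thre4] = 0.0           # zero out responses below thre4
        out[..., c] = img[..., c] + gain_a * high  # I_enh_c = I_c + A * I_h_c'
    return np.clip(out, 0, 255).astype(np.uint8)
```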
In one possible implementation, after the original pulse signal is preprocessed to obtain the preprocessed target pulse signal, the target pulse signal may be further input to a pulse neural network (spiking neural network, SNN), so that the pulse neural network performs target tracking and/or target recognition according to the target pulse signal; the specific implementation of the target tracking and/or target recognition is not limited herein.
As can be seen from the above technical solutions, in the embodiments of the present application, an original pulse signal output by a pulse sensor can be obtained, the original pulse signal is preprocessed to obtain a preprocessed target pulse signal, and a target image is obtained based on the target pulse signal, so that the pulse signal can be reconstructed into the target image. In this manner, signal processing is performed on the pulse signal, and both a conventional image signal (i.e., the target image) and a pulse signal (i.e., the target pulse signal) are output, forming a complete pulse processing framework: the conventional image signal serves human eye observation and conventional image applications, while the pulse signal serves pulse applications such as a pulse neural network.
The above technical solution of the embodiments of the present application is described below with reference to specific application scenarios. Referring to fig. 4A, an application scenario of an embodiment of the present application may include a pulse sensing unit, a pulse processing unit, an image reconstruction unit, an image output unit, and a pulse output unit.
The pulse sensing unit is used for imaging a scene and outputting scene information in the form of pulse signals, for example, the pulse sensing unit can be a pulse sensor and can output a pulse sequence of pixel positions, and the pulse sequence comprises a plurality of original pulse signals. The pulse sensing unit outputs an original pulse signal to the pulse processing unit.
The pulse processing unit is used for preprocessing the original pulse signal to obtain a target pulse signal. For example, the pulse processing unit may include a pulse noise reduction unit, which is configured to perform noise reduction on an original pulse signal, for example, time domain noise reduction, space domain noise reduction, 3D noise reduction, etc., and the detailed description is omitted herein. The pulse processing unit may further include a frequency adjustment unit, where the frequency adjustment unit is configured to perform frequency adjustment on the original pulse signal, such as adjustment of a light intensity-frequency relationship, adjustment of a white balance, etc., and the detailed description is omitted herein. The target pulse signals processed by the pulse processing unit are respectively output to the image reconstruction unit and the pulse output unit.
The pulse output unit is used for outputting a target pulse signal, the output target pulse signal can be used for applications requiring the pulse signal as input, such as a pulse neural network, and the output process is not limited.
The image reconstruction unit is used for reconstructing a target image according to the target pulse signal and outputting an image signal. Referring to fig. 4B, the image reconstruction unit may include, but is not limited to, a light intensity conversion unit and an image post-processing unit. The light intensity conversion unit is used for converting the target pulse signal into a light intensity signal, namely generating a two-dimensional image. The image post-processing unit is configured to perform conventional image processing on the two-dimensional image to obtain a target image, as shown in fig. 4C, where the conventional image processing includes, but is not limited to, image interpolation, color correction, sharpening enhancement, and the like.
The image signal (i.e., the target image) obtained by the image reconstruction unit is output to the image output unit, which can output the image signal after receiving the image signal.
Based on the same application concept as the above method, an apparatus for processing a pulse signal is further provided in an embodiment of the present application, as shown in fig. 5, which is a structural diagram of the apparatus for processing a pulse signal, where the apparatus includes:
An acquisition module 51, configured to acquire an original pulse signal output by the pulse sensor;
the processing module 52 is configured to perform preprocessing on the original pulse signal to obtain a preprocessed target pulse signal;
a reconstruction module 53, configured to perform image reconstruction based on the target pulse signal, so as to obtain a two-dimensional image;
and the generating module 54 is used for performing post-processing on the two-dimensional image to obtain a target image.
The processing module 52 performs preprocessing on the original pulse signal, and is specifically configured to: perform noise reduction on the original pulse signal to obtain a noise-reduced target pulse signal; or perform frequency adjustment on the original pulse signal to obtain a frequency-adjusted target pulse signal; or,
perform noise reduction on the original pulse signal to obtain a noise-reduced pulse signal, and perform frequency adjustment on the noise-reduced pulse signal to obtain a frequency-adjusted target pulse signal; or,
perform frequency adjustment on the original pulse signal to obtain a frequency-adjusted pulse signal, and perform noise reduction on the frequency-adjusted pulse signal to obtain a noise-reduced target pulse signal.
The processing module 52 performs noise reduction on the original pulse signal, and is specifically configured to: perform time domain noise reduction on the original pulse signal of a target pixel position according to the pulse period of the target pixel position to obtain a target pulse signal of the target pixel position; or,
perform spatial domain noise reduction on the original pulse signal of the target pixel position according to the pulse periods of adjacent pixel positions of the target pixel position to obtain a target pulse signal of the target pixel position; or,
perform 3D noise reduction on the original pulse signal of the target pixel position according to the pulse period of the target pixel position and the pulse periods of the adjacent pixel positions of the target pixel position to obtain a target pulse signal of the target pixel position;
the pulse period is the interval between two adjacent original pulse signals output by the pulse sensor.
The processing module 52 performs temporal noise reduction on the original pulse signal of the target pixel position according to the pulse period of the target pixel position, and is specifically configured to:
determining a kth weighting period of the target pixel location based on the kth pulse period of the target pixel location, the (k-1) th pulse period of the target pixel location, and the (k-1) th weighting period of the target pixel location; wherein k is a positive integer greater than 1;
a kth target pulse signal for the target pixel location is determined based on the kth weighted period for the target pixel location and the (k-1) th original pulse signal for the target pixel location.
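A minimal sketch of this temporal noise reduction for one pixel position; the text only names the quantities the k-th weighted period depends on, so the fixed weights w1, w2, w3 below are illustrative assumptions:

```python
# Hypothetical temporal-noise-reduction sketch for one pixel position.
def temporal_denoise(times, w1=0.5, w2=0.25, w3=0.25):
    """times: sorted original pulse transmission times; returns smoothed pulse times."""
    if len(times) < 3:
        return list(times)
    periods = [b - a for a, b in zip(times, times[1:])]   # pulse periods
    out = list(times[:2])
    weighted = periods[0]                                 # initial weighted period
    for k in range(1, len(periods)):
        # k-th weighted period from the k-th period, the (k-1)-th period
        # and the (k-1)-th weighted period (weights are assumptions)
        weighted = w1 * periods[k] + w2 * periods[k - 1] + w3 * weighted
        # k-th target pulse: previous original pulse time plus the weighted period
        out.append(times[k] + weighted)
    return out
```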
The processing module 52 performs spatial domain noise reduction on the original pulse signal of the target pixel position according to the pulse periods of the adjacent pixel positions of the target pixel position, and is specifically configured to: determining a pulse period variance based on pulse periods of adjacent pixel locations of the target pixel location; if the pulse period variance is smaller than a variance threshold, determining a pulse period mean value according to the pulse period of the target pixel position and the pulse period of the adjacent pixel position; and determining a target pulse signal of the target pixel position based on the pulse period average value and the original pulse signal of the target pixel position.
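A per-pixel sketch of the spatial-domain variant; the variance threshold and the fallback used when the neighborhood is not flat are assumptions of this sketch:

```python
# Hypothetical spatial-noise-reduction sketch for one pixel position.
import numpy as np

def spatial_denoise_pixel(last_pulse_time, own_period, neighbor_periods, var_thresh=4.0):
    """Return the next target pulse time for the pixel whose last pulse was at last_pulse_time."""
    neighbors = np.asarray(neighbor_periods, dtype=np.float64)
    if neighbors.var() < var_thresh:
        # flat neighborhood: use the mean of the own and neighboring pulse periods
        mean_period = (own_period + neighbors.sum()) / (neighbors.size + 1)
        return last_pulse_time + mean_period
    return last_pulse_time + own_period   # otherwise keep the original period
```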
The processing module 52 performs 3D noise reduction on the original pulse signal of the target pixel position according to the pulse period of the target pixel position and the pulse period of the adjacent pixel position of the target pixel position, and is specifically configured to:
determining a pulse period difference value of the target pixel position according to the pulse period of the target pixel position, wherein the pulse period difference value is a difference value between two adjacent pulse periods of the target pixel position;
Determining a pulse period difference value of the adjacent pixel position according to the pulse period of the adjacent pixel position, wherein the pulse period difference value is a difference value between two adjacent pulse periods of the adjacent pixel position;
determining whether the target pixel position is an isolated pixel position according to the pulse period difference value of the target pixel position and the pulse period difference value of the adjacent pixel position;
if so, determining a kth target pulse signal of the target pixel position based on a (k-1) th pulse period of the target pixel position and a (k-1) th original pulse signal of the target pixel position.
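A sketch of the isolated-pixel branch; the concrete isolation test (a large jump in the pixel's own pulse period while neighboring period differences stay small) and the thresholds are assumptions, since the text only names the quantities the decision depends on:

```python
# Hypothetical 3D-noise-reduction sketch for one pixel position.
import numpy as np

def correct_isolated_pixel(times, neighbor_period_diffs, diff_thresh=2.0):
    """times: pulse times of the pixel; returns a corrected latest pulse time, or None."""
    if len(times) < 3:
        return None
    own_diff = abs((times[-1] - times[-2]) - (times[-2] - times[-3]))  # own period difference
    neighbors_stable = np.mean(np.abs(neighbor_period_diffs)) < diff_thresh
    if own_diff > diff_thresh and neighbors_stable:
        # isolated pixel: rebuild the latest pulse from the previous pulse time and period
        return times[-2] + (times[-2] - times[-3])
    return None
```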
The processing module 52 performs frequency adjustment on the original pulse signal, and is specifically configured to: determining an original pulse frequency of a target pixel position according to an original pulse period of the target pixel position, and converting the original pulse frequency to obtain a target pulse frequency; the original pulse period is the interval between two adjacent original pulse signals output by the pulse sensor;
determining a target pulse period according to the target pulse frequency;
and adjusting the original pulse signal of the target pixel position according to the target pulse period to obtain a target pulse signal of the target pixel position.
The processing module 52 converts the original pulse frequency to obtain a target pulse frequency, which is specifically configured to: converting the original pulse frequency based on a mapping relation to obtain a target pulse frequency; the mapping relation is a conversion relation between a preconfigured original pulse frequency and a target pulse frequency;
or converting the original pulse frequency based on a preset gain value to obtain a target pulse frequency.
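A sketch of the gain-based variant of this frequency adjustment; the gain value is illustrative, and the mapping-based variant would replace `gain * f_orig` with a lookup into the pre-configured conversion relation:

```python
# Hypothetical frequency-adjustment sketch for one pixel position.
def adjust_pulse(prev_pulse_time, orig_period, gain=2.0):
    """Convert the original pulse period into a target period via a frequency gain."""
    f_orig = 1.0 / orig_period        # original pulse frequency
    f_target = gain * f_orig          # converted (target) pulse frequency
    target_period = 1.0 / f_target    # target pulse period
    # the target pulse is placed one target period after the previous pulse
    return prev_pulse_time + target_period
```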
The reconstruction module 53 performs image reconstruction based on the target pulse signal, and is specifically configured to: determining a target reconstruction time of the two-dimensional image;
selecting a first target pulse signal located before the target reconstruction time from a plurality of target pulse signals, and selecting a second target pulse signal located after the target reconstruction time from a plurality of target pulse signals;
determining a time interval between the first target pulse signal and the second target pulse signal;
and determining the two-dimensional image at the target reconstruction time according to the time interval, the bit width of the two-dimensional image, and the pre-configured normalization coefficient.
After preprocessing the original pulse signal to obtain the preprocessed target pulse signal, the processing module 52 is further configured to: input the target pulse signal into a pulse neural network, so that the pulse neural network performs target tracking and/or target identification according to the target pulse signal.
Based on the same application concept as the above method, the embodiment of the present application further provides a pulse signal processing device, and from a hardware level, a schematic hardware architecture of the pulse signal processing device may be shown in fig. 6. The pulse signal processing apparatus may include: a processor 61 and a machine-readable storage medium 62, the machine-readable storage medium 62 storing machine-executable instructions executable by the processor 61; the processor 61 is configured to execute machine-executable instructions to implement the methods disclosed in the above examples of the present application.
For example, the processor 61 is configured to execute machine executable instructions to implement the steps of:
acquiring an original pulse signal output by a pulse sensor;
preprocessing the original pulse signal to obtain a preprocessed target pulse signal;
performing image reconstruction based on the target pulse signal to obtain a two-dimensional image;
and carrying out post-processing on the two-dimensional image to obtain a target image.
Based on the same application concept as the above method, the embodiment of the present application further provides a machine-readable storage medium, where the machine-readable storage medium stores a number of computer instructions, where the computer instructions can implement the method disclosed in the above example of the present application when executed by a processor.
For example, the computer instructions, when executed by a processor, can implement the steps of:
acquiring an original pulse signal output by a pulse sensor;
preprocessing the original pulse signal to obtain a preprocessed target pulse signal;
performing image reconstruction based on the target pulse signal to obtain a two-dimensional image;
and carrying out post-processing on the two-dimensional image to obtain a target image.
By way of example, the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information, such as executable instructions, data, and the like. For example, a machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (11)

1. A method of processing a pulse signal, the method comprising:
acquiring an original pulse signal output by a pulse sensor;
preprocessing the original pulse signal to obtain a preprocessed target pulse signal;
performing image reconstruction based on the target pulse signal to obtain a two-dimensional image; the image reconstruction based on the target pulse signal to obtain a two-dimensional image comprises the following steps: determining a target reconstruction time of the two-dimensional image; selecting a first target pulse signal located before the target reconstruction time from a plurality of target pulse signals, and selecting a second target pulse signal located after the target reconstruction time from a plurality of target pulse signals; determining a time interval between the first target pulse signal and the second target pulse signal; determining a two-dimensional image at the target reconstruction time according to the time interval, the bit width of the two-dimensional image and a pre-configured normalization coefficient;
and carrying out post-processing on the two-dimensional image to obtain a target image.
2. The method of claim 1, wherein preprocessing the original pulse signal to obtain a preprocessed target pulse signal comprises:
performing noise reduction on the original pulse signal to obtain a noise-reduced target pulse signal; or,
performing frequency adjustment on the original pulse signal to obtain a frequency-adjusted target pulse signal; or,
performing noise reduction on the original pulse signal to obtain a noise-reduced pulse signal, and performing frequency adjustment on the noise-reduced pulse signal to obtain a frequency-adjusted target pulse signal; or,
performing frequency adjustment on the original pulse signal to obtain a frequency-adjusted pulse signal, and performing noise reduction on the frequency-adjusted pulse signal to obtain a noise-reduced target pulse signal.
3. The method of claim 2, wherein performing noise reduction on the original pulse signal to obtain the noise-reduced target pulse signal comprises:
performing time domain noise reduction on an original pulse signal of a target pixel position according to a pulse period of the target pixel position to obtain a target pulse signal of the target pixel position; or,
performing spatial domain noise reduction on the original pulse signal of the target pixel position according to pulse periods of adjacent pixel positions of the target pixel position to obtain a target pulse signal of the target pixel position; or,
performing 3D noise reduction on the original pulse signal of the target pixel position according to the pulse period of the target pixel position and the pulse periods of the adjacent pixel positions of the target pixel position to obtain a target pulse signal of the target pixel position;
the pulse period is the interval between two adjacent original pulse signals output by the pulse sensor.
4. The method of claim 3, wherein performing time domain noise reduction on the original pulse signal of the target pixel position according to the pulse period of the target pixel position to obtain the target pulse signal of the target pixel position comprises:
determining a kth weighting period of the target pixel location based on the kth pulse period of the target pixel location, the (k-1) th pulse period of the target pixel location, and the (k-1) th weighting period of the target pixel location; wherein k is a positive integer greater than 1;
a kth target pulse signal for the target pixel location is determined based on the kth weighted period for the target pixel location and the (k-1) th original pulse signal for the target pixel location.
5. The method of claim 3, wherein performing spatial domain noise reduction on the original pulse signal of the target pixel position according to the pulse periods of the adjacent pixel positions of the target pixel position to obtain the target pulse signal of the target pixel position comprises:
determining a pulse period variance based on pulse periods of adjacent pixel locations of the target pixel location;
if the pulse period variance is smaller than a variance threshold, determining a pulse period mean value according to the pulse period of the target pixel position and the pulse period of the adjacent pixel position;
and determining a target pulse signal of the target pixel position based on the pulse period average value and the original pulse signal of the target pixel position.
6. The method of claim 3, wherein performing 3D noise reduction on the original pulse signal of the target pixel position according to the pulse period of the target pixel position and the pulse period of the adjacent pixel position of the target pixel position to obtain the target pulse signal of the target pixel position comprises:
determining a pulse period difference value of the target pixel position according to the pulse period of the target pixel position, wherein the pulse period difference value is a difference value between two adjacent pulse periods of the target pixel position;
Determining a pulse period difference value of the adjacent pixel position according to the pulse period of the adjacent pixel position, wherein the pulse period difference value is a difference value between two adjacent pulse periods of the adjacent pixel position;
determining whether the target pixel position is an isolated pixel position according to the pulse period difference value of the target pixel position and the pulse period difference value of the adjacent pixel position;
if so, determining a kth target pulse signal of the target pixel position based on a (k-1) th pulse period of the target pixel position and a (k-1) th original pulse signal of the target pixel position.
7. The method of claim 2, wherein performing frequency adjustment on the original pulse signal to obtain the frequency-adjusted target pulse signal comprises:
determining an original pulse frequency of a target pixel position according to an original pulse period of the target pixel position, and converting the original pulse frequency to obtain a target pulse frequency; the original pulse period is the interval between two adjacent original pulse signals output by the pulse sensor;
determining a target pulse period according to the target pulse frequency;
And adjusting the original pulse signal of the target pixel position according to the target pulse period to obtain a target pulse signal of the target pixel position.
8. The method of claim 7, wherein converting the original pulse frequency to obtain the target pulse frequency comprises:
converting the original pulse frequency based on a mapping relation to obtain a target pulse frequency; the mapping relation is a conversion relation between a preconfigured original pulse frequency and a target pulse frequency;
or converting the original pulse frequency based on a preset gain value to obtain a target pulse frequency.
9. The method of claim 1, wherein after preprocessing the original pulse signal to obtain a preprocessed target pulse signal, the method further comprises:
and inputting the target pulse signal into a pulse neural network so that the pulse neural network performs target tracking and/or target identification according to the target pulse signal.
10. A pulse signal processing apparatus, the apparatus comprising:
the acquisition module is used for acquiring an original pulse signal output by the pulse sensor;
The processing module is used for preprocessing the original pulse signal to obtain a preprocessed target pulse signal;
the reconstruction module is used for reconstructing an image based on the target pulse signal to obtain a two-dimensional image; the reconstruction module performs image reconstruction based on the target pulse signal to obtain the two-dimensional image, and is specifically used for: determining a target reconstruction time of the two-dimensional image; selecting a first target pulse signal located before the target reconstruction time from a plurality of target pulse signals, and selecting a second target pulse signal located after the target reconstruction time from a plurality of target pulse signals; determining a time interval between the first target pulse signal and the second target pulse signal; determining a two-dimensional image at the target reconstruction time according to the time interval, the bit width of the two-dimensional image and a pre-configured normalization coefficient;
and the generating module is used for carrying out post-processing on the two-dimensional image to obtain a target image.
11. A pulse signal processing apparatus, characterized by comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to perform the steps of:
Acquiring an original pulse signal output by a pulse sensor;
preprocessing the original pulse signal to obtain a preprocessed target pulse signal;
performing image reconstruction based on the target pulse signal to obtain a two-dimensional image; the image reconstruction based on the target pulse signal to obtain a two-dimensional image comprises the following steps: determining a target reconstruction time of the two-dimensional image; selecting a first target pulse signal located before the target reconstruction time from a plurality of target pulse signals, and selecting a second target pulse signal located after the target reconstruction time from a plurality of target pulse signals; determining a time interval between the first target pulse signal and the second target pulse signal; determining a two-dimensional image at the target reconstruction time according to the time interval, the bit width of the two-dimensional image and a pre-configured normalization coefficient;
and carrying out post-processing on the two-dimensional image to obtain a target image.
GR01 Patent grant