CN115410063A - Image processing apparatus and method, electronic device, and medium - Google Patents


Info

Publication number
CN115410063A
CN115410063A
Authority
CN
China
Prior art keywords
processing
photosensitive
processing mode
module
signal
Prior art date
Legal status
Pending
Application number
CN202211014761.3A
Other languages
Chinese (zh)
Inventor
何伟
祝夭龙
Current Assignee
Beijing Lynxi Technology Co Ltd
Original Assignee
Beijing Lynxi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Lynxi Technology Co Ltd
Priority to CN202211014761.3A
Publication of CN115410063A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)

Abstract

The disclosure provides an image processing apparatus and method, an electronic device, and a medium, and belongs to the field of computer technology. The image processing apparatus includes: a photosensitive module configured to generate a corresponding electrical signal from an incident optical signal; a control module configured to control, in a time division multiplexing manner, the processing mode applied to the electrical signals generated by the photosensitive module; and a processing module configured to process the electrical signal according to the processing mode determined by the control module to obtain a processing result. The processing modes include a first processing mode based on frame vision and a second processing mode based on dynamic vision. According to embodiments of the disclosure, the two processing modes can be fused, and an output image based on frame vision and/or an output image based on dynamic vision can be obtained.

Description

Image processing apparatus and method, electronic device, and medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing apparatus and method, an electronic device, and a medium.
Background
Conventional visual image acquisition is usually based on "frames" captured at a fixed frequency, and suffers from high redundancy, high latency, high noise, low dynamic range, and large data volume. An acquisition scheme based on a Dynamic Vision Sensor (DVS) outputs only the addresses and information of pixels whose light intensity has changed, rather than passively reading out every pixel of a frame in sequence; this eliminates redundant data at the source and reduces power consumption. In the related art, image acquisition generally adopts one of these two approaches, and the two acquisition methods are not effectively fused.
Disclosure of Invention
The present disclosure provides an image processing apparatus and method, an electronic device, and a medium.
In a first aspect, the present disclosure provides an image processing apparatus comprising: the photosensitive module is used for generating a corresponding electric signal according to the incident optical signal; the control module is used for controlling the processing mode of the electric signal generated by the photosensitive module in a time division multiplexing mode; the processing module is used for processing the electric signal according to the processing mode determined by the control module to obtain a processing result; wherein the processing modes include a first processing mode based on frame vision and a second processing mode based on dynamic vision.
In a second aspect, the present disclosure provides an image processing method comprising: generating a corresponding electrical signal according to the incident optical signal; determining a processing mode of the electrical signal in a time dimension; processing the electric signal in a time division multiplexing mode according to the processing mode to obtain a processing result; acquiring an output image according to the processing result; wherein the processing modes include a first processing mode based on frame vision and a second processing mode based on dynamic vision.
In a third aspect, the present disclosure provides an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores one or more computer programs executable by the at least one processor, the one or more computer programs being executable by the at least one processor to enable the at least one processor to perform the image processing method as described above.
In a fourth aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor/processing core, implements the image processing method described above.
According to the embodiment provided by the disclosure, the first processing mode based on the frame vision and the second processing mode based on the dynamic vision are fused in a time division multiplexing manner, so that the image processing device supports both the first processing mode based on the frame vision and the second processing mode based on the dynamic vision, and the output image based on the frame vision and/or the output image based on the dynamic vision can be selected and obtained according to requirements.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. The above and other features and advantages will become more apparent to those skilled in the art by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:
fig. 1 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an image processing apparatus according to an embodiment of the disclosure;
fig. 3 is a schematic diagram of an image processing apparatus according to an embodiment of the disclosure;
fig. 4 is a schematic diagram of an operation process of an image processing apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of an operation process of an image processing apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating a connection between a comparing unit and a photosensor according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram illustrating a connection between a comparing unit and a photosensor according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram illustrating a connection between a comparing unit and a photosensor according to an embodiment of the disclosure;
fig. 9 is a schematic diagram illustrating a connection between a comparing unit and a photosensitive sensor according to an embodiment of the disclosure;
fig. 10 is a schematic diagram illustrating a connection between a comparing unit and a photosensitive sensor according to an embodiment of the disclosure;
fig. 11 is a schematic distribution diagram of a photosensor according to an embodiment of the present disclosure;
fig. 12 is a schematic diagram illustrating a connection between a comparing unit and a photosensor according to an embodiment of the present disclosure;
fig. 13 is a schematic diagram illustrating a connection between a comparing unit and a photosensor according to an embodiment of the present disclosure;
fig. 14 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
fig. 15 is a block diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
To enable those skilled in the art to better understand the technical aspects of the present disclosure, exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the present disclosure are included to assist understanding, and they should be considered as being merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Embodiments of the disclosure and features of the embodiments may be combined with each other without conflict.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Vision is one of the strongest perception ways of human beings, so that the human beings can acquire information of the surrounding environment in a visual perception way without actual contact. With the development of computer and other technologies, vision can be applied to machines, and such a method of performing vision processing using the processing capability of a machine is called machine vision. In other words, machine vision is to give visual perception to a machine, so that the machine has scene perception capability similar to that of a biological vision system.
In the related art, machine vision relies primarily on conventional cameras, which typically acquire images using frame-based image sensors: the pixel values of each frame are captured on a "frame" basis at a fixed frequency. The quality of images acquired by conventional cameras is high, but because every frame must be captured, communicated, and processed, the data load is large and a great deal of redundant, unnecessary data may be generated. This high data load tends to slow reaction time by reducing temporal resolution, increases power consumption, and also increases the size and cost of a machine vision system. In addition, because data is captured as a sequence of still images (frames), frame-based image sensors suffer from limited dynamic range, poor low-light performance, motion blur, and the like. In other words, conventional cameras are suitable where the demands on display quality are high (e.g., film production) or for machine vision tasks such as object recognition; for tasks such as tracking, monitoring, and motion estimation, however, frame-based conventional cameras no longer hold a processing advantage.
A Dynamic Vision Sensor (DVS) adopts a new image processing paradigm. Dynamic vision based on Address-Event Representation (AER) imitates the working mechanism of biological vision: it outputs only the addresses and information of pixels whose light intensity has changed, rather than passively reading out each pixel of a frame in turn. It can therefore eliminate redundant data at the source, and is characterized by real-time dynamic response to scene changes, ultra-sparse image representation, asynchronous event output, and the like, processing data with fewer resources, lower power, and faster system reaction time. Accordingly, the quality of images acquired by a dynamic vision sensor may be lower than that of a frame-based image sensor. Owing to these characteristics, dynamic vision sensors are widely applied to machine vision tasks such as high-speed target tracking, real-time monitoring, and industrial automation.
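As a concrete illustration of address-event representation, the sketch below models a single dynamic-vision event. The field names (`x`, `y`, `polarity`, `timestamp`) and the Python form are illustrative assumptions commonly used for AER data, not part of this disclosure.

```python
from dataclasses import dataclass

@dataclass
class AddressEvent:
    """One dynamic-vision event: only pixels whose light intensity
    changed produce output (address plus change information)."""
    x: int          # pixel column (part of the address)
    y: int          # pixel row (part of the address)
    polarity: int   # +1 intensity increased, -1 decreased (change info)
    timestamp: int  # event time; events are output asynchronously

# A static scene yields no events at all, which is how the sensor
# eliminates redundant data at the source.
events = [AddressEvent(x=12, y=7, polarity=+1, timestamp=1500)]
```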
In the related art, the frame-based image sensor and the dynamic vision sensor are often used independently, and the two sensors are not effectively merged.
According to the image processing device disclosed by the embodiment of the disclosure, the first processing mode based on the frame vision and the second processing mode based on the dynamic vision can be fused in a time division multiplexing manner, so that the image processing device supports both the first processing mode based on the frame vision and the second processing mode based on the dynamic vision, and the output image based on the frame and/or the output image based on the dynamic vision can be selected and obtained according to requirements.
The image processing method according to the embodiment of the present disclosure may be performed by an electronic device such as a terminal device or a server, the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling a computer-readable program instruction stored in a memory. Alternatively, the method may be performed by a server. In other words, the image processing apparatus according to the embodiment of the present disclosure may be packaged as an electronic device product such as a terminal device or a server.
In a first aspect, an embodiment of the present disclosure provides an image processing apparatus.
Fig. 1 is a block diagram of an image processing apparatus according to an embodiment of the present disclosure. Referring to fig. 1, the image processing apparatus includes:
the photosensitive module 10 is configured to generate a corresponding electrical signal according to the incident optical signal.
The control module 20 is configured to control, in a time division multiplexing manner, the processing mode of the electrical signals generated by the photosensitive module, where the processing modes include a first processing mode based on frame vision and a second processing mode based on dynamic vision.
The processing module 30 is configured to process the electrical signal according to the processing mode determined by the control module to obtain a processing result.
In some possible implementations, the photosensitive module has a photosensitive function, which can convert the optical signal into a corresponding electrical signal for further processing of the electrical signal. The optical signal comprises an optical signal reflected to the photosensitive module by the object to be imaged.
In some possible implementations, the photosensitive module implements a photosensitive function through a sensor.
In some possible implementations, the photosensitive module includes a plurality of photosensitive sensors, and the photosensitive sensors are configured to generate corresponding electrical signals according to the intensity of the incident optical signals.
In some possible implementations, the photosensitive sensors may be arranged in an array, thereby forming a corresponding sensor array. Moreover, the minimum photosensitive unit of the photosensitive sensor corresponds to one Pixel (Pixel), and the photosensitive area of the photosensitive sensor is formed by the pixels arranged in an array.
For an incident light signal, the light sensor can sense the light intensity and convert the light intensity into an electrical signal matched with the light intensity. Here, "matching" means that there is a certain mapping relationship or proportional relationship between the light intensity and the intensity of the electrical signal, so that after the electrical signal is processed, real and accurate image information can be obtained.
In some possible implementations, the electrical signal includes at least one of a capacitance signal, a voltage signal, and a current signal. Correspondingly, the photosensitive sensor has multiple sensing modes and is used for outputting different types of electric signals.
Illustratively, the light-sensitive sensor includes at least one of a first sensor, a second sensor, and a third sensor; the first sensor is used for acquiring the light intensity of the optical signal and generating a corresponding capacitance signal according to the light intensity, the second sensor is used for acquiring the light intensity of the optical signal and generating a corresponding voltage signal according to the light intensity, and the third sensor is used for acquiring the light intensity of the optical signal and generating a corresponding current signal according to the light intensity.
For example, the first sensor includes a storage capacitor, and a capacitance signal C related to light intensity can be obtained through charging and discharging operations; the second sensor comprises a photosensitive signal circuit which can generate a voltage signal V depending on the light intensity; the third sensor contains a Photodiode (PD) or the like, and can measure the impinging light and convert the light intensity into a corresponding current signal I.
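The three example sensors above can be sketched as simple intensity-to-signal mappings. The linear form and the constants below are purely illustrative assumptions; the disclosure only requires that there be some fixed mapping between light intensity and signal level.

```python
# Hypothetical proportional mappings from light intensity to an
# electrical signal; constants are illustrative, not from the patent.

def first_sensor(intensity: float, farads_per_lux: float = 1e-12) -> float:
    """Storage-capacitor sensor: charge/discharge yields a capacitance
    signal C related to light intensity."""
    return intensity * farads_per_lux

def second_sensor(intensity: float, volts_per_lux: float = 0.01) -> float:
    """Photosensitive signal circuit producing an intensity-dependent
    voltage signal V."""
    return intensity * volts_per_lux

def third_sensor(intensity: float, amps_per_lux: float = 1e-9) -> float:
    """Photodiode (PD) converting impinging light into a matching
    current signal I."""
    return intensity * amps_per_lux
```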
It should be noted that, the above is only an example for the photosensitive sensor, and the embodiments of the present disclosure do not limit the photosensitive sensor and the type of the output signal thereof.
It should be further noted that the pixel module may be composed of any one of the above photosensitive sensors, and may also be composed of two or more of the above photosensitive sensors, which is not limited in the embodiment of the present disclosure.
After the light sensing module generates an electrical signal according to the optical signal, the electrical signal needs to be further processed to obtain an output image. In the embodiment of the present disclosure, the image processing apparatus supports both the first processing mode based on the frame vision and the second processing mode based on the dynamic vision, and therefore, it is necessary to determine which processing mode to select for processing the electrical signal through the control operation of the related function module, so as to ensure the orderly execution of the image processing task.
In some possible implementations, the processing mode of the electrical signal is controlled by the control module, and the processing mode includes a first processing mode based on frame vision and a second processing mode based on dynamic vision.
For example, the first processing mode based on frame vision may acquire and output images one frame by one frame at a fixed frequency, and the second processing mode based on dynamic vision may output dynamic visual event data only if it is determined that the light intensity has changed.
In some possible implementations, the control module may implement the control of the processing mode based on the time control circuit, i.e., the first processing mode or the second processing mode is "gated" in the time dimension by the time control circuit to process the electrical signal based on the gated processing mode.
In some possible implementations, the control module may control the processing mode by sending a control signal to the processing module, where the control signal includes information of the processing mode specified by the control module, and the processing module responds to the control signal sent by the control module, so as to determine which processing mode is used to process the electrical signal.
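A minimal sketch of this control-signal interaction follows. The class and method names (`ProcessingModule`, `on_control_signal`) are hypothetical, chosen only to mirror the description above.

```python
FRAME_MODE = "first"   # frame-vision processing
EVENT_MODE = "second"  # dynamic-vision processing

class ProcessingModule:
    """Responds to the control module's control signal by routing the
    electrical signal to the first (frame) or second (dynamic-vision)
    processing path."""
    def __init__(self) -> None:
        self.mode = FRAME_MODE

    def on_control_signal(self, requested_mode: str) -> None:
        # The control signal carries the processing mode specified
        # by the control module.
        self.mode = requested_mode

    def process(self, electrical_signal: float) -> str:
        if self.mode == FRAME_MODE:
            return f"frame:{electrical_signal}"   # exposure/integration path
        return f"event:{electrical_signal}"       # comparison-unit path
```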
It should be understood that, no matter what control mode is adopted, the processing module processes the electrical signals in a time-division multiplexing manner, that is, a first processing mode is adopted in a part of time period, and a second processing mode is adopted in another part of time period. The time period corresponding to the first processing mode and the time period corresponding to the second processing mode may be a regular time period, or may also be a randomly set time period without regularity, which is not limited in the embodiment of the present disclosure.
In some optional implementations, the control module is configured to control a processing mode of the electrical signals generated by the light sensing module in a time division multiplexing manner. The time division multiplexing refers to processing the electrical signal in a first processing mode in a partial time period (or partial time), and processing the electrical signal in a second processing mode in a remaining time period (or remaining partial time). The first processing mode and the second processing mode are alternately used for processing the electric signal in the time dimension.
In which time period, which processing mode is used to process the electrical signal may be determined according to the instantaneous indication signal or indication information, or may be determined according to a preset time period, which is not limited in the embodiment of the present disclosure.
Illustratively, if the current processing mode is the first processing mode, the control module switches the processing mode of the electrical signal from the first processing mode to the second processing mode when receiving an externally transmitted indication signal, and switches the processing mode of the electrical signal from the second processing mode to the first processing mode when receiving the externally transmitted indication signal again. The indication signal may be sent by a preset management terminal, a management server, or the like.
Illustratively, the control module controls the processing module to process the electrical signal generated by the photosensitive module alternately: in the first processing mode during first periods and in the second processing mode during second periods. In other words, based on time division multiplexing, the control module controls the processing module to alternate between the two modes over the first and second periods. The first period and the second period may have the same or different time lengths.
In other words, the processing module employs the first processing mode in each first cycle and the second processing mode in each second cycle, and the first cycles and the second cycles are alternately arranged in the time dimension.
It is worth emphasizing that conventional frame-image cameras involve the concept of frame rate: reading out exposed image data takes a certain time, and the photosensitive module cannot expose during readout, so the module sits idle during that waiting period and its utilization rate is low. In contrast, in the embodiment of the present disclosure, the photosensitive module is used in a time division multiplexed manner: in the exposure stage it performs exposure processing, and in the reading stage it acquires dynamic visual event data. In other words, the photosensitive module serves as different sensors at different times, and data processing is completed jointly with the control module and the processing module to obtain the corresponding processing results (including frame-based output images and/or dynamic-vision-based output images).
Illustratively, suppose that in a conventional camera the exposure of frame data requires 10 milliseconds (ms) and the reading requires 50ms; the photosensitive module is then idle for the 50ms corresponding to the reading operation. In the embodiment of the disclosure, to improve utilization, the first processing mode is set to the 10ms exposure window and the second processing mode to the 50ms reading window: frame data is obtained through the 10ms exposure, and during the 50ms in which that frame data is read out, the photosensitive module is simultaneously multiplexed to acquire dynamic visual event data. All of the photosensitive module's time is thus fully used, effectively improving its utilization rate.
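The utilization argument can be checked with a few lines of arithmetic using the 10ms/50ms figures from this example (a sketch, not part of the disclosure):

```python
exposure_ms, readout_ms = 10, 50
cycle_ms = exposure_ms + readout_ms  # one full 60ms cycle

# Conventional camera: the photosensitive module only works during
# exposure and idles during readout.
conventional_busy = exposure_ms / cycle_ms  # about 0.167

# Time division multiplexing: during readout the same module is reused
# to acquire dynamic visual event data, so it is never idle.
tdm_busy = (exposure_ms + readout_ms) / cycle_ms  # 1.0
```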
For example, the first period corresponds to 1ms, the second period corresponds to 2ms, and taking the case that the start time corresponds to the first period, the electrical signal is processed by adopting the first processing mode within 1ms, the electrical signal is processed by adopting the second processing mode within 2ms and 3ms, the electrical signal is processed by adopting the first processing mode within 4ms, the electrical signal is processed by adopting the second processing mode within 5ms and 6ms, and so on; taking the example that the start time corresponds to the second period, the electrical signals are processed by adopting the second processing mode in 1ms and 2ms, the electrical signals are processed by adopting the first processing mode in 3ms, the electrical signals are processed by adopting the second processing mode in 4ms and 5ms, the electrical signals are processed by adopting the first processing mode in 6ms, and so on.
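The alternating schedule in this example can be sketched as a small function; the name `mode_at` and the 1-based millisecond indexing are illustrative assumptions.

```python
def mode_at(ms: int, first_len: int = 1, second_len: int = 2,
            start_with_first: bool = True) -> str:
    """Return which processing mode is active in millisecond `ms`
    (1-based), assuming the first and second periods alternate strictly
    as in the example above (1ms first period, 2ms second period)."""
    cycle = first_len + second_len
    offset = (ms - 1) % cycle
    if start_with_first:
        return "first" if offset < first_len else "second"
    return "second" if offset < second_len else "first"
```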
It should be noted that, the above description is only for the control module by way of example, and the embodiment of the present disclosure does not limit the control manner of the control module for the processing mode.
After the control module determines the processing mode, the processing module may perform a corresponding data processing operation according to the determined processing mode.
In some possible implementations, the processing module processes the electrical signal according to the processing mode determined by the control module to obtain a processing result, so as to obtain an output image according to the processing result.
In some possible implementations, the processing module includes a storage unit and a comparison unit; the control module is used for controlling the photosensitive module to be connected with the storage unit to obtain a first processing result under the condition that the first processing mode is determined to be used, and/or the control module is used for controlling the photosensitive module to be connected with the comparison unit to obtain a second processing result under the condition that the second processing mode is determined to be used.
When the photosensitive module is connected to the storage unit, the storage unit performs exposure processing on the electrical signal output by the photosensitive module to obtain a first processing result, which includes at least a pixel integral value. When the photosensitive module is connected to the comparison unit, the comparison unit determines a signal difference between the electrical signal output by the photosensitive module and a reference signal value, and obtains a second processing result according to the comparison of that difference with a preset threshold; the second processing result is either outputting dynamic visual event data or not outputting it.
In some possible implementations, the dynamic visual event data includes at least an address and change information of a pixel having a change in light intensity.
In some possible implementations, the control module controls the processing module to process the electrical signal alternately in a first processing mode and a second processing mode in a first period and a second period, respectively, based on a time division multiplexing scheme.
Correspondingly, in the first period, the control module controls the storage unit to be connected with the photosensitive module to obtain frame data; and in a second period, the control module controls the comparison unit to be connected with the photosensitive module and determines whether to output dynamic visual event data according to a comparison result of the comparison unit.
Illustratively, in a first period, the storage unit performs exposure processing on the electrical signal output by the photosensitive module to obtain corresponding frame data.
In the second period, the comparing unit may calculate a signal difference according to the electrical signal output by the light sensing module and the reference signal value, and compare the signal difference with a preset threshold, so as to obtain a comparison result, and determine to output the dynamic visual event data if the comparison result is that the signal difference is greater than the preset threshold, otherwise determine not to output the dynamic visual event data.
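The alternation between the two processing modes under the time-division multiplexing scheme can be sketched as a schedule generator. The function name and mode labels below are illustrative assumptions.

```python
def tdm_schedule(total_ms, first_period_ms, second_period_ms):
    """Yield (mode, start_ms, end_ms) windows, alternating the first
    (frame exposure) and second (event acquisition) processing modes."""
    t, first = 0, True
    while t < total_ms:
        length = first_period_ms if first else second_period_ms
        mode = "frame_exposure" if first else "event_acquisition"
        yield mode, t, min(t + length, total_ms)
        t += length
        first = not first
```

With a 10 ms first period and a 5 ms second period, a 30 ms span splits into four windows that strictly alternate between the two modes.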
In some possible implementations, the comparison unit operates based on a preset time step. Correspondingly, in the ith time step, the comparison unit determines a signal difference value according to the electric signal output by the photosensitive module in the ith time step and the electric signal in the (i-1) th time step, and determines that the second processing result is output dynamic visual event data under the condition that the signal difference value is greater than a preset threshold value.
In some alternative implementations, the reference signal value may be determined according to the electrical signal output by the light sensing module at a time step previous to the current time step. It should be noted that the reference signal value may also be set in other manners (for example, an electrical signal at a time before the current time is determined as the reference signal value), and the preset threshold may be set according to any one or more of experience, statistical data, actual requirements, and the like.
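The per-time-step behavior — using the electrical signal of the previous time step as the reference value — can be sketched as follows. This is an illustrative Python sketch; the names and the event tuple layout are assumed.

```python
def event_stream(samples, threshold):
    """At each time step i, compare the signal with the step i-1 signal
    (the reference value) and emit an event when the change exceeds
    the preset threshold."""
    events = []
    reference = samples[0]
    for i, signal in enumerate(samples[1:], start=1):
        diff = signal - reference
        if abs(diff) > threshold:
            events.append((i, diff))    # (time step, change information)
        reference = signal              # becomes the next step's reference
    return events
```

Constant stretches of signal produce no events at all, while each significant change produces exactly one event at the step where it occurs.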
From the above, the reference signal value serves as the baseline for judging whether the electrical signal has changed, and the preset threshold measures whether the change in the electrical signal is large enough to warrant outputting the corresponding dynamic visual event data.
Taking as an example frame data whose exposure processing needs 10ms and whose reading processing needs 40ms: in the 1-10ms period, the photosensitive module performs exposure processing; in the 11-50ms period, the photosensitive module is used to acquire dynamic visual event data while the pixel integral values acquired during 1-10ms are simultaneously read out, yielding the frame image data corresponding to the 1-10ms period. In the 51-60ms period, the photosensitive module performs exposure processing again; in the 61-100ms period, the photosensitive module is again used to acquire dynamic visual event data while the pixel integral values acquired during 51-60ms are read out, yielding the frame image data corresponding to the 51-60ms period; and so on, so that image data for each period is obtained.
As can be seen from the above, dynamic visual event data is missing for the 1-10ms, 51-60ms, etc. periods, because during these periods the photosensitive module is used for exposure processing. Nevertheless, the method still adapts well to scenes in which the scene changes slowly or objects move only gently.
It should be noted that, when the first processing mode is adopted, every image frame must be processed; taking a frame rate of 30 as an example, 300 images are generated within 10 seconds, and since acquisition, communication, and similar processing must be performed for every pixel, the processing load is large and the corresponding power consumption is also large. By contrast, when the second processing mode is adopted, if a target object is displaced only at a certain moment and its position remains unchanged at other moments, dynamic visual event data is output only at the moment of displacement and no data is output at other moments; compared with a two-dimensional frame structure, the data therefore has a certain sparsity, the data processing load is smaller, and the power consumption is correspondingly smaller.
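The data-volume contrast described above can be illustrated with a rough back-of-the-envelope calculation. The sensor resolution, event rate, and byte sizes below are assumed purely for illustration and do not come from the disclosure.

```python
def frame_data_rate(width, height, fps, bytes_per_pixel=1):
    """Bytes per second produced in the first (frame) processing mode:
    every pixel of every frame must be acquired and communicated."""
    return width * height * bytes_per_pixel * fps

def event_data_rate(events_per_second, bytes_per_event=8):
    """Bytes per second in the second (event) mode: the volume scales
    with scene activity, not with resolution times frame rate."""
    return events_per_second * bytes_per_event

# Assumed example: a 640x480 sensor at 30 fps vs. a mostly static scene
frames = frame_data_rate(640, 480, 30)   # ~9.2 MB/s regardless of motion
events = event_data_rate(1000)           # only 8 KB/s for 1000 events/s
```

Under these assumed numbers the event mode carries three orders of magnitude less data, which is the sparsity advantage the passage describes.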
In summary, when the first processing mode is adopted, image frames of relatively high quality can be obtained, but the processed data volume is large and the dynamic visual event data for that period is lost; when the second processing mode is adopted, the corresponding dynamic visual event data can be obtained and the processed data volume is small, but the image quality is relatively low. As for the time allocation between the first processing mode and the second processing mode (for example, the length relationship between the first period and the second period), the allocation ratio may be determined by static setting or by dynamic adjustment according to the application scenario, throughput requirements, and the like, which the embodiments of the present disclosure do not limit.
Illustratively, the first processing mode is set to correspond to a first cycle, and the second processing mode corresponds to a second cycle, that is: the exposure time of the frame data is a first period, the acquisition time of the dynamic visual event data is a second period, and the exposure data acquired in the previous adjacent first period is read in the second period.
For example, the operating time of the second processing mode may be increased (e.g., the second period is set to be greater than the first period) to obtain more dynamic visual event data if the frame rate requirement is satisfied.
For example, on the premise of obtaining dynamic visual event data satisfying task requirements, the working time length of the first processing mode may be increased (for example, the first period is set to be greater than the second period) appropriately to improve the quality of the image.
It should be noted that neither the first processing result obtained in the first processing mode nor the second processing result corresponding to the second processing mode is in image form; therefore, a corresponding functional unit needs to perform further processing to obtain the output image. Moreover, since the first processing result and the second processing result are of different types and their processing manners may also differ, a separate functional unit may be provided for each of the first processing result (or first processing mode) and the second processing result (or second processing mode) to obtain the respective output images.
In some possible implementations, the processing module further includes a pixel reading unit, and the pixel reading unit is connected to the storage unit, and the pixel reading unit is configured to perform reading processing on data stored in the storage unit. In other words, the pixel reading unit is a functional unit corresponding to the first processing result, and the output image can be obtained according to the first processing result.
In some possible implementations, the pixel reading unit may perform the reading processing when the comparing unit in the processing module is in the second processing mode, so as to obtain the output image without affecting the image capturing operation of the second processing mode, thereby improving the processing efficiency.
Illustratively, the data stored in the storage unit is pixel integral values; during the second period, the pixel reading unit reads out the pixel integral values stored by the storage unit during the adjacent preceding first period (which corresponds to the first processing mode), obtaining an output image (i.e., frame data).
In some possible implementations, the processing module further includes an encoding unit, and the encoding unit is connected to the comparing unit; the coding unit is used for coding the dynamic visual event data output by the comparison unit, obtaining pulse codes and obtaining output images according to the pulse codes. In other words, the encoding unit is a functional unit corresponding to the second processing result, and is configured to obtain the output image according to the second processing result.
Illustratively, since the comparing unit outputs the dynamic visual event data only for the pixels with large changes, the dynamic visual event data output by the comparing unit can be represented as a pulse sequence, and the encoding unit performs pulse encoding on the dynamic visual event data, obtains the pulse encoding, and further obtains the output image according to the pulse encoding.
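A pulse encoding of the sparse event output, in the style of an address-event representation, can be sketched as follows. The tuple layout and function name are assumptions for illustration, not the encoding defined by the disclosure.

```python
def encode_events(events):
    """Pack each dynamic visual event as (x, y, polarity, timestep):
    polarity 1 for a light-intensity increase, 0 for a decrease.
    Only changed pixels appear, so the result is a sparse pulse sequence."""
    pulses = []
    for (x, y), diff, t in events:
        polarity = 1 if diff > 0 else 0
        pulses.append((x, y, polarity, t))
    return pulses
```

An event image can then be rendered by scattering the pulses of one period onto an otherwise empty frame.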
It should be noted that the encoding process of the encoding unit may be completed in the second period, or may be completed in the next first period adjacent to the second period, which is not limited in this disclosure.
The following describes an image processing apparatus according to an embodiment of the present disclosure with reference to fig. 2 to 5.
Fig. 2 is a schematic diagram of an image processing apparatus according to an embodiment of the disclosure. Referring to fig. 2, the image processing apparatus includes a photosensitive module, a time control circuit (equivalent to a control module), and a processing module including a storage unit, a pixel reading unit, a comparing unit, and an encoding unit. Wherein, time control circuit includes three interfaces: the device comprises an interface T1, an interface T2 and an interface T3, wherein the interface T1 is connected with the photosensitive module, the interface T2 is connected with a storage unit of the processing module, the interface T3 is connected with a comparison unit of the processing module, the storage unit is connected with the pixel reading unit, and the comparison unit is connected with the coding unit.
In some possible implementation manners, the photosensitive module generates a corresponding electrical signal according to the intensity of the incident optical signal, an interface T1 of the time control circuit is connected with the photosensitive module, and a processing mode of the electrical signal is determined by controlling the connection of the interface T1 and the interface T2 or T3. When the interface T1 is connected with the interface T2, an electric signal is input into the storage unit, the storage unit carries out exposure processing on the electric signal to obtain a pixel integral value of each pixel, the pixel integral value is transmitted to the pixel reading unit, the pixel integral value of each pixel is read by the pixel reading unit, and the pixel integral values are arranged in a corresponding array to obtain an output image. When the interface T1 is connected with the interface T3, the electric signal is input into a comparison unit, the comparison unit determines a signal difference value according to the electric signal and a reference signal value, compares the signal difference value with a preset threshold value, does not output dynamic visual event data outwards under the condition that the signal difference value is smaller than or equal to the preset threshold value, outputs the dynamic visual event data under the condition that the signal difference value is larger than the preset threshold value, and transmits the dynamic visual event data to a coding unit, the coding unit codes the dynamic visual event data to obtain pulse codes, and an output image is obtained according to the pulse codes.
As described above, when the interface T1 is connected to the interface T2, the processing module performs image processing based on the first processing mode, and can obtain an output image with high quality, and when the interface T1 is connected to the interface T3, the processing module performs image processing based on the second processing mode, and can obtain an output image including dynamic visual event data.
It should be noted that, in addition to the control of the processing mode of the electrical signal by the time control circuit, the processing mode of the electrical signal may be controlled by the control signal. An image processing apparatus that controls a processing mode using a control signal is described below with reference to fig. 3.
Fig. 3 is a schematic diagram of an image processing apparatus according to an embodiment of the present disclosure. Referring to fig. 3, the image processing apparatus includes a photosensitive module, a control module, and a processing module including a storage unit, a pixel reading unit, a comparing unit, and an encoding unit. The photosensitive module and the control module are both connected with the processing module, the photosensitive module is used for transmitting an electric signal to the processing module, and the control module is used for sending a control signal to the processing module.
In some possible implementation manners, the photosensitive module generates a corresponding electrical signal according to the intensity of the incident optical signal, and inputs the electrical signal into the processing module; the control module sends a control signal to the processing module, and after the processing module receives the control signal, the processing module can determine which processing mode to use to process the electric signal according to the control signal. Under the condition that the control signal instructs the processing module to adopt the first processing mode, the processing module carries out exposure processing on the electric signal based on the storage unit to obtain a pixel integral value of each pixel and transmits the pixel integral value to the pixel reading unit, the pixel integral value of each pixel is read by the pixel reading unit, and the pixel integral values are arranged according to a corresponding array to obtain an output image; under the condition that the control signal indicates the processing module to adopt the second processing mode, the processing module processes the electric signal based on the comparison unit, the comparison unit determines a signal difference value according to the electric signal and a reference signal value, compares the signal difference value with a preset threshold value, does not output dynamic visual event data outwards under the condition that the signal difference value is smaller than or equal to the preset threshold value, outputs the dynamic visual event data under the condition that the signal difference value is larger than the preset threshold value, and transmits the dynamic visual event data to the coding unit, the coding unit codes the dynamic visual event data to obtain pulse codes, and an output image is obtained according to the pulse codes.
It should be noted that, in the embodiment of the present disclosure, the photosensitive module and the comparison unit are connected, and the functions of a dynamic vision sensor can be realized by their combined action. A dynamic vision sensor outputs only the address and change information (i.e., dynamic visual event data) of pixels whose light intensity changes; in the embodiment of the present disclosure, the photosensitive module outputs an electrical signal that represents the light intensity, and the comparison unit, being connected to the photosensitive module, receives that electrical signal, determines whether the light intensity of the corresponding pixel has changed (by calculating the signal difference between the electrical signal and a reference signal value and comparing that difference with a preset threshold), and outputs dynamic visual event data when a change is determined. With this arrangement, the two processing modes can be fused simply by adding a comparison unit to a conventional image processing apparatus; the operation is simple and convenient, and since the photosensitive module is shared, no separate dynamic vision sensor needs to be added, which effectively reduces the cost of the device.
Fig. 2 and 3 show the image processing apparatus exemplarily from a functional configuration dimension, and to further explain the image processing apparatus, the operation of the image processing apparatus is shown from a time dimension based on fig. 4.
Fig. 4 is a schematic diagram of an operation process of an image processing apparatus according to an embodiment of the present disclosure. Referring to fig. 4, a process of the image processing apparatus shown in fig. 2 is shown from a time dimension.
As shown in FIG. 4, taking the time period t1-t2 as an example: in the time period from t1 to t2, the photosensitive module is multiplexed to acquire dynamic visual event data, and meanwhile, the time period from t1 to t2 is also used for reading data acquired by exposure in the time period from t0 to t1 to acquire a frame image (p 1) corresponding to the time period from t0 to t 1. Similarly, the t2-t3 time periods may also be multiplexed: in the time period from t2 to t3, the photosensitive module is used for carrying out exposure processing to obtain exposure data, and meanwhile, the time period from t2 to t3 is also used for processing the dynamic visual event data obtained in the time period from t1 to t2 to obtain an event image (p 2) corresponding to the time period from t1 to t 2.
It should be noted that p1-p5 in fig. 4 are used to indicate output images corresponding to respective time periods, and do not indicate that the output images can be obtained in the corresponding time periods. For example, p1 indicates that it is a frame image obtained based on exposure data of the photosensitive module in the time period t0-t1, but does not indicate that p1 can be obtained in the time period t0-t 1.
The following describes the operation of the image processing apparatus according to the embodiment of the present disclosure with reference to fig. 4.
As shown in fig. 4, in the time period t0-t1, the interface T1 is connected to the interface T2, and the processing module performs data processing in the first processing mode, obtaining the pixel integral value of each pixel over the global pixel integration period; in the time period t1-t2, these pixel integral values (accumulated during t0-t1) are read out through pixel reading and similar operations, yielding the output image shown as p1. Meanwhile, in the time period t1-t2, the interface T1 is connected to the interface T3, the photosensitive module is multiplexed to convert the optical signal into a corresponding electrical signal, and the processing module processes the generated electrical signal in the second processing mode to obtain dynamic visual event data for pixels whose signal difference exceeds the preset threshold; an event image is obtained from this dynamic visual event data, and the corresponding output image is shown as p2. In the time period t2-t3, the interface T1 is reconnected to the interface T2, the photosensitive module performs an exposure operation, and the processing module performs data processing in the first processing mode, obtaining the pixel integral value of each pixel over the global pixel integration period; in the time period t3-t4, these pixel integral values are read out to yield the output image shown as p3 (corresponding to the t2-t3 exposure).
Similarly, during the time period t3-t4, the interface T1 is reconnected to the interface T3, the photosensitive module is again multiplexed to convert the optical signal into a corresponding electrical signal, and the processing module processes the generated electrical signal in the second processing mode to obtain dynamic visual event data for pixels whose signal difference exceeds the preset threshold; an event image is obtained from this data, yielding the output image shown as p4. The t4-t5 period is similar to t0-t1 and t2-t3, and an output image shown as p5 can be obtained for it. The process continues in this way until the image processing task is completed.
In the above processing procedure, only the time periods t1-t2, t3-t4, etc. are multiplexed, and in some optional implementation manners, the time periods t2-t3, t4-t5, etc. may also be multiplexed.
Illustratively, to obtain more dynamic visual event data, the length of the t1-t2, t3-t4, etc. time periods may be increased appropriately. With this setting, the reading of the pixel integral values for the t0-t1 period may be completed before time t2 is reached, while the output image corresponding to t1-t2 may not be obtainable before or at time t2; therefore, the t2-t3 period also needs to be multiplexed so that the output image corresponding to t1-t2 can be obtained from the dynamic visual event data of t1-t2. The t4-t5 period is similar and will not be described further herein.
It should be noted that, in some alternative implementations, to improve image quality, more frame images may be obtained; the duration corresponding to the second processing mode is therefore set to match the frame data reading time. Taking the t1-t2 period as an example, this means that reading of the pixel integral values for the t0-t1 period completes exactly at time t2, thereby obtaining the output image corresponding to the t1-t2 period. In other optional implementations, the duration corresponding to the second processing mode is made longer in order to acquire more dynamic visual event data; taking the t1-t2 period as an example, reading of the pixel integral values for the t0-t1 period may then be completed after time t1 but before time t2 is reached, thereby obtaining the output image corresponding to the t1-t2 period.
In some possible implementations, the comparison unit works based on a preset time step Δt; the t1-t2 period is taken as an example. Taking time t1 as the initial time, after the first time step the time t1+Δt is reached. The comparison unit takes the electrical signal at time t1 as the reference signal value for the current time step, calculates the difference between the electrical signal at time t1+Δt and the electrical signal at time t1 to obtain a signal difference, and compares the signal difference with the preset threshold. If the signal difference is greater than the preset threshold, the light intensity of the corresponding pixel has changed significantly, so the dynamic visual event data corresponding to the first time step is output; this data includes the pixel position and the difference information (representing the change in light intensity). Otherwise, if the signal difference is less than or equal to the preset threshold, the light intensity of the corresponding pixel has not changed significantly, and no dynamic visual event data is output. After another time step, the time t1+2Δt is reached. The comparison unit takes the electrical signal at time t1+Δt as the reference signal value for the current time step, calculates the difference between the electrical signal at time t1+2Δt and the electrical signal at time t1+Δt to obtain a signal difference, and compares it with the preset threshold; if the signal difference is greater than the preset threshold, the dynamic visual event data corresponding to the second time step (including the pixel position and the difference information) is output, and otherwise no dynamic visual event data is output. This continues until time t2 is reached, at which point the processing unit enters the first processing mode and the comparison unit is placed in a pause or sleep state.
It should be noted that, in the time period t1 to t2, the pixel integrated value obtained in the time period t0 to t1 is read by the pixel reading unit to obtain p1, and p1 may be output externally. For the output image p2, since the data processing amount in the second processing mode is relatively small, and the processing speed is fast, p2 may be acquired in the time period from t1 to t2, and may also be acquired in the time period from t2 to t3 (i.e. the encoding unit may operate in the time period from t1 to t2 and/or from t2 to t 3), which is not limited by the embodiment of the present disclosure.
Fig. 5 is a schematic diagram of an operating process of an image processing apparatus according to an embodiment of the present disclosure, which mainly describes an operating process in a first processing mode and an operating process for acquiring an output image based on a first processing result in the first processing mode.
As shown in FIG. 5, during the t0-t1 period, the processing module operates in the first processing mode. In some possible implementations, at time t0 the storage unit is reset and made ready for exposure processing; during the t0-t1 period the storage unit exposes each pixel, obtaining the exposure signal (i.e., the pixel integral value) of each pixel, and enters a hold state at time t1. After time t1, each pixel integral value is read out sequentially (i.e., time-divisionally); once all pixel integral values have been read, the output image is obtained and transmitted externally. Meanwhile, the t1-t2 period is a multiplexing period: besides reading out the frame image, it is also used to obtain dynamic visual event data so that an event image can be generated from that data. During the t2-t3 period, the processing module again operates in the first processing mode, similarly to the t0-t1 period, and during the t3-t4 period the processing module is in a multiplexing state: on one hand, the pixel integral values obtained during t2-t3 are read out to obtain a frame image; on the other hand, the photosensitive module obtains dynamic visual event data. The details can be found in the other relevant descriptions and are not repeated here.
As can be seen from the foregoing, whether the second processing mode outputs dynamic visual event data depends on the preset threshold, the sensitivity of the comparison unit, and so on. If the preset threshold is set too low, dynamic visual event data may be output even for pixels with small light intensity changes, making the data processing load large; if it is set too high, some pixels with large light intensity changes may fail to output dynamic visual event data. An appropriate preset threshold therefore needs to be chosen. In addition, if the sensitivity of the comparison unit is low, changes in light intensity cannot be captured accurately and in time, resulting in low output accuracy or long reaction times.
Considering that the electrical signal of a single pixel (single photosensor) has a relatively small variation even if it changes, the comparing unit connected to it needs to be particularly sensitive to sense the change in time, which results in high cost, while if the comparing unit with relatively low sensitivity is used, it may result in an increase in response time and thus in an increase in time delay.
In view of this, in some possible implementations, the processing efficiency of the image processing apparatus may be improved by combining the local pixels to increase the reaction speed of the comparison unit.
In some possible implementations, the photosensitive module includes a plurality of photosensitive sensors, and the effect of combining local pixels to increase the response speed is achieved by connecting the plurality of photosensitive sensors to the same comparison unit.
In some possible implementations, the comparing unit connected with the plurality of photosensors includes: the comparison unit is connected with a plurality of photosensitive sensors corresponding to the same color channel; and/or the comparison unit is connected with a plurality of photosensitive sensors in the preset area; and/or the comparison unit is connected with at least one preset photosensitive sensor pixel group, and each photosensitive sensor pixel group comprises a plurality of photosensitive sensors.
The number of the photosensitive sensors connected with one comparison unit can be set according to one or more of experience, statistical data and task requirements. In general, when the requirement for accuracy is high, the number of the photosensitive sensors connected to one comparing unit can be reduced appropriately to improve accuracy; when the requirement on the reaction speed is high, the number of the photosensitive sensors connected with one comparison unit can be increased appropriately so as to reduce the reaction time and reduce the time delay.
In some alternative implementations, in a case where the comparison unit is connected to the plurality of photosensors, the comparison unit determines a signal sum according to the electrical signals of the connected plurality of photosensors, determines a signal difference according to the signal sum and a reference signal value, and determines whether to output the dynamic visual event data according to the signal difference and a preset threshold.
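Summing the signals of the several photosensors wired to one comparison unit before the usual difference/threshold test can be sketched as follows. This is an illustrative sketch; the names are assumed.

```python
def pooled_compare(signals, reference_sum, threshold):
    """Combine several photosensors on one comparison unit: sum their
    electrical signals, then apply the difference/threshold test to the
    pooled signal. Returns the change if an event should be output,
    else None."""
    signal_sum = sum(signals)
    diff = signal_sum - reference_sum
    return diff if abs(diff) > threshold else None
```

Because the pooled change is roughly the sum of the individual changes, a comparator of given sensitivity reacts sooner to a group than to any single pixel, which is the speed-up the passage describes.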
It should be understood that, in case that the comparing unit is connected to one photo sensor, the comparing unit may determine a signal difference value according to the electrical signal of the connected photo sensor and a reference signal value, and determine whether to output dynamic visual event data according to the signal difference value and a preset threshold value.
It should be noted that the effect of increasing the reaction speed can be achieved by connecting a plurality of photosensors with one comparison unit, the above implementation manner is merely to exemplify the connection manner between the comparison unit and the plurality of photosensors, and the connection manner between the comparison unit and the plurality of photosensors is not limited in the embodiment of the present disclosure.
It should be understood that, in some possible implementations, one comparing unit may be provided for each photosensitive sensor, and each photosensitive sensor is connected to the corresponding comparing unit to determine whether the corresponding pixel of the photosensitive sensor needs to output dynamic visual event data.
The connection manner of the comparison unit in the embodiment of the present disclosure is described below with reference to fig. 6 to 11.
Fig. 6 is a schematic connection diagram of a comparison unit and photosensors according to an embodiment of the present disclosure. Referring to fig. 6, the photosensors include three types of sensors, corresponding to the R (red), G (green), and B (blue) color channels, and are arranged in a preset order to form a pixel array.
In some possible implementations, the comparison unit is connected to a plurality of photosensors corresponding to the same color channel. As shown in fig. 6, four photosensors corresponding to the R color channel are connected to the same comparison unit. When performing the comparison operation, the comparison unit calculates the signal sum of the electrical signals output by the four photosensors, compares the signal sum with a reference signal value to obtain the corresponding signal difference, and then determines, according to the signal difference and a preset threshold, whether to output the dynamic visual event data associated with the four pixels.
The photosensors corresponding to the G color channel and the B color channel may be connected to comparison units in a similar manner, which is not described again here.
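One way to realize the same-color-channel grouping of fig. 6 is sketched below. The mosaic layout and the function name are illustrative assumptions; the patent does not fix a specific arrangement.

```python
def group_by_channel(channel_map):
    """Map each color channel to the flat indices of its photosensors,
    so one comparison unit can be wired per same-channel group."""
    groups = {}
    for idx, channel in enumerate(channel_map):
        groups.setdefault(channel, []).append(idx)
    return groups

# A 4 x 4 mosaic flattened row by row (illustrative arrangement).
mosaic = ["R", "G", "R", "G",
          "G", "B", "G", "B",
          "R", "G", "R", "G",
          "G", "B", "G", "B"]
groups = group_by_channel(mosaic)
```

Each entry of `groups` then lists the sensors one comparison unit would serve, e.g. the four R sensors of fig. 6.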
Fig. 7 is a schematic connection diagram of a comparison unit and photosensors according to an embodiment of the present disclosure. Referring to fig. 7, R, G, and B photosensors form a photosensitive sensor pixel group; the array includes four such pixel groups in total, and each pixel group is connected to a comparison unit.
The first comparison unit is connected with four photosensitive sensors in the photosensitive sensor pixel group positioned at the upper left corner, the second comparison unit is connected with four photosensitive sensors in the photosensitive sensor pixel group positioned at the upper right corner, the third comparison unit is connected with four photosensitive sensors in the photosensitive sensor pixel group positioned at the lower left corner, and the fourth comparison unit is connected with four photosensitive sensors in the photosensitive sensor pixel group positioned at the lower right corner.
Fig. 8 is a schematic connection diagram of a comparison unit and photosensors according to an embodiment of the present disclosure. Referring to fig. 8, sixteen photosensors arranged in a 4 × 4 array form a preset region, and the photosensors in the preset region are connected to the same comparison unit.
As shown in fig. 8, the sixteen photosensors are connected in series in the illustrated manner and wired to the comparison unit. When determining whether to output dynamic visual event data, the comparison unit compares the sum of the signal differences of the sixteen sensors with a preset threshold and determines whether to perform the output operation according to the comparison result.
It should be noted that, in some possible implementations, a plurality of preset regions may be disposed in one pixel array, and different preset regions may have the same or different sizes.
Fig. 9 is a schematic connection diagram of a comparison unit and photosensors according to an embodiment of the present disclosure. Referring to fig. 9, twenty-four photosensors in a 4 × 6 array form two preset regions: the sixteen photosensors in the 4 × 4 array on the left form one preset region, the eight photosensors in the 4 × 2 array on the right form another, and the photosensors in the same preset region are connected to the same comparison unit.
As shown in fig. 9, the sixteen photosensors in the left region are connected to the fifth comparison unit, and the eight photosensors in the right region are connected to the sixth comparison unit.
It should be noted that a pixel array may adopt a single connection scheme or several connection schemes simultaneously; the embodiments of the present disclosure do not limit this.
Fig. 10 is a schematic connection diagram of a comparison unit and photosensors according to an embodiment of the present disclosure. Referring to fig. 10, in the 4 × 4 pixel array, the upper 2 × 4 portion corresponds to a preset region, and the lower 2 × 4 portion includes two photosensitive sensor pixel groups.
As shown in fig. 10, the eight photosensors in the preset region are connected to the seventh comparison unit, the pixel group at the lower left is connected to the eighth comparison unit, and the pixel group at the lower right is connected to the ninth comparison unit. In fig. 10, therefore, the connections between the photosensors and the comparison units are established based on both the preset region and the photosensitive sensor pixel groups.
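The mixed wiring of fig. 10 (one preset region plus two pixel groups) can be sketched as an assignment table mapping each sensor position to a comparison-unit id. The unit ids and function names are illustrative assumptions.

```python
def build_wiring(rows, cols, assignments):
    """Return a mapping from comparison-unit id to the (row, col)
    positions of the photosensors wired to it. `assignments` gives
    one unit id per sensor, flattened row by row."""
    wiring = {}
    for r in range(rows):
        for c in range(cols):
            unit = assignments[r * cols + c]
            wiring.setdefault(unit, []).append((r, c))
    return wiring

# Fig. 10-style layout: the upper 2 x 4 preset region goes to unit 7;
# the two lower 2 x 2 pixel groups go to units 8 and 9 (ids illustrative).
assignments = [7, 7, 7, 7,
               7, 7, 7, 7,
               8, 8, 9, 9,
               8, 8, 9, 9]
wiring = build_wiring(4, 4, assignments)
```

Mixing schemes in one array then amounts to nothing more than a non-uniform assignment table.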
In some possible implementations, the photosensors may be divided into two parts, one part operating in the first processing mode and the other part operating in the second processing mode. This arrangement reduces the workload on each photosensor: even if a photosensor assigned to the first processing mode fails, normal operation in the second processing mode is unaffected, and likewise a failure of a photosensor assigned to the second processing mode does not affect the first processing mode, which improves the robustness of the image processing apparatus.
Illustratively, the photosensitive region composed of a plurality of photosensitive sensors includes at least one first region and at least one second region; wherein the photosensor in the first region outputs an electrical signal when in the first processing mode, and the photosensor in the second region outputs an electrical signal when in the second processing mode.
Fig. 11 is a schematic distribution diagram of photosensors according to an embodiment of the present disclosure. Referring to fig. 11, the photosensors are divided into two first regions and two second regions; the photosensors in the first regions output electrical signals in the first processing mode, and the photosensors in the second regions output electrical signals in the second processing mode.
Similar to the foregoing, each second region may be connected to its own comparison unit, or a plurality of second regions may share one comparison unit, further merging local pixels to effectively improve the response speed of the comparison unit and the processing efficiency of the image processing apparatus.
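A minimal sketch of this mode-dependent region partition, assuming flat region labels and mode names that are not taken from the patent:

```python
def active_sensors(region_labels, mode):
    """Return the indices of sensors that output an electrical signal
    in the given processing mode: 'frame' activates the first regions,
    'dvs' activates the second regions."""
    wanted = "first" if mode == "frame" else "second"
    return [i for i, label in enumerate(region_labels) if label == wanted]

# Two first regions and two second regions, flattened (illustrative).
labels = ["first", "second", "second", "first"]
frame_sensors = active_sensors(labels, "frame")
dvs_sensors = active_sensors(labels, "dvs")
```

Only the second-region sensors ever feed comparison units; the first-region sensors bypass them entirely.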
Fig. 12 is a schematic connection diagram of a comparison unit and photosensors according to an embodiment of the present disclosure, describing the connection relationship between the photosensors in the second regions shown in fig. 11 and the comparison units. As shown in fig. 12, the photosensors in the second region at the upper right corner are connected to the tenth comparison unit, and the photosensors in the second region at the lower left corner are connected to the eleventh comparison unit.
Fig. 13 is a schematic connection diagram of a comparison unit and photosensors according to an embodiment of the present disclosure, likewise describing the connection relationship between the photosensors in the second regions shown in fig. 11 and the comparison unit. As shown in fig. 13, the photosensors in the two second regions are jointly connected to the twelfth comparison unit.
It should be noted that, since the photosensors in the first regions do not operate in the second processing mode, no operation such as comparison of electrical signals needs to be performed on them; the photosensors in the first regions therefore need not be connected to any comparison unit.
It should also be noted that, in some possible implementations, two types of sensors may be provided in the image processing apparatus, with different sensor types used for different processing modes. For example, the image processing apparatus may include both a frame-based photosensor and a dynamic vision sensor, and during operation either of the two sensors can be selected according to the processing requirements to perform the image processing task. In this case no comparison unit is required, but because dynamic vision sensors are expensive, the cost of the image processing apparatus is high.
In a second aspect, an embodiment of the present disclosure provides an image processing method.
Fig. 14 is a flowchart of an image processing method according to an embodiment of the present disclosure. Referring to fig. 14, the image processing method includes:
in step S141, a corresponding electrical signal is generated from the incident optical signal.
In step S142, a processing mode for the electrical signal in the time dimension is determined, wherein the processing mode includes a first processing mode based on frame vision and a second processing mode based on dynamic vision.
In step S143, the electrical signal is processed in a time division multiplexing manner according to the processing mode, and a processing result is obtained.
In step S144, an output image is acquired according to the processing result.
In some possible implementations, in step S141, a corresponding electrical signal may be generated according to the intensity of the incident optical signal; wherein the electrical signal comprises at least one of a capacitance signal, a voltage signal, and a current signal.
Illustratively, the light intensity is determined from the incident optical signal, and a corresponding capacitance signal, voltage signal, or current signal is generated from the light intensity.
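A minimal sketch of this signal generation, assuming an illustrative linear photosensor response; real device responses depend on the sensor type and are generally nonlinear.

```python
def light_to_signal(intensity, kind="voltage", gain=0.5):
    """Generate an electrical signal value from incident light intensity.
    The linear response and the `gain` value are illustrative
    assumptions; `kind` selects which signal type is produced."""
    if kind not in ("capacitance", "voltage", "current"):
        raise ValueError("unsupported signal kind")
    return gain * intensity

v = light_to_signal(2.0)                          # voltage signal
i = light_to_signal(4.0, kind="current", gain=0.25)  # current signal
```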
In some possible implementations, step S142 determines the processing mode of the electrical signal for each time period or time instant, so that the first processing mode and the second processing mode alternate in the time dimension.
For example, step S142 may control the processing mode of the electrical signal either by having a time control circuit gate the processing mode or by sending a control signal that indicates the processing mode.
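The time-gated alternation of modes can be sketched as follows; the period lengths and mode names are illustrative assumptions rather than values from the patent.

```python
def mode_at(t, first_period, second_period):
    """Select the processing mode for time t, alternating between a
    frame-vision window of length `first_period` and a dynamic-vision
    window of length `second_period` (time-division multiplexing)."""
    phase = t % (first_period + second_period)
    return "first" if phase < first_period else "second"

# Ten consecutive time units with a 3-unit frame window and a
# 2-unit event window (illustrative durations).
schedule = [mode_at(t, first_period=3, second_period=2) for t in range(10)]
```

The same selector can drive either gating style: a time control circuit evaluates it in hardware, or a controller sends its result as a control signal.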
In some possible implementations, in step S143, the first processing mode and the second processing mode may be alternately adopted for the electric signals generated in the first period and the second period based on a time division multiplexing manner.
For example, the corresponding processing module may be controlled to process the electrical signal in a first processing mode during a first period and in a second processing mode during a second period.
In some possible implementations, in step S144, for the first processing result obtained in the first processing mode, a corresponding output image may be obtained through a pixel reading operation; for the second processing result obtained in the second processing mode, a corresponding pulse code may be obtained through an encoding process, and the output image is obtained from the pulse code.
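The two readout paths of step S144 can be sketched as below; the 1-bit pulse coding is an illustrative choice, not necessarily the patent's exact encoding scheme.

```python
def to_output(processing_result, mode):
    """First mode: return pixel values directly (pixel reading).
    Second mode: encode each dynamic visual event as a 1/0 pulse
    per pixel (an illustrative pulse coding)."""
    if mode == "first":
        return list(processing_result)  # pixel readout of frame data
    # Pulse-encode event flags into an output image.
    return [1 if event else 0 for event in processing_result]

frame_image = to_output([12, 34, 56], mode="first")
pulse_image = to_output([True, False, True], mode="second")
```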
It should be noted that, after the output image is obtained, various machine vision tasks may also be performed based on the output image, and the application scenario of the output image is not limited by the embodiments of the present disclosure.
The processing procedure of each step can be referred to the description of relevant content in the embodiments of the present disclosure, and is not described herein.
It should be noted that the image processing method described above may be implemented by any image processing apparatus of the embodiments of the present disclosure.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form combined embodiments without departing from the principle and logic; for brevity, these combinations are not described in detail here. Those skilled in the art will appreciate that, in the methods of the specific embodiments described above, the specific execution order of the steps should be determined by their function and possible inherent logic.
In addition, the present disclosure also provides an electronic device and a computer-readable storage medium, either of which can be used to implement any image processing method provided by the present disclosure; for the corresponding technical solutions, refer to the descriptions in the method sections, which are not repeated here.
Fig. 15 is a block diagram of an electronic device provided in an embodiment of the present disclosure.
Referring to fig. 15, an embodiment of the present disclosure provides an electronic device including: at least one processor 1501; at least one memory 1502, and one or more I/O interfaces 1503 coupled between the processor 1501 and the memory 1502; the memory 1502 stores one or more computer programs that can be executed by the at least one processor 1501, and the one or more computer programs are executed by the at least one processor 1501, so that the at least one processor 1501 can execute the image processing method described above.
In some possible implementations, the electronic device further includes a photosensitive pixel circuit (corresponding to the photosensitive module) and a control circuit, where the photosensitive pixel circuit is configured to generate a corresponding electrical signal according to an incident optical signal, the control circuit is configured to control a processing mode of the electrical signal, and the processor corresponds to the processing module and is configured to process the electrical signal according to the processing mode determined by the control circuit and obtain an output image according to a processing result. During the above processing, operations such as storing and buffering of data may be required, and the memory may be used to implement this function, for example, storing data such as pixel integration values, dynamic visual event data, output images, and the like. Moreover, the connection among the functional units can be realized through an I/O interface, so that the data transmission among the functional units is ensured, and the smooth execution of the image processing task is ensured.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor/processing core, implements the image processing method described above. The computer readable storage medium may be a volatile or non-volatile computer readable storage medium.
The disclosed embodiments also provide a computer program product, comprising computer-readable code or a non-volatile computer-readable storage medium carrying computer-readable code, which, when run in a processor of an electronic device, causes the processor to perform the image processing method described above.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer-readable storage media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable program instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), Static Random Access Memory (SRAM), flash memory or other memory technology, portable Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable program instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as is well known to those skilled in the art.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the disclosure are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
The computer program product described herein may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, features, characteristics and/or elements described in connection with a particular embodiment may be used alone or in combination with features, characteristics and/or elements described in connection with other embodiments, unless expressly stated otherwise, as would be apparent to one skilled in the art. Accordingly, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the disclosure as set forth in the appended claims.

Claims (13)

1. An image processing apparatus characterized by comprising:
the photosensitive module is used for generating a corresponding electric signal according to the incident optical signal;
the control module is used for controlling the processing mode of the electric signal generated by the photosensitive module according to a time division multiplexing mode;
the processing module is used for processing the electric signal according to the processing mode determined by the control module to obtain a processing result;
wherein the processing modes include a first processing mode based on frame vision and a second processing mode based on dynamic vision.
2. The image processing apparatus according to claim 1, wherein the processing module includes a storage unit and a comparison unit;
the control module is used for controlling the photosensitive module to be connected with the storage unit to obtain a first processing result under the condition that a first processing mode is determined to be used;
and/or,
and the control module is used for controlling the photosensitive module to be connected with the comparison unit to acquire a second processing result under the condition that the second processing mode is determined to be used.
3. The image processing apparatus according to claim 1, wherein the control module controls the processing module to alternately perform processing in the first processing mode and the second processing mode on the electrical signals generated by the photosensitive module in a first period and a second period.
4. The image processing apparatus according to claim 3, wherein the control module controls the storage unit to connect to the photosensitive module to obtain frame data during the first period;
and in the second period, the control module controls a comparison unit to be connected with the photosensitive module and determines whether to output dynamic visual event data according to a comparison result of the comparison unit.
5. The image processing apparatus according to claim 4, wherein the comparison unit operates based on a preset time step,
and in the ith time step, the comparison unit determines a signal difference value according to the electric signal output by the photosensitive module in the ith time step and the electric signal in the (i-1) th time step, and determines that the second processing result is output dynamic visual event data under the condition that the signal difference value is greater than the preset threshold value.
6. The image processing apparatus according to claim 5, wherein the photosensitive module comprises a plurality of photosensitive sensors, the photosensitive sensors are configured to generate corresponding electrical signals according to incident light signals; the comparison unit is connected with at least one photosensitive sensor.
7. The image processing apparatus according to claim 6, wherein the comparing unit is connected to a plurality of photosensors, and comprises:
the comparison unit is connected with a plurality of photosensitive sensors corresponding to the same color channel;
and/or,
the comparison unit is connected with a plurality of photosensitive sensors in a preset area;
and/or,
the comparison unit is connected with at least one preset photosensitive sensor pixel group, and each photosensitive sensor pixel group comprises a plurality of photosensitive sensors.
8. The image processing apparatus according to claim 4, wherein the processing module further comprises a pixel reading unit, and the pixel reading unit is connected to the storage unit;
the pixel reading unit is used for reading the data stored in the previous first period adjacent to the second period by the storage unit in the second period.
9. The image processing device according to claim 4, wherein the processing module further comprises an encoding unit, and the encoding unit is connected to the comparing unit;
and the coding unit is used for coding the dynamic visual event data output by the comparison unit, obtaining pulse codes and obtaining output images according to the pulse codes.
10. The image processing apparatus according to claim 1, wherein the photosensitive module includes a plurality of photosensitive sensors;
the photosensitive sensor is used for generating a corresponding electric signal according to the intensity of an incident optical signal;
the electrical signal includes at least one of a capacitance signal, a voltage signal, and a current signal.
11. The image processing apparatus according to claim 10, wherein the light-sensitive sensor includes at least one of a first sensor, a second sensor, and a third sensor;
the first sensor is used for acquiring the light intensity of an optical signal and generating a corresponding capacitance signal according to the light intensity, the second sensor is used for acquiring the light intensity of the optical signal and generating a corresponding voltage signal according to the light intensity, and the third sensor is used for acquiring the light intensity of the optical signal and generating a corresponding current signal according to the light intensity.
12. The image processing apparatus according to claim 10, wherein the photosensitive area composed of the plurality of photosensitive sensors includes at least one first area and at least one second area;
wherein the photosensor in the first region outputs an electrical signal when in a first processing mode, and the photosensor in the second region outputs an electrical signal when in a second processing mode.
13. An image processing method, comprising:
generating a corresponding electrical signal according to the incident optical signal;
determining a processing mode of the electrical signal in a time dimension;
processing the electric signal in a time division multiplexing mode according to the processing mode to obtain a processing result; acquiring an output image according to the processing result;
wherein the processing modes include a first processing mode based on frame vision and a second processing mode based on dynamic vision.
CN202211014761.3A 2022-08-23 2022-08-23 Image processing apparatus and method, electronic device, and medium Pending CN115410063A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211014761.3A CN115410063A (en) 2022-08-23 2022-08-23 Image processing apparatus and method, electronic device, and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211014761.3A CN115410063A (en) 2022-08-23 2022-08-23 Image processing apparatus and method, electronic device, and medium

Publications (1)

Publication Number Publication Date
CN115410063A true CN115410063A (en) 2022-11-29

Family

ID=84162688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211014761.3A Pending CN115410063A (en) 2022-08-23 2022-08-23 Image processing apparatus and method, electronic device, and medium

Country Status (1)

Country Link
CN (1) CN115410063A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117848493A (en) * 2024-02-07 2024-04-09 北京灵汐科技有限公司 Signal processing device, sensor chip, signal processing method, device, and medium
CN117848493B (en) * 2024-02-07 2024-05-24 北京灵汐科技有限公司 Signal processing device, sensor chip, signal processing method, device, and medium

Similar Documents

Publication Publication Date Title
CN110192387B (en) Data rate control for event-based vision sensors
US10178338B2 (en) Electronic apparatus and method for conditionally providing image processing by an external apparatus
US9279955B2 (en) Image pickup apparatus, control method thereof, and program
CN104010128A (en) Image capturing apparatus and method for controlling the same
CN108833812B (en) Image sensor and image dynamic information processing method
CN105578067B (en) image generating method, device and terminal device
JP2015094925A (en) Focus adjustment device, focus adjustment method and program, and imaging device having focus adjustment device
US20200389579A1 (en) Image processing apparatus, image processing method, and storage medium
CN111193866B (en) Image processing method, image processor, photographing device and electronic equipment
US10313612B2 (en) Image sensor, control method, and electronic device
CN115410063A (en) Image processing apparatus and method, electronic device, and medium
KR20230135501A (en) Image sensor and its image output method and application
CN112995545A (en) Multi-sensor high dynamic range imaging
CN111193867A (en) Image processing method, image processor, photographing device and electronic equipment
CN108881731B (en) Panoramic shooting method and device and imaging equipment
CN109309784B (en) Mobile terminal
US20140168467A1 (en) Focus adjustment apparatus and method, and image capturing apparatus
CN116918342A (en) Determining exposure parameters for imaging
CN105245797A (en) Image sensor, image capturing apparatus, and control method of image capturing apparatus
CN113766128B (en) Image processing apparatus, image processing method, and image forming apparatus
KR20190139788A (en) Methods and apparatus for capturing media using plurality of cameras in electronic device
CN221408976U (en) High frame rate camera
US20240104920A1 (en) Image processing apparatus, image processing method, and non-transitory computer-readable storage medium
CN114630008B (en) Image processing method, device, storage medium and electronic equipment
CN220570631U (en) Photoelectric sensor, pixel unit thereof, pulse camera and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination