CN111492650A - Image preprocessing method and device, image sensor interface, image processing method and device - Google Patents


Info

Publication number
CN111492650A
CN111492650A
Authority
CN
China
Prior art keywords
data
frame
deserializing
timing signal
output
Prior art date
Legal status
Granted
Application number
CN201880077740.1A
Other languages
Chinese (zh)
Other versions
CN111492650B (en)
Inventor
袁扬智
刘俊秀
胡江鸣
韦毅
石岭
Current Assignee
Shenzhen Kaiyang Electronics Co ltd
Arkmicro Technologies Inc
Original Assignee
Shenzhen Kaiyang Electronics Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Kaiyang Electronics Co., Ltd.
Publication of CN111492650A
Application granted
Publication of CN111492650B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/01: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level

Abstract

The invention discloses an image preprocessing method and apparatus, an image sensor interface, and an image processing method and apparatus. The image preprocessing method comprises the following steps: selecting the input interface according to user input to receive raw data in a parallel format or a serial format; when the received raw data is in a parallel format, acquiring a first timing signal from the raw data and sending the raw data to the on-chip buffer for storage according to the first timing signal; when the received raw data is in a serial format, converting the raw data into deserialized data in a parallel format and processing the deserialized data to obtain a second timing signal; and sending the deserialized data to the on-chip buffer for storage according to the second timing signal.

Description

Image preprocessing method and device, image sensor interface, image processing method and device Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image preprocessing method and apparatus, an image sensor interface, and an image processing method and apparatus.
Background
Image Signal Processors (ISPs) of the related art can generally only receive and process raw data in either a serial or a parallel format. With the development of imaging technology, however, current image sensors or additionally added devices can acquire more data to assist in improving image clarity, such as LVDS multi-frame data or infrared images.
Disclosure of Invention
Embodiments of the invention provide an image preprocessing method and device, an image sensor interface, and an image processing method and device.
The image preprocessing method according to an embodiment of the invention is used for an image sensor interface, and the image sensor interface comprises an input interface and an on-chip buffer. The image preprocessing method comprises the following steps:
selecting the input interface according to user input to receive raw data in a parallel format or a serial format;
when the received raw data is in a parallel format, acquiring a first timing signal from the raw data and sending the raw data to the on-chip buffer for storage according to the first timing signal;
when the received raw data is in a serial format, converting the raw data into deserialized data in a parallel format and processing the deserialized data to obtain a second timing signal; and
sending the deserialized data to the on-chip buffer for storage according to the second timing signal.
The deserialized data comprises single-frame deserialized data, and the second timing signal comprises a single-frame timing signal. The step of sending the deserialized data to the on-chip buffer for storage according to the second timing signal comprises the following step:
sending the single-frame deserialized data to the on-chip buffer for storage according to the single-frame timing signal.
The image preprocessing method further comprises the following step:
reading and outputting the single-frame deserialized data from the on-chip buffer according to the single-frame timing signal.
The step of reading and outputting the single-frame deserialized data from the on-chip buffer according to the single-frame timing signal comprises the following steps:
receiving a through signal;
when the through signal is enabled, outputting the single-frame deserialized data to an image signal processor according to the single-frame timing signal;
when the through signal is not enabled, outputting the single-frame deserialized data to the bus interface according to the single-frame timing signal; and
converting the single-frame deserialized data into single-frame output data conforming to the bus protocol standard through the bus interface and storing the single-frame output data into a main memory through a bus.
The image preprocessing device comprises a first processing module, a second processing module, a third processing module, and a fourth processing module. The first processing module is configured to select the input interface according to user input to receive raw data in a parallel format or a serial format. The second processing module is configured to, when the received raw data is in a parallel format, acquire a first timing signal from the raw data and send the raw data to the on-chip buffer for storage according to the first timing signal. The third processing module is configured to, when the received raw data is in a serial format, convert the raw data into deserialized data in a parallel format and process the deserialized data to obtain a second timing signal. The fourth processing module is configured to send the deserialized data to the on-chip buffer for storage according to the second timing signal. The fourth processing module includes a first sending submodule, which is configured to send the single-frame deserialized data to the on-chip buffer for storage according to the single-frame timing signal. The image preprocessing device further comprises a second output module, which is configured to read and output the single-frame deserialized data from the on-chip buffer according to the single-frame timing signal. The second output module comprises a second receiving submodule, a fourth output submodule, a fifth output submodule, and a sixth output submodule. The second receiving submodule is configured to receive a through signal. The fourth output submodule is configured to output the single-frame deserialized data to an image signal processor according to the single-frame timing signal when the through signal is enabled.
The fifth output submodule is configured to output the single-frame deserialized data to the bus interface according to the single-frame timing signal when the through signal is not enabled. The sixth output submodule is configured to convert the single-frame deserialized data into single-frame output data conforming to the bus protocol standard through the bus interface and store the single-frame output data into the main memory through the bus.
The image processing method of the embodiment of the invention comprises the following steps:
acquiring data processed by the image preprocessing method described above to obtain data to be processed;
processing the data to be processed to obtain data to be branched;
dividing the data to be branched into luminance path data and chrominance path data;
processing the luminance path data to obtain luminance data; and
processing the chrominance path data to obtain chrominance data.
The step of processing the luminance path data to obtain luminance data comprises the following step:
performing interpolation processing on the luminance path data to obtain the luminance data.
The step of processing the chrominance path data to obtain chrominance data comprises the following step:
performing interpolation processing on the chrominance path data to obtain the chrominance data.
The image processing method further comprises the following steps:
outputting the luminance data; and
outputting the chrominance data.
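The branching and interpolation steps above can be sketched in software as follows; the code assumes pixels arrive as (luminance, chrominance) pairs, and the simple linear interpolation merely stands in for whatever interpolation the hardware actually performs. All names are illustrative, not part of the patent.

```python
# Toy sketch: split data-to-be-branched into an independent luminance path
# and chrominance path, then interpolate each path separately.

def split_paths(pixels):
    """Divide (Y, C) pixel tuples into a luminance path and a chrominance path."""
    luma = [y for y, _ in pixels]
    chroma = [c for _, c in pixels]
    return luma, chroma

def interpolate(samples):
    """Double the sample count by linear interpolation between neighbours
    (a placeholder for the hardware's interpolation processing)."""
    out = []
    for a, b in zip(samples, samples[1:]):
        out.extend([a, (a + b) / 2])
    out.append(samples[-1])
    return out

luma_path, chroma_path = split_paths([(16, 128), (32, 130), (48, 132)])
luma_data = interpolate(luma_path)      # luminance path processed independently
chroma_data = interpolate(chroma_path)  # chrominance path processed independently
```

Because the two paths never touch each other's data, each can be configured or bypassed on its own, which is the flexibility the independent-path design is after.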
The image processing apparatus of an embodiment of the present invention includes an acquisition unit, a first processing unit, a branching unit, a second processing unit, and a third processing unit. The acquisition unit is configured to acquire data processed by the image preprocessing method of any one of the above embodiments to obtain data to be processed. The first processing unit is configured to process the data to be processed to obtain data to be branched. The branching unit is configured to divide the data to be branched into luminance path data and chrominance path data. The second processing unit is configured to process the luminance path data to obtain luminance data. The third processing unit is configured to process the chrominance path data to obtain chrominance data. The second processing unit includes a luminance interpolation subunit configured to perform interpolation processing on the luminance path data. The third processing unit includes a chrominance interpolation subunit configured to perform interpolation processing on the chrominance path data. The image processing apparatus further includes a luminance output unit and a chrominance output unit, configured to output the interpolated luminance data and the interpolated chrominance data, respectively.
The image preprocessing method, the image preprocessing device, the image processing method, and the image processing device can select to receive raw data in a serial format or a parallel format: the raw data is buffered directly when it is in a parallel format that the image signal processor can process directly, and is converted into deserialized data in a parallel format and then buffered when it is in a serial format. Thus, the image signal processor can process various kinds of data, thereby improving image clarity.
In summary, the image preprocessing method, the image preprocessing device, the image sensor interface, the image processing method, and the image processing device according to the embodiments of the present invention have the following advantages. First, the image signal processor is enabled to support a plurality of image sensor input data formats (e.g., raw data input in parallel, raw data input as serial multi-frame data, the input format of an infrared sensor, etc.). Second, the image signal processor is enabled to support complex algorithmic processing (e.g., wide dynamic processing of multi-frame input, image defogging, etc.) and more efficient adaptive processing for image noise reduction and enhancement. Third, in the chrominance and luminance processing domains, a more flexible data processing path mechanism is provided in which the chrominance path is independent of the luminance path. Fourth, the system bandwidth, power consumption, and storage resources (including the on-chip static random access memory, the main memory, etc.) can be flexibly configured according to different input modes and configuration modes, so that the chip works in an optimal state.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow diagram of an image pre-processing method according to an embodiment of the invention;
FIG. 2 is a block diagram of an image preprocessing apparatus according to an embodiment of the present invention;
FIG. 3 is a block diagram of an image sensor interface according to an embodiment of the present invention;
FIG. 4 is a flow chart diagram of an image processing method according to an embodiment of the invention;
FIG. 5 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 6 is a block diagram of a first processing unit of the image processing apparatus according to the embodiment of the present invention;
FIG. 7 is a block diagram of a second processing unit of the image processing apparatus according to the embodiment of the present invention;
FIG. 8 is a block diagram of a third processing unit of the image processing apparatus according to the embodiment of the present invention.
Description of the main element symbols:
an image preprocessing device 10, a first processing module 12, a second processing module 14, a third processing module 16, a fourth processing module 18, an image sensor interface 20, an input interface 22, an on-chip buffer 24, a bus interface 26, an infrared processing unit 28, a serial data processor 21, a buffer controller 23, an image processing device 30, an acquisition unit 32, a first processing unit 34, a first blending subunit 342, a second blending subunit 344, a 3A statistics subunit 346, a white balance correction subunit 348, a dead pixel correction subunit 341, a lens shading compensation subunit 343, a transparency calculation subunit 345, a branching unit 36, a second processing unit 38, a luminance interpolation subunit 382, a luminance tone mapping subunit 384, a luminance color matrix correction subunit 386, a defogging subunit 388, a luminance color space conversion subunit 381, a luminance 2D noise reduction subunit 383, a sharpening subunit 385, a third processing unit 39, a chrominance interpolation subunit 392, a chrominance color matrix correction subunit 394.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
Referring to FIG. 1, FIG. 2 and FIG. 3 together, the image preprocessing method according to the embodiment of the present invention is applied to an image preprocessing apparatus 10 and an image sensor interface 20, where the image sensor interface 20 includes an input interface 22 and an on-chip buffer 24. The image preprocessing method of the embodiment of the invention comprises the following steps:
S12: selecting the input interface 22 according to user input to receive raw data in a parallel format or a serial format;
S14: when the received raw data is in a parallel format, acquiring a first timing signal from the raw data and sending the raw data to the on-chip buffer 24 for storage according to the first timing signal;
S16: when the received raw data is in a serial format, converting the raw data into deserialized data in a parallel format and processing the deserialized data to obtain a second timing signal; and
S18: sending the deserialized data to the on-chip buffer 24 for storage according to the second timing signal.
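Steps S12 through S18 can be sketched as a minimal software model, assuming a toy 8-bit serial stream; the names (`OnChipBuffer`, `deserialize`, `preprocess`) and the one-strobe-per-word timing signal are illustrative stand-ins, not the patent's hardware implementation.

```python
# Minimal sketch of the preprocessing flow in steps S12-S18.

class OnChipBuffer:
    """Stands in for the on-chip buffer 24: stores (timing, data) pairs."""
    def __init__(self):
        self.lines = []

    def store(self, timing_signal, data):
        self.lines.append((timing_signal, data))

def deserialize(serial_bits, width=8):
    """S16: convert serial raw data into parallel words and derive a
    timing signal (here simply one strobe per assembled word)."""
    words = [serial_bits[i:i + width] for i in range(0, len(serial_bits), width)]
    parallel = [int("".join(str(b) for b in w), 2) for w in words]
    timing = list(range(len(parallel)))  # second timing signal (illustrative)
    return parallel, timing

def preprocess(raw, fmt, buffer):
    """S12-S18: route parallel data directly, deserialize serial data first."""
    if fmt == "parallel":
        data, timing = raw["data"], raw["timing"]   # S14: timing comes with the data
    elif fmt == "serial":
        data, timing = deserialize(raw["bits"])     # S16
    else:
        raise ValueError("unknown format: " + fmt)
    for t, word in zip(timing, data):               # S18: store per timing signal
        buffer.store(t, word)
    return buffer

buf = preprocess({"bits": [0, 1, 0, 0, 0, 0, 0, 1] * 2}, "serial", OnChipBuffer())
```

The key point the model captures is that both branches converge on the same on-chip buffer, so everything downstream of S18 is format-agnostic.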
The deserialized data includes single-frame deserialized data and the second timing signal includes a single-frame timing signal, and step S18 includes the following step:
S182: sending the single-frame deserialized data to the on-chip buffer 24 for storage according to the single-frame timing signal.
The image preprocessing method of the embodiment of the invention further comprises the following step:
S13: reading and outputting the single-frame deserialized data from the on-chip buffer 24 according to the single-frame timing signal.
Step S13 includes the following steps:
receiving a through signal;
when the through signal is enabled, outputting the single-frame deserialized data to the image signal processor according to the single-frame timing signal;
when the through signal is not enabled, outputting the single-frame deserialized data to the bus interface 26 according to the single-frame timing signal; and
converting the single-frame deserialized data into single-frame output data conforming to the bus protocol standard through the bus interface 26 and storing the single-frame output data into the main memory via the bus 80a.
The image preprocessing apparatus 10 of the embodiment of the present invention includes a first processing module 12, a second processing module 14, a third processing module 16, and a fourth processing module 18. The first processing module 12 is configured to select the input interface 22 according to user input to receive the raw data in a parallel format or a serial format. The second processing module 14 is configured to, when the received raw data is in a parallel format, obtain a first timing signal from the raw data and send the raw data to the on-chip buffer 24 for storage according to the first timing signal. The third processing module 16 is configured to, when the received raw data is in a serial format, convert the raw data into deserialized data in a parallel format and process the deserialized data to obtain a second timing signal. The fourth processing module 18 is configured to send the deserialized data to the on-chip buffer 24 for storage according to the second timing signal. The fourth processing module 18 includes a first sending submodule 182, which is configured to send the single-frame deserialized data to the on-chip buffer 24 for storage according to the single-frame timing signal. The image preprocessing apparatus 10 further includes a second output module 13, which is configured to read and output the single-frame deserialized data from the on-chip buffer 24 according to the single-frame timing signal. The second output module 13 includes a second receiving submodule, a fourth output submodule, a fifth output submodule, and a sixth output submodule. The second receiving submodule is configured to receive the through signal.
The fourth output submodule is configured to output the single-frame deserialized data to the image signal processor according to the single-frame timing signal when the through signal is enabled. The fifth output submodule is configured to output the single-frame deserialized data to the bus interface 26 according to the single-frame timing signal when the through signal is not enabled. The sixth output submodule is configured to convert the single-frame deserialized data into single-frame output data conforming to the bus protocol standard through the bus interface 26 and store the single-frame output data into the main memory via the bus 80a.
In this manner, single-frame deserialized data is output from the image sensor interface 20 for further processing by the image signal processor. Two output modes are provided for the single-frame deserialized data, so that it can be output more flexibly. The single-frame deserialized data has only one frame and does not need frame buffering. Therefore, in the through mode, the single-frame deserialized data does not need to be sent to the main memory over the bus for frame buffering, but is sent directly to the image signal processor, thereby relieving the bus bandwidth requirement.
The image preprocessing device 10 according to the embodiment of the present invention can be applied to a terminal. The terminal includes a memory and a processor. The memory stores computer readable instructions. The instructions, when executed by the processor, cause the processor to perform the image pre-processing method described above.
In some embodiments, the terminal further comprises a bus, an image signal processor, a display controller, and an input device. The user may select the input interface 22 by determining a photographing mode through the input device. The bus connects the image sensor interface 20, the memory, the processor, the image signal processor, the display controller, and the input device. In this manner, information may be transferred between various functional elements of the computer via the bus.
It is understood that the terminal further includes an image sensor (not shown), which inputs the acquired raw image data to the image sensor interface 20. Of course, the image sensor may also send the acquired raw image data to a main memory (the main memory may be a memory or a part of a memory) for storage, and the image sensor interface 20 may then read the raw image data from the main memory. This is not particularly limited herein.
The image preprocessing method, the image preprocessing device 10, and the image sensor interface 20 according to the embodiments of the present invention can select to receive raw data in a serial format or a parallel format: the raw data is buffered directly when it is in a parallel format that the image signal processor can process directly, and is converted into deserialized data in a parallel format and then buffered when it is in a serial format. In this way, an Image Signal Processor (ISP) can process various kinds of data, thereby improving image clarity.
In some embodiments, the image preprocessing method of embodiments of the present invention includes the following step: reading and outputting the raw data from the on-chip buffer 24 according to the first timing signal. In some embodiments, the image preprocessing device 10 includes a first output module, which is configured to read and output the raw data from the on-chip buffer 24 according to the first timing signal. In this manner, raw data is output from the image sensor interface 20 for further processing by the image signal processor. In some embodiments, the first output module may output the raw data directly to the image signal processor. In other embodiments, the first output module may output the raw data to a main memory, from which the image signal processor reads the raw data and performs subsequent processing on it. The specific cases are as follows.
In some embodiments, the image sensor interface 20 includes a bus interface 26, and the step of reading and outputting the raw data from the on-chip buffer 24 according to the first timing signal includes the following steps: receiving a through signal; outputting the raw data to the image signal processor according to the first timing signal when the through signal is enabled; outputting the raw data to the bus interface 26 according to the first timing signal when the through signal is not enabled; and converting the raw data into parallel output data conforming to the bus protocol standard through the bus interface 26 and storing the parallel output data into the main memory via the bus. In some embodiments, the first output module includes a first receiving submodule, a first output submodule, a second output submodule, and a third output submodule. The first receiving submodule is configured to receive the through signal. The first output submodule is configured to output the raw data to the image signal processor according to the first timing signal when the through signal is enabled. The second output submodule is configured to output the raw data to the bus interface 26 according to the first timing signal when the through signal is not enabled. The third output submodule is configured to convert the raw data into parallel output data conforming to the bus protocol standard through the bus interface 26 and store the parallel output data into the main memory via the bus. Thus, two output modes are provided for the raw data, making its output more flexible. Usually, raw data in a parallel format has only one frame, so frame buffering is not necessary.
Thus, in the pass-through mode, raw data in a parallel format does not have to be sent over the bus to the main memory for frame buffering, but is sent directly to the image signal processor, thereby relieving the bus bandwidth requirement.
In some embodiments, the single-frame deserialized data comprises normal single-frame deserialized data, and the single-frame timing signal comprises a normal single-frame timing signal. The image preprocessing method of the embodiment of the invention can thus be applied to normal single-frame deserialized data.
In some embodiments, the single-frame deserialized data comprises infrared single-frame deserialized data, and the single-frame timing signal comprises an infrared single-frame timing signal. The image sensor interface 20 includes an infrared processing unit 28. The image preprocessing method of the embodiment of the invention comprises the following steps: reading infrared correction data from a main memory according to the infrared single-frame timing signal and sending the infrared correction data to the on-chip buffer 24 for storage; reading the infrared correction data from the on-chip buffer 24 according to the infrared single-frame timing signal and sending it to the infrared processing unit 28; processing the infrared correction data by the infrared processing unit 28 to obtain processed infrared correction data; and outputting the processed infrared correction data. In some embodiments, the image preprocessing device 10 includes an infrared acquisition module, an infrared sending module, an infrared processing module, and an infrared output module. The infrared acquisition module is configured to read the infrared correction data from the main memory according to the infrared single-frame timing signal and send it to the on-chip buffer 24 for storage. The infrared sending module is configured to read the infrared correction data from the on-chip buffer 24 according to the infrared single-frame timing signal and send it to the infrared processing unit 28. The infrared processing module is configured to process the infrared correction data through the infrared processing unit 28 to obtain processed infrared correction data. The infrared output module is configured to output the processed infrared correction data. An infrared sensor requires correction data to correct errors of the sensor itself.
In the infrared mode, the infrared correction data is read from the main memory according to the infrared single-frame timing signal and stored in the on-chip buffer 24, and the infrared correction data in the on-chip buffer 24 is then read according to the infrared single-frame timing signal and sent to the infrared processing unit 28, so that the infrared processing unit 28 can process it. The processed infrared correction data is output to the infrared sensor. In this way, the infrared sensor can use the processed infrared correction data to correct its own errors, thereby improving image quality.
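The infrared correction flow can be sketched as below. The doubling in `ir_process` and the subtractive per-pixel correction are placeholder operations chosen only to make the data flow concrete; the patent does not specify what the infrared processing unit 28 computes.

```python
# Sketch of the infrared correction path: correction data is fetched from
# main memory into the on-chip buffer, processed, and used to correct the
# sensor's own error.

def load_correction(main_memory, on_chip_buffer, timing):
    """Read infrared correction data from main memory into the on-chip buffer."""
    on_chip_buffer[timing] = main_memory["ir_correction"]

def ir_process(correction):
    """Placeholder for the infrared processing unit 28: scale raw offsets."""
    return [c * 2 for c in correction]

def correct_frame(frame, processed_correction):
    """Sensor-side correction: subtract the processed per-pixel offset."""
    return [p - c for p, c in zip(frame, processed_correction)]

mem = {"ir_correction": [1, 2, 3]}
cache = {}
load_correction(mem, cache, timing=0)
corrected = correct_frame([100, 100, 100], ir_process(cache[0]))
```

Note the direction of this path: unlike the other modes, data flows from main memory out toward the sensor, which is why it needs its own timing signal and buffer traffic.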
In some embodiments, the deserializing data comprises two-frame deserializing data comprising first long frame deserializing data and first mid frame deserializing data. The second timing signals include a first long frame timing signal and a first mid frame timing signal. Step S18 includes the following steps: the first long frame deserializing data and the first mid frame deserializing data are sent to the on-chip buffer 24 according to the first long frame timing signal and the first mid frame timing signal, respectively. The image preprocessing method of the embodiment of the invention comprises the following steps: the first long frame deserializing data and the first mid frame deserializing data are read from the on-chip buffer 24 and output according to the first long frame timing signal and the first mid frame timing signal, respectively. In some embodiments, the deserializing data comprises two-frame deserializing data comprising first long frame deserializing data and first mid frame deserializing data. The second timing signals include a first long frame timing signal and a first mid frame timing signal. The fourth processing module 18 includes a second transmit submodule. The second sending submodule is configured to send the first long frame deserializing data and the first mid frame deserializing data to the on-chip buffer 24 according to the first long frame timing signal and the first mid frame timing signal, respectively. The image preprocessing device 10 of the embodiment of the present invention includes a third output module. The third output module is configured to read and output the first long frame deserializing data and the first mid frame deserializing data from the on-chip buffer 24 according to the first long frame timing signal and the first mid frame timing signal, respectively. The image pre-processing method may be applied to two-frame deserialized data. 
Since the deserialized data has two frames, the first long frame deserialized data and the first intermediate frame deserialized data are written and output from the on-chip buffer 24 according to the first long frame timing signal and the first intermediate frame timing signal, respectively.
In some embodiments, the image sensor interface 20 includes a bus interface 26. The step of reading and outputting the first long frame deserializing data and the first mid frame deserializing data from the on-chip buffer 24 according to the first long frame timing signal and the first mid frame timing signal, respectively, includes the steps of: outputting the first long frame deserializing data and the first mid frame deserializing data to the bus interface 26 according to the first long frame timing signal and the first mid frame timing signal, respectively; and converts the first long frame deserializing data and the first intermediate frame deserializing data into first long frame output data and first intermediate frame output data conforming to the bus protocol standard through the bus interface 26 and stores them into the main memory through the bus. In some embodiments, the image sensor interface 20 includes a bus interface 26. The third output module includes a seventh output sub-module and an eighth output sub-module. The seventh output submodule is configured to output the first long frame deserializing data and the first mid frame deserializing data to the bus interface 26 according to the first long frame timing signal and the first mid frame timing signal, respectively. The eighth output submodule is configured to convert the first long frame deserializing data and the first medium frame deserializing data into first long frame output data and first medium frame output data meeting the bus protocol standard through the bus interface 26, and store the first long frame deserializing data and the first medium frame output data in the main memory through the bus. 
Since the first long frame deserializing data and the first intermediate frame deserializing data are not synchronized, they need to be converted into first long frame output data and first intermediate frame output data conforming to the bus protocol standard through the bus interface 26 and stored into the main memory through the bus for frame buffering, so that the image signal processor can read the first long frame output data and the first intermediate frame output data from the main memory and perform subsequent processing on them.
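The routing described above can be sketched as a small model: each frame type is committed to its own main-memory region only while its timing signal is asserted, so the image signal processor can later read the frames back independently. All names here (`FrameRouter`, `"LONG"`, `"MID"`) are illustrative assumptions, not taken from the patent.

```python
class FrameRouter:
    """Writes each frame type to its own main-memory region so the
    image signal processor can read the frames back independently."""

    def __init__(self):
        # One region per frame type (a stand-in for main-memory buffers).
        self.main_memory = {"LONG": [], "MID": []}

    def on_line(self, frame_type, timing_valid, line_pixels):
        # A line is committed only while its timing signal is asserted.
        if timing_valid:
            self.main_memory[frame_type].append(list(line_pixels))

router = FrameRouter()
router.on_line("LONG", True,  [10, 20, 30])
router.on_line("MID",  True,  [1, 2, 3])
router.on_line("MID",  False, [9, 9, 9])   # timing not asserted: dropped
```

Because the two frames arrive unsynchronized, keeping them in separate regions is what allows a later stage (such as wide dynamic fusion) to consume them as a matched pair.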
In some embodiments, the deserializing data comprises three-frame deserializing data, the three-frame deserializing data comprising second long frame deserializing data, second intermediate frame deserializing data, and short frame deserializing data, and the second timing signals include a second long frame timing signal, a second intermediate frame timing signal, and a short frame timing signal. Step S18 includes the following step: sending the second long frame deserializing data, the second intermediate frame deserializing data, and the short frame deserializing data to the on-chip buffer 24 for storage according to the second long frame timing signal, the second intermediate frame timing signal, and the short frame timing signal, respectively. The image preprocessing method of the embodiment of the invention further comprises the following step: reading the second long frame deserializing data, the second intermediate frame deserializing data, and the short frame deserializing data from the on-chip buffer 24 and outputting them according to the second long frame timing signal, the second intermediate frame timing signal, and the short frame timing signal, respectively. Correspondingly, the fourth processing module 18 includes a third sending submodule configured to send the second long frame deserializing data, the second intermediate frame deserializing data, and the short frame deserializing data to the on-chip buffer 24 for storage according to the second long frame timing signal, the second intermediate frame timing signal, and the short frame timing signal, respectively.
The image preprocessing device 10 of the embodiment of the present invention includes a fourth output module configured to read and output the second long frame deserializing data, the second intermediate frame deserializing data, and the short frame deserializing data from the on-chip buffer 24 according to the second long frame timing signal, the second intermediate frame timing signal, and the short frame timing signal, respectively. The image preprocessing method according to the embodiment of the present invention is thus applicable to three-frame deserializing data: since the deserializing data has three frames, the second long frame deserializing data, the second intermediate frame deserializing data, and the short frame deserializing data are written into the on-chip buffer 24 and output from it according to the second long frame timing signal, the second intermediate frame timing signal, and the short frame timing signal, respectively.
In some embodiments, the image sensor interface 20 includes a bus interface 26, and the step of reading and outputting the second long frame deserializing data, the second intermediate frame deserializing data, and the short frame deserializing data from the on-chip buffer 24 according to the second long frame timing signal, the second intermediate frame timing signal, and the short frame timing signal, respectively, includes the steps of: sending the second long frame deserializing data, the second intermediate frame deserializing data, and the short frame deserializing data to the bus interface 26 according to the second long frame timing signal, the second intermediate frame timing signal, and the short frame timing signal, respectively; and converting the second long frame deserializing data, the second intermediate frame deserializing data, and the short frame deserializing data into second long frame output data, second intermediate frame output data, and short frame output data conforming to the bus protocol standard through the bus interface 26 and storing them into the main memory through the bus. Correspondingly, the fourth output module includes a ninth output submodule configured to send the second long frame deserializing data, the second intermediate frame deserializing data, and the short frame deserializing data to the bus interface 26 according to the second long frame timing signal, the second intermediate frame timing signal, and the short frame timing signal, respectively, and a tenth output submodule configured to convert the second long frame deserializing data, the second intermediate frame deserializing data, and the short frame deserializing data into second long frame output data, second intermediate frame output data, and short frame output data conforming to the bus protocol standard through the bus interface 26 and store the second long frame output data, the second intermediate frame output data, and the short frame output data in the main memory through the bus.
Since the second long frame deserializing data, the second intermediate frame deserializing data, and the short frame deserializing data are not synchronized, they need to be converted into second long frame output data, second intermediate frame output data, and short frame output data conforming to the bus protocol standard through the bus interface 26 and stored into the main memory through the bus for frame buffering, so that the image signal processor can read the second long frame output data, the second intermediate frame output data, and the short frame output data from the main memory and perform subsequent processing on them.
The division of the units in the image preprocessing device 10 is only for illustration, and in other embodiments, the image preprocessing device 10 may be divided into different units as needed to complete all or part of the functions of the image preprocessing device 10.
The image sensor interface 20 according to the embodiment of the present invention includes an input interface 22, a serial data processor 21, an on-chip buffer 24, and a buffer controller 23. The input interface 22 includes a parallel input interface (not shown) for receiving original data in a parallel format, which includes a first timing signal, and a serial input interface (not shown) for receiving original data in a serial format. The serial data processor 21 is configured to convert the original data in the serial format into deserializing data in a parallel format and perform deserialization processing on the deserializing data to obtain a second timing signal. The input interface 22 is configured to send the original data in the parallel format to the on-chip buffer 24 for storage according to the first timing signal, and the serial data processor 21 is configured to send the deserializing data to the on-chip buffer 24 for storage according to the second timing signal. The buffer controller 23 is configured to control data input, reading, writing, and output of the on-chip buffer 24.
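The two input paths can be sketched as follows: parallel raw data goes straight to the on-chip buffer, while serial raw data is first deserialized (a serial bitstream packed into parallel words). The 8-bit word width, MSB-first ordering, and all function names are assumptions for illustration; the patent does not fix these details.

```python
WORD_BITS = 8  # assumed parallel word width

def deserialize(bitstream):
    """Pack a serial bitstream into parallel 8-bit words (MSB first)."""
    words = []
    for i in range(0, len(bitstream) - WORD_BITS + 1, WORD_BITS):
        word = 0
        for bit in bitstream[i:i + WORD_BITS]:
            word = (word << 1) | bit
        words.append(word)
    return words

def receive(raw, serial_mode, on_chip_buffer):
    # Serial input is converted to parallel words before buffering;
    # parallel input is buffered as-is.
    data = deserialize(raw) if serial_mode else list(raw)
    on_chip_buffer.extend(data)
    return data

buf = []
receive([0, 0, 0, 0, 0, 0, 1, 1], serial_mode=True, on_chip_buffer=buf)
receive([7, 8, 9], serial_mode=False, on_chip_buffer=buf)
```

In the real interface the buffer controller, not the caller, arbitrates these writes; the sketch only shows the format conversion that precedes buffering.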
In the through mode, the timing signal is output from the serial data processor 21 to the image signal processor, and the video data is read from the on-chip buffer 24 by the buffer controller 23 and output.
In some embodiments, the image sensor interface 20 includes a bus interface 26. The bus interface 26 is used to convert data output through the bus interface 26 into a format conforming to a bus protocol standard and to be transmitted to the main memory through the bus.
In some embodiments, the image sensor interface 20 includes a bus interface 26 and an infrared processing unit 28. The bus interface 26 is configured to read, from the main memory through the bus, infrared correction data for correcting errors of the infrared sensor itself and to send the infrared correction data to the infrared processing unit 28. The infrared processing unit 28 is configured to receive, process, and transmit the infrared correction data.
Referring to fig. 4, an image processing method according to an embodiment of the invention is applied to an image processing apparatus 30. The image processing method of the embodiment of the invention comprises the following steps:
S32: acquiring data processed by the image preprocessing method to obtain data to be processed;
S34: processing the data to be processed to obtain data to be split;
S36: dividing the data to be split into luminance path data and chrominance path data;
S38: processing the luminance path data to obtain luminance data; and
S39: processing the chrominance path data to obtain chrominance data.
Step S38 includes the following step:
S382: performing interpolation processing on the luminance path data to obtain the luminance data.
Step S39 includes the following step:
S392: performing interpolation processing on the chrominance path data to obtain the chrominance data.
The image processing method further comprises the following steps:
S33: outputting the luminance data; and
S35: outputting the chrominance data.
Referring to fig. 5, the image processing apparatus 30 according to the embodiment of the present invention includes an obtaining unit 32, a first processing unit 34, a splitting unit 36, a second processing unit 38, and a third processing unit 39. The obtaining unit 32 is configured to obtain the data processed by the image preprocessing method to obtain the data to be processed. The first processing unit 34 is configured to process the data to be processed to obtain data to be split. The splitting unit 36 is configured to divide the data to be split into luminance path data and chrominance path data. The second processing unit 38 is configured to process the luminance path data to obtain luminance data. The third processing unit 39 is configured to process the chrominance path data to obtain chrominance data. The second processing unit 38 comprises a luminance interpolation subunit 382 configured to perform interpolation processing on the luminance path data, and the image processing apparatus 30 of the embodiment of the present invention includes a luminance output unit configured to output the interpolated luminance data. The luminance path data is interpolated, i.e., demosaiced, so that the luminance path data of the original domain is converted into luminance data of the RGB domain for display. Similarly, the third processing unit 39 comprises a chrominance interpolation subunit 392 configured to perform interpolation processing on the chrominance path data, and the image processing apparatus 30 of the embodiment of the present invention includes a chrominance output unit 35 configured to output the interpolated chrominance data.
Performing interpolation processing on the chrominance path data, i.e., demosaicing it, converts the chrominance path data of the original domain into chrominance data of the RGB domain so that the chrominance data can be displayed.
The image processing apparatus 30 according to the embodiment of the present invention can be applied to a terminal including a memory and a processor. The memory stores computer readable instructions. The instructions, when executed by the processor, cause the processor to perform the image processing method of any of the embodiments.
In some embodiments, the terminal further comprises a bus, an image signal processor, a display controller, and an input device. The bus connects the image sensor interface, the memory, the processor, the image signal processor, the display controller, and the input device. In this manner, information may be transferred between various functional elements of the computer via the bus.
In some embodiments, the image signal processor acquires image data directly output from the image sensor interface 20b to the image signal processor to obtain data to be processed. In some embodiments, the image signal processor reads the image data processed by the image preprocessing method and stored to the main memory from the main memory (the main memory may be a memory or a part of a memory) through the bus to obtain the data to be processed.
The image processing method, the image processing device 30, the computer-readable storage medium, and the terminal of the embodiment of the invention process the luminance path data and the chrominance path data through the luminance path and the chrominance path, respectively, so that the image processing is more flexible. The luminance path and the chrominance path may pass through different algorithm processing modules as needed, reducing latency and memory resources while increasing the rate of data processing.
In some embodiments, when the data to be processed includes first long frame output data and first intermediate frame output data, step S34 includes the following step: performing wide dynamic fusion on the first long frame output data and the first intermediate frame output data to obtain the data to be split. In some embodiments, when the data to be processed includes the first long frame output data and the first intermediate frame output data, the first processing unit 34 includes a first fusion subunit 342 configured to perform wide dynamic fusion on the first long frame output data and the first intermediate frame output data to obtain the data to be split. Performing wide dynamic fusion on the first long frame output data and the first intermediate frame output data fuses the two frames of data into one frame, so that the image has a wider dynamic range and meets the display requirements of images of different scenes.
In some embodiments, when the data to be processed includes second long frame output data, second intermediate frame output data, and short frame output data, step S34 includes the following step: performing wide dynamic fusion on the second long frame output data, the second intermediate frame output data, and the short frame output data to obtain the data to be split. In some embodiments, when the data to be processed includes the second long frame output data, the second intermediate frame output data, and the short frame output data, the first processing unit 34 includes a second fusion subunit 344 configured to perform wide dynamic fusion on the second long frame output data, the second intermediate frame output data, and the short frame output data to obtain the data to be split. Performing wide dynamic fusion on the second long frame output data, the second intermediate frame output data, and the short frame output data fuses the three frames of data into one frame, so that the image has a wider dynamic range and meets the display requirements of images of different scenes.
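Wide dynamic fusion of long, intermediate, and short exposures can be sketched per pixel: the long frame is preferred for its signal-to-noise ratio, and shorter exposures (rescaled by their exposure ratios) are substituted where the longer ones saturate. The saturation threshold, the exposure ratios, and this whole weighting scheme are simplified assumptions; the patent does not specify the fusion algorithm.

```python
SAT = 250  # saturation threshold (an assumed value)

def fuse(long_f, mid_f, short_f, ratio_lm=4.0, ratio_ls=16.0):
    """Fuse long/intermediate/short exposures into one wide-dynamic frame."""
    out = []
    for l, m, s in zip(long_f, mid_f, short_f):
        if l < SAT:
            out.append(float(l))        # long frame valid: best SNR
        elif m < SAT:
            out.append(m * ratio_lm)    # long saturated: rescale intermediate
        else:
            out.append(s * ratio_ls)    # both saturated: rescale short
    return out

fused = fuse([100, 255, 255], [30, 120, 255], [8, 30, 40])
```

The fused values exceed the 8-bit input range, which is exactly the point: the result carries a wider dynamic range than any single exposure, and a later tone-mapping stage compresses it for display.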
Referring to fig. 6, in some embodiments, step S34 further includes the following step: performing 3A statistics on the data to be processed. In certain embodiments, the first processing unit 34 includes a 3A statistics subunit 346 configured to perform 3A statistics on the data to be processed. The 3A statistics yield statistical data related to automatic exposure, automatic white balance, and automatic focusing, and software automatically controls the exposure and focusing of the image sensor, as well as the gain of the white balance module, according to the statistical data.
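A toy version of such 3A statistics is sketched below: a mean brightness for auto exposure, gray-world channel gains for auto white balance, and a green-channel gradient sum as a focus metric. Real 3A blocks accumulate these over windowed grids; collapsing everything to one window, and the gray-world assumption itself, are simplifications not taken from the patent.

```python
def stats_3a(r, g, b):
    """Return an AE brightness mean, AWB gray-world gains, and an AF
    sharpness proxy (sum of horizontal differences on the green channel)."""
    n = len(g)
    mean_y = sum((ri + gi + bi) / 3.0 for ri, gi, bi in zip(r, g, b)) / n
    mr, mg, mb = sum(r) / n, sum(g) / n, sum(b) / n
    awb_gains = (mg / mr, 1.0, mg / mb)   # gray-world white balance assumption
    focus = sum(abs(g[i + 1] - g[i]) for i in range(n - 1))
    return mean_y, awb_gains, focus

ae, wb, af = stats_3a(r=[30, 90], g=[60, 60], b=[120, 120])
```

The control software would then steer exposure toward a target `ae`, program `wb` into the white balance module, and drive the lens to maximize `af`.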
In certain embodiments, step S34 further includes the following step: performing white balance correction. In some embodiments, the first processing unit 34 includes a white balance correction subunit 348 configured to perform white balance correction. Performing white balance correction enables the subsequent units to correctly restore the colors of the image.
In certain embodiments, step S34 further includes the following step: performing dead pixel correction. In some embodiments, the first processing unit 34 includes a dead pixel correction subunit 341 configured to perform dead pixel correction. Because an image sensor contains a very large number of photosensitive elements, defective (dead) pixels are likely to appear; performing dead pixel correction eliminates them and reduces their influence on subsequent processing.
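A minimal dead-pixel-correction sketch: a pixel that deviates strongly from the median of its neighbors is treated as defective and replaced by that median. The two-neighbor window and the threshold value are illustrative choices, not details given by the patent.

```python
import statistics

def correct_dead_pixels(line, threshold=150):
    """Replace pixels that differ from their neighbor median by > threshold."""
    out = list(line)
    for i in range(1, len(line) - 1):
        med = statistics.median([line[i - 1], line[i + 1]])
        if abs(line[i] - med) > threshold:   # likely a stuck/hot pixel
            out[i] = med
    return out

fixed = correct_dead_pixels([50, 52, 255, 54, 56])
```

The stuck pixel (255) is pulled back to its neighborhood while the healthy pixels around it are left untouched.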
In certain embodiments, step S34 further includes the following step: performing lens vignetting compensation. In some embodiments, the first processing unit 34 includes a lens vignetting compensation subunit 343 configured to perform lens vignetting compensation. As the field angle increases, the light flux that can pass through the camera lens gradually decreases, so the captured image is brighter in the middle and darker at the edges, i.e., the image brightness is uneven. Lens vignetting compensation eliminates this adverse effect and improves the accuracy of subsequent processing.
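The compensation can be sketched as a radial gain that is 1.0 at the optical center and grows toward the corners, lifting the darker edges back toward the center brightness. The quadratic gain model and the `strength` parameter are assumptions; real pipelines typically calibrate a per-position gain table.

```python
def vignetting_gain(x, y, cx, cy, strength=0.5):
    """Radial gain: 1.0 at the optical center, rising toward the corners."""
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    r2_max = (cx ** 2 + cy ** 2) or 1    # normalize by the corner distance
    return 1.0 + strength * (r2 / r2_max)

def compensate_row(row, y, cx, cy):
    # Apply the positional gain and clamp to the 8-bit range.
    return [min(255.0, v * vignetting_gain(x, y, cx, cy)) for x, v in enumerate(row)]

center_gain = vignetting_gain(2, 2, cx=2, cy=2)   # at the optical center
corner_gain = vignetting_gain(0, 0, cx=2, cy=2)   # at the image corner
```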
In certain embodiments, step S34 further includes the following step: performing transparency calculation to obtain the transparency and sending the transparency to the main memory for storage. In some embodiments, the first processing unit 34 comprises a transparency calculation subunit 345 configured to perform the transparency calculation and send the transparency to the main memory for storage. The transparency stored in the main memory can then be read and used by the subsequent defogging subunit.
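One common way to compute such a transparency (transmission) map is in the spirit of the dark-channel prior: hazy regions have a bright minimum channel, so their transparency is low. The atmospheric-light value, the 0.95 weight, and the formulation as a whole are standard defogging assumptions, not details given by the patent.

```python
AIRLIGHT = 255.0  # assumed atmospheric light

def transparency(pixels, omega=0.95):
    """Per-pixel transmission estimate t = 1 - omega * min(R,G,B) / A."""
    return [round(1.0 - omega * min(p) / AIRLIGHT, 3) for p in pixels]

# Stand-in for the main memory that the defogging subunit reads later.
main_memory = {}
main_memory["transparency"] = transparency([(255, 255, 255), (0, 10, 20)])
```

A fully hazy (white) pixel gets a transparency near zero, while a dark, haze-free pixel gets a transparency near one, which is exactly the map the defogging step needs.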
In certain embodiments, step S34 further includes the following step: performing 3D noise reduction and outputting a noise reduction result and a motion error. In some embodiments, the first processing unit 34 includes a 3D noise reduction subunit 347 configured to perform 3D noise reduction and output the noise reduction result and the motion error. The 3D noise reduction subunit 347 receives a reference motion error and a reference image frame as inputs and performs 3D noise reduction according to them. The noise reduction result and the motion error can be sent to the main memory for storage or output directly to the subsequent subunits that need them.
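A minimal 3D (temporal) noise-reduction sketch: each pixel of the current frame is blended with the reference frame, but only where the motion error is small, so static noise is averaged away while moving content is not smeared. The motion threshold and the 50/50 blend are illustrative parameter choices.

```python
def denoise_3d(cur, ref, motion_threshold=20):
    """Blend current and reference frames; return result and motion error."""
    out, motion_error = [], []
    for c, r in zip(cur, ref):
        err = abs(c - r)
        motion_error.append(err)
        if err < motion_threshold:      # static region: average with reference
            out.append((c + r) / 2.0)
        else:                           # moving region: keep the current pixel
            out.append(float(c))
    return out, motion_error

result, err = denoise_3d([100, 200], ref=[104, 120])
```

Note that the motion error is returned alongside the result: in the pipeline it is reused by the OSD adjustment and the sharpening subunits, which is why it is worth storing rather than discarding.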
In certain embodiments, step S34 further includes the following step: performing on-screen display (OSD) adjustment. In some embodiments, the first processing unit 34 includes an OSD adjustment subunit 349 configured to perform the OSD adjustment. The OSD adjustment subunit 349 receives the noise reduction result and the motion error output by the 3D noise reduction subunit 347; when the OSD adjustment display is enabled, it maps the motion error to a certain color and superimposes it on the image, and when the OSD adjustment is not enabled, the input is delayed and then output directly. In this way, the motion in the image can be observed conveniently and intuitively, enabling adaptive noise reduction, enhancement, and the like for images of various scenes.
Referring to fig. 7, in some embodiments, step S38 includes the following step: applying tone mapping to the luminance path data. In some embodiments, the second processing unit 38 includes a luminance tone mapping subunit 384 configured to apply tone mapping to the luminance path data. By applying tone mapping to the luminance path data, a suitable dynamic range can be presented on the display device, and the contrast and detail of the image can be rendered better to a certain extent. Furthermore, applying tone mapping only to the luminance path data and not to the chrominance path data avoids the color cast and distortion that tone mapping can otherwise cause.
In certain embodiments, step S38 includes the following step: performing color matrix correction. In some embodiments, the second processing unit 38 includes a luminance color matrix correction subunit 386 configured to perform the color matrix correction.
In certain embodiments, step S38 includes the steps of: and carrying out defogging treatment according to the transparency. In certain embodiments, the second processing unit 38 includes a defogging subunit 388. The defogging subunit 388 is configured to perform a defogging process according to the transparency.
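The defogging step that consumes the stored transparency can be sketched by inverting the standard haze model I = J·t + A·(1 − t) to recover the scene radiance J. The atmospheric-light value and the clamp at t = 0.1 (a common safeguard against division by near-zero transmission) are assumptions, not details from the patent.

```python
AIRLIGHT = 255.0  # assumed atmospheric light, matching the haze model

def defog(luma, transparency):
    """Recover J = (I - A*(1 - t)) / t for each pixel."""
    out = []
    for i_val, t in zip(luma, transparency):
        t = max(t, 0.1)                       # avoid dividing by ~0
        out.append(round((i_val - AIRLIGHT * (1 - t)) / t, 2))
    return out

clear = defog([200.0, 100.0], transparency=[0.5, 1.0])
```

A half-transmitting (hazy) pixel has its haze contribution subtracted and its contrast restored, while a fully transparent pixel passes through unchanged.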
In certain embodiments, step S38 includes the steps of: color space conversion is performed. In some embodiments, the second processing unit 38 includes a luminance color space conversion subunit 381. The luminance color space conversion subunit 381 is used to perform color space conversion. The luminance interpolation subunit 382 converts the luminance path data from the data of the original domain into the data of the RGB domain, and the luminance color space conversion subunit 381 converts the data of the RGB domain into the data of the YCbCr domain.
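The RGB-to-YCbCr conversion performed by the luminance color space conversion subunit can be sketched with the BT.601 full-range matrix, a standard choice; the patent does not name the exact coefficients, so this matrix is an assumption.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr conversion (8-bit, offset 128)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return round(y, 2), round(cb, 2), round(cr, 2)

y, cb, cr = rgb_to_ycbcr(255, 255, 255)   # white: full luma, neutral chroma
```

Neutral gray and white map to Cb = Cr = 128, which is what makes the subsequent luminance-only and chrominance-only processing stages separable.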
In certain embodiments, step S38 includes the steps of: 2D noise reduction is performed. In some embodiments, the second processing unit 38 includes a luminance 2D noise reduction subunit 383. The luminance 2D noise reduction subunit 383 is used for performing 2D noise reduction, further removing noise on image luminance, and improving the quality of an image.
In certain embodiments, step S38 includes the following step: performing sharpening processing. In some embodiments, the second processing unit 38 includes a sharpening subunit 385 configured to perform the sharpening processing. The sharpening subunit 385 also receives a motion error input; this motion error is shared with the 3D noise reduction subunit, so no additional computing resources are required, and the sharpening can be adapted to the motion in the image. For example, a statically stable region can be given a larger sharpening gain while a moving region is given a smaller one, which better meets the visual requirements of the human eye.
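Motion-adaptive sharpening can be sketched as an unsharp mask whose gain shrinks as the shared motion error grows, so static regions are sharpened more strongly than moving ones. The high-pass kernel and the gain mapping are illustrative assumptions.

```python
def sharpen(line, motion_error, base_gain=1.0):
    """Unsharp mask with gain attenuated by the shared motion error."""
    out = [line[0]]
    for i in range(1, len(line) - 1):
        detail = line[i] - (line[i - 1] + line[i + 1]) / 2.0  # high-pass term
        gain = base_gain / (1.0 + motion_error[i] / 32.0)     # less gain when moving
        out.append(line[i] + gain * detail)
    out.append(line[-1])
    return out

static = sharpen([10, 20, 10], motion_error=[0, 0, 0])    # full gain
moving = sharpen([10, 20, 10], motion_error=[0, 32, 0])   # halved gain
```

Because the motion error is the one already computed by the 3D noise reduction subunit, this adaptivity comes essentially for free, as the text notes.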
Referring to fig. 8, in some embodiments, step S39 includes the following step: performing color matrix correction. In some embodiments, the third processing unit 39 comprises a chrominance color matrix correction subunit 394 configured to perform the color matrix correction. Performing color matrix correction on an image allows the colors of the image to be adjusted more finely.
In certain embodiments, step S39 includes the steps of: and carrying out chrominance color space conversion. In some embodiments, the third processing unit 39 comprises a chrominance color space conversion subunit 396. The chrominance color space conversion subunit 396 is configured to perform chrominance color space conversion. The chrominance interpolation subunit 392 converts the chrominance path data from the data of the original domain into the data of the RGB domain, and the chrominance color space conversion subunit 396 converts the data of the RGB domain into the data of the YCbCr domain.
In certain embodiments, step S39 includes the following step: performing chrominance correction. In some embodiments, the third processing unit 39 comprises a chrominance correction subunit 398 configured to perform the chrominance correction.
In certain embodiments, step S39 includes the steps of: 2D noise reduction is performed. In some embodiments, third processing unit 39 includes a chrominance 2D noise reduction sub-unit 391. The chrominance 2D noise reduction subunit 391 is configured to perform 2D noise reduction to further remove noise on image chrominance.
In summary, the image preprocessing method, the image preprocessing device 10, the image sensor interface 20, the image processing method, and the image processing device 30 according to the embodiment of the present invention have the following advantages: first, the image signal processor is enabled to support a plurality of input data formats of the image sensor (e.g., a raw data format input in parallel, a raw data format input serially in multiple frames, an input format of an infrared sensor, etc.); second, the image signal processor is enabled to support complex algorithmic processing (e.g., wide dynamic processing of multi-frame input, image defogging, etc.) and more efficient adaptive processing for image noise reduction and enhancement; third, in the chrominance and luminance processing domains, a more flexible data processing path mechanism is provided in which the chrominance path is independent of the luminance path; fourth, the system bandwidth, power consumption, and storage resources (including the on-chip static random access memory, the main memory, and the like) can be flexibly configured according to different input modes and configuration modes, so that the System-on-a-Chip (SoC) works in an optimal state.
The division of the units in the image processing apparatus 30 is only for illustration, and in other embodiments, the image processing apparatus 30 may be divided into different units as needed to complete all or part of the functions of the image processing apparatus 30.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (20)

  1. An image preprocessing method for an image sensor interface, the image sensor interface comprising an input interface and an on-chip buffer, the image preprocessing method comprising the steps of:
    selecting the input interface according to user input to receive original data in a parallel format or a serial format;
    when the received original data is in a parallel format, acquiring a first timing signal from the original data and sending the original data to the on-chip cache for storage according to the first timing signal;
    when the received original data is in a serial format, converting the original data into deserializing data in a parallel format and performing deserializing processing on the deserializing data to obtain a second timing signal; and
    sending the deserializing data to the on-chip cache for storage according to the second timing signal;
    the deserializing data comprises single-frame deserializing data, the second timing signal comprises a single-frame timing signal, and the step of sending the deserializing data to the on-chip cache for storage according to the second timing signal comprises the following steps:
    sending the single-frame deserializing data to the on-chip cache for storage according to the single-frame timing signal;
    the image preprocessing method comprises the following steps:
    reading and outputting the single-frame deserializing data from the on-chip cache according to the single-frame timing signal;
    the step of reading and outputting the single-frame deserializing data from the on-chip buffer according to the single-frame timing signal comprises the following steps:
    receiving a through signal;
    when the through signal is enabled, outputting the single-frame deserializing data to an image signal processor according to the single-frame timing signal;
    when the through signal is not enabled, outputting the single-frame deserializing data to a bus interface according to the single-frame timing signal; and
    and converting the single-frame deserializing data into single-frame output data which accords with the bus protocol standard through the bus interface and storing the single-frame output data into a main memory through a bus.
  2. The image preprocessing method of claim 1, wherein the single-frame deserialized data comprises infrared single-frame deserialized data, the single-frame timing signal comprises an infrared single-frame timing signal, and the image sensor interface comprises an infrared processing unit, the image preprocessing method further comprising the following steps:
    reading infrared correction data from the main memory according to the infrared single-frame timing signal and sending the infrared correction data to the on-chip cache for storage;
    reading the infrared correction data from the on-chip cache according to the infrared single-frame timing signal and sending the infrared correction data to the infrared processing unit;
    processing the infrared correction data by the infrared processing unit to obtain processed infrared correction data; and
    outputting the processed infrared correction data.
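Claim 10 below says the infrared correction data compensates errors of the infrared sensor itself. The claims do not fix the correction formula; a common realization is per-pixel two-point (gain/offset) non-uniformity correction, sketched here as an assumption:

```python
# Hypothetical two-point non-uniformity correction for an infrared frame.
# The claims only state that correction data compensates sensor error;
# per-pixel gain and offset tables are one common realization, assumed here.

def ir_correct(frame, gains, offsets):
    """Apply per-pixel gain/offset correction: out = gain * raw + offset."""
    return [
        [g * p + o for p, g, o in zip(row, grow, orow)]
        for row, grow, orow in zip(frame, gains, offsets)
    ]

raw     = [[100, 102], [ 98, 101]]   # raw IR pixel values
gains   = [[1.0, 0.9], [1.1, 1.0]]   # per-pixel gain table (correction data)
offsets = [[  0,  10], [ -5,   1]]   # per-pixel offset table (correction data)
corrected = ir_correct(raw, gains, offsets)
```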
  3. The image preprocessing method according to claim 1, wherein the deserialized data comprises two-frame deserialized data, the two-frame deserialized data comprises first long-frame deserialized data and first mid-frame deserialized data, the second timing signal comprises a first long-frame timing signal and a first mid-frame timing signal, and the step of sending the deserialized data to the on-chip cache for storage according to the second timing signal comprises the following steps:
    sending the first long-frame deserialized data and the first mid-frame deserialized data to the on-chip cache according to the first long-frame timing signal and the first mid-frame timing signal, respectively;
    the image preprocessing method further comprises the following steps:
    reading and outputting the first long-frame deserialized data and the first mid-frame deserialized data from the on-chip cache according to the first long-frame timing signal and the first mid-frame timing signal, respectively;
    the step of reading and outputting the first long-frame deserialized data and the first mid-frame deserialized data from the on-chip cache according to the first long-frame timing signal and the first mid-frame timing signal, respectively, comprises the following steps:
    outputting the first long-frame deserialized data and the first mid-frame deserialized data to the bus interface according to the first long-frame timing signal and the first mid-frame timing signal, respectively; and
    converting, through the bus interface, the first long-frame deserialized data and the first mid-frame deserialized data into first long-frame output data and first mid-frame output data conforming to a bus protocol standard, and storing the first long-frame output data and the first mid-frame output data into a main memory through a bus.
  4. The image preprocessing method according to claim 1, wherein the deserialized data comprises three-frame deserialized data, the three-frame deserialized data comprises second long-frame deserialized data, second mid-frame deserialized data, and short-frame deserialized data, the second timing signal comprises a second long-frame timing signal, a second mid-frame timing signal, and a short-frame timing signal, and the step of sending the deserialized data to the on-chip cache for storage according to the second timing signal comprises the following steps:
    sending the second long-frame deserialized data, the second mid-frame deserialized data, and the short-frame deserialized data to the on-chip cache for storage according to the second long-frame timing signal, the second mid-frame timing signal, and the short-frame timing signal, respectively;
    the image preprocessing method further comprises the following steps:
    reading and outputting the second long-frame deserialized data, the second mid-frame deserialized data, and the short-frame deserialized data from the on-chip cache according to the second long-frame timing signal, the second mid-frame timing signal, and the short-frame timing signal, respectively;
    the step of reading and outputting the second long-frame deserialized data, the second mid-frame deserialized data, and the short-frame deserialized data from the on-chip cache according to the respective timing signals comprises the following steps:
    sending the second long-frame deserialized data, the second mid-frame deserialized data, and the short-frame deserialized data to the bus interface according to the second long-frame timing signal, the second mid-frame timing signal, and the short-frame timing signal, respectively; and
    converting, through the bus interface, the second long-frame deserialized data, the second mid-frame deserialized data, and the short-frame deserialized data into second long-frame output data, second mid-frame output data, and short-frame output data conforming to a bus protocol standard, and storing the second long-frame output data, the second mid-frame output data, and the short-frame output data into a main memory through a bus.
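Claims 1, 3 and 4 differ only in how many per-frame timing signals gate the write into the on-chip cache (one; long+mid; long+mid+short). The common dispatch can be sketched generically; the dictionary-based cache and the frame-type keys are illustrative assumptions:

```python
# Generic sketch of the per-frame-type write gating in claims 1/3/4.
# Each frame type (single, long, mid, short) has its own timing signal;
# a deserialized frame is written to the on-chip cache only in the cycle
# in which its timing signal is asserted.

def store_frames(cache, frames, timing):
    """Append each deserialized frame whose timing signal is asserted."""
    for kind, data in frames.items():
        if timing.get(kind, False):
            cache.setdefault(kind, []).append(data)
    return cache

cache = {}
# three-frame (claim 4) case: long, mid and short exposures of one scene
store_frames(cache,
             {"long": "L0", "mid": "M0", "short": "S0"},
             {"long": True, "mid": True, "short": True})
# next cycle only the short-frame timing signal fires
store_frames(cache, {"short": "S1", "long": "L1"},
             {"short": True, "long": False})
```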
  5. An image preprocessing apparatus, characterized by comprising:
    a first processing module, configured to select an input interface according to user input to receive raw data in a parallel format or a serial format;
    a second processing module, configured to, when the received raw data is in a parallel format, acquire a first timing signal from the raw data and send the raw data to the on-chip cache for storage according to the first timing signal;
    a third processing module, configured to, when the received raw data is in a serial format, convert the raw data into deserialized data in a parallel format and perform deserialization on the deserialized data to obtain a second timing signal; and
    a fourth processing module, configured to send the deserialized data to the on-chip cache for storage according to the second timing signal;
    wherein the deserialized data comprises single-frame deserialized data, the second timing signal comprises a single-frame timing signal, and the fourth processing module comprises:
    a first sending submodule, configured to send the single-frame deserialized data to the on-chip cache for storage according to the single-frame timing signal;
    the image preprocessing apparatus further comprises:
    a second output module, configured to read and output the single-frame deserialized data from the on-chip cache according to the single-frame timing signal;
    the second output module comprises:
    a second receiving submodule, configured to receive a pass-through signal;
    a fourth output submodule, configured to output the single-frame deserialized data to an image signal processor according to the single-frame timing signal when the pass-through signal is enabled;
    a fifth output submodule, configured to output the single-frame deserialized data to the bus interface according to the single-frame timing signal when the pass-through signal is not enabled; and
    a sixth output submodule, configured to convert, through the bus interface, the single-frame deserialized data into single-frame output data conforming to a bus protocol standard and store the single-frame output data into the main memory through a bus.
  6. The image preprocessing apparatus of claim 5, wherein the single-frame deserialized data comprises infrared single-frame deserialized data, the single-frame timing signal comprises an infrared single-frame timing signal, and the image sensor interface comprises an infrared processing unit, the image preprocessing apparatus further comprising:
    an infrared acquisition module, configured to read infrared correction data from the main memory according to the infrared single-frame timing signal and send the infrared correction data to the on-chip cache for storage;
    an infrared sending module, configured to read the infrared correction data from the on-chip cache according to the infrared single-frame timing signal and send the infrared correction data to the infrared processing unit;
    an infrared processing module, configured to process the infrared correction data through the infrared processing unit to obtain processed infrared correction data; and
    an infrared output module, configured to output the processed infrared correction data.
  7. The image preprocessing apparatus according to claim 5, wherein the deserialized data comprises two-frame deserialized data, the two-frame deserialized data comprises first long-frame deserialized data and first mid-frame deserialized data, the second timing signal comprises a first long-frame timing signal and a first mid-frame timing signal, and the fourth processing module comprises:
    a second sending submodule, configured to send the first long-frame deserialized data and the first mid-frame deserialized data to the on-chip cache according to the first long-frame timing signal and the first mid-frame timing signal, respectively;
    the image preprocessing apparatus further comprises:
    a third output module, configured to read and output the first long-frame deserialized data and the first mid-frame deserialized data from the on-chip cache according to the first long-frame timing signal and the first mid-frame timing signal, respectively;
    the third output module comprises:
    a seventh output submodule, configured to output the first long-frame deserialized data and the first mid-frame deserialized data to the bus interface according to the first long-frame timing signal and the first mid-frame timing signal, respectively; and
    an eighth output submodule, configured to convert, through the bus interface, the first long-frame deserialized data and the first mid-frame deserialized data into first long-frame output data and first mid-frame output data conforming to a bus protocol standard and store the first long-frame output data and the first mid-frame output data into the main memory through a bus.
  8. The image preprocessing apparatus according to claim 5, wherein the deserialized data comprises three-frame deserialized data, the three-frame deserialized data comprises second long-frame deserialized data, second mid-frame deserialized data, and short-frame deserialized data, the second timing signal comprises a second long-frame timing signal, a second mid-frame timing signal, and a short-frame timing signal, and the fourth processing module comprises:
    a third sending submodule, configured to send the second long-frame deserialized data, the second mid-frame deserialized data, and the short-frame deserialized data to the on-chip cache for storage according to the second long-frame timing signal, the second mid-frame timing signal, and the short-frame timing signal, respectively;
    the image preprocessing apparatus further comprises:
    a fourth output module, configured to read and output the second long-frame deserialized data, the second mid-frame deserialized data, and the short-frame deserialized data from the on-chip cache according to the second long-frame timing signal, the second mid-frame timing signal, and the short-frame timing signal, respectively;
    the fourth output module comprises:
    a ninth output submodule, configured to send the second long-frame deserialized data, the second mid-frame deserialized data, and the short-frame deserialized data to the bus interface according to the second long-frame timing signal, the second mid-frame timing signal, and the short-frame timing signal, respectively; and
    a tenth output submodule, configured to convert, through the bus interface, the second long-frame deserialized data, the second mid-frame deserialized data, and the short-frame deserialized data into second long-frame output data, second mid-frame output data, and short-frame output data conforming to a bus protocol standard and store the second long-frame output data, the second mid-frame output data, and the short-frame output data into the main memory through a bus.
  9. An image sensor interface, comprising:
    an input interface, comprising a parallel input interface for receiving raw data in a parallel format and a serial input interface for receiving raw data in a serial format, the raw data in the parallel format comprising a first timing signal;
    a serial data processor, configured to convert the raw data in the serial format into deserialized data in a parallel format and perform deserialization on the deserialized data to obtain a second timing signal;
    an on-chip cache, wherein the raw data in the parallel format is sent to the on-chip cache for storage according to the first timing signal, and the serial data processor sends the deserialized data to the on-chip cache for storage according to the second timing signal;
    a cache controller, configured to control data input, reading, writing, and output of the on-chip cache; and
    a bus interface, configured to convert output data into a format conforming to a bus protocol standard and transmit the data to a main memory through a bus.
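The interface of claim 9 is a two-path front end: parallel raw data carries its own timing signal, serial raw data is deserialized first, and both paths meet in the on-chip cache before the bus interface drains it to main memory. A structural sketch, with all names and the word width chosen for illustration only:

```python
# Structural sketch of the claim-9 image sensor interface dataflow.
# Parallel input already carries the first timing signal; serial input is
# deserialized to recover parallel words plus a second timing signal. Both
# paths land in the on-chip cache; the bus interface wraps cached words in
# a bus-protocol envelope on the way to main memory.

def deserialize(serial_stream, width=8):
    """Group a serial sample stream into parallel words (width is assumed)."""
    return [serial_stream[i:i + width]
            for i in range(0, len(serial_stream), width)]

def sensor_interface(raw, fmt):
    cache = []
    if fmt == "parallel":
        cache.extend(raw)                 # first timing signal is embedded
    elif fmt == "serial":
        cache.extend(deserialize(raw))    # second timing signal recovered here
    # bus interface: convert each cached word to bus-protocol output data
    return [("bus", word) for word in cache]

mem_parallel = sensor_interface([[1, 2], [3, 4]], "parallel")
mem_serial   = sensor_interface(list(range(16)), "serial")
```

Keeping both paths behind one cache is what lets the rest of the pipeline (ISP, bus, main memory) stay agnostic to whether the sensor spoke serially or in parallel.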
  10. The image sensor interface of claim 9, wherein the image sensor interface comprises an infrared processing unit, the bus interface is configured to read, from the main memory through the bus, infrared correction data for correcting errors of the infrared sensor itself and to transmit the infrared correction data to the infrared processing unit, and the infrared processing unit is configured to receive, process, and transmit the infrared correction data.
  11. An image processing method, characterized by comprising the following steps:
    acquiring data processed by the image preprocessing method according to any one of claims 1 to 4 to obtain data to be processed;
    processing the data to be processed to obtain data to be split;
    splitting the data to be split into luminance channel data and chrominance channel data;
    processing the luminance channel data to obtain luminance data;
    processing the chrominance channel data to obtain chrominance data;
    wherein the step of processing the luminance channel data to obtain luminance data comprises the following steps:
    performing interpolation processing on the luminance channel data to obtain the luminance data;
    the step of processing the chrominance channel data to obtain chrominance data comprises the following steps:
    performing interpolation processing on the chrominance channel data to obtain the chrominance data;
    the image processing method further comprises the following steps:
    outputting the luminance data; and
    outputting the chrominance data.
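Claim 11 splits one stream into luminance and chrominance channels that are interpolated independently. Assuming YCbCr-style samples (the claim names no color space) and a trivial midpoint interpolation, the split can be sketched as:

```python
# Sketch of claim 11: split data into a luminance channel and a chrominance
# channel, then interpolate each independently. The (y, cb, cr) sample
# layout and the midpoint upsampler are illustrative assumptions.

def split_channels(samples):
    """samples: list of (y, cb, cr) tuples -> (luma list, chroma list)."""
    luma   = [y for y, _, _ in samples]
    chroma = [(cb, cr) for _, cb, cr in samples]
    return luma, chroma

def interpolate(values):
    """Insert the midpoint between neighbors (illustrative 2x upsample)."""
    out = []
    for a, b in zip(values, values[1:]):
        out += [a, (a + b) / 2]
    return out + values[-1:]

luma, chroma = split_channels([(16, 128, 128), (32, 120, 136)])
luma_out = interpolate(luma)   # the chroma channel would be upsampled likewise
```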
  12. The image processing method according to claim 11, wherein, when the data to be processed comprises first long-frame output data and first mid-frame output data, the step of processing the data to be processed to obtain data to be split comprises the following steps:
    performing wide dynamic fusion on the first long-frame output data and the first mid-frame output data to obtain the data to be split;
    when the data to be processed comprises second long-frame output data, second mid-frame output data, and short-frame output data, the step of processing the data to be processed to obtain data to be split comprises the following steps:
    performing wide dynamic fusion on the second long-frame output data, the second mid-frame output data, and the short-frame output data to obtain the data to be split.
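The claim only names "wide dynamic fusion" of differently exposed frames; one common (assumed, not claimed) scheme is a per-pixel weighted average in which well-exposed pixels, far from both black and saturation, get the most weight:

```python
# Hypothetical wide-dynamic-range fusion of long/mid/short exposures.
# Each output pixel is a normalized weighted average of the co-located
# pixels from all exposures; the tent-shaped weight peaking at mid-gray
# is an assumption -- the claim does not specify the fusion formula.

def wdr_fuse(frames):
    """frames: list of equal-length pixel lists (0..255). Returns fused list."""
    def weight(p):
        # favor pixels near mid-gray; never exactly zero to avoid div-by-zero
        return max(1e-6, 1.0 - abs(p - 127.5) / 127.5)
    fused = []
    for pixels in zip(*frames):
        w = [weight(p) for p in pixels]
        fused.append(sum(p * wi for p, wi in zip(pixels, w)) / sum(w))
    return fused

long_f, mid_f, short_f = [250, 200], [180, 120], [90, 40]
fused = wdr_fuse([long_f, mid_f, short_f])
```

Because the weights are positive and normalized, each fused pixel stays within the range spanned by its source exposures, which is the basic sanity property any fusion rule must keep.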
  13. The image processing method according to claim 11, wherein, before the interpolation processing is performed on the luminance channel data, the step of processing the luminance channel data to obtain luminance data comprises the following steps:
    applying tone mapping to the luminance channel data.
  14. The image processing method according to claim 11, wherein the step of processing the luminance channel data to obtain luminance data further comprises the following steps:
    performing defogging processing.
  15. The image processing method of claim 11, wherein the step of processing the chrominance channel data to obtain chrominance data further comprises the following steps:
    performing 2D noise reduction.
  16. An image processing apparatus, characterized by comprising:
    an acquisition unit, configured to acquire data processed by the image preprocessing method according to any one of claims 1 to 4 to obtain data to be processed;
    a first processing unit, configured to process the data to be processed to obtain data to be split;
    a splitting unit, configured to split the data to be split into luminance channel data and chrominance channel data;
    a second processing unit, configured to process the luminance channel data to obtain luminance data;
    a third processing unit, configured to process the chrominance channel data to obtain chrominance data;
    wherein the second processing unit comprises:
    a luminance interpolation subunit, configured to perform interpolation processing on the luminance channel data;
    the third processing unit comprises:
    a chrominance interpolation subunit, configured to perform interpolation processing on the chrominance channel data;
    the image processing apparatus further comprises:
    a luminance output unit, configured to output the luminance data after interpolation processing; and
    a chrominance output unit, configured to output the chrominance data after interpolation processing.
  17. The image processing apparatus according to claim 16, wherein, when the data to be processed comprises first long-frame output data and first mid-frame output data, the first processing unit comprises: a first fusion subunit, configured to perform wide dynamic fusion on the first long-frame output data and the first mid-frame output data to obtain the data to be split;
    when the data to be processed comprises second long-frame output data, second mid-frame output data, and short-frame output data, the first processing unit comprises: a second fusion subunit, configured to perform wide dynamic fusion on the second long-frame output data, the second mid-frame output data, and the short-frame output data to obtain the data to be split.
  18. The image processing apparatus of claim 16, wherein the second processing unit comprises, before the luminance interpolation subunit, a luminance tone mapping subunit configured to apply tone mapping to the luminance channel data.
  19. The image processing apparatus according to claim 16, wherein the second processing unit comprises a defogging subunit configured to perform defogging processing.
  20. The image processing apparatus of claim 16, wherein the third processing unit comprises a 2D noise reduction subunit, the 2D noise reduction subunit being configured to perform 2D noise reduction.
CN201880077740.1A 2018-02-09 2018-02-09 Image preprocessing method and device, image sensor interface, image processing method and device Active CN111492650B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/076041 WO2019153264A1 (en) 2018-02-09 2018-02-09 Image pre-processing method and device, image sensor interface, image processing method and device

Publications (2)

Publication Number Publication Date
CN111492650A true CN111492650A (en) 2020-08-04
CN111492650B CN111492650B (en) 2021-04-30

Family

ID=67548728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880077740.1A Active CN111492650B (en) 2018-02-09 2018-02-09 Image preprocessing method and device, image sensor interface, image processing method and device

Country Status (2)

Country Link
CN (1) CN111492650B (en)
WO (1) WO2019153264A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080030614A1 (en) * 1997-04-07 2008-02-07 Schwab Barry H Integrated multi-format audio/video production system
CN101221439A (en) * 2008-01-14 2008-07-16 清华大学 Embedded system for high speed parallel duplex digital image capturing and processing
CN102006420A (en) * 2010-12-17 2011-04-06 四川川大智胜软件股份有限公司 Design method capable of using external synchronous for cameral with various data output formats
CN102098441A (en) * 2010-12-16 2011-06-15 深圳市经纬科技有限公司 Image data transmission method and photographic equipment based on serial peripheral interface (SPI)
JP2012088996A (en) * 2010-10-21 2012-05-10 Konica Minolta Business Technologies Inc Memory control method, memory control device and image forming apparatus
CN105721818A (en) * 2016-03-18 2016-06-29 武汉精测电子技术股份有限公司 Signal conversion method and device
CN106791550A (en) * 2016-12-05 2017-05-31 中国航空工业集团公司洛阳电光设备研究所 The apparatus and method that a kind of low frame rate LVDS turns frame frequency DVI videos high
CN107249101A (en) * 2017-07-13 2017-10-13 浙江工业大学 A kind of sample of high-resolution image and processing unit

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101472039A (en) * 2007-12-26 2009-07-01 中国科学院沈阳自动化研究所 Digital image receiving card


Also Published As

Publication number Publication date
WO2019153264A1 (en) 2019-08-15
CN111492650B (en) 2021-04-30

Similar Documents

Publication Publication Date Title
US10891722B2 (en) Display method and display device
US10237514B2 (en) Camera system, video processing apparatus, and camera apparatus
US10616511B2 (en) Method and system of camera control and image processing with a multi-frame-based window for image data statistics
WO2020057199A1 (en) Imaging method and device, and electronic device
US7512021B2 (en) Register configuration control device, register configuration control method, and program for implementing the method
US10469749B1 (en) Temporal filter with criteria setting maximum amount of temporal blend
JP2000050173A (en) Image pickup device and recording medium storing image pickup program
TWI631505B (en) Image processing method applied to a display and associated circuit
WO2020029679A1 (en) Control method and apparatus, imaging device, electronic device and readable storage medium
EP1596328A1 (en) Resolution-conversion of image data
US10600170B2 (en) Method and device for producing a digital image
JP5325655B2 (en) Imaging device
US11798143B2 (en) Image processing apparatus and control method thereof
US20140133781A1 (en) Image processing device and image processing method
US20190051270A1 (en) Display processing device and imaging device
CN111492650B (en) Image preprocessing method and device, image sensor interface, image processing method and device
US9413974B2 (en) Information processing apparatus, image sensing apparatus, control method, and recording medium for conversion processing
KR20040068127A (en) Characteristic correcting device
US20190174181A1 (en) Video signal processing apparatus, video signal processing method, and program
JP5478761B2 (en) Imaging device
US11743612B2 (en) In-line chromatic aberration correction in wide dynamic range (WDR) image processing pipeline
JP4552596B2 (en) Image processing device
JP2000115693A (en) Image data recording method and device, image data reproducing method and device, information recording medium and computer-readable recording medium
JP4978669B2 (en) Image processing apparatus, electronic camera, and image processing program
JP2022012832A (en) Image processing apparatus, image processing method, imaging device, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant