CN110557579A - image processing method, device and equipment and readable medium - Google Patents
image processing method, device and equipment and readable medium
- Publication number
- CN110557579A (application number CN201810556547.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- data format
- data
- convolution
- processing
- Prior art date
- Legal status: Granted
Classifications
- G06N3/045: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; combinations of networks
- G06N3/08: Computing arrangements based on biological models; neural networks; learning methods
- H04N5/268: Details of television systems; studio circuitry; signal distribution or switching
Abstract
The invention provides an image processing method, apparatus, device, and readable medium. The image processing method comprises the following steps: acquiring a first image whose data format is a first data format; acquiring device parameters used to acquire the first image; and inputting the first image and the device parameters into a trained neural network so that the neural network converts the data format of the first image from the first data format to a second data format, the second data format being an image format suitable for transmission and/or display of the first image. The method is applicable to different image devices, which enhances its universality.
Description
Technical Field
The present invention relates to the field of image processing technology, and in particular to an image processing method, an image processing apparatus, an image processing device, and a readable medium.
Background
The image format in which an image device acquires an image is generally not suitable for direct display or transmission, so format conversion is needed. Conventionally, each sub-problem is solved by one or more separate steps, which are connected in series in a reasonable order to form a processing pipeline that converts the image into a format suitable for display or transmission; such a pipeline is relatively complicated.
The Chinese patent application published under publication number CN106934426A discloses a neural-network-based method and apparatus for image signal processing, which uses a neural network to directly learn the mapping between the input and output of single-image signal processing: the input is an image in a raw data format, and the output is a single image meeting a quality requirement in some respect.
Although that scheme learns, with a neural network, the mapping between a raw-data-format image and a corresponding output image meeting the quality expectation, and can thus realize image format conversion, in practice the characteristics of an image device directly influence the distribution of the data it acquires, and image processing algorithms generally have to be customized, designed, and invoked for a particular model (or several similar models) of acquisition device. For different acquisition devices, that scheme therefore has to train a separate neural network on each device's image set, or further train and adjust a general neural network, to adapt to the processing requirements of each device.
Disclosure of Invention
In view of this, the present invention provides an image processing method, apparatus, device, and readable medium that are applicable to different image devices and enhance universality.
A first aspect of the present invention provides an image processing method, including:
acquiring a first image whose data format is a first data format;
acquiring device parameters used to acquire the first image;
inputting the first image and the device parameters into a trained neural network so that the neural network converts the data format of the first image from the first data format to a second data format, the second data format being an image format suitable for transmission and/or display of the first image.
According to one embodiment of the invention, the neural network comprises:
at least one first computing layer for performing fusion processing;
and at least one second computing layer for performing convolution processing.
According to one embodiment of the invention, converting, by the neural network, the data format of the first image from the first data format to a second data format comprises:
performing fusion processing on the first image in the first data format and the device parameters by the first computing layer to obtain fused data;
and performing convolution processing on the fused data by at least one second computing layer to obtain a first image in the second data format.
According to one embodiment of the invention, converting, by the neural network, the data format of the first image from the first data format to a second data format comprises:
performing convolution processing on the first image in the first data format by at least one second computing layer to obtain first convolution data;
and performing fusion processing on the device parameters and the first convolution data by the first computing layer to obtain a first image in the second data format;
or,
performing convolution processing on the device parameters by at least one second computing layer to obtain second convolution data;
and performing fusion processing on the first image in the first data format and the second convolution data by the first computing layer to obtain a first image in the second data format;
or,
performing convolution processing on the first image in the first data format by at least one second computing layer to obtain first convolution data, and performing convolution processing on the device parameters by at least one second computing layer to obtain second convolution data;
and performing fusion processing on the first convolution data and the second convolution data by the first computing layer to obtain a first image in the second data format.
According to one embodiment of the invention, the device parameters comprise at least one of the following:
an environment-independent first device parameter of the acquisition device used to acquire the first image in the first data format;
an environment-dependent second device parameter of the acquisition device used to acquire the first image in the first data format.
According to one embodiment of the invention, the acquisition device used to acquire the first image in the first data format comprises a sensor and a lens;
the first device parameter comprises at least one of: sensor sensitivity, sensor dynamic range, sensor signal-to-noise ratio, sensor pixel size, sensor target surface size, sensor resolution, sensor frame rate, sensor pixel count, sensor spectral response, sensor photoelectric response, sensor array pattern, and lens hood model;
the second device parameter comprises at least one of: lens aperture, lens focal length, lens aperture size, lens filter aperture, and lens viewing angle.
According to one embodiment of the invention, the neural network is trained using image data sets corresponding to at least two different designated image devices;
the image data set corresponding to each designated image device includes: first-data-format images acquired by the designated image device, the device parameters used when the designated image device acquired the first-data-format images, and the second-data-format images corresponding to the first-data-format images.
A second aspect of the present invention provides an image processing apparatus comprising:
an image acquisition unit for acquiring a first image whose data format is a first data format;
a parameter acquisition unit for acquiring device parameters used to acquire the first image;
and a format conversion unit for inputting the first image and the device parameters into a trained neural network so that the neural network converts the data format of the first image from the first data format to a second data format, the second data format being an image format suitable for transmission and/or display of the first image.
According to one embodiment of the invention, the neural network comprises:
at least one first computing layer for performing fusion processing;
and at least one second computing layer for performing convolution processing.
According to one embodiment of the invention, converting, by the neural network, the data format of the first image from the first data format to a second data format comprises:
performing fusion processing on the first image in the first data format and the device parameters by the first computing layer to obtain fused data;
and performing convolution processing on the fused data by at least one second computing layer to obtain a first image in the second data format.
According to one embodiment of the invention, converting, by the neural network, the data format of the first image from the first data format to a second data format comprises:
performing convolution processing on the first image in the first data format by at least one second computing layer to obtain first convolution data;
and performing fusion processing on the device parameters and the first convolution data by the first computing layer to obtain a first image in the second data format;
or,
performing convolution processing on the device parameters by at least one second computing layer to obtain second convolution data;
and performing fusion processing on the first image in the first data format and the second convolution data by the first computing layer to obtain a first image in the second data format;
or,
performing convolution processing on the first image in the first data format by at least one second computing layer to obtain first convolution data, and performing convolution processing on the device parameters by at least one second computing layer to obtain second convolution data;
and performing fusion processing on the first convolution data and the second convolution data by the first computing layer to obtain a first image in the second data format.
According to one embodiment of the invention, the device parameters comprise at least one of the following:
an environment-independent first device parameter of the acquisition device used to acquire the first image in the first data format;
an environment-dependent second device parameter of the acquisition device used to acquire the first image in the first data format.
According to one embodiment of the invention, the acquisition device used to acquire the first image in the first data format comprises a sensor and a lens;
the first device parameter comprises at least one of: sensor sensitivity, sensor dynamic range, sensor signal-to-noise ratio, sensor pixel size, sensor target surface size, sensor resolution, sensor frame rate, sensor pixel count, sensor spectral response, sensor photoelectric response, sensor array pattern, and lens hood model;
the second device parameter comprises at least one of: lens aperture, lens focal length, lens aperture size, lens filter aperture, and lens viewing angle.
According to one embodiment of the invention, the neural network is trained using image data sets corresponding to at least two different designated image devices;
the image data set corresponding to each designated image device includes: first-data-format images acquired by the designated image device, the device parameters used when the designated image device acquired the first-data-format images, and the second-data-format images corresponding to the first-data-format images.
A third aspect of the present invention provides an image device comprising a processor and a memory; the memory stores a program callable by the processor; when executing the program, the processor implements the image processing method of any one of the preceding embodiments.
A fourth aspect of the present invention provides a machine-readable storage medium on which a program is stored; when executed by a processor, the program causes an image device to implement the image processing method of any one of the preceding embodiments.
The embodiments of the invention have the following beneficial effects:
when the neural network performs format conversion on the first image in the first data format, the first image is fused with the device parameters used by the device that acquired it. The trained neural network can therefore serve different image devices: there is no need to train a separate network for each model (or each group of similar models), and no retraining is needed when the network is applied to a different device, which enhances universality.
Drawings
FIG. 1 is a schematic flow diagram illustrating an image processing method according to an exemplary embodiment of the invention;
FIG. 2 is a block diagram of an image processing apparatus according to an exemplary embodiment of the present invention;
FIG. 3 is a block diagram of a neural network shown in an exemplary embodiment of the invention;
FIG. 3a is a block diagram of a neural network according to a first embodiment of the present invention;
FIG. 3b is a block diagram of a neural network according to a second embodiment of the present invention;
FIG. 3c is a block diagram of a neural network according to a third embodiment of the present invention;
FIG. 3d is a block diagram of a neural network according to a fourth embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a first computing layer performing a computing process in accordance with an exemplary embodiment of the present invention;
FIG. 4a is a diagram illustrating a first computing layer performing a computing process according to a first embodiment of the present invention;
FIG. 4b is a diagram illustrating a first computing layer performing a computing process according to a second embodiment of the present invention;
FIG. 4c is a diagram illustrating a first computing layer performing a computing process according to a third embodiment of the present invention;
FIG. 4d is a diagram illustrating a first computing layer performing a computing process according to a fourth embodiment of the present invention;
FIG. 5 is a schematic flow diagram illustrating neural network training in accordance with an exemplary embodiment of the present invention;
FIG. 6 is a flow diagram illustrating how an image dataset for neural network training is acquired in accordance with an exemplary embodiment of the present invention;
FIG. 7 is a flow chart illustrating how a second data format image is acquired in accordance with an exemplary embodiment of the present invention;
FIG. 8 is a block diagram of an electronic device according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms; the terms are only used to distinguish one type of device from another. For example, a first device may also be referred to as a second device and, similarly, a second device as a first device, without departing from the scope of the present invention. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
In order to make the description of the present invention clearer and more concise, some technical terms in the present invention are explained below:
A neural network: a technique that abstractly simulates the structure of the brain, in which a large number of simple functions are connected to form a network system that can fit very complex functional relationships; the operations involved include convolution/deconvolution, activation, pooling, addition, subtraction, multiplication, division, channel merging, and element rearrangement. Training the network with specific input and output data, adjusting the connections within it, allows the neural network to learn a mapping that fits the relationship between inputs and outputs.
The following describes the image processing method according to the embodiment of the present invention more specifically, but not limited thereto. In one embodiment, referring to FIG. 1, an image processing method of an embodiment of the present invention is shown, the method comprising the steps of:
S11: acquiring a first image whose data format is a first data format;
S12: acquiring the device parameters used to acquire the first image;
S13: inputting the first image and the device parameters into a trained neural network so that the neural network converts the data format of the first image from the first data format to a second data format, the second data format being an image format suitable for transmission and/or display of the first image.
In the embodiment of the present invention, the image processing method may be applied to an image device. The image device may be a video camera or another acquisition device with an imaging function, or a device capable of image post-processing; for example, a digital camera, a video recorder, a mobile terminal with an imaging function, and the like.
In step S11, a first image whose data format is a first data format is acquired. The first data format is the raw image format produced by an imaging device and may contain data for one or more spectral bands. Generally speaking, an image in the first data format is difficult to use directly for display or transmission.
In step S12, the device parameters used to acquire the first image are obtained. Different image devices may have different device parameters when acquiring images, and even the same image device may use different device parameters for different acquisitions; the device parameters in effect when the device acquired the first image are obtained for use in the subsequent format conversion of the first image.
The device parameters may be parameters configured in the device, and/or parameters externally input to the device, and/or parameters derived from the foregoing by some processing, and so on.
Step S13 is then performed to input the first image and the device parameters into a trained neural network to convert the data format of the first image from the first data format to a second data format by the neural network, the second data format being an image format suitable for transmission and/or display of the first image.
The neural network is trained in advance and can be shared by different image devices; it may be pre-stored on the device or obtained externally, without specific limitation. The other image devices described in this embodiment may be image devices of models different from the present device.
The first image and the corresponding device parameters are input into the neural network, which performs format conversion on the first image, converting it from the originally acquired first data format into the second data format so that the first image in the second data format can be transmitted and/or displayed. The second data format may be any image format suitable for display or transmission of the first image.
When the neural network performs format conversion on the first image in the first data format, the first image is fused with the device parameters used by the device that acquired it. The trained neural network can therefore serve different image devices: there is no need to train a separate network for each model (or each group of similar models), and no retraining is needed when the network is applied to a different device, which enhances universality.
In one embodiment, the above method flow can be executed by the image processing apparatus 100. As shown in fig. 2, the image processing apparatus 100 mainly comprises three units: an image acquisition unit 101, a parameter acquisition unit 102, and a format conversion unit 103. The image acquisition unit 101 is configured to perform step S11, the parameter acquisition unit 102 to perform step S12, and the format conversion unit 103 to perform step S13.
The neural network may be integrated into the format conversion unit 103 as a part of it, or may be arranged outside the format conversion unit 103 and scheduled by it.
Referring to fig. 3, in one embodiment, the neural network 200 may include: at least one first computing layer 201 for performing fusion processing; and at least one second computing layer 202 for performing convolution processing. Of course, other types of computing layers may also be included, such as pooling layers, activation layers, and the like; the specific number of computing layers and their interconnections are not limited.
The first computing layer 201 may perform fusion processing on the first image in the first data format and the device parameters, and output fused data. The fusion may combine the first image with the device parameters directly, the first image with transformed device parameters, the transformed first image with the device parameters, or the transformed first image with transformed device parameters. The transformation may be implemented by the computation of layers in the neural network, for example processing by a second computing layer 202, corresponding algorithmic processing by other layers, or processing by some combination of the first computing layer 201, the second computing layer 202, and other layers, without specific limitation.
The second computing layer 202 may perform convolution processing on the fused data: a specific convolution kernel is obtained once the neural network has been trained, the data is convolved with this kernel, and the convolution result is output. Convolution here refers to the mathematical convolution operation; the parameters used in the operation, such as kernel size, kernel weights, and stride, are not limited.
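For illustration only, the two layer types can be sketched in PyTorch as below. This is a minimal sketch under stated assumptions: the class names, the choice of broadcast element-wise addition as the fusion operation, and the tensor shapes are assumptions made for the example, not details fixed by the embodiments.

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """First computing layer: fuses two inputs into one path of data.
    Broadcast element-wise addition is just one admissible fusion
    operation; products, polynomial/logical operations, channel
    merging, etc. are equally possible."""
    def forward(self, image_data, param_data):
        # param_data broadcasts over the spatial dimensions of image_data
        return image_data + param_data

class ConvLayer(nn.Module):
    """Second computing layer: convolves its input with a kernel whose
    weights are fixed by training."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)
    def forward(self, x):
        return self.conv(x)
```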
In one possible implementation, the format conversion unit 103 may implement the above step S13 by executing steps S1301 to S1302:
S1301: performing fusion processing on the first image in the first data format and the device parameters by the first computing layer to obtain fused data;
S1302: performing convolution processing on the fused data by at least one second computing layer to obtain a first image in the second data format.
In step S1301, a first computing layer may directly fuse the input first image in the first data format with the device parameters to obtain one path of fused data. Next, in step S1302, the fused data may be convolved by at least one second computing layer: it may pass through one or more second computing layers in sequence, and other computing layers may be added for corresponding processing (for example, a pooling layer may perform pooling), finally yielding the first image in the required second data format.
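Continuing the sketch above, a fusion-first network in the spirit of steps S1301 to S1302 might look as follows; the single-channel raw input, the three-channel output, and the layer sizes are illustrative assumptions only.

```python
class FusionFirstNet(nn.Module):
    # Fig. 3a/3b style: one fusion layer followed by convolution layers.
    def __init__(self):
        super().__init__()
        self.fuse = FusionLayer()
        self.convs = nn.Sequential(ConvLayer(1, 16), nn.ReLU(), ConvLayer(16, 3))

    def forward(self, raw, params):
        # raw: (N, 1, H, W) first-data-format image
        # params: (N, 1, 1, 1) device parameter(s), broadcast during fusion
        fused = self.fuse(raw, params)   # S1301: one path of fused data
        return self.convs(fused)         # S1302: second-data-format image

net = FusionFirstNet()
out = net(torch.rand(1, 1, 64, 64),
          torch.tensor([[[[5.0]]]]))     # e.g. a 5 µm sensor pixel size
```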
In another possible implementation, the format conversion unit 103 may implement the above step S13 by executing steps S1311 to S1312:
S1311: performing convolution processing on the first image in the first data format by at least one second computing layer to obtain first convolution data;
S1312: performing fusion processing on the device parameters and the first convolution data by the first computing layer to obtain a first image in the second data format.
In step S1311, the first image in the first data format may be convolved by at least one second computing layer to obtain the first convolution data; it may pass through one or more second computing layers, and other computing layers may be added for corresponding processing (for example, pooling), without limitation. Next, in step S1312, the device parameters and the first convolution data are fused by the first computing layer; since the first convolution data is transformed data of the first image in the first data format, the first computing layer fuses the device parameters with that transformed data to obtain the first image in the second data format.
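Under the same assumptions, the convolve-then-fuse order of steps S1311 to S1312 could be sketched as:

```python
class ConvFirstNet(nn.Module):
    # Convolve the raw image first, then fuse with the device parameters.
    def __init__(self):
        super().__init__()
        self.convs = nn.Sequential(ConvLayer(1, 3), nn.ReLU())
        self.fuse = FusionLayer()

    def forward(self, raw, params):
        first_conv_data = self.convs(raw)           # S1311
        return self.fuse(first_conv_data, params)   # S1312
```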
In still another possible implementation, the format conversion unit 103 may implement the above step S13 by executing steps S1321 to S1322:
S1321: performing convolution processing on the device parameters by at least one second computing layer to obtain second convolution data;
S1322: performing fusion processing on the first image in the first data format and the second convolution data by the first computing layer to obtain the first image in the second data format.
In step S1321, the device parameters may be convolved by at least one second computing layer to obtain the second convolution data; they may pass through one or more second computing layers in sequence, and other computing layers may be added for corresponding processing (for example, pooling), without specific limitation. Next, in step S1322, the first image in the first data format and the second convolution data are fused by the first computing layer; since the second convolution data is transformed data of the device parameters, the first computing layer fuses the first image in the first data format with that transformed data to obtain the first image in the second data format.
In still another possible implementation, the format conversion unit 103 may implement the above step S13 by executing steps S1331 to S1332:
S1331: performing convolution processing on the first image in the first data format by at least one second computing layer to obtain first convolution data, and performing convolution processing on the device parameters by at least one second computing layer to obtain second convolution data;
S1332: performing fusion processing on the first convolution data and the second convolution data by the first computing layer to obtain a first image in the second data format.
In step S1331, the first image in the first data format may be convolved by at least one second computing layer to obtain the first convolution data, and the device parameters may be convolved by at least one second computing layer to obtain the second convolution data; either input may pass through one or more second computing layers in sequence, and other computing layers may be added for corresponding processing (for example, pooling), without limitation. Next, in step S1332, the first convolution data and the second convolution data are fused by the first computing layer; since the first convolution data is transformed data of the first image in the first data format and the second convolution data is transformed data of the device parameters, the first computing layer fuses these two items of transformed data to obtain the first image in the second data format.
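A sketch of the dual-branch order of steps S1331 to S1332 follows; treating the device parameters as a small vector passed through a 1 × 1 convolution, so that both branches yield tensors the fusion layer can broadcast together, is an assumption of this example.

```python
class DualBranchNet(nn.Module):
    # Both the raw image and the device parameters pass through their own
    # convolution branch before being fused.
    def __init__(self, num_params=4):
        super().__init__()
        self.img_branch = ConvLayer(1, 3)              # -> first convolution data
        self.par_branch = nn.Conv2d(num_params, 3, 1)  # -> second convolution data
        self.fuse = FusionLayer()

    def forward(self, raw, params):
        a = self.img_branch(raw)      # (N, 3, H, W)
        b = self.par_branch(params)   # (N, 3, 1, 1), broadcast in the fusion
        return self.fuse(a, b)        # S1332: second-data-format image
```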
It should be understood that the above implementations of step S13 are only preferred embodiments and the invention is not limited thereto: any of the four combinations of the device parameters (or their transformed data) with the first image in the first data format (or its transformed data) may be fused by the neural network, and either the fused data itself, or the data obtained by convolving or otherwise processing the fused data, serves as the network output yielding the first image in the second data format. The transformed data may be data computed by a first computing layer, a second computing layer, or other computing layers of the neural network.
More specific embodiments of the neural network are provided below. For ease of description, an image in the first data format is simply referred to as a first-data-format image, and an image in the second data format as a second-data-format image.
In a first embodiment of the neural network, referring to fig. 3a, the neural network may include a first computing layer 201a and a second computing layer 202a connected in series. The first computing layer 201a fuses the device parameters with the first-data-format image and outputs the fused data to the second computing layer 202a; the second computing layer 202a convolves the fused data with its convolution kernel and outputs a second-data-format image.
In a second embodiment of the neural network, referring to fig. 3b, the neural network may include a first computing layer 201b and second computing layers 202b, 203b, and 204b connected in series. The first computing layer 201b fuses the device parameters with the first-data-format image and outputs the fused data to the second computing layer 202b; the second computing layer 202b convolves the fused data with its convolution kernel and outputs a convolution result; the second computing layer 203b convolves the result of 202b with its own convolution kernel and outputs a convolution result; and the second computing layer 204b convolves the result of 203b with its own convolution kernel and outputs a second-data-format image. The convolution kernels of the second computing layers 202b, 203b, and 204b may differ.
In a third embodiment of the neural network, referring to fig. 3c, the neural network may include first computing layers 201c, 204c, and 206c and second computing layers 202c, 203c, and 205c. The first computing layer 201c is connected to the second computing layer 203c; the second computing layer 202c, first computing layer 204c, and second computing layer 205c are connected in sequence; and the second computing layers 203c and 205c are both connected to the first computing layer 206c. The first computing layer 201c fuses the device parameters with the first-data-format image and outputs the fused data to the second computing layer 203c; the second computing layer 203c convolves that fused data with its convolution kernel and outputs a convolution result; the second computing layer 202c convolves the first-data-format image with its convolution kernel and outputs a convolution result; the first computing layer 204c fuses the result of 202c with the device parameters and outputs fused data; the second computing layer 205c convolves the fusion result of 204c with its convolution kernel and outputs a convolution result; and the first computing layer 206c fuses the convolution results of 203c and 205c and outputs a second-data-format image. The fusion algorithms of the first computing layers 201c, 204c, and 206c may be the same or different; the convolution kernels of the second computing layers 202c, 203c, and 205c may differ; and the device parameters received by the first computing layers 201c and 204c may be the same or different, but all are device parameters (possibly of more than one kind) used by the present device to acquire the first image in the first data format.
In a fourth embodiment of the neural network, referring to fig. 3d, the neural network may include a second computing layer 201d, a third computing layer 202d, a first computing layer 203d, a second computing layer 204d, and a third computing layer 205d connected in sequence. The second computing layer 201d convolves the first-data-format image with its convolution kernel and outputs a convolution result; the third computing layer 202d applies its corresponding algorithm to the result of 201d (for example, if it is a pooling layer, it pools the convolution result of 201d); the first computing layer 203d fuses the device parameters with the result of 202d and outputs fused data; the second computing layer 204d convolves the result of 203d with its convolution kernel and outputs a convolution result; and the third computing layer 205d applies its corresponding algorithm to the result of 204d (for example, if it is an activation layer, it applies activation processing) and outputs a second-data-format image.
The neural networks of the above embodiments are only examples; the types and number of computing layers and their connections may of course be adjusted as appropriate.
In one embodiment, the device parameters include at least one of the following:
an environment-independent first device parameter of the acquisition device used to acquire the first image in the first data format;
an environment-dependent second device parameter of the acquisition device used to acquire the first image in the first data format.
The first device parameter is a fixed parameter of the acquisition device itself (such as sensor sensitivity) that does not change with the shooting environment; the second device parameter is a parameter related to the current shot (for example, the lens aperture). The specific types and number of the device parameters are not limited; they may be various parameters of imaging elements such as the sensor and the lens.
In one embodiment, the acquisition device used to acquire the first image in the first data format comprises a sensor and a lens;
the first device parameter comprises at least one of: sensor sensitivity, sensor dynamic range, sensor signal-to-noise ratio, sensor pixel size, sensor target surface size, sensor resolution, sensor frame rate, sensor pixel count, sensor spectral response, sensor photoelectric response, sensor array pattern, and lens hood model;
the second device parameter comprises at least one of: lens aperture (i.e., the clear aperture of the lens diaphragm), lens focal length, lens aperture size (i.e., the F-number of the lens diaphragm), lens filter aperture, and lens viewing angle.
In practice, one or more of the above device parameters may be used.
Referring to fig. 4, in the neural network, the fusion processing executed by the first computing layer feeds first input data and second input data into a fusion operation algorithm and outputs one path of fused data. The fusion operation algorithm may be one of, or a combination of, the four arithmetic operations, polynomial operations, logical operations, and matrix operations, although it is certainly not limited to these.
More specific embodiments of the first computing layer performing the fusion processing are provided below, although the invention is of course not limited to them.
In a first embodiment of the fusion processing, see fig. 4a, the first computing layer fuses one item of input data with a device parameter; the device parameter is the sensor pixel size, and the fusion is a simple summation. The input data may be the first image in the first data format or its transformed data.
Specifically, the first computing layer sums each element of the input data with the sensor pixel size value and outputs the result. For example, as shown in fig. 4a, if the input data is a 3 × 3 feature matrix and the sensor pixel size is 5 μm, the output is also a 3 × 3 matrix in which each element is 5 greater than the input element at the corresponding position.
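As a check of this example, the summation fusion can be reproduced in a few lines; NumPy is used purely for illustration.

```python
import numpy as np

input_data = np.arange(9, dtype=np.float32).reshape(3, 3)  # 3 x 3 feature matrix
pixel_size = 5.0                                           # sensor pixel size in µm
fused = input_data + pixel_size  # every output element is 5 greater than the input
```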
In a second embodiment of the fusion processing, referring to fig. 4b, the first computing layer fuses one item of input data with device parameters; the device parameters are the lens hood model and the lens aperture, and the fusion combines a logical operation with a product operation. The input data may be the first image in the first data format or its transformed data.
Specifically, the first computing layer multiplies each element of the input data by the k-th power of the lens aperture value, where k is selected by logical judgment according to the lens hood model, and outputs the product. For example, if the input data is a 5 × 1 feature matrix, the lens aperture is 6 mm, and the lens hood is model 3, then k = 2.5 (k is determined by the model: k = 1.8 for model 1 and k = 2.2 for model 2; this is merely exemplary, not limiting). The output is also a 5 × 1 matrix in which each element is 6^2.5 times the input element at the corresponding position.
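A sketch of this logical-judgment-plus-product fusion, with the k table taken from the example above and understood as illustrative only:

```python
import numpy as np

K_BY_HOOD_MODEL = {1: 1.8, 2: 2.2, 3: 2.5}   # exemplary mapping, not normative

def fuse(input_data, aperture_mm, hood_model):
    k = K_BY_HOOD_MODEL[hood_model]          # logical selection of the exponent
    return input_data * (aperture_mm ** k)   # product with aperture^k

out = fuse(np.ones((5, 1), dtype=np.float32), aperture_mm=6.0, hood_model=3)
# each element equals 6 ** 2.5, roughly 88.18
```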
In a third embodiment of the fusion processing, referring to fig. 4c, the first computing layer fuses two items of input data using a matrix merging operation. Each item of input data may be the first image in the first data format or its transformed data, or the device parameters or their transformed data.
Specifically, the first computing layer merges the two items of input data, each in the form of a two-dimensional matrix, along a third dimension and outputs a three-dimensional matrix. For example, if the input data are two 4 × 3 feature matrices, the merged output is a 4 × 3 × 2 three-dimensional matrix.
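The matrix merging of this example corresponds to stacking along a new third axis:

```python
import numpy as np

a = np.zeros((4, 3), dtype=np.float32)  # e.g. transformed image data
b = np.ones((4, 3), dtype=np.float32)   # e.g. transformed device parameters
merged = np.stack([a, b], axis=2)       # shape (4, 3, 2)
```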
In a fourth embodiment of the fusion processing, referring to fig. 4d, the first computing layer fuses two items of input data using matrix multiplication. Each item of input data may be the first image in the first data format or its transformed data, or the device parameters or their transformed data.
Specifically, the first computing layer computes the matrix product of the two items of input matrix data and outputs the result. For example, if the input data are two 2 × 2 feature matrices, the output obtained by matrix multiplication is a 2 × 2 matrix.
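And the matrix-multiplication fusion of this example:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])
fused = a @ b   # 2 x 2 matrix product: [[19, 22], [43, 50]]
```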
In one embodiment, the first image in the first data format comprises sampled signals of a first band and/or a second band; the wavelength range of the first band is 380 to 780 nm, and that of the second band is 780 to 2500 nm. The device collects the first-band and/or second-band sampled signals by spectral sampling as the image information of the first image. The first image may contain data for one band or for multiple bands.
In one embodiment, the neural network is trained using image data sets corresponding to at least two different designated image devices. The designated image devices may be image devices of different models.
The image data set corresponding to each designated image device includes: first-data-format images acquired by the designated image device, the device parameters used when the designated image device acquired the first-data-format images, and the second-data-format images corresponding to the first-data-format images.
A second-data-format image corresponding to a first-data-format image may be obtained by synchronous acquisition: while the first-data-format image is acquired, an image device capable of outputting the second data format acquires a corresponding second-data-format image. Alternatively, the acquired first-data-format image may be subjected to appropriate image processing to obtain the corresponding second-data-format image.
The trained neural network is shared by multiple image devices. Optionally, the trained neural network is shared by the present device and at least one other image device, i.e., it is located outside the image devices and can be scheduled when an image device needs it. Alternatively, the trained neural network is integrated into the present device and used by it alone, which does not change the property that the trained network can be shared.
Illustratively, referring to fig. 5, the training process of the neural network may be implemented by a training unit and may include the following steps:
S21: acquiring the image data set corresponding to each designated image device as training samples;
S22: taking the first-data-format images acquired by the designated image devices, together with the device parameters used when those images were acquired, as the input of an initial neural network, taking the corresponding second-data-format images as the output of the neural network training model, and training the model to obtain the neural network.
A training model of the neural network can be constructed in advance; it comprises at least one first computing layer and at least one second computing layer, and of course may also include other computing layers.
The pre-designed training model is trained using the image data sets corresponding to the at least two different designated image devices, that is, the configuration parameters of the network are adjusted. The specific training mode is not limited; it may be, for example, back-propagation, resilient propagation, or conjugate gradient, as long as the trained neural network can receive a first-data-format image with its corresponding device parameters and output the corresponding second-data-format image.
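A minimal training sketch consistent with steps S21 and S22 is given below; the optimizer, the L1 loss, and the dataset interface are illustrative assumptions, since the embodiments leave the training mode open.

```python
import torch

def train(net, dataset, epochs=10, lr=1e-4):
    """dataset yields (raw_image, device_params, target_image) triples
    drawn from at least two different designated image devices."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()
    for _ in range(epochs):
        for raw, params, target in dataset:
            pred = net(raw, params)        # first-data-format image + parameters in
            loss = loss_fn(pred, target)   # compare with the second-data-format image
            opt.zero_grad()
            loss.backward()                # e.g. the back-propagation training mode
            opt.step()
    return net
```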
For example, fig. 6 shows how to acquire an image data set required for neural network training, which may include the following steps S211 to S214:
Step S211: first-data-format image acquisition. Different designated image devices each acquire first-data-format images; each designated image device may acquire multiple first-data-format images, forming a first-data-format image set.
Step S212: device parameter recording. The device parameters of the designated image device used when acquiring each image in step S211 may be recorded, including fixed parameters of the device itself (e.g., sensor sensitivity) and parameters related to the current shot (e.g., the aperture size used for the shot), but not limited thereto.
Step S213: second-data-format image acquisition. The acquired second-data-format images correspond to the first-data-format images, so a corresponding second-data-format image set is formed.
There are two ways to acquire the second-data-format images:
(1) Synchronous acquisition: while the first-data-format images are acquired in step S211, an image device capable of outputting the second data format acquires images synchronously to obtain the second-data-format images;
(2) Post-processing: the first-data-format images acquired in step S211 are subjected to appropriate image processing to obtain the second-data-format images (the dashed portion of the flow); suitable image processing methods include, but are not limited to, white balance correction, color interpolation, curve mapping, and the like.
Step S214: association storage. The acquired first-data-format images, device parameters, and second-data-format images are stored in correspondence to form the image data set; a sketch of one such stored record follows below.
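Purely as an illustration of the association made in step S214, each training sample could be stored as a record like the following; the field names are assumptions of this sketch.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Sample:
    raw_image: np.ndarray    # first-data-format image (step S211)
    device_params: dict      # recorded device parameters (step S212),
                             # e.g. {"sensor_sensitivity": ..., "aperture_mm": ...}
    target_image: np.ndarray # corresponding second-data-format image (step S213)

# an image data set is then a collection of such associated records
```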
In particular, referring to FIG. 7, the post-processing way of obtaining corresponding second-data-format images is shown. Image devices a1 to ak respectively acquire first-data-format image sets b1 to bk, record the corresponding device parameters c1 to ck during acquisition, and apply processing methods e1 to ek to the image sets b1 to bk respectively to obtain the corresponding second-data-format image sets d1 to dk. Each designated image device may acquire multiple first-data-format images, and the corresponding processed second-data-format images are likewise multiple; for example, image device a1 acquires multiple first-data-format images forming the first-data-format image set b1, and the corresponding second-data-format images form the second-data-format image set d1, and so on. The processing methods e1 to ek may be the same or different.
The image processing apparatus according to embodiments of the present invention is described below, but the invention is not limited thereto.
In one embodiment, fig. 2 shows an image processing apparatus 100, which may include:
an image acquisition unit 101 for acquiring a first image whose data format is a first data format;
a parameter acquisition unit 102 for acquiring device parameters used to acquire the first image;
and a format conversion unit 103 for inputting the first image and the device parameters into a trained neural network so that the neural network converts the data format of the first image from the first data format to a second data format, the second data format being an image format suitable for transmission and/or display of the first image.
In one embodiment, the neural network comprises:
at least one first computing layer for performing fusion processing;
and at least one second computing layer for performing convolution processing.
In one embodiment, converting, by the neural network, the data format of the first image from the first data format to a second data format comprises:
performing fusion processing on the first image in the first data format and the device parameters by the first computing layer to obtain fused data;
and performing convolution processing on the fused data by at least one second computing layer to obtain a first image in the second data format.
In one embodiment, converting, by the neural network, the data format of the first image from the first data format to a second data format comprises:
performing convolution processing on the first image in the first data format by at least one second computing layer to obtain first convolution data;
and performing fusion processing on the device parameters and the first convolution data by the first computing layer to obtain a first image in the second data format;
or,
performing convolution processing on the device parameters by at least one second computing layer to obtain second convolution data;
and performing fusion processing on the first image in the first data format and the second convolution data by the first computing layer to obtain a first image in the second data format;
or,
performing convolution processing on the first image in the first data format by at least one second computing layer to obtain first convolution data, and performing convolution processing on the device parameters by at least one second computing layer to obtain second convolution data;
and performing fusion processing on the first convolution data and the second convolution data by the first computing layer to obtain a first image in the second data format.
In one embodiment, the device parameters include at least one of the following parameters:
an environment-independent first device parameter of a capture device used to capture a first image of the first data format;
an environmentally-dependent second device parameter of an acquisition device used to acquire the first image in the first data format.
In one embodiment, the acquisition device for acquiring the first image in the first data format comprises a sensor and a lens;
The first device parameter comprises at least one of: the sensor comprises a sensor sensitivity, a sensor dynamic range, a sensor signal-to-noise ratio, a sensor pixel size, a sensor target surface size, a sensor resolution, a sensor frame rate, a sensor pixel number, a sensor spectral response, a sensor photoelectric response, a sensor array mode and a lens hood model;
the second device parameter comprises at least one of: the aperture of the lens, the focal length of the lens, the aperture size of the lens, the aperture of the filter of the lens and the visual angle of the lens.
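The listed parameters mix continuous quantities (e.g. sensitivity, focal length) and categorical ones (e.g. sensor array pattern), so feeding them to a neural network presupposes some numeric encoding. The patent leaves this open; the following sketch shows one assumed encoding, with hypothetical normalization ranges and a one-hot choice for the array pattern:

```python
import numpy as np

# Hypothetical encoding of a few device parameters into a fixed-length
# vector. The normalization ranges and the one-hot encoding of the sensor
# array pattern are illustrative assumptions, not specified by the patent.

ARRAY_PATTERNS = ["RGGB", "BGGR", "GRBG", "GBRG"]   # example Bayer layouts

def encode_params(iso, focal_length_mm, aperture_f, array_pattern):
    continuous = np.array([
        np.log2(iso / 100.0),       # sensitivity on a log scale
        focal_length_mm / 100.0,    # rough normalization
        1.0 / aperture_f,           # wider aperture -> larger value
    ])
    one_hot = np.zeros(len(ARRAY_PATTERNS))
    one_hot[ARRAY_PATTERNS.index(array_pattern)] = 1.0
    return np.concatenate([continuous, one_hot])

# e.g. encode_params(iso=800, focal_length_mm=35, aperture_f=2.8,
#                    array_pattern="RGGB")
```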
In one embodiment, the neural network is trained using image data sets corresponding to at least two different designated image devices;
the image data set corresponding to each designated image device includes: a first data format image acquired by the designated image device, the device parameters used when the designated image device acquired the first data format image, and a second data format image corresponding to the first data format image.
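A rough sketch of training on such a pooled multi-device dataset follows; the L1 loss, the Adam optimizer, and the shape of the dataset (an iterable of tensor triplets) are all illustrative assumptions rather than choices made by the patent:

```python
import torch
import torch.nn as nn

def train(network, dataset, epochs=10, lr=1e-4):
    """Sketch: fit the conversion network on triplets pooled from at least
    two designated image devices. Loss and optimizer are illustrative."""
    optimizer = torch.optim.Adam(network.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for raw, params, target in dataset:   # tensors from all k devices
            pred = network(raw, params)       # predicted second-format image
            loss = loss_fn(pred, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return network
```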
For the implementation of the functions and roles of the units in the above apparatus, refer to the implementation of the corresponding steps in the above method; details are not repeated here.
Since the device embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The device embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units.
The invention also provides an electronic device comprising a processor and a memory; the memory stores a program callable by the processor, and the processor, when executing the program, implements the image processing method of any one of the preceding embodiments.
The embodiments of the image processing apparatus may be applied to electronic devices. Taking a software implementation as an example, the apparatus is formed, as a logical device, by the processor of the electronic device on which it resides reading the corresponding computer program instructions from nonvolatile memory into memory and running them. In terms of hardware, FIG. 8 shows the hardware structure of an electronic device on which the image processing apparatus 100 resides, according to an exemplary embodiment of the present invention; besides the processor 510, the memory 530, the interface 520, and the nonvolatile memory 540 shown in FIG. 8, the electronic device may also include other hardware according to its actual function, which is not detailed here.
The present invention also provides a machine-readable storage medium on which a program is stored; when the program is executed by a processor, it causes an image device to implement the image processing method described in any one of the preceding embodiments.
The present invention may take the form of a computer program product embodied on one or more storage media including, but not limited to, disk storage, CD-ROM, optical storage, and the like, having program code embodied therein. Machine-readable storage media include both permanent and non-permanent, removable and non-removable media, and the storage of information may be accomplished by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of machine-readable storage media include, but are not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by a computing device.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present invention shall fall within the scope of the present invention.
Claims (16)
1. An image processing method, comprising:
acquiring a first image in a first data format;
acquiring the device parameters used when acquiring the first image;
inputting the first image and the device parameters to a trained neural network, to convert the data format of the first image from the first data format to a second data format by the neural network, the second data format being an image format suitable for transmission and/or display of the first image.
2. The image processing method of claim 1, wherein the neural network comprises:
at least one first computing layer for performing fusion processing;
at least one second computing layer for performing convolution processing.
3. The image processing method according to claim 2, wherein the converting, by the neural network, of the data format of the first image from the first data format to a second data format comprises:
performing, by the first computing layer, fusion processing on the first image in the first data format and the device parameters to obtain fusion data;
performing, by at least one second computing layer, convolution processing on the fusion data to obtain a first image in the second data format.
4. The image processing method according to claim 2, wherein the converting, by the neural network, of the data format of the first image from the first data format to a second data format comprises:
performing, by at least one second computing layer, convolution processing on the first image in the first data format to obtain first convolution data;
performing, by the first computing layer, fusion processing on the device parameters and the first convolution data to obtain a first image in the second data format;
or,
performing, by at least one second computing layer, convolution processing on the device parameters to obtain second convolution data;
performing, by the first computing layer, fusion processing on the first image in the first data format and the second convolution data to obtain a first image in the second data format;
or,
performing, by at least one second computing layer, convolution processing on the first image in the first data format to obtain first convolution data, and performing, by at least one second computing layer, convolution processing on the device parameters to obtain second convolution data;
performing, by the first computing layer, fusion processing on the first convolution data and the second convolution data to obtain a first image in the second data format.
5. The image processing method according to claim 1, wherein the device parameters include at least one of the following:
an environment-independent first device parameter of the acquisition device used to acquire the first image in the first data format;
an environment-dependent second device parameter of the acquisition device used to acquire the first image in the first data format.
6. The image processing method according to claim 5, wherein the acquisition device for acquiring the first image in the first data format comprises a sensor and a lens;
the first device parameter comprises at least one of: sensor sensitivity, sensor dynamic range, sensor signal-to-noise ratio, sensor pixel size, sensor target surface size, sensor resolution, sensor frame rate, sensor pixel count, sensor spectral response, sensor photoelectric response, sensor array pattern, and lens hood model;
the second device parameter comprises at least one of: lens aperture, lens focal length, lens aperture size, lens filter aperture, and lens viewing angle.
7. The image processing method of claim 1, wherein the neural network is trained using image data sets corresponding to at least two different designated image devices;
the image data set corresponding to each designated image device includes: a first data format image acquired by the designated image device, the device parameters used when the designated image device acquired the first data format image, and a second data format image corresponding to the first data format image.
8. An image processing apparatus, characterized by comprising:
an image acquisition unit, configured to acquire a first image in a first data format;
a parameter acquisition unit, configured to acquire the device parameters used when acquiring the first image;
a format conversion unit, configured to input the first image and the device parameters to a trained neural network, so that the neural network converts the data format of the first image from the first data format into a second data format, wherein the second data format is an image format suitable for transmission and/or display of the first image.
9. The image processing apparatus of claim 8, wherein the neural network comprises:
at least one first computing layer for performing fusion processing;
at least one second computing layer for performing convolution processing.
10. The image processing apparatus of claim 9, wherein the converting, by the neural network, of the data format of the first image from the first data format to a second data format comprises:
performing, by the first computing layer, fusion processing on the first image in the first data format and the device parameters to obtain fusion data;
performing, by at least one second computing layer, convolution processing on the fusion data to obtain a first image in the second data format.
11. The image processing apparatus of claim 9, wherein the converting, by the neural network, of the data format of the first image from the first data format to a second data format comprises:
performing, by at least one second computing layer, convolution processing on the first image in the first data format to obtain first convolution data;
performing, by the first computing layer, fusion processing on the device parameters and the first convolution data to obtain a first image in the second data format;
or,
performing, by at least one second computing layer, convolution processing on the device parameters to obtain second convolution data;
performing, by the first computing layer, fusion processing on the first image in the first data format and the second convolution data to obtain a first image in the second data format;
or,
performing, by at least one second computing layer, convolution processing on the first image in the first data format to obtain first convolution data, and performing, by at least one second computing layer, convolution processing on the device parameters to obtain second convolution data;
performing, by the first computing layer, fusion processing on the first convolution data and the second convolution data to obtain a first image in the second data format.
12. The image processing apparatus according to claim 8, wherein the device parameters include at least one of the following:
an environment-independent first device parameter of the acquisition device used to acquire the first image in the first data format;
an environment-dependent second device parameter of the acquisition device used to acquire the first image in the first data format.
13. The image processing apparatus of claim 12, wherein the acquisition device for acquiring the first image in the first data format comprises a sensor and a lens;
the first device parameter comprises at least one of: sensor sensitivity, sensor dynamic range, sensor signal-to-noise ratio, sensor pixel size, sensor target surface size, sensor resolution, sensor frame rate, sensor pixel count, sensor spectral response, sensor photoelectric response, sensor array pattern, and lens hood model;
the second device parameter comprises at least one of: lens aperture, lens focal length, lens aperture size, lens filter aperture, and lens viewing angle.
14. The image processing apparatus of claim 8, wherein the neural network is trained using image data sets corresponding to at least two different designated image devices;
the image data set corresponding to each designated image device includes: a first data format image acquired by the designated image device, the device parameters used when the designated image device acquired the first data format image, and a second data format image corresponding to the first data format image.
15. An image device comprising a processor and a memory; the memory stores a program callable by the processor, and the processor, when executing the program, implements the image processing method of any one of claims 1 to 7.
16. A machine-readable storage medium having stored thereon a program which, when executed by a processor, causes an image device to implement the image processing method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201810556547.8A CN110557579B (en) | 2018-05-31 | 2018-05-31 | Image processing method, device and equipment and readable medium
Publications (2)

Publication Number | Publication Date
---|---
CN110557579A (en) | 2019-12-10
CN110557579B (en) | 2021-11-02

Family

ID=68734073

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201810556547.8A Active CN110557579B (en) | 2018-05-31 | 2018-05-31 | Image processing method, device and equipment and readable medium

Country Status (1)

Country | Link
---|---
CN | CN110557579B (en)
Cited By (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN112990370A (zh) * | 2021-04-26 | 2021-06-18 | 腾讯科技(深圳)有限公司 | Image data processing method and device, storage medium and electronic equipment
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN101242470A (en) * | 2008-02-15 | 2008-08-13 | 北京中星微电子有限公司 | Display image data processing method and device
TW200935875A (en) * | 2008-02-15 | 2009-08-16 | Hon Hai Prec Ind Co Ltd | Camera and method of personality thereof
CN103559150A (en) * | 2013-11-08 | 2014-02-05 | 深圳市道通科技有限公司 | Realizing method and device for external camera of host, and mobile terminal
CN103841462A (en) * | 2013-12-03 | 2014-06-04 | 深圳市九洲电器有限公司 | Method and device for multiple-screen program playing of digital set top box
CN105611357A (en) * | 2015-12-25 | 2016-05-25 | 百度在线网络技术(北京)有限公司 | Image processing method and device
CN106339194A (en) * | 2016-08-31 | 2017-01-18 | 南京极域信息科技有限公司 | Method and system for dynamically adjusting multi-device display effect
CN106934426A (en) * | 2015-12-29 | 2017-07-07 | 三星电子株式会社 | Method and apparatus for a neural network based on image signal processing
CN107483876A (en) * | 2017-07-21 | 2017-12-15 | 阔地教育科技有限公司 | Video data processing method, storage device, and live and recorded broadcast interactive terminal
Also Published As

Publication Number | Publication Date
---|---
CN110557579B (en) | 2021-11-02
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant