CN116980724A - Image processing method, computer device, and computer-readable storage medium

Image processing method, computer device, and computer-readable storage medium

Info

Publication number
CN116980724A
CN116980724A CN202310723133.0A
Authority
CN
China
Prior art keywords
image
path
feature
features
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310723133.0A
Other languages
Chinese (zh)
Inventor
王伟 (Wang Wei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202310723133.0A priority Critical patent/CN116980724A/en
Publication of CN116980724A publication Critical patent/CN116980724A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/13 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
    • H04N23/16 Optical arrangements associated therewith, e.g. for beam-splitting or for colour correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/16 Image acquisition using multiple overlapping images; Image stitching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • H04N9/74 Circuits for processing colour signals for obtaining special effects
    • H04N9/76 Circuits for processing colour signals for obtaining special effects for mixing of colour signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, a computer device, and a computer-readable storage medium. The method comprises the following steps: acquiring a first image feature of a first path image and a second image feature of a second path image, where the first path image and the second path image differ in contrast and are obtained by photographing a target scene with different image sensors respectively; acquiring a feature matching relationship by using the first image feature and the second image feature; and performing color mapping on the first path image and the second path image by using the feature matching relationship to obtain an image processing result. This scheme can improve the applicability of image processing.

Description

Image processing method, computer device, and computer-readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular to an image processing method, a computer device, and a computer-readable storage medium.
Background
As technology develops, image pickup apparatuses are becoming increasingly common in various fields, and the requirements on their shooting quality keep rising.
For example, in a shooting environment without a visible-light fill lamp, captured images are often unclear because the visible-light illuminance is insufficient. To reduce such cases, a common current practice is to supplement the shooting environment with infrared fill light to obtain a corresponding black-and-white image, and then fuse the black-and-white image with a color image to obtain a clear fused image.
However, when a black-and-white image and a color image are fused, frame offsets between the two images are hard to avoid, accurate fusion is difficult to achieve, and the applicability of this approach is therefore low.
Disclosure of Invention
The present application mainly solves the technical problem of providing an image processing method, a computer device, and a computer-readable storage medium that can improve the applicability of image processing.
In order to solve the above problems, a first aspect of the present application provides an image processing method including: acquiring a first image feature of a first path image and a second image feature of a second path image, where the first path image and the second path image differ in contrast and are obtained by photographing a target scene with different image sensors respectively; acquiring a feature matching relationship by using the first image feature and the second image feature; and performing color mapping on the first path image and the second path image by using the feature matching relationship to obtain an image processing result.
In order to solve the above-mentioned problems, a second aspect of the present application provides a computer device including a memory and a processor coupled to each other, the memory storing program data, the processor being configured to execute the program data to implement any of the steps of the above-mentioned image processing method.
In order to solve the above-described problems, a third aspect of the present application provides a computer-readable storage medium storing program data executable by a processor for implementing any one of the steps of the above-described image processing method.
According to the above scheme, the first image feature of the first path image and the second image feature of the second path image are acquired, where the two images differ in contrast; a feature matching relationship is obtained from the first and second image features; and color mapping is performed on the first and second path images using that relationship to obtain an image processing result. Consequently, there is no need to consider the consistency of the positions or times at which the two images are acquired, nor to calibrate the image sensors of the two paths, so the applicability of image processing can be improved. In addition, because color mapping via the feature matching relationship maps the color information of one of the two images onto the other, one path only needs to supply color information while the other supplies detail; a high-quality image is therefore not required, a slightly blurred image is acceptable, and applicability is further improved.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings required in the description of the embodiments will be briefly described below, it being obvious that the drawings described below are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a flow chart of an embodiment of an image processing method of the present application;
FIG. 2 is a flowchart illustrating the step S12 of FIG. 1 according to an embodiment of the present application;
FIG. 3 is a flow chart of an embodiment of the present application for obtaining feature matching relationships;
FIG. 4 is a flowchart illustrating the step S12 of FIG. 1 according to another embodiment of the present application;
FIG. 5 is a flowchart illustrating the step S13 of FIG. 1 according to an embodiment of the present application;
FIG. 6 is a flow chart of an embodiment of the present application for obtaining a color map;
FIG. 7 is a flow chart of an embodiment of the integrated splice feature codec of the present application;
FIG. 8 is a schematic view of an embodiment of an image processing apparatus according to the present application;
FIG. 9 is a schematic diagram of an embodiment of a computer device of the present application;
FIG. 10 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first" and "second" in the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the front and rear associated objects are an "or" relationship. Further, "a plurality" herein means two or more than two. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
The present application provides the following examples, and each example is specifically described below.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of an image processing method according to the present application. The method may comprise the steps of:
s11: acquiring a first image feature of a first path of image and a second image feature of a second path of image; the contrast of the first path of image and the contrast of the second path of image are different, and the first path of image and the second path of image are obtained by shooting a target scene by using different image sensors respectively.
Image sensors of the image processing system can be used to photograph the target scene to obtain the first path image and the second path image, where different image sensors acquire the two images. For example, the image processing system may use a dual-sensor system, with different image sensors respectively capturing the first path image and the second path image of the target scene.
In some embodiments, the target scene corresponding to the first path image and the second path image may be the same scene, such as acquired from the same region.
In some embodiments, the target scenes corresponding to the first path image and the second path image may be different scenes, and the different scenes corresponding to the two paths images may have an associated or overlapping area, which is not limited in the present application.
In some embodiments, the acquisition times of the first path image and the second path image may differ by no more than a preset time difference, or the two images acquired within the preset time difference may contain the same target, where the target may be a person, an object, a car, an animal, or the like.
In some embodiments, the image sensors may include at least one of a black-and-white sensor and a color sensor; the black-and-white sensor can acquire a black-and-white image of the target scene, and the color sensor can acquire a color image of the target scene. In some application scenes, the black-and-white sensor acquires the black-and-white image in an infrared mode with the aid of an infrared fill lamp, and the color sensor acquires the color image in a visible-light mode (optionally with a visible-light fill lamp). To reduce light pollution, the visible-light fill lamp may be omitted in the visible-light mode, in which case the color image is acquired in a low-illumination environment.
In some embodiments, the first and second images differ in contrast; for example, one of them has higher contrast and the other lower contrast. This embodiment is described taking one black-and-white image and one color image as an example, but the present application is not limited thereto.
In some embodiments, the first image is a black-and-white image and the second image is a color image, which may be captured under low illumination or other lighting conditions. The first image may include more detail information than the second image, and the second image may include more color information. This is described below by way of example.
In some embodiments, the first image is a color image with more detail information and the second image is a color image with more color information.
In some embodiments, the first image and/or the second image are image frames of a video. For example, the first path image may be acquired at a first time period and the second path image at a second time period, which may be the same or different. According to the acquisition order, the first path image and the second path image acquired for the same frame can be paired for processing.
In some embodiments, a first path image and a second path image of a current frame may be acquired.
In some embodiments, before the step S11, the first path image and the second path image may be acquired.
The target scene is photographed with different image sensors to obtain the first path of image data and the second path of image data. The current-frame image data acquired by the two image sensors may be denoted raw1 and raw2, respectively, each with height and width H×W.
A first format processing is performed on the first path of image data to obtain the first path image. The first path of image data can be converted into data in YUV format or Lab-domain format. In this embodiment, the first path image is taken as Lab-domain data as an example. An ISP (Image Signal Processor, a processing component paired with the image sensor) may be used to tune the first path of image data into a black-and-white image with a high signal-to-noise ratio, i.e., convert it into YUV format, which may be recorded as I1_yuv. YUV is a family of true-color color spaces, in which "Y" represents brightness (Luminance or Luma), i.e., the gray-scale value, while "U" and "V" represent chromaticity (Chroma), describing the color and saturation of each pixel. The high-signal-to-noise-ratio black-and-white image I1_yuv is then converted into the Lab-domain format, recorded as I1_lab, with size H×W×C; this is the first path image. The Lab space consists of a lightness channel L and two color channels a and b, where a represents the green-to-red axis and b the blue-to-yellow axis. The advantage of Lab is that its color gamut is larger and is independent of the capture device.
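To make this format step concrete, below is a minimal OpenCV sketch of the Lab conversions. The function names are illustrative, the inputs are assumed to be 8-bit arrays already produced by the ISP tuning described above, and OpenCV's 8-bit Lab encoding (a/b channels offset by 128) stands in for whichever Lab representation the pipeline actually uses.

```python
import cv2
import numpy as np

def color_to_lab(bgr_image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 uint8 BGR color image into the Lab domain."""
    return cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)

def mono_to_lab(gray_image: np.ndarray) -> np.ndarray:
    """Lift an H x W uint8 black-and-white image into the Lab domain:
    L carries the detail; the a/b chroma channels come out neutral
    (128 in OpenCV's 8-bit Lab encoding)."""
    bgr = cv2.cvtColor(gray_image, cv2.COLOR_GRAY2BGR)
    return cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
```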
Similarly, a second format processing can be performed on the second path of image data to obtain the second path image, i.e., the data can be converted into YUV format or Lab-domain format. In this embodiment, the second path image is likewise taken as Lab-domain data. With the exposure gain and ISP parameters corresponding to the second path of image data, a color image containing rich color information is obtained, i.e., converted into YUV format, which may be recorded as I2_yuv. The tuned color image does not need to be of high quality, but color noise is unacceptable: since image detail is taken from the black-and-white path, the color path may tolerate a certain blur, while color information is taken from the color path, and color noise would corrupt the color mapping result. The color image I2_yuv is therefore converted into the Lab-domain format, recorded as I2_lab, with size H×W×C; this is the second path image.
In some implementations, one of the first and second images includes color channels and the other includes a brightness channel. For example, the first image includes a brightness channel, which may carry more detail information, and the second image includes color channels, which may carry more color information.
In some embodiments, a pre-trained feature extraction module may be utilized to obtain a first image feature of a first image and a second image feature of a second image of a current frame. The feature extraction module comprises a first extraction network N1 and a second extraction network N2.
The first extraction network N1 is used to extract features of the first path image to obtain first path features, and features from different layers of the first path features are spliced to obtain a first path spliced feature; combining multiple layers of features enhances the expressive power of the semantic features.
The first path spliced feature is further enhanced with the second extraction network N2 to obtain the first image feature, recorded as F1, where C is the feature dimension. The second extraction network N2 restores the resolution of the first path spliced feature to H×W, preparing for the subsequent computation of the color map. The network structure of the feature extraction module is not limited to this; it may be defined according to requirements.
Similarly, the first extraction network N1 may be used to extract features of the second path image to obtain second path features, whose different layers are spliced into a second path spliced feature; the second extraction network N2 then further enhances the spliced feature to obtain the second image feature of the second path image, recorded as F2.
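The following PyTorch sketch illustrates one possible shape of this two-stage extractor. The layer counts, channel widths, and names (FeatureExtractor, n1_layers, n2) are assumptions, and in_ch=3 assumes a Lab input; the description above only fixes the behavior: N1 yields multi-layer features that are spliced, and N2 enhances the spliced feature at resolution H×W.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    def __init__(self, in_ch: int = 3, c: int = 64):
        super().__init__()
        # N1: a small pyramid whose intermediate outputs are all kept.
        self.n1_layers = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, c, 3, stride=1, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(c, c, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(c, c, 3, stride=2, padding=1), nn.ReLU()),
        ])
        # N2: fuses the spliced pyramid back into an H x W x C feature map.
        self.n2 = nn.Conv2d(3 * c, c, 3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        feats = []
        for layer in self.n1_layers:
            x = layer(x)
            # Resize every level to the input resolution before splicing.
            feats.append(F.interpolate(x, size=(h, w), mode="bilinear",
                                       align_corners=False))
        spliced = torch.cat(feats, dim=1)   # multi-layer spliced feature
        return self.n2(spliced)             # enhanced feature, B x C x H x W
```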
In some embodiments, the first image feature and the second image feature are semantic features representing a black-and-white image and a color image, respectively.
S12: and acquiring a feature matching relationship by using the first image feature and the second image feature.
By utilizing the first image feature and the second image feature, a feature matching relationship between the first path of image and the second path of image can be obtained, and the feature matching relationship can express the matching condition or similarity condition of the first path of image and the second path of image, such as the similarity of the semantic features.
In some embodiments, referring to fig. 2, step S12 of the above embodiments may be further extended. The method for obtaining the feature matching relationship by using the first image feature and the second image feature may include the following steps:
s121: and acquiring the comprehensive image characteristic by utilizing the first image characteristic and the second image characteristic.
Referring to FIG. 3, a first convolution is performed on the first image feature F1 to obtain a first image convolution feature, recorded as F1_c. The first convolution may compress the feature dimension of the first image feature to (HW)×(C/N), where N is an empirical value or a custom preset value; the present application is not limited in this regard.
A second convolution is performed on the second image feature F2 to obtain a second image convolution feature, recorded as F2_c; this compresses the feature dimension of the second image feature to (C/N)×(HW). In addition, a matrix transposition (transpose) may be performed on the second image convolution feature.
The first convolution and the second convolution may use convolution kernels chosen for the specific scenario; for example, each may consist of a 1×1 convolution followed by a reshape operation. The reshape operation uses a reshape function, which transforms a given matrix into a matrix of a specified shape while keeping the number of elements unchanged; it can readjust the matrix's number of rows, columns, and dimensions.
A first preset process is then performed using the first image convolution feature and the second image convolution feature to obtain the comprehensive image feature, where the first preset process includes splicing, multiplication, or the like. The process can be expressed by the following formula:

M = F1_c · (F2_c)^T    (1)

In formula (1), M represents the comprehensive image feature, F1_c and F2_c respectively represent the first and second image convolution features, and T denotes matrix transposition.
S122: and acquiring a feature matching relationship by utilizing the comprehensive image features.
The feature matching relationship is obtained using the comprehensive image feature. The feature matching relationship may be represented by a relationship matrix between the first and second image convolution features, i.e., a relationship matrix over the semantic features. The process can be expressed by the following formula:

R_ij = exp(M_ij) / Σ_j exp(M_ij)    (2)

In formula (2), R_ij represents the feature matching relationship, i and j are the row and column positions (pixel points) of the matrix, and the feature matching relationship has size HW×HW.
In some embodiments, after the feature matching relationship is obtained, a feature relationship confidence may be derived from it, where the confidence is the maximum of the feature matching relationship over each column (or each row). Taking the per-column maximum as an example, the process can be expressed by the following formula:

S_i = max_j R_ij    (3)

In formula (3), S_i represents the feature relationship confidence.
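Under the shapes stated above, formulas (1) to (3) can be sketched in a few lines of PyTorch. The module name, the use of 1×1 convolutions for both compressions, and the compression factor N are illustrative assumptions consistent with the description; the softmax normalization corresponds to formula (2).

```python
import torch
import torch.nn as nn

class AttentionMatcher(nn.Module):
    def __init__(self, c: int = 64, n: int = 4):
        super().__init__()
        self.conv1 = nn.Conv2d(c, c // n, kernel_size=1)  # first 1x1 convolution
        self.conv2 = nn.Conv2d(c, c // n, kernel_size=1)  # second 1x1 convolution

    def forward(self, f1: torch.Tensor, f2: torch.Tensor):
        b, _, h, w = f1.shape
        # Reshape each compressed feature to (HW) x (C/N), as described above.
        f1c = self.conv1(f1).reshape(b, -1, h * w).transpose(1, 2)
        f2c = self.conv2(f2).reshape(b, -1, h * w).transpose(1, 2)
        m = torch.bmm(f1c, f2c.transpose(1, 2))  # formula (1): (HW) x (HW)
        r = torch.softmax(m, dim=-1)             # formula (2): row-wise softmax
        s = r.max(dim=-1).values                 # formula (3): confidence S_i
        return r, s
```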
In this embodiment, semantic features of the color-path color image and the black-and-white-path black-and-white image are acquired, and an attention mechanism is used to obtain a semantic-feature similarity matrix expressing the matching relationship between the first and second image convolution features. The attention mechanism compresses the channel dimension so that the relationship matrix can be computed more efficiently, and a relationship matrix learned by deep learning can express complex image-feature matching relationships.
In other embodiments, referring to fig. 4, step S12 of the above embodiment may be further extended. The method for obtaining the feature matching relationship by using the first image feature and the second image feature may include the following steps:
s123: performing feature point matching on the first image feature and the second image feature to obtain a feature matching relationship; wherein the feature point matching includes similarity matching of feature points.
In some embodiments, after the first image feature and the second image feature are acquired, a feature matching relationship is acquired by utilizing the similarity of the first image feature and the second image feature.
In some embodiments, similarity matching can be performed on feature points of the first path image and the second path image to obtain the feature matching relationship. The manner of obtaining the feature matching relationship is not limited by the present application.
S13: and performing color mapping on the first path of image and the second path of image by utilizing the characteristic matching relationship to obtain an image processing result.
Color information is obtained through the feature matching relationship between the first path image and the second path image, and is then used to perform color mapping on the two images, yielding the image processing result of the current frame; the image processing result may be a color image.
In some embodiments, referring to fig. 5, step S13 of the above embodiments may be further extended. The method includes the steps of performing color mapping on a first path of image and a second path of image by utilizing a feature matching relationship to obtain an image processing result, and the method comprises the following steps:
s131: and carrying out second preset processing on the feature matching relation and the second path of image to obtain a color mapping chart.
In some embodiments, the color map may represent the mapping of the color information of the second path image onto the detail information of the first path image, that is, a color mapping relationship between pixels of the first path image and pixels of the second path image. For example, if the color of a pixel of the second path image is red and that pixel corresponds to a given pixel of the first path image, the corresponding first-path pixel can subsequently be mapped to red.
In some embodiments, the color map may represent a mapping relationship of color channels of the first path image and the second path image.
In some embodiments, the color mapping is obtained by performing a second preset process on the feature matching relationship and the color channels of the second image.
In some embodiments, the second preset process may be multiplication or connection, which the present application is not limited to.
Referring to fig. 6, the second preset process may be performed on the feature matching relationship R and the second path image to obtain the color map, which can be expressed by the following formula:

W_ab = R · I2_ab    (4)

In formula (4), W_ab represents the color map, R represents the feature matching relationship, and I2_ab represents the a/b color channels of the second path image; the size of the color map may be H×W×2.
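A minimal sketch of formula (4), assuming the batched shapes of the matcher sketch above; the function and variable names are illustrative.

```python
import torch

def color_map(r: torch.Tensor, i2_ab: torch.Tensor) -> torch.Tensor:
    """Formula (4): map the second path's a/b chroma through the relation R.
    r: B x (HW) x (HW) feature matching relationship.
    i2_ab: B x 2 x H x W a/b channels of the second path Lab image."""
    b, _, h, w = i2_ab.shape
    ab = i2_ab.reshape(b, 2, h * w).transpose(1, 2)     # B x (HW) x 2
    w_ab = torch.bmm(r, ab)                             # B x (HW) x 2
    return w_ab.transpose(1, 2).reshape(b, 2, h, w)     # B x 2 x H x W map
```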
S132: and performing color mapping on the first path of image by using the color mapping graph to obtain an image processing result.
A channel prediction result is obtained using the color map W_ab and the first path image, where the channel prediction result includes color prediction results for at least two color channels, i.e., the color channels a and b.
The channel prediction results (color channels a and b) are then spliced channel-wise with the brightness channel L of the first path image to obtain the image processing result of the current frame.
The process of performing the second preset processing on the feature matching relationship and the second path image to obtain the color mapping chart may include at least two modes as follows:
mode 1: splicing the first path of image, the color mapping diagram and the feature relation confidence coefficient according to the channel to obtain comprehensive splicing features; the feature relation confidence is obtained based on the feature matching relation; performing feature encoding and decoding on the comprehensive splicing features to obtain image encoding and decoding features; performing up-sampling processing by utilizing image coding and decoding characteristics to obtain a channel prediction result; and performing channel splicing on the channel prediction results (color channels a and b) and the brightness channel L of the first path of image according to the channels to obtain an image processing result of the current frame.
Referring to fig. 7, the first path image I1_lab, the color map W_ab, and the feature relationship confidence S are spliced along the channel dimension to obtain the comprehensive spliced feature. The comprehensive spliced feature is feature-encoded and then decoded to obtain the image codec feature, which is up-sampled to obtain the channel prediction result, i.e., the color prediction results for color channels a and b of the first path image of the current frame. These are spliced channel-wise with the brightness channel L of the first path image of the current frame to obtain the image processing result of the current frame, which may be recorded as Ŷ_t; the image processing result is a colorized image.
Mode 2: splice the first path image, the color map, the feature relationship confidence, and the historical image processing result along the channel dimension to obtain a comprehensive spliced feature, where the feature relationship confidence is obtained from the feature matching relationship; perform feature encoding and decoding on the comprehensive spliced feature to obtain an image codec feature; perform up-sampling on the image codec feature to obtain the channel prediction result; and splice the channel prediction results (color channels a and b) channel-wise with the brightness channel L of the first path image to obtain the image processing result of the current frame.
That is, the first path image I1_lab, the color map W_ab, the feature relationship confidence S, and the historical image processing result Ŷ_{t-1} are spliced along the channel dimension to obtain the comprehensive spliced feature. The comprehensive spliced feature is feature-encoded and then decoded to obtain the image codec feature, which is up-sampled to obtain the channel prediction result, i.e., the color prediction results for color channels a and b of the first path image of the current frame. These are spliced channel-wise with the brightness channel L of the first path image of the current frame to obtain the image processing result of the current frame, which may be recorded as Ŷ_t; the image processing result is a colorized image.
Here, the historical image processing result is the image processing result obtained for the previous frame or a preceding preset number of frames; for example, the result Ŷ_{t-1} obtained by executing the above image processing method on the first and second path images of the previous frame may be used.
Compared with mode 1, adding the historical image processing result can improve the accuracy of the color prediction in the channel prediction result.
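A hedged sketch of the colorization stage follows. The encoder-decoder depth and channel counts are assumptions; only the interface (channel splicing of the inputs, a/b prediction, splicing with the L channel) follows the description above.

```python
import torch
import torch.nn as nn

class ColorizationModule(nn.Module):
    """Encoder-decoder sketch. in_ch = 9 corresponds to mode 2:
    first path Lab image (3) + color map W_ab (2) + confidence S (1)
    + previous frame's Lab result (3); mode 1 drops the history (in_ch = 6)."""
    def __init__(self, in_ch: int = 9, mid: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, mid, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(mid, mid, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoding plus up-sampling back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(mid, mid, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(mid, mid, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(mid, 2, 3, padding=1)  # a/b channel prediction

    def forward(self, i1_lab, w_ab, s, prev_result):
        x = torch.cat([i1_lab, w_ab, s, prev_result], dim=1)  # channel splice
        ab_pred = self.head(self.decoder(self.encoder(x)))
        # Channel-splice the a/b prediction with the first path's L channel.
        return torch.cat([i1_lab[:, :1], ab_pred], dim=1)     # Lab result
```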
In some embodiments, after the image processing result is obtained, a third format processing may be performed on it to convert it into the Lab-domain format or the YUV format.
In some embodiments, after the image processing result of the current frame is obtained, the first and second path images of the next frame may be acquired, and the image processing result Ŷ_t of the current frame may be used as the historical image processing result for the color mapping of the next frame; the steps of the above embodiments then continue to be performed.
In the above embodiment, the first image feature of the first path image and the second image feature of the second path image are acquired (the two images differing in contrast), a feature matching relationship is obtained from them, and color mapping is performed on the two images using that relationship to obtain the image processing result of the current frame. Since neither the consistency of the acquisition positions or times of the two images nor calibration of their image sensors needs to be considered, the applicability of image processing can be improved.
In addition, a color map is obtained through the feature matching relationship between the first and second path images, and the detail information of the first path image is combined with the color information of the second path image to colorize the first path image. Because only the color information of the color path is needed, a high-quality color image is not required; the first and second path images are not fused, and the detail information comes from the black-and-white path, so the color-path image may tolerate a certain degree of blur or low illumination. No strict alignment of capture positions is required, and the visible-light color image need not be of very high quality: color mapping is completed by similarity matching of the semantic features of the two images, which preserves color information well while ensuring the signal-to-noise ratio, further improving the applicability and accuracy of image processing.
In addition, the image processing method can realistically colorize the black-and-white image by drawing on video colorization and rendering techniques from the field of computer vision, so the black-and-white and color images are not fused and no dual-sensor calibration is needed.
In some embodiments, an image processing model may be employed to implement the steps of the above embodiments. The image processing model may include a feature extraction module, an attention module, and a colorization module, where the feature extraction module comprises the first extraction network N1 and the second extraction network N2, the attention module is built on an attention mechanism, and the colorization module has an encoder-decoder (codec) structure. The feature extraction module may implement step S11 to obtain the first and second image features; the attention module may implement step S12 to obtain the feature matching relationship, as well as the color map and the feature relationship confidence; and the colorization module may implement step S13 to perform the color mapping.
In some embodiments, the image processing model may be trained to obtain a trained image processing model, for implementing the image processing method described above.
In the training process, in order to optimize the parameters in the feature extraction module, the colorization module, and the attention module, two aspects can be considered: one is optimizing the loss on the extracted semantic features (the first and second image features); the other is the authenticity of the colorization result (the image processing result).
Black-and-white-path sample images and color-path sample images can be obtained, and corresponding sample image labels set (e.g., image-processing-result labels, label image features, and the like). The input sample images are processed by the image processing model to obtain a predicted image processing result, and the parameters of the image processing model (e.g., those of the feature extraction module, the colorization module, or the attention module) are adjusted using the sample image labels and the predicted result, yielding a trained image processing model.
In the training process, the loss on the semantic features extracted by the feature extraction module is optimized using the following formula:

L_sem = Σ_i ( ||φ_i(Ŷ) - φ_i(X1)|| + ||φ_i^c(Ŷ) - φ_i^c(X1)|| )    (5)

In formula (5), i indexes the different layers of the feature network; φ_i(Ŷ) represents the sample image features (e.g., the image features of the colorized black-and-white sample image) and φ_i(X1) the image features of its label; φ_i^c(Ŷ) represents the sample image convolution features and φ_i^c(X1) the image convolution features of its label. Training is performed by back-propagation on sample-label pairs, and the optimization objective is to minimize the semantic loss between the colorized image Ŷ and its label X1.
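A minimal sketch of formula (5), assuming the per-layer features have already been extracted for the colorized image and its label; the same call can be reused for the convolution features.

```python
import torch

def semantic_loss(feats_pred, feats_label):
    """feats_pred / feats_label: lists of per-layer feature tensors taken from
    the same extraction-network layers for the colorized image and its label."""
    return sum(torch.mean((fp - fl) ** 2)
               for fp, fl in zip(feats_pred, feats_label))
```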
In the training process, the authenticity of the colorization result is also considered. Edge-protection optimization can be performed on boundary pixels using the following formula:

L_edge = (1/N) Σ_i Σ_{j∈N(i)} w_ij · d_ij    (6)

In formula (6), N is the number of image blocks, N(i) denotes the neighborhood of pixel i, and C may represent any one of the L, a, b channels. Ŷ denotes the predicted image result and Y the image-processing-result label; d_ij denotes the image distance between neighboring pixels, computed on a color channel of the prediction Ŷ^C and its label Y^C (e.g., in the Lab-domain format), and w_ij denotes the image weight; i and j are positive integers. The purpose of this optimization objective is to prevent boundary color overflow.
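One way formula (6) might be realized is sketched below. The 4-neighbourhood, the exponential weight, and sigma are assumptions; border wrap-around introduced by torch.roll is ignored for brevity. Neighbouring pixels whose label colors agree (large w_ij) are pushed to agree in the prediction as well, which discourages color bleeding across edges.

```python
import torch

def edge_protection_loss(pred: torch.Tensor, label: torch.Tensor,
                         sigma: float = 0.1) -> torch.Tensor:
    """pred / label: B x 3 x H x W Lab images scaled to [0, 1]."""
    loss = pred.new_zeros(())
    for shift in ((0, 1), (1, 0)):                 # right / down neighbours
        p = torch.roll(pred, shifts=shift, dims=(2, 3))
        l = torch.roll(label, shifts=shift, dims=(2, 3))
        d = ((label - l) ** 2).sum(dim=1)          # label distance d_ij
        w = torch.exp(-d / sigma)                  # weight w_ij
        loss = loss + (w * ((pred - p) ** 2).sum(dim=1)).mean()
    return loss
```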
In addition, in the training process, a generative adversarial loss can be used to optimize the colorization authenticity across frames, and training of the image processing model can be completed with a generator and a discriminator, as in the following formula:

min_G max_D  E_Z[ log D(Z) ] + E_Ẑ[ log(1 - D(Ẑ)) ]    (7)

In formula (7), Ẑ is a generated color image, Z is a true color image, and D is a convolutional network that discriminates whether an image is real or fake, outputting 0 (fake) or 1 (real); G denotes the generator and D the discriminator. A true color image should be judged real by the discriminator, and a fake color image (one produced by the generator) should be judged fake. This is a binary classification problem in which 0 represents fake and 1 represents real, so the discriminator's output should approach 1 for true color images and 0 for generated ones.
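Formula (7) in its standard binary cross-entropy form might look as follows, where d_real and d_fake are assumed to be the discriminator's logits for true and generated color images.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Push D(Z) toward 1 for true color images, D(Ẑ) toward 0 for generated ones."""
    real = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
    fake = F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    return real + fake

def generator_loss(d_fake: torch.Tensor) -> torch.Tensor:
    """The generator tries to make the discriminator score its output as real."""
    return F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
```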
For the above embodiments, the present application provides an image processing apparatus, please refer to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of the image processing apparatus of the present application.
The image processing apparatus 20 includes an image module 21, a matching module 22, and a mapping module 23, wherein the image module 21, the matching module 22, and the mapping module 23 are connected to each other.
The image module 21 is configured to acquire a first image feature of the first path of image and a second image feature of the second path of image; the contrast of the first path of image and the contrast of the second path of image are different, and the first path of image and the second path of image are obtained by shooting a target scene by using different image sensors respectively.
The matching module 22 is configured to obtain a feature matching relationship using the first image feature and the second image feature;
the mapping module 23 is configured to perform color mapping on the first path of image and the second path of image by using the feature matching relationship, so as to obtain an image processing result.
For the implementation of this embodiment, reference may be made to the implementation process of the foregoing embodiments, which is not repeated here.
For the foregoing embodiments, the present application provides a computer device, please refer to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of the computer device of the present application. The computer device 30 comprises a memory 31 and a processor 32, wherein the memory 31 and the processor 32 are coupled to each other, the memory 31 stores program data, and the processor 32 is configured to execute the program data to implement the steps of any of the embodiments of the image processing method described above.
In the present embodiment, the processor 32 may also be referred to as a CPU (Central Processing Unit ). The processor 32 may be an integrated circuit chip having signal processing capabilities. Processor 32 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The general purpose processor may be a microprocessor or the processor 32 may be any conventional processor or the like.
For the method of the above embodiment, which may be implemented in the form of a computer program, the present application proposes a computer readable storage medium, please refer to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of the computer readable storage medium of the present application. The computer-readable storage medium 40 stores therein program data 41 capable of being executed by a processor, the program data 41 being executable by the processor to implement the steps of any of the embodiments of the image processing method described above.
The computer-readable storage medium 40 of the present embodiment may be a medium that can store the program data 41, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk; it may also be a server storing the program data 41, which may send the stored program data 41 to another device for execution, or may itself run the stored program data 41.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing an electronic device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application.
It will be apparent to those skilled in the art that the modules or steps of the application described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, or they may alternatively be implemented in program code executable by computing devices, such that they may be stored in a computer readable storage medium for execution by computing devices, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps within them may be fabricated into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The foregoing description is only illustrative of the present application and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes or direct or indirect application in other related technical fields are included in the scope of the present application.

Claims (10)

1. An image processing method, the method comprising:
acquiring a first image feature of a first path of image and a second image feature of a second path of image; the contrast ratio of the first path of image and the second path of image is different, and the first path of image and the second path of image are obtained by shooting a target scene by using different image sensors respectively;
acquiring a feature matching relationship by utilizing the first image feature and the second image feature;
and performing color mapping on the first path of image and the second path of image by utilizing the feature matching relationship to obtain an image processing result.
2. The method of claim 1, wherein the obtaining a feature matching relationship using the first image feature and the second image feature comprises:
acquiring comprehensive image features by utilizing the first image features and the second image features;
acquiring the feature matching relationship by utilizing the comprehensive image features; or,
performing feature point matching on the first image feature and the second image feature to obtain the feature matching relationship; wherein the feature point matching includes similarity matching of feature points.
3. The method of claim 2, wherein the obtaining a composite image feature using the first image feature and the second image feature comprises:
performing first convolution on the first image feature to obtain a first image convolution feature; and
performing second convolution on the second image feature to obtain a second image convolution feature;
and performing first preset processing by using the first image convolution characteristic and the second image convolution characteristic to obtain the comprehensive image characteristic.
4. The method according to claim 1, wherein performing color mapping on the first path image and the second path image by using the feature matching relationship to obtain an image processing result includes:
performing second preset processing on the feature matching relation and the second path of image to obtain a color mapping diagram;
and performing color mapping on the first path of image by using the color mapping graph to obtain the image processing result.
5. The method of claim 4, wherein performing color mapping on the first path image using the color map to obtain the image processing result comprises:
obtaining a channel prediction result by using the color mapping diagram and the first path of image; wherein the channel prediction result comprises color prediction results of at least two color channels;
and performing channel splicing on the channel prediction result and the brightness channel of the first path of image to obtain the image processing result.
6. The method of claim 5, wherein using the color map and the first path image to obtain a channel prediction result comprises:
splicing the first path of image, the color mapping diagram and the feature relation confidence according to the channel to obtain comprehensive splicing features; or, splicing the first path of image, the color mapping diagram, the feature relation confidence coefficient and the historical image processing result according to the channel to obtain comprehensive splicing features; the feature relation confidence is obtained based on the feature matching relation;
performing feature encoding and decoding on the comprehensive splicing features to obtain image encoding and decoding features;
and performing up-sampling processing by utilizing the image coding and decoding characteristics to obtain the channel prediction result.
7. The method of claim 1, wherein the acquiring the first image feature of the first image and the second image feature of the second image comprises:
extracting features of the first path of images to obtain first path of features;
splicing different layer features of the first path features to obtain first path splicing features;
performing enhancement processing on the first path of splicing characteristics to obtain first image characteristics of the first path of images; and
extracting features of the second path of images to obtain second path of features;
splicing different layer features of the second path features to obtain second path splicing features;
and carrying out enhancement processing on the second path splicing characteristic to obtain a second image characteristic of the second path image.
8. The method of claim 1, wherein prior to acquiring the first image feature of the first image and the second image feature of the second image, comprising:
shooting by using different image sensors to obtain first path of image data and second path of image data;
performing first format processing on the first path of image data to obtain the first path of image; and
performing second format processing on the second path of image data to obtain a second path of image;
wherein either one of the first and second images includes a color channel, and the other includes a brightness channel.
9. A computer device comprising a memory and a processor coupled to each other, the memory having stored therein program data, the processor being adapted to execute the program data to implement the steps of the method of any of claims 1 to 8.
10. A computer readable storage medium, characterized in that program data executable by a processor are stored, said program data being for implementing the steps of the method according to any one of claims 1 to 8.
CN202310723133.0A 2023-06-16 2023-06-16 Image processing method, computer device, and computer-readable storage medium Pending CN116980724A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310723133.0A CN116980724A (en) 2023-06-16 2023-06-16 Image processing method, computer device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310723133.0A CN116980724A (en) 2023-06-16 2023-06-16 Image processing method, computer device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN116980724A 2023-10-31

Family

ID=88473988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310723133.0A Pending CN116980724A (en) 2023-06-16 2023-06-16 Image processing method, computer device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN116980724A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination