CN118247583B - Construction method and construction device of high-speed night image definition enhancement model - Google Patents


Info

Publication number
CN118247583B
CN118247583B
Authority
CN
China
Prior art keywords
image
mask
speed
enhanced
feature
Prior art date
Legal status
Active
Application number
CN202410667348.XA
Other languages
Chinese (zh)
Other versions
CN118247583A (en)
Inventor
吴周检
颜世航
孙晶晶
卢晓婷
Current Assignee
Hangzhou Pixel Technology Co ltd
Original Assignee
Hangzhou Pixel Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Pixel Technology Co ltd
Priority to CN202410667348.XA
Publication of CN118247583A
Application granted
Publication of CN118247583B

Classifications

    • G06V10/764 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/40 — Extraction of image or video features
    • G06V10/75 — Image or video pattern matching; organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • Y02T10/40 — Engine management systems (climate change mitigation technologies related to transportation)


Abstract

The application provides a construction method of a high-speed night image definition enhancement model, which comprises the following steps: constructing a high-speed night image definition enhancement framework comprising a feature extraction module, an image enhancement module and a noise correction module; acquiring at least one high-speed image to be enhanced and a reference image corresponding to each high-speed image to be enhanced to form image pairs; inputting each image pair into the framework and obtaining an enhanced image through the feature extraction module, the image enhancement module and the noise correction module in sequence; calculating a feature difference between the enhanced image and the corresponding reference image through a loss function; and obtaining the high-speed night image definition enhancement model when the feature difference is smaller than a set threshold. With this scheme, the original details of the image are restored and the parts of the image that need enhancement are identified and enhanced in a targeted manner, so that the definition of the whole image is improved with fewer computing resources.

Description

Construction method and construction device of high-speed night image definition enhancement model
Technical Field
The application relates to the field of computer vision, in particular to a construction method and a construction device of a high-speed night image definition enhancement model.
Background
With the continuous development of technology, enhancing the night-time definition of expressway monitoring has become an important subject for traffic safety and management, and it brings several advantages. First, it improves night-time traffic safety: potential hazards such as vehicle faults and illegal driving can be discovered and identified more easily. Second, clear night-time monitoring is essential for accident investigation and responsibility determination, providing strong evidentiary support for the relevant departments. In addition, improved night-time definition strengthens the public security management of highways and helps prevent criminal behavior. In summary, enhancing highway night-time definition is an important measure for improving traffic safety and management efficiency.
To enhance the definition of night scenes captured by highway monitoring cameras, the prior art is generally designed on two levels, hardware and software. On the hardware level, high-sensitivity image sensors, advanced image processing pipelines, auxiliary lighting equipment and the like effectively improve the imaging capability of the camera under low illumination, making the night-time monitoring picture clearer and more discernible; improving the performance of the monitoring camera allows images and video to be captured more clearly at night.
On the software level, the acquired night image is generally enhanced with a corresponding algorithm to improve its definition, but current highway night-time enhancement algorithms may have the following limitations:
1. Image quality: in a night environment the brightness and contrast of an image are generally low, which may cause blurring and heavy noise. Some algorithms cannot effectively remove noise or restore image detail, which degrades the enhancement result.
2. Adaptability: different night scenes have different lighting conditions and characteristics, such as street-lamp illumination and headlight glare. Some algorithms cannot adapt well to complex night scenes, producing inconsistent or unsatisfactory enhancement.
3. Computational complexity: some complex enhancement algorithms require substantial computing resources and time, which can affect the performance of a real-time monitoring system.
4. Light-source interference: light sources such as headlights and street lamps may produce glare or reflections that degrade image quality. Some algorithms cannot effectively handle these disturbances, leading to overexposure or reflection distortion.
5. Moving objects: vehicle movement on night highways may cause image blurring or smearing. Some algorithms perform poorly on moving objects and fail to capture and enhance them clearly.
6. Environmental change: weather conditions such as fog, rain and snow affect the quality of the night image. Some algorithms adapt poorly to these changes and cannot provide stable enhancement under different weather conditions.
In summary, how to avoid the various kinds of interference on the expressway and optimize the acquired image to improve its definition, while using less cost and fewer computing resources, is a difficult problem to be solved in the prior art.
Disclosure of Invention
The embodiments of the application provide a construction method and a construction device for a high-speed night image definition enhancement model, which can restore the original details of an image and identify the parts of the image that need enhancement for targeted enhancement, so that fewer computing resources are used to improve the definition of the whole image.
In a first aspect, an embodiment of the present application provides a method for constructing a high-speed night image sharpness enhancement model, where the method includes:
constructing a high-speed night image definition enhancement framework, wherein the high-speed night image definition enhancement framework comprises a feature extraction module, an image enhancement module and a noise correction module;
at least one high-speed image to be enhanced and a reference image corresponding to each high-speed image to be enhanced are acquired to form an image pair, the image pair is input into a high-speed night image definition enhancement framework, the reference image is a clear image corresponding to the high-speed image to be enhanced, and the feature extraction module performs feature extraction on the image pair to obtain shallow features and global features;
The image enhancement module comprises a mask calculation unit, a detail restoration unit and a feature output unit, wherein the mask calculation unit obtains a first mask and a second mask based on the shallow features and the global features, the first mask is a mask matrix of a high-speed image to be enhanced in a training stage, the second mask is a mask matrix of a corresponding reference image, the detail restoration unit enhances the first mask based on an attention mechanism to obtain enhanced features, and the enhanced features are combined with the second mask in the feature output unit to obtain an output feature map;
and acquiring noise pixel points in the output feature map one by one in the noise correction module, correcting to obtain an enhanced image, calculating a feature difference value between the enhanced image and a corresponding reference image by using a loss function, and storing parameters of a current high-speed night image definition enhancement framework when the feature difference value is smaller than a set threshold value to obtain a high-speed night image definition enhancement model.
In a second aspect, an embodiment of the present application provides a high-speed night image sharpness enhancement method, including:
And acquiring a high-speed image to be enhanced, and inputting the high-speed image to be enhanced into the constructed high-speed night image definition enhancement model to obtain an enhanced image with enhanced definition.
In a third aspect, an embodiment of the present application provides a device for constructing a high-speed night image sharpness enhancement model, including:
the construction module is used for constructing a high-speed night image definition enhancement framework, and the high-speed night image definition enhancement framework comprises a feature extraction module, an image enhancement module and a noise correction module;
the feature extraction module is used for obtaining at least one high-speed image to be enhanced and a reference image corresponding to each high-speed image to be enhanced to form an image pair, inputting the image pair into the high-speed night image definition enhancement framework, wherein the reference image is a clear image corresponding to the high-speed image to be enhanced, and extracting features of the image pair by the feature extraction module to obtain shallow features and global features;
The image enhancement module comprises a mask calculation unit, a detail recovery unit and a feature output unit, wherein the mask calculation unit acquires a first mask and a second mask based on the shallow features and the global features, the first mask is a mask matrix of a high-speed image to be enhanced in a training stage, the second mask is a mask matrix of a corresponding reference image, the detail recovery unit enhances the first mask based on an attention mechanism to obtain enhanced features, and the enhanced features are combined with the second mask in the feature output unit to obtain an output feature map;
The noise correction module acquires noise pixel points in the output feature map one by one in the noise correction module, corrects the noise pixel points to obtain an enhanced image, calculates a feature difference value between the enhanced image and a corresponding reference image by using a loss function, and stores parameters of a current high-speed night image definition enhancement framework when the feature difference value is smaller than a set threshold value to obtain a high-speed night image definition enhancement model.
In a fourth aspect, embodiments of the present application provide an electronic device comprising a memory and a processor, the memory having stored therein a computer program, the processor being arranged to run the computer program to perform a method of constructing a high-speed night image sharpness enhancement model or a method of high-speed night image sharpness enhancement.
In a fifth aspect, embodiments of the present application provide a readable storage medium having stored therein a computer program including program code for controlling a process to execute a process including a construction method of a high-speed night image sharpness enhancement model or a high-speed night image sharpness enhancement method.
The main contributions and innovation points of the invention are as follows:
Aiming at the problem of image degradation caused by lighting in night scenes of highway monitoring cameras, the embodiment of the application first adopts YoloV for feature extraction and restores the original details of the image by fusing masks of the shallow and global features. It then applies a mask-based attention mechanism to realize image enhancement and further improve the contrast and brightness of the image, supplements the textures and details lost under low illumination through the detail restoration unit of the image enhancement algorithm, and finally applies a noise reduction step to reduce image noise and improve the overall definition of the image.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the other features, objects, and advantages of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of a method of constructing a high-speed night image sharpness enhancement model in accordance with an embodiment of the present application;
Fig. 2 is a schematic structural view of an image enhancement module according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a high-speed night image sharpness enhancement model in accordance with an embodiment of the present application;
FIG. 4 is a high-speed image to be enhanced;
FIG. 5 is an image processed by a high-speed night image sharpness enhancement model;
FIG. 6 is a block diagram of a construction apparatus of a high-speed night image sharpness enhancement model according to an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with aspects of one or more embodiments of the present description as detailed in the accompanying claims.
It should be noted that: in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. Furthermore, individual steps described in this specification, in other embodiments, may be described as being split into multiple steps; while various steps described in this specification may be combined into a single step in other embodiments.
Example 1
The embodiment of the application provides a method for constructing a high-speed night image definition enhancement model, which can restore the original details of an image and identify the parts of the image that need enhancement for targeted enhancement, so that fewer computing resources are used to improve the definition of the whole image. Specifically, referring to fig. 1, the method comprises the following steps:
constructing a high-speed night image definition enhancement framework, wherein the high-speed night image definition enhancement framework comprises a feature extraction module, an image enhancement module and a noise correction module;
at least one high-speed image to be enhanced and a reference image corresponding to each high-speed image to be enhanced are acquired to form an image pair, the image pair is input into a high-speed night image definition enhancement framework, the reference image is a clear image corresponding to the high-speed image to be enhanced, and the feature extraction module performs feature extraction on the image pair to obtain shallow features and global features;
The image enhancement module comprises a mask calculation unit, a detail restoration unit and a feature output unit, wherein the mask calculation unit obtains a first mask and a second mask based on the shallow features and the global features, the first mask is a mask matrix of a high-speed image to be enhanced in a training stage, the second mask is a mask matrix of a corresponding reference image, the detail restoration unit enhances the first mask based on an attention mechanism to obtain enhanced features, and the enhanced features are combined with the second mask in the feature output unit to obtain an output feature map;
and acquiring noise pixel points in the output feature map one by one in the noise correction module, correcting to obtain an enhanced image, calculating a feature difference value between the enhanced image and a corresponding reference image by using a loss function, and storing parameters of a current high-speed night image definition enhancement framework when the feature difference value is smaller than a set threshold value to obtain a high-speed night image definition enhancement model.
In the scheme, the image to be enhanced is an image captured by a camera under low-light conditions on a highway, and the reference image is either a high-definition image shot synchronously by a high-quality camera or a clear image generated by processing the image to be enhanced with image processing techniques such as denoising, sharpening and contrast enhancement.
In the step in which the feature extraction module extracts features from an image pair to obtain shallow features and global features, the image pair is input into the feature extraction module to obtain a first, second, third, fourth and fifth feature map; the second and third feature maps are fused to obtain the shallow features, and the fourth and fifth feature maps are fused to obtain the global features. The feature extraction module is a Yolov backbone network: the five feature maps are the outputs of different stages of the backbone, and their resolutions decrease in sequence from the first feature map to the fifth.
In some embodiments, let the width and height of the image in the image pair be 412×412, the output feature diagrams of each stage of the feature extraction module are shown in the following table one:
Table one: feature diagram size output by each stage of feature extraction module
Where h is the height of the image, w is the width of the image, conv1 is the first feature map, conv2 is the second feature map, conv3 is the third feature map, conv4 is the fourth feature map, and Conv5 is the fifth feature map.
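As a rough illustration of this stage, the following numpy sketch mimics the five halving backbone stages with simple average pooling (a stand-in for the actual Yolov convolutions, which the patent does not spell out) and fuses adjacent stages into the shallow and global features. All function names here are illustrative, not from the patent.

```python
import numpy as np

def avg_pool2(x):
    """Halve spatial resolution with 2x2 average pooling over an (H, W, C) map."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour upsampling by a factor of 2."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def extract_features(img):
    """Five-stage pyramid standing in for the backbone outputs Conv1..Conv5;
    each stage halves the resolution of the previous one."""
    feats, x = [], img
    for _ in range(5):
        x = avg_pool2(x)
        feats.append(x)
    return feats

def fuse(fine, coarse):
    """Fuse two adjacent stages: upsample the coarser map and average."""
    return (fine + upsample2(coarse)) / 2.0

img = np.random.rand(64, 64, 3)           # toy stand-in for a night image
c1, c2, c3, c4, c5 = extract_features(img)
shallow = fuse(c2, c3)                    # high-resolution, detail-rich
global_feat = fuse(c4, c5)                # low-resolution, semantic
```

The fused shallow map keeps the resolution of the second stage while absorbing information from the third, matching the description that shallow features stay high-resolution and global features low-resolution.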
In this aspect, as shown in fig. 2, in the step in which the mask calculation unit obtains the first mask and the second mask based on the shallow and global features, R denotes the first mask, T a dynamic threshold, I the shallow features, I' the global features, Conv a convolution operation, RepC(1) the copying of a single channel to C channels for dimension-aligned calculation, Binarize a binarization operation, and U the second mask.
Specifically, the second mask in this scheme marks the portions that remain clear in the reference image but are not clear in the high-speed image to be enhanced.
Specifically, the shallow features are high-resolution feature maps, whereas the global features have lower resolution but richer semantics. By subtracting the low-resolution global features from the high-resolution shallow features and constructing a dynamic threshold T, the scheme learns the first and second masks during training, so that in the application stage the clear and non-clear regions of the high-speed image to be enhanced can be distinguished automatically.
Specifically, in the application stage, the input of the high-speed night image definition enhancement model is a high-speed image to be enhanced, and the first mask is a non-clear area in the high-speed image to be enhanced, and the second mask is a clear area in the high-speed image to be enhanced.
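A minimal sketch of this mask computation under simplifying assumptions: the dynamic threshold T is fixed here rather than learned, and the second mask is taken as the plain complement of the first (in training the patent derives it from the reference image). The names `compute_masks` and `upsample_to` are hypothetical.

```python
import numpy as np

def upsample_to(x, h, w):
    """Nearest-neighbour resize of an (h', w', c) map to (h, w, c);
    assumes h and w are integer multiples of the input size."""
    return x.repeat(h // x.shape[0], axis=0).repeat(w // x.shape[1], axis=1)

def compute_masks(shallow, global_feat, threshold=0.1):
    """First mask R: pixels where the shallow and (upsampled) global features
    disagree by more than the threshold, treated as non-clear regions.
    Second mask U: here simply the complement, treated as clear regions."""
    h, w, _ = shallow.shape
    diff = np.abs(shallow - upsample_to(global_feat, h, w)).mean(axis=-1)
    first_mask = (diff > threshold).astype(np.float32)
    second_mask = 1.0 - first_mask
    return first_mask, second_mask
```

The subtraction of upsampled global features from shallow features follows the description above; everything else (fixed threshold, channel-mean difference) is an illustrative choice.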
In the step in which the detail restoration unit enhances the first mask based on the attention mechanism to obtain the enhanced features, the attention is computed in the scaled dot-product form
R̂ = softmax(QKᵀ/√d_k)·V,
where Q is the query, K is the key and V is the value, each derived from the first mask R, d_k is the key dimension, and R̂ is the attention result, i.e. the enhanced features.
Specifically, by focusing the attention mechanism only on the non-clear regions of the original image, the scheme enhances exactly the features that need it, reducing the computing resources required.
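The resource saving can be made concrete with a small sketch: attention is computed only over the pixels selected by the first mask, so its quadratic cost grows with the non-clear area rather than the full image. Identity Q/K/V projections are an assumption of this sketch; the patent does not specify the projections.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention(features, mask):
    """Scaled dot-product attention restricted to the masked (non-clear) pixels:
    Q, K, V are the feature vectors of those pixels only, so the attention cost
    scales with the non-clear area rather than the full image."""
    idx = np.argwhere(mask > 0)               # coordinates of non-clear pixels
    tokens = features[idx[:, 0], idx[:, 1]]   # (n, c) token matrix
    q = k = v = tokens                        # identity projections in this sketch
    d_k = tokens.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d_k))    # (n, n) attention weights
    enhanced = attn @ v                       # (n, c) enhanced tokens
    out = features.copy()
    out[idx[:, 0], idx[:, 1]] = enhanced      # write enhanced pixels back
    return out
```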
In the step in which the enhanced features are combined with the second mask in the feature output unit to obtain the output feature map, F denotes the output feature map, Activation denotes an activation function, R denotes the enhanced features, DWConv denotes a depth-separable convolution, and n is the number of non-clear regions in the high-speed image to be enhanced.
Specifically, after the original details in the high-speed image to be enhanced are restored using the attention mechanism, mixed-scale feature extraction is adopted: because night scenes differ greatly and the regions to be enhanced vary in size, depth-separable convolutions with different kernel sizes are used to extract features at different scales, and to capture regions of different sizes effectively these multi-scale features are combined with the second mask to obtain the whole image as the output feature map.
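The mixed-scale step above can be sketched as follows, with a depthwise box filter standing in for the actual depth-separable convolutions (whose learned kernels the patent does not give) and the second mask used to pass already-clear pixels through unchanged; the averaging of scales is an assumption of the sketch.

```python
import numpy as np

def depthwise_box(x, k):
    """k x k depthwise box filter, a stand-in for a depth-separable convolution."""
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def mixed_scale_output(enhanced, second_mask, kernel_sizes=(3, 5, 7)):
    """Extract features at several scales from the enhanced regions and merge
    them with the already-clear regions indicated by the second mask."""
    mixed = sum(depthwise_box(enhanced, k) for k in kernel_sizes) / len(kernel_sizes)
    m = second_mask[..., None]                 # broadcast mask over channels
    return m * enhanced + (1.0 - m) * mixed    # keep clear pixels as they are
```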
Further, the scheme adjusts the output feature map through a convolution layer, F̂ = Conv(F), where F̂ is the output feature map after detail recovery. Specifically, the convolution adjusts the output feature map by reducing its dimensionality, which performs the detail recovery.
In the step in which the noise correction module acquires and corrects the noise pixel points in the output feature map one by one to obtain the enhanced image, each noise pixel is located by its coordinates in the output feature map and corrected with a noise-correction weight, yielding the enhanced image; the correction weight is obtained during the training process.
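A minimal sketch of pixel-by-pixel noise correction, under assumptions the patent leaves open: noise pixels are detected as strong outliers against their 3x3 neighbourhood mean, and the correction weight (learned during training in the patent) is a fixed constant here. `correct_noise` is a hypothetical name.

```python
import numpy as np

def correct_noise(feature_map, weight=0.8, z_thresh=3.0):
    """Flag pixels that deviate strongly from their 3x3 neighbourhood mean and
    pull them toward that mean by a correction weight (learned in the patent,
    fixed here). Operates on a single-channel (H, W) map."""
    h, w = feature_map.shape
    padded = np.pad(feature_map, 1, mode="edge")
    # neighbourhood sum over the 3x3 window, then remove the centre pixel
    neigh = sum(padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3))
    neigh = (neigh - feature_map) / 8.0
    resid = feature_map - neigh
    noisy = np.abs(resid) > z_thresh * (resid.std() + 1e-8)
    out = feature_map.copy()
    out[noisy] = (1 - weight) * feature_map[noisy] + weight * neigh[noisy]
    return out, noisy
```

An isolated hot pixel is flagged and blended toward its neighbourhood, while ordinary pixels pass through unchanged.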
In summary, the structure of the high-speed night image definition enhancement model in the present solution is shown in fig. 3. In the overall formulation of the model, the output is the enhanced high-speed image, θ is the learning rate, and the target is the reference image corresponding to the high-speed image to be enhanced during the training stage (and the output of the model during the application stage); f is the neural network function f(x) = W·x + bias, where W is the weight of the convolution kernel, x is the input data during training (i.e. the feature maps of the image pair), and bias is the offset term.
Illustratively, fig. 4 is a high-speed image to be enhanced, and the image obtained by inputting fig. 4 into the high-speed night image sharpness enhancement model is shown in fig. 5.
In this scheme, the pixel-level loss and the perceptual loss are used together as the loss function to calculate the feature difference between the enhanced image and the corresponding reference image:
Loss = MSE(I, F) + SSIMLoss(I, F),
MSE(I, F) = mean((I − F)²),
SSIMLoss(I, F) = 1 − ((2·μ_I·μ_F + C1)·(2·σ_IF + C2)) / ((μ_I² + μ_F² + C1)·(σ_I² + σ_F² + C2)),
where MSE is the pixel-level loss, i.e. the difference between the high-speed image to be enhanced and the enhanced image; SSIMLoss measures the structural similarity between the enhanced image and the high-speed image to be enhanced, keeping them as similar as possible; I is the high-speed image to be enhanced, F is the enhanced image, μ_I and μ_F denote the average brightness, σ_I² and σ_F² the brightness variances, σ_IF the brightness covariance, and C1 and C2 are constants.
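The combined loss can be sketched directly from these definitions. One simplification is assumed here: SSIM is computed in a single window over the whole image rather than per local window, which keeps the same luminance/variance/covariance structure.

```python
import numpy as np

def mse_loss(a, b):
    """Pixel-level loss: mean squared difference between two images."""
    return float(np.mean((a - b) ** 2))

def ssim_loss(a, b, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - SSIM computed over the whole image in a single window."""
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
    return 1.0 - float(ssim)

def total_loss(enhanced, reference):
    """Combined pixel-level and structural-similarity loss."""
    return mse_loss(enhanced, reference) + ssim_loss(enhanced, reference)
```

For identical images both terms vanish, so the loss is zero at the optimum, consistent with training until the feature difference falls below the set threshold.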
Example two
A high-speed night image sharpness enhancement method, comprising:
And acquiring a high-speed image to be enhanced, and inputting the high-speed image to be enhanced into the high-speed night image definition enhancement model constructed in the first embodiment to obtain an enhanced image with enhanced definition.
Example III
Based on the same conception, referring to fig. 6, the application also provides a device for constructing a high-speed night image definition enhancement model, which comprises:
the construction module is used for constructing a high-speed night image definition enhancement framework, and the high-speed night image definition enhancement framework comprises a feature extraction module, an image enhancement module and a noise correction module;
the feature extraction module is used for obtaining at least one high-speed image to be enhanced and a reference image corresponding to each high-speed image to be enhanced to form an image pair, inputting the image pair into the high-speed night image definition enhancement framework, wherein the reference image is a clear image corresponding to the high-speed image to be enhanced, and extracting features of the image pair by the feature extraction module to obtain shallow features and global features;
The image enhancement module comprises a mask calculation unit, a detail recovery unit and a feature output unit, wherein the mask calculation unit acquires a first mask and a second mask based on the shallow features and the global features, the first mask is a mask matrix of a high-speed image to be enhanced in a training stage, the second mask is a mask matrix of a corresponding reference image, the detail recovery unit enhances the first mask based on an attention mechanism to obtain enhanced features, and the enhanced features are combined with the second mask in the feature output unit to obtain an output feature map;
The noise correction module acquires noise pixel points in the output feature map one by one in the noise correction module, corrects the noise pixel points to obtain an enhanced image, calculates a feature difference value between the enhanced image and a corresponding reference image by using a loss function, and stores parameters of a current high-speed night image definition enhancement framework when the feature difference value is smaller than a set threshold value to obtain a high-speed night image definition enhancement model.
Example IV
This embodiment also provides an electronic device, referring to fig. 7, comprising a memory 404 and a processor 402, the memory 404 having stored therein a computer program, the processor 402 being arranged to run the computer program to perform the steps of any of the method embodiments described above.
In particular, the processor 402 may include a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present application.
The memory 404 may include, among other things, mass storage for data or instructions. By way of example, and not limitation, the memory 404 may comprise a hard disk drive (Hard Disk Drive, abbreviated HDD), a floppy disk drive, a solid state drive (Solid State Drive, abbreviated SSD), flash memory, an optical disk, a magneto-optical disk, a magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 404 may include removable or non-removable (or fixed) media, where appropriate. The memory 404 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 404 is a non-volatile memory. In particular embodiments, the memory 404 includes read-only memory (ROM) and random access memory (RAM). Where appropriate, the ROM may be a mask-programmed ROM, a programmable ROM (Programmable Read-Only Memory, abbreviated PROM), an erasable PROM (Erasable Programmable Read-Only Memory, abbreviated EPROM), an electrically erasable PROM (Electrically Erasable Programmable Read-Only Memory, abbreviated EEPROM), an electrically rewritable ROM (Electrically Alterable Read-Only Memory, abbreviated EAROM) or a flash memory (FLASH), or a combination of two or more of these. Where appropriate, the RAM may be a static random access memory (Static Random Access Memory, abbreviated SRAM) or a dynamic random access memory (Dynamic Random Access Memory, abbreviated DRAM), where the DRAM may be a fast page mode dynamic random access memory (Fast Page Mode Dynamic Random Access Memory, abbreviated FPMDRAM), an extended data output dynamic random access memory (Extended Data Out Dynamic Random Access Memory, abbreviated EDODRAM), a synchronous dynamic random access memory (Synchronous Dynamic Random Access Memory, abbreviated SDRAM), or the like.
Memory 404 may be used to store or cache various data files that need to be processed and/or used for communication, as well as possible computer program instructions for execution by processor 402.
The processor 402 implements the method of constructing a high-speed night image sharpness enhancement model according to any of the above embodiments by reading and executing computer program instructions stored in the memory 404.
Optionally, the electronic apparatus may further include a transmission device 406 and an input/output device 408, where the transmission device 406 is connected to the processor 402 and the input/output device 408 is connected to the processor 402.
The transmission device 406 may be used to receive or transmit data via a network. Specific examples of the network described above may include a wired or wireless network provided by a communication provider of the electronic device. In one example, the transmission device includes a network adapter (Network Interface Controller, abbreviated NIC) that can connect to other network devices through a base station so as to communicate with the internet. In one example, the transmission device 406 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
The input-output device 408 is used to input or output information. In the present embodiment, the input information may be a high-speed image to be enhanced, a reference image, or the like, and the output information may be an enhanced image or the like.
Alternatively, in the present embodiment, the above-mentioned processor 402 may be configured to execute the following steps by a computer program:
S101, constructing a high-speed night image definition enhancement framework, wherein the high-speed night image definition enhancement framework comprises a feature extraction module, an image enhancement module and a noise correction module;
S102, acquiring at least one high-speed image to be enhanced and a reference image corresponding to each high-speed image to be enhanced to form an image pair, inputting the image pair into the high-speed night image definition enhancement framework, wherein the reference image is a clear image corresponding to the high-speed image to be enhanced, and the feature extraction module performs feature extraction on the image pair to obtain shallow features and global features;
S103, the image enhancement module comprises a mask calculation unit, a detail recovery unit and a feature output unit, wherein the mask calculation unit obtains a first mask and a second mask based on the shallow features and the global features, the first mask is a mask matrix of a high-speed image to be enhanced in a training stage, the second mask is a mask matrix of a corresponding reference image, the detail recovery unit enhances the first mask based on an attention mechanism to obtain enhanced features, and the enhanced features and the second mask are combined in the feature output unit to obtain an output feature map;
S104, acquiring noise pixel points in the output feature map one by one in the noise correction module, correcting them to obtain an enhanced image, calculating a feature difference value between the enhanced image and the corresponding reference image by using a loss function, and saving parameters of the current high-speed night image definition enhancement framework when the feature difference value is smaller than a set threshold value, to obtain the high-speed night image definition enhancement model.
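The stopping criterion in step S104 reduces to comparing a scalar feature difference against the set threshold. A minimal NumPy sketch of that check (the L1-style mean absolute difference, the function names, and the threshold value are illustrative assumptions, not the patent's actual loss function):

```python
import numpy as np

def feature_difference(enhanced: np.ndarray, reference: np.ndarray) -> float:
    # A simple L1-style loss: mean absolute difference between the
    # enhanced image and its reference image (assumed form).
    return float(np.mean(np.abs(enhanced.astype(np.float64) -
                                reference.astype(np.float64))))

def should_save(enhanced: np.ndarray, reference: np.ndarray,
                threshold: float = 0.05) -> bool:
    # Save the current framework parameters once the feature
    # difference value falls below the set threshold.
    return feature_difference(enhanced, reference) < threshold
```

In training, `should_save` would be evaluated after each pass; only when it returns `True` would the current parameters be stored as the final enhancement model.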
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and this embodiment is not repeated herein.
In general, the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects of the invention may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Embodiments of the invention may be implemented by computer software executable by a data processor of a mobile device, such as in a processor entity, or by hardware, or by a combination of software and hardware. Computer software or programs (also referred to as program products) including software routines, applets, and/or macros can be stored in any apparatus-readable data storage medium and they include program instructions for performing particular tasks. The computer program product may include one or more computer-executable components configured to perform embodiments when the program is run. The one or more computer-executable components may be at least one software code or a portion thereof. In this regard, it should also be noted that any block of the logic flow as in fig. 7 may represent a program step, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on a physical medium such as a memory chip or memory block implemented within a processor, a magnetic medium such as a hard disk or floppy disk, and an optical medium such as, for example, a DVD and its data variants, a CD, etc. The physical medium is a non-transitory medium.
It should be understood by those skilled in the art that the technical features of the above embodiments may be combined in any manner, and for brevity, all of the possible combinations of the technical features of the above embodiments are not described, however, they should be considered as being within the scope of the description provided herein, as long as there is no contradiction between the combinations of the technical features.
The foregoing examples illustrate only a few embodiments of the application; their description is relatively specific and detailed, but should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of protection of the application shall be defined by the appended claims.

Claims (10)

1. The construction method of the high-speed night image definition enhancement model is characterized by comprising the following steps of:
constructing a high-speed night image definition enhancement framework, wherein the high-speed night image definition enhancement framework comprises a feature extraction module, an image enhancement module and a noise correction module;
at least one high-speed image to be enhanced and a reference image corresponding to each high-speed image to be enhanced are acquired to form an image pair, the image pair is input into a high-speed night image definition enhancement framework, the reference image is a clear image corresponding to the high-speed image to be enhanced, and the feature extraction module performs feature extraction on the image pair to obtain shallow features and global features;
The image enhancement module comprises a mask calculation unit, a detail restoration unit and a feature output unit, wherein the mask calculation unit obtains a first mask and a second mask based on the shallow features and the global features, the first mask is a mask matrix of a high-speed image to be enhanced in a training stage, the second mask is a mask matrix of a corresponding reference image, the detail restoration unit enhances the first mask based on an attention mechanism to obtain enhanced features, and the enhanced features are combined with the second mask in the feature output unit to obtain an output feature map;
and acquiring noise pixel points in the output feature map one by one in the noise correction module, correcting to obtain an enhanced image, calculating a feature difference value between the enhanced image and a corresponding reference image by using a loss function, and storing parameters of a current high-speed night image definition enhancement framework when the feature difference value is smaller than a set threshold value to obtain a high-speed night image definition enhancement model.
2. The method for constructing the high-speed night image sharpness enhancement model according to claim 1, wherein in the step of extracting features of an image pair by the feature extraction module to obtain shallow features and global features, the image pair is input into the feature extraction module to obtain a first feature map, a second feature map, a third feature map, a fourth feature map and a fifth feature map, the second feature map and the third feature map are fused to obtain shallow features, the fourth feature map and the fifth feature map are fused to obtain global features, wherein the feature extraction module is a Yolov backbone network, the first feature map, the second feature map, the third feature map, the fourth feature map and the fifth feature map are output at different stages of the Yolov backbone network, and the resolutions of the first feature map, the second feature map, the third feature map, the fourth feature map and the fifth feature map are sequentially reduced.
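The fusion described in claim 2 (second and third feature maps into shallow features, fourth and fifth into global features) can be illustrated with nearest-neighbour upsampling. The stand-in feature maps, channel count, and averaging fusion below are assumptions, since the claim fixes only that the maps come from different backbone stages with successively halved resolutions:

```python
import numpy as np

def upsample_nn(x: np.ndarray, factor: int) -> np.ndarray:
    # Nearest-neighbour upsampling of an (H, W, C) feature map.
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def fuse(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Fuse two feature maps of different resolutions: upsample the
    # smaller one to match the larger, then average element-wise.
    factor = a.shape[0] // b.shape[0]
    return (a + upsample_nn(b, factor)) / 2.0

# Stand-ins for backbone outputs with successively halved resolutions.
f2, f3 = np.ones((32, 32, 8)), np.ones((16, 16, 8))
f4, f5 = np.ones((8, 8, 8)), np.ones((4, 4, 8))
shallow = fuse(f2, f3)   # shallow features from the 2nd and 3rd maps
global_ = fuse(f4, f5)   # global features from the 4th and 5th maps
```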
3. The method according to claim 1, wherein in the step of "the mask calculation unit acquires the first mask and the second mask based on the shallow feature and the global feature", the formula of acquiring the first mask and the second mask is expressed as follows:
wherein R is the first mask, T is a dynamic threshold, I is the shallow feature, I' is the global feature, Conv represents a convolution operation, RepC(1) represents replicating a single channel to C channels to achieve dimension-aligned calculation, Binarize is a binarization operation, and U is the second mask.
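The claimed formula itself is an image in the source, so only the variable legend above is available. One plausible structure consistent with that legend, binarizing against a dynamic threshold with RepC(1) aligning channel dimensions, is sketched below; every operation shown is an assumption, not the claimed formula:

```python
import numpy as np

def rep_c(x: np.ndarray, c: int) -> np.ndarray:
    # RepC(1): replicate a single-channel map to C channels
    # so dimensions align for element-wise computation.
    return np.repeat(x[..., np.newaxis], c, axis=-1)

def binarize(x: np.ndarray, t: float) -> np.ndarray:
    # Binarize against the dynamic threshold t.
    return (x > t).astype(np.float32)

# Toy shallow feature I of shape (H, W, C) and a single-channel
# stand-in for the global feature I'.
I = np.random.default_rng(0).random((8, 8, 4)).astype(np.float32)
I_glob = I.mean(axis=-1)            # assumed single-channel global feature
T = float(I.mean())                 # dynamic threshold from image statistics
R = binarize(I, T)                  # first mask (image to be enhanced)
U = binarize(rep_c(I_glob, 4), T)   # second mask (reference image)
```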
4. The method according to claim 1, wherein in the step of enhancing the first mask by the detail restoring unit based on an attention mechanism to obtain an enhanced feature, a calculation formula of the attention mechanism is as follows:
wherein Q is the query, K is the key, V is the value, R is the first mask, and the result of the attention calculation is the enhanced feature.
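The attention calculation is presumably the standard scaled dot-product form, softmax(QK^T/√d)V; how the first mask R enters the claimed formula is not recoverable from the legend alone, so the element-wise gating in this sketch is an assumption:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention(Q, K, V, R):
    # Scaled dot-product attention, with the first mask R applied
    # as an element-wise gate on the result (assumed combination).
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ V * R

rng = np.random.default_rng(1)
Q, K, V = (rng.random((6, 8)) for _ in range(3))
R = np.ones((6, 8))          # trivial mask for illustration
enhanced = masked_attention(Q, K, V, R)
```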
5. The method according to claim 1, wherein in the step of combining the enhancement feature with the second mask in the feature output unit to obtain the output feature map, a formula of combining the enhancement feature with the second mask is expressed as follows:
wherein F is the output feature map, Activation represents an activation function, R is the enhancement feature, DW Conv represents a depthwise separable convolution, n is the number of non-clear areas in the corresponding high-speed image to be enhanced, and U is the second mask.
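One plausible reading of claim 5, sketched with plain loops: a depthwise 3×3 convolution of the enhancement feature, an activation, and element-wise combination with the second mask U. The kernel shapes and the omission of the sum over the n non-clear areas are simplifying assumptions:

```python
import numpy as np

def depthwise_conv3x3(x: np.ndarray, kernels: np.ndarray) -> np.ndarray:
    # Depthwise 3x3 convolution: each channel is convolved with its
    # own kernel, with zero padding (kernels has shape (C, 3, 3)).
    h, w, c = x.shape
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for ch in range(c):
        for i in range(h):
            for j in range(w):
                out[i, j, ch] = np.sum(padded[i:i+3, j:j+3, ch] * kernels[ch])
    return out

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

def feature_output(R_hat: np.ndarray, U: np.ndarray,
                   kernels: np.ndarray) -> np.ndarray:
    # Assumed combination: activation over a depthwise convolution of
    # the enhancement feature, gated by the second mask U.
    return relu(depthwise_conv3x3(R_hat, kernels)) * U
```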
6. The method according to claim 1, wherein in the step of acquiring noise pixels in the output feature map one by one and correcting the obtained enhanced image in the noise correction module, a formula of acquiring noise pixels in the output feature map one by one and correcting is expressed as follows:
wherein the enhanced image is obtained by correcting the output feature map at the coordinates of each noise pixel in the output feature map, using a noise correction weight that is obtained during the training process.
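The correction formula is likewise an image in the source; the legend only establishes that each noise pixel is corrected at its coordinates with a weight obtained during training. A hypothetical pixel-wise blend toward the local 3×3 neighbourhood mean (the blending form and all names are assumptions):

```python
import numpy as np

def correct_noise(F: np.ndarray, noise_coords, w: float) -> np.ndarray:
    # Correct flagged noise pixels one by one: blend each pixel toward
    # the mean of its 3x3 neighbourhood (which includes the pixel
    # itself) with the learned correction weight w.
    E = F.astype(np.float64).copy()
    height, width = F.shape[:2]
    for i, j in noise_coords:
        i0, i1 = max(0, i - 1), min(height, i + 2)
        j0, j1 = max(0, j - 1), min(width, j + 2)
        neighbourhood_mean = F[i0:i1, j0:j1].mean()
        E[i, j] = (1.0 - w) * F[i, j] + w * neighbourhood_mean
    return E
```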
7. A high-speed night image sharpness enhancement method, comprising:
Obtaining a high-speed image to be enhanced, and inputting the high-speed image to be enhanced into the high-speed night image definition enhancement model constructed according to claim 1 to obtain an enhanced image with enhanced definition.
8. A high-speed night image sharpness enhancement model construction apparatus, comprising:
the construction module is used for constructing a high-speed night image definition enhancement framework, and the high-speed night image definition enhancement framework comprises a feature extraction module, an image enhancement module and a noise correction module;
the feature extraction module is used for obtaining at least one high-speed image to be enhanced and a reference image corresponding to each high-speed image to be enhanced to form an image pair, inputting the image pair into the high-speed night image definition enhancement framework, wherein the reference image is a clear image corresponding to the high-speed image to be enhanced, and extracting features of the image pair by the feature extraction module to obtain shallow features and global features;
The image enhancement module comprises a mask calculation unit, a detail recovery unit and a feature output unit, wherein the mask calculation unit acquires a first mask and a second mask based on the shallow features and the global features, the first mask is a mask matrix of a high-speed image to be enhanced in a training stage, the second mask is a mask matrix of a corresponding reference image, the detail recovery unit enhances the first mask based on an attention mechanism to obtain enhanced features, and the enhanced features are combined with the second mask in the feature output unit to obtain an output feature map;
And the noise correction module acquires noise pixel points in the output feature map one by one and corrects them to obtain an enhanced image, a feature difference value between the enhanced image and the corresponding reference image is calculated by using a loss function, and when the feature difference value is smaller than a set threshold value, parameters of the current high-speed night image definition enhancement framework are saved to obtain the high-speed night image definition enhancement model.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, the processor being arranged to run the computer program to perform a method of constructing a high speed night image sharpness enhancement model according to any of claims 1-6 or a high speed night image sharpness enhancement method according to claim 7.
10. A readable storage medium, characterized in that the readable storage medium has stored therein a computer program comprising program code for controlling a process to execute the process, the process comprising a method of constructing a high-speed night image sharpness enhancement model according to any one of claims 1 to 6 or a high-speed night image sharpness enhancement method according to claim 7.
CN202410667348.XA 2024-05-28 2024-05-28 Construction method and construction device of high-speed night image definition enhancement model Active CN118247583B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410667348.XA CN118247583B (en) 2024-05-28 2024-05-28 Construction method and construction device of high-speed night image definition enhancement model

Publications (2)

Publication Number Publication Date
CN118247583A CN118247583A (en) 2024-06-25
CN118247583B true CN118247583B (en) 2024-08-09

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112365429A (en) * 2020-12-21 2021-02-12 神思电子技术股份有限公司 Knowledge-driven image fuzzy region definition enhancement method
CN116385298A (en) * 2023-04-06 2023-07-04 福州大学 No-reference enhancement method for night image acquisition of unmanned aerial vehicle

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JP6116291B2 (en) * 2013-02-27 2017-04-19 オリンパス株式会社 Image processing apparatus, image processing method, and image processing program
US11109005B2 (en) * 2019-04-18 2021-08-31 Christie Digital Systems Usa, Inc. Device, system and method for enhancing one or more of high contrast regions and text regions in projected images
CN114764868A (en) * 2021-01-12 2022-07-19 北京三星通信技术研究有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN117151987A (en) * 2022-05-23 2023-12-01 海信集团控股股份有限公司 Image enhancement method and device and electronic equipment
CN116977208A (en) * 2023-07-06 2023-10-31 西安邮电大学 Low-illumination image enhancement method for double-branch fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant