CN116664413A - Image volume fog eliminating method and device based on Abbe convergence operator - Google Patents
- Publication number: CN116664413A (application CN202310303362.7A)
- Authority
- CN
- China
- Prior art keywords
- data
- volume
- fog
- original image
- volume fog
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/73 (G — Physics; G06 — Computing; G06T — Image data processing or generation, in general)
- G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction (G06T5/00 — Image enhancement or restoration)
- G06T7/13 — Edge detection (G06T7/00 — Image analysis; G06T7/10 — Segmentation; Edge detection)
- Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation (Y02A — Technologies for adaptation to climate change)
Abstract
The application discloses an image volume fog elimination method and device based on an Abbe convergence operator. The method comprises: acquiring original image information, wherein the original image information comprises original image data and volume fog data; segmenting the original image data in the original image information and weighting it correspondingly with the volume fog data to obtain volume fog data to be processed; performing Abbe convergence calculation on all image elements in the volume fog data to be processed and comparing the calculation result with a preset threshold value to obtain edge volume fog data; and performing elimination processing on the edge volume fog. The application addresses the technical problem that prior-art volume fog elimination methods usually either remove the volume fog in the original image data entirely through an elimination filter algorithm or apply a uniform partial elimination to the whole image, and therefore cannot eliminate useless volume fog in a targeted manner, reducing the precision and quality of image processing.
Description
Technical Field
The application relates to the field of image restoration processing, and in particular to an image volume fog elimination method and device based on an Abbe convergence operator.
Background
With the continuous development of intelligent science and technology, intelligent devices are increasingly used in people's daily life, work and study; intelligent technological means improve quality of life and increase learning and working efficiency.
At present, real-time monitoring systems for the high-speed movement of athletes on a sports field generally perform image marking by combining ray tracing with volumetric fog effects, and then perform elimination operations on the marked or processed images, so as to improve the efficiency of image recognition and judgment. However, prior-art volume fog elimination methods usually either remove the volume fog in the original image data entirely through an elimination filter algorithm or apply a uniform partial elimination to the whole image; they cannot eliminate useless volume fog in a targeted manner, which reduces the precision and quality of image processing.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiments of the application provide an image volume fog elimination method and device based on an Abbe convergence operator, which at least solve the technical problem that prior-art methods usually either remove the volume fog in original image data entirely through an elimination filter algorithm or apply a uniform partial elimination to the whole image, cannot eliminate useless volume fog in a targeted manner, and thus reduce the accuracy and quality of image processing.
According to one aspect of the embodiments of the present application, an image volume fog elimination method based on an Abbe convergence operator is provided, including: acquiring original image information, wherein the original image information comprises original image data and volume fog data; segmenting the original image data in the original image information and weighting it correspondingly with the volume fog data to obtain volume fog data to be processed; performing Abbe convergence calculation on all image elements in the volume fog data to be processed and comparing the calculation result with a preset threshold value to obtain edge volume fog data; and performing elimination processing on the edge volume fog.
Optionally, after acquiring the original image information, the method further includes: performing matrix matching on the volume fog data and the original image data according to preset demand parameters.
Optionally, the segmenting of the original image data in the original image information and the corresponding weighting with the volume fog data to obtain the volume fog data to be processed includes: segmenting the original image data to a preset density to obtain segmented image data; inputting the segmented image data into a matching matrix to obtain a volume fog weighted value, wherein the matching matrix is composed of two-dimensional local image elements and volume fog parameter elements; and fusing the volume fog weighted value with the segmented image to obtain the volume fog data to be processed.
Optionally, the performing of the Abbe convergence calculation on all the image elements in the volume fog data to be processed and the comparing of the calculation result with a preset threshold value to obtain edge volume fog data includes: inputting the elements in the volume fog data to be processed one by one into an Abbe convergence calculation equation to obtain the calculation result, where the Abbe convergence calculation takes the form
F = ∫ σ·(Pl·[x, y] + Lim(n))
where F is the calculation result of the Abbe convergence function, σ is the Abbe convergence factor, Pl is the volume fog data of each pixel region, x and y are the pixel position data, and n is the convergence degree, a natural number with n = 1 to 5; and comparing the calculation result with the preset threshold value, taking all data greater than the preset threshold value, and generating the edge volume fog data.
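Read end to end, the disclosed steps could be sketched roughly as follows. The patent gives no concrete algorithm, so every modelling choice here is an assumption: square tiles as the "preset density", a block-mean fog weight, the integral in F = ∫ σ·(Pl·[x, y] + Lim(n)) approximated by a sum over n = 1..5 with Lim(n) read as 1/n, and an additive fog model for the final elimination.

```python
import numpy as np

def volume_fog_pipeline(image, fog, tile=2, sigma=0.1, threshold=1.0):
    """Illustrative pipeline sketch: segment + weight the fog data,
    score each pixel with an Abbe-convergence-style sum, threshold the
    scores to get edge volume fog, and remove fog only there."""
    h, w = image.shape
    pending = np.empty((h, w))
    for i in range(0, h, tile):                # segmentation to preset density
        for j in range(0, w, tile):
            weight = fog[i:i + tile, j:j + tile].mean()   # fog weighted value
            pending[i:i + tile, j:j + tile] = weight * image[i:i + tile, j:j + tile]
    ys, xs = np.mgrid[0:h, 0:w]
    # Discrete stand-in for F = integral of sigma*(Pl*[x,y] + Lim(n)), n = 1..5.
    F = sum(sigma * (pending * (xs + ys) + 1.0 / n) for n in range(1, 6))
    edge = F > threshold                       # edge volume fog data
    cleaned = image.astype(float).copy()
    cleaned[edge] -= fog[edge]                 # targeted elimination only at edges
    return np.clip(cleaned, 0.0, None)

result = volume_fog_pipeline(np.ones((4, 4)), np.full((4, 4), 0.5))
```

Note that fog outside the thresholded edge region is deliberately left in place, which is the "targeted" behaviour the claims emphasize.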
According to another aspect of the embodiments of the present application, an image volume fog elimination apparatus based on an Abbe convergence operator is also provided, including: an acquisition module for acquiring original image information, wherein the original image information comprises original image data and volume fog data; a segmentation module for segmenting the original image data in the original image information and weighting it correspondingly with the volume fog data to obtain volume fog data to be processed; a calculation module for performing Abbe convergence calculation on all image elements in the volume fog data to be processed and comparing the calculation result with a preset threshold value to obtain edge volume fog data; and an elimination module for performing elimination processing on the edge volume fog.
Optionally, the apparatus further includes: a matching module for performing matrix matching on the volume fog data and the original image data according to preset demand parameters.
Optionally, the segmentation module includes: a segmentation unit for segmenting the original image data to a preset density to obtain segmented image data; an input unit for inputting the segmented image data into a matching matrix to obtain a volume fog weighted value, wherein the matching matrix is composed of two-dimensional local image elements and volume fog parameter elements; and a fusion unit for fusing the volume fog weighted value with the segmented image to obtain the volume fog data to be processed.
Optionally, the calculation module includes: an input unit, further configured to input the elements in the volume fog data to be processed one by one into an Abbe convergence calculation equation to obtain the calculation result, where the Abbe convergence calculation takes the form
F = ∫ σ·(Pl·[x, y] + Lim(n))
where F is the calculation result of the Abbe convergence function, σ is the Abbe convergence factor, Pl is the volume fog data of each pixel region, x and y are the pixel position data, and n is the convergence degree, a natural number with n = 1 to 5; and a comparison unit for comparing the calculation result with the preset threshold value, taking all data greater than the preset threshold value, and generating the edge volume fog data.
According to another aspect of the embodiments of the present application, a nonvolatile storage medium is also provided; the nonvolatile storage medium includes a stored program which, when run, controls a device in which the nonvolatile storage medium is located to execute an image volume fog elimination method based on an Abbe convergence operator.
According to another aspect of the embodiments of the present application, an electronic device is also provided, including a processor and a memory; the memory stores computer readable instructions, and the processor is configured to run the computer readable instructions, which, when executed, perform an image volume fog elimination method based on an Abbe convergence operator.
In the embodiments of the application, original image information is acquired, wherein the original image information comprises original image data and volume fog data; the original image data in the original image information is segmented and weighted correspondingly with the volume fog data to obtain volume fog data to be processed; Abbe convergence calculation is performed on all image elements in the volume fog data to be processed, and the calculation result is compared with a preset threshold value to obtain edge volume fog data; and elimination processing is performed on the edge volume fog. This solves the technical problem that prior-art methods usually either remove the volume fog in the original image data entirely through an elimination filter algorithm or apply a uniform partial elimination to the whole image, cannot eliminate useless volume fog in a targeted manner, and thus reduce the accuracy and quality of image processing.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of an image volume fog elimination method based on an Abbe convergence operator according to an embodiment of the present application;
FIG. 2 is a block diagram of an image volume fog elimination apparatus based on an Abbe convergence operator according to an embodiment of the present application;
FIG. 3 is a block diagram of a terminal device for performing the method according to the application, according to an embodiment of the application;
FIG. 4 is a block diagram of a memory unit for holding or carrying program code implementing a method according to the application, according to an embodiment of the application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present application, a method embodiment of an image volume fog elimination method based on an Abbe convergence operator is provided. It should be noted that the steps illustrated in the flowchart of the figures may be performed in a computer system, such as one executing a set of computer executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
Example 1
Fig. 1 is a flow chart of an image volume fog elimination method based on an Abbe convergence operator according to an embodiment of the application; as shown in fig. 1, the method includes the following steps:
step S102, obtaining original image information, where the original image information includes: raw image data, volumetric fog data.
Specifically, in order to solve the technical problem that prior-art volume fog elimination methods usually either remove the volume fog in the original image data entirely through an elimination filter algorithm or apply a uniform partial elimination, cannot eliminate useless volume fog in a targeted manner, and reduce the precision and quality of image processing, the original image information must first be acquired: the original image data and the corresponding image volume fog data from a real-time-monitoring high-precision image pickup device are transmitted and stored for processing by a subsequent processor.
Optionally, after acquiring the original image information, the method further includes: performing matrix matching on the volume fog data and the original image data according to preset demand parameters.
Specifically, after the original image information is acquired, the method further includes: performing matrix matching on the volume fog data and the original image data according to preset demand parameters, wherein the matching matrix is used to generate correspondences between the original image data and the corresponding volume fog demand data, so that different volume fog processing modes can be adopted for different image data.
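The patent does not disclose a concrete form for this matching matrix, so the following is only an illustrative sketch: the "preset demand parameter" is modelled as a single density threshold `demand`, and the matching matrix as a boolean array marking where the fog data meets it. Both choices are assumptions.

```python
import numpy as np

def match_fog_matrix(image, fog, demand=0.5):
    """Pair each pixel of the original image data with its volume fog
    value and mark the pixels whose fog density meets the preset
    demand parameter (a hypothetical thresholding rule)."""
    assert image.shape == fog.shape, "image and fog data must align"
    # Boolean matching matrix: True where fog processing is required.
    return fog >= demand

image = np.ones((4, 4))                          # stand-in original image data
fog = np.linspace(0.0, 1.0, 16).reshape(4, 4)    # stand-in volume fog data
matching = match_fog_matrix(image, fog, demand=0.5)
```

Different demand parameters would then select different subsets of pixels, which is one way to realize "different volume fog processing modes for different image data".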
Step S104, segmenting the original image data in the original image information and weighting it correspondingly with the volume fog data to obtain volume fog data to be processed.
Optionally, the segmenting of the original image data in the original image information and the corresponding weighting with the volume fog data to obtain the volume fog data to be processed includes: segmenting the original image data to a preset density to obtain segmented image data; inputting the segmented image data into a matching matrix to obtain a volume fog weighted value, wherein the matching matrix is composed of two-dimensional local image elements and volume fog parameter elements; and fusing the volume fog weighted value with the segmented image to obtain the volume fog data to be processed.
Specifically, in order to weight the volume fog data, after the original image information is obtained, the embodiment of the application segments the original image, splitting the different areas represented by different volume fog to obtain image data with different identifications, and then uses the matching matrix to fuse the volume fog weighted value with the related images, obtaining the whole of the volume fog image data, namely the volume fog data to be processed.
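A minimal sketch of this segmentation-and-weighting step, assuming square tiles as the "preset density" and a block-mean fog weight. The patent's actual matching matrix of two-dimensional local image elements and volume fog parameter elements is not disclosed, so the mean-based weight is purely illustrative.

```python
import numpy as np

def segment_and_weight(image, fog, tile=2):
    """Segment the image into tile x tile blocks, derive one volume fog
    weighted value per block, and fuse it with the segmented image data
    to form the volume fog data to be processed."""
    h, w = image.shape
    out = np.empty((h, w), dtype=float)
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            # Volume fog weighted value for this block: here simply the
            # mean local fog density (an illustrative assumption).
            weight = fog[i:i + tile, j:j + tile].mean()
            # Fusion of the weighted value with the segmented image data.
            out[i:i + tile, j:j + tile] = weight * image[i:i + tile, j:j + tile]
    return out

image = np.ones((4, 4))
fog = np.zeros((4, 4))
fog[2:, 2:] = 1.0                  # fog present only in one corner block
pending = segment_and_weight(image, fog, tile=2)
```

Blocks with no fog receive a zero weight and drop out of the pending data, so only genuinely fogged regions move on to the convergence calculation.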
Step S106, performing Abbe convergence calculation on all image elements in the volume fog data to be processed, and comparing the calculation result with a preset threshold value to obtain edge volume fog data.
Optionally, the performing of the Abbe convergence calculation on all the image elements in the volume fog data to be processed and the comparing of the calculation result with a preset threshold value to obtain edge volume fog data includes: inputting the elements in the volume fog data to be processed one by one into an Abbe convergence calculation equation to obtain the calculation result, where the Abbe convergence calculation takes the form
F = ∫ σ·(Pl·[x, y] + Lim(n))
where F is the calculation result of the Abbe convergence function, σ is the Abbe convergence factor, Pl is the volume fog data of each pixel region, x and y are the pixel position data, and n is the convergence degree, a natural number with n = 1 to 5; and comparing the calculation result with the preset threshold value, taking all data greater than the preset threshold value, and generating the edge volume fog data.
Specifically, after the volume fog data to be processed is obtained, the method compares the calculation result of the Abbe convergence calculation with the preset threshold to obtain the volume fog regions meeting the preset threshold; these regions are the edge volume fog data on which volume fog elimination is to be performed.
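The equation F = ∫ σ·(Pl·[x, y] + Lim(n)) is only loosely specified; one possible discrete reading, with the integral taken as a sum over the convergence degrees n = 1..5, the position term [x, y] read as x + y, and Lim(n) modelled as 1/n, is sketched below. Every one of those modelling choices is an assumption, not taken from the patent.

```python
import numpy as np

def abbe_convergence(pending_fog, sigma=0.1, threshold=1.0):
    """Per-pixel stand-in for F = integral of sigma*(Pl*[x, y] + Lim(n)):
    Pl is the fog value of the pixel region and [x, y] its position.
    The integral is approximated by summing n = 1..5 with Lim(n) taken
    as 1/n; pixels whose F exceeds the preset threshold are reported
    as edge volume fog data."""
    h, w = pending_fog.shape
    ys, xs = np.mgrid[0:h, 0:w]
    F = np.zeros((h, w))
    for n in range(1, 6):                      # convergence degree n = 1..5
        F += sigma * (pending_fog * (xs + ys) + 1.0 / n)
    edge = F > threshold
    return F, edge

F, edge = abbe_convergence(np.ones((3, 3)), sigma=0.1, threshold=1.0)
```

With these toy settings, pixels far from the origin accumulate larger position-weighted scores and are flagged as edge volume fog, while near-origin pixels fall below the threshold.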
Step S108, performing elimination processing on the edge volume fog.
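A sketch of the targeted elimination itself, assuming a simple additive fog model (observed image = scene + fog), which the patent does not specify. The key property is that fog outside the edge mask is left untouched.

```python
import numpy as np

def eliminate_edge_fog(image, fog, edge_mask):
    """Subtract the fog contribution only at edge volume fog pixels,
    leaving useful fog elsewhere intact (the 'targeted' elimination;
    the additive fog model is an illustrative assumption)."""
    cleaned = image.astype(float).copy()
    cleaned[edge_mask] -= fog[edge_mask]
    return np.clip(cleaned, 0.0, None)   # keep pixel values non-negative

image = np.full((3, 3), 0.8)
fog = np.full((3, 3), 0.3)
edge = np.zeros((3, 3), dtype=bool)
edge[0] = True                           # top row flagged as edge volume fog
result = eliminate_edge_fog(image, fog, edge)
```

This is the step that distinguishes the claimed method from full or uniform partial elimination: only the thresholded edge regions lose their fog.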
Through this embodiment, the technical problem is solved that in the prior art the volume fog in the original image data is usually either completely eliminated through an elimination filter algorithm or uniformly partially eliminated, so that useless volume fog cannot be eliminated in a targeted manner and the accuracy and quality of image processing are reduced.
Example two
Fig. 2 is a block diagram of an image volume fog elimination apparatus based on an Abbe convergence operator according to an embodiment of the present application; as shown in fig. 2, the apparatus includes:
An acquisition module 20, configured to acquire original image information, wherein the original image information comprises original image data and volume fog data.
Specifically, in order to solve the technical problem that prior-art volume fog elimination methods usually either remove the volume fog in the original image data entirely through an elimination filter algorithm or apply a uniform partial elimination, cannot eliminate useless volume fog in a targeted manner, and reduce the precision and quality of image processing, the original image information must first be acquired: the original image data and the corresponding image volume fog data from a real-time-monitoring high-precision image pickup device are transmitted and stored for processing by a subsequent processor.
Optionally, the apparatus further includes: a matching module for performing matrix matching on the volume fog data and the original image data according to preset demand parameters.
Specifically, after the original image information is acquired, the method further includes: performing matrix matching on the volume fog data and the original image data according to preset demand parameters, wherein the matching matrix is used to generate correspondences between the original image data and the corresponding volume fog demand data, so that different volume fog processing modes can be adopted for different image data.
The segmentation module 22 is configured to segment the original image data in the original image information and weight it correspondingly with the volume fog data, so as to obtain volume fog data to be processed.
Optionally, the segmentation module includes: a segmentation unit for segmenting the original image data to a preset density to obtain segmented image data; an input unit for inputting the segmented image data into a matching matrix to obtain a volume fog weighted value, wherein the matching matrix is composed of two-dimensional local image elements and volume fog parameter elements; and a fusion unit for fusing the volume fog weighted value with the segmented image to obtain the volume fog data to be processed.
Specifically, in order to weight the volume fog data, after the original image information is obtained, the embodiment of the application segments the original image, splitting the different areas represented by different volume fog to obtain image data with different identifications, and then uses the matching matrix to fuse the volume fog weighted value with the related images, obtaining the whole of the volume fog image data, namely the volume fog data to be processed.
The calculation module 24 is configured to perform Abbe convergence calculation on all the image elements in the volume fog data to be processed and compare the calculation result with a preset threshold value to obtain edge volume fog data.
Optionally, the calculation module includes: an input unit, further configured to input the elements in the volume fog data to be processed one by one into an Abbe convergence calculation equation to obtain the calculation result, where the Abbe convergence calculation takes the form
F = ∫ σ·(Pl·[x, y] + Lim(n))
where F is the calculation result of the Abbe convergence function, σ is the Abbe convergence factor, Pl is the volume fog data of each pixel region, x and y are the pixel position data, and n is the convergence degree, a natural number with n = 1 to 5; and a comparison unit for comparing the calculation result with the preset threshold value, taking all data greater than the preset threshold value, and generating the edge volume fog data.
Specifically, after the volume fog data to be processed is obtained, the method compares the calculation result of the Abbe convergence calculation with the preset threshold to obtain the volume fog regions meeting the preset threshold; these regions are the edge volume fog data on which volume fog elimination is to be performed.
An elimination module 26, configured to perform elimination processing on the edge volume fog.
Through this embodiment, the technical problem is solved that in the prior art the volume fog in the original image data is usually either completely eliminated through an elimination filter algorithm or uniformly partially eliminated, so that useless volume fog cannot be eliminated in a targeted manner and the accuracy and quality of image processing are reduced.
According to another aspect of the embodiments of the present application, a nonvolatile storage medium is also provided; the nonvolatile storage medium includes a stored program which, when run, controls a device in which the nonvolatile storage medium is located to execute an image volume fog elimination method based on an Abbe convergence operator.
Specifically, the method comprises the following steps: acquiring original image information, wherein the original image information comprises original image data and volume fog data; segmenting the original image data in the original image information and weighting it correspondingly with the volume fog data to obtain volume fog data to be processed; performing Abbe convergence calculation on all image elements in the volume fog data to be processed and comparing the calculation result with a preset threshold value to obtain edge volume fog data; and performing elimination processing on the edge volume fog. Optionally, after acquiring the original image information, the method further includes: performing matrix matching on the volume fog data and the original image data according to preset demand parameters. Optionally, the segmenting of the original image data and the corresponding weighting with the volume fog data to obtain the volume fog data to be processed includes: segmenting the original image data to a preset density to obtain segmented image data; inputting the segmented image data into a matching matrix to obtain a volume fog weighted value, wherein the matching matrix is composed of two-dimensional local image elements and volume fog parameter elements; and fusing the volume fog weighted value with the segmented image to obtain the volume fog data to be processed. Optionally, the performing of the Abbe convergence calculation on all the image elements in the volume fog data to be processed and the comparing of the calculation result with a preset threshold value to obtain edge volume fog data includes: inputting the elements in the volume fog data to be processed one by one into an Abbe convergence calculation equation to obtain the calculation result, where the Abbe convergence calculation takes the form
F = ∫ σ·(Pl·[x, y] + Lim(n))
where F is the calculation result of the Abbe convergence function, σ is the Abbe convergence factor, Pl is the volume fog data of each pixel region, x and y are the pixel position data, and n is the convergence degree, a natural number with n = 1 to 5; and comparing the calculation result with the preset threshold value, taking all data greater than the preset threshold value, and generating the edge volume fog data.
According to another aspect of the embodiments of the present application, an electronic device is also provided, including a processor and a memory; the memory stores computer readable instructions, and the processor is configured to run the computer readable instructions, which, when executed, perform an image volume fog elimination method based on an Abbe convergence operator.
Specifically, the method comprises the following steps: acquiring original image information, wherein the original image information comprises original image data and volume fog data; segmenting the original image data in the original image information and weighting it correspondingly with the volume fog data to obtain volume fog data to be processed; performing Abbe convergence calculation on all image elements in the volume fog data to be processed and comparing the calculation result with a preset threshold value to obtain edge volume fog data; and performing elimination processing on the edge volume fog. Optionally, after acquiring the original image information, the method further includes: performing matrix matching on the volume fog data and the original image data according to preset demand parameters. Optionally, the segmenting of the original image data and the corresponding weighting with the volume fog data to obtain the volume fog data to be processed includes: segmenting the original image data to a preset density to obtain segmented image data; inputting the segmented image data into a matching matrix to obtain a volume fog weighted value, wherein the matching matrix is composed of two-dimensional local image elements and volume fog parameter elements; and fusing the volume fog weighted value with the segmented image to obtain the volume fog data to be processed. Optionally, the performing of the Abbe convergence calculation on all the image elements in the volume fog data to be processed and the comparing of the calculation result with a preset threshold value to obtain edge volume fog data includes: inputting the elements in the volume fog data to be processed one by one into an Abbe convergence calculation equation to obtain the calculation result, where the Abbe convergence calculation takes the form
F = ∫ σ·(Pl·[x, y] + Lim(n))
where F is the calculation result of the Abbe convergence function, σ is the Abbe convergence factor, Pl is the volume fog data of each pixel region, x and y are the pixel position data, and n is the convergence degree, a natural number with n = 1 to 5; and comparing the calculation result with the preset threshold value, taking all data greater than the preset threshold value, and generating the edge volume fog data.
The foregoing embodiment numbers of the present application are merely for description and do not imply that any embodiment is better or worse than another.
In the foregoing embodiments, each embodiment is described with its own emphasis; for any part not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The apparatus embodiments described above are merely exemplary. For example, the division into units is only a logical function division and may be implemented differently in practice: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed may be implemented through interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed across a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, fig. 3 is a schematic hardware structure of a terminal device according to an embodiment of the present application. As shown in fig. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33, and at least one communication bus 34. The communication bus 34 is used to enable communication connections between the elements. The memory 33 may include high-speed RAM and may further include non-volatile memory (NVM), such as at least one magnetic disk memory, in which various programs may be stored for performing the processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 31 may be implemented as, for example, a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 31 is coupled to the input device 30 and the output device 32 through wired or wireless connections.
Alternatively, the input device 30 may include a variety of input devices, for example at least one of a user-oriented user interface, a device-oriented device interface, a programmable software interface, a camera, and a sensor. Optionally, the device-oriented device interface may be a wired interface for data transmission between devices, or a hardware insertion interface (such as a USB interface, a serial port, etc.) for data transmission between devices. Optionally, the user-oriented user interface may be, for example, control keys, a voice input device for receiving voice input, or a touch-sensing device (e.g., a touch screen or touch pad with touch-sensing functionality) for receiving user touch input. Optionally, the programmable software interface may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip. Optionally, a transceiver with a communication function, such as a radio frequency transceiver chip, a baseband processing chip, or a transceiver antenna, may also serve as an input device, and an audio input device such as a microphone may receive voice data. The output device 32 may include a display, an audio output, or the like.
In this embodiment, the processor of the terminal device may include functions for executing each module of the data processing apparatus in each device, and specific functions and technical effects may be referred to the above embodiments and are not described herein again.
Fig. 4 is a schematic hardware structure of a terminal device according to another embodiment of the present application. Fig. 4 is a specific embodiment of the implementation of fig. 3. As shown in fig. 4, the terminal device of the present embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the methods of the above-described embodiments.
The memory 42 is configured to store various types of data to support operation at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, and video. The memory 42 may include random access memory (RAM) and may also include non-volatile memory, such as at least one disk memory.
Optionally, a processor 41 is provided in the processing assembly 40. The terminal device may further include: a communication component 43, a power supply component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The components and the like specifically included in the terminal device are set according to actual requirements, which are not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. The processing component 40 may include one or more processors 41 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 40 may include one or more modules that facilitate interactions between the processing component 40 and other components. For example, processing component 40 may include a multimedia module to facilitate interaction between multimedia component 45 and processing component 40.
The power supply assembly 44 provides power to the various components of the terminal device. Power supply components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for terminal devices.
The multimedia component 45 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a liquid crystal display (LCD) and a touch panel (TP). If the display screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with it.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a Microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a speech recognition mode. The received audio signals may be further stored in the memory 42 or transmitted via the communication component 43. In some embodiments, audio assembly 46 further includes a speaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing assembly 40 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: volume button, start button and lock button.
The sensor assembly 48 includes one or more sensors for providing status assessments of various aspects of the terminal device. For example, the sensor assembly 48 may detect the open/closed state of the terminal device, the relative positioning of components, and the presence or absence of user contact with the terminal device. The sensor assembly 48 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 48 may also include a camera or the like.
The communication component 43 is configured to facilitate communication between the terminal device and other devices in a wired or wireless manner. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot used to insert a SIM card, so that the terminal device may log into a GPRS network and establish communication with a server through the internet.
From the above, it will be appreciated that the communication component 43, the audio component 46, and the input/output interface 47, the sensor component 48 referred to in the embodiment of fig. 4 may be implemented as an input device in the embodiment of fig. 3.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, in whole, or in part, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the present application, and such modifications and adaptations are also intended to fall within the scope of the present application.
Claims (10)
1. An image volume fog removal method based on an Abbe convergence operator, comprising:
acquiring original image information, wherein the original image information comprises: raw image data, volume fog data;
dividing the original image data in the original image information, and correspondingly weighting the original image data with the volume fog data to obtain volume fog data to be processed;
carrying out Abbe convergence calculation on all image elements in the volume fog data to be processed, and comparing a calculation result with a preset threshold value to obtain edge volume fog data;
and carrying out elimination processing on the edge volume fog.
2. The method of claim 1, wherein after the acquiring the original image information, the method further comprises:
and according to preset demand parameters, performing matrix matching on the volume fog data and the original image data.
3. The method of claim 1, wherein the dividing the original image data in the original image information and correspondingly weighting the original image data with the volume fog data to obtain the volume fog data to be processed comprises:
dividing the original image data to a preset density to obtain divided image data;
inputting the segmented image data into a matching matrix to obtain a volume fog weighted value, wherein the matching matrix is composed of two-dimensional local image elements and volume fog parameter elements;
and fusing the volume fog weighted value and the segmented image to obtain volume fog data to be processed.
4. The method according to claim 1, wherein the performing Abbe convergence calculation on all the image elements in the volume fog data to be processed and comparing the calculation result with a preset threshold value to obtain edge volume fog data comprises:
inputting elements in the volume fog data to be processed into an Abbe convergence calculation equation one by one to obtain the calculation result, wherein the Abbe convergence calculation is as follows:
F = ∫ σ·(Pl·[x, y] + Lim(n))
wherein F is the calculation result of the Abbe function, σ is the Abbe convergence factor, Pl is the volume fog data of each pixel region, x and y are pixel position data, and n is the convergence degree, a natural number (n = 1 to 5);
and comparing the calculation result with the preset threshold value to obtain all data larger than the preset threshold value, and generating the edge volume fog data.
5. An image volume fog removing device based on an abbe convergence operator, comprising:
the device comprises an acquisition module, a storage module and a display module, wherein the acquisition module is used for acquiring original image information, and the original image information comprises: raw image data, volume fog data;
the segmentation module is used for segmenting the original image data in the original image information and correspondingly weighting the volume fog data to obtain volume fog data to be processed;
the computing module is used for carrying out Abbe convergence calculation on all image elements in the volume fog data to be processed, and comparing a calculation result with a preset threshold value to obtain edge volume fog data;
and the elimination module is used for eliminating the edge volume fog.
6. The apparatus of claim 5, wherein the apparatus further comprises:
and the matching module is used for carrying out matrix matching on the volume fog data and the original image data according to preset demand parameters.
7. The apparatus of claim 5, wherein the partitioning module comprises:
the segmentation unit is used for segmenting the original image data to a preset density to obtain segmented image data;
the input unit is used for inputting the segmented image data into a matching matrix to obtain a volume fog weighted value, wherein the matching matrix is composed of two-dimensional local image elements and volume fog parameter elements;
and the fusion unit is used for fusing the volume fog weighted value and the segmented image to obtain volume fog data to be processed.
8. The apparatus of claim 5, wherein the computing module comprises:
the input unit is further configured to input elements in the volume fog data to be processed into an Abbe convergence calculation equation one by one to obtain the calculation result, wherein the Abbe convergence calculation is as follows:
F = ∫ σ·(Pl·[x, y] + Lim(n))
wherein F is the calculation result of the Abbe function, σ is the Abbe convergence factor, Pl is the volume fog data of each pixel region, x and y are pixel position data, and n is the convergence degree, a natural number (n = 1 to 5);
and the comparison unit is used for comparing the calculation result with the preset threshold value to obtain all data larger than the preset threshold value and generating the edge volume fog data.
9. A non-volatile storage medium, characterized in that the non-volatile storage medium comprises a stored program, wherein the program, when run, controls a device in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory; the memory stores computer-readable instructions for execution by the processor, wherein the computer-readable instructions, when executed, perform the method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310303362.7A CN116664413B (en) | 2023-03-27 | 2023-03-27 | Image volume fog eliminating method and device based on Abbe convergence operator |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116664413A true CN116664413A (en) | 2023-08-29 |
CN116664413B CN116664413B (en) | 2024-02-02 |
Family
ID=87723074
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310303362.7A Active CN116664413B (en) | 2023-03-27 | 2023-03-27 | Image volume fog eliminating method and device based on Abbe convergence operator |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116664413B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120213436A1 (en) * | 2011-02-18 | 2012-08-23 | Hexagon Technology Center Gmbh | Fast Image Enhancement and Three-Dimensional Depth Calculation |
US20170161943A1 (en) * | 2015-12-08 | 2017-06-08 | City University Of Hong Kong | Apparatus for generating a display medium, a method of displaying a visible image and a display apparatus |
CN113614529A (en) * | 2019-03-26 | 2021-11-05 | 国立大学法人大阪大学 | Image analysis method, image analysis program, recording medium, image analysis device, and image analysis system |
CN114170359A (en) * | 2021-11-03 | 2022-03-11 | 完美世界(北京)软件科技发展有限公司 | Volume fog rendering method, device and equipment and storage medium |
CN114445533A (en) * | 2021-12-30 | 2022-05-06 | 网易(杭州)网络有限公司 | Volume fog generating method and device, electronic equipment and storage medium |
US20230005148A1 (en) * | 2019-12-05 | 2023-01-05 | Osaka University | Image analysis method, image analysis device, image analysis system, control program, and recording medium |
WO2023280080A1 (en) * | 2021-07-07 | 2023-01-12 | 同方威视技术股份有限公司 | Shadow elimination device and method, empty disk recognition device and method |
CN115631122A (en) * | 2022-11-07 | 2023-01-20 | 北京拙河科技有限公司 | Image optimization method and device for edge image algorithm |
Non-Patent Citations (3)
Title |
---|
LAM, M. L. et al.: "3D Fog Display using Parallel Linear Motion Platforms", Proceedings of the 2014 International Conference on Virtual Systems and Multimedia (VSMM), pages 234 - 237 *
REN Xiliang: "Research on a Real-Time Volume Fog Rendering Algorithm Based on Single Scattering", China Master's Theses Full-text Database, Information Science and Technology Series, no. 10, pages 138 - 1925 *
ZHAO Mingmin et al.: "Image Sharpening Method Based on Wave-Particle Denoising", Journal of Shaanxi University of Technology (Natural Science Edition), vol. 31, no. 06, pages 23 - 27 *
Also Published As
Publication number | Publication date |
---|---|
CN116664413B (en) | 2024-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115631122A (en) | Image optimization method and device for edge image algorithm | |
CN115375582A (en) | Moire digestion method and device based on low-order Taylor decomposition | |
CN115293985B (en) | Super-resolution noise reduction method and device for image optimization | |
CN116614453A (en) | Image transmission bandwidth selection method and device based on cloud interconnection | |
CN116664413B (en) | Image volume fog eliminating method and device based on Abbe convergence operator | |
CN115578290A (en) | Image refining method and device based on high-precision shooting matrix | |
CN116228593B (en) | Image perfecting method and device based on hierarchical antialiasing | |
CN116088580B (en) | Flying object tracking method and device | |
CN116468883B (en) | High-precision image data volume fog recognition method and device | |
CN115205313B (en) | Picture optimization method and device based on least square algorithm | |
CN116402935B (en) | Image synthesis method and device based on ray tracing algorithm | |
CN115546053B (en) | Method and device for eliminating diffuse reflection of graphics on snow in complex terrain | |
CN116389915B (en) | Method and device for reducing flicker of light field camera | |
CN116579964B (en) | Dynamic frame gradual-in gradual-out dynamic fusion method and device | |
CN115511735B (en) | Snow field gray scale picture optimization method and device | |
CN116579965B (en) | Multi-image fusion method and device | |
CN115187570B (en) | Singular traversal retrieval method and device based on DNN deep neural network | |
CN115984333B (en) | Smooth tracking method and device for airplane target | |
CN115914819B (en) | Picture capturing method and device based on orthogonal decomposition algorithm | |
CN115809006B (en) | Method and device for controlling manual instructions through picture | |
CN116030501B (en) | Method and device for extracting bird detection data | |
CN116797479B (en) | Image vertical distortion conversion method | |
CN116452481A (en) | Multi-angle combined shooting method and device | |
CN116385302A (en) | Dynamic blur elimination method and device for optical group camera | |
CN116468751A (en) | High-speed dynamic image detection method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |