CN115631122A - Image optimization method and device for edge image algorithm - Google Patents
- Publication number
- CN115631122A (application number CN202211383336.1A)
- Authority
- CN
- China
- Prior art keywords
- image data
- edge
- core
- edge calculation
- calculation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image optimization method and device for an edge image algorithm. The method comprises the following steps: acquiring original image data for edge calculation; segmenting the original image data into core image data and edge image data according to a high-order Taylor decomposition algorithm; performing binarization processing on the core image data and the edge image data and fusing them into an edge calculation image data stream; and inputting the edge calculation image data stream into an edge calculation model to obtain an edge calculation result. The invention solves a technical problem of prior-art edge-algorithm image data processing: only the acquired original image data is directly subjected to edge calculation, and the result of that calculation is used as the data source for all subsequent image processing. When images are captured by hundred-million-pixel-class camera equipment, whose matrix cameras provide acquisition functions such as wide-angle acquisition, macro acquisition and dynamic-tracking acquisition, such whole-image processing cannot perform edge calculation accurately and completely on the image data produced by those different functions.
Description
Technical Field
The invention relates to the field of image calculation processing, in particular to an image optimization method and device for an edge image algorithm.
Background
With the continuous development of intelligent science and technology, people increasingly use intelligent devices in daily life, work and study. Intelligent technology has improved the quality of people's lives and increased the efficiency of their study and work.
At present, in image acquisition and image-data edge calculation, the image data collected by a high-definition camera device is generally sent to the CPU and memory as one whole data stream, and edge calculation is performed on the entire collected image according to an edge-algorithm model, which facilitates further image analysis, image recognition and image processing. However, in prior-art edge-algorithm image data processing, only the acquired original image data is directly subjected to edge calculation, and the result of that calculation is used as the data source for subsequent image processing.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an image optimization method and device for an edge image algorithm, which at least solve the following technical problem: in prior-art edge-algorithm image data processing, only the acquired original image data is directly subjected to edge calculation, and the result of that calculation is used as the data source for subsequent image processing; when shooting with hundred-million-pixel-class camera equipment providing matrix acquisition functions such as wide-angle acquisition, macro acquisition and dynamic-tracking acquisition, edge calculation cannot be performed accurately and completely on the image data produced by those different functions.
According to an aspect of an embodiment of the present invention, there is provided an image optimization method for an edge image algorithm, including: acquiring original image data of edge calculation; segmenting the original image data into core image data and edge image data according to a high-order Taylor decomposition algorithm; performing binarization processing on the core image data and the edge image data, and fusing the core image data and the edge image data into an edge calculation image data stream; and inputting the edge calculation image data stream into an edge calculation model to obtain an edge calculation result.
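The four claimed steps can be sketched as follows. This is a minimal illustrative reading, not the patented implementation: the tanh-based split with a complementary weight for the edge part, the fixed binarization threshold of 128, and the element-wise maximum used as the fusion rule are all assumptions of this sketch, since the claim leaves those operations abstract.

```python
import numpy as np

def optimize_for_edge_calculation(raw: np.ndarray, c_t: float = 1.0) -> np.ndarray:
    """Sketch of the claimed pipeline: split, binarize both parts, fuse."""
    o = raw.astype(np.float64)
    w = np.tanh(c_t)                               # Tanh(C_t) splitting weight
    core, edge = o * w, o * (1.0 - w)              # assumed complementary split
    core_bin = np.where(core >= 128, 255, 0)       # binarize the core stream
    edge_bin = np.where(edge >= 128, 255, 0)       # binarize the edge stream
    return np.maximum(core_bin, edge_bin).astype(np.uint8)  # fuse into one stream
```

Every output pixel is 0 or 255, so the fused stream is itself a binary image that can be fed directly to the edge calculation model.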
Optionally, acquiring the original image data for edge calculation includes: collecting an edge calculation requirement and an edge calculation factor; and extracting the original image data from an image database according to the edge calculation requirement and the edge calculation factor.
Optionally, segmenting the original image data into core image data and edge image data according to the high-order Taylor decomposition algorithm includes: by the formulas

H = o · Tanh(C_t)

B = o · Tanh(C_t)

segmenting the original image data to obtain the core image data and the edge image data, where H is the core image data, B is the edge image data, o is the original image data, and Tanh(C_t) is the Taylor function with the splitting factor C_t.
Optionally, performing binarization processing on the core image data and the edge image data and fusing them into an edge calculation image data stream includes: performing binarization gray-level processing on the core image data to obtain core optimization data; performing binarization gray-level processing on the edge image data to obtain edge optimization data; and inputting the core optimization data and the edge optimization data into a fusion library to obtain the edge calculation image data stream.
According to another aspect of the embodiments of the present invention, there is also provided an image optimization apparatus for an edge image algorithm, including: the acquisition module is used for acquiring original image data of edge calculation; the segmentation module is used for segmenting the original image data into core image data and edge image data according to a high-order Taylor decomposition algorithm; the processing module is used for carrying out binarization processing on the core image data and the edge image data and fusing the core image data and the edge image data into an edge calculation image data stream; and the computing module is used for inputting the edge computing image data stream into an edge computing model to obtain an edge computing result.
Optionally, the obtaining module includes: the acquisition unit is used for acquiring edge calculation requirements and edge calculation factors; and the extracting unit is used for extracting the original image data from an image database according to the edge calculation requirement and the edge calculation factor.
Optionally, the segmentation module includes: a segmentation unit, configured to segment the original image data by the formulas

H = o · Tanh(C_t)

B = o · Tanh(C_t)

to obtain the core image data and the edge image data, where H is the core image data, B is the edge image data, o is the original image data, and Tanh(C_t) is the Taylor function with the splitting factor C_t.
Optionally, the processing module includes: the first processing unit is used for carrying out binarization gray level processing on the core image data to obtain core optimization data; the second processing unit is used for carrying out binarization gray level processing on the edge image data to obtain edge optimization data; and the fusion unit is used for inputting the core optimization data and the edge optimization data into a fusion library to obtain the edge calculation image data stream.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium including a stored program, wherein the program controls, when running, an apparatus in which the non-volatile storage medium is located to perform an image optimization method for an edge image algorithm.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions when executed perform an image optimization method for edge image algorithms.
In the embodiment of the invention, original image data for edge calculation is acquired; the original image data is segmented into core image data and edge image data according to a high-order Taylor decomposition algorithm; binarization processing is performed on the core image data and the edge image data, which are fused into an edge calculation image data stream; and the edge calculation image data stream is input into an edge calculation model to obtain an edge calculation result. This method solves the technical problem that, in prior-art edge-algorithm image data processing, only the acquired original image data is directly subjected to edge calculation, with the result used as the data source for subsequent image processing, so that, when shooting with hundred-million-pixel-class camera equipment providing matrix acquisition functions such as wide-angle acquisition, macro acquisition and dynamic-tracking acquisition, edge calculation cannot be performed accurately and completely on the image data produced by those different functions.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of an image optimization method for an edge image algorithm according to an embodiment of the present invention;
FIG. 2 is a block diagram of an image optimization apparatus for edge image algorithm according to an embodiment of the present invention;
fig. 3 is a block diagram of a terminal device for performing a method according to the present invention, according to an embodiment of the present invention;
fig. 4 is a memory unit for holding or carrying program code implementing a method according to the invention, according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of an image optimization method for an edge image algorithm, it is noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than here.
Example one
Fig. 1 is a flowchart of an image optimization method for an edge image algorithm according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step S102, acquiring original image data of edge calculation.
Specifically, the prior art directly performs edge calculation only on the acquired original image data and uses the result as the data source for subsequent image processing; when shooting with hundred-million-pixel-class camera equipment that provides matrix acquisition functions such as wide-angle acquisition, macro acquisition and dynamic-tracking acquisition, edge calculation cannot be performed accurately and completely on the image data produced by those different functions. To solve this problem, the embodiment of the present invention first acquires the original image data for edge calculation from a high-precision camera device, then stores and transmits the image for subsequent optimization; performing edge calculation on the optimized image improves both the calculation efficiency and the accuracy of image data processing.
Optionally, acquiring the original image data for edge calculation includes: collecting an edge calculation requirement and an edge calculation factor; and extracting the original image data from an image database according to the edge calculation requirement and the edge calculation factor.
Specifically, to extract the image data for edge calculation, the original image data must be extracted from an image database — the collection of all images captured by the high-precision camera device — according to the parameter content of the edge calculation. For example, acquiring the original image data for edge calculation includes: collecting an edge calculation requirement and an edge calculation factor; and extracting the original image data from the image database according to the edge calculation requirement and the edge calculation factor.
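As a sketch of this extraction step — with the image database modelled as a list of records, the edge calculation requirement as a required capture mode, and the edge calculation factor as a minimum pixel count, all of which are illustrative assumptions since the patent leaves these parameters abstract:

```python
from dataclasses import dataclass

@dataclass
class ImageRecord:
    image_id: str
    capture_mode: str   # e.g. "wide-angle", "macro", "tracking" (assumed labels)
    pixel_count: int

def extract_raw_images(image_database, requirement, factor):
    """Return the records matching the edge calculation requirement
    (capture mode) and the edge calculation factor (minimum size)."""
    return [rec for rec in image_database
            if rec.capture_mode == requirement and rec.pixel_count >= factor]
```

A real image database would be queried rather than filtered in memory; the list comprehension stands in for that lookup.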
And step S104, segmenting the original image data into core image data and edge image data according to a high-order Taylor decomposition algorithm.
Specifically, after the original image data is acquired, the core image and the edge image in the original image data need to be separated so that the edge calculation can be decomposed in a more targeted way; this ensures that, when the high-precision shooting matrix device is in operation, the image data collected by cameras with different functions can be accurately processed by the edge algorithm.
Optionally, segmenting the original image data into core image data and edge image data according to the high-order Taylor decomposition algorithm includes: by the formulas

H = o · Tanh(C_t)

B = o · Tanh(C_t)

segmenting the original image data to obtain the core image data and the edge image data, where H is the core image data, B is the edge image data, o is the original image data, and Tanh(C_t) is the Taylor function with the splitting factor C_t.
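Read literally, with Tanh as the hyperbolic tangent and C_t as a scalar splitting factor (both assumptions, since the specification leaves them abstract), the split can be computed per pixel; the complementary weight for B is likewise an assumption made in this sketch so that the two parts differ:

```python
import math

def split_pixel(o: float, c_t: float):
    """H = o * tanh(C_t); B is taken here as the complementary part."""
    w = math.tanh(c_t)
    return o * w, o * (1.0 - w)

h, b = split_pixel(200.0, 1.0)   # h ≈ 152.3 (core), b ≈ 47.7 (edge)
```

Under this reading, h + b always equals the original intensity, so the split itself discards no information.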
And step S106, performing binarization processing on the core image data and the edge image data, and fusing the core image data and the edge image data into an edge calculation image data stream.
Optionally, performing binarization processing on the core image data and the edge image data and fusing them into an edge calculation image data stream includes: performing binarization gray-level processing on the core image data to obtain core optimization data; performing binarization gray-level processing on the edge image data to obtain edge optimization data; and inputting the core optimization data and the edge optimization data into a fusion library to obtain the edge calculation image data stream.
Specifically, to optimize the core image data and the edge image data extracted in the embodiment of the present invention, binarization processing needs to be performed on each of them, so that images under different RGB conditions can be identified more accurately; a fusion algorithm then performs point-wise fusion of the processing results to obtain the optimized edge calculation image data stream, which facilitates subsequent edge calculation. For example, performing binarization processing on the core image data and the edge image data and fusing them into an edge calculation image data stream includes: performing binarization gray-level processing on the core image data to obtain core optimization data; performing binarization gray-level processing on the edge image data to obtain edge optimization data; and inputting the core optimization data and the edge optimization data into a fusion library to obtain the edge calculation image data stream.
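For illustration, the binarization and point-wise fusion of the two streams might look as follows; the threshold of 128 and bitwise OR as the fusion rule are assumptions standing in for the unspecified fusion library:

```python
def binarize(values, threshold=128):
    """Binarization gray-level processing: map each value to 0 or 255."""
    return [255 if v >= threshold else 0 for v in values]

def fuse(core_opt, edge_opt):
    """Point-wise fusion of the two binarized streams via bitwise OR."""
    return [c | e for c, e in zip(core_opt, edge_opt)]

stream = fuse(binarize([30, 200, 90]), binarize([180, 10, 90]))
# stream == [255, 255, 0]
```

A pixel survives into the fused stream if it is bright in either the core or the edge part, which is one simple way to keep both parts' features in a single data stream.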
In the binarization processing of an image, the gray value of each point on the image is set to 0 or 255, so the whole image presents an obvious black-and-white effect. That is, a suitable threshold is selected so that a gray-scale image with 256 brightness levels yields a binary image that still reflects the overall and local features of the image. In digital image processing, binary images occupy a very important position; in practice, many systems are built around binary image processing. To process and analyze a binary image, the gray-scale image is first binarized. The advantage is that, in further processing, the set properties of the image depend only on the positions of the points whose pixel value is 0 or 255, not on multi-level pixel values; this simplifies processing and reduces the amount of data to process and compress. To obtain an ideal binary image, non-overlapping regions are generally defined by closed, connected boundaries. All pixels whose gray level is greater than or equal to the threshold are judged to belong to the specific object and are represented with gray value 255; otherwise, the pixel is excluded from the object region and given gray value 0, representing the background or an exceptional object region. If a particular object has uniform gray levels inside and lies on a uniform background of a different gray level, thresholding yields a comparable segmentation effect. If the difference between the object and the background is not expressed in gray values (for example, a texture difference), that difference characteristic can first be converted into a gray-level difference, and the image can then be segmented using threshold-selection techniques.
By dynamically adjusting the threshold value, the binarization of the image is realized, and the concrete result of the image segmentation can be observed dynamically.
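One concrete way to adjust the threshold dynamically is the classical iterative (mean-of-means) selection; this particular scheme is an assumption of the sketch, as the text only states that the threshold is adjusted dynamically:

```python
def iterative_threshold(pixels, eps=0.5):
    """Repeatedly set the threshold to the midpoint of the mean gray
    levels on either side of it, until the threshold stabilizes."""
    t = sum(pixels) / len(pixels)
    while True:
        lo = [p for p in pixels if p < t]
        hi = [p for p in pixels if p >= t]
        new_t = 0.5 * ((sum(lo) / len(lo) if lo else t) +
                       (sum(hi) / len(hi) if hi else t))
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

pixels = [10, 12, 14, 200, 205, 210]
t = iterative_threshold(pixels)
binary = [255 if p >= t else 0 for p in pixels]
# binary == [0, 0, 0, 255, 255, 255]
```

On this bimodal sample the threshold settles between the two clusters, so the binary image cleanly separates object from background.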
And S108, inputting the edge calculation image data stream into an edge calculation model to obtain an edge calculation result.
Specifically, after the edge calculation image data stream is obtained, because this data has been optimized step by step, edge calculation on it achieves higher per-unit image precision and a better effect than on an image that has not undergone segmentation and optimization.
Through the above embodiment, the following technical problem is solved: in prior-art edge-algorithm image data processing, only the acquired original image data is directly subjected to edge calculation, with the result used as the data source for subsequent image processing, so that, when shooting with hundred-million-pixel-class camera equipment having matrix acquisition functions such as wide-angle acquisition, macro acquisition and dynamic-tracking acquisition, edge calculation cannot be performed accurately and completely on the image data produced by those different functions.
Example two
Fig. 2 is a block diagram of an image optimization apparatus for edge image algorithm according to an embodiment of the present invention, as shown in fig. 2, the apparatus includes:
and an obtaining module 20, configured to obtain original image data for edge calculation.
Specifically, the prior art directly performs edge calculation only on the acquired original image data and uses the result as the data source for subsequent image processing; when shooting with hundred-million-pixel-class camera equipment that provides matrix acquisition functions such as wide-angle acquisition, macro acquisition and dynamic-tracking acquisition, edge calculation cannot be performed accurately and completely on the image data produced by those different functions. To solve this problem, the embodiment of the present invention first acquires the original image data for edge calculation from a high-precision camera device, then stores and transmits the image for subsequent optimization; performing edge calculation on the optimized image improves both the calculation efficiency and the accuracy of image data processing.
Optionally, the obtaining module includes: the acquisition unit is used for acquiring edge calculation requirements and edge calculation factors; and the extracting unit is used for extracting the original image data from an image database according to the edge calculation requirement and the edge calculation factor.
Specifically, to extract the image data for edge calculation, the original image data must be extracted from an image database — the collection of all images captured by the high-precision camera device — according to the parameter content of the edge calculation. For example, acquiring the original image data for edge calculation includes: collecting an edge calculation requirement and an edge calculation factor; and extracting the original image data from the image database according to the edge calculation requirement and the edge calculation factor.
And the segmentation module 22 is configured to segment the original image data into core image data and edge image data according to a high-order taylor decomposition algorithm.
Specifically, after the original image data is acquired, the core image and the edge image in the original image data need to be separated so that the edge calculation can be decomposed in a more targeted way; this ensures that, when the high-precision shooting matrix device is in operation, the image data collected by cameras with different functions can be accurately processed by the edge algorithm.
Optionally, the segmentation module includes: a segmentation unit, configured to segment the original image data by the formulas

H = o · Tanh(C_t)

B = o · Tanh(C_t)

to obtain the core image data and the edge image data, where H is the core image data, B is the edge image data, o is the original image data, and Tanh(C_t) is the Taylor function with the splitting factor C_t.
And the processing module 24 is configured to perform binarization processing on the core image data and the edge image data, and fuse the core image data and the edge image data into an edge calculation image data stream.
Optionally, the processing module includes: the first processing unit is used for carrying out binarization gray level processing on the core image data to obtain core optimization data; the second processing unit is used for carrying out binarization gray level processing on the edge image data to obtain edge optimization data; and the fusion unit is used for inputting the core optimization data and the edge optimization data into a fusion library to obtain the edge calculation image data stream.
Specifically, to optimize the core image data and the edge image data extracted in the embodiment of the present invention, binarization processing needs to be performed on each of them, so that images under different RGB conditions can be identified more accurately; a fusion algorithm then performs point-wise fusion of the processing results to obtain the optimized edge calculation image data stream, which facilitates subsequent edge calculation. For example, performing binarization processing on the core image data and the edge image data and fusing them into an edge calculation image data stream includes: performing binarization gray-level processing on the core image data to obtain core optimization data; performing binarization gray-level processing on the edge image data to obtain edge optimization data; and inputting the core optimization data and the edge optimization data into a fusion library to obtain the edge calculation image data stream.
In the binarization processing of an image, the gray value of each point on the image is set to 0 or 255, so the whole image presents an obvious black-and-white effect. That is, a suitable threshold is selected so that a gray-scale image with 256 brightness levels yields a binary image that still reflects the overall and local features of the image. In digital image processing, binary images occupy a very important position; in practice, many systems are built around binary image processing. To process and analyze a binary image, the gray-scale image is first binarized. The advantage is that, in further processing, the set properties of the image depend only on the positions of the points whose pixel value is 0 or 255, not on multi-level pixel values; this simplifies processing and reduces the amount of data to process and compress. To obtain an ideal binary image, non-overlapping regions are generally defined by closed, connected boundaries. All pixels whose gray level is greater than or equal to the threshold are judged to belong to the specific object and are represented with gray value 255; otherwise, the pixel is excluded from the object region and given gray value 0, representing the background or an exceptional object region. If a particular object has uniform gray levels inside and lies on a uniform background of a different gray level, thresholding yields a comparable segmentation effect. If the difference between the object and the background is not expressed in gray values (for example, different textures), that difference characteristic can first be converted into a gray-level difference, and the image can then be segmented using threshold-selection techniques.
By dynamically adjusting the threshold value, the binarization of the image is realized, and the concrete result of the image segmentation can be observed dynamically.
And the calculation module 26 is configured to input the edge calculation image data stream into an edge calculation model to obtain an edge calculation result.
Specifically, after the edge calculation image data stream is obtained, because this data has been optimized step by step, edge calculation on it achieves higher per-unit image precision and a better effect than on an image that has not undergone segmentation and optimization.
Through the above embodiment, the following technical problem is solved: in prior-art edge-algorithm image data processing, only the acquired original image data is directly subjected to edge calculation, with the result used as the data source for subsequent image processing, so that, when shooting with hundred-million-pixel-class camera equipment having matrix acquisition functions such as wide-angle acquisition, macro acquisition and dynamic-tracking acquisition, edge calculation cannot be performed accurately and completely on the image data produced by those different functions.
According to another aspect of the embodiments of the present invention, there is also provided a non-volatile storage medium including a stored program, wherein, when the program runs, it controls the apparatus in which the non-volatile storage medium is located to perform an image optimization method for an edge image algorithm.
Specifically, the method comprises the following steps: acquiring original image data for edge calculation; segmenting the original image data into core image data and edge image data according to a high-order Taylor decomposition algorithm; performing binarization processing on the core image data and the edge image data, and fusing them into an edge calculation image data stream; and inputting the edge calculation image data stream into an edge calculation model to obtain an edge calculation result. Optionally, obtaining the original image data for edge calculation includes: collecting edge calculation requirements and edge calculation factors; and extracting the original image data from an image database according to the edge calculation requirement and the edge calculation factor. Optionally, segmenting the original image data into core image data and edge image data according to the high-order Taylor decomposition algorithm includes: by the formula
H = o*Tanh(C_t)
B = o*Tanh(C_t)
segmenting the original image data to obtain the core image data and the edge image data, where H is the core image data, B is the edge image data, o is the original image data, and Tanh(C_t) is the Taylor function with C_t as the splitting factor. Optionally, the binarizing the core image data and the edge image data, and fusing the core image data and the edge image data into an edge calculation image data stream includes: performing binarization gray-level processing on the core image data to obtain core optimization data; performing binarization gray-level processing on the edge image data to obtain edge optimization data; and inputting the core optimization data and the edge optimization data into a fusion library to obtain the edge calculation image data stream.
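Read literally, the formula weights the original image pixel-wise by the Tanh of a splitting factor. The sketch below is a minimal interpretation, assuming C_t is a per-pixel splitting-factor map and taking the edge component as the residual so that the two parts sum back to the original — both assumptions, since the patent specifies neither how C_t is constructed nor how B differs from H:

```python
import numpy as np

def taylor_split(o, c_t):
    """Split raw image data o into core (H) and edge (B) components.

    Implements H = o * Tanh(C_t) literally; C_t is treated as a
    per-pixel splitting-factor map (hypothetical). The edge component
    is taken here as the residual o - H, so that H + B reconstructs o.
    """
    o = np.asarray(o, dtype=np.float64)
    weight = np.tanh(np.asarray(c_t, dtype=np.float64))
    core = o * weight   # H = o * Tanh(C_t)
    edge = o - core     # residual reading of B (assumption)
    return core, edge
```

Under this reading, a larger splitting factor pushes more of a pixel's energy into the core component, since Tanh saturates toward 1.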
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus, including a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions when executed perform a method of image optimization for edge image algorithms.
Specifically, the method comprises the following steps: acquiring original image data for edge calculation; segmenting the original image data into core image data and edge image data according to a high-order Taylor decomposition algorithm; performing binarization processing on the core image data and the edge image data, and fusing them into an edge calculation image data stream; and inputting the edge calculation image data stream into an edge calculation model to obtain an edge calculation result. Optionally, obtaining the original image data for edge calculation includes: collecting edge calculation requirements and edge calculation factors; and extracting the original image data from an image database according to the edge calculation requirement and the edge calculation factor. Optionally, segmenting the original image data into core image data and edge image data according to the high-order Taylor decomposition algorithm includes: by the formula
H = o*Tanh(C_t)
B = o*Tanh(C_t)
segmenting the original image data to obtain the core image data and the edge image data, where H is the core image data, B is the edge image data, o is the original image data, and Tanh(C_t) is the Taylor function with C_t as the splitting factor. Optionally, the binarizing the core image data and the edge image data, and fusing the core image data and the edge image data into an edge calculation image data stream includes: performing binarization gray-level processing on the core image data to obtain core optimization data; performing binarization gray-level processing on the edge image data to obtain edge optimization data; and inputting the core optimization data and the edge optimization data into a fusion library to obtain the edge calculation image data stream.
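The "fusion library" that combines the two binarized components is not specified in the patent; as a hypothetical stand-in, the sketch below fuses them with a pixel-wise maximum, so any pixel marked 255 in either component survives in the edge calculation image data stream:

```python
import numpy as np

def fuse_to_stream(core_opt, edge_opt):
    """Fuse binarized core and edge optimization data into one image.

    A pixel-wise maximum stands in for the patent's unspecified fusion
    library: a pixel is 255 in the output if it is 255 in either input,
    preserving both core and edge detail in the fused result.
    """
    core_opt = np.asarray(core_opt, dtype=np.uint8)
    edge_opt = np.asarray(edge_opt, dtype=np.uint8)
    return np.maximum(core_opt, edge_opt)
```

Because both inputs are already binary (0 or 255), the maximum is equivalent to a logical OR of the two object masks.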
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, fig. 3 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present application. As shown in fig. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33, and at least one communication bus 34. The communication bus 34 is used to realize communication connections between the elements. The memory 33 may comprise a high speed RAM memory, and may also include a non-volatile memory NVM, such as at least one disk memory, in which various programs may be stored for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 31 may be implemented by, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 31 is coupled to the input device 30 and the output device 32 through a wired or wireless connection.
Optionally, the input device 30 may include a variety of input devices, for example, at least one of a user-oriented user interface, a device-oriented device interface, a software programmable interface, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware plug-in interface (e.g., a USB interface, a serial port, etc.) for data transmission between devices; optionally, the user-facing user interface may be, for example, a user-facing control key, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen with a touch sensing function, a touch pad, etc.) for receiving user touch input; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, such as an input pin interface or an input interface of a chip; optionally, the transceiver may be a radio frequency transceiver chip with a communication function, a baseband processing chip, a transceiver antenna, and the like. An audio input device such as a microphone may receive voice data. The output device 32 may include a display, a sound, or other output device.
In this embodiment, the processor of the terminal device includes a module for executing the functions of the modules of the data processing apparatus in each device, and specific functions and technical effects may refer to the foregoing embodiments, which are not described herein again.
Fig. 4 is a schematic diagram of a hardware structure of a terminal device according to another embodiment of the present application. Fig. 4 is a specific embodiment of fig. 3 in an implementation process. As shown in fig. 4, the terminal device of the present embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the method in the above-described embodiment.
The memory 42 is configured to store various types of data to support operations at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, videos, and so forth. The memory 42 may comprise a Random Access Memory (RAM) and may further comprise a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, the processor 41 is provided in the processing assembly 40. The terminal device may further include: a communication component 43, a power component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The specific components included in the terminal device are set according to actual requirements, which is not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. Processing components 40 may include one or more processors 41 to execute instructions to perform all or a portion of the steps of the above-described method. Further, processing component 40 may include one or more modules that facilitate interaction between processing component 40 and other components. For example, the processing component 40 may include a multimedia module to facilitate interaction between the multimedia component 45 and the processing component 40.
The power supply component 44 provides power to the various components of the terminal device. The power components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal device.
The multimedia component 45 includes a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a Microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a voice recognition mode. The received audio signal may further be stored in the memory 42 or transmitted via the communication component 43. In some embodiments, audio assembly 46 also includes a speaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing component 40 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor assembly 48 includes one or more sensors for providing various aspects of status assessment for the terminal device. For example, the sensor assembly 48 may detect the open/closed status of the terminal device, the relative positioning of the components, the presence or absence of user contact with the terminal device. The sensor assembly 48 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 48 may also include a camera or the like.
The communication component 43 is configured to facilitate communication between the terminal device and other devices in a wired or wireless manner. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot for inserting a SIM card, so that the terminal device can log on to a GPRS network and establish communication with the server via the internet.
From the above, the communication component 43, the audio component 46, the input/output interface 47 and the sensor component 48 referred to in the embodiment of fig. 4 can be implemented as the input device in the embodiment of fig. 3.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications shall also fall within the protection scope of the present invention.
Claims (10)
1. An image optimization method for an edge image algorithm, comprising:
acquiring original image data of edge calculation;
segmenting the original image data into core image data and edge image data according to a high-order Taylor decomposition algorithm;
performing binarization processing on the core image data and the edge image data, and fusing the core image data and the edge image data into an edge calculation image data stream;
and inputting the edge calculation image data stream into an edge calculation model to obtain an edge calculation result.
2. The method of claim 1, wherein the obtaining edge computed raw image data comprises:
collecting edge calculation requirements and edge calculation factors;
and extracting the original image data from an image database according to the edge calculation requirement and the edge calculation factor.
3. The method of claim 1, wherein the segmenting the raw image data into core image data and edge image data according to a higher order taylor decomposition algorithm comprises:
by the formula
H = o*Tanh(C_t)
B = o*Tanh(C_t)
segmenting the original image data to obtain the core image data and the edge image data, wherein H is the core image data, B is the edge image data, o is the original image data, and Tanh(C_t) is the Taylor function with C_t as the splitting factor.
4. The method according to claim 1, wherein the binarizing the core image data and the edge image data and fusing them into an edge calculation image data stream comprises:
performing binarization gray level processing on the core image data to obtain core optimization data;
carrying out binarization gray level processing on the edge image data to obtain edge optimization data;
and inputting the core optimization data and the edge optimization data into a fusion library to obtain the edge calculation image data flow.
5. An image optimization apparatus for an edge image algorithm, comprising:
the acquisition module is used for acquiring original image data of edge calculation;
the segmentation module is used for segmenting the original image data into core image data and edge image data according to a high-order Taylor decomposition algorithm;
the processing module is used for carrying out binarization processing on the core image data and the edge image data and fusing the core image data and the edge image data into an edge calculation image data stream;
and the computing module is used for inputting the edge computing image data stream into an edge computing model to obtain an edge computing result.
6. The apparatus of claim 5, wherein the obtaining module comprises:
the acquisition unit is used for acquiring edge calculation requirements and edge calculation factors;
and the extracting unit is used for extracting the original image data from an image database according to the edge calculation requirement and the edge calculation factor.
7. The apparatus of claim 5, wherein the slicing module comprises:
a slicing unit for passing through a formula
H = o*Tanh(C_t)
B = o*Tanh(C_t)
segmenting the original image data to obtain the core image data and the edge image data, wherein H is the core image data, B is the edge image data, o is the original image data, and Tanh(C_t) is the Taylor function with C_t as the splitting factor.
8. The apparatus of claim 5, wherein the processing module comprises:
the first processing unit is used for carrying out binarization gray level processing on the core image data to obtain core optimization data;
the second processing unit is used for carrying out binarization gray level processing on the edge image data to obtain edge optimization data;
and the fusion unit is used for inputting the core optimization data and the edge optimization data into a fusion library to obtain the edge calculation image data stream.
9. A non-volatile storage medium, comprising a stored program, wherein the program, when executed, controls an apparatus in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions when executed perform the method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211383336.1A CN115631122A (en) | 2022-11-07 | 2022-11-07 | Image optimization method and device for edge image algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115631122A true CN115631122A (en) | 2023-01-20 |
Family
ID=84908838
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211383336.1A Pending CN115631122A (en) | 2022-11-07 | 2022-11-07 | Image optimization method and device for edge image algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115631122A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116363006A (en) * | 2023-03-28 | 2023-06-30 | 北京拙河科技有限公司 | Image calibration method and device based on normal algorithm |
CN116664413A (en) * | 2023-03-27 | 2023-08-29 | 北京拙河科技有限公司 | Image volume fog eliminating method and device based on Abbe convergence operator |
CN116883255A (en) * | 2023-05-22 | 2023-10-13 | 北京拙河科技有限公司 | Boundary correction method and device for high-precision light field image |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6430430B1 (en) * | 1999-04-29 | 2002-08-06 | University Of South Florida | Method and system for knowledge guided hyperintensity detection and volumetric measurement |
CN110458856A (en) * | 2019-07-22 | 2019-11-15 | 桂林航天工业学院 | A kind of crater image edge detection method, device and storage medium |
CN111161176A (en) * | 2019-12-24 | 2020-05-15 | RealMe重庆移动通信有限公司 | Image processing method and device, storage medium and electronic equipment |
CN111242879A (en) * | 2020-01-17 | 2020-06-05 | 郑州大学 | Image processing method, image processing apparatus, electronic device, and medium |
CN114581668A (en) * | 2022-03-08 | 2022-06-03 | 乐普(北京)医疗器械股份有限公司 | Segmentation model construction and contour recognition method and device and computer equipment |
Non-Patent Citations (1)
Title |
---|
Yin Zhe et al.: "Medical Image Fusion Based on Maclaurin Expansion and PCNN", Microelectronics & Computer * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115631122A (en) | Image optimization method and device for edge image algorithm | |
CN115426525B (en) | High-speed dynamic frame linkage image splitting method and device | |
CN115170818A (en) | Dynamic frame image feature extraction method and device | |
CN116614453B (en) | Image transmission bandwidth selection method and device based on cloud interconnection | |
CN115293985B (en) | Super-resolution noise reduction method and device for image optimization | |
CN115474091A (en) | Motion capture method and device based on decomposition metagraph | |
CN115527045A (en) | Image identification method and device for snow field danger identification | |
CN115484408A (en) | Snow surface reflection coefficient generation method and device based on high-precision camera shooting | |
CN116228593B (en) | Image perfecting method and device based on hierarchical antialiasing | |
CN116664413B (en) | Image volume fog eliminating method and device based on Abbe convergence operator | |
CN116468883B (en) | High-precision image data volume fog recognition method and device | |
CN115511735B (en) | Snow field gray scale picture optimization method and device | |
CN116402935B (en) | Image synthesis method and device based on ray tracing algorithm | |
CN115205313B (en) | Picture optimization method and device based on least square algorithm | |
CN116579965B (en) | Multi-image fusion method and device | |
CN116468751A (en) | High-speed dynamic image detection method and device | |
CN116579964B (en) | Dynamic frame gradual-in gradual-out dynamic fusion method and device | |
CN115035467A (en) | Binary pixel cascade scenic spot monitoring method and device | |
CN115546053B (en) | Method and device for eliminating diffuse reflection of graphics on snow in complex terrain | |
CN115187570B (en) | Singular traversal retrieval method and device based on DNN deep neural network | |
CN115460389B (en) | Image white balance area optimization method and device | |
CN116723419B (en) | Acquisition speed optimization method and device for billion-level high-precision camera | |
CN116363006B (en) | Image calibration method and device based on normal algorithm | |
CN115984333A (en) | Smooth tracking method and device for airplane target | |
CN115914819A (en) | Image capturing method and device based on orthogonal decomposition algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||