CN116883255A - Boundary correction method and device for high-precision light field image

Boundary correction method and device for high-precision light field image

Info

Publication number
CN116883255A
CN116883255A (application CN202310577489.8A)
Authority
CN
China
Prior art keywords
image data
boundary
original image
correction
marking
Prior art date
Legal status
Granted
Application number
CN202310577489.8A
Other languages
Chinese (zh)
Other versions
CN116883255B (en)
Inventor
袁潮
邓迪旻
温建伟
Current Assignee
Beijing Zhuohe Technology Co Ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN202310577489.8A
Publication of CN116883255A
Application granted
Publication of CN116883255B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration using local operators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 - Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10052 - Images from lightfield camera
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a boundary correction method and device for a high-precision light field image. The method comprises the following steps: collecting original image data; performing noise reduction processing on the original image data to obtain optimized image data; marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data; and performing a correction operation on the original image data according to the boundary marking data, and fusing the operation result with the original image data to obtain a correction result. The application addresses the technical problem in the prior art that, because boundary pixels of image data are affected by the close-up lens, visual distortion and pixel inaccuracy easily occur, so that image processing tasks requiring accurate boundary pixel information need methods for boundary correction and fine processing.

Description

Boundary correction method and device for high-precision light field image
Technical Field
The application relates to the field of high-precision image processing, in particular to a boundary correction method and device for a high-precision light field image.
Background
With the continuous development of intelligent science and technology, intelligent devices are increasingly used in people's daily life, work and study; the use of intelligent technical means improves quality of life and increases learning and working efficiency.
At present, a light field camera can acquire the depth and angle information of a scene simultaneously, and further image analysis can then be performed on the acquired information. However, in the prior art, because boundary pixels of image data are affected by the close-up lens, visual distortion and pixel inaccuracy easily occur, so that image processing tasks requiring accurate boundary pixel information need methods for boundary correction and fine processing.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the application provides a boundary correction method and a boundary correction device for a high-precision light field image, which at least solve the technical problem in the prior art that, because boundary pixels of image data are affected by the close-up lens, visual distortion and pixel inaccuracy easily occur, so that image processing tasks requiring accurate boundary pixel information need methods for boundary correction and fine processing.
According to one aspect of the embodiment of the present application, a boundary correction method for a high-precision light field image is provided, including: collecting original image data; performing noise reduction processing on the original image data to obtain optimized image data; marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data; and performing a correction operation on the original image data according to the boundary marking data, and fusing the operation result with the original image data to obtain a correction result.
Optionally, marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data includes: obtaining three-dimensional gradient parameters of the Canny algorithm, wherein the three-dimensional gradient parameters comprise a horizontal parameter, a vertical parameter and a diagonal parameter; and marking the optimized image data according to a preset hysteresis threshold and the three-dimensional gradient parameters to obtain the boundary marking data.
Optionally, the boundary marking data includes a plurality of binary points, where the binary points include edge points and normal points.
Optionally, performing the correction operation on the original image data according to the boundary marking data and fusing the operation result with the original image data to obtain the correction result includes: performing interpolation calculation on pixel values belonging to the boundary range according to the boundary marking data to obtain a calculation result; performing pixel updating on the original image data according to the calculation result to obtain pixel values to be corrected; determining a correction area according to the pixel values to be corrected and the original image data, and performing the correction operation on the correction area to generate the operation result; and fusing the operation result to the corresponding correction area position in the original image data.
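For illustration only, the four steps above can be strung together as in the following minimal Python sketch using OpenCV and NumPy; the kernel size, hysteresis thresholds and input file name are assumed values, and the function name boundary_correct is hypothetical rather than part of the claimed method.

    import cv2
    import numpy as np

    def boundary_correct(raw: np.ndarray) -> np.ndarray:
        # Step 1: the raw light field image data is assumed to be collected already.
        # Step 2: noise reduction to obtain the optimized image data (neighborhood averaging).
        optimized = cv2.blur(raw, (3, 3))
        # Step 3: mark boundaries with the Canny algorithm (assumed hysteresis thresholds).
        gray = cv2.cvtColor(optimized, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)  # binary marking data: edge points vs normal points
        # Step 4: correct the marked boundary pixels by interpolating from their
        # neighborhood and fuse the corrected values back into the original image.
        interpolated = cv2.blur(raw, (3, 3))
        corrected = raw.copy()
        corrected[edges > 0] = interpolated[edges > 0]
        return corrected

    result = boundary_correct(cv2.imread("lightfield_frame.png"))  # hypothetical input file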
According to another aspect of the embodiment of the present application, there is also provided a boundary correction apparatus for a high-precision light field image, including: the acquisition module is used for acquiring original image data; the noise reduction module is used for carrying out noise reduction processing on the original image data to obtain optimized image data; the marking module is used for marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data; and the correction module is used for carrying out correction operation on the original image data according to the boundary mark data, and fusing an operation result with the original image data to obtain a correction result.
Optionally, the marking module includes: the acquisition unit is used for acquiring three-dimensional gradient parameters of a Canny algorithm, wherein the three-dimensional gradient parameters comprise: horizontal parameters, vertical parameters, diagonal parameters; and the marking unit is used for marking the optimized image data according to a preset hysteresis threshold and the three-dimensional gradient parameter to obtain the boundary marking data.
Optionally, the boundary marking data includes a plurality of binary points, where the binary points include edge points and normal points.
Optionally, the correction module includes: the calculating unit is used for carrying out interpolation calculation on pixel values belonging to the boundary range according to the boundary marking data to obtain a calculation result; the updating unit is used for carrying out pixel updating on the original image data according to the calculation result to obtain a pixel value to be corrected; the correction unit is used for determining a correction area according to the pixel value to be corrected and the original image data, carrying out correction operation on the correction area and generating the operation result; and the fusion unit is used for fusing the operation result to the corresponding correction area position in the original image data.
According to another aspect of the embodiment of the present application, there is further provided a nonvolatile storage medium, where the nonvolatile storage medium includes a stored program, and when the program runs, the program controls a device in which the nonvolatile storage medium is located to execute a boundary correction method of a high-precision light field image.
According to another aspect of the embodiment of the present application, an electronic device is also provided, including a processor and a memory; the memory stores computer readable instructions, and the processor is configured to execute the computer readable instructions, wherein the computer readable instructions, when executed, perform a boundary correction method for a high-precision light field image.
In the embodiment of the application, original image data is collected; noise reduction processing is performed on the original image data to obtain optimized image data; the boundary of the optimized image data is marked through a Canny algorithm to obtain boundary marking data; and a correction operation is performed on the original image data according to the boundary marking data, and the operation result is fused with the original image data to obtain a correction result. This solves the prior-art problem that boundary pixels of image data are affected by the close-up lens and are therefore prone to visual distortion and pixel inaccuracy, and thereby addresses the technical problem that image processing tasks requiring accurate boundary pixel information need methods for boundary correction and fine processing.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a flow chart of a method for boundary correction of a high-precision light field image according to an embodiment of the application;
FIG. 2 is a block diagram of a boundary correction apparatus for high-precision light field images according to an embodiment of the present application;
FIG. 3 is a block diagram of a terminal device for performing the method according to the application, according to an embodiment of the application;
FIG. 4 shows a memory unit for holding or carrying program code for implementing a method according to the application, according to an embodiment of the application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, and not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without making any inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an embodiment of the present application, there is provided a method embodiment of a boundary correction method for a high-precision light field image, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different from that herein.
Example 1
Fig. 1 is a flowchart of a boundary correction method of a high-precision light field image according to an embodiment of the present application, as shown in fig. 1, the method includes the steps of:
step S102, collecting original image data.
Specifically, in order to solve the prior-art problem that boundary pixels of image data are affected by the close-up lens and are therefore prone to visual distortion and pixel inaccuracy, so that image processing tasks requiring accurate boundary pixel information need methods for boundary correction and fine processing, the embodiment of the present application first needs to acquire the original image data through a light field camera or an array camera deployed in the application scene, and to store and transmit the original image data.
Step S104, performing noise reduction processing on the original image data to obtain optimized image data.
Specifically, in order to optimize the original image data so that subsequent image processing is more accurate and efficient, noise reduction processing needs to be performed on the original image data: noise points in the original image data are located, and a noise reduction algorithm is applied to them to obtain the optimized image data. For example, an averaging filter based on the neighborhood averaging method is well suited for removing particle noise from scanned images. Neighborhood averaging strongly suppresses noise, but the averaging also introduces blurring, and the degree of blurring is proportional to the neighborhood radius.
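A minimal sketch of the neighborhood averaging described above, assuming OpenCV and NumPy; the kernel radius is an assumed parameter, and a larger radius suppresses more noise at the cost of more blur.

    import numpy as np
    import cv2

    def denoise_mean(raw: np.ndarray, radius: int = 1) -> np.ndarray:
        # Neighborhood averaging: each pixel becomes the mean of a
        # (2*radius+1) x (2*radius+1) window; blurring grows with the radius.
        k = 2 * radius + 1
        return cv2.blur(raw, (k, k))

    def denoise_mean_np(gray: np.ndarray, radius: int = 1) -> np.ndarray:
        # Same idea for a single-channel image without OpenCV: sum the shifted
        # windows explicitly and divide by the window area.
        k = 2 * radius + 1
        padded = np.pad(gray.astype(np.float32), radius, mode="edge")
        acc = np.zeros(gray.shape, dtype=np.float32)
        for dy in range(k):
            for dx in range(k):
                acc += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
        return (acc / (k * k)).astype(gray.dtype)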
Step S106, marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data.
Optionally, marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data includes: obtaining three-dimensional gradient parameters of the Canny algorithm, wherein the three-dimensional gradient parameters comprise a horizontal parameter, a vertical parameter and a diagonal parameter; and marking the optimized image data according to a preset hysteresis threshold and the three-dimensional gradient parameters to obtain the boundary marking data.
Specifically, in order to quickly locate boundary data in the optimized image data, an algorithm needs to be used to identify and mark the boundaries in the optimized image data. For example, three-dimensional gradient parameters of the Canny algorithm are obtained, where the three-dimensional gradient parameters include a horizontal parameter, a vertical parameter and a diagonal parameter, and the optimized image data is marked according to a preset hysteresis threshold and the three-dimensional gradient parameters to obtain the boundary marking data. The Canny algorithm begins with noise reduction: no edge detection algorithm works well on raw data, so the first step is to convolve the raw data with a Gaussian smoothing template, which yields an image slightly blurred relative to the original; on the Gaussian-smoothed image, the noise of a single pixel has almost no effect. The next step is to find gradients: edges in the image may point in different directions, so the Canny algorithm uses four masks to detect edges in the horizontal, vertical and diagonal directions, and the convolution of the original image with each mask is stored. For each point, the maximum response at that point and the direction of the resulting edge are recorded, which produces a luminance gradient magnitude map of the image together with the direction of the luminance gradient. The final step is edge tracking: a larger luminance gradient is more likely to correspond to an edge, but there is no single exact gradient value that separates edges from non-edges, so Canny uses hysteresis thresholding, which requires two thresholds, a high threshold and a low threshold. Assuming that the significant edges in the image are continuous curves, the blurred portion of a given curve can be tracked while noise pixels that do not form a curve are prevented from being marked as edges. The embodiment of the present application can start with the larger threshold to identify edges that are certain to be true edges and then, using the direction information obtained earlier, track the entire edge through the image starting from these true edges. During tracking, the smaller threshold is used, so that the blurred portion of a curve can be followed until it returns to the starting point. Once this is completed, a binary image is obtained in which each point indicates whether it is an edge point.
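As an illustration of the Gaussian smoothing and two-threshold hysteresis marking described above, the following sketch uses OpenCV's standard Canny implementation; the kernel size and the high/low thresholds are assumed values, and the library's internal gradient computation stands in for the three-dimensional gradient parameters (horizontal, vertical, diagonal) named in this application.

    import cv2
    import numpy as np

    def mark_boundaries(optimized: np.ndarray, low: int = 50, high: int = 150) -> np.ndarray:
        # Gaussian smoothing first, so that single-pixel noise barely affects the result.
        gray = cv2.cvtColor(optimized, cv2.COLOR_BGR2GRAY)
        smoothed = cv2.GaussianBlur(gray, (5, 5), 1.4)
        # Hysteresis thresholding: gradients above `high` are accepted as sure edges,
        # gradients between `low` and `high` are kept only if connected to a sure edge.
        edges = cv2.Canny(smoothed, low, high)
        # The result is a binary map: 255 marks edge points, 0 marks normal points.
        return edges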
Optionally, the boundary marking data includes a plurality of binary points, where the binary points include edge points and normal points.
Step S108, performing a correction operation on the original image data according to the boundary marking data, and fusing the operation result with the original image data to obtain a correction result.
Optionally, performing the correction operation on the original image data according to the boundary marking data and fusing the operation result with the original image data to obtain the correction result includes: performing interpolation calculation on pixel values belonging to the boundary range according to the boundary marking data to obtain a calculation result; performing pixel updating on the original image data according to the calculation result to obtain pixel values to be corrected; determining a correction area according to the pixel values to be corrected and the original image data, and performing the correction operation on the correction area to generate the operation result; and fusing the operation result to the corresponding correction area position in the original image data.
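A minimal sketch of these sub-steps, assuming OpenCV and NumPy: replacement values for pixels in the boundary range are interpolated from a neighborhood average, a correction area is derived from the marking data, and the result is fused back into the original image. The window radius and the helper name correct_and_fuse are illustrative assumptions rather than the claimed implementation.

    import numpy as np
    import cv2

    def correct_and_fuse(original: np.ndarray, edges: np.ndarray, radius: int = 2) -> np.ndarray:
        # edges: binary boundary marking data (255 = edge point, 0 = normal point).
        k = 2 * radius + 1
        # Interpolation calculation: estimate replacement values for pixels in the
        # boundary range from a neighborhood average of the original image.
        interpolated = cv2.blur(original, (k, k))
        # Correction area: the marked edge points plus a small dilation, so the
        # correction also reaches the pixels immediately around each boundary.
        correction_area = cv2.dilate(edges, np.ones((k, k), np.uint8)) > 0
        # Correction and fusion: write the interpolated values back into the
        # corresponding correction-area positions of the original image.
        fused = original.copy()
        fused[correction_area] = interpolated[correction_area]
        return fused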
It should be noted that many interpolation calculation modes can be used in the embodiment of the present application, for example the four algorithms of nearest-pixel interpolation, bilinear interpolation, bicubic interpolation and fractal interpolation. Among them, nearest-pixel interpolation is the simplest: when the picture is enlarged, a missing pixel is generated by directly copying the color of the original pixel closest to it, that is, the pixel beside it. For example, when the picture is enlarged, a new pixel X is created; of the two valid pixels A and B, X is closest to B, so X directly takes the value of B and produces the same effect as B. Although this algorithm is simple and therefore fast, the result often shows clearly visible jagged edges, so the effect is usually poor. Bilinear interpolation means that each pixel of the output image is computed from four pixels of the original image; because it operates on four original pixels, it largely eliminates jagging and gives good results. Bicubic interpolation is an improvement of bilinear interpolation in which each output pixel is computed from sixteen original pixels; it achieves good quality without being slow, so it is a very common algorithm and is widely used in image editing software, printer drivers and digital cameras. Fractal interpolation exploits infinite detail and self-similarity, so that however much the image is enlarged it still looks very similar to the original, and the resulting image is clearer and sharper than with the other algorithms, but the computation is considerably more complex.
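The first three interpolation modes mentioned above are available directly through OpenCV's resize routine, as the following short sketch shows; the input file name and scale factor are assumed, and fractal interpolation has no standard OpenCV counterpart, so it is omitted.

    import cv2

    img = cv2.imread("boundary_patch.png")  # hypothetical image patch
    h, w = img.shape[:2]
    scale = 4  # assumed enlargement factor

    # INTER_NEAREST copies the closest original pixel (fast, but jagged edges);
    # INTER_LINEAR blends 4 neighbouring pixels; INTER_CUBIC blends 16.
    nearest = cv2.resize(img, (w * scale, h * scale), interpolation=cv2.INTER_NEAREST)
    bilinear = cv2.resize(img, (w * scale, h * scale), interpolation=cv2.INTER_LINEAR)
    bicubic = cv2.resize(img, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)

    cv2.imwrite("nearest.png", nearest)
    cv2.imwrite("bilinear.png", bilinear)
    cv2.imwrite("bicubic.png", bicubic)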
Through the above embodiment, the prior-art problem that boundary pixels of image data are affected by the close-up lens and are therefore prone to visual distortion and pixel inaccuracy is solved, thereby addressing the technical problem that image processing tasks requiring accurate boundary pixel information need methods for boundary correction and fine processing.
Example two
Fig. 2 is a block diagram of a boundary correction apparatus for a high-precision light field image according to an embodiment of the present application, as shown in fig. 2, the apparatus includes:
and an acquisition module 20 for acquiring the original image data.
Specifically, in order to solve the prior-art problem that boundary pixels of image data are affected by the close-up lens and are therefore prone to visual distortion and pixel inaccuracy, so that image processing tasks requiring accurate boundary pixel information need methods for boundary correction and fine processing, the embodiment of the present application first needs to acquire the original image data through a light field camera or an array camera deployed in the application scene, and to store and transmit the original image data.
And the noise reduction module 22 is used for carrying out noise reduction processing on the original image data to obtain optimized image data.
Specifically, in order to optimize the original image data so that subsequent image processing is more accurate and efficient, noise reduction processing needs to be performed on the original image data: noise points in the original image data are located, and a noise reduction algorithm is applied to them to obtain the optimized image data. For example, an averaging filter based on the neighborhood averaging method is well suited for removing particle noise from scanned images. Neighborhood averaging strongly suppresses noise, but the averaging also introduces blurring, and the degree of blurring is proportional to the neighborhood radius.
And the marking module 24 is used for marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data.
Optionally, the marking module includes: the acquisition unit is used for acquiring three-dimensional gradient parameters of a Canny algorithm, wherein the three-dimensional gradient parameters comprise: horizontal parameters, vertical parameters, diagonal parameters; and the marking unit is used for marking the optimized image data according to a preset hysteresis threshold and the three-dimensional gradient parameter to obtain the boundary marking data.
Specifically, in order to quickly locate boundary data in the optimized image data, an algorithm needs to be used to identify and mark the boundaries in the optimized image data. For example, three-dimensional gradient parameters of the Canny algorithm are obtained, where the three-dimensional gradient parameters include a horizontal parameter, a vertical parameter and a diagonal parameter, and the optimized image data is marked according to a preset hysteresis threshold and the three-dimensional gradient parameters to obtain the boundary marking data. The Canny algorithm begins with noise reduction: no edge detection algorithm works well on raw data, so the first step is to convolve the raw data with a Gaussian smoothing template, which yields an image slightly blurred relative to the original; on the Gaussian-smoothed image, the noise of a single pixel has almost no effect. The next step is to find gradients: edges in the image may point in different directions, so the Canny algorithm uses four masks to detect edges in the horizontal, vertical and diagonal directions, and the convolution of the original image with each mask is stored. For each point, the maximum response at that point and the direction of the resulting edge are recorded, which produces a luminance gradient magnitude map of the image together with the direction of the luminance gradient. The final step is edge tracking: a larger luminance gradient is more likely to correspond to an edge, but there is no single exact gradient value that separates edges from non-edges, so Canny uses hysteresis thresholding, which requires two thresholds, a high threshold and a low threshold. Assuming that the significant edges in the image are continuous curves, the blurred portion of a given curve can be tracked while noise pixels that do not form a curve are prevented from being marked as edges. The embodiment of the present application can start with the larger threshold to identify edges that are certain to be true edges and then, using the direction information obtained earlier, track the entire edge through the image starting from these true edges. During tracking, the smaller threshold is used, so that the blurred portion of a curve can be followed until it returns to the starting point. Once this is completed, a binary image is obtained in which each point indicates whether it is an edge point.
Optionally, the boundary marking data includes a plurality of binary points, where the binary points include edge points and normal points.
And the correction module 26 is configured to perform correction operation on the original image data according to the boundary mark data, and fuse an operation result with the original image data to obtain a correction result.
Optionally, the correction module includes: the calculating unit is used for carrying out interpolation calculation on pixel values belonging to the boundary range according to the boundary marking data to obtain a calculation result; the updating unit is used for carrying out pixel updating on the original image data according to the calculation result to obtain a pixel value to be corrected; the correction unit is used for determining a correction area according to the pixel value to be corrected and the original image data, carrying out correction operation on the correction area and generating the operation result; and the fusion unit is used for fusing the operation result to the corresponding correction area position in the original image data.
It should be noted that many interpolation calculation modes can be used in the embodiment of the present application, for example the four algorithms of nearest-pixel interpolation, bilinear interpolation, bicubic interpolation and fractal interpolation. Among them, nearest-pixel interpolation is the simplest: when the picture is enlarged, a missing pixel is generated by directly copying the color of the original pixel closest to it, that is, the pixel beside it. For example, when the picture is enlarged, a new pixel X is created; of the two valid pixels A and B, X is closest to B, so X directly takes the value of B and produces the same effect as B. Although this algorithm is simple and therefore fast, the result often shows clearly visible jagged edges, so the effect is usually poor. Bilinear interpolation means that each pixel of the output image is computed from four pixels of the original image; because it operates on four original pixels, it largely eliminates jagging and gives good results. Bicubic interpolation is an improvement of bilinear interpolation in which each output pixel is computed from sixteen original pixels; it achieves good quality without being slow, so it is a very common algorithm and is widely used in image editing software, printer drivers and digital cameras. Fractal interpolation exploits infinite detail and self-similarity, so that however much the image is enlarged it still looks very similar to the original, and the resulting image is clearer and sharper than with the other algorithms, but the computation is considerably more complex.
Through the above embodiment, the prior-art problem that boundary pixels of image data are affected by the close-up lens and are therefore prone to visual distortion and pixel inaccuracy is solved, thereby addressing the technical problem that image processing tasks requiring accurate boundary pixel information need methods for boundary correction and fine processing.
According to another aspect of the embodiment of the present application, there is further provided a nonvolatile storage medium, where the nonvolatile storage medium includes a stored program, and when the program runs, the program controls a device in which the nonvolatile storage medium is located to execute a boundary correction method of a high-precision light field image.
Specifically, the method comprises the following steps: collecting original image data; performing noise reduction processing on the original image data to obtain optimized image data; marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data; and performing a correction operation on the original image data according to the boundary marking data, and fusing the operation result with the original image data to obtain a correction result. Optionally, marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data includes: obtaining three-dimensional gradient parameters of the Canny algorithm, wherein the three-dimensional gradient parameters comprise a horizontal parameter, a vertical parameter and a diagonal parameter; and marking the optimized image data according to a preset hysteresis threshold and the three-dimensional gradient parameters to obtain the boundary marking data. Optionally, the boundary marking data includes a plurality of binary points, where the binary points include edge points and normal points. Optionally, performing the correction operation on the original image data according to the boundary marking data and fusing the operation result with the original image data to obtain the correction result includes: performing interpolation calculation on pixel values belonging to the boundary range according to the boundary marking data to obtain a calculation result; performing pixel updating on the original image data according to the calculation result to obtain pixel values to be corrected; determining a correction area according to the pixel values to be corrected and the original image data, and performing the correction operation on the correction area to generate the operation result; and fusing the operation result to the corresponding correction area position in the original image data.
According to another aspect of the embodiment of the present application, an electronic device is also provided, including a processor and a memory; the memory stores computer readable instructions, and the processor is configured to execute the computer readable instructions, wherein the computer readable instructions, when executed, perform a boundary correction method for a high-precision light field image.
Specifically, the method comprises the following steps: collecting original image data; performing noise reduction processing on the original image data to obtain optimized image data; marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data; and performing a correction operation on the original image data according to the boundary marking data, and fusing the operation result with the original image data to obtain a correction result. Optionally, marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data includes: obtaining three-dimensional gradient parameters of the Canny algorithm, wherein the three-dimensional gradient parameters comprise a horizontal parameter, a vertical parameter and a diagonal parameter; and marking the optimized image data according to a preset hysteresis threshold and the three-dimensional gradient parameters to obtain the boundary marking data. Optionally, the boundary marking data includes a plurality of binary points, where the binary points include edge points and normal points. Optionally, performing the correction operation on the original image data according to the boundary marking data and fusing the operation result with the original image data to obtain the correction result includes: performing interpolation calculation on pixel values belonging to the boundary range according to the boundary marking data to obtain a calculation result; performing pixel updating on the original image data according to the calculation result to obtain pixel values to be corrected; determining a correction area according to the pixel values to be corrected and the original image data, and performing the correction operation on the correction area to generate the operation result; and fusing the operation result to the corresponding correction area position in the original image data.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, fig. 3 is a schematic hardware structure of a terminal device according to an embodiment of the present application. As shown in fig. 3, the terminal device may include an input device 30, a processor 31, an output device 32, a memory 33, and at least one communication bus 34. The communication bus 34 is used to enable communication connections between the elements. The memory 33 may comprise a high-speed RAM memory or may further comprise a non-volatile memory NVM, such as at least one magnetic disk memory, in which various programs may be stored for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 31 may be implemented as, for example, a central processing unit (Central Processing Unit, abbreviated as CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 31 is coupled to the input device 30 and the output device 32 through wired or wireless connections.
Alternatively, the input device 30 may include a variety of input devices, for example, may include at least one of a user-oriented user interface, a device-oriented device interface, a programmable interface of software, a camera, and a sensor. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may be a hardware insertion interface (such as a USB interface, a serial port, etc.) for data transmission between devices; alternatively, the user-oriented user interface may be, for example, a user-oriented control key, a voice input device for receiving voice input, and a touch-sensitive device (e.g., a touch screen, a touch pad, etc. having touch-sensitive functionality) for receiving user touch input by a user; optionally, the programmable interface of the software may be, for example, an entry for a user to edit or modify a program, for example, an input pin interface or an input interface of a chip, etc.; optionally, the transceiver may be a radio frequency transceiver chip, a baseband processing chip, a transceiver antenna, etc. with a communication function. An audio input device such as a microphone may receive voice data. The output device 32 may include a display, audio, or the like.
In this embodiment, the processor of the terminal device may include functions for executing each module of the data processing apparatus in each device, and specific functions and technical effects may be referred to the above embodiments and are not described herein again.
Fig. 4 is a schematic hardware structure of a terminal device according to another embodiment of the present application. Fig. 4 is a specific embodiment of the implementation of fig. 3. As shown in fig. 4, the terminal device of the present embodiment includes a processor 41 and a memory 42.
The processor 41 executes the computer program code stored in the memory 42 to implement the methods of the above-described embodiments.
The memory 42 is configured to store various types of data to support operation at the terminal device. Examples of such data include instructions for any application or method operating on the terminal device, such as messages, pictures, video, etc. The memory 42 may include a random access memory (random access memory, simply referred to as RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
Optionally, a processor 41 is provided in the processing assembly 40. The terminal device may further include: a communication component 43, a power supply component 44, a multimedia component 45, an audio component 46, an input/output interface 47 and/or a sensor component 48. The components and the like specifically included in the terminal device are set according to actual requirements, which are not limited in this embodiment.
The processing component 40 generally controls the overall operation of the terminal device. The processing component 40 may include one or more processors 41 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 40 may include one or more modules that facilitate interactions between the processing component 40 and other components. For example, processing component 40 may include a multimedia module to facilitate interaction between multimedia component 45 and processing component 40.
The power supply assembly 44 provides power to the various components of the terminal device. Power supply components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for terminal devices.
The multimedia component 45 comprises a display screen that provides an output interface between the terminal device and the user. In some embodiments, the display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the display screen includes a touch panel, the display screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation.
The audio component 46 is configured to output and/or input audio signals. For example, the audio component 46 includes a Microphone (MIC) configured to receive external audio signals when the terminal device is in an operational mode, such as a speech recognition mode. The received audio signals may be further stored in the memory 42 or transmitted via the communication component 43. In some embodiments, audio assembly 46 further includes a speaker for outputting audio signals.
The input/output interface 47 provides an interface between the processing assembly 40 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: volume button, start button and lock button.
The sensor assembly 48 includes one or more sensors for providing status assessment of various aspects for the terminal device. For example, the sensor assembly 48 may detect the open/closed state of the terminal device, the relative positioning of the assembly, the presence or absence of user contact with the terminal device. The sensor assembly 48 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact, including detecting the distance between the user and the terminal device. In some embodiments, the sensor assembly 48 may also include a camera or the like.
The communication component 43 is configured to facilitate communication between the terminal device and other devices in a wired or wireless manner. The terminal device may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one embodiment, the terminal device may include a SIM card slot, where the SIM card slot is used to insert a SIM card, so that the terminal device may log into a GPRS network and establish communication with a server through the internet.
From the above, it will be appreciated that the communication component 43, the audio component 46, and the input/output interface 47, the sensor component 48 referred to in the embodiment of fig. 4 may be implemented as an input device in the embodiment of fig. 3.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described embodiments of the apparatus are merely exemplary, and the division of the units, for example, may be a logic function division, and may be implemented in another manner, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interfaces, units or modules, or may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a U-disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present application and it should be noted that modifications and adaptations to those skilled in the art may be made without departing from the principles of the present application, which are intended to be comprehended within the scope of the present application.

Claims (10)

1. A boundary correction method for a high-precision light field image, characterized by comprising the following steps:
collecting original image data;
carrying out noise reduction treatment on the original image data to obtain optimized image data;
marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data;
and carrying out a correction operation on the original image data according to the boundary marking data, and fusing an operation result with the original image data to obtain a correction result.
2. The method of claim 1, wherein marking the boundaries of the optimized image data by a Canny algorithm to obtain boundary marking data comprises:
obtaining three-dimensional gradient parameters of a Canny algorithm, wherein the three-dimensional gradient parameters comprise: horizontal parameters, vertical parameters, diagonal parameters;
and marking the optimized image data according to a preset hysteresis threshold and the three-dimensional gradient parameter to obtain the boundary marking data.
3. The method of claim 2, wherein the boundary marking data comprises a plurality of binary points, wherein the binary points comprise edge points and normal points.
4. The method according to claim 1, wherein carrying out the correction operation on the original image data according to the boundary marking data and fusing the operation result with the original image data to obtain the correction result comprises:
according to the boundary marking data, carrying out interpolation calculation on pixel values belonging to a boundary range to obtain a calculation result;
according to the calculation result, pixel updating is carried out on the original image data to obtain a pixel value to be corrected;
determining a correction area according to the pixel value to be corrected and the original image data, and performing correction operation on the correction area to generate the operation result;
and fusing the operation result to the corresponding correction area position in the original image data.
5. A boundary correction device for a high-precision light field image, comprising:
the acquisition module is used for acquiring original image data;
the noise reduction module is used for carrying out noise reduction processing on the original image data to obtain optimized image data;
the marking module is used for marking the boundary of the optimized image data through a Canny algorithm to obtain boundary marking data;
and the correction module is used for carrying out a correction operation on the original image data according to the boundary marking data, and fusing an operation result with the original image data to obtain a correction result.
6. The apparatus of claim 5, wherein the marking module comprises:
the acquisition unit is used for acquiring three-dimensional gradient parameters of a Canny algorithm, wherein the three-dimensional gradient parameters comprise: horizontal parameters, vertical parameters, diagonal parameters;
and the marking unit is used for marking the optimized image data according to a preset hysteresis threshold and the three-dimensional gradient parameter to obtain the boundary marking data.
7. The apparatus of claim 6, wherein the boundary marking data comprises a plurality of binary points, wherein the binary points comprise edge points and normal points.
8. The apparatus of claim 5, wherein the correction module comprises:
the calculating unit is used for carrying out interpolation calculation on pixel values belonging to the boundary range according to the boundary marking data to obtain a calculation result;
the updating unit is used for carrying out pixel updating on the original image data according to the calculation result to obtain a pixel value to be corrected;
the correction unit is used for determining a correction area according to the pixel value to be corrected and the original image data, carrying out correction operation on the correction area and generating the operation result;
and the fusion unit is used for fusing the operation result to the corresponding correction area position in the original image data.
9. A non-volatile storage medium, characterized in that the non-volatile storage medium comprises a stored program, wherein the program, when run, controls a device in which the non-volatile storage medium is located to perform the method of any one of claims 1 to 4.
10. An electronic device comprising a processor and a memory; the memory has stored therein computer readable instructions for execution by the processor, wherein the computer readable instructions, when executed, perform the method of any of claims 1 to 4.
CN202310577489.8A - Priority date 2023-05-22 - Filing date 2023-05-22 - Boundary correction method and device for high-precision light field image - Active - Granted as CN116883255B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310577489.8A CN116883255B (en) 2023-05-22 2023-05-22 Boundary correction method and device for high-precision light field image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310577489.8A CN116883255B (en) 2023-05-22 2023-05-22 Boundary correction method and device for high-precision light field image

Publications (2)

Publication Number Publication Date
CN116883255A 2023-10-13
CN116883255B CN116883255B (en) 2024-05-24

Family

ID=88266794

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310577489.8A Active CN116883255B (en) 2023-05-22 2023-05-22 Boundary correction method and device for high-precision light field image

Country Status (1)

Country Link
CN (1) CN116883255B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136752A (en) * 2013-02-05 2013-06-05 浙江大学 Image magnification method based on edge extraction
CN111179289A (en) * 2019-12-31 2020-05-19 重庆邮电大学 Image segmentation method suitable for webpage length and width images
CN111489383A (en) * 2020-04-10 2020-08-04 山东师范大学 Depth image up-sampling method and system based on depth edge point and color image
CN115526903A (en) * 2022-11-01 2022-12-27 南京宁麒智能计算芯片研究院有限公司 Hardware computing system and method for image upsampling based on Canny algorithm
CN115631122A (en) * 2022-11-07 2023-01-20 北京拙河科技有限公司 Image optimization method and device for edge image algorithm
CN115731254A (en) * 2021-08-25 2023-03-03 威海北洋电气集团股份有限公司 Method for solving size of image based on sub-pixel
CN115984126A (en) * 2022-12-05 2023-04-18 北京拙河科技有限公司 Optical image correction method and device based on input instruction

Also Published As

Publication number Publication date
CN116883255B (en) 2024-05-24

Similar Documents

Publication Publication Date Title
KR102126300B1 (en) Method and apparatus for generating an all-in-focus image
CN108230333B (en) Image processing method, image processing apparatus, computer program, storage medium, and electronic device
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
US20190122377A1 (en) 3d modeling method based on point cloud data
KR20110124965A (en) Apparatus and method for generating bokeh in out-of-focus shooting
CN108805838B (en) Image processing method, mobile terminal and computer readable storage medium
CN113706421B (en) Image processing method and device, electronic equipment and storage medium
CN111131688B (en) Image processing method and device and mobile terminal
EP3822757A1 (en) Method and apparatus for setting background of ui control
CN111968052B (en) Image processing method, image processing apparatus, and storage medium
CN110009555A (en) Image weakening method, device, storage medium and electronic equipment
CN111311481A (en) Background blurring method and device, terminal equipment and storage medium
CN115631122A (en) Image optimization method and device for edge image algorithm
CN116883255B (en) Boundary correction method and device for high-precision light field image
FR2984668A3 (en) METHOD FOR PROCESSING VIDEO SEQUENCE ON REAL-TIME MOBILE TERMINALS
CN115293985B (en) Super-resolution noise reduction method and device for image optimization
CN108810407B (en) Image processing method, mobile terminal and computer readable storage medium
CN111754435A (en) Image processing method, image processing device, terminal equipment and computer readable storage medium
CN112887611A (en) Image processing method, device, equipment and storage medium
CN116664413B (en) Image volume fog eliminating method and device based on Abbe convergence operator
CN113327290B (en) Binocular module calibration method and device, storage medium and electronic equipment
CN116630643A (en) Pixel splitting method and device based on image object boundary recognition
CN113837955B (en) Image antialiasing processing method and electronic equipment
CN116757983B (en) Main and auxiliary image fusion method and device
CN116579965B (en) Multi-image fusion method and device

Legal Events

PB01 - Publication
SE01 - Entry into force of request for substantive examination
GR01 - Patent grant