CN111292272A - Image processing method, image processing apparatus, image processing medium, and electronic device - Google Patents


Info

Publication number
CN111292272A
CN111292272A (application number CN202010143766.0A)
Authority
CN
China
Prior art keywords
image
repaired
mask
stripes
filtered
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010143766.0A
Other languages
Chinese (zh)
Other versions
CN111292272B (en)
Inventor
谢植淮
刘杉
李松南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010143766.0A priority Critical patent/CN111292272B/en
Publication of CN111292272A publication Critical patent/CN111292272A/en
Application granted granted Critical
Publication of CN111292272B publication Critical patent/CN111292272B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an image processing method, an image processing apparatus, a computer-readable medium, and an electronic device. The method comprises the following steps: acquiring a grayscale image of an image to be repaired, and filtering the grayscale image with a filter to obtain a filtered image; detecting image stripes in the filtered image, and generating an image mask from the image stripes; and repairing the image to be repaired according to the image mask to obtain a repaired image. In the method and apparatus, the image mask used to repair the image to be repaired is generated from the image stripes detected in the filtered image corresponding to the grayscale image, thereby realizing the repair of the image to be repaired. On the one hand, the filter directly extracts edge information in specific directions, removing the limitations of general edge detection and improving detection efficiency; on the other hand, when generating the filtered image and detecting the image stripes, the parameters are easy to tune, the time and space complexity is reduced, and the effectiveness of stripe detection is improved.

Description

Image processing method, image processing apparatus, image processing medium, and electronic device
Technical Field
The present disclosure relates to image processing, and more particularly, to an image processing method, an image processing apparatus, a computer-readable medium, and an electronic device.
Background
With the rapid development of society and the economy, the number of cameras has grown explosively, people's quality requirements for images and videos keep rising, and video processing and image processing have advanced rapidly as a result. However, in complex scenarios such as old-photo restoration and stripe detection, noise stripes in images and video frames have become a key factor affecting image and video quality.
Generally, stripes in an image are found by detecting them frame by frame and statistically analyzing the detection results. During such detection, however, only the direction of a straight line can be determined; the length information of the line segment is lost, and missed detections and spurious detections occur easily, so image stripes in a video cannot be handled well.
In view of the above, there is a need in the art to develop a new image processing method and apparatus.
It should be noted that the information disclosed in the above background section is only for enhancement of understanding of the technical background of the present application, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to an image processing method, an image processing apparatus, a computer-readable medium, and an electronic device, so as to overcome, at least to some extent, the technical problems of information loss and poor video processing effect.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the embodiments of the present disclosure, there is provided an image processing method including: acquiring a grayscale image of an image to be repaired, and filtering the grayscale image with a filter to obtain a filtered image; detecting image stripes in the filtered image, and generating an image mask from the image stripes; and repairing the image to be repaired according to the image mask to obtain a repaired image.
According to an aspect of an embodiment of the present disclosure, there is provided an image processing apparatus including: the image filtering module is configured to acquire a gray image of an image to be restored and filter the gray image by using a filter to obtain a filtered image; a mask generation module configured to detect image fringes in the filtered image and generate an image mask from the image fringes; and the image restoration module is configured to restore the image to be restored according to the image mask to obtain a restored image.
In some embodiments of the present disclosure, based on the above technical solutions, the mask generating module includes: an image scanning unit configured to scan the filtered image and determine image stripes in the filtered image according to a detection threshold; and the mask updating unit is configured to generate an original mask according to the image to be repaired and update the original mask according to the image stripes to obtain an image mask.
In some embodiments of the present disclosure, based on the above technical solutions, the image scanning unit includes: a parameter determination subunit configured to determine a scan start point, a scan end point, and a scan proportion at which the filtered image is scanned; a threshold calculation subunit, configured to scan the filtered image, and calculate the scanning start point, the scanning end point, and the scanning proportion to obtain a detection threshold of an image stripe; and the position recording subunit is configured to determine image stripes in the filtered image according to the scanning result and the detection threshold value and record position information of the image stripes.
In some embodiments of the present disclosure, based on the above technical solutions, the mask updating unit includes: the size obtaining subunit is configured to obtain size information of the image to be repaired and generate an original mask according to the size information; and the stripe replacing subunit is configured to replace the image stripes into the original mask according to the position information to obtain an image mask.
In some embodiments of the present disclosure, based on the above technical solutions, the image filtering module includes: a direction determination unit configured to determine the transverse and longitudinal directions of the kernel function of the Gabor filter; and a filtering processing unit configured to filter the grayscale image in the transverse and longitudinal directions of the Gabor filter, respectively, to obtain a filtered image.
In some embodiments of the present disclosure, based on the above technical solutions, the image inpainting module includes: a convolution kernel generation unit configured to determine a size of an expanded convolution kernel to generate the expanded convolution kernel corresponding to the image mask; and the expansion processing unit is configured to perform expansion processing on the image mask according to the expansion convolution kernel and perform restoration processing on the image to be restored according to the image mask after the expansion processing.
In some embodiments of the present disclosure, based on the above technical solutions, the image filtering module includes: the video acquisition unit is configured to acquire a video to be repaired and extract an image to be repaired from the video to be repaired; and the graying processing unit is configured to perform graying processing on the image to be repaired to obtain a grayscale image of the image to be repaired.
According to an aspect of the embodiments of the present disclosure, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor implements an image processing method as in the above technical solution.
According to an aspect of an embodiment of the present disclosure, there is provided an electronic apparatus including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the image processing method as in the above technical solution via executing the executable instructions.
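The dilation processing described for the image restoration module above can be sketched as follows. This is a minimal NumPy illustration, not the disclosure's implementation: the function name and the all-ones square structuring element are assumptions, and a production system would typically use a library routine such as OpenCV's `dilate`.

```python
import numpy as np

def dilate_mask(mask, ksize=3):
    """Dilate a binary image mask with a ksize x ksize all-ones
    structuring element: an output pixel is set if any pixel in its
    ksize x ksize neighborhood is set, which widens the mask around
    each detected stripe before inpainting."""
    pad = ksize // 2
    padded = np.pad(mask.astype(bool), pad)
    h, w = mask.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + ksize, j:j + ksize].any()
    return out.astype(mask.dtype)
```

With a 3 x 3 kernel, a single marked pixel grows into a 3 x 3 block, so thin stripe masks also cover their blurred borders.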
In the technical solutions provided by the embodiments of the present disclosure, an image mask for repairing the image to be repaired is generated from the image stripes detected in the filtered image corresponding to the grayscale image, thereby realizing the repair of the image to be repaired. On the one hand, the filter directly extracts edge information in specific directions, removing the limitations of general edge detection and improving detection efficiency; on the other hand, when generating the filtered image and detecting the image stripes, the parameters are easy to tune, the time and space complexity is reduced, and the effectiveness of stripe detection is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
fig. 1 schematically illustrates an exemplary system architecture diagram to which the disclosed solution applies;
FIG. 2 schematically illustrates a flow chart of steps of an image processing method in some embodiments of the present disclosure;
FIG. 3 schematically illustrates a flow chart of steps of a method of obtaining a grayscale image in some embodiments of the present disclosure;
FIG. 4 schematically illustrates a flow chart of steps of a method of obtaining a filtered image in some embodiments of the present disclosure;
FIG. 5 schematically illustrates a flow chart of steps of a method of generating an image mask in some embodiments of the present disclosure;
FIG. 6 schematically illustrates a flow chart of steps of a method of determining image streaks in some embodiments of the present disclosure;
FIG. 7 schematically illustrates a flow chart of steps of a method of further generating an image mask in some embodiments of the present disclosure;
FIG. 8 schematically illustrates a flow chart of steps of a method of repairing an image to be repaired in some embodiments of the present disclosure;
FIG. 9 is a flow chart schematically illustrating steps of an image processing method in an application scenario according to an embodiment of the present disclosure;
FIG. 10 schematically illustrates a grayscale image of an old photograph in an application scenario in accordance with an embodiment of the disclosure;
FIG. 11 schematically illustrates a filtered image of an old photograph in an application scenario in accordance with an embodiment of the present disclosure;
FIG. 12 schematically illustrates an image mask of an old photo after dilation processing in an application scenario in accordance with an embodiment of the disclosure;
FIG. 13 schematically illustrates an old photo after the embodiment of the present disclosure repairs in an application scenario;
FIG. 14 schematically illustrates another old photograph of an embodiment of the disclosure before a repair process in an application scenario;
FIG. 15 schematically illustrates another old photograph after a repair process in an application scenario in accordance with an embodiment of the present disclosure;
FIG. 16 schematically shows a block diagram of an image processing apparatus in some embodiments of the present disclosure;
fig. 17 schematically illustrates a structural diagram of a computer system suitable for use in implementing an electronic device of an embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In the related art, upon receiving an image stripe detection task, the task message of the task may be parsed to determine the detection region of the image. Then one frame of the acquired video data is read in a loop, stripes in that frame's detection region are detected, and the detection result for the frame is recorded, until a preset number of frames has been detected. Finally, the recorded per-frame detection results are statistically analyzed to determine the image stripe detection result for the detection region.
Although this method can detect image stripes in any region, both periodic and aperiodic, the line segments are detected with the Hough transform, so during detection only the direction of a straight line can be determined, and the length information of a line segment cannot be obtained. Moreover, in a complex, noisy image, tuning the parameters of the Hough transform becomes extremely complicated, so too many stripes may be detected or stripes may be missed, and such situations in video are difficult to handle.
Based on the problems of the above solutions, the present disclosure provides an image processing method, an image processing apparatus, a computer readable medium, and an electronic device relating to artificial intelligence.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline that spans a wide range of fields, covering both hardware-level and software-level techniques. The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big-data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly comprises computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer Vision (CV) is the science of how to make machines "see": using cameras and computers instead of human eyes to identify, track, and measure targets, and to further process images so that they become more suitable for human observation or for transmission to instruments for inspection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition.
The computer vision technology is utilized to carry out image restoration on the image to be restored, so that the noise stripes in the picture can be removed, and better visual experience is brought to a user.
Fig. 1 shows an exemplary system architecture diagram to which the disclosed solution is applied.
As shown in fig. 1, system architecture 100 may include one or more of terminal devices 110, 120, 130, a network 140, and a server 150. The terminal devices 110, 120, and 130 may be various electronic devices with a display screen, and specifically may be desktop terminals or mobile terminals, and the mobile terminal may be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. Network 140 may be any of a variety of connection types of communication media capable of providing communication links between end devices 110, 120, 130 and server 150, such as wired communication links, wireless communication links, or fiber optic cables. The server 150 may be implemented as a stand-alone server or as a server cluster of multiple servers.
The image processing method related to artificial intelligence provided by the embodiment of the present disclosure is generally performed by the server 150, and accordingly, an image processing apparatus related to artificial intelligence is generally provided in the server 150. However, it is easily understood by those skilled in the art that the image processing method related to artificial intelligence provided in the embodiment of the present disclosure may also be executed by the terminal device 110, 120, 130, and accordingly, the image processing apparatus related to artificial intelligence may also be disposed in the terminal device 110, 120, 130, which is not particularly limited in the present exemplary embodiment.
For example, in an exemplary embodiment, the image to be repaired may be uploaded to the server 150 through the terminal device 110, 120, or 130, the server 150 performs filtering processing on the grayscale image of the image to be repaired by using the image processing method based on artificial intelligence provided in the embodiment of the present disclosure, detects image stripes in the filtered image obtained through the filtering processing, so as to generate an image mask, and further performs repairing processing on the image to be repaired through the image mask, and transmits the image to the terminal device 110, 120, or 130.
The following describes the image processing method, the image processing apparatus, the computer readable medium, and the electronic device provided in the present disclosure in detail with reference to specific embodiments.
Fig. 2 schematically illustrates a flow chart of steps of an image processing method in some embodiments of the present disclosure. As shown in fig. 2, the image processing method may mainly include the following steps:
and S210, acquiring a gray level image of the image to be restored, and filtering the gray level image by using a filter to obtain a filtered image.
And S220, detecting image stripes in the filtered image, and generating an image mask according to the image stripes.
And step 230, repairing the image to be repaired according to the image mask to obtain a repaired image.
In the exemplary embodiments of the present disclosure, an image mask for repairing the image to be repaired is generated from the image stripes detected in the filtered image corresponding to the grayscale image, thereby realizing the repair of the image to be repaired. On the one hand, the filter directly extracts edge information in specific directions, removing the limitations of general edge detection and improving detection efficiency; on the other hand, when generating the filtered image and detecting the image stripes, the parameters are easy to tune, the time and space complexity is reduced, and the effectiveness of stripe detection is improved.
Each step of the image processing method is explained in detail below.
In step S210, a grayscale image of the image to be restored is obtained, and the grayscale image is filtered by a filter to obtain a filtered image.
In the exemplary embodiment of the present disclosure, the image to be repaired may be an input image, or may be an image with interference stripes from any source, such as a frame read from a video; this exemplary embodiment is not particularly limited in this respect. Further, the corresponding grayscale image can be obtained by graying the image to be repaired.
In an alternative embodiment, fig. 3 shows a flowchart of the steps of a method of obtaining a grayscale image. As shown in fig. 3, the method comprises at least the following steps. In step S310, a video to be repaired is acquired, and the image to be repaired is extracted from it. The video to be repaired may be video data provided in real time or previously buffered video data; this exemplary embodiment is not particularly limited in this respect. The format of the video data is likewise not limited: it may be RealMedia Variable Bitrate (RMVB), Audio Video Interleave (AVI), or any other video format.
Further, frames are extracted from the video to be repaired, and one extracted frame is used as the image to be repaired.
In step S320, the image to be repaired is grayed to obtain its grayscale image. Specifically, each pixel in the image to be repaired can be traversed, and the weighted average or plain average of the R (red), G (green), and B (blue) components of its pixel value taken as the gray value of the corresponding pixel in the grayscale image; alternatively, the largest or smallest of the R, G, and B components of each pixel value can be taken as the gray value of the corresponding pixel, thereby obtaining the grayscale image of the image to be repaired.
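The weighted-average variant of the graying step above can be sketched in NumPy as follows. The function name and the choice of the common 0.299/0.587/0.114 luminance weights are illustrative assumptions, not specified by the disclosure:

```python
import numpy as np

def to_grayscale(image_rgb, weights=(0.299, 0.587, 0.114)):
    """Convert an H x W x 3 RGB image to grayscale by taking a weighted
    average of the R, G, B components of each pixel, as described for
    the graying step."""
    image_rgb = np.asarray(image_rgb, dtype=np.float64)
    gray = (image_rgb[..., 0] * weights[0]
            + image_rgb[..., 1] * weights[1]
            + image_rgb[..., 2] * weights[2])
    return np.clip(gray, 0, 255).astype(np.uint8)
```

Passing equal weights of 1/3 gives the plain-average variant, and the max/min-component variants mentioned above would replace the weighted sum with `image_rgb.max(axis=-1)` or `image_rgb.min(axis=-1)`.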
In the exemplary embodiment, the image to be restored extracted from the video is grayed to obtain a corresponding grayscale image, and the image to be restored in the video can be further processed, so that the application scene of the image processing method is enlarged.
After obtaining the grayscale image, the grayscale image may be subjected to a filtering process to obtain a corresponding filtered image.
In an alternative embodiment, fig. 4 shows a flowchart of the steps of a method of obtaining a filtered image. As shown in fig. 4, the method comprises at least the following steps. In step S410, the transverse and longitudinal directions of the kernel function of the Gabor filter are determined. The Gabor filter is a linear filter used for edge extraction. In the spatial domain, a two-dimensional Gabor filter is the product of a sinusoidal plane wave and a Gaussian kernel function; it is optimally localized in the spatial and frequency domains simultaneously, so it describes well the local structural information corresponding to spatial frequency (scale), spatial position, and orientation selectivity. The Gabor kernel function is the product of a Gaussian function and a cosine function, with parameters θ, φ, γ, λ, and σ. Among them, θ represents the orientation of the parallel stripes of the Gabor kernel, with valid values being real numbers from 0° to 360°.
In view of this, θ may be chosen as 90° and 180°, where θ = 90° is taken as the longitudinal direction and θ = 180° as the transverse direction.
In step S420, the grayscale image is filtered with the Gabor filter in the transverse and longitudinal directions, respectively, to obtain filtered images. After the 90° and 180° kernel functions are generated, the grayscale image is convolved with them to obtain the filtered images corresponding to 90° and 180°, that is, the filtered images in the longitudinal and transverse directions.
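The two-direction filtering step can be sketched as follows: a minimal NumPy implementation of the real part of a Gabor kernel plus a plain zero-padded convolution. The function names are illustrative, the phase parameter is written as psi (the disclosure's φ), and in practice a library routine such as OpenCV's `getGaborKernel` and `filter2D` would normally be used instead of the hand-rolled loop:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma, psi=0.0):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope multiplied
    by a cosine carrier.  theta sets the stripe orientation; the text
    above uses 90 and 180 degrees for the two filtering directions."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * x_t / lambd + psi)

def filter_image(gray, kernel):
    """'Same'-size 2-D convolution of the grayscale image with the
    Gabor kernel, using zero padding at the borders."""
    h, w = gray.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(gray.astype(np.float64), pad)
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel[::-1, ::-1])
    return out
```

Running `filter_image` once with a θ = 90° kernel and once with a θ = 180° kernel yields the longitudinal and transverse filtered images described above.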
In the exemplary embodiment, the Gabor filter is used for filtering the gray level image, so that the advantage that the Gabor filter has better sensitivity to the image edge is fully exerted, the recognition rate of the image stripes is improved, and the method has an important significance for image restoration.
In step S220, image fringes in the filtered image are detected, and an image mask is generated from the image fringes.
In an exemplary embodiment of the present disclosure, an image mask may be generated according to the image stripes in the filtered image to further perform a repairing process on the image to be repaired.
In an alternative embodiment, fig. 5 shows a flowchart of the steps of a method of generating an image mask. As shown in fig. 5, the method comprises at least the following steps. In step S510, the filtered image is scanned and the image stripes in it are determined according to a detection threshold. Before scanning the filtered image, the detection threshold used to decide whether a feature is an image stripe may be determined.
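The mask generation and update described here and in the mask updating unit above can be sketched as follows: create an original all-zero mask with the size of the image to be repaired, then write the recorded stripe positions into it. The function and argument names are illustrative assumptions:

```python
import numpy as np

def build_image_mask(image_shape, stripe_rows, stripe_cols):
    """Generate an original mask of zeros matching the size of the
    image to be repaired, then replace the recorded stripe positions
    (whole rows for transverse stripes, whole columns for longitudinal
    ones) with 255 to obtain the image mask."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    for r in stripe_rows:
        mask[r, :] = 255
    for c in stripe_cols:
        mask[:, c] = 255
    return mask
```

The stripe position lists would come from the scanning step described next.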
In an alternative embodiment, fig. 6 shows a flowchart of the steps of a method of determining image stripes. As shown in fig. 6, the method comprises at least the following steps. In step S610, the scan start point, scan end point, and scan proportion for scanning the filtered image are determined. Since the filtered image is obtained by filtering the grayscale image in the transverse and longitudinal directions, the determined scan start and end points also correspond to the transverse and longitudinal directions, from which the transverse and longitudinal stripes can then be obtained.
For a horizontal stripe, the scan start point may be X_begin and the scan end point X_end. Here X_begin, the starting point on the horizontal axis of the filtered image where scanning begins, is generally set to 0, but may be set to another value when the original image to be repaired has a border; this exemplary embodiment is not particularly limited in this respect. X_end, the end point on the horizontal axis of the filtered image where scanning ends, is generally set to the width of the image to be repaired, but may likewise be set to another value when the image to be repaired has a border; this exemplary embodiment is not particularly limited in this respect.
For a longitudinal stripe, the scan start point may be Y_begin and the scan end point Y_end. Here Y_begin, the starting point on the vertical axis of the filtered image where scanning begins, is generally set to 0, but may be set to another value when the original image to be repaired has a border; this exemplary embodiment is not particularly limited in this respect. Y_end, the end point on the vertical axis of the filtered image where scanning ends, is generally set to the height of the image to be repaired, but may likewise be set to another value when the image to be repaired has a border; this exemplary embodiment is not particularly limited in this respect.
The scanning start point and the scanning end point set in this way may achieve the purpose of traversing the filtered image during scanning, and may also be set according to actual requirements, which is not particularly limited in this exemplary embodiment.
An image stripe, whether horizontal or vertical, may run across the entire height or width of the filtered image, or may occupy only a portion of the image length or width. The scan ratio is therefore a parameter set with respect to the image length or width and used to determine the detection threshold. For example, the scan ratio may be expressed as a percentage of the number of pixels, set between 0 and 100%, representing the ratio of the stripe length to the image length or width.
In step S620, the filtered image is scanned, and the detection threshold of the image stripes is calculated from the scanning start point, the scanning end point, and the scanning ratio. Since the filtered image is the real part of the Gabor filter response, its pixel values lie in 0-255, and the pixel values of non-edge regions are all 0; the number of non-0 pixel points can therefore be recorded during scanning to further determine whether a stripe is present. For horizontal stripes, each row of the filtered image can be scanned while counting the number of non-0 pixel points in that row; for vertical stripes, each column of the filtered image can be scanned while counting the number of non-0 pixel points in that column. To determine whether the length formed by the non-0 pixel points counted in each row or column belongs to a stripe, a detection threshold of the image stripes may be determined.
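The per-row and per-column counting described above can be sketched with NumPy (the helper name is an illustrative assumption; the patent does not prescribe an implementation):

```python
import numpy as np

def count_nonzero_per_line(filtered):
    """Count non-0 pixels in each row (horizontal-stripe scan)
    and each column (vertical-stripe scan) of the filtered image."""
    row_counts = np.count_nonzero(filtered, axis=1)  # one count per row
    col_counts = np.count_nonzero(filtered, axis=0)  # one count per column
    return row_counts, col_counts

# Toy filtered image: a full-height vertical edge response in column 2
img = np.zeros((4, 5), dtype=np.uint8)
img[:, 2] = 255
rows, cols = count_nonzero_per_line(img)
# rows -> [1, 1, 1, 1]; cols -> [0, 0, 4, 0, 0]
```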
Specifically, for horizontal stripes, the detection threshold is set to N and is calculated according to Equation 1:

N = P × (X_end − X_begin)    (Equation 1)

where X_begin denotes the scanning start point of the horizontal-axis scan, X_end denotes the scanning end point of the horizontal-axis scan, and P denotes the scan ratio, which may be a percentage of the number of pixels.
For vertical stripes, the detection threshold is likewise set to N and is calculated according to Equation 2:

N = P × (Y_end − Y_begin)    (Equation 2)

where Y_begin denotes the scanning start point of the vertical-axis scan, Y_end denotes the scanning end point of the vertical-axis scan, and P again denotes the scan ratio, which here may be a percentage of the number of pixels.
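Equations 1 and 2 share the same form, so a single helper covers both directions (the function name and the example values are illustrative assumptions):

```python
def detection_threshold(p, begin, end):
    """N = P * (end - begin): minimum non-0 pixel count for a
    row/column to count as a stripe (Equations 1 and 2)."""
    return p * (end - begin)

# Horizontal stripes on a 1920-pixel-wide image, scan ratio P = 0.8
n_horizontal = detection_threshold(0.8, 0, 1920)  # about 1536
```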
In step S630, image stripes in the filtered image are determined according to the scanning result and the detection threshold, and the position information of the image stripes is recorded. After scanning, the number of non-0 pixel points can be counted for each row (horizontal stripes) and each column (vertical stripes); these counts constitute the scanning result. Each count may then be compared with the detection threshold, and any row or column whose number of non-0 pixel points is greater than the detection threshold is determined to be an image stripe.
At the same time, the position information of the image stripes can be recorded. Specifically, if a vertical stripe runs from the vertical-axis scanning start point of the image down to a certain height h, the horizontal-axis pixel coordinate of that column together with the height h may be recorded; other recording manners may also be used. If a vertical stripe starts in the middle of the image and ends at the image end point, the horizontal-axis pixel coordinates corresponding to the vertical stripe are recorded correspondingly. The recording manner for horizontal stripes is similar and is not described again here.
In the present exemplary embodiment, by comparing the scanning result with the calculated detection threshold, the position information of the image stripes can be determined and recorded. The determination method is simple and accurate, can be configured according to actual conditions, and has strong universality.
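Putting the counting, thresholding, and comparison of steps S610-S630 together, a minimal sketch (helper name and the scan ratio default are assumptions) might look like this:

```python
import numpy as np

def find_stripes(filtered, p=0.8):
    """Return indices of rows / columns whose non-0 pixel count
    exceeds the detection threshold N = P * (scan length)."""
    h, w = filtered.shape
    n_rows = p * w          # threshold for horizontal stripes (Equation 1)
    n_cols = p * h          # threshold for vertical stripes (Equation 2)
    row_counts = np.count_nonzero(filtered, axis=1)
    col_counts = np.count_nonzero(filtered, axis=0)
    stripe_rows = np.flatnonzero(row_counts > n_rows)
    stripe_cols = np.flatnonzero(col_counts > n_cols)
    return stripe_rows, stripe_cols

img = np.zeros((6, 8), dtype=np.uint8)
img[:, 3] = 200           # full-height vertical stripe in column 3
rows, cols = find_stripes(img, p=0.8)
# rows -> [] ; cols -> [3]
```

The recorded indices are exactly the "position information" that the mask-updating step below consumes.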
Besides the above, a Hough transform algorithm may also be adopted to detect the image stripes. The Hough transform is a feature detection technique widely used in image analysis, computer vision, and digital image processing; it is used to identify features, such as lines, in the object being sought. The process by which the Hough transform algorithm determines the image stripes is roughly as follows: given the shape of the stripe, namely a line, the algorithm votes in the parameter space, and the shape of the stripe is determined by the local maxima in the accumulator space.
In step S520, an original mask is generated according to the image to be repaired, and the original mask is updated according to the image stripes to obtain an image mask. In order to repair the image to be repaired by using the determined image stripes, an image mask needs to be obtained so that the two can act together in the repair.
In an alternative embodiment, fig. 7 shows a flow chart of the steps of a method of further generating an image mask. As shown in fig. 7, the method comprises at least the following steps. In step S710, the size information of the image to be repaired is obtained, and an original mask is generated according to the size information. The original mask is generated from the image to be repaired: according to its size information, i.e., width and height, an all-0 single-channel image of the same size, namely the original mask, is generated. In step S720, the image stripes are replaced into the original mask according to the position information, resulting in an image mask. From the recorded position information of the image stripes, the corresponding positions can be determined in the original mask. For horizontal stripes, the row pixel values at the corresponding positions of the filtered image are copied and used to replace the corresponding rows of the original mask, generating the corresponding image mask; for vertical stripes, the column pixel values at the corresponding positions of the filtered image are copied and used to replace the corresponding columns of the original mask, likewise generating the corresponding image mask.
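Steps S710 and S720 can be sketched as follows (the helper name is an assumption; the stripe indices are the position information recorded earlier):

```python
import numpy as np

def build_image_mask(filtered, stripe_rows, stripe_cols):
    """Generate an all-0 single-channel mask of the same size as the
    image to be repaired (step S710), then copy the stripe rows and
    columns of the filtered image into it (step S720)."""
    mask = np.zeros(filtered.shape, dtype=np.uint8)   # original mask
    for r in stripe_rows:                             # horizontal stripes
        mask[r, :] = filtered[r, :]
    for c in stripe_cols:                             # vertical stripes
        mask[:, c] = filtered[:, c]
    return mask

filtered = np.zeros((4, 4), dtype=np.uint8)
filtered[:, 1] = 128
mask = build_image_mask(filtered, stripe_rows=[], stripe_cols=[1])
# mask column 1 now holds the filtered pixel values; everything else is 0
```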
In the exemplary embodiment, the image mask can be generated according to the image to be repaired and the filtered image, the updating mode of the original mask is simple and feasible, and preparation is made for further repairing processing.
In step S230, the image to be repaired is repaired according to the image mask, so as to obtain a repaired image.
In exemplary embodiments of the present disclosure, since the width of the image stripes is random, the resulting image mask may not be sufficient to cover the entire image stripe. In order to remove the stripes completely, the image mask may be expanded for further repair.
In an alternative embodiment, fig. 8 shows a flow chart of the steps of a method of repairing the image to be repaired. As shown in fig. 8, the method comprises at least the following steps. In step S810, the size of the expansion convolution kernel is determined so as to generate the expansion convolution kernel corresponding to the image mask. The size of the expansion convolution kernel can be set according to the actual situation: the wider the image stripes are, the larger it can be set, and the more the image mask is expanded. Typically, the size of the expansion convolution kernel is set to 3, 5, 7, and so on; preferably, it may be set to 3 or 5. After the size is determined, a one-dimensional convolution kernel whose weights are all 1 may be used as the expansion convolution kernel.
In step S820, the image mask is subjected to expansion processing according to the expansion convolution kernel, and the image to be repaired is repaired according to the expanded image mask. Specifically, the expansion convolution kernel and the image mask may be passed as parameters to the dilate function of OpenCV. During the expansion processing, the expansion convolution kernel is combined with the corresponding region of the image mask: if the pixel values covered by the expansion convolution kernel are all 0, the pixel value of the expanded image at that pixel point is 0; otherwise, the corresponding pixel value is non-0. The expansion processing expands the image mask outward by one ring; that is, the white area in the image mask, namely the area where the image stripes are located, is enlarged, thereby obtaining the expanded image mask.
Further, the image to be repaired is repaired according to the expanded image mask. Specifically, the image to be repaired and the expanded image mask are input into the image repairing function inpaint of OpenCV. This function repairs the pixels at the non-0 positions of the input image mask. The parameter inpaintRadius, which specifies the neighborhood radius of the repair algorithm, can be set according to the actual situation and typically takes a value in [3, 11]. In addition, the parameter flags specifies the selected repair algorithm, which generally includes INPAINT_NS and INPAINT_TELEA; one of them may be selected as needed.
In the exemplary embodiment, the image to be repaired is repaired through the image mask after the expansion processing, so that the image stripes are ensured to be covered by the image mask, the image stripes are completely removed, and the image repairing effect is optimized.
After the image to be repaired is repaired through the image mask, the repaired image can be obtained. It should be noted that, if the image to be repaired is extracted from the video to be repaired, each frame of image of the video to be repaired may be merged after being repaired, so as to obtain the repaired video.
The following describes the image processing method provided in the embodiment of the present disclosure in detail with reference to a specific application scenario.
Fig. 9 shows a flowchart of the steps of the image processing method in an application scene. As shown in fig. 9, in step S910, a video is input. The video is a video to be repaired, in which noise stripes may appear in every frame or only in one or more frames.
In step S920, a gray image is obtained according to the image to be restored in the video. The grayscale image may be obtained by performing a graying process on each extracted frame of image to be repaired.
In step S930, Gabor filtering is performed on the grayscale image. Fig. 10 shows the grayscale image of an old photograph to be repaired; as shown in fig. 10, the photograph is covered with longitudinal stripes.
To remove the longitudinal stripes, the old photograph may be filtered by setting the orientation θ of the Gabor filter to 90°. Fig. 11 shows the filtered image of the old photograph; as shown in fig. 11, it has non-0 pixel values at the edges and pixel values of 0 at other positions.
In step S940, the scanning module for horizontal and vertical stripes scans the generated filtered image, and image stripes in the filtered image are determined based on the detection threshold. Further, the image stripes are copied and replaced onto the original mask generated according to the image to be repaired, generating the corresponding image mask.
In step S950, whether there are stripes in the image is determined; this may be performed by a module that judges the scanning result of the scanning module. If no image stripe is detected in the filtered image, the video can be output directly; if image stripes are detected in the filtered image, the subsequent processing steps are performed.
Step S960 corresponds to the expansion processing module of the image mask. When image stripes are scanned in the filtered image, the width of the stripes is not yet determined, so the image mask can be subjected to expansion processing by the expansion convolution kernel. Fig. 12 shows the image mask of the old photograph after the expansion processing; as shown in fig. 12, the expanded white area is the position of the stripes in fig. 10, and the pixel values of the other, non-repair positions are all 0.
Step S970 corresponds to the image restoration module for the image to be restored. Image restoration processing is performed on the image to be restored through the expanded image mask, and the specific restoration algorithm can be selected according to the actual situation. Fig. 13 shows the old photograph after restoration; as shown in fig. 13, the many longitudinal stripes on the old photograph have been repaired, and the overall picture effect is improved.
In step S980, after each frame of image to be restored in the video to be restored is successfully restored, the restored images are merged and a restored video is generated, and the video is output for viewing.
In addition, fig. 14 shows another old photo before the repair processing. As shown in fig. 14, the image quality of this old photo is poor, and its longitudinal stripes are more numerous and more conspicuous, seriously affecting the viewing effect.
Correspondingly, fig. 15 shows another old photo after the repair process, and as shown in fig. 15, the longitudinal stripes of the repaired old photo are completely eliminated, and the repair effect is excellent.
Based on the application scenarios, the image processing method provided by the embodiment of the disclosure generates an image mask for repairing an image to be repaired through the detected image stripes in the filtered image corresponding to the gray-scale image, so as to realize the function of repairing the image to be repaired. On one hand, the filter is used for directly acquiring the edge information in a specific direction, so that the limit of edge detection is eliminated, and the detection efficiency is improved; on the other hand, in the process of generating the filtering image and the image stripe, parameter adjustment is convenient, the complexity of time and space is reduced, and the effectiveness of stripe detection is improved.
It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
The following describes embodiments of the apparatus of the present disclosure, which may be used to perform the image processing method in the above-described embodiments of the present disclosure. For details that are not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the image processing method described above in the present disclosure.
Fig. 16 schematically shows a block diagram of an image processing apparatus in some embodiments of the present disclosure. As shown in fig. 16, the image processing apparatus 1600 may mainly include: an image filtering module 1610, a mask generating module 1620, and an image inpainting module 1630.
The image filtering module 1610 is configured to obtain a grayscale image of an image to be restored, and perform filtering processing on the grayscale image by using a filter to obtain a filtered image; a mask generation module 1620 configured to detect image stripes in the filtered image and generate an image mask from the image stripes; the image restoration module 1630 is configured to perform restoration processing on the image to be restored according to the image mask, so as to obtain a restored image.
In some embodiments of the present disclosure, the mask generation module comprises: an image scanning unit configured to scan the filtered image and determine image stripes in the filtered image according to a detection threshold; and the mask updating unit is configured to generate an original mask according to the image to be repaired and update the original mask according to the image stripes to obtain an image mask.
In some embodiments of the present disclosure, an image scanning unit includes: a parameter determination subunit configured to determine a scanning start point, a scanning end point, and a scanning proportion of the scanning filtered image; the threshold calculation subunit is configured to scan the filtered image, and calculate a scanning starting point, a scanning end point and a scanning proportion to obtain a detection threshold of the image stripe; and the position recording subunit is configured to determine image stripes in the filtered image according to the scanning result and the detection threshold value and record position information of the image stripes.
In some embodiments of the present disclosure, the mask updating unit includes: the size obtaining subunit is configured to obtain size information of the image to be repaired and generate an original mask according to the size information; and the stripe replacing subunit is configured to replace the image stripes into the original mask according to the position information to obtain an image mask.
In some embodiments of the present disclosure, the image filtering module comprises: a direction determination unit configured to determine a lateral direction and a longitudinal direction of a kernel function of the Gabor filter; and the filtering processing unit is configured to perform filtering processing on the gray level image by using the transverse direction and the longitudinal direction of the Gabor filter respectively to obtain a filtering image.
In some embodiments of the present disclosure, an image inpainting module includes: a convolution kernel generation unit configured to determine a size of an expansion convolution kernel to generate an expansion convolution kernel corresponding to the image mask; and the expansion processing unit is configured to perform expansion processing on the image mask according to the expansion convolution kernel and perform restoration processing on the image to be restored according to the image mask after the expansion processing.
In some embodiments of the present disclosure, the image filtering module comprises: the video acquisition unit is configured to acquire a video to be repaired and extract an image to be repaired from the video to be repaired; and the graying processing unit is configured to perform graying processing on the image to be repaired to obtain a grayscale image of the image to be repaired.
The specific details of the image processing apparatus provided in the embodiments of the present disclosure have been described in detail in the corresponding method embodiments, and therefore, the details are not described herein again.
FIG. 17 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present disclosure.
It should be noted that the computer system 1700 of the electronic device shown in fig. 17 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiments of the present disclosure.
As shown in fig. 17, a computer system 1700 includes a Central Processing Unit (CPU)1701 that can perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 1702 or a program loaded from a storage section 1708 into a Random Access Memory (RAM) 1703. In the RAM 1703, various programs and data necessary for system operation are also stored. The CPU1701, ROM 1702, and RAM 1703 are connected to each other through a bus 1704. An Input/Output (I/O) interface 1705 is also connected to the bus 1704.
The following components are connected to the I/O interface 1705: an input section 1706 including a keyboard, a mouse, and the like; an output section 1707 including a Display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 1708 including a hard disk and the like; and a communication section 1709 including a network interface card such as a LAN (Local area network) card, a modem, or the like. The communication section 1709 performs communication processing via a network such as the internet. A driver 1710 is also connected to the I/O interface 1705 as necessary. A removable medium 1711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1710 as necessary, so that a computer program read out therefrom is mounted into the storage portion 1708 as necessary.
In particular, the processes described in the various method flowcharts may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 1709, and/or installed from the removable media 1711. When the computer program is executed by a Central Processing Unit (CPU)1701, various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a gray image of an image to be restored, and filtering the gray image by using a filter to obtain a filtered image;
detecting image stripes in the filtering image, and generating an image mask according to the image stripes;
and repairing the image to be repaired according to the image mask to obtain a repaired image.
2. The method of claim 1, wherein the detecting image stripes in the filtered image and generating an image mask from the image stripes comprises:
scanning the filtering image, and determining image stripes in the filtering image according to a detection threshold;
and generating an original mask according to the image to be repaired, and updating the original mask according to the image stripes to obtain an image mask.
3. The method of claim 2, wherein the scanning the filtered image and determining image stripes in the filtered image based on a detection threshold comprises:
determining a scanning starting point, a scanning end point and a scanning proportion for scanning the filtering image;
scanning the filtering image, and calculating the scanning starting point, the scanning end point and the scanning proportion to obtain a detection threshold value of image stripes;
and determining image stripes in the filtered image according to the scanning result and the detection threshold value, and recording the position information of the image stripes.
4. The image processing method according to claim 3, wherein the generating an original mask according to the image to be repaired and updating the original mask according to the image stripes to obtain an image mask comprises:
acquiring size information of the image to be repaired, and generating an original mask according to the size information;
and replacing the image stripes into the original mask according to the position information to obtain an image mask.
5. The image processing method according to claim 1, wherein the filtering the grayscale image with a filter to obtain a filtered image comprises:
determining the transverse direction and the longitudinal direction of a kernel function of the Gabor filter;
and respectively utilizing the transverse direction and the longitudinal direction of the Gabor filter to filter the gray level image to obtain a filtered image.
6. The image processing method according to claim 1, wherein the performing the repairing process on the image to be repaired according to the image mask comprises:
determining a size of an expansion convolution kernel to generate the expansion convolution kernel corresponding to the image mask;
and performing expansion processing on the image mask according to the expansion convolution kernel, and performing restoration processing on the image to be restored according to the image mask after the expansion processing.
7. The image processing method according to claim 1, wherein the obtaining a grayscale image of the image to be repaired comprises:
acquiring a video to be repaired, and extracting an image to be repaired from the video to be repaired;
and carrying out graying processing on the image to be repaired to obtain a grayscale image of the image to be repaired.
8. An image processing apparatus, characterized in that the apparatus comprises:
the image filtering module is configured to acquire a gray image of an image to be restored and filter the gray image by using a filter to obtain a filtered image;
a mask generation module configured to detect image fringes in the filtered image and generate an image mask from the image fringes;
and the image restoration module is configured to restore the image to be restored according to the image mask to obtain a restored image.
9. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the image processing method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the image processing method of any of claims 1 to 7 via execution of the executable instructions.
CN202010143766.0A 2020-03-04 2020-03-04 Image processing method, image processing apparatus, image processing medium, and electronic device Active CN111292272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010143766.0A CN111292272B (en) 2020-03-04 2020-03-04 Image processing method, image processing apparatus, image processing medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010143766.0A CN111292272B (en) 2020-03-04 2020-03-04 Image processing method, image processing apparatus, image processing medium, and electronic device

Publications (2)

Publication Number Publication Date
CN111292272A true CN111292272A (en) 2020-06-16
CN111292272B CN111292272B (en) 2022-03-25

Family

ID=71020883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010143766.0A Active CN111292272B (en) 2020-03-04 2020-03-04 Image processing method, image processing apparatus, image processing medium, and electronic device

Country Status (1)

Country Link
CN (1) CN111292272B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0816825A2 (en) * 1996-06-26 1998-01-07 Toshiba Engineering Corporation Method and apparatus for inspecting streak
US20050002551A1 (en) * 2003-07-01 2005-01-06 Konica Minolta Medical & Graphic, Inc. Medical image processing apparatus, medical image processing system and medical image processing method
JP2005223883A (en) * 2004-01-09 2005-08-18 Nippon Hoso Kyokai <Nhk> Image remedying device, method and program
US20060109522A1 (en) * 2004-11-19 2006-05-25 Xerox Corporation Method for run-time streak detection by profile analysis
US20110007163A1 (en) * 2008-03-19 2011-01-13 Nec Corporation Stripe pattern detection system, stripe pattern detection method, and program for stripe pattern detection
CN102542557A (en) * 2010-12-30 2012-07-04 方正国际软件(北京)有限公司 Method and system for extracting lines from image
CN104966275A (en) * 2015-06-12 2015-10-07 中国科学院深圳先进技术研究院 Method and system for removing raindrop influence from single image based on rain frequency characteristics
CN105263018A (en) * 2015-10-12 2016-01-20 浙江宇视科技有限公司 Method and device for detecting superimposed stripes in a video image
US20170330338A1 (en) * 2014-12-19 2017-11-16 Compagnie Generale Des Etablissements Michelin Method for detecting striations in a tire
CN107545546A (en) * 2016-06-24 2018-01-05 柯尼卡美能达株式会社 Radiation image picking-up system, image processing apparatus and image processing method
CN107767346A (en) * 2017-09-08 2018-03-06 湖北久之洋红外系统股份有限公司 Infrared image stripe-noise filtering method
US20180089820A1 (en) * 2015-07-31 2018-03-29 Hp Indigo B.V. Detection of streaks in images
CN109360168A (en) * 2018-10-16 2019-02-19 烟台艾睿光电科技有限公司 Method, apparatus, infrared detector and storage medium for removing stripes from an infrared image
CN109509158A (en) * 2018-11-19 2019-03-22 电子科技大学 Method for removing infrared image stripes based on amplitude constraints

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
J. Fehrenbach et al.: "Variational Algorithms to Remove Stationary Noise: Applications to Microscopy Imaging", IEEE Transactions on Image Processing *
Zhang Chenyu et al.: "Research on Computer-Based Automatic Recognition of Interference Fringes in Moiré Interferometry", Jiangxi Science *
Zhang Hongbin et al.: "Application of the Gabor Wavelet Transform to Texture-Boundary Detection and Phase Unwrapping", Chinese Journal of Computers *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754439A (en) * 2020-06-28 2020-10-09 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN111754439B (en) * 2020-06-28 2024-01-12 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN112651886A (en) * 2020-11-25 2021-04-13 杭州微帧信息科技有限公司 Method for removing color banding from an enhanced image by using the original image
CN112488942A (en) * 2020-12-02 2021-03-12 北京字跳网络技术有限公司 Method, device, equipment and computer readable medium for repairing image
CN116051386A (en) * 2022-05-30 2023-05-02 荣耀终端有限公司 Image processing method and related device
CN116051386B (en) * 2022-05-30 2023-10-20 荣耀终端有限公司 Image processing method and related device

Also Published As

Publication number Publication date
CN111292272B (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN111292272B (en) Image processing method, image processing apparatus, image processing medium, and electronic device
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN111488865B (en) Image optimization method and device, computer storage medium and electronic equipment
US10062195B2 (en) Method and device for processing a picture
CN111695421B (en) Image recognition method and device and electronic equipment
CN111192226B (en) Image fusion denoising method, device and system
CN111192241B (en) Quality evaluation method and device for face image and computer storage medium
CN110062157B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN112785572A (en) Image quality evaluation method, device and computer readable storage medium
CN115270184A (en) Video desensitization method, vehicle video desensitization method and vehicle-mounted processing system
CN113781356A (en) Training method of image denoising model, image denoising method, device and equipment
CN110473176B (en) Image processing method and device, fundus image processing method and electronic equipment
CN113688839B (en) Video processing method and device, electronic equipment and computer readable storage medium
CN113902636A (en) Image deblurring method and device, computer readable medium and electronic equipment
CN116798041A (en) Image recognition method and device and electronic equipment
US20220207917A1 (en) Facial expression image processing method and apparatus, and electronic device
CN116129417A (en) Digital instrument reading detection method based on low-quality image
CN112052863B (en) Image detection method and device, computer storage medium and electronic equipment
CN115115535A (en) Depth map denoising method, device, medium and equipment
CN112487943A (en) Method and device for removing duplicate of key frame and electronic equipment
CN111985510B (en) Generative model training method, image generation device, medium, and terminal
CN115222606A (en) Image processing method, image processing device, computer readable medium and electronic equipment
CN116978081A (en) Image processing method and device, storage medium, and program product
Qi et al. A 3D visual comfort metric based on binocular asymmetry factor
CN114511458A (en) Image processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40024254
Country of ref document: HK

GR01 Patent grant