CN113240595B - Image detection method, device, storage medium and computer equipment - Google Patents


Info

Publication number
CN113240595B
Authority
CN
China
Prior art keywords
image
target image
mean value
target
pixel
Prior art date
Legal status
Active
Application number
CN202110490641.XA
Other languages
Chinese (zh)
Other versions
CN113240595A (en)
Inventor
刘恩雨
李松南
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202110490641.XA
Publication of CN113240595A
Application granted
Publication of CN113240595B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G06T 5/90
    • G06N 20/00 Machine learning
    • G06T 5/00 Image enhancement or restoration; G06T 5/20 by the use of local operators
    • G06T 7/00 Image analysis; G06T 7/10 Segmentation; edge detection; G06T 7/13 Edge detection
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/20024 Filtering details
    • G06T 2207/20081 Training; learning
    • G06T 2207/20172 Image enhancement details; G06T 2207/20192 Edge enhancement; edge preservation
    • Y02A 90/30 Assessment of water resources

Abstract

The embodiment of the application discloses an image detection method, an image detection device, a storage medium and a computer device. The method includes: acquiring a target image and converting the target image into a grayscale image; performing directional filtering on the grayscale image to obtain a filter map of the target image; performing line segment detection on the filter map to obtain a line segment detection map of the target image; and determining the water ripple in the target image according to the line segment detection map. By converting the target image into a grayscale image, directionally filtering the grayscale image, performing line segment detection to obtain a line segment detection map, and determining the water ripple in the target image from that map, the embodiment of the application improves the accuracy of water ripple detection and provides an effective reference for subsequent effect processing in the water surface region based on the detected position of the water ripple.

Description

Image detection method, device, storage medium and computer equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image detection method, an image detection device, a storage medium, and a computer device.
Background
With the development of the Internet and terminal technologies, images and paintings are presented in increasingly diverse forms. Water ripples, or wave lines, are lines that frequently appear in images and paintings; detecting them helps provide a reference for the position and size of the water surface in an image, and also for subsequent processing of the water surface. How to detect water ripples in an image has therefore become an important research topic in the industry. At present, however, no sufficiently effective water ripple detection method exists.
Disclosure of Invention
The embodiment of the application provides an image detection method, an image detection device, a storage medium and computer equipment, which can effectively detect water waves in a target image and improve the accuracy of water wave detection.
In a first aspect, there is provided an image detection method, the method comprising: acquiring a target image and converting the target image into a gray scale image; performing directional filtering on the gray scale image to obtain a filtering image of the target image; performing line segment detection on the filter map to obtain a line segment detection map of the target image; and determining the water ripple in the target image according to the line segment detection diagram.
In a second aspect, there is provided an image detection apparatus, the apparatus comprising:
an acquisition unit configured to acquire a target image and convert the target image into a grayscale image;
the filtering unit is used for carrying out directional filtering on the gray level image so as to obtain a filtering image of the target image;
the detection unit is used for carrying out line segment detection on the filter image so as to obtain a line segment detection image of the target image;
and the determining unit is used for determining the water ripple in the target image according to the line segment detection diagram.
In a third aspect, there is provided a computer readable storage medium storing a computer program adapted to be loaded by a processor for performing the steps of the image detection method as described in the first aspect above.
In a fourth aspect, there is provided a computer device comprising a processor and a memory, the memory having stored therein a computer program for executing the steps of the image detection method as described in the first aspect above, by invoking the computer program stored in the memory.
According to the embodiment of the application, a target image is acquired and converted into a grayscale image; directional filtering is then performed on the grayscale image to obtain a filter map of the target image; line segment detection is performed on the filter map to obtain a line segment detection map of the target image; and the water ripple in the target image is then determined according to the line segment detection map. By converting the target image into a grayscale image, directionally filtering the grayscale image, performing line segment detection to obtain a line segment detection map, and determining the water ripple in the target image from that map, the embodiment of the application improves the accuracy of water ripple detection and provides an effective reference for subsequent effect processing in the water surface region based on the detected position of the water ripple.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings can be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1a is a schematic structural diagram of an image detection system according to an embodiment of the present application.
Fig. 1b is a schematic flow chart of an image detection method according to an embodiment of the present application.
Fig. 1c is a schematic diagram of a Sobel convolution factor according to an embodiment of the present application.
Fig. 1d is a schematic diagram of an application scenario of an image detection method according to an embodiment of the present application.
Fig. 2 is another flow chart of an image detection method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an image detection device according to an embodiment of the present application.
Fig. 4 is a schematic diagram of another structure of an image detection device according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiment of the application provides an image detection method, an image detection device, a computer device and a storage medium. Specifically, the image detection method of the embodiment of the application may be performed by a computer device, where the computer device may be a terminal or a server. The terminal may be a smart phone, a tablet computer, a notebook computer, a touch-screen device, a smart television, an electronic drawing board, a personal computer (PC), a personal digital assistant (PDA), a smart wearable device or the like, and is not limited herein. The server may be an independent physical server, a server cluster or distributed system composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content distribution network services, big data and artificial intelligence platforms.
Cloud technology refers to a hosting technology that unifies hardware, software, network and other resources within a wide area network or a local area network to realize the computation, storage, processing and sharing of data. It is a general term for the network technology, information technology, integration technology, management platform technology, application technology and the like applied under the cloud computing business model; these resources can form a resource pool and be used flexibly on demand. Cloud computing technology will become an important support: the background services of technical network systems, such as video websites, picture websites and other portal sites, require a large amount of computing and storage resources. With the rapid development and application of the Internet industry, every article may carry its own identification mark in the future, and such marks need to be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data require strong back-end system support, which can only be realized through cloud computing.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in studying how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to endow computers with intelligence, and it is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning and learning from demonstration.
Deep Learning (DL) is a branch of machine Learning, an algorithm that attempts to abstract data at a high level using multiple processing layers that contain complex structures or consist of multiple nonlinear transformations.
The image detection method of the embodiment of the application can be realized by the terminal or can be realized by the terminal and the server together.
The embodiment of the application is described by taking the case where the terminal and the server jointly implement image detection as an example.
Referring to fig. 1a, an image detection system provided in an embodiment of the present application includes a terminal 10, a server 20, and the like; the terminal 10 and the server 20 are connected to each other through a network, for example, a wired or wireless network connection.
The terminal 10 may be used to display a graphical user interface and to interact with a user through that interface; for example, a drawing application is downloaded, installed and run through the terminal. The terminal device may present the graphical user interface to the user in a variety of ways; for example, the interface may be rendered and displayed on a display screen of the terminal device, or presented by holographic projection. For example, the terminal device may include a touch display screen and a processor: the touch display screen is used to present a graphical user interface including a drawing screen and to receive operation instructions generated by the user acting on the graphical user interface, and the processor is used to run the drawing application, generate the graphical user interface, respond to the operation instructions, and control the display of the graphical user interface on the touch display screen. In the embodiment of the present application, a target image to be detected is input through the terminal 10 and then sent to the server 20, so that the server 20 detects the target image; after the server 20 has detected the water ripple of the target image, the detection result is sent to the terminal 10, so that the terminal 10 displays a detection result image containing the water ripple.
The server 20 may specifically be configured to: acquiring a target image and converting the target image into a gray scale image; performing directional filtering on the gray scale image to obtain a filtering image of the target image; performing line segment detection on the filter map to obtain a line segment detection map of the target image; and determining the water wave in the target image according to the line segment detection diagram, and then sending the detection result of the water wave in the determined target image to the terminal 10.
The terminal 10 may display a detection result page after receiving the detection result, wherein the detection result page includes a detection result image of the water ripple.
The image detection method can be applied to software such as a client application, a browser client, or an applet embedded in an instant messaging client. Through a client, browser client or embedded applet installed on a computer device that implements the method provided by the embodiment of the application, a user can input a target image; the target image is converted into a grayscale image, the grayscale image is directionally filtered, line segment detection is performed to obtain a line segment detection map, and the water ripple in the target image is determined according to the line segment detection map. This improves the accuracy of water ripple detection and provides an effective reference for subsequent effect processing in the water surface region based on the detected position of the water ripple.
Each of these is described in detail below. The order in which the following embodiments are described is not intended to limit the preferred order of the embodiments.
Referring to fig. 1b to fig. 1d, fig. 1b is a flow chart of an image detection method according to an embodiment of the present application, fig. 1c is a schematic diagram of a Sobel convolution factor according to an embodiment of the present application, and fig. 1d is an application scenario diagram of an image detection method according to an embodiment of the present application. The specific flow of the method can be as follows:
step 101, acquiring a target image and converting the target image into a gray scale image.
For example, the acquired target image is a color photograph or painting. The color target image is converted into a grayscale image, which can be expressed as the following formula (1):
Gray=R*0.299+G*0.587+B*0.114 (1);
where Gray represents the grayscale value of the target image, and R, G and B are the three color channels of the target image, that is, R is the red channel, G is the green channel, and B is the blue channel of the target image.
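As an illustrative sketch (not part of the patent text), the grayscale conversion of formula (1) can be written with OpenCV and NumPy as follows; the function name and the assumption that the input uses OpenCV's BGR channel order are hypothetical.

```python
import cv2
import numpy as np

def to_gray(target_image_bgr: np.ndarray) -> np.ndarray:
    """Convert a color target image (OpenCV BGR layout assumed) to a grayscale
    image using Gray = R*0.299 + G*0.587 + B*0.114, i.e. formula (1)."""
    b, g, r = cv2.split(target_image_bgr.astype(np.float32))
    gray = r * 0.299 + g * 0.587 + b * 0.114
    return np.clip(gray, 0, 255).astype(np.uint8)
```

cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) applies the same weights and could be used instead.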
And 102, performing directional filtering on the gray scale image to obtain a filtering image of the target image.
In some embodiments, the performing directional filtering on the gray scale map to obtain a filtered map of the target image includes:
Performing Sobel operator edge detection on the gray level image to detect a transverse edge image and a longitudinal edge image of the gray level image;
and performing directional filtering according to the size relation between the pixel mean value of the transverse edge image and the pixel mean value of the longitudinal edge image to obtain a filtering diagram of the target image.
In some embodiments, the retaining, according to the comparison result, of whichever of the transverse edge image and the longitudinal edge image has the larger pixel mean value to obtain a filter map of the target image includes:
if the pixel mean value of the transverse edge image is larger than the pixel mean value of the longitudinal edge image, reserving the transverse edge image, and filtering the longitudinal edge image to obtain a filter diagram only comprising the transverse edge image; or alternatively
If the pixel mean value of the transverse edge image is smaller than the pixel mean value of the longitudinal edge image, reserving the longitudinal edge image, and filtering the transverse edge image to obtain a filter diagram only comprising the longitudinal edge image; or alternatively
And if the pixel mean value of the transverse edge image is equal to the pixel mean value of the longitudinal edge image, reserving the transverse edge image and the longitudinal edge image to obtain a filter diagram containing the transverse edge image and the longitudinal edge image.
For example, the Sobel operator has convolution factors S_x and S_y, as shown in fig. 1c. Gray denotes the grayscale image of the target image; Gx denotes the gray values obtained by transverse edge detection, namely the transverse edge image of the grayscale image; Gy denotes the gray values obtained by longitudinal edge detection, namely the longitudinal edge image of the grayscale image. Gx may be expressed as the following formula (2), and Gy as the following formula (3):
Gx=S_x*Gray (2);
Gy=S_y*Gray (3).
Since water ripples, when they occur, generally run in a single direction and are roughly parallel overall, two mutually perpendicular sets of water ripples essentially never appear. Therefore, the transverse edge image Gx and the longitudinal edge image Gy of the grayscale image are detected separately, and their pixel mean values are calculated. Whichever of the transverse edge image Gx and the longitudinal edge image Gy has the larger pixel mean value is retained for the next step, line segment detection.
If the pixel mean values of the transverse edge image Gx and the longitudinal edge image Gy are equal, Hough line segment detection needs to be performed on Gx and Gy separately in the next step, followed by further screening.
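A minimal sketch of this directional filtering step, assuming OpenCV's Sobel implementation; taking the absolute value of the responses before computing the means is an added assumption (so that positive and negative gradients do not cancel), and the function name is illustrative.

```python
import cv2
import numpy as np

def directional_filter(gray: np.ndarray):
    """Sketch of step 102: compute the edge images Gx and Gy with the Sobel
    convolution factors S_x and S_y, compare their pixel mean values, and keep
    only the direction with the larger mean (both are kept when equal)."""
    gx = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))  # response of S_x
    gy = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3))  # response of S_y
    mean_x, mean_y = float(gx.mean()), float(gy.mean())
    if mean_x > mean_y:
        return gx, None   # filter map contains only the Gx edge image
    if mean_x < mean_y:
        return None, gy   # filter map contains only the Gy edge image
    return gx, gy         # equal means: keep both for separate Hough detection
```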
And step 103, performing line segment detection on the filter graph to obtain a line segment detection graph of the target image.
In some embodiments, the performing line segment detection on the filter map to obtain a line segment detection map of the target image includes:
detecting Hough line segments of the filter graph, and labeling all pixel points detected as line segments in the filter graph;
and setting the pixel value of each marked pixel point in the filter map as a first pixel value, and setting the pixel values of other pixel points which are not marked in the filter map as a second pixel value, so as to obtain a line segment detection map of the target image, wherein the first pixel value is larger than the second pixel value.
For example, a water ripple consists of straight lines or wavy lines. The bending amplitude of a wavy line is generally small, so a wavy line is detected as a number of short straight line segments during line segment detection.
For example, a straight line can be represented in a polar coordinate system by its polar radius and polar angle (r, θ). The straight line can be expressed as the following formula (4):
r=x*cosθ+y*sinθ (4);
where r represents the polar radius of the straight line in the polar coordinate system, θ represents its polar angle, and (x, y) represents a point on the straight line.
For example, for a given point (x_0, y_0), the family of straight lines passing through this point can be defined as the following formula (5):
r_0=x_0*cosθ+y_0*sinθ (5);
where each pair of polar-coordinate parameters (r_0, θ) in formula (5) represents a straight line passing through the point (x_0, y_0) of the filter map of the target image. For a given point (x_0, y_0), plotting all the straight lines passing through it in the polar radius and polar angle plane yields a sinusoidal curve. Therefore, if the curves obtained in this way for two different image points intersect in the polar radius and polar angle plane, the parameters at the intersection correspond to one straight line passing through both points. If the number of curves intersecting at a point in the polar radius and polar angle plane exceeds a given threshold, the corresponding pair of parameters (r_0, θ) is taken to be a straight line in the filter map of the target image; that is, the points whose curves intersect there lie on one straight line. The given threshold generally ranges from 10 to 30; experiments show that a threshold of 15 gives a better line segment detection result.
After line segments have been detected in the filter map through Hough line segment detection, all pixel points detected as belonging to a line segment are labeled in the filter map; the pixel value of each labeled pixel point is set to 255, and the pixel values of the remaining unlabeled pixel points are set to 0.
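The Hough line segment detection and the 255/0 labeling described above might look as follows with OpenCV's probabilistic Hough transform; the binarization step, the minimum segment length and the maximum gap are assumptions not given in the text, while the vote threshold of 15 follows the range mentioned above.

```python
import cv2
import numpy as np

def hough_segment_mask(filter_map: np.ndarray, vote_threshold: int = 15) -> np.ndarray:
    """Sketch of step 103: detect line segments in the filter map and return a
    line segment detection map in which segment pixels are 255 and all other
    pixels are 0."""
    # OpenCV's Hough transforms expect an 8-bit binary/edge image, so normalise
    # the edge responses to 0..255 and binarise them first (an added assumption).
    norm = cv2.normalize(filter_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.threshold(norm, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=vote_threshold,
                               minLineLength=10, maxLineGap=3)
    mask = np.zeros_like(edges, dtype=np.uint8)
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            # Label every pixel lying on a detected segment with 255.
            cv2.line(mask, (int(x1), int(y1)), (int(x2), int(y2)), 255, 1)
    return mask
```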
In some embodiments, if the filter map of the target image is a filter map including the lateral edge image and the longitudinal edge image, after the obtaining the line segment detection map of the target image, the method further includes:
dividing the line segment detection graph into a plurality of first image blocks according to a first preset size and a first preset step length;
respectively calculating an x-direction pixel mean value and a y-direction pixel mean value of each first image block in the plurality of first image blocks;
determining a first target image block with the largest x-direction pixel mean value from the plurality of first image blocks, and determining a second target image block with the largest y-direction pixel mean value from the plurality of first image blocks;
comparing the x-direction pixel mean value of the first target image block with the y-direction pixel mean value of the second target image block;
selecting a direction corresponding to the target image block with the large mean value from the first target image block and the second target image block as a target direction;
and reserving the line segments corresponding to the target direction in the line segment detection diagram of the target image so as to obtain an updated line segment detection diagram.
In some embodiments, after said comparing the size of the x-direction pixel mean of the first target image block with the y-direction pixel mean of the second target image block, further comprising:
If the x-direction pixel mean value of the first target image block is equal to the y-direction pixel mean value of the second target image block, re-dividing the line segment detection graph into a plurality of second image blocks according to a second preset size and a second preset step length, wherein the second preset size is smaller than the first preset size;
respectively calculating an x-direction pixel mean value and a y-direction pixel mean value of each of the plurality of second image blocks;
determining a third target image block with the largest x-direction pixel mean value from the plurality of second image blocks, and determining a fourth target image block with the largest y-direction pixel mean value from the plurality of second image blocks;
comparing the x-direction pixel mean value of the third target image block with the y-direction pixel mean value of the fourth target image block;
selecting a direction corresponding to the target image block with the large mean value from the third target image block and the fourth target image block as a target direction;
and reserving the line segments corresponding to the target direction in the line segment detection diagram of the target image so as to obtain an updated line segment detection diagram.
In step 102, if the pixel mean value of the transverse edge image Gx is equal to the pixel mean value of the longitudinal edge image Gy, both Gx and Gy are retained, giving a filter map that contains the transverse edge image Gx and the longitudinal edge image Gy. In this case, further screening is required on the Hough line segment detection results of Gx and Gy.
For example, the line segment detection map obtained after line segment detection is cut into a number of small, overlapping image blocks; the x-direction pixel mean value and the y-direction pixel mean value of each block are counted; the block with the largest x-direction mean is then compared with the block with the largest y-direction mean, and the direction with the larger value is retained.
For example, with a first preset size of 100×100 and a first preset step of 13, the line segment detection map obtained after line segment detection is cut into 100×100 image blocks (first image blocks) with a step of 13, and the x-direction and y-direction pixel mean values of each block are counted. The x-direction and y-direction pixel mean values of all image blocks are then compared. Because water ripples appear densely within a certain area, the larger the pixel mean value of an image block in a given direction, the more likely it is that water ripples appear in that block; when the pixel mean value of an image block in a direction is small, the lines that appear there are more likely not to be water ripples. The pixel mean value of the block with the largest mean in the x direction is compared with that of the block with the largest mean in the y direction, and the direction with the larger value is retained. If the two are equal, the line segment detection map of the target image is re-cut into 80×80 image blocks and the comparison is repeated; if still equal, 60×60 blocks are used, and so on through 40×40 down to 20×20. Each time the line segment detection map is re-divided, the preset step is chosen so that the cut image blocks overlap; for example, the step may remain 13. If the x-direction and y-direction means are still equal at the end, the line segment detection results of the two directions are superimposed, which is treated as a special case such as central symmetry. The water ripple determination of step 104 is then performed.
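One possible reading of this dual-direction screening is sketched below: the x-direction and y-direction means of a block are taken as the block's means in the segment maps obtained from Gx and Gy respectively. The helper names, the handling of image borders, and the tie-breaking superposition via np.maximum are assumptions.

```python
import numpy as np

def max_block_mean(mask: np.ndarray, size: int, step: int) -> float:
    """Largest mean pixel value over size x size blocks sampled every `step` pixels."""
    h, w = mask.shape
    best = 0.0
    for y in range(0, max(h - size, 0) + 1, step):
        for x in range(0, max(w - size, 0) + 1, step):
            best = max(best, float(mask[y:y + size, x:x + size].mean()))
    return best

def pick_direction(mask_x: np.ndarray, mask_y: np.ndarray, step: int = 13) -> np.ndarray:
    """Compare the largest block mean of the x-direction and y-direction line
    segment maps at shrinking block sizes (100, 80, 60, 40, 20) and keep the
    stronger direction; if all sizes tie, superimpose the two maps."""
    for size in (100, 80, 60, 40, 20):
        mx = max_block_mean(mask_x, size, step)
        my = max_block_mean(mask_y, size, step)
        if mx > my:
            return mask_x
        if mx < my:
            return mask_y
    return np.maximum(mask_x, mask_y)  # special case: overlap both results
```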
If the filter map obtained in step 102 contains edge images in only one direction, for example only the transverse edge image or only the longitudinal edge image, step 104 is performed directly after the line segment detection map has been obtained by Hough line segment detection in step 103.
And 104, determining the water ripple in the target image according to the line segment detection diagram.
For example, all labeled pixel points in the line segment detection graph are determined as water waves in the target image.
In some embodiments, the determining the water ripple in the target image according to the line segment detection diagram includes:
dividing the line segment detection graph into a plurality of third image blocks according to a third preset size and a third preset step length;
and determining the water ripple in the target image according to the pixel mean value of the third image blocks.
In some embodiments, the determining the water ripple in the target image according to the pixel mean value of the third image blocks includes:
calculating a pixel mean value of each of the plurality of third image blocks;
reserving all marked pixel points in the image blocks with the pixel mean value larger than or equal to a preset pixel threshold value in the plurality of third image blocks; and
Canceling labeling of all labeled pixel points in the image blocks, wherein the pixel mean value of the image blocks in the third image blocks is smaller than the preset pixel threshold value;
traversing and comparing the size relation between the pixel mean value of each third image block in the plurality of third image blocks and the preset pixel threshold value to update the labeled pixel point of the line segment detection diagram;
and determining all the marked pixel points in the line segment detection graph after the marked pixel points are updated as the water ripple in the target image.
For example, the line segment labels in the line segment detection map may contain false detections, such as lines in non-water-ripple regions that are also detected as line segments. To detect the water ripple more accurately, the labeled pixels need further screening. Since water ripples generally appear on a large scale rather than as a single isolated line, while a falsely detected line is most likely a single segment, the pixel mean values of image blocks can be used for screening. Specifically, the line segment detection map obtained in step 103 is cut into a number of small, overlapping image blocks and the pixel mean value of each block is counted; when the mean value is greater than a preset pixel threshold, the block is judged to be a region to which the water ripple belongs, and the pixel points with value 255 within that region are the water ripple.
For example, with a third preset size of 100×100 and a third preset step of 38, the line segment detection map obtained in step 103 is cut into 100×100 image blocks (third image blocks) with a step of 38; that is, the statistics of a 100×100 image block are computed every 38 pixels. Since the pixel value at a labeled position of a Hough line segment is 255 and unlabeled areas are 0, the larger the pixel mean value of an image block, the more labeled pixel points the block contains and the more detected line segments appear there. For example, with a preset pixel threshold of 16, when the pixel mean value of an image block is greater than or equal to 16, the labeled pixel points in that block are judged to be accurate and are retained; the originally labeled pixel points in blocks whose mean value does not reach 16 are discarded. After all image blocks have been traversed, the update of the labeled pixel points in the line segment detection map is complete, and all labeled pixel points in the updated map are determined to be the water ripple in the target image.
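A sketch of this block-based screening under the stated parameters (100×100 blocks, step 38, pixel threshold 16); treating a pixel as retained when any overlapping block covering it passes the threshold is one reading of the traversal, and the function name is illustrative.

```python
import numpy as np

def screen_water_ripple(seg_mask: np.ndarray, size: int = 100, step: int = 38,
                        pixel_threshold: float = 16.0) -> np.ndarray:
    """Sketch of step 104: cut the 0/255 line segment detection map into
    overlapping size x size blocks taken every `step` pixels, keep labeled
    pixels only inside blocks whose mean is >= pixel_threshold, and return the
    updated map; the remaining 255-valued pixels are taken as the water ripple."""
    h, w = seg_mask.shape
    keep = np.zeros((h, w), dtype=bool)
    for y in range(0, max(h - size, 0) + 1, step):
        for x in range(0, max(w - size, 0) + 1, step):
            block = seg_mask[y:y + size, x:x + size]
            if block.mean() >= pixel_threshold:        # region to which the water ripple belongs
                keep[y:y + size, x:x + size] = True
    return np.where(keep, seg_mask, 0).astype(np.uint8)
```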
For example, as shown in fig. 1d, image a is the grayscale image of the target image obtained in step 101, and region a in image a is the water ripple region of the target image. The image detection method provided by the embodiment of the application performs Sobel directional filtering on grayscale image a, then performs Hough line segment detection and labels the line segment detection map, and finally cuts the line segment detection map into a number of small, overlapping image blocks and counts the pixel mean value of each block; when the mean value is greater than the preset pixel threshold, the block is judged to be a region to which the water ripple belongs, and the pixel points with value 255 in that region are the water ripple. Image B in fig. 1d is the detection result image; region B shown in image B is the region to which the water ripple belongs, and the line segments in region B are the detected water ripple.
All the above technical solutions may be combined to form an optional embodiment of the present application, and will not be described in detail herein.
According to the embodiment of the application, a target image is acquired and converted into a grayscale image; directional filtering is then performed on the grayscale image to obtain a filter map of the target image; line segment detection is performed on the filter map to obtain a line segment detection map of the target image; and the water ripple in the target image is then determined according to the line segment detection map. By converting the target image into a grayscale image, directionally filtering the grayscale image, performing line segment detection to obtain a line segment detection map, and determining the water ripple in the target image from that map, the embodiment of the application improves the accuracy of water ripple detection and provides an effective reference for subsequent effect processing in the water surface region based on the detected position of the water ripple.
Referring to fig. 2, fig. 2 is another flow chart of the image detection method according to the embodiment of the application. The specific flow of the method can be as follows:
step 201, a target image is acquired and converted into a gray scale image. The specific description of step 201 refers to step 101, and will not be described herein.
And 202, performing directional filtering on the gray scale image to obtain a filtered image of the target image. The specific description of step 202 refers to step 102, and will not be repeated here.
And 203, performing line segment detection on the filter graph to obtain a line segment detection graph of the target image. The specific description of step 203 refers to step 103, and will not be repeated here.
And 204, determining the water ripple in the target image according to the line segment detection diagram. The specific description of step 204 refers to step 104, and will not be repeated here.
And 205, performing target effect processing on the area of the water ripple in the target image.
In some embodiments, the processing the target effect on the area of the water ripple in the target image includes:
detecting object events in the area of the water wave in the target image;
and carrying out target effect processing on the area where the water ripple in the target image belongs according to the detected object event.
In some embodiments, the processing the target effect on the area of the water ripple in the target image according to the detected object event includes:
If the object event is that the object falls into the water surface, generating a water ring ripple effect at the position of the object at the water ripple; or alternatively
If the object event is that the object floats on the water surface, generating a non-parallel moving ripple effect at the position of the object at the water ripple; or alternatively
And if the object event is that no object exists on the water surface, controlling the water wave in the target image to move in parallel.
For example, after detecting the water wave of the target image, the position of the water surface in the target image and the size of the water surface may be further determined, and the water surface or the object on the water surface may be subjected to corresponding effect processing based on the detected water wave. For example, the dynamic effect is generated on the water surface, and the display effect of the target image is increased.
For example, if the detected object event is that an object falls into the water surface, a water-ring ripple effect is created where the object is located in the water ripple. In daily life, when an object falls into the water, concentric ring ripples form on the water surface around the point of entry; they spread from the inside outward, the circles grow larger and larger, and finally the ripples gradually disappear.
For example, if the detected object event is that an object floats on the water surface, a non-parallel moving ripple effect is created where the object is located in the water ripple. For example, an object event in which an object floats on the water surface may include an event in which the object floats and moves forward, such as a bird, a duck or a boat; when such an object moves forward on the water surface, non-parallel moving waves such as herringbone (V-shaped) waves are formed. The herringbone waves spread outward from the two sides of the object, stretch and disperse in the direction opposite to the object's motion, and gradually disappear.
For example, if the detected object event is that there is no object on the water surface, the water ripple in the target image is controlled to move in parallel. For example, the individual segments representing the water ripple may be controlled to move in parallel, such as from the front end of the water ripple toward its rear end. For example, effects such as water droplets and spray may also be added.
For example, if the detected object event is that there is no object on the water surface, the water wave in the target image can be controlled to move in a streamline shape according to the curved line path.
For example, the moving speed of the dynamic moving effect may be controlled according to different screen contents of the target image.
On the image, the effect of a water wave appears as an offset of pixel points. The image is therefore processed much like image scaling: for each point of the output image, the corresponding pixel point in the original input image is calculated, and the color value can be obtained by interpolation. A wave whose vibration is perpendicular to its propagation direction is a transverse wave, and one whose vibration is along the propagation direction is a longitudinal wave; a water wave is a superposition of the two, so when it propagates each particle moves both horizontally and up and down, and the two motions combine into an elliptical circular motion. In the embodiment of the application, when a ripple effect is formed, the offset parameters of different pixel points can be set according to the desired ripple effect.
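As an illustration of the pixel-offset idea, the sketch below warps the image with a radial sine displacement to produce expanding concentric ring ripples; the particular offset model, damping and parameter values are assumptions for demonstration, not the patent's formulas.

```python
import cv2
import numpy as np

def water_ring_effect(image: np.ndarray, center: tuple, t: float,
                      amplitude: float = 4.0, wavelength: float = 24.0,
                      speed: float = 30.0) -> np.ndarray:
    """For each output pixel, compute the corresponding (offset) position in the
    original input image and sample it with bilinear interpolation, producing
    concentric ring ripples around `center` at animation time t."""
    h, w = image.shape[:2]
    grid = np.mgrid[0:h, 0:w].astype(np.float32)
    ys, xs = grid[0], grid[1]
    dx, dy = xs - center[0], ys - center[1]
    r = np.sqrt(dx * dx + dy * dy) + 1e-6
    # Radial offset: a sine wave travelling outwards, attenuated with distance.
    disp = amplitude * np.sin(2 * np.pi * (r - speed * t) / wavelength) / (1.0 + 0.01 * r)
    map_x = (xs + disp * dx / r).astype(np.float32)
    map_y = (ys + disp * dy / r).astype(np.float32)
    # Bilinear interpolation computes the colour value at the offset position.
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REFLECT)
```

A herringbone or parallel-movement effect would use a different displacement field within the same remapping framework.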
All the above technical solutions may be combined to form an optional embodiment of the present application, and will not be described in detail herein.
According to the embodiment of the application, a target image is acquired and converted into a grayscale image; directional filtering is then performed on the grayscale image to obtain a filter map of the target image; line segment detection is performed on the filter map to obtain a line segment detection map of the target image; the water ripple in the target image is then determined according to the line segment detection map; and target effect processing is then performed on the region to which the water ripple in the target image belongs. By converting the target image into a grayscale image, directionally filtering the grayscale image, performing line segment detection to obtain a line segment detection map, determining the water ripple in the target image from that map, and performing target effect processing on the region to which the water ripple belongs, the embodiment of the application improves the accuracy of water ripple detection, adds dynamic effects to the image picture, and enriches the image content.
In order to facilitate better implementation of the image detection method of the embodiment of the application, the embodiment of the application also provides an image detection device. Referring to fig. 3, fig. 3 is a schematic structural diagram of an image detection device according to an embodiment of the application. The image detection apparatus 300 may include:
An acquisition unit 301 configured to acquire a target image and convert the target image into a gray scale;
a filtering unit 302, configured to perform directional filtering on the gray scale map, so as to obtain a filtered map of the target image;
a detection unit 303, configured to perform line segment detection on the filter map, so as to obtain a line segment detection map of the target image;
and the determining unit 304 is configured to determine the water ripple in the target image according to the line segment detection diagram.
In some embodiments, the filtering unit 302 is configured to:
performing Sobel operator edge detection on the gray level image to detect a transverse edge image and a longitudinal edge image of the gray level image;
and performing directional filtering according to the size relation between the pixel mean value of the transverse edge image and the pixel mean value of the longitudinal edge image to obtain a filtering diagram of the target image.
In some embodiments, the filtering unit 302 is configured to retain, according to a comparison result, an image with a large pixel mean value in the lateral edge image and the longitudinal edge image, so as to obtain a filtering diagram of the target image, and specifically includes:
if the pixel mean value of the transverse edge image is larger than the pixel mean value of the longitudinal edge image, reserving the transverse edge image, and filtering the longitudinal edge image to obtain a filter diagram only comprising the transverse edge image; or alternatively
If the pixel mean value of the transverse edge image is smaller than the pixel mean value of the longitudinal edge image, reserving the longitudinal edge image, and filtering the transverse edge image to obtain a filter diagram only comprising the longitudinal edge image; or alternatively
And if the pixel mean value of the transverse edge image is equal to the pixel mean value of the longitudinal edge image, reserving the transverse edge image and the longitudinal edge image to obtain a filter diagram containing the transverse edge image and the longitudinal edge image.
In some embodiments, the detecting unit 303 is configured to:
detecting Hough line segments of the filter graph, and labeling all pixel points detected as line segments in the filter graph;
and setting the pixel value of each marked pixel point in the filter map as a first pixel value, and setting the pixel values of other pixel points which are not marked in the filter map as a second pixel value, so as to obtain a line segment detection map of the target image, wherein the first pixel value is larger than the second pixel value.
In some embodiments, if the filter map of the target image is a filter map including the lateral edge image and the longitudinal edge image, the detecting unit 303 is further configured to:
Dividing the line segment detection graph into a plurality of first image blocks according to a first preset size and a first preset step length;
respectively calculating an x-direction pixel mean value and a y-direction pixel mean value of each first image block in the plurality of first image blocks;
determining a first target image block with the largest x-direction pixel mean value from the plurality of first image blocks, and determining a second target image block with the largest y-direction pixel mean value from the plurality of first image blocks;
comparing the x-direction pixel mean value of the first target image block with the y-direction pixel mean value of the second target image block;
selecting a direction corresponding to the target image block with the large mean value from the first target image block and the second target image block as a target direction;
and reserving the line segments corresponding to the target direction in the line segment detection diagram of the target image so as to obtain an updated line segment detection diagram.
In some embodiments, the detecting unit 303, after comparing the size of the x-direction pixel mean of the first target image block with the size of the y-direction pixel mean of the second target image block, is further configured to:
if the x-direction pixel mean value of the first target image block is equal to the y-direction pixel mean value of the second target image block, re-dividing the line segment detection graph into a plurality of second image blocks according to a second preset size and a second preset step length, wherein the second preset size is smaller than the first preset size;
Respectively calculating an x-direction pixel mean value and a y-direction pixel mean value of each of the plurality of second image blocks;
determining a third target image block with the largest x-direction pixel mean value from the plurality of second image blocks, and determining a fourth target image block with the largest y-direction pixel mean value from the plurality of second image blocks;
comparing the x-direction pixel mean value of the third target image block with the y-direction pixel mean value of the fourth target image block;
selecting a direction corresponding to the target image block with the large mean value from the third target image block and the fourth target image block as a target direction;
and reserving the line segments corresponding to the target direction in the line segment detection diagram of the target image so as to obtain an updated line segment detection diagram.
In some embodiments, the determining unit 304 is configured to:
dividing the line segment detection graph into a plurality of third image blocks according to a third preset size and a third preset step length;
and determining the water ripple in the target image according to the pixel mean value of the third image blocks.
In some embodiments, the determining unit 304 is configured to determine, according to the pixel mean value of the plurality of third image blocks, a moire in the target image, and specifically includes:
Calculating a pixel mean value of each of the plurality of third image blocks;
reserving all marked pixel points in the image blocks with the pixel mean value larger than or equal to a preset pixel threshold value in the plurality of third image blocks; and
canceling labeling of all labeled pixel points in the image blocks, wherein the pixel mean value of the image blocks in the third image blocks is smaller than the preset pixel threshold value;
traversing and comparing the size relation between the pixel mean value of each third image block in the plurality of third image blocks and the preset pixel threshold value to update the labeled pixel point of the line segment detection diagram;
and determining all the marked pixel points in the line segment detection graph after the marked pixel points are updated as the water ripple in the target image.
Referring to fig. 4, fig. 4 is a schematic diagram of another structure of an image detection device according to an embodiment of the application. Fig. 4 differs from fig. 3 in that the image detection apparatus 400 may further include a processing unit 400.
The processing unit 400 is configured to perform target effect processing on an area where the water ripple in the target image belongs.
In some embodiments, the processing unit 400 is specifically configured to:
detecting object events in the area of the water wave in the target image;
And carrying out target effect processing on the area where the water ripple in the target image belongs according to the detected object event.
In some embodiments, the processing unit 400 is configured to perform, according to the detected object event, target effect processing on an area to which the water ripple in the target image belongs, and specifically includes:
if the object event is that the object falls into the water surface, generating a water ring ripple effect at the position of the object at the water ripple; or alternatively
If the object event is that the object floats on the water surface, generating a non-parallel moving ripple effect at the position of the object at the water ripple; or alternatively
And if the object event is that no object exists on the water surface, controlling the water wave in the target image to move in parallel.
All the above technical solutions may be combined to form an optional embodiment of the present application, and will not be described in detail herein.
It should be understood that apparatus embodiments and method embodiments may correspond with each other and that similar descriptions may refer to the method embodiments. To avoid repetition, no further description is provided here. Specifically, the apparatus shown in fig. 3 and fig. 4 may perform the above-mentioned image detection method embodiment, and the foregoing and other operations and/or functions of each module in the apparatus implement respective flows of the above-mentioned method embodiment, which are not repeated herein for brevity.
Correspondingly, the embodiment of the application also provides computer equipment, which can be a terminal or a server, wherein the terminal can be a smart phone, a tablet personal computer, a notebook computer, a touch screen, a smart television, an electronic drawing board, a PC (Personal Computer, PC), a personal digital assistant (Personal Digital Assistant, PDA), an intelligent wearable device and the like. The server can be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, and can also be a cloud server for providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligent platforms and the like. As shown in fig. 5, the computer device may include Radio Frequency (RF) circuitry 501, memory 502 including one or more computer readable storage media, an input unit 503, a display unit 504, a sensor 505, an audio circuit 506, a wireless fidelity (Wireless Fidelity, wiFi) module 507, a processor 508 including one or more processing cores, and a power supply 509. Those skilled in the art will appreciate that the computer device structure shown in FIG. 5 is not limiting of the computer device and may include more or fewer components than shown, or may be combined with certain components, or a different arrangement of components. Wherein:
The RF circuit 501 may be configured to receive and send information or signals during a call, and in particular, after receiving downlink information of a base station, the downlink information is processed by one or more processors 508; in addition, data relating to uplink is transmitted to the base station. In addition, RF circuitry 501 may also communicate with networks and other devices via wireless communications.
The memory 502 may be used to store software programs and modules that the processor 508 performs various functional applications and data processing by executing the software programs and modules stored in the memory 502. The memory 502 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, application programs required for at least one function, and the like; the storage data area may store data created according to the use of the computer device, etc.
The input unit 503 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The display unit 504 may be used to display information entered by a user or provided to a user as well as various graphical user interfaces of a computer device, which may be composed of graphics, text, icons, video, and any combination thereof. The display unit 504 may include a display panel.
The computer device may also include at least one sensor 505, such as a light sensor, a motion sensor, and other sensors.
The audio circuit 506, a speaker, and a microphone may provide an audio interface between the user and the computer device. The audio circuit 506 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; conversely, the microphone converts collected sound signals into electrical signals, which the audio circuit 506 receives and converts into audio data. The audio data is then output to the processor 508 for processing and may be sent, for example, to another computer device via the RF circuit 501, or output to the memory 502 for further processing. The audio circuit 506 may also include an earphone jack to allow peripheral earphones to communicate with the computer device.
WiFi is a short-range wireless transmission technology. Through the WiFi module 507, the computer device can help the user send and receive e-mail, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 5 shows the WiFi module 507, it is understood that it is not an essential component of the computer device and may be omitted as needed without changing the essence of the invention.
The processor 508 is the control center of the computer device. It connects the various parts of the entire computer device using various interfaces and lines, and performs the various functions of the computer device and processes data by running or executing the software programs and/or modules stored in the memory 502 and invoking the data stored in the memory 502, thereby monitoring the computer device as a whole.
The computer device also includes a power supply 509 (e.g., a battery) for powering the various components. The power supply may be logically connected to the processor 508 via a power management system, so that functions such as charge management, discharge management, and power consumption management are performed through the power management system.
Although not shown, the computer device may further include a camera, a Bluetooth module, and the like, which are not described herein. In particular, in this embodiment, the processor 508 in the computer device loads the executable files corresponding to the processes of one or more computer programs into the memory 502 according to the following instructions, and executes the computer programs stored in the memory 502, thereby implementing the following functions:
acquiring a target image and converting the target image into a grayscale image; performing directional filtering on the grayscale image to obtain a filter map of the target image; performing line segment detection on the filter map to obtain a line segment detection map of the target image; and determining the water ripple in the target image according to the line segment detection map.
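For illustration only, and without limiting the implementation, the following is a minimal sketch of these four functions in Python with OpenCV; the helper names directional_filter, detect_line_segments and locate_water_ripple are hypothetical and are sketched alongside claims 2, 3 and 7 below.

    # Illustrative sketch only; assumes Python with OpenCV (cv2).
    # directional_filter, detect_line_segments and locate_water_ripple are
    # hypothetical helpers, sketched after claims 2, 3 and 7 below.
    import cv2

    def detect_water_ripple(image_path):
        target = cv2.imread(image_path)                  # acquire the target image
        gray = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)  # convert to a grayscale image
        filtered = directional_filter(gray)              # directional filtering -> filter map
        line_map = detect_line_segments(filtered)        # line segment detection -> detection map
        return locate_water_ripple(line_map)             # determine the water ripple region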
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, which are not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments may be completed by instructions, or by instructions controlling the relevant hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium storing a plurality of computer programs that can be loaded by a processor to perform the steps of any of the image detection methods provided by the embodiments of the present application.
For the specific implementation of each of the above operations, reference may be made to the previous embodiments, which are not repeated here.
The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
Because the computer programs stored in the storage medium can execute the steps of any image detection method provided by the embodiments of the present application, they can achieve the beneficial effects achievable by any image detection method provided by the embodiments of the present application; see the previous embodiments for details, which are not repeated here.
The image detection method, apparatus, storage medium, and computer device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope in light of the idea of the present application. In summary, the contents of this specification should not be construed as limiting the present application.

Claims (13)

1. An image detection method, the method comprising:
acquiring a target image and converting the target image into a grayscale image;
performing directional filtering on the grayscale image to obtain a filter map of the target image;
performing line segment detection on the filter map to obtain a line segment detection map of the target image;
determining the water ripple in the target image according to the line segment detection map;
wherein the performing directional filtering on the grayscale image to obtain the filter map of the target image includes:
performing Sobel operator edge detection on the grayscale image to detect a transverse edge image and a longitudinal edge image of the grayscale image;
performing directional filtering according to the magnitude relation between the pixel mean value of the transverse edge image and the pixel mean value of the longitudinal edge image to obtain the filter map of the target image;
wherein the performing directional filtering according to the magnitude relation between the pixel mean value of the transverse edge image and the pixel mean value of the longitudinal edge image to obtain the filter map of the target image includes:
if the pixel mean value of the transverse edge image is larger than the pixel mean value of the longitudinal edge image, retaining the transverse edge image and filtering out the longitudinal edge image to obtain a filter map including only the transverse edge image.
2. The image detection method according to claim 1, wherein the performing directional filtering according to the magnitude relation between the pixel mean value of the transverse edge image and the pixel mean value of the longitudinal edge image to obtain the filter map of the target image includes:
if the pixel mean value of the transverse edge image is smaller than the pixel mean value of the longitudinal edge image, retaining the longitudinal edge image and filtering out the transverse edge image to obtain a filter map including only the longitudinal edge image; or
if the pixel mean value of the transverse edge image is equal to the pixel mean value of the longitudinal edge image, retaining both the transverse edge image and the longitudinal edge image to obtain a filter map including the transverse edge image and the longitudinal edge image.
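As a non-limiting sketch of the directional filtering in claims 1 and 2, the following Python/OpenCV code assumes that the transverse edge image is taken from the Sobel y-derivative and the longitudinal edge image from the Sobel x-derivative; this mapping, the kernel size, and the use of absolute gradient values are illustrative assumptions rather than requirements of the claims.

    # Sketch of claims 1-2, assuming OpenCV; the mapping of Sobel derivatives to
    # the transverse / longitudinal edge images and the kernel size are assumptions.
    import cv2
    import numpy as np

    def directional_filter(gray):
        grad_x = np.abs(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))  # responds to vertical edges
        grad_y = np.abs(cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3))  # responds to horizontal edges
        transverse, longitudinal = grad_y, grad_x
        if transverse.mean() > longitudinal.mean():
            filtered = transverse                 # keep only the transverse edge image
        elif transverse.mean() < longitudinal.mean():
            filtered = longitudinal               # keep only the longitudinal edge image
        else:
            filtered = transverse + longitudinal  # equal means: keep both edge images
        return cv2.convertScaleAbs(filtered)      # clamp back to an 8-bit filter map

As reflected in the claims, keeping only the dominant edge direction suppresses edges unrelated to the roughly parallel stripes that water ripple tends to produce.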
3. The image detection method according to claim 1, wherein the performing line segment detection on the filter map to obtain a line segment detection map of the target image includes:
performing Hough line segment detection on the filter map, and labeling all pixel points detected as belonging to line segments in the filter map;
and setting the pixel value of each labeled pixel point in the filter map to a first pixel value, and setting the pixel values of the other, unlabeled pixel points in the filter map to a second pixel value, so as to obtain a line segment detection map of the target image, wherein the first pixel value is larger than the second pixel value.
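A non-limiting sketch of claim 3, assuming OpenCV's probabilistic Hough transform; the Canny binarization step, the Hough thresholds, and the choice of 255 and 0 as the first and second pixel values are assumptions added for illustration.

    # Sketch of claim 3; thresholds and the Canny pre-binarization are illustrative.
    import cv2
    import numpy as np

    def detect_line_segments(filtered, first_value=255, second_value=0):
        edges = cv2.Canny(filtered, 50, 150)  # binarize the filter map before Hough detection
        segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                                   minLineLength=30, maxLineGap=5)
        # Unlabeled pixels take the second pixel value ...
        line_map = np.full(filtered.shape[:2], second_value, dtype=np.uint8)
        if segments is not None:
            for x1, y1, x2, y2 in segments[:, 0]:
                # ... and pixels labeled as lying on a detected segment take the first value.
                cv2.line(line_map, (int(x1), int(y1)), (int(x2), int(y2)), int(first_value), 1)
        return line_map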
4. The image detection method according to claim 3, wherein, if the filter map of the target image is a filter map including the transverse edge image and the longitudinal edge image, the method further comprises, after the obtaining of the line segment detection map of the target image:
dividing the line segment detection map into a plurality of first image blocks according to a first preset size and a first preset step length;
respectively calculating an x-direction pixel mean value and a y-direction pixel mean value of each of the plurality of first image blocks;
determining, from the plurality of first image blocks, a first target image block with the largest x-direction pixel mean value and a second target image block with the largest y-direction pixel mean value;
comparing the x-direction pixel mean value of the first target image block with the y-direction pixel mean value of the second target image block;
selecting, from the first target image block and the second target image block, the direction corresponding to the target image block with the larger mean value as a target direction;
and retaining the line segments corresponding to the target direction in the line segment detection map of the target image, so as to obtain an updated line segment detection map.
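A non-limiting sketch of the block-wise direction selection in claims 4 and 5, under the assumption that the x-direction and y-direction pixel means of a block are computed on two separate maps that rasterize only the near-horizontal and only the near-vertical detected segments, respectively; the block size, step length, and the recursion used for the tie case of claim 5 are illustrative choices.

    # Sketch of claims 4-5; h_map / v_map are assumed maps containing only the
    # near-horizontal / near-vertical line segments, and block/step are illustrative.
    import numpy as np

    def select_dominant_direction(h_map, v_map, block=64, step=32):
        best_x, best_y = 0.0, 0.0
        rows, cols = h_map.shape
        for r in range(0, max(rows - block, 0) + 1, step):
            for c in range(0, max(cols - block, 0) + 1, step):
                best_x = max(best_x, float(h_map[r:r + block, c:c + block].mean()))
                best_y = max(best_y, float(v_map[r:r + block, c:c + block].mean()))
        if best_x > best_y:
            return h_map                       # keep the segments of the x direction
        if best_y > best_x:
            return v_map                       # keep the segments of the y direction
        if block <= 8:
            return np.maximum(h_map, v_map)    # stop tie-breaking at a small block size
        # Equal means: re-divide with a smaller preset size and step, as in claim 5.
        return select_dominant_direction(h_map, v_map, block // 2, max(step // 2, 1))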
5. The image detection method according to claim 4, further comprising, after said comparing the magnitudes of the x-direction pixel mean of the first target image block and the y-direction pixel mean of the second target image block:
if the x-direction pixel mean value of the first target image block is equal to the y-direction pixel mean value of the second target image block, re-dividing the line segment detection map into a plurality of second image blocks according to a second preset size and a second preset step length, wherein the second preset size is smaller than the first preset size;
respectively calculating an x-direction pixel mean value and a y-direction pixel mean value of each of the plurality of second image blocks;
determining a third target image block with the largest x-direction pixel mean value from the plurality of second image blocks, and determining a fourth target image block with the largest y-direction pixel mean value from the plurality of second image blocks;
comparing the x-direction pixel mean value of the third target image block with the y-direction pixel mean value of the fourth target image block;
selecting, from the third target image block and the fourth target image block, the direction corresponding to the target image block with the larger mean value as a target direction;
and retaining the line segments corresponding to the target direction in the line segment detection map of the target image, so as to obtain an updated line segment detection map.
6. The image detection method as claimed in any one of claims 2 to 5, wherein the determining the water ripple in the target image according to the line segment detection map includes:
dividing the line segment detection map into a plurality of third image blocks according to a third preset size and a third preset step length;
and determining the water ripple in the target image according to the pixel mean values of the plurality of third image blocks.
7. The image detection method according to claim 6, wherein the determining the water ripple in the target image according to the pixel mean values of the plurality of third image blocks comprises:
calculating a pixel mean value of each of the plurality of third image blocks;
retaining all labeled pixel points in those of the plurality of third image blocks whose pixel mean value is larger than or equal to a preset pixel threshold value; and
canceling the labeling of all labeled pixel points in those of the plurality of third image blocks whose pixel mean value is smaller than the preset pixel threshold value;
traversing and comparing the magnitude relation between the pixel mean value of each of the plurality of third image blocks and the preset pixel threshold value, so as to update the labeled pixel points of the line segment detection map;
and determining all the labeled pixel points remaining in the updated line segment detection map as the water ripple in the target image.
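A non-limiting sketch of claims 6 and 7, assuming the labeled pixel points carry the first pixel value (here 255) in the line segment detection map; the third preset size, step length, and pixel threshold are illustrative parameters.

    # Sketch of claims 6-7; block size, step and pixel_threshold are illustrative.
    import numpy as np

    def locate_water_ripple(line_map, block=32, step=32, pixel_threshold=8.0):
        kept = np.zeros_like(line_map)
        rows, cols = line_map.shape
        for r in range(0, rows, step):
            for c in range(0, cols, step):
                patch = line_map[r:r + block, c:c + block]
                # Blocks at or above the threshold keep their labeled pixel points;
                # labels in the remaining blocks are cancelled (left at zero in `kept`).
                if patch.size and patch.mean() >= pixel_threshold:
                    kept[r:r + block, c:c + block] = patch
        # Everything still labeled after the update is taken as the water ripple.
        return kept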
8. The image detection method according to claim 1, further comprising, after the determining of the water ripple in the target image:
performing target effect processing on the area to which the water ripple in the target image belongs.
9. The image detection method according to claim 8, wherein the performing target effect processing on the area to which the water ripple in the target image belongs includes:
detecting an object event in the area to which the water ripple in the target image belongs;
and performing target effect processing on the area to which the water ripple in the target image belongs according to the detected object event.
10. The image detection method according to claim 9, wherein the performing target effect processing on the area to which the water ripple in the target image belongs according to the detected object event includes:
if the object event is that an object falls into the water surface, generating a water-ring ripple effect at the position of the object on the water ripple; or
if the object event is that an object floats on the water surface, generating a non-parallel moving ripple effect at the position of the object on the water ripple; or
if the object event is that no object exists on the water surface, controlling the water ripple in the target image to move in parallel.
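For illustration, the event-driven branching of claim 10 can be expressed as a simple dispatch; the object-event detector and the three effect renderers are outside the detail of the claims, so the renderer interface below is purely hypothetical.

    # Sketch of claim 10; `renderer` and its three methods are hypothetical.
    def apply_ripple_effect(ripple_region, object_event, renderer):
        if object_event == "object_falls_into_water":
            renderer.water_ring_ripple(ripple_region)          # ring ripples where the object entered
        elif object_event == "object_floats_on_water":
            renderer.non_parallel_moving_ripple(ripple_region) # non-parallel moving ripples
        else:  # no object on the water surface
            renderer.parallel_ripple_motion(ripple_region)     # ripples move in parallel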
11. An image detection apparatus, the apparatus comprising:
an acquisition unit configured to acquire a target image and convert the target image into a grayscale image;
the filtering unit is used for performing directional filtering on the grayscale image to obtain a filter map of the target image;
the detection unit is used for performing line segment detection on the filter map to obtain a line segment detection map of the target image;
the determining unit is used for determining the water ripple in the target image according to the line segment detection map;
wherein the filtering unit is used for:
performing Sobel operator edge detection on the grayscale image to detect a transverse edge image and a longitudinal edge image of the grayscale image;
performing directional filtering according to the magnitude relation between the pixel mean value of the transverse edge image and the pixel mean value of the longitudinal edge image to obtain the filter map of the target image;
wherein, when performing directional filtering according to the magnitude relation between the pixel mean value of the transverse edge image and the pixel mean value of the longitudinal edge image to obtain the filter map of the target image, the filtering unit is configured to:
if the pixel mean value of the transverse edge image is larger than the pixel mean value of the longitudinal edge image, retain the transverse edge image and filter out the longitudinal edge image to obtain a filter map including only the transverse edge image.
12. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program, which is adapted to be loaded by a processor for performing the steps in the image detection method according to any of claims 1-9.
13. A computer device, characterized in that it comprises a processor and a memory, in which a computer program is stored, the processor being arranged to perform the steps of the image detection method according to any of claims 1-9 by invoking the computer program stored in the memory.
CN202110490641.XA 2021-05-06 2021-05-06 Image detection method, device, storage medium and computer equipment Active CN113240595B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110490641.XA CN113240595B (en) 2021-05-06 2021-05-06 Image detection method, device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN113240595A CN113240595A (en) 2021-08-10
CN113240595B (en) 2023-09-08

Family

ID=77132108

Country Status (1)

Country Link
CN (1) CN113240595B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09179989A (en) * 1995-12-26 1997-07-11 Nissan Motor Co Ltd Road recognition device for vehicle
JP2005141498A (en) * 2003-11-06 2005-06-02 Fuji Photo Film Co Ltd Method, device, and program for edge detection
JP2008171455A (en) * 2008-03-31 2008-07-24 Fujifilm Corp Method, device, and program for edge detection
CN110651299A (en) * 2018-02-28 2020-01-03 深圳市大疆创新科技有限公司 Image water ripple detection method and device, unmanned aerial vehicle and storage device
CN111080661A (en) * 2019-12-09 2020-04-28 Oppo广东移动通信有限公司 Image-based line detection method and device and electronic equipment
CN111681256A (en) * 2020-05-07 2020-09-18 浙江大华技术股份有限公司 Image edge detection method and device, computer equipment and readable storage medium
WO2021004180A1 (en) * 2019-07-09 2021-01-14 平安科技(深圳)有限公司 Texture feature extraction method, texture feature extraction apparatus, and terminal device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lou Jilin, "Research on Automatic Extraction Technology of Road Networks Based on Color and Texture Features," China Master's Theses Full-text Database, Information Science and Technology Series, 2007, abstract and pages 1-65 of the main text. *

Similar Documents

Publication Publication Date Title
US20220261960A1 (en) Super-resolution reconstruction method and related apparatus
US10943145B2 (en) Image processing methods and apparatus, and electronic devices
EP3644219A1 (en) Human face feature point tracking method, device, storage medium and apparatus
CN110189246B (en) Image stylization generation method and device and electronic equipment
CN111275784B (en) Method and device for generating image
CN110059623B (en) Method and apparatus for generating information
CN112766189A (en) Depth forgery detection method, device, storage medium, and electronic apparatus
JP2014527210A (en) Content adaptive system, method and apparatus for determining optical flow
CN112084959B (en) Crowd image processing method and device
CN113034523A (en) Image processing method, image processing device, storage medium and computer equipment
CN110211017B (en) Image processing method and device and electronic equipment
CN116310745A (en) Image processing method, data processing method, related device and storage medium
CN113516697B (en) Image registration method, device, electronic equipment and computer readable storage medium
CN113658196A (en) Method and device for detecting ship in infrared image, electronic equipment and medium
CN108734712B (en) Background segmentation method and device and computer storage medium
CN113240595B (en) Image detection method, device, storage medium and computer equipment
CN110197459B (en) Image stylization generation method and device and electronic equipment
EP4318314A1 (en) Image acquisition model training method and apparatus, image detection method and apparatus, and device
CN110766610A (en) Super-resolution image reconstruction method and electronic equipment
CN113838166B (en) Image feature migration method and device, storage medium and terminal equipment
CN113435393B (en) Forest fire smoke root node detection method, device and equipment
CN110503189B (en) Data processing method and device
CN114742934A (en) Image rendering method and device, readable medium and electronic equipment
CN114612531A (en) Image processing method and device, electronic equipment and storage medium
CN113706446A (en) Lens detection method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40049947)
GR01 Patent grant