CN113240595A - Image detection method, image detection device, storage medium and computer equipment - Google Patents


Info

Publication number
CN113240595A
CN113240595A
Authority
CN
China
Prior art keywords: image, target image, line segment, target, mean value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110490641.XA
Other languages
Chinese (zh)
Other versions
CN113240595B (en)
Inventor
刘恩雨 (Liu Enyu)
李松南 (Li Songnan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110490641.XA
Publication of CN113240595A
Application granted
Publication of CN113240595B
Active legal status
Anticipated expiration legal status

Classifications

    • G06T5/90
    • G06N20/00 Machine learning
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T7/13 Edge detection
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20024 Filtering details
    • G06T2207/20081 Training; Learning
    • G06T2207/20192 Edge enhancement; Edge preservation
    • Y02A90/30 Assessment of water resources

Abstract

An embodiment of the present application discloses an image detection method, an image detection device, a storage medium, and a computer device. The method comprises the following steps: acquiring a target image and converting the target image into a grayscale image; performing directional filtering on the grayscale image to obtain a filtered image of the target image; performing line segment detection on the filtered image to obtain a line segment detection map of the target image; and determining the water ripples in the target image according to the line segment detection map. By converting the target image into a grayscale image, directionally filtering the grayscale image, and then performing line segment detection to obtain the line segment detection map from which the water ripples are determined, the embodiment improves the accuracy of water ripple detection, and the positions of the detected ripples provide an effective reference for subsequent effect processing of the water surface region.

Description

Image detection method, image detection device, storage medium and computer equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular to an image detection method, an image detection device, a storage medium, and a computer device.
Background
With the development of internet and terminal technologies, the presentation of images and paintings has become more and more diverse. Water ripples, or wave lines, are lines that often appear in images and paintings; detecting them helps provide a reference for the position and size of the water surface in an image, as well as for subsequent processing of the water surface. How to detect water ripples in an image has therefore become an important research topic in the industry, yet at present no effective water ripple detection method exists.
Disclosure of Invention
The embodiment of the application provides an image detection method, an image detection device, a storage medium and computer equipment, which can effectively detect the water ripple in a target image and improve the accuracy of water ripple detection.
In a first aspect, an image detection method is provided, the method including: acquiring a target image, and converting the target image into a gray-scale image; performing directional filtering on the gray level image to obtain a filtering image of the target image; performing line segment detection on the filter graph to obtain a line segment detection graph of the target image; and determining the water ripple in the target image according to the line segment detection graph.
In a second aspect, there is provided an image detection apparatus, the apparatus comprising:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring a target image and converting the target image into a gray image;
the filtering unit is used for carrying out directional filtering on the gray level image to obtain a filtering image of the target image;
the detection unit is used for carrying out line segment detection on the filter graph to obtain a line segment detection graph of the target image;
and the determining unit is used for determining the water ripple in the target image according to the line segment detection graph.
In a third aspect, a computer readable storage medium is provided, the computer readable storage medium storing a computer program adapted to be loaded by a processor for performing the steps of the image detection method according to the first aspect.
In a fourth aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory having stored therein a computer program, the processor being configured to execute the steps in the image detection method according to the first aspect by calling the computer program stored in the memory.
According to an embodiment of the present application, a target image is acquired and converted into a grayscale image; directional filtering is then performed on the grayscale image to obtain a filtered image of the target image; line segment detection is performed on the filtered image to obtain a line segment detection map of the target image; and the water ripples in the target image are then determined according to the line segment detection map. By converting the target image into a grayscale image, directionally filtering it, and then detecting line segments, the embodiment improves the accuracy of water ripple detection, and the positions of the detected ripples provide an effective reference for subsequent effect processing of the water surface region.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1a is a schematic structural diagram of an image detection system according to an embodiment of the present application.
Fig. 1b is a schematic flowchart of an image detection method according to an embodiment of the present application.
Fig. 1c is a schematic diagram of a Sobel convolution factor provided in an embodiment of the present application.
Fig. 1d is a schematic view of an application scenario of the image detection method provided in the embodiment of the present application.
Fig. 2 is another schematic flowchart of an image detection method according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of an image detection apparatus according to an embodiment of the present application.
Fig. 4 is another schematic structural diagram of an image detection apparatus according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides an image detection method, an image detection device, computer equipment and a storage medium. Specifically, the image detection method of the embodiment of the present application may be executed by a computer device, where the computer device may be a terminal or a server or other devices. The device may be a smart phone, a tablet Computer, a notebook Computer, a touch screen, a smart television, an electronic drawing board, a Personal Computer (PC), a Personal Digital Assistant (PDA), a smart wearable device, etc., and is not limited herein. The server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, a cloud function, cloud storage, network service, cloud communication, middleware service, domain name service, security service, content distribution network service, big data and an artificial intelligence platform.
Cloud technology refers to a hosting technology that unifies a series of resources, such as hardware, software, and network, over a wide area network or local area network to realize the computation, storage, processing, and sharing of data. It is the general term for the network, information, integration, management-platform, and application technologies applied in the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support. Background services of technical network systems, such as video websites, picture websites, and other web portals, require a large amount of computing and storage resources. With the development of the internet industry, each article may carry its own identification mark that needs to be transmitted to a background system for logic processing; data of different levels are processed separately, and all kinds of industrial data require strong system background support, which can only be realized through cloud computing.
Machine Learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how a computer simulates or realizes human learning behavior to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied throughout all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
Deep Learning (DL) is a branch of machine learning: an algorithm that attempts to perform high-level abstraction of data using multiple processing layers that contain complex structures or consist of multiple nonlinear transformations.
The image detection method of the embodiment of the application can be realized by a terminal, or can be realized by the terminal and a server together.
The embodiment of the application takes an example that a terminal and a server realize an image detection method together.
Referring to fig. 1a, an image detection system provided in the embodiment of the present application includes a terminal 10, a server 20, and the like; the terminal 10 and the server 20 are connected via a network, such as a wired or wireless network connection.
The terminal 10 may be used to display a graphical user interface and to interact with a user through it; for example, a drawing application is downloaded, installed, and run on the terminal. The graphical user interface may be provided to the user in several ways, e.g., rendered on the display screen of the terminal device or presented by holographic projection. For example, the terminal device may include a touch display screen for presenting the graphical user interface, including a drawing picture, and for receiving operation instructions generated by the user acting on it, and a processor for running the drawing application, generating the graphical user interface, responding to the operation instructions, and controlling the display of the graphical user interface on the touch display screen. In an embodiment of the present application, the target image to be detected is input through the terminal 10 and sent to the server 20 for detection; after the server 20 detects the water ripples of the target image, it sends the detection result to the terminal 10, which displays the detection result image containing the water ripples.
Among them, the server 20 may be specifically configured to: acquiring a target image, and converting the target image into a gray-scale image; performing directional filtering on the gray level image to obtain a filtering image of the target image; performing line segment detection on the filter graph to obtain a line segment detection graph of the target image; and determining the water ripple in the target image according to the line segment detection image, and then sending a detection result of the water ripple in the target image to the terminal 10.
After receiving the detection result, the terminal 10 may display a detection result page containing the detection result image of the water ripples.
The image detection method can be applied at a software end such as an applet embedded in a client, a browser client, or an instant messaging client. A user can input a target image through such a client installed on the computer device for implementing the method provided in the embodiments of the present application; the target image is converted into a grayscale image, the grayscale image is directionally filtered, line segment detection is then performed to obtain a line segment detection map, and the water ripples in the target image are determined from that map. This improves the accuracy of water ripple detection and, based on the positions of the detected ripples, provides an effective reference for subsequent effect processing of the water surface region.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
Referring to fig. 1b to fig. 1d, fig. 1b is a schematic flowchart of an image detection method according to an embodiment of the present application, fig. 1c is a schematic diagram of a Sobel convolution factor according to an embodiment of the present application, and fig. 1d is a schematic diagram of an application scenario of the image detection method according to an embodiment of the present application. The specific process of the method can be as follows:
step 101, acquiring a target image, and converting the target image into a gray-scale image.
The acquired target image is, for example, a color photograph or painting. The conversion of the color target image into a grayscale image can be expressed as the following formula (1):
Gray=R*0.299+G*0.587+B*0.114 (1);
where Gray denotes the grayscale image of the target image, and R, G, and B are the red, green, and blue color channels of the target image, respectively.
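As a sketch, formula (1) can be applied per pixel with NumPy (the function name and sample values are illustrative, not from the patent):

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to a grayscale image using the
    luminance weights of formula (1): Gray = R*0.299 + G*0.587 + B*0.114."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

# Pure red, green, and blue pixels map to their respective weights times 255:
img = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]])
gray = to_grayscale(img)  # approximately [[76.245, 149.685, 29.07]]
```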
And 102, performing directional filtering on the gray level image to obtain a filtering image of the target image.
In some embodiments, the directionally filtering the grayscale map to obtain a filtered map of the target image includes:
carrying out Sobel operator edge detection on the gray level image so as to detect a transverse edge image and a longitudinal edge image of the gray level image;
and performing directional filtering according to the size relationship between the pixel mean value of the transverse edge image and the pixel mean value of the longitudinal edge image to obtain a filter graph of the target image.
In some embodiments, retaining, according to the comparison result, whichever of the transverse edge image and the longitudinal edge image has the larger pixel mean to obtain the filtered image of the target image includes:
if the pixel mean value of the transverse edge image is larger than the pixel mean value of the longitudinal edge image, the transverse edge image is reserved, and the longitudinal edge image is filtered out, so that a filter graph only containing the transverse edge image is obtained; or
If the pixel mean value of the transverse edge image is smaller than the pixel mean value of the longitudinal edge image, the longitudinal edge image is reserved, and the transverse edge image is filtered out, so that a filter graph only containing the longitudinal edge image is obtained; or
And if the pixel mean value of the transverse edge image is equal to the pixel mean value of the longitudinal edge image, reserving the transverse edge image and the longitudinal edge image to obtain a filter graph containing the transverse edge image and the longitudinal edge image.
For example, the Sobel operator has the Sobel convolution factors S_x and S_y shown in Fig. 1c. Let Gray denote the grayscale image of the target image; Gx denotes the gray values after horizontal edge detection, i.e., the transverse edge image of the grayscale image; and Gy denotes the gray values after vertical edge detection, i.e., the longitudinal edge image of the grayscale image. Gx and Gy can be expressed as the following formulas (2) and (3):
Gx=S_x*Gray (2);
Gy=S_y*Gray (3)。
Because water ripples, when they appear, are generally unidirectional and roughly parallel overall, two mutually perpendicular sets of water ripples essentially never occur. Therefore, the transverse edge image Gx and the longitudinal edge image Gy of the grayscale image are detected separately, their pixel means are calculated, and whichever of Gx and Gy has the larger pixel mean is retained for the next step of line segment detection.
If the pixel means of the transverse edge image Gx and the longitudinal edge image Gy are equal, Hough line segment detection is performed on both Gx and Gy in the next step, and the detection results are then screened further.
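This selection step can be sketched in NumPy as follows. The kernel entries follow the standard Sobel convention, since Fig. 1c is not reproduced in the text; function names are illustrative, and summing the two images in the tie case is a simplification (the embodiment keeps both images for separate Hough detection):

```python
import numpy as np

# Standard Sobel convolution factors (assumed layout of S_x and S_y in Fig. 1c).
S_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
S_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)

def convolve2d(img, kernel):
    """Minimal valid-mode 2D correlation (padding omitted for brevity)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def directional_filter(gray):
    """Keep whichever edge image (Gx or Gy) has the larger mean absolute
    response; on a tie, keep both (summed here only for simplicity)."""
    gx = np.abs(convolve2d(gray, S_X))  # transverse (horizontal-gradient) edges
    gy = np.abs(convolve2d(gray, S_Y))  # longitudinal (vertical-gradient) edges
    if gx.mean() > gy.mean():
        return gx, "x"
    if gy.mean() > gx.mean():
        return gy, "y"
    return gx + gy, "both"
```

On an image whose brightness varies only down the rows, Gx is zero everywhere and the "y" direction is kept.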
And 103, performing line segment detection on the filter graph to obtain a line segment detection graph of the target image.
In some embodiments, the performing line segment detection on the filter map to obtain a line segment detection map of the target image includes:
carrying out Hough line segment detection on the filter graph, and labeling all pixel points detected as line segments in the filter graph;
setting the pixel value of each labeled pixel point in the filter image as a first pixel value, and setting the pixel values of other pixel points which are not labeled in the filter image as a second pixel value to obtain a line segment detection image of the target image, wherein the first pixel value is larger than the second pixel value.
For example, water ripples consist of straight or wavy lines. Since the bending amplitude of a wavy line is generally very small, a wavy line can be detected as a series of short straight line segments during line segment detection.
For example, a straight line may be represented by the parameters polar radius and polar angle (r, θ) in a polar coordinate system. The straight line may be expressed as the following expression (4):
r=x*cosθ+y*sinθ (4);
where r represents the polar radius of the straight line (its perpendicular distance from the origin) in the polar coordinate system, θ represents the polar angle of the straight line, and the point (x, y) represents a point on the straight line.
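For instance, the horizontal line y = 5 has polar parameters θ = 90° and r = 5, and every point on it satisfies formula (4) (a quick numerical check, not from the patent):

```python
import math

# Normal (polar) form of a line: r = x*cos(theta) + y*sin(theta).
# The horizontal line y = 5 corresponds to theta = pi/2 and r = 5:
theta, r = math.pi / 2, 5.0
for x in range(-3, 4):
    assert abs(x * math.cos(theta) + 5.0 * math.sin(theta) - r) < 1e-9
```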
For example, for a point (x0, y0) on the filtered image of the target image, the family of straight lines passing through this point can be defined as the following formula (5):
r0=x0*cosθ+y0*sinθ (5);
where each pair of polar-coordinate parameters (r0, θ) satisfying formula (5) represents a straight line passing through the point (x0, y0) on the filtered image of the target image. If, for a given point (x0, y0), all straight lines passing through (x0, y0) are plotted in the polar radius-polar angle (r, θ) plane, the result is a sinusoidal curve. Therefore, if the curves obtained by performing this operation for two different points intersect in the (r, θ) plane, the parameters at the intersection correspond to a straight line passing through both points. If the number of curves intersecting at one point in the (r, θ) plane exceeds a given threshold, the parameter pair (r0, θ) represented by that intersection is considered a straight line in the filtered image of the target image; that is, the corresponding points (x0, y0) are considered to lie on one straight line. The given threshold ranges from 10 to 30; experiments show that a given threshold of 15 yields a good line segment detection effect.
After the line segments in the filtered image are detected by Hough line segment detection, all pixel points detected as belonging to a line segment are labeled: the pixel value of each labeled pixel point is set to 255, and the pixel values of the remaining unlabeled pixel points are set to 0.
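The detection-and-labeling step can be sketched with a simplified full-line Hough accumulator (illustrative names; a sketch only, since the embodiment detects line segments, for which a probabilistic variant such as OpenCV's HoughLinesP is typically used):

```python
import numpy as np

def hough_label(edge, threshold=15, n_theta=180):
    """Label edge pixels lying on a line whose Hough accumulator count
    exceeds `threshold` (the patent reports 15 works well within 10-30).
    Labeled pixels are set to 255, all others to 0."""
    ys, xs = np.nonzero(edge)
    h, w = edge.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    # Formula (5): r = x*cos(theta) + y*sin(theta), shifted by diag so
    # the radius index is non-negative.
    r_idx = np.round(xs[:, None] * np.cos(thetas)
                     + ys[:, None] * np.sin(thetas)).astype(int) + diag
    for p in range(len(xs)):
        acc[r_idx[p], np.arange(n_theta)] += 1   # one vote per (r, theta) pair
    votes = acc[r_idx, np.arange(n_theta)]       # votes seen by each pixel
    on_line = (votes > threshold).any(axis=1)    # pixel lies on a strong line
    out = np.zeros_like(edge, dtype=np.uint8)
    out[ys[on_line], xs[on_line]] = 255
    return out
```

A 20-pixel horizontal run is labeled, while an isolated pixel collects too few votes and is suppressed.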
In some embodiments, if the filter map of the target image is a filter map including the lateral edge image and the longitudinal edge image, after the obtaining the line segment detection map of the target image, the method further includes:
dividing the line segment detection graph into a plurality of first image blocks according to a first preset size and a first preset step length;
respectively calculating the x-direction pixel mean value and the y-direction pixel mean value of each first image block in the plurality of first image blocks;
determining a first target image block with the largest x-direction pixel mean value from the plurality of first image blocks, and determining a second target image block with the largest y-direction pixel mean value from the plurality of first image blocks;
comparing the x-direction pixel mean value of the first target image block with the y-direction pixel mean value of the second target image block;
selecting a direction corresponding to a target image block with a large mean value from the first target image block and the second target image block as a target direction;
and reserving the direction of the line segment in the line segment detection graph of the target image and the line segment corresponding to the target direction to obtain an updated line segment detection graph.
In some embodiments, after the comparing the magnitudes of the x-direction pixel mean of the first target image block and the y-direction pixel mean of the second target image block, further comprises:
if the pixel mean value in the x direction of the first target image block is equal to the pixel mean value in the y direction of the second target image block, the line segment detection graph is divided into a plurality of second image blocks again according to a second preset size and a second preset step length, wherein the second preset size is smaller than the first preset size;
respectively calculating the x-direction pixel mean value and the y-direction pixel mean value of each second image block in the plurality of second image blocks;
determining a third target image block with the largest x-direction pixel mean value from the plurality of second image blocks, and determining a fourth target image block with the largest y-direction pixel mean value from the plurality of second image blocks;
comparing the x-direction pixel mean value of the third target image block with the y-direction pixel mean value of the fourth target image block;
selecting a direction corresponding to a target image block with a large mean value from the third target image block and the fourth target image block as a target direction;
and reserving the direction of the line segment in the line segment detection graph of the target image and the line segment corresponding to the target direction to obtain an updated line segment detection graph.
In step 102, if the pixel mean of the transverse edge image Gx is equal to the pixel mean of the longitudinal edge image Gy, both Gx and Gy are retained to obtain a filtered image containing the transverse edge image Gx and the longitudinal edge image Gy. In this case, the Hough line segment detection results of Gx and Gy need to be screened further.
For example, the line segment detection map obtained after line segment detection is cut into a plurality of small overlapping image blocks, the x-direction pixel mean and the y-direction pixel mean of each block are counted, the block with the maximum x-direction mean is compared against the block with the maximum y-direction mean, and the direction with the larger mean is retained.
For example, with a first preset size of 100 × 100 and a first preset step of 13, the line segment detection map obtained after line segment detection is cut into 100 × 100 image blocks (first image blocks) with a stride of 13, and the x-direction and y-direction pixel means of each block are counted. Because water ripples appear densely within a certain area, the larger the pixel mean of an image block in one direction, the higher the probability that water ripples occur in that block; when the pixel mean in a direction is small, the lines that appear there are probably not ripples. The pixel mean of the block with the maximum x-direction mean is compared with that of the block with the maximum y-direction mean, and the direction with the larger mean is retained.

If those two maxima are equal, the line segment detection map of the target image is cut again into 80 × 80 image blocks and the comparison is repeated; if they are still equal, it is cut into 60 × 60 blocks, then 40 × 40, and so on down to 20 × 20. Each time the map is re-segmented, the preset step is set so that the resulting small image blocks overlap; for example, the step may again be set to 13. If the x-direction and y-direction maxima are still equal at the end, the line segment detection results in the two directions are superimposed, and the special case, such as central symmetry, is determined accordingly.
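A possible sketch of this block-wise direction screening, assuming the "x-direction" and "y-direction" pixel means refer to block means of the line segment maps derived from Gx and Gy respectively (an interpretation; the patent does not state this verbatim):

```python
import numpy as np

def best_direction(seg_x, seg_y, block=100, step=13):
    """Slide a block x block window with stride `step` over the x- and
    y-direction line segment maps, take the maximum per-block mean in each,
    and keep the direction whose maximum is larger; "tie" signals that a
    smaller block size should be tried, per the embodiment."""
    def max_block_mean(img):
        h, w = img.shape
        best = 0.0
        for i in range(0, max(h - block, 0) + 1, step):
            for j in range(0, max(w - block, 0) + 1, step):
                best = max(best, img[i:i + block, j:j + block].mean())
        return best

    mx, my = max_block_mean(seg_x), max_block_mean(seg_y)
    if mx > my:
        return "x"
    if my > mx:
        return "y"
    return "tie"
```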
The step of determining the water ripple in step 104 is then performed.
If the filtered image obtained in step 102 contains only a single-direction edge image, such as a filtered image containing only the transverse edge image or only the longitudinal edge image, step 104 is performed directly once the Hough line segment detection of step 103 yields the line segment detection map.
And step 104, determining the water ripple in the target image according to the line segment detection graph.
For example, all the labeled pixel points in the line segment detection graph are determined as the water ripples in the target image.
In some embodiments, the determining, from the line segment detection map, water ripples in the target image includes:
dividing the line segment detection graph into a plurality of third image blocks according to a third preset size and a third preset step length;
and determining the water ripple in the target image according to the pixel mean values of the plurality of third image blocks.
In some embodiments, the determining the water ripple in the target image according to the pixel mean of the plurality of third image blocks includes:
calculating a pixel mean value of each of the plurality of third image blocks;
reserving all labeled pixel points in those of the plurality of third image blocks whose pixel mean value is greater than or equal to a preset pixel threshold value; and
removing the labels of all labeled pixel points in those of the plurality of third image blocks whose pixel mean value is smaller than the preset pixel threshold value;
traversing and comparing the magnitude relation between the pixel mean value of each of the plurality of third image blocks and the preset pixel threshold value, so as to update the labeled pixel points of the line segment detection map;
and determining all the marking pixel points in the updated line segment detection graph as the water ripples in the target image.
For example, false detections may occur among the line segments labeled in the line segment detection map, such as a line in a non-water-ripple region also being detected as a line segment. To detect the water ripples more accurately, the labeled pixel points need to be further screened. Water ripples generally appear over a large area rather than in isolation, whereas a falsely detected line segment is most likely an isolated one, so the screening can be performed on the pixel mean values of image blocks. Specifically, the line segment detection map obtained in step 103 is cut into a plurality of small overlapping image blocks and the pixel mean value of each block is counted; when the pixel mean value is greater than a preset pixel threshold, the area is determined to belong to the water ripples, and the pixel points with a pixel value of 255 in that area belong to the water ripples.
For example, with a third preset size of 100 × 100 and a third preset step size of 38, the line segment detection map obtained in step 103 is cut into 100 × 100 image blocks (third image blocks) at a step of 38; that is, every 38 pixels, the information of one 100 × 100 image block is computed. Since the pixel value at each position labeled by a Hough line segment (a labeled pixel point) is 255 and unlabeled areas are 0, the larger the pixel mean value of an image block, the more labeled pixel points it contains and the more detected line segments appear in it. For example, with a preset pixel threshold of 16, when the pixel mean value of an image block is greater than or equal to 16, the labeled pixel points in that block are considered accurate and the original labeled pixel points in it are retained, while the original labeled pixel points in blocks whose pixel mean value does not reach 16 are not retained. After all image blocks have been traversed, the labeled pixel points in the line segment detection map are updated, and all labeled pixel points in the updated map are determined as the water ripples in the target image.
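A rough sketch of this screening step follows (the name `filter_labels` is invented; because the 100 × 100 blocks overlap at step 38, this sketch keeps a labeled pixel if any block covering it reaches the threshold, which is one possible reading of the rule):

```python
import numpy as np

def filter_labels(det, size=100, step=38, thresh=16):
    """Screen the line segment detection map: keep labeled pixels (255)
    only where some covering block has mean >= thresh; de-label the rest."""
    h, w = det.shape
    keep = np.zeros((h, w), dtype=bool)
    for y in range(0, max(h - size, 0) + 1, step):
        for x in range(0, max(w - size, 0) + 1, step):
            if det[y:y + size, x:x + size].mean() >= thresh:
                keep[y:y + size, x:x + size] = True
    return np.where(keep, det, 0).astype(det.dtype)
```

A dense ripple region easily clears the mean-16 threshold, while a lone falsely detected segment contributes almost nothing to its block's mean and is de-labeled.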
For example, as shown in fig. 1d, the grayscale image of the target image obtained after the processing in step 101 is image a in fig. 1d, where area a in image a is the water ripple area of the target image. The image detection method provided in the embodiment of the present application performs Sobel operator directional filtering on grayscale image a, then performs Hough line segment detection and labels the line segment detection map, and finally cuts the line segment detection map into a plurality of small overlapping image blocks and counts the pixel mean value of each block; when the mean value is greater than a preset pixel threshold, the area is determined to belong to the water ripple area, and the pixel points with a pixel value of 255 in that area are determined to be water ripples. Image B in fig. 1d is the detection result image, area B in image B is the area to which the water ripples belong, and the line segments in area B are the detected water ripples.
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
According to the embodiment of the application, a target image is obtained and converted into a grayscale image; directional filtering is then performed on the grayscale image to obtain a filter map of the target image; line segment detection is performed on the filter map to obtain a line segment detection map of the target image; and the water ripples in the target image are determined according to the line segment detection map. By converting the target image into a grayscale image, performing directional filtering on it, then performing line segment detection to obtain a line segment detection map, and determining the water ripples in the target image from that map, the embodiment of the application improves the accuracy of water ripple detection and provides an effective reference, based on the positions of the detected water ripples, for subsequent effect processing in the water surface area.
Referring to fig. 2, fig. 2 is another schematic flow chart of an image detection method according to an embodiment of the present disclosure. The specific process of the method can be as follows:
step 201, acquiring a target image, and converting the target image into a gray-scale image. For a detailed description of step 201, please refer to step 101, which is not described herein again.
And 202, performing directional filtering on the gray level image to obtain a filtering image of the target image. For a detailed description of step 202, please refer to step 102, which is not described herein again.
Step 203, performing line segment detection on the filter map to obtain a line segment detection map of the target image. For the detailed description of step 203, please refer to step 103, which is not described herein again.
And step 204, determining the water ripple in the target image according to the line segment detection graph. For a detailed description of step 204, please refer to step 104, which is not described herein again.
And step 205, performing target effect processing on the area to which the water ripple in the target image belongs.
In some embodiments, the performing target effect processing on the region to which the water ripple belongs in the target image includes:
carrying out object event detection on the region to which the water ripple in the target image belongs;
and according to the detected object event, carrying out target effect processing on the region to which the water ripple in the target image belongs.
In some embodiments, the performing, according to the detected object event, target effect processing on the region to which the water ripple belongs in the target image includes:
if the object event is that the object falls into the water surface, a water ring ripple effect is generated at the position of the object, which is located at the water ripple; or
If the object event is that the object floats on the water surface, generating a non-parallel moving ripple effect at the position of the object, which is located at the water ripple; or
And if the object event is that no object exists on the water surface, controlling the water ripple in the target image to move in parallel.
For example, after the water ripple of the target image is detected, the position of the water surface and the size of the water surface in the target image may be further determined, and the water surface or the object on the water surface may be subjected to corresponding effect processing based on the detected water ripple. For example, the water surface generates dynamic effect, and the display effect of the target image is increased.
For example, if the detected object event is an object falling into the water surface, a water-ring ripple effect is produced where the object is located in the water ripples. In daily life, when an object falls into a water surface, concentric water-ring ripples form on the water surface around the point of entry; the rings diffuse from the inside outward, growing larger and larger until they gradually disappear.
For example, if the detected object event is an object floating on the water surface, a non-parallel moving ripple effect is created where the object is located in the water ripples. An object event in which an object floats on the water surface may include an event in which the object floats and moves forward, such as a waterfowl, a duck, or a boat; as such an object advances on the water surface it forms non-parallel moving ripples, such as herringbone ripples. Herringbone ripples diffuse outward from the two sides of the object, elongate and disperse in the direction opposite to the object's direction of travel, and gradually disappear.
For example, if the detected object event is that there is no object on the water surface, the water ripples in the target image are controlled to move in parallel. For example, the line segments representing the water ripples may be controlled to move in parallel, such as from the front end of the water ripples toward the back end. Effects such as water drops and waves may also be added.
For example, if the detected object event is that no object is on the water surface, the water ripple in the target image can be controlled to move in a streamline manner according to a line segment curved path.
For example, the moving speed of the dynamic moving effect can be controlled according to different screen contents of the target image.
The effect of a water wave, as reflected in the image, is an offset of pixel points. The image is therefore processed much as in scaling: for each point of the output image, the corresponding pixel point in the original input image is calculated, and the color value can be obtained by interpolation. A transverse wave is one whose propagation direction is perpendicular to the vibration direction of its particles, as opposed to a longitudinal wave; a water wave is a superposition of transverse and longitudinal waves, so as it propagates each particle undergoes both horizontal and up-and-down motion, which together form an elliptical circular motion. In the embodiment of the application, when the ripple effect is formed, the offset parameters of different pixel points can be set according to the different ripple effects.
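As an illustrative sketch of the pixel-offset idea (the function name and the parameters `amp`, `wavelength`, and `phase` are invented; a production implementation would typically use bilinear interpolation, e.g. OpenCV's remap, rather than the nearest-neighbour sampling used here to stay dependency-free):

```python
import numpy as np

def ring_ripple(img, cx, cy, amp=4.0, wavelength=24.0, phase=0.0):
    """Concentric water-ring effect: for each output pixel, offset the
    sampling position radially by a sine of its distance to the centre,
    then sample the input image (nearest neighbour)."""
    h, w = img.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float32)
    dx, dy = xs - cx, ys - cy
    r = np.hypot(dx, dy) + 1e-6          # avoid division by zero at the centre
    off = amp * np.sin(2 * np.pi * r / wavelength + phase)
    sx = np.clip(np.rint(xs + off * dx / r), 0, w - 1).astype(int)
    sy = np.clip(np.rint(ys + off * dy / r), 0, h - 1).astype(int)
    return img[sy, sx]
```

Animating `phase` over successive frames makes the rings appear to propagate outward, matching the water-ring ripple effect described above.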
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
According to the embodiment of the application, a target image is obtained and converted into a grayscale image; directional filtering is then performed on the grayscale image to obtain a filter map of the target image; line segment detection is performed on the filter map to obtain a line segment detection map of the target image; the water ripples in the target image are determined according to the line segment detection map; and target effect processing is then performed on the region to which the water ripples in the target image belong. By determining the water ripples in this way and applying target effect processing to the region they belong to, the embodiment of the application improves the accuracy of water ripple detection, adds dynamic effects to the image picture, and enriches the image content.
In order to better implement the image detection method according to the embodiment of the present application, an embodiment of the present application further provides an image detection apparatus. Referring to fig. 3, fig. 3 is a schematic structural diagram of an image detection apparatus according to an embodiment of the present disclosure. The image detection apparatus 300 may include:
an acquisition unit 301, configured to acquire a target image and convert the target image into a grayscale image;
a filtering unit 302, configured to perform directional filtering on the grayscale map to obtain a filtered map of the target image;
a detecting unit 303, configured to perform line segment detection on the filter map to obtain a line segment detection map of the target image;
and the determining unit 304 is configured to determine the water ripples in the target image according to the line segment detection map.
In some embodiments, the filtering unit 302 is configured to:
carrying out Sobel operator edge detection on the gray level image so as to detect a transverse edge image and a longitudinal edge image of the gray level image;
and performing directional filtering according to the size relationship between the pixel mean value of the transverse edge image and the pixel mean value of the longitudinal edge image to obtain a filter graph of the target image.
In some embodiments, the filtering unit 302 is configured to retain, according to the comparison result, whichever of the transverse edge image and the longitudinal edge image has the larger pixel mean value to obtain the filter map of the target image, which specifically includes:
if the pixel mean value of the transverse edge image is larger than the pixel mean value of the longitudinal edge image, the transverse edge image is reserved, and the longitudinal edge image is filtered out, so that a filter graph only containing the transverse edge image is obtained; or
If the pixel mean value of the transverse edge image is smaller than the pixel mean value of the longitudinal edge image, the longitudinal edge image is reserved, and the transverse edge image is filtered out, so that a filter graph only containing the longitudinal edge image is obtained; or
And if the pixel mean value of the transverse edge image is equal to the pixel mean value of the longitudinal edge image, reserving the transverse edge image and the longitudinal edge image to obtain a filter graph containing the transverse edge image and the longitudinal edge image.
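The three-branch directional filtering rule above can be sketched as follows (numpy-only to stay self-contained; a real implementation would more likely use cv2.Sobel; mapping the transverse edge image to the y-direction Sobel response and the longitudinal edge image to the x-direction response is an assumption):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def sobel_response(img, k):
    """|cross-correlation| of img with a 3x3 kernel, zero-padded."""
    h, w = img.shape
    p = np.pad(img.astype(np.float32), 1)
    out = np.zeros((h, w), np.float32)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return np.abs(out)

def directional_filter(gray):
    """Keep whichever edge image (transverse vs. longitudinal) has the
    larger pixel mean; keep both, superposed, when they are equal."""
    gx = sobel_response(gray, SOBEL_X)  # longitudinal (vertical) edges
    gy = sobel_response(gray, SOBEL_Y)  # transverse (horizontal) edges
    if gy.mean() > gx.mean():
        return gy
    if gx.mean() > gy.mean():
        return gx
    return np.maximum(gx, gy)
```

Because water ripples on a roughly horizontal water surface produce mostly horizontal edges, this rule typically keeps the transverse edge image for such scenes.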
In some embodiments, the detecting unit 303 is configured to:
carrying out Hough line segment detection on the filter graph, and labeling all pixel points detected as line segments in the filter graph;
setting the pixel value of each labeled pixel point in the filter image as a first pixel value, and setting the pixel values of other pixel points which are not labeled in the filter image as a second pixel value to obtain a line segment detection image of the target image, wherein the first pixel value is larger than the second pixel value.
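A sketch of this labeling step follows (the segment list would come from a Hough transform on the filter map, e.g. OpenCV's HoughLinesP; the name `label_segments` and the endpoint tuple format are invented for illustration). The first pixel value is 255 and the second is 0, as in the example above:

```python
import numpy as np

def label_segments(shape, segments, fg=255, bg=0):
    """Rasterise detected line segments into a detection map: pixels on a
    segment get the first pixel value (fg), all others the second (bg)."""
    det = np.full(shape, bg, dtype=np.uint8)
    for x1, y1, x2, y2 in segments:
        n = int(max(abs(x2 - x1), abs(y2 - y1))) + 1  # enough samples per pixel
        xs = np.linspace(x1, x2, n).round().astype(int)
        ys = np.linspace(y1, y2, n).round().astype(int)
        det[ys, xs] = fg
    return det
```

The resulting 0/255 map is exactly the form assumed by the later block-mean screening, where block means are compared against the preset pixel threshold.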
In some embodiments, if the filter map of the target image is a filter map including the horizontal edge image and the vertical edge image, the detecting unit 303 is further configured to:
dividing the line segment detection graph into a plurality of first image blocks according to a first preset size and a first preset step length;
respectively calculating the x-direction pixel mean value and the y-direction pixel mean value of each first image block in the plurality of first image blocks;
determining a first target image block with the largest x-direction pixel mean value from the plurality of first image blocks, and determining a second target image block with the largest y-direction pixel mean value from the plurality of first image blocks;
comparing the x-direction pixel mean value of the first target image block with the y-direction pixel mean value of the second target image block;
selecting, as the target direction, the direction corresponding to whichever of the first target image block and the second target image block has the larger mean value;
and retaining, in the line segment detection map of the target image, the line segments whose direction corresponds to the target direction, to obtain an updated line segment detection map.
In some embodiments, the detecting unit 303, after comparing the magnitudes of the x-direction pixel mean value of the first target image block and the y-direction pixel mean value of the second target image block, is further configured to:
if the pixel mean value in the x direction of the first target image block is equal to the pixel mean value in the y direction of the second target image block, the line segment detection graph is divided into a plurality of second image blocks again according to a second preset size and a second preset step length, wherein the second preset size is smaller than the first preset size;
respectively calculating the x-direction pixel mean value and the y-direction pixel mean value of each second image block in the plurality of second image blocks;
determining a third target image block with the largest x-direction pixel mean value from the plurality of second image blocks, and determining a fourth target image block with the largest y-direction pixel mean value from the plurality of second image blocks;
comparing the x-direction pixel mean value of the third target image block with the y-direction pixel mean value of the fourth target image block;
selecting, as the target direction, the direction corresponding to whichever of the third target image block and the fourth target image block has the larger mean value;
and retaining, in the line segment detection map of the target image, the line segments whose direction corresponds to the target direction, to obtain an updated line segment detection map.
In some embodiments, the determining unit 304 is configured to:
dividing the line segment detection graph into a plurality of third image blocks according to a third preset size and a third preset step length;
and determining the water ripple in the target image according to the pixel mean values of the plurality of third image blocks.
In some embodiments, the determining unit 304 is configured to determine the water ripple in the target image according to the pixel mean of the plurality of third image blocks, and specifically includes:
calculating a pixel mean value of each of the plurality of third image blocks;
reserving all labeled pixel points in those of the plurality of third image blocks whose pixel mean value is greater than or equal to a preset pixel threshold value; and
removing the labels of all labeled pixel points in those of the plurality of third image blocks whose pixel mean value is smaller than the preset pixel threshold value;
traversing and comparing the magnitude relation between the pixel mean value of each of the plurality of third image blocks and the preset pixel threshold value, so as to update the labeled pixel points of the line segment detection map;
and determining all the marking pixel points in the updated line segment detection graph as the water ripples in the target image.
Referring to fig. 4, fig. 4 is another schematic structural diagram of an image detection apparatus according to an embodiment of the present disclosure. Fig. 4 differs from fig. 3 in that the image detection apparatus 400 may further include a processing unit 400.
The processing unit 400 is configured to perform target effect processing on a region to which the water ripple in the target image belongs.
In some embodiments, the processing unit 400 is specifically configured to:
carrying out object event detection on the region to which the water ripple in the target image belongs;
and according to the detected object event, carrying out target effect processing on the region to which the water ripple in the target image belongs.
In some embodiments, the processing unit 400 is configured to perform target effect processing on a region to which the water ripple in the target image belongs according to the detected object event, and specifically includes:
if the object event is that the object falls into the water surface, a water ring ripple effect is generated at the position of the object, which is located at the water ripple; or
If the object event is that the object floats on the water surface, generating a non-parallel moving ripple effect at the position of the object, which is located at the water ripple; or
And if the object event is that no object exists on the water surface, controlling the water ripple in the target image to move in parallel.
All the above technical solutions can be combined arbitrarily to form the optional embodiments of the present application, and are not described herein again.
It is to be understood that apparatus embodiments and method embodiments may correspond to one another and that similar descriptions may refer to method embodiments. To avoid repetition, further description is omitted here. Specifically, the apparatuses shown in fig. 3 and fig. 4 may execute the above-mentioned embodiment of the image detection method, and the foregoing and other operations and/or functions of each module in the apparatuses implement the corresponding processes of the above-mentioned embodiment of the method, which are not described herein again for brevity.
Correspondingly, the embodiment of the present application further provides a Computer device, where the Computer device may be a terminal or a server, and the terminal may be a smart phone, a tablet Computer, a notebook Computer, a touch screen, a smart television, an electronic drawing board, a PC (Personal Computer), a Personal Digital Assistant (PDA), an intelligent wearable device, and the like. The server can be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, and can also be a cloud server for providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud functions, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN, big data and artificial intelligence platform and the like. As shown in fig. 5, the computer device may include Radio Frequency (RF) circuitry 501, memory 502 including one or more computer-readable storage media, input unit 503, display unit 504, sensor 505, audio circuitry 506, Wireless Fidelity (WiFi) module 507, processor 508 including one or more processing cores, and power supply 509, among other components. Those skilled in the art will appreciate that the computer device configuration illustrated in FIG. 5 does not constitute a limitation of computer devices, and may include more or fewer components than those illustrated, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 501 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for receiving downlink information of a base station and then sending the received downlink information to the one or more processors 508 for processing; in addition, data relating to uplink is transmitted to the base station. In addition, the RF circuitry 501 may also communicate with networks and other devices via wireless communications.
The memory 502 may be used to store software programs and modules, and the processor 508 executes various functional applications and data processing by operating the software programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to use of the computer device, and the like.
The input unit 503 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The display unit 504 may be used to display information input by or provided to a user as well as various graphical user interfaces of the computer device, which may be made up of graphics, text, icons, video, and any combination thereof. The display unit 504 may include a display panel.
The computer device may also include at least one sensor 505, such as light sensors, motion sensors, and other sensors.
Audio circuitry 506, a speaker, and a microphone may provide an audio interface between a user and a computer device. The audio circuit 506 may transmit the electrical signal converted from the received audio data to a speaker, and convert the electrical signal into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 506 and converted into audio data, which is then processed by the audio data output processor 508 and then sent to, for example, another computer device via the RF circuit 501, or output to the memory 502 for further processing. The audio circuit 506 may also include an earbud jack to provide communication of peripheral headphones with the computer device.
WiFi is a short-range wireless transmission technology. Through the WiFi module 507, the computer device can help the user to receive and send e-mails, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although fig. 5 shows the WiFi module 507, it is understood that it is not an essential part of the computer device and may be omitted as needed without changing the essence of the invention.
The processor 508 is the control center of the computer device; it connects the various parts of the entire computer device using various interfaces and lines, and performs the various functions of the computer device and processes data by running or executing the software programs and/or modules stored in the memory 502 and calling the data stored in the memory 502, thereby monitoring the computer device as a whole.
The computer device also includes a power supply 509 (such as a battery) for powering the various components, which may preferably be logically connected to the processor 508 via a power management system that may be used to manage charging, discharging, and power consumption.
Although not shown, the computer device may further include a camera, a bluetooth module, etc., which will not be described herein. Specifically, in this embodiment, the processor 508 in the computer device loads the executable file corresponding to the process of one or more computer programs into the memory 502 according to the following instructions, and the processor 508 runs the computer programs stored in the memory 502, so as to implement various functions:
acquiring a target image, and converting the target image into a gray-scale image; performing directional filtering on the gray level image to obtain a filtering image of the target image; performing line segment detection on the filter graph to obtain a line segment detection graph of the target image; and determining the water ripple in the target image according to the line segment detection graph.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, the present application provides a computer-readable storage medium, in which a plurality of computer programs are stored, and the computer programs can be loaded by a processor to execute the steps in any one of the image detection methods provided by the embodiments of the present application.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
Wherein the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
Since the computer program stored in the storage medium can execute the steps in any image detection method provided in the embodiments of the present application, the beneficial effects that can be achieved by any image detection method provided in the embodiments of the present application can be achieved, and detailed descriptions are omitted here for the foregoing embodiments.
The image detection method, the image detection device, the storage medium, and the computer apparatus provided in the embodiments of the present application are described in detail above, and a specific example is applied in the present application to explain the principle and the implementation of the present application, and the description of the above embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (14)

1. An image detection method, characterized in that the method comprises:
acquiring a target image, and converting the target image into a gray-scale image;
performing directional filtering on the gray level image to obtain a filtering image of the target image;
performing line segment detection on the filter graph to obtain a line segment detection graph of the target image;
and determining the water ripple in the target image according to the line segment detection graph.
2. The image detection method of claim 1, wherein the performing directional filtering on the gray-scale map to obtain a filtered map of the target image comprises:
carrying out Sobel operator edge detection on the gray level image so as to detect a transverse edge image and a longitudinal edge image of the gray level image;
and performing directional filtering according to the size relationship between the pixel mean value of the transverse edge image and the pixel mean value of the longitudinal edge image to obtain a filter graph of the target image.
3. The image detection method according to claim 2, wherein retaining, according to the comparison result, whichever of the transverse edge image and the longitudinal edge image has the larger pixel mean value to obtain the filter map of the target image comprises:
if the pixel mean value of the transverse edge image is larger than the pixel mean value of the longitudinal edge image, the transverse edge image is reserved, and the longitudinal edge image is filtered out, so that a filter graph only containing the transverse edge image is obtained; or
If the pixel mean value of the transverse edge image is smaller than the pixel mean value of the longitudinal edge image, the longitudinal edge image is reserved, and the transverse edge image is filtered out, so that a filter graph only containing the longitudinal edge image is obtained; or
And if the pixel mean value of the transverse edge image is equal to the pixel mean value of the longitudinal edge image, reserving the transverse edge image and the longitudinal edge image to obtain a filter graph containing the transverse edge image and the longitudinal edge image.
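The three branches of claim 3 reduce to a comparison of pixel means. A sketch, assuming that "a filter map containing both edge images" is formed by summing the two responses (the combination rule for the equal-mean case is not specified in the claim):

```python
import numpy as np

def directional_filter(horiz_edges, vert_edges):
    """Keep whichever edge image has the larger pixel mean;
    keep both (summed, an assumption) when the means are equal."""
    mh, mv = horiz_edges.mean(), vert_edges.mean()
    if mh > mv:
        return horiz_edges          # only transverse edges survive
    if mh < mv:
        return vert_edges           # only longitudinal edges survive
    return horiz_edges + vert_edges  # tie: keep both

h = np.full((4, 4), 10.0)
v = np.full((4, 4), 2.0)
```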
4. The image detection method of claim 3, wherein the performing line segment detection on the filter map to obtain a line segment detection map of the target image comprises:
performing Hough line segment detection on the filter map, and labeling all pixel points detected as belonging to line segments in the filter map; and
setting the pixel value of each labeled pixel point in the filter map to a first pixel value, and setting the pixel values of the other, unlabeled pixel points in the filter map to a second pixel value, to obtain the line segment detection map of the target image, wherein the first pixel value is greater than the second pixel value.
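Claim 4's second step is a binarization of the Hough result. Assuming the line-segment pixels have already been detected (e.g. via `cv2.HoughLinesP`, with segments rasterized to pixel coordinates), the labeling itself can be sketched as:

```python
import numpy as np

def label_line_pixels(shape, line_pixels, first=255, second=0):
    """Build a line segment detection map: labeled pixels get the
    larger first pixel value, all other pixels the second value."""
    out = np.full(shape, second, dtype=np.uint8)
    for r, c in line_pixels:
        out[r, c] = first
    return out

# two hypothetical line-segment pixels on a 3x3 map
seg_map = label_line_pixels((3, 3), [(0, 0), (1, 1)])
```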
5. The image detection method of claim 4, wherein if the filter map of the target image contains both the transverse edge image and the longitudinal edge image, after obtaining the line segment detection map of the target image, the method further comprises:
dividing the line segment detection map into a plurality of first image blocks according to a first preset size and a first preset step length;
calculating the x-direction pixel mean value and the y-direction pixel mean value of each of the plurality of first image blocks;
determining, from the plurality of first image blocks, a first target image block with the largest x-direction pixel mean value and a second target image block with the largest y-direction pixel mean value;
comparing the x-direction pixel mean value of the first target image block with the y-direction pixel mean value of the second target image block;
selecting, as a target direction, the direction corresponding to whichever of the first target image block and the second target image block has the larger mean value; and
retaining, in the line segment detection map of the target image, the line segments whose direction corresponds to the target direction, to obtain an updated line segment detection map.
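Claims 5 and 6 rely on sliding-window block division and a comparison of per-direction block means. A sketch, assuming the x- and y-oriented line segments have been separated into two maps beforehand (the claim does not specify how the per-direction means are computed); ties fall to 'x' here, whereas claim 6 resolves them by re-dividing with a smaller block size:

```python
import numpy as np

def split_blocks(img, size, step):
    """Slide a size x size window with the given step length;
    return ((row, col), block) pairs covering the map."""
    h, w = img.shape
    return [((r, c), img[r:r + size, c:c + size])
            for r in range(0, h - size + 1, step)
            for c in range(0, w - size + 1, step)]

def dominant_direction(x_map, y_map, size, step):
    """Compare the largest block mean of the x-oriented segment map
    against that of the y-oriented one (claim 5's comparison)."""
    best_x = max(b.mean() for _, b in split_blocks(x_map, size, step))
    best_y = max(b.mean() for _, b in split_blocks(y_map, size, step))
    return 'x' if best_x >= best_y else 'y'

# a dense x-oriented block versus a single stray y-oriented pixel
x_map = np.zeros((8, 8)); x_map[0:4, 0:4] = 255.0
y_map = np.zeros((8, 8)); y_map[0, 0] = 255.0
```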
6. The image detection method of claim 5, wherein after the comparing of the x-direction pixel mean value of the first target image block with the y-direction pixel mean value of the second target image block, the method further comprises:
if the x-direction pixel mean value of the first target image block is equal to the y-direction pixel mean value of the second target image block, re-dividing the line segment detection map into a plurality of second image blocks according to a second preset size and a second preset step length, wherein the second preset size is smaller than the first preset size;
calculating the x-direction pixel mean value and the y-direction pixel mean value of each of the plurality of second image blocks;
determining, from the plurality of second image blocks, a third target image block with the largest x-direction pixel mean value and a fourth target image block with the largest y-direction pixel mean value;
comparing the x-direction pixel mean value of the third target image block with the y-direction pixel mean value of the fourth target image block;
selecting, as a target direction, the direction corresponding to whichever of the third target image block and the fourth target image block has the larger mean value; and
retaining, in the line segment detection map of the target image, the line segments whose direction corresponds to the target direction, to obtain an updated line segment detection map.
7. The image detection method of any one of claims 4 to 6, wherein the determining the water ripple in the target image according to the line segment detection map comprises:
dividing the line segment detection map into a plurality of third image blocks according to a third preset size and a third preset step length; and
determining the water ripple in the target image according to the pixel mean values of the plurality of third image blocks.
8. The image detection method of claim 7, wherein the determining the water ripple in the target image according to the pixel mean values of the plurality of third image blocks comprises:
calculating a pixel mean value of each of the plurality of third image blocks;
retaining all labeled pixel points in those third image blocks whose pixel mean value is greater than or equal to a preset pixel threshold; and
de-labeling all labeled pixel points in those third image blocks whose pixel mean value is less than the preset pixel threshold;
traversing and comparing the pixel mean value of each of the plurality of third image blocks with the preset pixel threshold, to update the labeled pixel points of the line segment detection map; and
determining all labeled pixel points in the updated line segment detection map as the water ripple in the target image.
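Claim 8's pruning step keeps labeled pixels only in blocks whose mean reaches the threshold. A sketch with non-overlapping blocks (with a step smaller than the block size the outcome would depend on traversal order, which the claim leaves open):

```python
import numpy as np

def prune_blocks(seg_map, size, step, thresh):
    """De-label (zero out) every pixel inside any block whose pixel
    mean falls below the threshold; surviving labeled pixels are
    taken as the water ripple."""
    out = seg_map.copy()
    h, w = out.shape
    for r in range(0, h - size + 1, step):
        for c in range(0, w - size + 1, step):
            block = out[r:r + size, c:c + size]  # view into out
            if block.mean() < thresh:
                block[...] = 0
    return out

# one dense quadrant (kept) and one isolated pixel (pruned)
seg = np.zeros((4, 4))
seg[:2, :2] = 255.0
seg[2, 2] = 255.0
pruned = prune_blocks(seg, size=2, step=2, thresh=100.0)
```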
9. The image detection method of claim 1, wherein after the determining the water ripple in the target image, the method further comprises:
performing target effect processing on the region to which the water ripple in the target image belongs.
10. The image detection method of claim 9, wherein the performing target effect processing on the region to which the water ripple in the target image belongs comprises:
performing object event detection on the region to which the water ripple in the target image belongs; and
performing target effect processing on that region according to the detected object event.
11. The image detection method of claim 10, wherein the performing target effect processing on the region to which the water ripple in the target image belongs according to the detected object event comprises:
if the object event is an object falling into the water surface, generating a water-ring ripple effect at the position where the object meets the water ripple; or
if the object event is an object floating on the water surface, generating a non-parallel moving ripple effect at the position where the object meets the water ripple; or
if the object event is that no object is present on the water surface, controlling the water ripple in the target image to move in parallel.
12. An image detection apparatus, characterized in that the apparatus comprises:
an acquisition unit, configured to acquire a target image and convert the target image into a gray-scale image;
a filtering unit, configured to perform directional filtering on the gray-scale image to obtain a filter map of the target image;
a detection unit, configured to perform line segment detection on the filter map to obtain a line segment detection map of the target image; and
a determining unit, configured to determine the water ripple in the target image according to the line segment detection map.
13. A computer-readable storage medium, characterized in that it stores a computer program adapted to be loaded by a processor to perform the steps of the image detection method according to any one of claims 1 to 11.
14. A computer device, characterized in that the computer device comprises a processor and a memory, the memory storing a computer program, and the processor being configured to execute the steps of the image detection method according to any one of claims 1 to 11 by calling the computer program stored in the memory.
CN202110490641.XA 2021-05-06 2021-05-06 Image detection method, device, storage medium and computer equipment Active CN113240595B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110490641.XA CN113240595B (en) 2021-05-06 2021-05-06 Image detection method, device, storage medium and computer equipment


Publications (2)

Publication Number Publication Date
CN113240595A true CN113240595A (en) 2021-08-10
CN113240595B CN113240595B (en) 2023-09-08

Family

ID=77132108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110490641.XA Active CN113240595B (en) 2021-05-06 2021-05-06 Image detection method, device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN113240595B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09179989A (en) * 1995-12-26 1997-07-11 Nissan Motor Co Ltd Road recognition device for vehicle
JP2005141498A (en) * 2003-11-06 2005-06-02 Fuji Photo Film Co Ltd Method, device, and program for edge detection
JP2008171455A (en) * 2008-03-31 2008-07-24 Fujifilm Corp Method, device, and program for edge detection
CN110651299A (en) * 2018-02-28 2020-01-03 深圳市大疆创新科技有限公司 Image water ripple detection method and device, unmanned aerial vehicle and storage device
CN111080661A (en) * 2019-12-09 2020-04-28 Oppo广东移动通信有限公司 Image-based line detection method and device and electronic equipment
CN111681256A (en) * 2020-05-07 2020-09-18 浙江大华技术股份有限公司 Image edge detection method and device, computer equipment and readable storage medium
WO2021004180A1 (en) * 2019-07-09 2021-01-14 平安科技(深圳)有限公司 Texture feature extraction method, texture feature extraction apparatus, and terminal device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
楼吉林: "Research on Automatic Road Network Extraction Technology Based on Color and Texture Features", China Master's Theses Full-text Database, Information Science and Technology, pages 1-65 *
田伟; 沈浩; 李晓; 师磊磊: "Research on corridor surface crack detection technology based on image processing", Electronic Design Engineering, no. 05, pages 148-151 *

Also Published As

Publication number Publication date
CN113240595B (en) 2023-09-08

Similar Documents

Publication Publication Date Title
CN110059623B (en) Method and apparatus for generating information
CN113642673B (en) Image generation method, device, equipment and storage medium
CN113284142B (en) Image detection method, image detection device, computer-readable storage medium and computer equipment
US20210089913A1 (en) Information processing method and apparatus, and storage medium
CN113034523A (en) Image processing method, image processing device, storage medium and computer equipment
CN113658339B (en) Contour line-based three-dimensional entity generation method and device
CN110211017B (en) Image processing method and device and electronic equipment
CN114842120A (en) Image rendering processing method, device, equipment and medium
CN108563982B (en) Method and apparatus for detecting image
TW202219822A (en) Character detection method, electronic equipment and computer-readable storage medium
CN113240595B (en) Image detection method, device, storage medium and computer equipment
CN111797822A (en) Character object evaluation method and device and electronic equipment
EP4318314A1 (en) Image acquisition model training method and apparatus, image detection method and apparatus, and device
CN111461965A (en) Picture processing method and device, electronic equipment and computer readable medium
CN113762266B (en) Target detection method, device, electronic equipment and computer readable medium
CN114742934A (en) Image rendering method and device, readable medium and electronic equipment
CN111680754B (en) Image classification method, device, electronic equipment and computer readable storage medium
CN110503189B (en) Data processing method and device
CN116109744A (en) Fluff rendering method, device, equipment and medium
CN114612531A (en) Image processing method and device, electronic equipment and storage medium
CN111862015A (en) Image quality grade determining method and device and electronic equipment
CN111784710B (en) Image processing method, device, electronic equipment and medium
CN116993637B (en) Image data processing method, device, equipment and medium for lane line detection
CN113538537B (en) Image registration and model training method, device, equipment, server and medium
CN111405003B (en) Resource loading method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40049947

Country of ref document: HK

GR01 Patent grant