CN115035316A - Target area image identification method and device and computer equipment - Google Patents



Publication number
CN115035316A
CN202210763993.2A (application) · CN115035316A (publication)
Authority
CN
China
Prior art keywords
image
target area
line segment
line
edge detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210763993.2A
Other languages
Chinese (zh)
Inventor
林晓聪
陈鸿
李治
赵山河
邬稳
梁毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Merchants Union Consumer Finance Co Ltd
Original Assignee
Merchants Union Consumer Finance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Merchants Union Consumer Finance Co Ltd filed Critical Merchants Union Consumer Finance Co Ltd
Priority to CN202210763993.2A priority Critical patent/CN115035316A/en
Publication of CN115035316A publication Critical patent/CN115035316A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Abstract

The invention relates to the technical field of image identification processing, and particularly discloses a target area image identification method, an apparatus, and computer equipment. The method comprises the following steps: carrying out edge detection processing on an image to be recognized to obtain an edge detection image of the image to be recognized; detecting line segments in the edge detection image; performing interference line segment filtering on the line segments in the edge detection image to obtain a filtered line segment image; and performing line segment density clustering on the line segment image to obtain a clustering range, and identifying a target area in the line segment image according to the clustering range. By performing edge detection on the image to be identified, the amount of data to be processed is reduced; interference line segments and noise points are removed through line detection, line segment filtering and line segment density clustering to determine the target area, so that the accuracy of target area identification is improved and the efficiency of garbage area identification is greatly improved.

Description

Target area image identification method and device and computer equipment
Technical Field
The present disclosure relates to the field of image recognition processing technologies, and in particular, to a target area image recognition method, an apparatus, and a computer device.
Background
With the development of computer intelligence algorithms, image recognition techniques are widely used in many scenarios. In the field of environmental sanitation in particular, image recognition reduces labor costs in waste classification and waste volume estimation. In the garbage collection and treatment stage, the volume of garbage is often calculated from the area the garbage occupies in the garbage can, so identifying the garbage area is an extremely important step in this process.
Conventionally, the garbage area is identified either manually, for example by sanitation company staff visually estimating the garbage in the garbage can, or by a machine learning algorithm. Because the garbage in the garbage can varies widely in type and quantity, manual identification is labor-intensive and less accurate, while existing machine learning algorithms require many learning features and are limited by recognition cost and efficiency.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a target area image recognition method, apparatus, computer device, storage medium, and computer program product for solving the above technical problems.
In a first aspect, the present disclosure provides a target area image recognition method. The method comprises the following steps:
carrying out edge detection processing on an image to be recognized to obtain an edge detection image of the image to be recognized;
detecting line segments in the edge detection image;
performing interference line segment filtering on the line segments in the edge detection image to obtain filtered line segment images;
and performing line segment density clustering on the line segment images to obtain a clustering range, and identifying a target area in the line segment images according to the clustering range.
In one embodiment, the image to be recognized is obtained by:
acquiring an initial image;
and carrying out segmentation preprocessing on the initial image to obtain the image to be identified.
In one embodiment, the performing interference line segment filtering on the line segment in the edge detection image, and acquiring a filtered line segment image includes:
calculating the line length distribution of the line segments, and determining a filtering threshold value according to the line length distribution;
and filtering the interference line segment according to the threshold value to obtain a filtered line segment image.
In one embodiment, the method further comprises:
determining the outer contour of the target area according to the target area;
and calculating the edge pixel scale of the target area according to the outer contour of the target area.
In one embodiment, the calculating the edge pixel scale of the target region according to the outer contour of the target region includes:
determining a minimum circumscribed rectangle and a maximum inscribed rectangle based on the outer contour of the target area, wherein the lengths of the minimum circumscribed rectangle and the maximum inscribed rectangle are parallel to a set direction respectively;
determining a vertical distance between the length of the minimum bounding rectangle and the length of the adjacent maximum inscribed rectangle;
acquiring two intersection points of a straight line which is parallel to the set direction and passes through the middle point of the vertical distance and the outer contour of the target area;
and calculating the straight-line distance between the two intersection points as the edge pixel scale.
In a second aspect, the present disclosure also provides a target area image recognition apparatus. The device comprises:
the edge detection module is used for carrying out edge detection processing on an image to be identified so as to obtain an edge detection image of the image to be identified;
the line segment detection module is used for detecting a line segment in the edge detection image;
the line segment filtering module is used for carrying out interference line segment filtering on the line segments in the edge detection image to obtain filtered line segment images;
and the line segment clustering module is used for performing line segment density clustering on the basis of the line segment images to obtain a clustering range and identifying a target area in the line segment images according to the clustering range.
In one embodiment, the apparatus further includes a preprocessing module, through which the image to be recognized is obtained, the preprocessing module including:
an initial unit for acquiring an initial image;
and the segmentation unit is used for carrying out segmentation preprocessing on the initial image to obtain the image to be identified.
In one embodiment, the line segment filtering module comprises:
the threshold unit is used for calculating the line length distribution of the line segment and determining a filtering threshold according to the line length distribution;
and the filtering unit is used for filtering the interference line segment according to the threshold value to obtain a filtered line segment image.
In one embodiment, the apparatus further comprises:
the outer contour module is used for determining the outer contour of the target area according to the target area;
and the edge pixel scale module is used for calculating the edge pixel scale of the target area according to the outer contour of the target area.
In one embodiment, the edge pixel scale module comprises:
the rectangle unit is used for determining a minimum circumscribed rectangle and a maximum inscribed rectangle based on the outer contour of the target area, and the lengths of the minimum circumscribed rectangle and the maximum inscribed rectangle are parallel to the set direction respectively;
a vertical distance unit, configured to determine a vertical distance between a length of the minimum circumscribed rectangle and a length of the adjacent maximum inscribed rectangle;
the straight line intersection unit is used for acquiring two intersection points of a straight line which is parallel to the set direction and passes through the middle point of the vertical distance and the outer contour of the target area;
and the calculation unit is used for calculating the straight-line distance between the two intersection points as the edge pixel dimension.
In a third aspect, the present disclosure also provides a computer device. The computer device comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the steps of the target area image identification method when executing the computer program.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium. The computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the above-mentioned target area image recognition method.
In a fifth aspect, the present disclosure also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the above-described target area image recognition method.
The target area image identification method, the target area image identification device, the computer equipment, the storage medium and the computer program product at least have the following beneficial effects:
according to the method, the edge detection is carried out on the image to be recognized, the processing data amount is reduced, the interference line segments and noise points are removed through linear detection, line segment filtering and line segment density clustering to determine the target region, and the accuracy of target region recognition is improved; on the other hand, the image to be recognized can be conveniently and rapidly acquired based on a pure monocular vision technology, a target region of garbage is obtained according to messy garbage in the garbage can, and the garbage region recognition efficiency is greatly improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure or in the conventional technologies, the drawings needed for describing the embodiments or the conventional technologies are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present disclosure; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of an exemplary embodiment of a target area image recognition method;
FIG. 2 is a schematic flow chart diagram illustrating a method for identifying an image of a target area in one embodiment;
FIG. 3 is a diagram illustrating an image to be recognized according to an embodiment;
FIG. 4 is a diagram of an edge detection image in one embodiment;
FIG. 5 is a diagram illustrating detection of line segments in an edge detection image, according to an embodiment;
FIG. 6 is a schematic diagram of an image of filtered line segments in one embodiment;
FIG. 7 is a schematic illustration of a target area in one embodiment;
FIG. 8 is a flowchart illustrating a method for identifying an image of a target area according to one embodiment;
FIG. 9 is a flowchart illustrating a method for identifying an image of a target area according to one embodiment;
FIG. 10 is a flowchart illustrating a method for identifying an image of a target area according to one embodiment;
FIG. 11 is a flowchart illustrating a method for identifying an image of a target area according to one embodiment;
FIG. 12 is a diagram illustrating the calculation of edge pixel dimensions in one embodiment;
FIG. 13 is a block diagram of an embodiment of a target area image recognition apparatus;
FIG. 14 is a block diagram of a target area image recognition apparatus according to an embodiment;
FIG. 15 is a block diagram of an embodiment of a target area image recognition apparatus;
FIG. 16 is a block diagram showing the construction of a target area image recognition apparatus according to an embodiment;
FIG. 17 is a block diagram of an embodiment of a target area image recognition apparatus;
FIG. 18 is a block diagram showing an internal configuration of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terminology used herein in the description of the disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure.
The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded. For example, the use of the terms first, second, etc. are used to denote names, but not to denote any particular order.
As used herein, the singular forms "a", "an" and "the" may include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises/comprising," "includes" or "including," or "having," and the like, specify the presence of stated features, integers, steps, operations, components, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, components, parts, or combinations thereof. Also, in this specification, the term "and/or" includes any and all combinations of the associated listed items.
The target area image identification method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. The terminal 102 is configured or connected with the image capturing device 106 (the image capturing device 106 may also be a part of the terminal 102), and the terminal 102 may process the trash can image captured by the image capturing device 106, identify the trash area, and display the trash area through a display device on the terminal 102. The terminal 102 may also communicate with the server 104 through a network, and the terminal 102 may send the trash can image collected by the image collection device 106 to the server 104 for processing, identify a trash area, and receive an identification result transmitted by the server 104. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104, or may be located on the cloud or other network server. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices; the portable wearable devices may be smart watches, smart bands, and head-mounted devices. The server 104 may be implemented as a stand-alone server or a server cluster comprised of multiple servers.
In some embodiments of the present disclosure, as shown in fig. 2, a method for identifying an image of a target area is provided, which is described by taking the method as an example applied to the terminal in fig. 1, and includes the following steps:
step S10: and carrying out edge detection processing on the image to be recognized to obtain an edge detection image of the image to be recognized.
Specifically, edge detection in image processing refers to identifying points in an image where the brightness changes significantly, that is, the edge lines of objects in the image. The image to be recognized may be subjected to edge detection processing by using the Canny edge detection algorithm (Canny edge detector); in some embodiments, the HED (Holistically-Nested Edge Detection) edge detection algorithm or the pixel difference network (PiDiNet) edge detection algorithm may also be used. In this embodiment, taking garbage area identification in a garbage can as an example, the image to be identified at least includes the top edge of the garbage can and the garbage in the garbage can. The image to be recognized can be seen in fig. 3, and the edge detection image obtained after the edge detection processing can be seen in fig. 4. The outer ring contour line shown in fig. 4 is the upper edge of the garbage can, and the lines inside it are garbage edge lines.
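As a rough, hedged illustration of this step (deliberately simpler than Canny and not the patent's implementation), edge detection can be sketched as thresholding the Sobel gradient magnitude of a grayscale image; a real pipeline would typically call an off-the-shelf implementation such as OpenCV's cv2.Canny instead:

```python
import numpy as np

def sobel_edge_map(gray, threshold=100.0):
    """Simplified edge detector: threshold the Sobel gradient magnitude.

    A stand-in sketch for the Canny edge detection step; a real pipeline
    would typically call cv2.Canny(gray, low, high) instead.
    """
    gray = gray.astype(np.float64)
    # 3x3 Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8) * 255

# A synthetic image with a bright square on a dark background:
img = np.zeros((20, 20))
img[5:15, 5:15] = 200
edges = sobel_edge_map(img)
```

Only the boundary of the bright square survives the thresholding, mirroring how the edge detection image keeps object outlines while discarding flat regions.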
Step S20: and detecting line segments in the edge detection image.
Specifically, compared with the image to be recognized, the edge detection image has a greatly reduced data amount, and the line segments are detected on this basis. Specifically, the FLD (Fast Line Detector) line detection algorithm may be used to identify line segments in the edge detection image. In some embodiments, the LSD (Line Segment Detector) line detection algorithm or the LSM line detection algorithm may also be used. The line segments identified from the edge detection image in this embodiment can be seen in fig. 5.
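As a deliberately simplified stand-in for a real line segment detector such as FLD or LSD (whose internals are well beyond a sketch), the following toy detector extracts only maximal horizontal runs of edge pixels from a binary edge map; the function name and minimum-length parameter are illustrative, not from the patent:

```python
import numpy as np

def horizontal_runs(edge_map, min_len=3):
    """Extract maximal horizontal runs of edge pixels as line segments.

    A simplified stand-in for a real line segment detector (FLD/LSD);
    it only finds axis-aligned horizontal segments.
    Returns segments as (x1, y, x2, y) with x2 inclusive.
    """
    segments = []
    h, w = edge_map.shape
    for y in range(h):
        x = 0
        while x < w:
            if edge_map[y, x]:
                start = x
                while x < w and edge_map[y, x]:
                    x += 1
                if x - start >= min_len:
                    segments.append((start, y, x - 1, y))
            else:
                x += 1
    return segments

edges = np.zeros((5, 10), dtype=np.uint8)
edges[2, 1:8] = 1          # a 7-pixel horizontal edge run
edges[4, 0:2] = 1          # too short to count as a segment
segs = horizontal_runs(edges, min_len=3)
```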
Step S30: and carrying out interference line segment filtering on the line segments in the edge detection image to obtain a filtered line segment image.
Specifically, the line segments in the edge detection image are filtered to remove interference line segments. An interference line segment is a line segment belonging to a non-target area (i.e., a non-garbage area), such as an edge line segment of the garbage can. The filtered line segment image can be seen in fig. 6.
Step S40: and performing line segment density clustering on the line segment images to obtain a clustering range, and identifying a target area in the line segment images according to the clustering range.
Specifically, line segment density clustering refers to density clustering based on the density of line segments in the line segment image. Density clustering identifies the core objects among the line segments, finds the samples reachable from each core object, and generates clusters until all core objects have been visited; line segments not within the range of any core object are marked as noise points and removed (for example, residual line segments along the upper edge of the trash can). This yields one or more clustering ranges, and the largest clustering range is taken as the target area (i.e., the garbage area). For example, the clustering range can be obtained with the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) density clustering algorithm. In some embodiments, the OPTICS (Ordering Points To Identify the Clustering Structure) density clustering algorithm or the MDCA (Maximum Density Clustering Application) density clustering algorithm may also be used. The image of the clustering range obtained in this embodiment can be seen in fig. 7.
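The clustering step can be sketched with a minimal DBSCAN-style routine over line segment midpoints. This is an illustrative toy, not the patent's implementation: the parameters eps and min_pts follow DBSCAN convention, and a production pipeline would more likely use a library implementation such as sklearn.cluster.DBSCAN.

```python
import math

def dbscan_segments(midpoints, eps=2.0, min_pts=3):
    """Minimal DBSCAN-style density clustering over segment midpoints.

    Returns a label per point: cluster ids 0..k-1, or -1 for noise.
    """
    n = len(midpoints)

    def neighbors(i):
        xi, yi = midpoints[i]
        return [j for j in range(n)
                if math.hypot(midpoints[j][0] - xi, midpoints[j][1] - yi) <= eps]

    labels = [None] * n
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:          # not a core point
            labels[i] = -1               # provisionally noise
            continue
        labels[i] = cluster              # start a new cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster      # border point reached from a core
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:   # j is also a core point: expand
                queue.extend(j_nbrs)
        cluster += 1
    return labels

# Two dense groups of midpoints plus one isolated noise point:
pts = [(0, 0), (1, 0), (0, 1), (1, 1),
       (10, 10), (11, 10), (10, 11), (11, 11),
       (30, 30)]
labels = dbscan_segments(pts, eps=2.0, min_pts=3)
```

Taking the cluster whose points span the largest range then gives the candidate target area, as described above; the isolated point is labeled -1 and discarded as noise.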
In the target area image identification method, performing edge detection on the image to be identified reduces the amount of data to be processed, and removing interference line segments and noise points through line detection, line segment filtering and line segment density clustering to determine the target area improves the accuracy of target area identification; on the other hand, the image to be recognized can be conveniently and rapidly acquired based on a pure monocular vision technique, and the target region of the garbage is obtained from the cluttered garbage in the garbage can, greatly improving the efficiency of garbage region recognition.
In some embodiments of the present disclosure, as shown in fig. 8, the image to be recognized is obtained by:
step A10: acquiring an initial image;
step A20: and carrying out segmentation preprocessing on the initial image to obtain the image to be identified.
In particular, the initial image may be acquired using a pure monocular vision technique (e.g., a monocular camera). The initial image may be an image obtained by shooting the trash can to be identified with the terminal, and it at least contains the upper edge of the garbage can and the garbage area to be identified. The acquired initial image is then subjected to segmentation preprocessing, which separates the background region from the effective region of the initial image and keeps only the effective region; for example, the background region may be segmented out by setting its pixels to a specified color (e.g., black), yielding the image to be recognized shown in fig. 3. Referring to fig. 7, the image to be recognized includes an effective area and a black background, and the effective area contains the complete target area and part of the trash can.
In some embodiments, the effective area may be bounded by a preset frame, such as a rectangular or trapezoidal frame. The preset frame is displayed during initial image capture and indicates that the target area should be framed within it when the image is captured. By displaying the preset frame, the capture angle, distance and the like can also be guided.
In this embodiment, segmentation preprocessing of the initial image removes the background area from the image to be recognized, which greatly reduces the amount of data to be processed, reduces the interference of the background with target area identification, and improves recognition speed and accuracy.
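A minimal sketch of the segmentation preprocessing, assuming for illustration that the effective area is a simple axis-aligned rectangular frame (the rectangle stands in for whatever preset frame the capture UI defines; the function name is invented for the example):

```python
import numpy as np

def mask_background(image, top, bottom, left, right):
    """Keep only a rectangular effective area; set the background to black.

    A sketch of the segmentation preprocessing described above: pixels
    outside the frame are set to 0 (black), pixels inside are kept.
    """
    out = np.zeros_like(image)
    out[top:bottom, left:right] = image[top:bottom, left:right]
    return out

frame = np.full((8, 8), 7, dtype=np.uint8)   # dummy "initial image"
roi = mask_background(frame, 2, 6, 2, 6)
```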
In some embodiments of the present disclosure, as shown in fig. 9, the step S30 includes:
step S32: and calculating the line length distribution of the line segments, and determining a filtering threshold value according to the line length distribution.
Specifically, the line length distribution of the line segments in the edge detection image is calculated: the line lengths of all identified line segments are counted and arranged in order of length, for example by means of a distribution profile. A filtering threshold is then selected according to the line length distribution. The filtering threshold may be selected from the line lengths themselves (for example the longest, the shortest, the median, the mean, or a quartile of the line lengths). In some embodiments, the filtering threshold may instead be selected based on the frequency of each line length.
Step S34: and filtering the interference line segment according to the threshold value to obtain a filtered line segment image.
In particular, the line segments are filtered according to the selected filtering threshold. When the filtering threshold is a line length, line segments longer than the threshold are removed (for example, edge line segments along the garbage can and edge line segments of the effective area), and line segments with a length less than or equal to the threshold are retained. In some embodiments, the filtering threshold is a line length frequency: line segments whose line length frequency is smaller than the threshold are removed, and the rest are retained. In some embodiments, a plurality of filtering thresholds can be set, for example both a line length and a frequency, and combined filtering is performed through the plurality of thresholds.
In this embodiment, the interference line segments in the edge detection image are filtered, and the filtering threshold is set based on the line length distribution of the segments, so that interference line segments in non-target areas can be filtered out rapidly, improving the identification accuracy of the target area.
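The threshold-based filtering of steps S32-S34 can be sketched as follows, assuming for illustration that the threshold is taken as a quantile of the line length distribution (the longest/shortest/median/mean variants mentioned above would only change one line); segments are assumed to be (x1, y1, x2, y2) tuples:

```python
import math

def filter_segments_by_length(segments, quantile=0.75):
    """Drop interference segments longer than a distribution-based threshold.

    Sketch only: the quantile-based threshold is one of the choices the
    text mentions; segments at or below the threshold length are kept.
    """
    lengths = sorted(
        math.hypot(x2 - x1, y2 - y1) for (x1, y1, x2, y2) in segments
    )
    # Threshold at the given quantile of the line length distribution.
    idx = min(int(quantile * (len(lengths) - 1)), len(lengths) - 1)
    threshold = lengths[idx]
    return [s for s in segments
            if math.hypot(s[2] - s[0], s[3] - s[1]) <= threshold]

# Three short garbage-edge segments and one long can-edge segment:
segments = [(0, 0, 1, 0), (0, 0, 2, 0), (0, 0, 3, 0), (0, 0, 100, 0)]
kept = filter_segments_by_length(segments, quantile=0.5)
```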
In some embodiments of the present disclosure, as shown in fig. 10, the method further comprises:
step S50: and determining the outer contour of the target area according to the target area.
Step S60: and calculating the edge pixel scale of the target area according to the outer contour of the target area.
Specifically, convex hull detection can be performed on the target area to obtain the outer contour of the target area. The outer contour of the target area is often irregular rather than a standard figure (e.g., square, rectangle, trapezoid). A standard line segment length representing the edge pixel scale of the target area is calculated from this irregular outer contour. The edge pixel scale characterizes the edge size of the target region in the captured image to be identified.
In this embodiment, determining the outer contour of the target area further refines the identification of the target area and its edge, and calculating the edge pixel scale converts the irregular outer contour into a standard line segment length representing the edge pixel scale of the target area. This facilitates data standardization and further calculation based on the target area, such as calculating the area or volume of the target area according to the edge pixel scale and the standard pixel scale of the garbage can.
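As a hedged sketch of the convex hull detection mentioned above, Andrew's monotone chain algorithm computes the convex outer contour of a point set; in practice a library routine such as OpenCV's cv2.convexHull would typically be used instead:

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull; returns hull vertices in order.

    Sketch of the convex hull detection used to obtain the target area's
    outer contour from its points.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate the two chains, dropping the duplicated endpoints.
    return lower[:-1] + upper[:-1]

# A square with two interior points that must not appear on the hull:
pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
hull = convex_hull(pts)
```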
In some embodiments of the present disclosure, as shown in fig. 11, the step S60 further includes:
step S62: and determining a minimum circumscribed rectangle and a maximum inscribed rectangle based on the outer contour of the target area, wherein the lengths of the minimum circumscribed rectangle and the maximum inscribed rectangle are parallel to the set direction respectively.
Specifically, referring to FIG. 12, a minimum circumscribed rectangle is determined outside the outer contour of the target region, and a maximum inscribed rectangle is determined inside it. The lengths of the minimum circumscribed rectangle and the maximum inscribed rectangle are each parallel to a set direction, and their widths are each perpendicular to the set direction. The set direction may be the horizontal direction, i.e., parallel to the upper and lower edges of the image to be recognized (which is a standard rectangle). In some embodiments, guidance from the effective area frame or other display prompts during initial image acquisition ensures that the actual horizontal edge of the trash can is parallel to the set direction, or that the included angle between them is smaller than an error threshold. In some embodiments, when the actual horizontal edge of the trash can is not parallel to the upper and lower edges of the image to be recognized, the set direction may be given an included angle with the horizontal direction so that the actual horizontal edge of the trash can is parallel to the set direction; alternatively, the set direction is kept horizontal and the captured image to be recognized is rotated so that the actual horizontal edge of the trash can is parallel to the set direction.
Step S64: determining a vertical distance between a length of the minimum bounding rectangle and a length of the adjacent maximum inscribed rectangle.
Specifically, there are two pairs of adjacent lengths: the length a of the minimum circumscribed rectangle with the length c of the maximum inscribed rectangle, and the length b of the minimum circumscribed rectangle with the length d of the maximum inscribed rectangle. Either pair may be selected for the next calculation; alternatively, the calculation is performed for both pairs and the resulting edge pixel scales are averaged. In some embodiments, the length a of the minimum circumscribed rectangle and the length c of the maximum inscribed rectangle in the figure are selected, and the vertical distance l between them is determined.
Step S66: acquiring two intersection points between the outer contour of the target area and a straight line that is parallel to the set direction and passes through the midpoint of the vertical distance.
Step S68: calculating the straight-line distance between the two intersection points as the edge pixel scale.
Specifically, the midpoint M of the vertical distance l is determined, a straight line parallel to the set direction is drawn through M, and its two intersection points with the outer contour of the target region, point A and point B, are determined. The straight-line distance between A and B is calculated as the edge pixel scale.
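Steps S62 to S68 can be sketched for the axis-aligned case as follows. This is an illustrative sketch only: the contour is given as polygon vertices, and the top length of the maximum inscribed rectangle is assumed to be precomputed (finding it in general is a separate optimisation problem):

```python
def edge_pixel_scale(contour, inscribed_top_y):
    """Compute the edge pixel scale of steps S62-S68 (axis-aligned case).

    contour: list of (x, y) vertices of the target area's outer contour
    inscribed_top_y: y coordinate of the length (top side) of the maximum
        inscribed rectangle, assumed precomputed
    """
    # S62 (partial): the length of the minimum circumscribed rectangle
    # lies on the topmost y of the contour
    circumscribed_top_y = min(y for _, y in contour)
    # S64: midpoint of the vertical distance l between the two lengths
    mid_y = (circumscribed_top_y + inscribed_top_y) / 2.0

    # S66: intersect the horizontal line y = mid_y with the contour edges
    xs = []
    n = len(contour)
    for i in range(n):
        (x1, y1), (x2, y2) = contour[i], contour[(i + 1) % n]
        if y1 != y2 and (y1 - mid_y) * (y2 - mid_y) <= 0:
            t = (mid_y - y1) / (y2 - y1)
            xs.append(x1 + t * (x2 - x1))

    # S68: edge pixel scale = distance between outermost intersections A, B
    return max(xs) - min(xs)

# a rim peaked at (50, 0): the circumscribed top is y=0, the inscribed
# top is assumed at y=10, so the cut line is y=5 and spans 50 px
scale = edge_pixel_scale([(0, 10), (50, 0), (100, 10), (100, 40), (0, 40)], 10)
# scale == 50.0
```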
In some embodiments, a vertical distance between the width of the minimum circumscribed rectangle and the width of the adjacent maximum inscribed rectangle may instead be determined; two intersection points between the outer contour of the target region and a straight line perpendicular to the set direction and passing through the midpoint of that vertical distance are then obtained, and the straight-line distance between the two intersection points is calculated as the edge pixel scale. For details, refer to steps S62 to S68; they are not repeated here.
In this way, the minimum circumscribed rectangle and the maximum inscribed rectangle of the target area's outer contour are determined and the calculation is based on them, which standardizes how the edge pixel scale is obtained. The method therefore adapts to trash cans of different shapes and specifications, and the edge pixel scale can be obtained accurately.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown in the sequence indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the present disclosure further provides a target area image recognition apparatus for implementing the above target area image recognition method. The solution this apparatus provides is similar to that described for the method above, so for specific limitations in the one or more embodiments of the apparatus given below, reference may be made to the limitations of the target area image recognition method; details are not repeated here.
The apparatus may include systems (including distributed systems), software (applications), modules, components, servers, clients, and so on that combine the methods described in the embodiments of this specification with the hardware needed to implement them. As used below, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
In some embodiments of the present disclosure, as shown in fig. 13, a target area image recognition apparatus is provided, which may be the aforementioned terminal or a server, or a module, component, device, unit, or the like integrated in the terminal.
The device Z00 may include:
the edge detection module Z10 is used for performing edge detection processing on an image to be recognized so as to obtain an edge detection image of the image to be recognized;
a line segment detection module Z20, configured to detect a line segment in the edge detection image;
the line segment filtering module Z30 is configured to perform interference line segment filtering on a line segment in the edge detection image to obtain a filtered line segment image;
and the line segment clustering module Z40 is used for performing line segment density clustering on the basis of the line segment images to obtain a clustering range, and identifying a target area in the line segment images according to the clustering range.
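The pipeline of modules Z10 to Z40 ends with density clustering. As an illustrative stand-in for module Z40 (the disclosure does not fix a particular clustering algorithm), a DBSCAN-style procedure over line-segment midpoints could look as follows; the midpoint representation and the eps/min_pts parameters are assumptions:

```python
import math

def segment_midpoints(segments):
    """Represent each (x1, y1, x2, y2) segment by its midpoint."""
    return [((x1 + x2) / 2, (y1 + y2) / 2) for (x1, y1, x2, y2) in segments]

def density_cluster(points, eps=10.0, min_pts=3):
    """Minimal DBSCAN-style clustering; returns a list of clusters."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster_id = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # noise (e.g. an isolated interference segment)
            continue
        labels[i] = cluster_id
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] in (None, -1):
                labels[j] = cluster_id
                jn = neighbors(j)
                if len(jn) >= min_pts:   # expand only from core points
                    queue.extend(jn)
        cluster_id += 1

    clusters = [[] for _ in range(cluster_id)]
    for p, lab in zip(points, labels):
        if lab >= 0:
            clusters[lab].append(p)
    return clusters

def cluster_range(clusters):
    """Bounding box (x_min, y_min, x_max, y_max) of the densest cluster."""
    biggest = max(clusters, key=len)
    xs = [p[0] for p in biggest]
    ys = [p[1] for p in biggest]
    return min(xs), min(ys), max(xs), max(ys)
```

The range returned by `cluster_range` would then delimit the target area in the line segment image.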
In some embodiments of the present disclosure, as shown in fig. 14, the apparatus Z00 further includes a preprocessing module Z50, the image to be recognized is obtained through the preprocessing module Z50, and the preprocessing module Z50 includes:
an initial unit Z52 for acquiring an initial image;
a segmentation unit Z54, configured to perform segmentation preprocessing on the initial image to obtain the image to be recognized.
In some embodiments of the present disclosure, as shown in fig. 15, the line segment filtering module Z30 includes:
the threshold unit Z32 is used for calculating the line length distribution of the line segment and determining a filtering threshold according to the line length distribution; and the filtering unit Z34 is configured to filter the interference line segment according to the threshold, and obtain a filtered line segment image.
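An illustrative sketch of units Z32 and Z34: derive the filtering threshold from the line-length distribution and drop shorter "interference" segments. Using the median as the threshold is an assumption; the disclosure only requires that the threshold be determined from the distribution:

```python
import math
from statistics import median

def filter_interference_segments(segments):
    """Keep only segments at least as long as a distribution-derived threshold.

    segments: iterable of (x1, y1, x2, y2) line segments
    """
    lengths = [math.dist((x1, y1), (x2, y2)) for x1, y1, x2, y2 in segments]
    threshold = median(lengths)  # assumption: median as the filtering threshold
    return [s for s, l in zip(segments, lengths) if l >= threshold]

# of three segments with lengths 10, 1 and 8, the median is 8,
# so the length-1 segment is filtered out as interference
kept = filter_interference_segments([(0, 0, 10, 0), (0, 0, 1, 0), (0, 0, 8, 0)])
```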
In some embodiments of the present disclosure, as shown in fig. 16, the apparatus Z00 further comprises:
an outer contour module Z60, configured to determine, according to the target area, an outer contour of the target area; and the edge pixel scale module Z70 is used for calculating the edge pixel scale of the target area according to the outer contour of the target area.
In some embodiments of the present disclosure, as shown in fig. 17, the edge pixel scale module Z70 includes:
a rectangle unit Z72, configured to determine a minimum circumscribed rectangle and a maximum inscribed rectangle based on the outer contour of the target area, where the lengths of the minimum circumscribed rectangle and the maximum inscribed rectangle are parallel to a set direction, respectively; a vertical distance unit Z74 for determining a vertical distance between the length of the minimum bounding rectangle and the length of the adjacent maximum inscribed rectangle; a straight line intersection unit Z76, configured to obtain two intersection points between a straight line parallel to the set direction and passing through the middle point of the vertical distance and the outer contour of the target area; a calculating unit Z78 for calculating a straight-line distance between the two intersections as the edge pixel scale.
Each module in the above target area image recognition apparatus may be implemented wholly or partially by software, by hardware, or by a combination of the two. The modules may be embedded, in hardware form, in a processor of the computer device or be independent of it, or they may be stored, in software form, in a memory of the computer device so that the processor can invoke them and execute the corresponding operations. It should be noted that the division into modules in the embodiments of the present disclosure is illustrative and represents only one way of dividing logic functions; other divisions are possible in actual implementations.
Based on the foregoing description of the embodiment of the target area image recognition method, in another embodiment provided by the present disclosure, a computer device is provided, and the computer device may be a terminal, and the internal structure diagram of the computer device may be as shown in fig. 18. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a target area image recognition method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the configuration shown in the figure is a block diagram of only part of the configuration relevant to the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
Based on the foregoing description of embodiments of the target area image recognition method, in another embodiment provided by the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, performs the steps in the above-mentioned method embodiments.
Based on the foregoing description of the embodiments of the target area image recognition method, in another embodiment provided by the present disclosure, a computer program product is provided, which comprises a computer program that, when being executed by a processor, implements the steps in the above-mentioned embodiments of the method.
It should be noted that the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present disclosure are information and data that are authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the above method embodiments may be implemented by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like, without limitation.
In the description herein, references to "some embodiments," "other embodiments," "desired embodiments," and the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. In this specification, such schematic descriptions do not necessarily refer to the same embodiment or example.
It is to be understood that each embodiment of the method described above is described in a progressive manner, and like/similar parts of each embodiment may be referred to each other, and each embodiment is described with emphasis on differences from the other embodiments. Reference may be made to the description of other method embodiments for relevant points.
For the sake of brevity, not all possible combinations of the technical features of the above embodiments are described; nevertheless, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of the present disclosure.
The above embodiments express only several implementations of the present disclosure, and although their description is relatively specific and detailed, they should not be construed as limiting the scope of the claims. It should be noted that those skilled in the art can make various changes and modifications without departing from the spirit of the present disclosure, and these all fall within its protection scope. Therefore, the protection scope of the present disclosure should be subject to the appended claims.

Claims (10)

1. A method for identifying an image of a target area, the method comprising:
carrying out edge detection processing on an image to be recognized to obtain an edge detection image of the image to be recognized;
detecting line segments in the edge detection image;
performing interference line segment filtering on the line segments in the edge detection image to obtain filtered line segment images;
and performing line segment density clustering on the line segment images to obtain a clustering range, and identifying a target area in the line segment images according to the clustering range.
2. The method according to claim 1, characterized in that the image to be recognized is obtained by:
acquiring an initial image;
and performing segmentation preprocessing on the initial image to obtain the image to be recognized.
3. The method of claim 1, wherein the performing interference line segment filtering on the line segments in the edge detection image to obtain the filtered line segment image comprises:
calculating the line length distribution of the line segments, and determining a filtering threshold value according to the line length distribution;
and filtering the interference line segment according to the threshold value to obtain a filtered line segment image.
4. The method of claim 1, further comprising:
determining the outer contour of the target area according to the target area;
and calculating the edge pixel scale of the target area according to the outer contour of the target area.
5. The method of claim 4, wherein calculating the edge pixel scale of the target region according to the target region outer contour comprises:
determining a minimum circumscribed rectangle and a maximum inscribed rectangle based on the outline of the target area, wherein the lengths of the minimum circumscribed rectangle and the maximum inscribed rectangle are parallel to a set direction respectively;
determining a vertical distance between the length of the minimum bounding rectangle and the length of the adjacent maximum inscribed rectangle;
acquiring two intersection points of a straight line which is parallel to the set direction and passes through the middle point of the vertical distance and the outer contour of the target area;
and calculating the straight-line distance between the two intersection points as the edge pixel scale.
6. An apparatus for recognizing an image of a target area, the apparatus comprising:
the edge detection module is used for carrying out edge detection processing on an image to be identified so as to obtain an edge detection image of the image to be identified;
the line segment detection module is used for detecting a line segment in the edge detection image;
the line segment filtering module is used for carrying out interference line segment filtering on the line segments in the edge detection image to obtain filtered line segment images;
and the line segment clustering module is used for performing line segment density clustering on the basis of the line segment images to obtain a clustering range and identifying a target area in the line segment images according to the clustering range.
7. The apparatus of claim 6, further comprising:
the outer contour module is used for determining the outer contour of the target area according to the target area;
and the edge pixel scale module is used for calculating the edge pixel scale of the target area according to the outer contour of the target area.
8. The apparatus of claim 7, wherein the edge pixel scale module comprises:
the rectangle unit is used for determining a minimum circumscribed rectangle and a maximum inscribed rectangle based on the outer contour of the target area, and the lengths of the minimum circumscribed rectangle and the maximum inscribed rectangle are parallel to the set direction respectively;
a vertical distance unit, configured to determine a vertical distance between a length of the minimum circumscribed rectangle and a length of the adjacent maximum inscribed rectangle;
the straight line intersection unit is used for acquiring two intersection points of a straight line which is parallel to the set direction and passes through the middle point of the vertical distance and the outer contour of the target area;
and the calculation unit is used for calculating the straight-line distance between the two intersection points as the edge pixel dimension.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN202210763993.2A 2022-06-30 2022-06-30 Target area image identification method and device and computer equipment Pending CN115035316A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210763993.2A CN115035316A (en) 2022-06-30 2022-06-30 Target area image identification method and device and computer equipment

Publications (1)

Publication Number Publication Date
CN115035316A true CN115035316A (en) 2022-09-09

Family

ID=83129308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210763993.2A Pending CN115035316A (en) 2022-06-30 2022-06-30 Target area image identification method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN115035316A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117649402A (en) * 2024-01-29 2024-03-05 惠州市德立电子有限公司 Magnetic glue inductance glue hidden crack detection method and system based on image characteristics
CN117649402B (en) * 2024-01-29 2024-04-19 惠州市德立电子有限公司 Magnetic glue inductance glue hidden crack detection method and system based on image characteristics


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant after: Zhaolian Consumer Finance Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: MERCHANTS UNION CONSUMER FINANCE Co.,Ltd.

Country or region before: China