CN112053360B - Image segmentation method, device, computer equipment and storage medium - Google Patents

Image segmentation method, device, computer equipment and storage medium

Info

Publication number
CN112053360B
Authority
CN
China
Prior art keywords
corrugated
feature
type
plate image
corrugated plate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011079909.2A
Other languages
Chinese (zh)
Other versions
CN112053360A (en)
Inventor
郭双双
龚星
李斌
陈会娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011079909.2A
Publication of CN112053360A
Application granted
Publication of CN112053360B

Classifications

    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G06T 7/11 Region-based segmentation
    • G06N 3/045 Combinations of networks (neural network architectures)
    • G06N 3/08 Learning methods (neural networks)
    • G06V 10/40 Extraction of image or video features
    • G06T 2207/10004 Still image; photographic image
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30161 Wood; lumber
    • G06T 2207/30168 Image quality inspection

Abstract

The embodiment of the application discloses an image segmentation method, an image segmentation device, computer equipment and a storage medium, belonging to the technical field of computers. The method comprises the following steps: performing feature extraction on a corrugated plate image to obtain feature information of the corrugated plate image, the feature information comprising dividing line features and type features; dividing the corrugated plate image into a plurality of reference corrugated regions according to the dividing lines indicated by the dividing line features; and adjusting the plurality of reference corrugated regions according to the type features to obtain a plurality of target corrugated regions of the corrugated plate image, so that the types of the corrugated surfaces corresponding to every two adjacent target corrugated regions are different. The dividing line features and type features of the corrugated plate image are acquired to determine the dividing lines in the image and the type of the corrugated surface corresponding to each pixel point; among the plurality of corrugated regions obtained by dividing according to the dividing lines and the per-pixel types, every two adjacent regions correspond to different types, which improves the accuracy of the corrugated regions.

Description

Image segmentation method, device, computer equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to an image segmentation method, an image segmentation device, computer equipment and a storage medium.
Background
A corrugated plate is a plate with a corrugated shape. It has a plurality of corrugated surfaces; different corrugated surfaces do not lie in one plane, and the surfaces are arranged according to a fixed rule to form the corrugated shape. Corrugated plates are very widely used, for example in decorative panels and containers. Before a corrugated plate is used, it is usually necessary to perform a quality analysis of its corrugations to ensure the quality of products made from it.
In the related art, a corrugated plate image is subjected to planar division processing to obtain a plurality of corrugated regions of the image, so that quality analysis can be performed on these regions. Because the corrugated regions are acquired only by plane segmentation, the processing is simplistic and the accuracy of the corrugated regions is poor.
Disclosure of Invention
The embodiment of the application provides an image segmentation method, an image segmentation device, computer equipment and a storage medium, which can improve the accuracy of a ripple area. The technical scheme is as follows:
In one aspect, there is provided an image segmentation method, the method comprising:
extracting characteristics of a corrugated plate image to obtain characteristic information of the corrugated plate image, wherein the characteristic information comprises dividing line characteristics and type characteristics, the dividing line characteristics are used for indicating dividing lines between different corrugated surfaces in the corrugated plate image, the type characteristics are used for indicating types corresponding to each pixel point in the corrugated plate image, and the types are types of the corrugated surfaces where the pixel points are located;
dividing the corrugated plate image into a plurality of reference corrugated areas according to the dividing lines indicated by the dividing line characteristics;
and according to the type characteristics, the plurality of reference corrugated areas are adjusted to obtain a plurality of target corrugated areas of the corrugated plate image, so that the types of corrugated surfaces corresponding to every two adjacent target corrugated areas are different.
In one possible implementation, the keypoint feature is used to indicate a keypoint located on a segmentation line in the corrugated plate image and a confidence level of the keypoint, and determining at least two target keypoints located within the reference corrugated region according to positions of a plurality of keypoints in the keypoint feature and positions of the reference corrugated region in the corrugated plate image includes:
Determining a plurality of reference key points positioned in the reference corrugated area according to the positions of a plurality of key points in the key point characteristics and the positions of the reference corrugated area in the corrugated plate image;
and selecting the at least two target key points from the plurality of reference key points according to the confidence degrees of the plurality of reference key points in the key point characteristics.
In another possible implementation manner, the adjusting the plurality of reference ripple areas according to the types corresponding to the plurality of reference ripple areas to obtain the plurality of target ripple areas further includes:
and in response to the confidence corresponding to any reference ripple area being greater than the reference confidence, taking that reference ripple area as a target ripple area.
In another possible implementation manner, after the determining the type corresponding to the reference ripple area according to the type corresponding to each pixel point in the reference ripple area, the method further includes:
acquiring the number of pixels with the same type as that corresponding to the reference corrugated area in the reference corrugated area;
and taking the ratio between the number of the pixel points and the total number of the pixel points in the reference ripple area as the confidence of the type corresponding to the reference ripple area.
In another possible implementation manner, the type feature includes a confidence that the corrugated surface where each pixel point in the corrugated plate image is located belongs to each type;
and determining the type corresponding to each pixel point in the corrugated plate image according to the type characteristics, wherein the determining comprises the following steps:
for any pixel point, determining the type corresponding to the maximum confidence according to the confidence that the corrugated surface where the pixel point is located belongs to each type, and determining the type corresponding to the maximum confidence as the type of the corrugated surface where the pixel point is located.
In another possible implementation, the feature detection sub-model includes a split line detection sub-model and a type detection sub-model;
and invoking the feature detection sub-model to perform feature detection on the second feature map to obtain the feature information, wherein the feature information comprises:
invoking the parting line detection sub-model, and detecting parting lines on the second feature map to obtain the parting line features;
and calling the type detection sub-model, and carrying out type detection on the second feature map to obtain the type feature.
In another possible implementation, the feature detection sub-model further includes a keypoint detection sub-model; and invoking the feature detection sub-model to perform feature detection on the second feature map to obtain the feature information, and further comprising:
And calling the key point detection sub-model, and carrying out key point detection on the second feature map to obtain the key point features.
In another aspect, there is provided an image segmentation apparatus, the apparatus including:
the device comprises a feature extraction module, a feature detection module and a feature detection module, wherein the feature extraction module is used for extracting features of a corrugated plate image to obtain feature information of the corrugated plate image, the feature information comprises a parting line feature and a type feature, the parting line feature is used for indicating parting lines between different corrugated surfaces in the corrugated plate image, the type feature is used for indicating a type corresponding to each pixel point in the corrugated plate image, and the type is a type of the corrugated surface where the pixel point is located;
the image segmentation module is used for segmenting the corrugated plate image into a plurality of reference corrugated areas according to the segmentation lines indicated by the segmentation line characteristics;
and the region adjustment module is used for adjusting the plurality of reference corrugated regions according to the type characteristics to obtain a plurality of target corrugated regions of the corrugated plate image so as to enable the types of corrugated surfaces corresponding to every two adjacent target corrugated regions to be different.
In one possible implementation, the image segmentation module includes:
A position determining unit configured to determine positions of a plurality of dividing lines in the corrugated plate image based on the dividing line characteristics;
and the area determining unit is used for determining the area between every two adjacent dividing lines as a reference ripple area according to the positions of the dividing lines to obtain the plurality of reference ripple areas.
In another possible implementation manner, the area adjustment module includes:
the first determining unit is used for determining the type corresponding to each pixel point in the corrugated plate image according to the type characteristics;
the second determining unit is used for determining the type corresponding to each reference ripple area according to the pixel point in each reference ripple area and the type corresponding to each pixel point;
and the region adjustment unit is used for adjusting the plurality of reference corrugated regions according to the types corresponding to the plurality of reference corrugated regions to obtain the plurality of target corrugated regions.
In another possible implementation manner, the area adjusting unit includes:
the region merging subunit is used for merging any two adjacent reference corrugated regions to obtain a target corrugated region in response to the fact that the types corresponding to the two adjacent reference corrugated regions are the same;
And the first determination subunit is used for determining any reference corrugated region as a target corrugated region in response to the types corresponding to that reference corrugated region and its adjacent reference corrugated regions being different.
In another possible implementation manner, the second determining unit is configured to determine, according to the pixel point in each reference ripple area and the type corresponding to each pixel point, the type corresponding to each reference ripple area and the confidence level of the type.
In another possible implementation, the feature information further includes a keypoint feature for indicating a keypoint located on a segmentation line in the corrugated plate image;
the region adjustment unit includes:
the second determining subunit is used for determining at least two target key points positioned in the reference corrugated area according to the key point characteristics in response to the fact that the confidence coefficient corresponding to any reference corrugated area is smaller than the reference confidence coefficient;
and the segmentation processing subunit is used for carrying out segmentation processing on the reference ripple area according to the segmentation lines formed by the at least two key points to obtain a target ripple area.
In another possible implementation manner, the second determining subunit is configured to determine at least two target keypoints located within the reference corrugated region according to positions of a plurality of keypoints in the keypoint feature and positions of the reference corrugated region in the corrugated plate image.
In another possible implementation manner, the keypoint feature is used for indicating a keypoint located on a segmentation line in the corrugated plate image and a confidence level of the keypoint, and the second determining subunit is used for determining a plurality of reference keypoints located in the reference corrugated region according to positions of the plurality of keypoints in the keypoint feature and positions of the reference corrugated region in the corrugated plate image; and selecting the at least two target key points from the plurality of reference key points according to the confidence degrees of the plurality of reference key points in the key point characteristics.
In another possible implementation manner, the area adjusting unit further includes:
and the third determination subunit is used for responding to the fact that the confidence coefficient corresponding to any reference ripple area is larger than the reference confidence coefficient, and taking the any reference ripple area as a target ripple area.
In another possible implementation manner, the second determining unit includes:
a fourth determining subunit, configured to determine, for any reference ripple area, a type corresponding to each pixel point in the reference ripple area according to the pixel point in the reference ripple area and the type corresponding to each pixel point;
And a fifth determining subunit, configured to determine, according to the type corresponding to each pixel point in the reference ripple area, the type corresponding to the reference ripple area.
In another possible implementation manner, the fifth determining subunit is configured to determine, according to a type corresponding to each pixel point in the reference ripple area, a type with the largest number of corresponding pixel points as a type corresponding to the reference ripple area.
In another possible implementation, the apparatus further includes:
the number acquisition module is used for acquiring the number of pixels with the same type as that corresponding to the reference ripple area in the reference ripple area;
and the confidence determining module is used for taking the ratio between the number of the pixel points and the total number of the pixel points in the reference ripple area as the confidence of the type corresponding to the reference ripple area.
In another possible implementation manner, the type feature includes a confidence that the corrugated surface where each pixel point in the corrugated plate image is located belongs to each type;
the first determination unit includes:
and the type determining subunit is used for determining the type corresponding to the maximum confidence as the type of the corrugated surface where the pixel point is located for any pixel point according to the confidence that the corrugated surface where the pixel point is located belongs to each type.
In another possible implementation manner, the feature extraction module includes:
and the feature extraction unit is used for calling a feature extraction model to perform feature extraction on the corrugated plate image so as to obtain the feature information.
In another possible implementation, the feature extraction model includes a feature extraction sub-model, a scale conversion sub-model, and a feature detection sub-model;
the feature extraction unit includes:
the first extraction subunit is used for calling the feature extraction sub-model to perform feature extraction on the corrugated plate image so as to obtain a first feature map of the corrugated plate image;
the second extraction subunit is used for calling the scale conversion sub-model, and performing scale conversion on the first characteristic image to obtain a second characteristic image of the corrugated plate image;
and the third extraction subunit is used for calling the feature detection submodel, and carrying out feature detection on the second feature map to obtain the feature information.
In another possible implementation, the feature detection sub-model includes a split line detection sub-model and a type detection sub-model;
the third extraction subunit is configured to invoke the parting line detection sub-model, and perform parting line detection on the second feature map to obtain the parting line feature; and calling the type detection sub-model, and carrying out type detection on the second feature map to obtain the type feature.
In another possible implementation, the feature detection sub-model further includes a keypoint detection sub-model; and the third extraction subunit is further configured to invoke the keypoint detection sub-model to perform keypoint detection on the second feature map, so as to obtain the keypoint feature.
In another possible implementation manner, the second extraction subunit is configured to invoke the scale conversion sub-model, and perform scale conversion processing on the first feature map to obtain a plurality of scale reference feature maps corresponding to the first feature map; and carrying out fusion processing on the reference feature images with the multiple scales to obtain a second feature image of the corrugated plate image.
In another aspect, a computer device is provided, the computer device comprising a processor and a memory, the memory having stored therein at least one program code that is loaded and executed by the processor to implement the operations performed in the image segmentation method as described in the above aspects.
In another aspect, there is provided a computer readable storage medium having stored therein at least one program code loaded and executed by a processor to implement the operations performed in the image segmentation method as described in the above aspects.
In yet another aspect, a computer program product or computer program is provided, the computer program product or computer program comprising computer program code, the computer program code being stored in a computer readable storage medium. The computer program code is read from a computer readable storage medium by a processor of a computer device, which executes the computer program code such that the computer device implements the operations performed in the image segmentation method as described in the above aspects.
The technical solutions provided by the embodiments of the present application bring at least the following beneficial effects:
Considering that adjacent corrugated surfaces in a corrugated plate are of different types and that a dividing line lies between adjacent corrugated surfaces, the method, the device, the computer equipment and the storage medium acquire the dividing line features and type features of the corrugated plate image when segmenting it, so as to determine the dividing lines in the image and the type of the corrugated surface corresponding to each pixel point, which enriches the features of the corrugated plate image. Among the plurality of corrugated regions obtained by dividing according to the dividing lines and the per-pixel types, every two adjacent regions correspond to different types, so the accuracy of the corrugated regions is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of an image segmentation method according to an embodiment of the present application;
fig. 3 is a flowchart of an image segmentation method according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a scaling sub-model according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a feature extraction model according to an embodiment of the present disclosure;
FIG. 6 is a schematic illustration of a corrugated region provided in an embodiment of the present application;
fig. 7 is a flowchart of image segmentation of a corrugated plate image according to an embodiment of the present application;
FIG. 8 is a flow chart of a corrugated board defect analysis provided in an embodiment of the present application;
fig. 9 is a schematic diagram of key points in a corrugated plate image according to an embodiment of the present application;
Fig. 10 is a schematic view of a plurality of dividing lines in a corrugated board image according to an embodiment of the present application;
fig. 11 is a schematic view of a plurality of corrugated areas in a corrugated plate image according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," and the like, as used herein, may be used to describe various concepts, but are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. For example, a first feature map may be referred to as a second feature map, and similarly, a second feature map may be referred to as a first feature map, without departing from the scope of the present application.
The terms "at least two," a plurality, "" each, "" any one, "and" at least two include two or more, a plurality includes two or more, and each refers to each of the corresponding plurality, any one refers to any one of the plurality, as used herein. For example, the plurality of keypoints includes 3 keypoints, and each of the keypoints refers to each of the 3 keypoints, and any of the keypoints refers to any of the 3 keypoints, which may be a first keypoint, a second keypoint, or a third keypoint.
Cloud technology (Cloud technology) refers to a hosting technology for integrating hardware, software, network and other series resources in a wide area network or a local area network to realize calculation, storage, processing and sharing of data.
Cloud technology (Cloud technology) is a general term for the network technology, information technology, integration technology, management platform technology, application technology and the like based on the cloud computing business model. It can form a resource pool that is used on demand, flexibly and conveniently, and cloud computing technology will become an important support. Background services of technical network systems, such as video websites, picture websites and other portal websites, require a large amount of computing and storage resources. With the rapid development and application of the internet industry, each article may have its own identification mark in the future, which needs to be transmitted to a background system for logical processing; data of different levels will be processed separately, and all kinds of industry data need a strong system as backing, which can only be realized through cloud computing.
Big data (Big data) refers to a data set that cannot be captured, managed and processed by conventional software tools within a certain time range; it is a massive, high-growth-rate and diversified information asset that requires new processing modes to provide stronger decision-making, insight-discovery and process-optimization capabilities. With the advent of the cloud age, big data has attracted more and more attention, and special techniques are required to effectively process large amounts of data within a tolerable elapsed time. Technologies applicable to big data include massively parallel processing databases, data mining, distributed file systems, distributed databases, cloud computing platforms, the internet, and scalable storage systems. The image segmentation method here uses cloud technology and big data to perform data calculation on the corrugated plate image.
The image segmentation method provided by the embodiment of the application can be used in computer equipment. Optionally, the computer device is a terminal or a server. Optionally, the server is a stand-alone physical server, or a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), and basic cloud computing services such as big data and artificial intelligence platforms. Optionally, the terminal is a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto.
Fig. 1 is a schematic structural diagram of an implementation environment provided in an embodiment of the present application, as shown in fig. 1, where the system includes a terminal 101 and a server 102, and the terminal 101 and the server 102 can be directly or indirectly connected through a wired or wireless communication manner, which is not limited herein.
The terminal 101 is configured to acquire a corrugated board image, for example, the terminal 101 obtains a corrugated board image by photographing corrugated board, and can transmit the corrugated board image to the server 102. The server 102 provides a function of processing a corrugated board image, and can image-divide the corrugated board image transmitted from the terminal 101.
The method provided by the embodiment of the application can be used for various scenes.
For example, in a container inspection scenario:
Because the surface of a container is composed of corrugated plates, in order to perform quality inspection on the container, the container surface is photographed to obtain a corrugated plate image. Using the image segmentation method provided by the embodiment of the application, a plurality of corrugated regions of the corrugated plate image are obtained; quality inspection can subsequently be performed according to these corrugated regions to determine defects of the corrugated plate, and thereby the quality of the container.
For another example, in the corrugated plate quality monitoring scenario:
In the process of producing corrugated plates, in order to guarantee the quality of the produced plates, the plate surface is photographed to obtain a corrugated plate image. Using the image segmentation method provided by the embodiment of the application, a plurality of corrugated regions of the corrugated plate image are obtained, and defect analysis is subsequently performed on the plate according to these corrugated regions to determine whether defects exist, thereby guaranteeing the quality of the produced corrugated plates.
Fig. 2 is a flowchart of an image segmentation method according to an embodiment of the present application. The method is applied to a computer device and, as shown in fig. 2, includes the following steps:
201. The computer device performs feature extraction on the corrugated plate image to obtain feature information of the corrugated plate image, where the feature information includes dividing line features and type features.
In the embodiment of the application, a corrugated plate is a plate with a corrugated shape. The corrugated plate has a plurality of corrugated surfaces; each corrugated surface is a plane, different corrugated surfaces are not in one plane, and the corrugated surfaces are arranged according to a fixed rule to form the corrugated shape. For example, the corrugated plate includes concave surfaces, left inclined surfaces, convex surfaces and right inclined surfaces; the two corrugated surfaces adjacent to a concave surface are a right inclined surface and a left inclined surface, and the two corrugated surfaces adjacent to a convex surface are a left inclined surface and a right inclined surface. Taking a corrugated plate with 8 corrugated surfaces as an example, the arrangement order of the 8 surfaces is concave surface, left inclined surface, convex surface, right inclined surface, concave surface, left inclined surface, convex surface, right inclined surface, so the surface of the plate presents a corrugated shape. For another example, a corrugated plate with a corrugated shape is obtained by press-molding a flat plate; in practical applications, the surface of a container is made of corrugated plate. The corrugated plate includes a plurality of different types of corrugated surfaces, the types including concave, inclined, convex, and the like. Among the plurality of corrugated surfaces, any corrugated surface is of a different type from its adjacent surfaces, so that different types of corrugated surfaces can be distinguished. Therefore, the corrugated plate image is divided into a plurality of corrugated regions according to the dividing line features and type features of the image, each corrugated region corresponding to one corrugated surface, so that the types of every two adjacent surfaces are different, and quality inspection can subsequently be performed on the corrugated plate according to the divided corrugated regions.
The dividing line features are used for indicating dividing lines between different corrugated surfaces in the corrugated plate image, and the type features are used for indicating types corresponding to each pixel point in the corrugated plate image, wherein the types are types of the corrugated surfaces where the pixel points are located.
202. The computer device segments the corrugated plate image into a plurality of reference corrugated areas according to the segmentation lines indicated by the segmentation line features.
Since there are parting lines between adjacent corrugated surfaces in the corrugated plate, the image of the corrugated plate is divided into a plurality of reference corrugated regions, each possibly including one corrugated surface, by parting lines indicated by the parting line features.
203. The computer device adjusts the plurality of reference corrugated areas according to the type features to obtain a plurality of target corrugated areas of the corrugated plate image, so that the types of the corrugated surfaces corresponding to every two adjacent target corrugated areas are different.
Since two adjacent corrugated surfaces in the corrugated plate are of different types, the type feature indicates the type of the corrugated surface where each pixel point in the corrugated plate image is located; by adjusting the plurality of reference corrugated areas according to it, the types of the corrugated surfaces corresponding to every two adjacent target corrugated areas are made different.
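To make steps 201 to 203 concrete, the following is a minimal NumPy sketch of the pipeline, under the simplifying assumptions that the dividing lines are vertical and that the model outputs are already available as a per-pixel line-probability map and a per-pixel type map; all function and variable names here are illustrative, not taken from the patent.
```python
import numpy as np

def segment_corrugated(line_prob: np.ndarray, type_map: np.ndarray,
                       line_thresh: float = 0.5):
    """line_prob: (H, W) probability each pixel lies on a dividing line.
    type_map: (H, W) integer type of the corrugated surface per pixel."""
    w = line_prob.shape[1]
    # Step 202: columns whose mean line probability is high act as dividing lines.
    is_line = line_prob.mean(axis=0) > line_thresh
    cuts = [0] + [x for x in range(1, w) if is_line[x] and not is_line[x - 1]] + [w]
    regions = [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1)]

    # Step 203: label each reference region with its majority pixel type ...
    def majority(x0, x1):
        vals, counts = np.unique(type_map[:, x0:x1], return_counts=True)
        return int(vals[np.argmax(counts)])

    labeled = [(x0, x1, majority(x0, x1)) for x0, x1 in regions if x1 > x0]
    if not labeled:
        return []
    # ... then merge adjacent regions that share a type, so that every two
    # adjacent target corrugated regions end up with different types.
    merged = [labeled[0]]
    for x0, x1, t in labeled[1:]:
        px0, _, pt = merged[-1]
        if t == pt:
            merged[-1] = (px0, x1, t)
        else:
            merged.append((x0, x1, t))
    return merged  # [(x_start, x_end, type), ...]
```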
According to the method provided by the embodiment of the application, considering that adjacent corrugated surfaces in the corrugated plate are of different types and that dividing lines lie between adjacent corrugated surfaces, the dividing line features and type features of the corrugated plate image are acquired during image segmentation to determine the dividing lines in the image and the type of the corrugated surface corresponding to each pixel point, which enriches the features of the corrugated plate image. Among the plurality of corrugated regions obtained by dividing according to the dividing lines and the per-pixel types, every two adjacent regions correspond to different types, so the accuracy of the corrugated regions is improved.
Fig. 3 is a flowchart of an image segmentation method according to an embodiment of the present application. The method is applied to a computer device and, as shown in fig. 3, includes the following steps:
301. The computer device calls a feature extraction model to perform feature extraction on the corrugated plate image to obtain feature information, where the feature information includes parting line features and type features.
The corrugated plate image is an image containing a corrugated plate, optionally obtained by photographing the corrugated plate. For example, the corrugated plate is scanned by a line-scan camera to obtain the corrugated plate image, or it is photographed by another camera. The feature extraction model is used to extract feature information of the corrugated plate image, and the feature information is used to describe features of the corrugated surfaces in the image. The parting line features are used to indicate parting lines between different corrugated surfaces in the image; each parting line is the intersection between two adjacent corrugated surfaces. The type features are used to indicate the type corresponding to each pixel point in the image, the type being the type of the corrugated surface where the pixel point is located.
Since the corrugated plate comprises a plurality of corrugated surfaces, dividing lines are arranged between every two adjacent corrugated surfaces, and the types of the two adjacent corrugated surfaces are different, the corrugated plate image can be divided according to the dividing line characteristics and the type characteristics by extracting the dividing line characteristics and the type characteristics in the corrugated plate image.
In one possible implementation, the segmentation line feature includes a probability that each pixel point in the corrugated plate image is located on a segmentation line. The larger the probability, the more likely the corresponding pixel point is located on a segmentation line; the smaller the probability, the less likely it is.
In one possible implementation, the type feature includes a probability that the corrugated surface where each pixel point in the corrugated plate image is located belongs to each type. For example, if the corrugated plate image includes 3 types of corrugated surfaces, then for any pixel point the type feature includes 3 probabilities, which respectively represent the probability that the corrugated surface where the pixel point is located belongs to each of the 3 types.
The larger the probability for a given type, the more likely the corrugated surface where the pixel point is located belongs to that type; the smaller the probability, the less likely it is.
In one possible implementation, the feature extraction model includes a feature extraction sub-model, a scale conversion sub-model, and a feature detection sub-model, and then the step 301 includes the following steps 3011-3013:
3011. and calling the feature extraction sub-model to perform feature extraction on the corrugated plate image to obtain a first feature map of the corrugated plate image.
The first feature map is used to describe the features of the corrugated plate in the corrugated plate image. The feature extraction sub-model is used to obtain a feature map of the corrugated plate image that describes information of the corrugated surfaces; optionally, the feature extraction sub-model is a convolution model. Optionally, the convolution model includes a convolution layer, a normalization layer and an activation layer. The convolution layer has convolution parameters, including the convolution kernel size, the number of output channels and the convolution stride. For example, the convolution kernel size is 7×7, the number of output channels is 64, and the convolution stride is 2.
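As an illustration, such a convolution model with the example parameters above (7×7 kernel, 64 output channels, stride 2) could be sketched as follows; PyTorch, the three input channels and the input size are assumptions, since the patent does not name a framework.
```python
import torch
import torch.nn as nn

# Convolution layer, normalization layer and activation layer of the
# feature extraction sub-model, with the example convolution parameters.
feature_extraction_submodel = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)

corrugated_plate_image = torch.randn(1, 3, 256, 512)  # dummy input batch
first_feature_map = feature_extraction_submodel(corrugated_plate_image)
print(first_feature_map.shape)  # torch.Size([1, 64, 128, 256])
```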
3012. And calling a scale conversion sub-model, and performing scale conversion on the first characteristic map to obtain a second characteristic map of the corrugated plate image.
The scale conversion sub-model is used for performing scale conversion on the characteristic map of the corrugated plate image to obtain another characteristic map of the corrugated plate image. The second feature map has a plurality of scale features corresponding to the corrugated plate image. The scale of the second feature map obtained by scale conversion may be the same as or different from that of the first feature map.
And the first characteristic image is subjected to scale conversion through the scale conversion sub-model, so that the second characteristic image contains characteristic information of a plurality of scales, the characteristics of the corrugated surface in the corrugated plate image are enhanced, and the accuracy of the characteristic image of the corrugated plate image is improved.
In one possible implementation, the step 3012 includes: and calling a scale conversion sub-model, performing scale conversion processing on the first feature map to obtain a plurality of scale reference feature maps corresponding to the first feature map, and performing fusion processing on the plurality of scale reference feature maps to obtain a second feature map of the corrugated plate image.
Because the feature information contained in the feature images with different scales may be different, the reference feature images with multiple scales corresponding to the first feature image are obtained through the scale conversion sub-model, and the reference feature images with multiple scales are fused, so that the feature information with multiple scales is contained in the second feature image, and the accuracy of the feature images is improved.
Optionally, the scale conversion sub-model includes a plurality of branches, the first branch performs convolution processing and scale conversion on the first feature map to obtain a first reference feature map, then the latter branch performs convolution processing and scale conversion processing on the reference feature map after the scale conversion of the former branch to obtain a reference feature map, until the last branch performs convolution processing on the feature map after the scale conversion of the former branch to obtain a last reference feature map, and the reference feature maps of a plurality of scales obtained by the plurality of branches are fused to obtain a second feature map of the corrugated plate image.
As shown in fig. 4, taking an example that the scale conversion sub-model includes 3 branches, the first branch includes 6 convolution layers, the second branch includes 4 convolution layers, the third branch includes 2 convolution layers, the first feature map is subjected to convolution processing through the first and second convolution layers in the first branch to obtain a feature map 1, and the feature map 1 is subjected to scale conversion processing to obtain a feature map 2; carrying out convolution processing on the feature map 1 through a third convolution layer and a fourth convolution layer in the first branch to obtain a feature map 3; carrying out convolution processing on the feature map 2 through a first convolution layer and a second convolution layer in the second branch to obtain a feature map 4, and carrying out scale conversion processing on the feature map 4 to obtain a feature map 5 and a feature map 6; fusing the feature images 3 and 5 through a first branch, and convolving the fused reference feature images through a fifth convolution layer and a sixth convolution layer to obtain a feature image 7; carrying out convolution processing on the feature map 4 through a third convolution layer and a fourth convolution layer in the second branch to obtain a feature map 8; and carrying out convolution processing on the feature map 6 through two convolution layers in the third branch to obtain a feature map 9, carrying out scale conversion on the feature map 9 and the feature map 8 to enable the scale of the converted reference feature map to be the same as that of the feature map 7, and carrying out fusion processing on the reference feature map and the feature map 7 after the scale conversion of the feature map 8 and the feature map 9 to obtain the second feature map.
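The following simplified sketch conveys the idea of the scale conversion sub-model: the first feature map is rescaled to several resolutions, each scale is processed by its own branch, and the per-scale reference feature maps are converted back to a common scale and fused. It is an illustrative stand-in for the multi-branch structure of Fig. 4, not the exact architecture; the channel count, the scales and fusion by summation are assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleConversion(nn.Module):
    def __init__(self, channels: int = 64, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in scales)

    def forward(self, x):
        h, w = x.shape[-2:]
        outputs = []
        for scale, conv in zip(self.scales, self.branches):
            y = x if scale == 1.0 else F.interpolate(
                x, scale_factor=scale, mode="bilinear", align_corners=False)
            y = conv(y)                     # per-scale reference feature map
            if y.shape[-2:] != (h, w):      # convert back to a common scale
                y = F.interpolate(y, size=(h, w), mode="bilinear",
                                  align_corners=False)
            outputs.append(y)
        return torch.stack(outputs).sum(dim=0)  # fusion of all scales

second_feature_map = ScaleConversion()(torch.randn(1, 64, 128, 256))
```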
3013. And calling a feature detection sub-model, and carrying out feature detection on the second feature map to obtain feature information.
The feature detection sub-model is used for detecting feature information in the second feature map.
The ripple feature in the second feature map is enhanced through the feature extraction sub-model and the scale conversion sub-model, and then the feature information in the second feature map is detected through the feature detection sub-model, so that accurate feature information can be obtained.
In one possible implementation, the feature detection sub-model includes a split line detection sub-model and a type detection sub-model; then step 3013 includes: and calling a parting line detection sub-model, carrying out parting line detection on the second feature map to obtain parting line features, calling a type detection sub-model, and carrying out type detection on the second feature map to obtain type features.
The parting line detection sub-model is used for detecting parting line features in the feature map, and the type detection sub-model is used for detecting type features in the feature map. And the second feature map is respectively subjected to feature detection through the parting line detection sub-model and the type detection sub-model so as to obtain parting line features and type features in the feature map, and the accuracy of the parting line features and the type features is improved.
In one possible implementation, the feature information further includes a keypoint feature, and the feature detection sub-model further includes a keypoint detection sub-model; then step 3013 further comprises: and calling a key point detection sub-model, and carrying out key point detection on the second feature map to obtain key point features.
The key point detection sub-model is used for detecting key point characteristics in the characteristic diagram, and the key point characteristics are used for indicating key points located on the dividing lines in the corrugated plate image. And acquiring key point features in the feature map through the key point detection sub-model, and subsequently carrying out segmentation processing on the corrugated plate image according to the key point features.
As shown in fig. 5, feature extraction is performed on the corrugated plate image through two convolution modules to obtain a first feature map of the corrugated plate image, scale conversion is performed on the first feature map through three high-resolution networks to obtain a second feature map, and feature detection is then performed on the second feature map through the keypoint detection sub-model, the parting line detection sub-model and the type detection sub-model to obtain the keypoint features, the parting line features and the type features respectively. The keypoint detection sub-model, the parting line detection sub-model and the type detection sub-model each include two convolution layers. Optionally, the high-resolution network is an HRNet (High Resolution Net).
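A sketch of the three detection sub-models of Fig. 5 might look as follows. Each head uses two convolution layers, as described above; the channel counts, the sigmoid/softmax output activations and the assumption of 4 surface types (concave, left inclined, convex, right inclined) are illustrative.
```python
import torch
import torch.nn as nn

def detection_head(out_channels: int) -> nn.Sequential:
    # Two convolution layers per detection sub-model.
    return nn.Sequential(
        nn.Conv2d(64, 64, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(64, out_channels, kernel_size=1),
    )

keypoint_submodel = detection_head(1)  # keypoint confidence per pixel
line_submodel = detection_head(1)      # dividing-line probability per pixel
type_submodel = detection_head(4)      # one confidence per surface type

second_feature_map = torch.randn(1, 64, 128, 256)
keypoint_feature = torch.sigmoid(keypoint_submodel(second_feature_map))
line_feature = torch.sigmoid(line_submodel(second_feature_map))
type_feature = torch.softmax(type_submodel(second_feature_map), dim=1)
```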
Optionally, a key point detection sub-model is called, key point detection is carried out on the second feature map, so that key point initial features are obtained, and screening processing is carried out on pixel points in the key point initial features, so that the key point features are obtained.
The key point initial feature comprises the confidence coefficient corresponding to each pixel point in the corrugated plate image, and the key point feature comprises the confidence coefficient corresponding to part of pixel points in the corrugated plate image. And screening the pixel points in the initial key point characteristics to improve the accuracy of the key point characteristics.
The method for screening the pixel points in the initial characteristics of the key points comprises the following three modes:
The first way is: selecting the pixel points whose probability is larger than the reference probability, according to the probability that each pixel point in the key point initial feature is a key point, and generating the key point feature from them.
The reference probability is any value, such as 0.8 or 0.9. The keypoint initial feature comprises a probability that each pixel is a keypoint, the probability representing a likelihood that the corresponding pixel is a keypoint. Through the initial feature of the key point, the pixel point with high probability is selected as the key point, so that the accuracy of the feature of the key point is improved.
The second way is: and screening the plurality of pixel points in the initial key point feature according to the position of each pixel point in the initial key point feature so that the distance between any two pixel points in the screened pixel points is larger than the reference distance, and generating the key point feature according to the screened pixel points.
The reference distance is an arbitrary value, such as 3 or 6, and the key point features include the position of each pixel after screening.
The third way is: selecting the pixel points whose probability is larger than the reference probability, according to the probability that each pixel point in the key point initial feature is a key point, to generate a key point reference feature; screening the plurality of pixel points in the key point reference feature according to the position of each pixel point, so that the distance between any two of the screened pixel points is larger than the reference distance; and generating the key point feature according to the screened pixel points.
The key point features comprise the probability corresponding to each pixel after screening and the position corresponding to each pixel.
Optionally, when screening the plurality of pixel points in the key point reference feature, the distance between every two pixel points is determined according to their positions; in response to the distance between any two pixel points being smaller than the reference distance, the one of the two with the smaller probability is screened out, and screening continues according to the probabilities and positions of the remaining pixel points.
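The third screening mode could be sketched as follows, using the example values from the text (reference probability 0.8, reference distance 3); processing candidates from high to low probability, so that the lower-probability point of each close pair is the one screened out, is an implementation assumption.
```python
import numpy as np

def screen_keypoints(prob_map: np.ndarray, ref_prob: float = 0.8,
                     ref_dist: float = 3.0):
    # Probability screening: keep pixels whose keypoint probability
    # exceeds the reference probability.
    ys, xs = np.nonzero(prob_map > ref_prob)
    candidates = sorted(zip(prob_map[ys, xs], ys, xs), reverse=True)

    # Distance screening: drop any point closer than the reference
    # distance to an already kept (higher-probability) point.
    kept = []
    for p, y, x in candidates:
        if all((y - ky) ** 2 + (x - kx) ** 2 > ref_dist ** 2
               for _, ky, kx in kept):
            kept.append((p, y, x))
    return [(int(y), int(x), float(p)) for p, y, x in kept]
```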
302. The computer device segments the corrugated plate image into a plurality of reference corrugated areas according to the segmentation lines indicated by the segmentation line features.
In this embodiment of the present application, a dividing line is provided between every two adjacent corrugated surfaces, so that the corrugated plate image is divided by the dividing line indicated by the dividing line feature, so as to obtain a plurality of reference corrugated areas, where each reference corrugated area may represent one corrugated surface.
In one possible implementation, this step 302 includes: and determining the positions of a plurality of dividing lines in the corrugated plate image according to the characteristics of the dividing lines, and determining the area between every two adjacent dividing lines as a reference corrugated area according to the positions of the dividing lines to obtain a plurality of reference corrugated areas.
And determining the positions of a plurality of dividing lines in the corrugated plate image through the dividing line characteristics, and determining the area between every two adjacent dividing lines as a reference corrugated area according to the positions of the plurality of dividing lines, so that a plurality of reference corrugated areas can be obtained.
Optionally, according to the characteristics of the dividing lines, the coordinates of two endpoints of each dividing line in the corrugated plate image are determined, for any two adjacent dividing lines, the coordinates of two endpoints of each dividing line are determined, the coordinates of two endpoints of each dividing line are used as the coordinates of a reference corrugated area, and the coordinates of the four endpoints form the reference corrugated area.
For example, the coordinates of both ends of the first dividing line are (1, 2), (1, 11), and the coordinates of both ends of the second dividing line are (8, 2), (8, 11), and a rectangle composed of the coordinates (1, 2), (1, 11), (8, 2), (8, 11) is used as one reference ripple region.
In one possible implementation manner, the dividing line feature includes probability that each pixel point in the corrugated plate image is located on the dividing line, and then the probability that each pixel point in the dividing line feature is located on the dividing line is transformed to obtain a plurality of dividing lines in the corrugated plate image.
For example, by using the hough transform technique, the probability corresponding to each pixel point in the split line feature is processed by using the correspondence between the straight line in the rectangular coordinate system and the polar coordinate system, so as to obtain a plurality of split lines corresponding to the split line feature.
Optionally, the probability of each pixel point in the feature of the dividing line on the dividing line is transformed to obtain a plurality of dividing line segments, and the two dividing line segments are connected to obtain a dividing line in response to the distance between the two ends of any two dividing line segments being smaller than the reference distance.
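A sketch of this step with OpenCV's probabilistic Hough transform is shown below; the binarization threshold, the Hough parameters and the assumption of near-vertical dividing lines are illustrative, and the maxLineGap parameter plays the role of connecting nearby segments into one dividing line.
```python
import cv2
import numpy as np

def detect_dividing_lines(line_prob: np.ndarray, prob_thresh: float = 0.5):
    # Binarize per-pixel line probabilities, then recover line segments
    # with the probabilistic Hough transform.
    binary = (line_prob > prob_thresh).astype(np.uint8) * 255
    segments = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=40, maxLineGap=10)
    if segments is None:
        return []
    # Index each near-vertical segment by its mean column position.
    return sorted((x1 + x2) / 2 for x1, y1, x2, y2 in segments[:, 0])

def reference_regions(line_columns, image_width):
    # The area between every two adjacent dividing lines (plus the image
    # borders) is taken as one reference corrugated region.
    bounds = [0] + list(line_columns) + [image_width]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]
```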
303. The computer device determines the type corresponding to each pixel point in the corrugated plate image according to the type features.
In the embodiment of the application, the corrugated plate image includes a plurality of types of corrugated surfaces; for example, the types include concave surfaces, left inclined surfaces, convex surfaces and right inclined surfaces. Through the type feature, the type corresponding to each pixel point can be known, that is, the type of the corrugated surface on which each pixel point is located.
In one possible implementation, the type feature includes, for each pixel point in the corrugated plate image, the confidence that the corrugated surface where the pixel point is located belongs to each type; this step 303 then includes: for any pixel point, determining the type with the maximum confidence as the type of the corrugated surface where the pixel point is located.
The confidence of a type indicates the possibility that the corrugated surface where the pixel point is located is a corrugated surface of that type: the higher the confidence, the greater the possibility, and the lower the confidence, the smaller the possibility. Optionally, the confidence is represented by a probability.
Since the corrugated plate image includes a plurality of types of corrugated surfaces, the type feature gives, for any pixel point, the confidence that its corrugated surface belongs to each type, and the pixel point's type is determined from these confidences. For example, suppose the types are concave, left inclined, convex and right inclined, and for some pixel point the type feature gives confidences of 0.3 for concave, 0.5 for left inclined, 0.7 for convex and 0.9 for right inclined; the pixel point is then determined to be on a right inclined surface, that is, its corresponding type is the right inclined type.
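As a sketch of this per-pixel decision, assume the type feature is an H×W×C confidence array; the shape, the placeholder input and the type names below are illustrative, not from the patent.

```python
import numpy as np

# type_feature: (H, W, C) confidences that each pixel's corrugated surface
# belongs to each of C types (e.g. C = 4 for the four surface types).
type_feature = np.random.rand(480, 640, 4)  # placeholder input

# The type with the maximum confidence is taken as the pixel's type.
pixel_types = np.argmax(type_feature, axis=-1)  # (H, W) map of type indices
pixel_confs = np.max(type_feature, axis=-1)     # the matching confidences

TYPE_NAMES = ["concave", "left inclined", "convex", "right inclined"]
print(TYPE_NAMES[pixel_types[0, 0]], pixel_confs[0, 0])
```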
304. The computer equipment determines the type corresponding to each reference corrugated region according to the pixel points in each region and the type corresponding to each pixel point.
For any reference corrugated region of the corrugated plate image, the type corresponding to each pixel point in the region can be determined, so the type corresponding to the region is determined from the types of its pixel points.
In one possible implementation, for any reference corrugated region, each pixel point belonging to the region is determined according to the region's coordinate information, the type corresponding to each of those pixel points is determined, and the type corresponding to the region is then determined from those types.
The coordinate information of a reference corrugated region indicates the region's position. Optionally, it includes the coordinates of the two dividing lines that form the region, which are two adjacent dividing lines among the plurality of dividing lines.
In one possible implementation, this step 304 includes: determining, according to the pixel points in each reference corrugated region and the type corresponding to each pixel point in the corrugated plate image, the type corresponding to each region and the confidence of that type.
The confidence of the type indicates the possibility that the reference corrugated region is a corrugated surface of that type: the higher the confidence, the greater the possibility, and the lower the confidence, the smaller the possibility.
The type of a reference corrugated region is determined from the types of the pixel points within it; since different pixel points in the region may have different types, the determined region type may be inaccurate. The confidence of the type is therefore used to represent how accurate that type is.
In one possible implementation, this step 304 includes the following steps 3041-3044:
3041. For any reference corrugated region, determine the type corresponding to each pixel point in the region according to the pixel points in the region and the type corresponding to each pixel point in the corrugated plate image.
The type corresponding to each pixel point in the corrugated plate image is obtained from the type feature, and the pixel points belonging to the reference corrugated region are determined from the region itself, so the type corresponding to each pixel point in the region can be determined.
3042. Determine the type corresponding to the reference corrugated region according to the type corresponding to each pixel point in the region.
After the type corresponding to each pixel point in the region is determined, the region's type can be determined from those pixel-level types.
In one possible implementation, this step 3042 includes: determining, from the types corresponding to the pixel points in the region, the type with the largest number of corresponding pixel points as the type of the region.
Within a reference corrugated region, different pixel points may correspond to different types; the number of pixel points corresponding to each type is therefore counted, and the type with the largest count is selected as the region's type. For example, if in some region the first type corresponds to 10 pixel points, the second type to 8 and the third type to 80, the third type is determined to be the type of that region.
3043. Acquire the number of pixel points in the reference corrugated region whose type is the same as the type corresponding to the region.
This number can be determined from the type corresponding to each pixel point in the region.
3044. Take the ratio of that number to the total number of pixel points in the region as the confidence of the type corresponding to the region.
The confidence represents the accuracy of the type corresponding to the reference corrugated region: the greater the confidence, the higher the accuracy, and the smaller the confidence, the lower the accuracy.
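A compact sketch of steps 3041 to 3044, assuming `pixel_types` is the H×W type-index map from step 303 and `region_mask` is a boolean mask of one reference corrugated region; both names are illustrative.

```python
import numpy as np

def region_type_and_confidence(pixel_types, region_mask):
    """Steps 3041-3044: majority-vote the per-pixel types inside one
    reference corrugated region, then use the ratio of agreeing pixels
    to all pixels in the region as the confidence of the region's type.
    Assumes the region is non-empty."""
    types_in_region = pixel_types[region_mask]   # 1-D array of type indices
    counts = np.bincount(types_in_region)
    region_type = int(np.argmax(counts))         # most frequent type wins
    confidence = counts[region_type] / types_in_region.size
    return region_type, confidence
```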
305. The computer equipment adjusts the plurality of reference corrugated regions according to their corresponding types to obtain a plurality of target corrugated regions.
Because the reference corrugated regions obtained from the dividing lines indicated by the dividing line feature may be inaccurate, the regions are adjusted according to their types so that every two adjacent target corrugated regions correspond to different types of corrugated surface, which improves the accuracy of the target corrugated regions.
In one possible implementation, this step 305 includes: in response to any two adjacent reference corrugated regions corresponding to the same type, merging the two regions into one target corrugated region; in response to a reference corrugated region differing in type from its adjacent regions, determining that region as a target corrugated region.
A corrugated plate includes a plurality of types of corrugated surfaces, and any two adjacent corrugated surfaces are of different types. Therefore, after the types of the reference corrugated regions are determined from their positions and per-region types, two adjacent regions sharing the same type are merged into one target corrugated region, while a region whose type differs from its neighbours' is kept as a target corrugated region on its own, which ensures the accuracy of the target regions. Fig. 6 contains three drawings, each showing a top view of the plate's corrugated regions together with the corresponding plate profile. In the first drawing of fig. 6, the plate contains 5 corrugated regions which, judging by the plate's shape, are from left to right concave, left inclined, convex, right inclined and concave. In the third drawing of fig. 6, reference corrugated regions 601 and 602 lie on the same concave surface, i.e. they are over-divided; merging them yields the single target corrugated region 603 shown in the first drawing.
Optionally, when two adjacent reference corrugated regions have the same type, they are merged into a merged corrugated region whose type is that shared type. If the merged region's type differs from those of its adjacent reference regions, it is taken as a target corrugated region; if it still matches the type of an adjacent reference region, merging continues until the resulting merged region differs in type from all adjacent regions, at which point it is taken as a target corrugated region.
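A minimal sketch of this merging rule, representing each reference region as an (x_left, x_right, type) tuple ordered left to right; this vertical-dividing-line layout is an assumption suggested by the top views in fig. 6, not stated by the patent.

```python
def merge_adjacent_regions(regions):
    """Merge runs of adjacent reference regions that share a type, so that
    every two neighbouring target regions end up with different types.
    `regions` is a left-to-right list of (x_left, x_right, type) tuples."""
    merged = []
    for region in regions:
        if merged and merged[-1][2] == region[2]:
            left, _, rtype = merged[-1]
            merged[-1] = (left, region[1], rtype)  # extend previous region
        else:
            merged.append(region)
    return merged

# Example: two adjacent "concave" regions collapse into one target region.
refs = [(0, 8, "concave"), (8, 15, "concave"), (15, 30, "convex")]
print(merge_adjacent_regions(refs))
# [(0, 15, 'concave'), (15, 30, 'convex')]
```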
In one possible implementation, the feature information further includes a key point feature indicating the key points located on dividing lines in the corrugated plate image; this step 305 then includes the following steps 3051-3053:
3051. In response to the confidence corresponding to any reference corrugated region being smaller than a reference confidence, determine at least two target key points located in the region according to the key point feature.
The reference confidence is any preset confidence, such as 0.8 or 0.9. The confidence corresponding to a reference corrugated region is the confidence of the region's type and indicates how accurate that type is.
In this embodiment of the present application, a confidence smaller than the reference confidence indicates that the accuracy of the region's type is low and the region needs to be adjusted; at least two target key points located in the region are therefore determined so that the region can subsequently be adjusted according to them.
In one possible implementation, this step 3051 includes: determining at least two target key points located in the reference corrugated region according to the positions of the key points in the key point feature and the position of the region in the corrugated plate image.
Optionally, the key point feature indicates both the key points located on dividing lines in the corrugated plate image and their confidences; a plurality of reference key points located in the region are then determined according to the key point positions and the region's position, and at least two target key points are selected from them according to their confidences in the key point feature.
The target key points are key points on a dividing line that may exist within the reference corrugated region.
3052. Divide the reference corrugated region along the dividing line formed by the at least two target key points to obtain target corrugated regions.
The at least two target key points form at least one dividing line, and dividing the region along these lines yields at least two target corrugated regions.
In one possible implementation, this step 3052 includes: dividing the reference corrugated region along the dividing line formed by the at least two target key points to obtain at least two divided corrugated regions; determining each divided region's type and the confidence of that type; in response to a divided region's confidence being greater than the reference confidence, taking it as a target corrugated region; and in response to a divided region's confidence being smaller than the reference confidence, repeating the above steps to continue dividing it. The reference corrugated region 604 in the second drawing of fig. 6 needs to be divided according to the target key points; the target corrugated regions 605 and 606 obtained by the division are shown in the first drawing of fig. 6.
3053. In response to the confidence corresponding to any reference corrugated region being greater than the reference confidence, take that region as a target corrugated region.
If the confidence corresponding to a reference corrugated region is greater than the reference confidence, the region's type is considered accurate and the region can be used directly as a target corrugated region.
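A recursive sketch of steps 3051 to 3053 under the same assumptions as above (regions bounded by vertical dividing lines, `pixel_types` an H×W type map, `keypoints` a list of (x, y) positions); the 0.8 threshold and the mean-x splitting rule are illustrative choices, not the patent's.

```python
import numpy as np

REF_CONF = 0.8  # reference confidence; any preset value such as 0.8 or 0.9

def span_type_and_conf(pixel_types, x_left, x_right):
    """Majority type and agreeing-pixel ratio for the column span
    [x_left, x_right) of the per-pixel type map."""
    types = pixel_types[:, x_left:x_right].ravel()
    counts = np.bincount(types)
    rtype = int(np.argmax(counts))
    return rtype, counts[rtype] / types.size

def adjust_region(pixel_types, keypoints, x_left, x_right):
    """Steps 3051-3053 in outline: a high-confidence region is kept as a
    target region; a low-confidence one is split along a vertical dividing
    line placed from the target key points inside it, and each part is
    then re-checked recursively."""
    rtype, conf = span_type_and_conf(pixel_types, x_left, x_right)
    if conf > REF_CONF:
        return [(x_left, x_right, rtype, conf)]    # step 3053
    inside = [x for x, _ in keypoints if x_left < x < x_right]
    split_x = int(round(np.mean(inside))) if inside else x_left
    if split_x <= x_left or split_x >= x_right:    # nothing to split on
        return [(x_left, x_right, rtype, conf)]
    return (adjust_region(pixel_types, keypoints, x_left, split_x) +
            adjust_region(pixel_types, keypoints, split_x, x_right))
```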
It should be noted that the embodiment of the present application describes adjusting the reference corrugated regions by determining the type of each pixel point and of each region; in another embodiment, steps 303 to 305 need not be performed, and the reference corrugated regions can be adjusted according to the type feature in other manners to obtain the target corrugated regions of the corrugated plate image.
It should also be noted that this embodiment only describes segmenting the corrugated plate image to obtain its corrugated regions; in another embodiment, point cloud data of the corrugated plate image can also be extracted and analysed through the feature extraction model to obtain the feature information of the image, from which the corrugated regions are obtained.
With the method provided by this embodiment of the application, corrugated regions are located in the acquired corrugated plate image using image processing and image segmentation techniques, improving the accuracy and robustness of the segmentation. The feature extraction model comprises a high-resolution network together with a dividing line detection sub-model, a type detection sub-model and a key point detection sub-model, and is trained by multi-task learning; the trained model can therefore accurately output the key point feature, the dividing line feature and the type feature of a corrugated plate image. Exploiting the synergy among the three features and combining them avoids the region-positioning errors that arise from using the dividing line feature alone, improving the accuracy of the corrugated regions.
The method also takes into account that adjacent corrugated surfaces in a corrugated plate are of different types and that a dividing line lies between them: when segmenting the corrugated plate image, the dividing line feature and the type feature are obtained to determine the dividing lines in the image and the corrugated-surface type of each pixel point, which enriches the image's features; and among the corrugated regions obtained by dividing along the dividing lines and the per-pixel types, every two adjacent regions correspond to different types, improving the accuracy of the corrugated regions.
Furthermore, segmenting the corrugated plate image through its dividing line feature, type feature and key point feature enriches the features used and improves the accuracy of the corrugated regions.
Based on the above embodiment, fig. 7 shows the flow of image segmentation of a corrugated plate image; as shown in fig. 7, the flow includes:
1. Shoot the corrugated plate with a line-scan camera to obtain a corrugated plate image.
2. Input the corrugated plate image into the feature extraction model: key point detection on the image yields the key point feature, dividing line detection yields the dividing line feature, and type detection yields the type feature.
3. Divide the corrugated plate image into a plurality of reference corrugated regions along the dividing lines indicated by the dividing line feature, and adjust the reference regions according to the key point feature and the type feature to obtain the target corrugated regions of the image and the type of each target region.
It should be noted that the embodiment of the present application only illustrates using the dividing line detection sub-model, the type detection sub-model and the key point detection sub-model; before these sub-models are called, they need to be trained. In this embodiment they are trained synchronously in a multi-task manner, so that the three sub-models promote one another and correlated dividing line, type and key point detection sub-models are obtained. Their training process is as follows:
1. Acquire a sample feature map of a sample corrugated plate image, together with the sample dividing line feature, sample type feature and sample key point feature corresponding to the sample image.
The sample dividing line feature indicates the real dividing lines in the sample corrugated plate image, the sample type feature indicates the real type corresponding to each pixel point in the sample image, and the sample key point feature indicates the real key points in the sample image.
2. Call the dividing line detection sub-model, perform dividing line detection on the sample feature map to obtain a predicted dividing line feature, and train the sub-model according to the predicted and sample dividing line features.
In one possible implementation, when training the dividing line detection sub-model, a first loss value is determined according to the predicted dividing line feature and the sample dividing line feature, and the sub-model is trained according to that loss value.
Optionally, the first loss value is a cross entropy loss value.
Optionally, the predicted dividing line feature, the sample dividing line feature and the first loss value satisfy the following relationship:
L_1 = -\frac{1}{N} \sum_{x,y,c} M_{xyc} \log \hat{M}_{xyc}

where L_1 denotes the first loss value; N denotes the total number of pixel points in the sample corrugated plate image; x and y denote the coordinates of a pixel point in the sample corrugated plate image and identify that pixel point; c denotes the category of the pixel point, indicating whether the pixel point lies on a dividing line; M_{xyc} denotes the true probability that the pixel point at (x, y) belongs to category c; and \hat{M}_{xyc} denotes the predicted probability that the pixel point at (x, y) belongs to category c.
3. Call the type detection sub-model, perform type detection on the sample feature map to obtain a predicted type feature, and train the sub-model according to the predicted and sample type features.
In one possible implementation, when training the type detection sub-model, a second loss value is determined according to the predicted type feature and the sample type feature, and the sub-model is trained according to that loss value.
4. Call the key point detection sub-model, perform key point detection on the sample second feature map to obtain a predicted key point feature, and train the sub-model according to the predicted and sample key point features.
In one possible implementation, when training the key point detection sub-model, a third loss value is determined according to the predicted key point feature and the sample key point feature, and the sub-model is trained according to that loss value.
Optionally, the predicted keypoint feature, the sample keypoint feature and the third loss value satisfy the following relationship:
L_2 = -\frac{1}{N} \sum_{x,y} \begin{cases} (1 - \hat{Z}_{xy})^{\alpha} \log \hat{Z}_{xy}, & Z_{xy} = 1 \\ (1 - Z_{xy})^{\beta} \, \hat{Z}_{xy}^{\alpha} \log(1 - \hat{Z}_{xy}), & Z_{xy} \neq 1 \end{cases}

where L_2 denotes the third loss value; N denotes the total number of pixel points in the sample corrugated plate image; x and y denote the coordinates of a pixel point in the sample corrugated plate image and identify that pixel point; Z_{xy} denotes the true probability that the pixel point at (x, y) is a key point, with Z_{xy} = 1 meaning the pixel point is a real key point and Z_{xy} ≠ 1 meaning it is not; \hat{Z}_{xy} denotes the predicted probability that the pixel point at (x, y) is a key point; and α and β are adjusting parameters that can take any value.
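As a sketch, the two losses above can be written in PyTorch as follows. The focal-style form of the key point loss is a reconstruction consistent with the variable definitions, and the defaults α = 2 and β = 4 are assumptions; the patent leaves α and β as arbitrary adjusting parameters.

```python
import torch
import torch.nn.functional as F

def dividing_line_loss(pred_logits, target):
    """First loss value L1: per-pixel cross entropy between the predicted
    dividing line feature and the sample dividing line feature.
    pred_logits: (B, 2, H, W) scores for the on-line / off-line categories;
    target: (B, H, W) ground-truth category indices (long dtype)."""
    return F.cross_entropy(pred_logits, target)

def keypoint_loss(pred, target, alpha=2.0, beta=4.0):
    """Third loss value L2: focal-style loss over the predicted key point
    heatmap. pred and target are (B, 1, H, W) probability maps; target is
    1 at real key points and < 1 elsewhere."""
    eps = 1e-6                       # numerical safety for the logarithms
    pos = target.eq(1).float()       # mask of real key points
    neg = 1.0 - pos
    pos_term = pos * (1 - pred).pow(alpha) * torch.log(pred + eps)
    neg_term = neg * (1 - target).pow(beta) * pred.pow(alpha) \
                   * torch.log(1 - pred + eps)
    return -(pos_term + neg_term).sum() / pred.numel()
```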
In this embodiment of the application, during the training of the dividing line detection sub-model, the type detection sub-model and the key point detection sub-model, the key points, dividing lines and corrugated surfaces of the corrugated plate image are all used, which improves the robustness of the feature extraction model and avoids the unstable performance that relying on a single kind of information would cause. The multi-task learning strategy, in which the key points, dividing lines and corrugated surfaces respectively train the three sub-models, lets their information cross-reference and mutually reinforce. Outputting the three kinds of information through one unified network structure reduces the model's footprint and the redundant computation during feature extraction, speeding up the feature extraction model. Within the model, a high-resolution network repeatedly applies bottom-up and top-down processing to obtain multi-scale information from each input corrugated plate image, producing more accurate pixel-level predictions and improving the model's accuracy.
The image segmentation method of this embodiment can be applied to defect analysis of corrugated plates. As shown in fig. 8, feature extraction is performed on an acquired corrugated plate image to obtain its key point feature, dividing line feature and type feature. As shown in fig. 9, the corrugated plate image contains a plurality of key points 901 indicated by the key point feature; as shown in fig. 10, it contains a plurality of dividing lines 1001 indicated by the dividing line feature. The image is then segmented through these three features, locating a plurality of corrugated regions in it; as shown in fig. 11, the image contains the located corrugated regions 1101. After the corrugated regions are located, defect analysis is performed on the corrugated plate through them to determine the plate's quality.
Fig. 12 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application, as shown in fig. 12, including:
The feature extraction module 1201 is configured to perform feature extraction on the corrugated plate image to obtain feature information of the corrugated plate image, where the feature information includes a dividing line feature and a type feature, the dividing line feature indicates the dividing lines between different corrugated surfaces in the corrugated plate image, and the type feature indicates the type corresponding to each pixel point in the corrugated plate image, the type being that of the corrugated surface where the pixel point is located;
the image segmentation module 1202 is configured to segment the corrugated plate image into a plurality of reference corrugated regions according to the dividing lines indicated by the dividing line feature;
the region adjustment module 1203 is configured to adjust the plurality of reference corrugated regions according to the type feature to obtain a plurality of target corrugated regions of the corrugated plate image, so that the types of the corrugated surfaces corresponding to every two adjacent target corrugated regions are different.
In one possible implementation, as shown in fig. 13, the image segmentation module 1202 includes:
a position determining unit 1221 for determining positions of a plurality of dividing lines in the corrugated plate image based on the dividing line characteristics;
the area determining unit 1222 is configured to determine an area between every two adjacent dividing lines as one reference ripple area according to the positions of the plurality of dividing lines, so as to obtain a plurality of reference ripple areas.
In another possible implementation, as shown in fig. 13, the area adjustment module 1203 includes:
a first determining unit 1231, configured to determine a type corresponding to each pixel point in the corrugated plate image according to the type feature;
a second determining unit 1232, configured to determine a type corresponding to each reference ripple area according to the pixel point in each reference ripple area and the type corresponding to each pixel point;
the region adjustment unit 1233 is configured to adjust the plurality of reference ripple regions according to the types corresponding to the plurality of reference ripple regions, so as to obtain a plurality of target ripple regions.
In another possible implementation manner, as shown in fig. 13, the area adjusting unit 1233 includes:
the region merging subunit 12331 is configured to, in response to the same type corresponding to any two adjacent reference ripple regions, merge any two reference ripple regions to obtain a target ripple region;
the first determining subunit 12332 is configured to determine, as the target ripple region, any reference ripple region in response to a difference in a type of any reference ripple region corresponding to an adjacent reference ripple region.
In another possible implementation manner, the second determining unit 1232 is configured to determine the type and the confidence level of the type corresponding to each reference ripple area according to the pixel point in each reference ripple area and the type corresponding to each pixel point.
In another possible implementation manner, the feature information further includes a key point feature, wherein the key point feature is used for indicating a key point located on a dividing line in the corrugated plate image;
as shown in fig. 13, the region adjustment unit 1233 includes:
a second determining subunit 12333, configured to determine at least two target keypoints located within the reference ripple region according to the keypoint feature in response to the confidence level corresponding to any reference ripple region being less than the reference confidence level;
the segmentation processing subunit 12334 is configured to perform segmentation processing on the reference ripple region according to a segmentation line formed by at least two key points, so as to obtain a target ripple region.
In another possible implementation, the second determining subunit 12333 is configured to determine at least two target keypoints located within the reference corrugated region according to the positions of the plurality of keypoints in the keypoint feature and the positions of the reference corrugated region in the corrugated plate image.
In another possible implementation, the key point feature is used to indicate the key points located on the segmentation line and the confidence of the key points in the corrugated plate image, and the second determining subunit 12333 is used to determine a plurality of reference key points located in the reference corrugated region according to the positions of the plurality of key points in the key point feature and the positions of the reference corrugated region in the corrugated plate image; and selecting at least two target key points from the plurality of reference key points according to the confidence degrees of the plurality of reference key points in the key point characteristics.
In another possible implementation manner, as shown in fig. 13, the area adjusting unit 1233 further includes:
the third determining subunit 12335 is configured to, in response to the confidence level corresponding to any one of the reference ripple regions being greater than the reference confidence level, set any one of the reference ripple regions as the target ripple region.
In another possible implementation manner, as shown in fig. 13, the second determining unit 1232 includes:
a fourth determining subunit 12321, configured to determine, for any reference ripple area, a type corresponding to each pixel point in the reference ripple area according to the pixel point in the reference ripple area and the type corresponding to each pixel point;
a fifth determining subunit 12322 is configured to determine a type corresponding to the reference ripple area according to the type corresponding to each pixel point in the reference ripple area.
In another possible implementation manner, the fifth determining subunit 12322 is configured to determine, according to the type corresponding to each pixel point in the reference ripple area, the type with the largest number of corresponding pixel points as the type corresponding to the reference ripple area.
In another possible implementation, as shown in fig. 13, the apparatus further includes:
a number obtaining module 1204, configured to obtain, in the reference ripple area, the number of pixels having the same type as the type corresponding to the reference ripple area;
The confidence determining module 1205 is configured to use a ratio between the number of pixels and the total number of pixels in the reference ripple area as a confidence of a type corresponding to the reference ripple area.
In another possible implementation manner, the type feature includes a confidence that a corrugated surface where each pixel point in the corrugated image is located belongs to each type;
the first determining unit 1231 includes:
the type determining subunit 12311 is configured to determine, for any pixel, a type corresponding to the maximum confidence, as a type to which the corrugated surface where the pixel is located belongs, according to the confidence that the corrugated surface where the pixel is located belongs to each type.
In another possible implementation, as shown in fig. 13, the feature extraction module 1201 includes:
the feature extraction unit 1211 is configured to invoke the feature extraction model, and perform feature extraction on the corrugated plate image, so as to obtain feature information.
In another possible implementation, the feature extraction model includes a feature extraction sub-model, a scale conversion sub-model, and a feature detection sub-model;
as shown in fig. 13, the feature extraction unit 1211 includes:
a first extraction subunit 12111, configured to invoke the feature extraction sub-model, and perform feature extraction on the corrugated plate image, so as to obtain a first feature map of the corrugated plate image;
A second extraction subunit 12112, configured to invoke the scaling sub-model, and scale-convert the first feature map to obtain a second feature map of the corrugated plate image;
the third extraction subunit 12113 is configured to invoke the feature detection sub-model to perform feature detection on the second feature map, so as to obtain feature information.
In another possible implementation, the feature detection sub-model includes a dividing line detection sub-model and a type detection sub-model;
the third extraction subunit 12113 is configured to call the dividing line detection sub-model to perform dividing line detection on the second feature map to obtain the dividing line feature, and to call the type detection sub-model to perform type detection on the second feature map to obtain the type feature.
In another possible implementation, the feature detection sub-model further includes a keypoint detection sub-model; the third extraction subunit 12113 is further configured to invoke the keypoint detection submodel to perform keypoint detection on the second feature map, thereby obtaining a keypoint feature.
In another possible implementation manner, the second extracting subunit 12112 is configured to invoke the scaling sub-model, and scale-convert the first feature map to obtain a plurality of scale reference feature maps corresponding to the first feature map; and carrying out fusion processing on the reference feature images of the multiple scales to obtain a second feature image of the corrugated plate image.
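To make the structure described above concrete, here is a minimal PyTorch sketch of a feature extraction model with a backbone, a scale-conversion stage that fuses multi-scale reference feature maps, and three detection heads sharing the fused second feature map. The layer widths, the plain convolutional backbone standing in for the high-resolution network, and the 1×1-conv head designs are all illustrative assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

class FeatureExtractionModel(nn.Module):
    """Sketch: backbone -> scale conversion (multi-scale fusion) -> three
    detection heads for dividing line, type and key point features."""

    def __init__(self, n_types=4):
        super().__init__()
        self.backbone = nn.Sequential(            # feature extraction sub-model
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.down = nn.Conv2d(32, 32, 3, stride=2, padding=1)  # extra scale
        self.fuse = nn.Conv2d(64, 32, 1)          # scale conversion sub-model
        self.line_head = nn.Conv2d(32, 2, 1)      # dividing line detection
        self.type_head = nn.Conv2d(32, n_types, 1)  # type detection
        self.kpt_head = nn.Conv2d(32, 1, 1)       # key point detection

    def forward(self, image):
        f1 = self.backbone(image)                 # first feature map
        f_small = self.down(f1)                   # half-scale reference map
        f_up = nn.functional.interpolate(
            f_small, size=f1.shape[-2:], mode="bilinear", align_corners=False)
        f2 = self.fuse(torch.cat([f1, f_up], dim=1))  # second feature map
        return (self.line_head(f2),               # dividing line feature
                self.type_head(f2),               # type feature
                torch.sigmoid(self.kpt_head(f2))) # key point feature

# One forward pass on a dummy corrugated plate image.
model = FeatureExtractionModel()
line_f, type_f, kpt_f = model(torch.randn(1, 3, 64, 64))
print(line_f.shape, type_f.shape, kpt_f.shape)
```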
It should be noted that the image segmentation apparatus provided in the above embodiment is illustrated only by the division into the functional modules described above; in practical applications, these functions can be allocated to different functional modules as required, that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the functions described above. In addition, the image segmentation apparatus and the image segmentation method provided by the above embodiments belong to the same concept; the specific implementation process is detailed in the method embodiments and is not repeated here.
The present application also provides a computer apparatus including a processor and a memory, the memory storing at least one program code loaded and executed by the processor to implement the operations performed in the image segmentation method of the above embodiments.
Optionally, the computer device is provided as a terminal. Fig. 14 shows a block diagram of a terminal 1400 provided by an exemplary embodiment of the present application. The terminal 1400 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The electronic device 1400 may also be referred to as a user device, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
The electronic device 1400 includes: a processor 1401 and a memory 1402.
Processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1401 may be implemented in at least one hardware form of DSP (Digital Signal Processing ), FPGA (Field-Programmable Gate Array, field programmable gate array), PLA (Programmable Logic Array ). The processor 1401 may also include a main processor, which is a processor for processing data in an awake state, also called a CPU (Central Processing Unit ), and a coprocessor; a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1401 may be integrated with a GPU (Graphics Processing Unit, image processor) for taking care of rendering and rendering of content that the display screen is required to display. In some embodiments, the processor 1401 may also include an AI (Artificial Intelligence ) processor for processing computing operations related to machine learning.
Memory 1402 may include one or more computer-readable storage media, which may be non-transitory. Memory 1402 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1402 is used to store at least one program code for execution by processor 1401 to implement the image segmentation methods provided by the method embodiments herein.
In some embodiments, the electronic device 1400 may further optionally include: a peripheral interface 1403 and at least one peripheral. The processor 1401, memory 1402, and peripheral interface 1403 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1403 via buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1404, a display screen 1405, a camera assembly 1406, an audio circuit 1407, a positioning assembly 1408, and a power source 1409.
Peripheral interface 1403 may be used to connect at least one Input/Output (I/O) related peripheral to processor 1401 and memory 1402. In some embodiments, processor 1401, memory 1402, and peripheral interface 1403 are integrated on the same chip or circuit board; in some other embodiments, either or both of processor 1401, memory 1402, and peripheral interface 1403 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1404 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1404 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1404 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1404 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuit 1404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the world wide web, metropolitan area networks, intranets, generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity ) networks. In some embodiments, the radio frequency circuit 1404 may also include NFC (Near Field Communication, short range wireless communication) related circuits, which are not limited in this application.
The display screen 1405 is used to display UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1405 is a touch display screen, the display screen 1405 also has the ability to collect touch signals at or above the surface of the display screen 1405. The touch signal may be input to the processor 1401 as a control signal for processing. At this time, the display 1405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1405 may be one, disposed on the front panel of the electronic device 1400; in other embodiments, the display 1405 may be at least two, respectively disposed on different surfaces of the electronic device 1400 or in a folded design; in other embodiments, the display 1405 may be a flexible display disposed on a curved surface or a folded surface of the electronic device 1400. Even more, the display 1405 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The display 1405 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 1406 is used to capture images or video. Optionally, the camera assembly 1406 includes a front camera and a rear camera. The front camera is arranged on the front panel of the terminal, and the rear camera on its back. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera can be fused with the depth-of-field camera for a background blurring function, or with the wide-angle camera for panoramic shooting and Virtual Reality (VR) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 1406 may also include a flash, which can be a single color temperature flash or a dual color temperature flash; a dual color temperature flash combines a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuitry 1407 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1401 for processing, or inputting the electric signals to the radio frequency circuit 1404 for voice communication. For purposes of stereo acquisition or noise reduction, the microphone may be multiple, and disposed at different locations of the electronic device 1400. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1401 or the radio frequency circuit 1404 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuitry 1407 may also include a headphone jack.
The locating component 1408 is used to locate the current geographic location of the electronic device 1400 to enable navigation or LBS (Location Based Service, location-based services). The positioning component 1408 may be a positioning component based on the united states GPS (Global Positioning System ), the chinese beidou system, or the russian galileo system.
The power supply 1409 is used to power the various components in the electronic device 1400. The power supply 1409 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1409 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 1400 also includes one or more sensors 1410. The one or more sensors 1410 include, but are not limited to: acceleration sensor 1411, gyroscope sensor 1412, pressure sensor 1413, fingerprint sensor 1414, optical sensor 1415, and proximity sensor 1416.
The acceleration sensor 1411 may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the electronic device 1400. For example, the acceleration sensor 1411 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1401 may control the display screen 1405 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal acquired by the acceleration sensor 1411. The acceleration sensor 1411 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1412 may detect a body direction and a rotation angle of the electronic device 1400, and the gyro sensor 1412 may collect a 3D motion of the user on the electronic device 1400 in cooperation with the acceleration sensor 1411. The processor 1401 may implement the following functions based on the data collected by the gyro sensor 1412: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 1413 may be disposed on a side frame of the electronic device 1400 and/or on an underside of the display 1405. When the pressure sensor 1413 is disposed at a side frame of the electronic device 1400, a grip signal of the electronic device 1400 by a user may be detected, and the processor 1401 performs a left-right hand recognition or a quick operation according to the grip signal collected by the pressure sensor 1413. When the pressure sensor 1413 is disposed at the lower layer of the display screen 1405, the processor 1401 realizes control of the operability control on the UI interface according to the pressure operation of the user on the display screen 1405. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1414 is used to collect a fingerprint of a user, and the processor 1401 identifies the identity of the user based on the fingerprint collected by the fingerprint sensor 1414, or the fingerprint sensor 1414 identifies the identity of the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the user is authorized by the processor 1401 to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 1414 may be disposed on the front, back, or side of the electronic device 1400. When a physical key or vendor Logo is provided on the electronic device 1400, the fingerprint sensor 1414 may be integrated with the physical key or vendor Logo.
The optical sensor 1415 is used to collect the ambient light intensity. In one embodiment, processor 1401 may control the display brightness of display screen 1405 based on the intensity of ambient light collected by optical sensor 1415. Specifically, when the intensity of the ambient light is high, the display luminance of the display screen 1405 is turned high; when the ambient light intensity is low, the display luminance of the display screen 1405 is turned down. In another embodiment, the processor 1401 may also dynamically adjust the shooting parameters of the camera assembly 1406 based on the ambient light intensity collected by the optical sensor 1415.
A proximity sensor 1416, also referred to as a distance sensor, is provided on the front panel of the electronic device 1400. The proximity sensor 1416 is used to capture the distance between the user and the front of the electronic device 1400. In one embodiment, when the proximity sensor 1416 detects a gradual decrease in the distance between the user and the front of the electronic device 1400, the processor 1401 controls the display 1405 to switch from the on-screen state to the off-screen state; when the proximity sensor 1416 detects that the distance between the user and the front of the electronic device 1400 gradually increases, the display 1405 is controlled by the processor 1401 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 14 is not limiting of the electronic device 1400 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
Optionally, the computer device is provided as a server. Fig. 15 is a schematic structural diagram of a server provided by an embodiment of the present application. The server 1500 may vary considerably in configuration and performance, and may include one or more processors (Central Processing Units, CPU) 1501 and one or more memories 1502, where the memories 1502 store at least one program code that is loaded and executed by the processors 1501 to implement the methods provided by the above method embodiments. Of course, the server may also have a wired or wireless network interface, a keyboard, an input/output interface and other components for implementing the functions of the device, which are not described in detail here.
The server 1500 may be used to perform the steps performed by the server in the image segmentation method described above.
The present application also provides a computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to implement the operations performed in the image segmentation method of the above-described embodiments.
Embodiments of the present application also provide a computer program product or computer program comprising computer program code stored in a computer readable storage medium. The processor of the computer device reads the computer program code from the computer readable storage medium, and the processor executes the computer program code so that the computer device realizes the operations performed in the image segmentation method of the above-described embodiment.
Those of ordinary skill in the art will appreciate that all or a portion of the steps implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the above storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the embodiments is merely an optional embodiment and is not intended to limit the embodiments, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principles of the embodiments of the present application are intended to be included in the scope of the present application.

Claims (15)

1. An image segmentation method, the method comprising:
Extracting characteristics of a corrugated plate image to obtain characteristic information of the corrugated plate image, wherein the characteristic information comprises dividing line characteristics and type characteristics, the dividing line characteristics are used for indicating dividing lines between different corrugated surfaces in the corrugated plate image, the type characteristics are used for indicating types corresponding to each pixel point in the corrugated plate image, and the types are types of the corrugated surfaces where the pixel points are located;
dividing the corrugated plate image into a plurality of reference corrugated areas according to the dividing lines indicated by the dividing line characteristics;
adjusting the plurality of reference corrugated areas according to the type characteristics to obtain a plurality of target corrugated areas of the corrugated plate image, so that the types of the corrugated surfaces corresponding to every two adjacent target corrugated areas are different.
2. The method of claim 1, wherein the segmenting the corrugated board image into a plurality of reference corrugated regions according to the segmentation lines indicated by the segmentation line features comprises:
determining the positions of a plurality of dividing lines in the corrugated plate image according to the dividing line characteristics;
and determining the area between every two adjacent dividing lines as a reference ripple area according to the positions of the dividing lines to obtain the plurality of reference ripple areas.
3. The method according to claim 1, wherein the adjusting the plurality of reference corrugated regions according to the type feature to obtain a plurality of target corrugated regions of the corrugated plate image comprises:
determining, according to the type feature, the type corresponding to each pixel point in the corrugated plate image;
determining the type corresponding to each reference corrugated region according to the pixel points in each reference corrugated region and the type corresponding to each pixel point; and
adjusting the plurality of reference corrugated regions according to the types corresponding to the plurality of reference corrugated regions to obtain the plurality of target corrugated regions.
4. The method according to claim 3, wherein the adjusting the plurality of reference corrugated regions according to the types corresponding to the plurality of reference corrugated regions to obtain the plurality of target corrugated regions comprises:
in response to any two adjacent reference corrugated regions corresponding to the same type, merging the two reference corrugated regions to obtain one target corrugated region; and
in response to any reference corrugated region corresponding to a type different from that of its adjacent reference corrugated regions, determining that reference corrugated region as a target corrugated region.
5. The method according to claim 3, wherein the determining the type corresponding to each reference corrugated region according to the pixel points in each reference corrugated region and the type corresponding to each pixel point comprises:
determining, according to the pixel points in each reference corrugated region and the type corresponding to each pixel point, the type corresponding to each reference corrugated region and the confidence of that type.
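For illustration only: one plausible realisation of claim 5 (and of the majority rule that claims 8 and 9 below spell out) takes a region's type to be the most frequent pixel type inside it, with the confidence taken as that type's share of the region's pixels. The vote-share definition of confidence is an assumption of this sketch, not something the claim fixes.

```python
import numpy as np

def region_type_and_confidence(type_map: np.ndarray,
                               region_mask: np.ndarray) -> tuple:
    """Majority type of a region plus a vote-share confidence."""
    votes = np.bincount(type_map[region_mask])   # pixel votes per type
    majority = int(votes.argmax())               # claim 9's majority rule
    confidence = float(votes[majority]) / float(votes.sum())
    return majority, confidence

# 4 x 4 toy type map; the mask selects its left half.
tm = np.array([[0, 0, 1, 1]] * 4)
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
print(region_type_and_confidence(tm, mask))  # -> (0, 1.0)
```

A region straddling two surfaces would return a confidence near 0.5, which is what makes the low-confidence re-splitting of claims 6 and 7 useful.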
6. The method according to claim 5, wherein the feature information further comprises a key point feature, the key point feature indicating key points located on the dividing lines in the corrugated plate image; and
the adjusting the plurality of reference corrugated regions according to the types corresponding to the plurality of reference corrugated regions to obtain the plurality of target corrugated regions comprises:
in response to the confidence corresponding to any reference corrugated region being smaller than a reference confidence, determining at least two target key points located within the reference corrugated region according to the key point feature; and
segmenting the reference corrugated region according to a dividing line formed by the at least two target key points to obtain target corrugated regions.
7. The method according to claim 6, wherein the determining at least two target key points located within the reference corrugated region according to the key point feature comprises:
determining the at least two target key points located within the reference corrugated region according to the positions of a plurality of key points in the key point feature and the position of the reference corrugated region in the corrugated plate image.
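For illustration only: a minimal reading of claims 6 and 7, again assuming vertical corrugations so that a reference region is a column span. Key points are filtered by whether they fall inside the low-confidence region, and the region is re-split along the dividing line those points trace out; placing that line at the mean x-coordinate of the selected points is an assumption of this sketch.

```python
from typing import List, Tuple

Point = Tuple[int, int]   # (x, y) position of a detected key point
Span = Tuple[int, int]    # (x_start, x_end) of a reference region

def keypoints_in_region(keypoints: List[Point], region: Span) -> List[Point]:
    """Claim 7: select the key points located within the reference region."""
    x0, x1 = region
    return [p for p in keypoints if x0 <= p[0] < x1]

def split_region(region: Span, pts: List[Point]) -> List[Span]:
    """Claim 6: re-split a low-confidence region along the dividing line
    formed by the target key points."""
    if len(pts) < 2:          # claim 6 requires at least two key points
        return [region]
    x_split = sum(p[0] for p in pts) // len(pts)   # mean-x dividing line
    return [(region[0], x_split), (x_split, region[1])]

pts = keypoints_in_region([(30, 5), (32, 60), (80, 10)], (20, 50))
print(split_region((20, 50), pts))   # -> [(20, 31), (31, 50)]
```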
8. The method according to claim 3, wherein the determining the type corresponding to each reference corrugated region according to the pixel points in each reference corrugated region and the type corresponding to each pixel point comprises:
for any reference corrugated region, determining the type corresponding to each pixel point in the reference corrugated region according to the pixel points in the reference corrugated region and the types corresponding to those pixel points; and
determining the type corresponding to the reference corrugated region according to the type corresponding to each pixel point in the reference corrugated region.
9. The method according to claim 8, wherein the determining the type corresponding to the reference corrugated region according to the type corresponding to each pixel point in the reference corrugated region comprises:
determining, according to the type corresponding to each pixel point in the reference corrugated region, the type corresponding to the largest number of pixel points as the type corresponding to the reference corrugated region.
10. The method according to claim 1, wherein the extracting features of the corrugated plate image to obtain feature information of the corrugated plate image comprises:
invoking a feature extraction model to extract features of the corrugated plate image to obtain the feature information.
11. The method according to claim 10, wherein the feature extraction model comprises a feature extraction sub-model, a scale conversion sub-model, and a feature detection sub-model; and
the invoking a feature extraction model to extract features of the corrugated plate image to obtain the feature information comprises:
invoking the feature extraction sub-model to extract features of the corrugated plate image to obtain a first feature map of the corrugated plate image;
invoking the scale conversion sub-model to perform scale conversion on the first feature map to obtain a second feature map of the corrugated plate image; and
invoking the feature detection sub-model to perform feature detection on the second feature map to obtain the feature information.
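For illustration only: a toy composition of the three sub-models named in claim 11, written in PyTorch. Every layer choice here is invented for the sketch; only the ordering (feature extraction, then scale conversion, then feature detection producing the dividing line, type, and key point features) comes from the claims.

```python
import torch
import torch.nn as nn

class ToyFeatureExtractionModel(nn.Module):
    """Backbone -> scale conversion -> detection heads (cf. claim 11)."""

    def __init__(self, num_types: int = 3):
        super().__init__()
        # Sub-model 1: feature extraction -> first feature map.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        # Sub-model 2: scale conversion -> second feature map
        # (a single conv stands in here; see the sketch after claim 12).
        self.scale_conv = nn.Conv2d(64, 64, 3, padding=1)
        # Sub-model 3: feature detection -> the three feature maps the
        # claims use: dividing lines, per-pixel types, key points.
        self.line_head = nn.Conv2d(64, 1, 1)
        self.type_head = nn.Conv2d(64, num_types, 1)
        self.keypoint_head = nn.Conv2d(64, 1, 1)

    def forward(self, image: torch.Tensor):
        first = self.backbone(image)        # first feature map
        second = self.scale_conv(first)     # second feature map
        return (self.line_head(second),
                self.type_head(second),
                self.keypoint_head(second))

lines, types, keys = ToyFeatureExtractionModel()(torch.randn(1, 3, 64, 64))
print(lines.shape, types.shape, keys.shape)
```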
12. The method according to claim 11, wherein the invoking the scale conversion sub-model to perform scale conversion on the first feature map to obtain a second feature map of the corrugated plate image comprises:
invoking the scale conversion sub-model to perform scale conversion processing on the first feature map to obtain reference feature maps of a plurality of scales corresponding to the first feature map; and
performing fusion processing on the reference feature maps of the plurality of scales to obtain the second feature map of the corrugated plate image.
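For illustration only: one common way to realise the scale conversion and fusion of claim 12 is a small feature pyramid: downsample the first feature map to several scales, bring each reference map back to the input resolution, concatenate, and mix with a 1x1 convolution. The pooling, interpolation, and concatenation choices are assumptions of this sketch, not dictated by the claim.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleFusion(nn.Module):
    """Multi-scale reference maps fused into a second feature map."""

    def __init__(self, channels: int = 64, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.fuse = nn.Conv2d(channels * len(scales), channels, kernel_size=1)

    def forward(self, first_feature_map: torch.Tensor) -> torch.Tensor:
        h, w = first_feature_map.shape[-2:]
        pyramid = []
        for s in self.scales:
            # Reference feature map at 1/s scale ...
            down = (F.avg_pool2d(first_feature_map, kernel_size=s)
                    if s > 1 else first_feature_map)
            # ... brought back to the input resolution for fusion.
            pyramid.append(F.interpolate(down, size=(h, w),
                                         mode="bilinear", align_corners=False))
        # Fusion: concatenate along channels and mix with a 1x1 conv.
        return self.fuse(torch.cat(pyramid, dim=1))

x = torch.randn(1, 64, 32, 32)
print(ScaleFusion()(x).shape)   # -> torch.Size([1, 64, 32, 32])
```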
13. An image segmentation apparatus, the apparatus comprising:
a feature extraction module, configured to extract features of a corrugated plate image to obtain feature information of the corrugated plate image, wherein the feature information comprises a dividing line feature and a type feature, the dividing line feature indicating dividing lines between different corrugated surfaces in the corrugated plate image, and the type feature indicating a type corresponding to each pixel point in the corrugated plate image, the type being the type of the corrugated surface where the pixel point is located;
an image segmentation module, configured to segment the corrugated plate image into a plurality of reference corrugated regions according to the dividing lines indicated by the dividing line feature; and
a region adjustment module, configured to adjust the plurality of reference corrugated regions according to the type feature to obtain a plurality of target corrugated regions of the corrugated plate image, so that the types of the corrugated surfaces corresponding to every two adjacent target corrugated regions are different.
14. A computer device comprising a processor and a memory, the memory storing at least one program code that is loaded and executed by the processor to implement the operations performed in the image segmentation method of any one of claims 1 to 12.
15. A computer readable storage medium having stored therein at least one program code loaded and executed by a processor to implement the operations performed in the image segmentation method of any one of claims 1 to 12.
CN202011079909.2A 2020-10-10 2020-10-10 Image segmentation method, device, computer equipment and storage medium Active CN112053360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011079909.2A CN112053360B (en) 2020-10-10 2020-10-10 Image segmentation method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112053360A (en) 2020-12-08
CN112053360B (en) 2023-07-25

Family

ID=73606056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011079909.2A Active CN112053360B (en) 2020-10-10 2020-10-10 Image segmentation method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112053360B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223020B (en) * 2021-05-21 2024-03-26 深圳乐居智能电子有限公司 Partition method and device for cleaning area and cleaning equipment
CN113223019B (en) * 2021-05-21 2024-03-26 深圳乐居智能电子有限公司 Partition method and device for cleaning area and cleaning equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5247583A (en) * 1989-11-01 1993-09-21 Hitachi, Ltd. Image segmentation method and apparatus therefor
CN101826204A (en) * 2009-03-04 2010-09-08 中国人民解放军63976部队 Quick particle image segmentation method based on improved waterline algorithm
CN107767412A (en) * 2017-09-11 2018-03-06 西安中兴新软件有限责任公司 A kind of image processing method and device
CN109523553A (en) * 2018-11-13 2019-03-26 华际科工(北京)卫星通信科技有限公司 A kind of container unusual fluctuation monitoring method based on LSD straight-line detection partitioning algorithm
CN111091572A (en) * 2019-12-18 2020-05-01 上海众源网络有限公司 Image processing method and device, electronic equipment and storage medium
WO2020134010A1 (en) * 2018-12-27 2020-07-02 北京字节跳动网络技术有限公司 Training of image key point extraction model and image key point extraction
CN111445486A (en) * 2020-03-25 2020-07-24 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN111754513A (en) * 2020-08-07 2020-10-09 腾讯科技(深圳)有限公司 Product surface defect segmentation method, defect segmentation model learning method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10330608B2 (en) * 2012-05-11 2019-06-25 Kla-Tencor Corporation Systems and methods for wafer surface feature detection, classification and quantification with wafer geometry metrology tools

Also Published As

Publication number Publication date
CN112053360A (en) 2020-12-08

Similar Documents

Publication Title
CN111091132B (en) Image recognition method and device based on artificial intelligence, computer equipment and medium
CN109299315B (en) Multimedia resource classification method and device, computer equipment and storage medium
CN110059685B (en) Character area detection method, device and storage medium
CN110807361B (en) Human body identification method, device, computer equipment and storage medium
CN111489378B (en) Video frame feature extraction method and device, computer equipment and storage medium
CN110059652B (en) Face image processing method, device and storage medium
CN112749613B (en) Video data processing method, device, computer equipment and storage medium
CN111931877B (en) Target detection method, device, equipment and storage medium
CN111897996B (en) Topic label recommendation method, device, equipment and storage medium
CN110162604B (en) Statement generation method, device, equipment and storage medium
CN112581358B (en) Training method of image processing model, image processing method and device
CN111209377B (en) Text processing method, device, equipment and medium based on deep learning
CN111104980B (en) Method, device, equipment and storage medium for determining classification result
CN110991457B (en) Two-dimensional code processing method and device, electronic equipment and storage medium
CN111178343A (en) Multimedia resource detection method, device, equipment and medium based on artificial intelligence
CN112053360B (en) Image segmentation method, device, computer equipment and storage medium
CN110991445B (en) Vertical text recognition method, device, equipment and medium
CN111325220A (en) Image generation method, device, equipment and storage medium
CN112818979B (en) Text recognition method, device, equipment and storage medium
CN110990728B (en) Method, device, equipment and storage medium for managing interest point information
CN111639639B (en) Method, device, equipment and storage medium for detecting text area
CN110232417B (en) Image recognition method and device, computer equipment and computer readable storage medium
CN111611414B (en) Vehicle searching method, device and storage medium
CN113378705B (en) Lane line detection method, device, equipment and storage medium
CN113343709B (en) Method for training intention recognition model, method, device and equipment for intention recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40035791

Country of ref document: HK

GR01 Patent grant