CN112053360A - Image segmentation method and device, computer equipment and storage medium


Info

Publication number
CN112053360A
Authority
CN
China
Prior art keywords: corrugated, type, feature, plate image, region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011079909.2A
Other languages
Chinese (zh)
Inventor
郭双双
龚星
李斌
陈会娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011079909.2A
Publication of CN112053360A

Classifications

    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06K 9/46 Extraction of features or characteristics of the image
    • G06N 3/0454 Architectures, e.g. interconnection topology, using a combination of multiple neural nets
    • G06N 3/08 Learning methods
    • G06T 7/0004 Industrial image inspection
    • G06T 7/11 Region-based segmentation
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30161 Wood; Lumber
    • G06T 2207/30168 Image quality inspection

Abstract

The embodiments of this application disclose an image segmentation method and apparatus, a computer device, and a storage medium, belonging to the field of computer technology. The method comprises the following steps: performing feature extraction on a corrugated plate image to obtain feature information of the corrugated plate image, the feature information comprising a dividing line feature and a type feature; segmenting the corrugated plate image into a plurality of reference corrugated regions according to the dividing lines indicated by the dividing line feature; and adjusting the plurality of reference corrugated regions according to the type feature to obtain a plurality of target corrugated regions of the corrugated plate image, such that every two adjacent target corrugated regions correspond to corrugated surfaces of different types. By obtaining the dividing line feature and the type feature of the corrugated plate image, the dividing lines in the image and the type of corrugated surface to which each pixel point belongs are determined, so that the image is segmented into a plurality of corrugated regions according to both the dividing lines and the per-pixel types, thereby improving the accuracy of the corrugated regions.

Description

Image segmentation method and device, computer equipment and storage medium
Technical Field
The embodiments of this application relate to the field of computer technology, and in particular to an image segmentation method and apparatus, a computer device, and a storage medium.
Background
A corrugated plate is a plate with a corrugated shape. It carries a plurality of corrugated surfaces; different corrugated surfaces do not lie in the same plane, and the surfaces are arranged according to a fixed rule to form the corrugated shape. Corrugated plate has a wide range of applications, for example in decorative panels and shipping containers. Before a corrugated plate is used, it is usually necessary to perform a quality analysis of its corrugations to ensure the quality of the product made from it.
In the related art, a corrugated plate image is subjected to plane segmentation processing to obtain a plurality of corrugated regions of the image, and quality analysis is then performed on those regions. Because the corrugated regions are obtained only by this simple plane segmentation processing, the accuracy of the corrugated regions is poor.
Disclosure of Invention
The embodiments of this application provide an image segmentation method and apparatus, a computer device, and a storage medium, which can improve the accuracy of corrugated regions. The technical solution is as follows:
in one aspect, an image segmentation method is provided, and the method includes:
performing feature extraction on a corrugated plate image to obtain feature information of the corrugated plate image, wherein the feature information comprises a dividing line feature and a type feature, the dividing line feature is used for indicating dividing lines between different corrugated surfaces in the corrugated plate image, the type feature is used for indicating a type corresponding to each pixel point in the corrugated plate image, and the type is the type of the corrugated surface to which the pixel point belongs;
segmenting the corrugated plate image into a plurality of reference corrugated regions according to the dividing lines indicated by the dividing line feature;
and adjusting the plurality of reference corrugated regions according to the type feature to obtain a plurality of target corrugated regions of the corrugated plate image, such that every two adjacent target corrugated regions correspond to corrugated surfaces of different types.
In one possible implementation, the determining at least two target key points located within the reference corrugated region according to the positions of a plurality of key points in the key point feature and the position of the reference corrugated region in the corrugated plate image includes:
determining a plurality of reference key points located within the reference corrugated region according to the positions of the plurality of key points in the key point feature and the position of the reference corrugated region in the corrugated plate image;
and selecting the at least two target key points from the plurality of reference key points according to the confidences of the plurality of reference key points in the key point feature.
In another possible implementation, the adjusting the plurality of reference corrugated regions according to the types corresponding to the plurality of reference corrugated regions to obtain the plurality of target corrugated regions further includes:
in response to the confidence corresponding to any reference corrugated region being greater than a reference confidence, taking that reference corrugated region as a target corrugated region.
In another possible implementation, after determining the type corresponding to a reference corrugated region according to the type corresponding to each pixel point in the reference corrugated region, the method further includes:
acquiring the number of pixel points in the reference corrugated region whose type is the same as the type corresponding to the reference corrugated region;
and taking the ratio of that number of pixel points to the total number of pixel points in the reference corrugated region as the confidence of the type corresponding to the reference corrugated region.
In another possible implementation, the type feature includes the confidence that the corrugated surface on which each pixel point in the corrugated plate image is located belongs to each type;
the determining the type corresponding to each pixel point in the corrugated plate image according to the type feature includes:
for any pixel point, determining the type corresponding to the maximum confidence as the type of the corrugated surface to which the pixel point belongs, according to the confidence that the corrugated surface on which the pixel point is located belongs to each type.
In another possible implementation, the feature detection submodel includes a dividing line detection submodel and a type detection submodel;
the calling the feature detection submodel to perform feature detection on the second feature map to obtain the feature information includes:
calling the dividing line detection submodel to perform dividing line detection on the second feature map to obtain the dividing line feature;
and calling the type detection submodel to perform type detection on the second feature map to obtain the type feature.
In another possible implementation, the feature detection submodel further includes a key point detection submodel; the calling the feature detection submodel to perform feature detection on the second feature map to obtain the feature information further includes:
calling the key point detection submodel to perform key point detection on the second feature map to obtain the key point feature.
In another aspect, an image segmentation apparatus is provided, the apparatus comprising:
a feature extraction module, configured to perform feature extraction on a corrugated plate image to obtain feature information of the corrugated plate image, wherein the feature information comprises a dividing line feature and a type feature, the dividing line feature is used for indicating dividing lines between different corrugated surfaces in the corrugated plate image, the type feature is used for indicating a type corresponding to each pixel point in the corrugated plate image, and the type is the type of the corrugated surface to which the pixel point belongs;
an image segmentation module, configured to segment the corrugated plate image into a plurality of reference corrugated regions according to the dividing lines indicated by the dividing line feature;
and a region adjusting module, configured to adjust the plurality of reference corrugated regions according to the type feature to obtain a plurality of target corrugated regions of the corrugated plate image, such that every two adjacent target corrugated regions correspond to corrugated surfaces of different types.
In one possible implementation, the image segmentation module includes:
a position determining unit, configured to determine the positions of a plurality of dividing lines in the corrugated plate image according to the dividing line feature;
and a region determining unit, configured to determine the region between every two adjacent dividing lines as a reference corrugated region according to the positions of the plurality of dividing lines, to obtain the plurality of reference corrugated regions.
In another possible implementation, the region adjusting module includes:
a first determining unit, configured to determine, according to the type feature, the type corresponding to each pixel point in the corrugated plate image;
a second determining unit, configured to respectively determine the type corresponding to each reference corrugated region according to the pixel points in each reference corrugated region and the type corresponding to each pixel point;
and a region adjusting unit, configured to adjust the plurality of reference corrugated regions according to the types corresponding to the plurality of reference corrugated regions, to obtain the plurality of target corrugated regions.
In another possible implementation, the region adjusting unit includes:
a region merging subunit, configured to merge any two adjacent reference corrugated regions into one target corrugated region in response to the types corresponding to those two reference corrugated regions being the same;
and a first determining subunit, configured to determine any reference corrugated region as a target corrugated region in response to the type of that reference corrugated region being different from the types of its adjacent reference corrugated regions.
In another possible implementation, the second determining unit is configured to determine the type corresponding to each reference corrugated region and the confidence of that type according to the pixel points in each reference corrugated region and the type corresponding to each pixel point.
In another possible implementation, the feature information further includes a key point feature, where the key point feature is used to indicate key points located on the dividing lines in the corrugated plate image;
the region adjusting unit includes:
a second determining subunit, configured to determine, according to the key point feature, at least two target key points located within any reference corrugated region, in response to the confidence corresponding to that reference corrugated region being smaller than a reference confidence;
and a division processing subunit, configured to divide that reference corrugated region according to a dividing line formed by the at least two target key points, to obtain target corrugated regions.
In another possible implementation, the second determining subunit is configured to determine the at least two target key points located within the reference corrugated region according to the positions of a plurality of key points in the key point feature and the position of the reference corrugated region in the corrugated plate image.
In another possible implementation, the key point feature is used to indicate the key points located on the dividing lines in the corrugated plate image and the confidences of those key points, and the second determining subunit is configured to determine a plurality of reference key points located within the reference corrugated region according to the positions of the key points in the key point feature and the position of the reference corrugated region in the corrugated plate image; and to select the at least two target key points from the plurality of reference key points according to the confidences of the plurality of reference key points in the key point feature.
In another possible implementation, the region adjusting unit further includes:
a third determining subunit, configured to take any reference corrugated region as a target corrugated region in response to the confidence corresponding to that reference corrugated region being greater than the reference confidence.
In another possible implementation, the second determining unit includes:
a fourth determining subunit, configured to determine, for any reference corrugated region, the type corresponding to each pixel point in that region according to the pixel points in the reference corrugated region and the type corresponding to each pixel point;
and a fifth determining subunit, configured to determine the type corresponding to the reference corrugated region according to the type corresponding to each pixel point in the reference corrugated region.
In another possible implementation, the fifth determining subunit is configured to determine, according to the type corresponding to each pixel point in the reference corrugated region, the type with the largest number of corresponding pixel points as the type corresponding to the reference corrugated region.
In another possible implementation, the apparatus further includes:
a number acquisition module, configured to acquire the number of pixel points in the reference corrugated region whose type is the same as the type corresponding to the reference corrugated region;
and a confidence determining module, configured to take the ratio of that number of pixel points to the total number of pixel points in the reference corrugated region as the confidence of the type corresponding to the reference corrugated region.
In another possible implementation, the type feature includes the confidence that the corrugated surface on which each pixel point in the corrugated plate image is located belongs to each type;
the first determining unit includes:
a type determining subunit, configured to determine, for any pixel point, the type corresponding to the maximum confidence as the type of the corrugated surface to which the pixel point belongs, according to the confidence that the corrugated surface on which the pixel point is located belongs to each type.
In another possible implementation, the feature extraction module includes:
a feature extraction unit, configured to call a feature extraction model to perform feature extraction on the corrugated plate image to obtain the feature information.
In another possible implementation, the feature extraction model includes a feature extraction submodel, a scale conversion submodel, and a feature detection submodel;
the feature extraction unit includes:
a first extraction subunit, configured to call the feature extraction submodel to perform feature extraction on the corrugated plate image to obtain a first feature map of the corrugated plate image;
a second extraction subunit, configured to call the scale conversion submodel to perform scale conversion on the first feature map to obtain a second feature map of the corrugated plate image;
and a third extraction subunit, configured to call the feature detection submodel to perform feature detection on the second feature map to obtain the feature information.
In another possible implementation, the feature detection submodel includes a dividing line detection submodel and a type detection submodel;
the third extraction subunit is configured to call the dividing line detection submodel to perform dividing line detection on the second feature map to obtain the dividing line feature, and to call the type detection submodel to perform type detection on the second feature map to obtain the type feature.
In another possible implementation, the feature detection submodel further includes a key point detection submodel; the third extraction subunit is further configured to call the key point detection submodel to perform key point detection on the second feature map to obtain the key point feature.
In another possible implementation, the second extraction subunit is configured to call the scale conversion submodel to perform scale conversion processing on the first feature map to obtain reference feature maps of multiple scales corresponding to the first feature map, and to fuse the reference feature maps of the multiple scales to obtain the second feature map of the corrugated plate image.
In another aspect, a computer device is provided, which includes a processor and a memory, wherein at least one program code is stored in the memory, and the at least one program code is loaded and executed by the processor to implement the operations performed in the image segmentation method according to the above aspect.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, the at least one program code being loaded and executed by a processor to implement the operations performed in the image segmentation method according to the above aspect.
In yet another aspect, a computer program product or a computer program is provided, the computer program product or the computer program comprising computer program code, the computer program code being stored in a computer readable storage medium. The processor of the computer device reads the computer program code from the computer-readable storage medium, and executes the computer program code, so that the computer device implements the operations performed in the image segmentation method as described in the above aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the method, the device, the computer equipment and the storage medium provided by the embodiment of the application consider that the types of adjacent corrugated surfaces in a corrugated plate are different, and a partition line is arranged between the adjacent corrugated surfaces, when the image of the corrugated plate is segmented, the partition line characteristics and the type characteristics of the image of the corrugated plate are obtained, so that the partition line in the image of the corrugated plate and the type of the corrugated surface corresponding to each pixel point are determined, the characteristics of the image of the corrugated plate are enriched, a plurality of corrugated areas obtained by dividing the type corresponding to the partition line and each pixel point are different, the types corresponding to every two adjacent corrugated areas are different, and the accuracy of the corrugated areas is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of an image segmentation method provided in an embodiment of the present application;
FIG. 3 is a flowchart of an image segmentation method provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a scale conversion submodel provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a feature extraction model provided in an embodiment of the present application;
FIG. 6 is a schematic view of a corrugated region provided by an embodiment of the present application;
FIG. 7 is a flowchart of image segmentation for a corrugated plate image according to an embodiment of the present application;
FIG. 8 is a flowchart of corrugated plate defect analysis provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of key points in a corrugated plate image according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a plurality of dividing lines in a corrugated plate image according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a plurality of corrugated regions in a corrugated plate image according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
The terms "first," "second," and the like as used herein may be used herein to describe various concepts that are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. For example, a first feature may be termed a second feature, and, similarly, a second feature may be termed a first feature, without departing from the scope of the present application.
As used herein, "at least two" includes two or more, and "a plurality" includes two or more; "each" refers to each one of a corresponding plurality, and "any" refers to any one of the plurality. For example, if a plurality of key points includes 3 key points, "each" refers to every one of the 3 key points, and "any" refers to any one of them, that is, the first key point, the second key point, or the third key point.
Cloud technology refers to a hosting technology that unifies hardware, software, network, and other resources in a wide area network or a local area network to realize the computation, storage, processing, and sharing of data.
Cloud technology is a general term for the network technology, information technology, integration technology, management platform technology, application technology, and the like applied in the cloud computing business model; it can form a resource pool that is used on demand, flexibly and conveniently. Cloud computing technology will become an important support. Background services of technical network systems, such as video websites, picture websites, and other web portals, require large amounts of computing and storage resources. With the development of the internet industry, each article may have its own identification mark that needs to be transmitted to a background system for logic processing; data at different levels are processed separately, and all kinds of industrial data require strong system background support, which can only be realized through cloud computing.
Big data refers to a data set that cannot be captured, managed, and processed by conventional software tools within a certain time range; it is a massive, high-growth-rate, and diversified information asset that requires new processing modes to provide stronger decision-making power, insight discovery power, and process optimization capability. With the advent of the cloud era, big data has attracted more and more attention; it requires special techniques to effectively process large amounts of data within a tolerable elapsed time. Technologies applicable to big data include massively parallel processing databases, data mining, distributed file systems, distributed databases, cloud computing platforms, the internet, and scalable storage systems. This method uses cloud technology and big data to perform data computation on the corrugated plate image, thereby realizing the image segmentation method.
The image segmentation method provided by the embodiment of the application can be used in computer equipment. Optionally, the computer device is a terminal or a server. Optionally, the server is an independent physical server, or a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a CDN (Content Delivery Network), a big data and artificial intelligence platform, and the like. Optionally, the terminal is a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto.
Fig. 1 is a schematic structural diagram of an implementation environment provided by an embodiment of the present application, and as shown in fig. 1, the system includes a terminal 101 and a server 102, and the terminal 101 and the server 102 can be directly or indirectly connected through wired or wireless communication, which is not limited herein.
The terminal 101 is configured to obtain a corrugated plate image, for example, the terminal 101 obtains the corrugated plate image by shooting a corrugated plate, and can send the corrugated plate image to the server 102. The server 102 provides a function of processing a corrugated plate image, and can perform image segmentation on the corrugated plate image transmitted from the terminal 101.
The method provided by the embodiment of the application can be used for various scenes.
For example, in a container detection scenario:
because the surface of the container is formed by the corrugated plate, in order to carry out quality detection on the container, the image of the corrugated plate is obtained by shooting the surface of the container, and by adopting the image segmentation method provided by the embodiment of the application, a plurality of corrugated areas of the image of the corrugated plate are obtained, and then the quality detection can be carried out according to the plurality of corrugated areas, so that the defects of the corrugated plate are determined, and the quality of the container is determined.
For another example, in a corrugated board quality monitoring scenario:
at the in-process that generates the buckled plate, in order to guarantee the quality of the buckled plate that generates, through shooing the buckled plate surface, obtain the buckled plate image, adopt the image segmentation method that this application embodiment provided, obtain a plurality of ripple regions of buckled plate image, follow-up carries out defect analysis to this buckled plate according to a plurality of ripple regions to confirm whether there is the defect in the buckled plate, thereby guarantee the quality of the buckled plate that generates.
Fig. 2 is a flowchart of an image segmentation method provided in an embodiment of the present application, which is applied to a computer device, and as shown in fig. 2, the method includes:
201. The computer device performs feature extraction on the corrugated plate image to obtain feature information of the corrugated plate image, where the feature information includes a dividing line feature and a type feature.
In the embodiments of this application, a corrugated plate is a plate material with a corrugated shape. The corrugated plate has a plurality of corrugated surfaces; each corrugated surface is a plane, different corrugated surfaces do not lie in the same plane, and the surfaces are arranged according to a fixed rule to form the corrugated shape. For example, the corrugated surfaces include concave surfaces, left slopes, convex surfaces, and right slopes; the two corrugated surfaces adjacent to a concave surface are a right slope and a left slope, and the two corrugated surfaces adjacent to a convex surface are a left slope and a right slope. Taking a corrugated plate with 8 corrugated surfaces as an example, the surfaces are arranged in the order concave, left slope, convex, right slope, concave, left slope, convex, right slope, so that the surface of the plate presents a corrugated shape. As another example, a corrugated plate with a corrugated shape is obtained by press-molding a flat plate. As another example, in practical applications, the surface of the container body of a container is corrugated, that is, corrugated plate. The corrugated plate thus includes a plurality of different types of corrugated surfaces, the types including concave, slope, convex, and the like, and among the corrugated surfaces of a corrugated plate, any corrugated surface differs in type from its adjacent corrugated surfaces. Therefore, according to the dividing line feature and the type feature of the corrugated plate image, the image is segmented into a plurality of corrugated regions, each corresponding to one corrugated surface, such that every two adjacent corrugated surfaces belong to different types; quality detection can subsequently be performed on the corrugated plate according to the segmented corrugated regions.
The type feature is used for indicating the type corresponding to each pixel point in the corrugated plate image, the type being that of the corrugated surface to which the pixel point belongs.
202. The computer device segments the corrugated plate image into a plurality of reference corrugated regions according to the dividing lines indicated by the dividing line feature.
Since in a corrugated plate there are dividing lines between adjacent corrugated surfaces, the corrugated plate image is divided into a plurality of reference corrugated regions by the dividing lines indicated by the dividing line feature; each reference corrugated region may comprise one corrugated surface.
203. The computer device adjusts the plurality of reference corrugated regions according to the type feature to obtain a plurality of target corrugated regions of the corrugated plate image, such that every two adjacent target corrugated regions correspond to corrugated surfaces of different types.
Because the types of two adjacent corrugated surfaces in the corrugated plate are different, the reference corrugated regions are adjusted according to the type of the corrugated surface to which each pixel point belongs, as indicated by the type feature, so that adjacent corrugated regions correspond to corrugated surfaces of different types.
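The following is a minimal sketch of this adjustment step, assuming regions are modeled as (type, x_start, x_end) spans ordered left to right; the merge rule (merging adjacent regions of the same type) follows the implementations described above, while the representation and function name are illustrative, not from the patent.

```python
# A sketch of the adjustment in step 203: adjacent reference regions of the
# same type are merged, so every two adjacent target regions differ in type.
def adjust_regions(reference_regions):
    target_regions = []
    for rtype, x0, x1 in reference_regions:
        if target_regions and target_regions[-1][0] == rtype:
            # same type as the previous region: merge the two spans into one
            _, prev_x0, _ = target_regions[-1]
            target_regions[-1] = (rtype, prev_x0, x1)
        else:
            target_regions.append((rtype, x0, x1))
    return target_regions

# adjust_regions([("concave", 0, 3), ("concave", 3, 5), ("convex", 5, 9)])
# -> [("concave", 0, 5), ("convex", 5, 9)]
```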
The method provided by this embodiment of the application considers that adjacent corrugated surfaces in a corrugated plate belong to different types and that dividing lines lie between adjacent corrugated surfaces. When performing image segmentation on the corrugated plate image, the dividing line feature and the type feature of the image are obtained to determine the dividing lines in the image and the corrugated surface type corresponding to each pixel point, which enriches the features of the corrugated plate image. Among the plurality of corrugated regions obtained by dividing according to the dividing lines and the per-pixel types, every two adjacent regions correspond to different types, thereby improving the accuracy of the corrugated regions.
Fig. 3 is a flowchart of an image segmentation method provided in an embodiment of the present application, which is applied to a computer device, and as shown in fig. 3, the method includes:
301. The computer device calls a feature extraction model to perform feature extraction on the corrugated plate image to obtain feature information, where the feature information includes a dividing line feature and a type feature.
The corrugated plate image is an image that includes a corrugated plate; optionally, it is obtained by photographing the corrugated plate. For example, the corrugated plate is scanned by a line-scan camera and the image is obtained by line-scan imaging, or the plate is photographed by another camera to obtain the image. The feature extraction model is used for extracting the feature information of the corrugated plate image, and the feature information describes the features of the corrugated surfaces in the image. The dividing line feature is used to indicate the dividing lines between different corrugated surfaces in the image, each dividing line being the intersection between two adjacent corrugated surfaces. The type feature is used to indicate the type corresponding to each pixel point, the type being that of the corrugated surface to which the pixel point belongs.
Because the corrugated plate includes a plurality of corrugated surfaces, with a dividing line between adjacent surfaces and different types for two adjacent surfaces, extracting the dividing line feature and the type feature from the corrugated plate image makes it possible to segment the image according to these two features.
In one possible implementation, the dividing line feature includes the probability that each pixel point in the corrugated plate image is located on a dividing line. The higher the probability, the more likely the corresponding pixel point is located on a dividing line; the lower the probability, the less likely it is.
In one possible implementation, the type feature includes the probability that the corrugated surface on which each pixel point in the corrugated plate image is located belongs to each type. For example, if the corrugated plate image includes 3 types of corrugated surfaces, then for any pixel point the type feature includes 3 probabilities, respectively representing the probability that the corrugated surface on which the pixel point is located belongs to each of the 3 types.
The higher the probability that the corrugated surface on which a pixel point is located belongs to a given type, the more likely the surface belongs to that type; the lower the probability, the less likely it is.
In one possible implementation, the feature extraction model includes a feature extraction submodel, a scale conversion submodel, and a feature detection submodel, and step 301 then includes the following steps 3011 to 3013:
3011. Call the feature extraction submodel to perform feature extraction on the corrugated plate image to obtain a first feature map of the corrugated plate image.
The first feature map describes the features of the corrugated plate in the corrugated plate image. The feature extraction submodel is used for obtaining a feature map of the corrugated plate image that describes the information of the corrugated surfaces; optionally, the feature extraction submodel is a convolution model. Optionally, the convolution model includes a convolution layer, a normalization layer, and an activation layer. The convolution layer has convolution parameters including the convolution kernel size, the number of output channels, and the convolution stride. For example, the convolution kernel size of the convolution layer is 7 × 7, the number of output channels is 64, and the convolution stride is 2.
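For illustration, a minimal PyTorch sketch of such a convolution model follows; the module name and the choice of batch normalization and ReLU as the normalization and activation layers are assumptions, since only the convolution parameters are specified above.

```python
import torch
import torch.nn as nn

# A sketch of the feature extraction submodel: one convolution layer
# (7x7 kernel, 64 output channels, stride 2) followed by a normalization
# layer and an activation layer, as described in step 3011.
class FeatureExtractionSubmodel(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3)
        self.norm = nn.BatchNorm2d(64)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: corrugated plate image tensor of shape (N, 3, H, W)
        return self.act(self.norm(self.conv(x)))
```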
3012. Call the scale conversion submodel to perform scale conversion on the first feature map to obtain a second feature map of the corrugated plate image.
The scale conversion submodel is used for performing scale conversion on the feature map of the corrugated plate image to obtain another feature map of the image. The second feature map carries features of multiple scales corresponding to the corrugated plate image; the scale of the second feature map obtained by scale conversion may be the same as or different from that of the first feature map.
By performing scale conversion on the first feature map through the scale conversion submodel, the second feature map includes feature information of multiple scales, which enhances the features of the corrugated surfaces in the image and improves the accuracy of the feature map of the corrugated plate image.
In one possible implementation, this step 3012 includes: calling the scale conversion submodel to perform scale conversion processing on the first feature map to obtain reference feature maps of multiple scales corresponding to the first feature map, and fusing the reference feature maps of the multiple scales to obtain the second feature map of the corrugated plate image.
Because feature maps of different scales may contain different feature information, obtaining the multi-scale reference feature maps through the scale conversion submodel and fusing them makes the second feature map contain feature information of multiple scales, improving the accuracy of the feature map.
Optionally, the scale conversion submodel includes multiple branches. The first branch performs convolution processing and scale conversion on the first feature map to obtain a first reference feature map; each subsequent branch performs convolution processing and scale conversion on the scale-converted reference feature map from the previous branch to obtain a further reference feature map, until the last branch performs convolution processing on the scale-converted feature map from the previous branch to obtain the last reference feature map. The reference feature maps of multiple scales obtained by the multiple branches are fused to obtain the second feature map of the corrugated plate image.
As shown in fig. 4, take a scale conversion submodel with 3 branches as an example, where the first branch includes 6 convolution layers, the second branch includes 4 convolution layers, and the third branch includes 2 convolution layers. The first feature map is convolved by the first and second convolution layers of the first branch to obtain feature map 1, and scale conversion of feature map 1 yields feature map 2. Feature map 1 is convolved by the third and fourth convolution layers of the first branch to obtain feature map 3. Feature map 2 is convolved by the first and second convolution layers of the second branch to obtain feature map 4, and scale conversion of feature map 4 yields feature map 5 and feature map 6. Feature map 3 and feature map 5 are fused in the first branch, and the fused reference feature map is convolved by the fifth and sixth convolution layers to obtain feature map 7. Feature map 4 is convolved by the third and fourth convolution layers of the second branch to obtain feature map 8. Feature map 6 is convolved by the two convolution layers of the third branch to obtain feature map 9. Feature maps 8 and 9 are scale-converted so that they have the same scale as feature map 7, and the scale-converted feature maps 8 and 9 are fused with feature map 7 to obtain the second feature map.
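For illustration, the following is a simplified two-scale PyTorch sketch of the scale conversion idea, not the exact three-branch wiring of fig. 4; all layer sizes and the summation-based fusion are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A sketch of multi-scale conversion and fusion: one branch refines the
# original scale, a second branch works at half resolution, and the two
# reference feature maps are fused into the second feature map.
class ScaleConversionSubmodel(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.high_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)  # scale conversion
        self.low_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, first_feature_map: torch.Tensor) -> torch.Tensor:
        high = self.high_branch(first_feature_map)           # reference map, original scale
        low = self.low_branch(self.down(first_feature_map))  # reference map, 1/2 scale
        low_up = F.interpolate(low, size=high.shape[2:], mode="bilinear",
                               align_corners=False)
        return high + low_up  # fused second feature map
```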
3013. Call the feature detection submodel to perform feature detection on the second feature map to obtain the feature information.
The feature detection submodel is used for detecting the feature information in the second feature map.
The corrugated features in the second feature map are enhanced by the feature extraction submodel and the scale conversion submodel, and the feature information in the second feature map is detected by the feature detection submodel, so that accurate feature information can be obtained.
In one possible implementation, the feature detection submodel includes a dividing line detection submodel and a type detection submodel; step 3013 then includes: calling the dividing line detection submodel to perform dividing line detection on the second feature map to obtain the dividing line feature, and calling the type detection submodel to perform type detection on the second feature map to obtain the type feature.
The dividing line detection submodel is used for detecting the dividing line feature in the feature map, and the type detection submodel is used for detecting the type feature in the feature map. Performing feature detection on the second feature map through the dividing line detection submodel and the type detection submodel separately yields the dividing line feature and the type feature, improving the accuracy of both features.
In one possible implementation, the feature information further includes a key point feature, and the feature detection submodel further includes a key point detection submodel; step 3013 then further includes: calling the key point detection submodel to perform key point detection on the second feature map to obtain the key point feature.
The key point detection submodel is used for detecting the key point feature in the feature map, and the key point feature is used for indicating the key points on the dividing lines in the corrugated plate image. With the key point feature obtained through the key point detection submodel, the corrugated plate image can subsequently be segmented according to the key points.
As shown in fig. 5, feature extraction is performed on the corrugated plate image through two convolution modules to obtain the first feature map, scale conversion is performed on the first feature map through three high-resolution networks to obtain the second feature map, and feature detection is subsequently performed on the second feature map through the key point detection submodel, the dividing line detection submodel, and the type detection submodel respectively, to obtain the key point feature, the dividing line feature, and the type feature. The key point detection submodel, the dividing line detection submodel, and the type detection submodel each include two convolution layers. Optionally, the high-resolution network is an HRNet (High-Resolution Network).
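A minimal sketch of the three detection submodels, each built from two convolution layers as stated above, might look as follows; the channel counts are assumptions (one channel for the key point and dividing line probability maps, one channel per corrugated surface type for the type feature).

```python
import torch.nn as nn

# A sketch of the three detection heads of fig. 5, each with two conv layers.
def make_head(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, 1))

keypoint_head = make_head(64, 1)       # key point confidence per pixel
dividing_line_head = make_head(64, 1)  # dividing line probability per pixel
type_head = make_head(64, 4)           # e.g. concave / left slope / convex / right slope
```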
Optionally, the key point detection submodel is called to perform key point detection on the second feature map to obtain an initial key point feature, and the pixel points in the initial key point feature are screened to obtain the key point feature.
The initial key point feature includes the confidence corresponding to each pixel point in the corrugated plate image, while the key point feature includes the confidences corresponding to only some of the pixel points. Screening the pixel points in the initial key point feature improves the accuracy of the key point feature.
The pixel points in the initial key point feature can be screened in the following three ways:
The first way: according to the probability that each pixel point in the initial key point feature is a key point, select the pixel points whose probability is greater than a reference probability, and generate the key point feature.
The reference probability is an arbitrary value, such as 0.8 or 0.9. The initial key point feature includes the probability that each pixel point is a key point, which represents how likely the corresponding pixel point is a key point. Selecting the high-probability pixel points as key points improves the accuracy of the key point feature.
The second way: according to the position of each pixel point in the initial key point feature, screen the pixel points so that the distance between any two of the remaining pixel points is greater than a reference distance, and generate the key point feature from the remaining pixel points.
The reference distance is an arbitrary value, such as 3 or 6, and the key point feature includes the position of each remaining pixel point.
The third way: according to the probability that each pixel point in the initial key point feature is a key point, select the pixel points whose probability is greater than the reference probability to generate a reference key point feature; then, according to the position of each pixel point in the reference key point feature, screen the pixel points so that the distance between any two of the remaining pixel points is greater than the reference distance, and generate the key point feature from the remaining pixel points.
The key point feature includes the probability and the position corresponding to each remaining pixel point.
Optionally, when screening the pixel points in the reference key point feature, the distance between every two pixel points is determined according to their positions; in response to the distance between any two pixel points being smaller than the reference distance, the pixel point with the lower probability of the two is discarded, and the key point feature is generated from the probabilities and positions of the remaining pixel points.
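A minimal sketch of the third screening way, assuming the initial key point feature is a per-pixel probability map and using the example threshold values given above:

```python
import numpy as np

# A sketch of key point screening: keep pixels whose key point probability
# exceeds the reference probability, then greedily discard the lower-probability
# point of any pair closer than the reference distance.
def screen_keypoints(prob_map: np.ndarray, ref_prob: float = 0.8,
                     ref_dist: float = 3.0):
    ys, xs = np.where(prob_map > ref_prob)             # probability screening
    points = sorted(zip(prob_map[ys, xs], ys, xs), reverse=True)
    kept = []
    for p, y, x in points:                             # distance screening
        if all((y - ky) ** 2 + (x - kx) ** 2 > ref_dist ** 2
               for _, ky, kx in kept):
            kept.append((p, y, x))
    return [(y, x, p) for p, y, x in kept]             # positions + confidences
```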
302. The computer device segments the corrugated sheet image into a plurality of reference corrugated regions based on the segmentation lines indicated by the segmentation line features.
In this embodiment of the application, a dividing line is located between every two adjacent corrugated surfaces, so the corrugated plate image is divided by the dividing lines indicated by the dividing line feature to obtain a plurality of reference corrugated regions, each of which may represent one corrugated surface.
In one possible implementation, this step 302 includes: determining the positions of a plurality of dividing lines in the corrugated plate image according to the dividing line feature, and determining the region between every two adjacent dividing lines as a reference corrugated region according to the positions of the plurality of dividing lines, to obtain the plurality of reference corrugated regions.
The positions of the plurality of dividing lines in the corrugated plate image are determined through the dividing line feature, and the region between every two adjacent dividing lines is determined as a reference corrugated region according to those positions, so that the plurality of reference corrugated regions can be obtained.
Optionally, the coordinates of the two endpoints of each dividing line in the corrugated plate image are determined according to the dividing line feature; for any two adjacent dividing lines, the coordinates of the two endpoints of each line are used as the coordinates of the reference corrugated region, and the four endpoint coordinates enclose the reference corrugated region.
For example, if the coordinates of the two endpoints of the first dividing line are (1,2) and (1,11), and the coordinates of the two endpoints of the second dividing line are (8,2) and (8,11), the rectangle formed by the coordinates (1,2), (1,11), (8,2), (8,11) is used as one reference corrugated region.
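A minimal sketch of building reference corrugated regions from the endpoint coordinates of adjacent dividing lines, following the example above; lines are assumed near-vertical and given as ((x1, y1), (x2, y2)) pairs, and the function name is illustrative.

```python
# A sketch of turning dividing lines into reference corrugated regions:
# every two adjacent lines contribute their four endpoints as one region.
def lines_to_regions(lines):
    # sort dividing lines left to right by the x coordinate of their endpoints
    lines = sorted(lines, key=lambda l: (l[0][0] + l[1][0]) / 2)
    regions = []
    for left, right in zip(lines, lines[1:]):
        # the four endpoints of two adjacent lines enclose one region
        regions.append((left[0], left[1], right[0], right[1]))
    return regions

regions = lines_to_regions([((1, 2), (1, 11)), ((8, 2), (8, 11))])
# -> [((1, 2), (1, 11), (8, 2), (8, 11))]
```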
In one possible implementation, the dividing line feature includes the probability that each pixel point in the corrugated plate image is located on a dividing line; the probabilities in the dividing line feature are then transformed to obtain the plurality of dividing lines in the corrugated plate image.
For example, using the Hough transform, the probability corresponding to each pixel point in the dividing line feature is processed via the correspondence between straight lines in the rectangular coordinate system and the polar coordinate system, to obtain the plurality of dividing lines corresponding to the dividing line feature.
Optionally, transforming the probabilities in the dividing line feature yields a plurality of dividing line segments; in response to the distance between the ends of any two segments being smaller than a reference distance, the two segments are connected into one dividing line.
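For illustration, a sketch of recovering dividing lines from the dividing line feature with OpenCV's probabilistic Hough transform; the binarization threshold and the Hough parameters are assumptions.

```python
import cv2
import numpy as np

# A sketch of Hough-based dividing line detection on the per-pixel
# dividing line probability map.
def detect_dividing_lines(line_prob: np.ndarray, threshold: float = 0.5):
    binary = (line_prob > threshold).astype(np.uint8) * 255
    segments = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=50,
                               minLineLength=30, maxLineGap=5)
    # each entry is (x1, y1, x2, y2); nearby segments could additionally be
    # connected when their endpoints are closer than a reference distance
    return [] if segments is None else [tuple(s[0]) for s in segments]
```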
303. And the computer equipment determines the type corresponding to each pixel point in the corrugated plate image according to the type characteristics.
In the embodiment of the present application, the corrugated plate image includes a plurality of types of corrugated surfaces; for example, the types include concave, left-inclined, convex, and right-inclined. Through the type feature, the type corresponding to each pixel point can be obtained, that is, the type of corrugated surface on which each pixel point is likely to be located.
In a possible implementation manner, the type feature includes the confidence that the corrugated surface where each pixel point in the corrugated plate image is located belongs to each type. This step 303 then includes: for any pixel point, determining, from the confidences that the corrugated surface where the pixel point is located belongs to the respective types, the type with the maximum confidence as the type of the corrugated surface where the pixel point is located.
The confidence of a type indicates the possibility that the corrugated surface where the pixel point is located is a corrugated surface of that type: the higher the confidence, the higher the possibility; the lower the confidence, the lower the possibility. Optionally, the confidence is represented by a probability.
Because the corrugated plate image includes a plurality of types of corrugated surfaces, the type feature can represent, for any pixel point, the confidence that the corrugated surface where the pixel point is located belongs to each type, and the type corresponding to the pixel point can be determined from these confidences. For example, the types include concave, left-inclined, convex, and right-inclined; if, according to the type feature, the confidence that a pixel point is located on a concave surface is 0.3, on a left-inclined surface 0.5, on a convex surface 0.7, and on a right-inclined surface 0.9, then the pixel point is determined to be located on a right-inclined surface, that is, the type corresponding to the pixel point is the right-inclined type.
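A minimal sketch of this per-pixel argmax decision (illustrative only; the (H, W, C) array layout is an assumption):

```python
import numpy as np

def pixel_types(type_feature: np.ndarray) -> np.ndarray:
    """type_feature: (H, W, C) per-pixel confidence that the pixel's
    corrugated surface belongs to each of C types (e.g. concave,
    left-inclined, convex, right-inclined).

    Returns an (H, W) map of type indices, taking the type with the
    maximum confidence at each pixel.
    """
    return np.argmax(type_feature, axis=-1)

# Example: confidences 0.3 / 0.5 / 0.7 / 0.9 pick index 3 (right-inclined).
feat = np.array([[[0.3, 0.5, 0.7, 0.9]]])
assert pixel_types(feat)[0, 0] == 3
```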
304. The computer device determines the type corresponding to each reference corrugated region according to the pixel points in each reference corrugated region and the type corresponding to each pixel point.
For any reference corrugated region of the corrugated plate image, the type corresponding to each pixel point in the region can be determined, and the type corresponding to the reference corrugated region is then determined according to the types corresponding to these pixel points.
In a possible implementation manner, for any reference ripple region, according to the coordinate information of the reference ripple region, each pixel point belonging to the reference ripple region is determined, a type corresponding to each pixel point in the reference ripple region is determined, and according to the type corresponding to each pixel point, the type corresponding to the reference ripple region is determined.
And the coordinate information of the reference corrugated area is used for indicating the position of the reference corrugated area. Optionally, the coordinate information of the reference corrugated region includes coordinates of two dividing lines constituting the reference corrugated region. The two dividing lines are two adjacent dividing lines in the plurality of dividing lines, and the two dividing lines form the reference corrugated area.
In one possible implementation, this step 304 includes: determining, according to the pixel points in each reference corrugated region and the type corresponding to each pixel point in the corrugated plate image, the type corresponding to each reference corrugated region and the confidence of that type.
The confidence of the type represents the possibility that the reference corrugated region is a corrugated surface of that type: the higher the confidence, the higher the possibility; the lower the confidence, the lower the possibility.
The type corresponding to a reference corrugated region is determined from the types corresponding to the pixel points in the region. Since different pixel points in the region may correspond to different types, the type assigned to the region may not be accurate; to reflect how accurate it is, the confidence of the type is used to express this accuracy.
In one possible implementation, the step 304 includes the following steps 3041 to 3044:
3041. For any reference corrugated region, determine the type corresponding to each pixel point in the region according to the pixel points in the region and the type corresponding to each pixel point in the corrugated plate image.
Through the type characteristics, the type corresponding to each pixel point in the corrugated plate image can be obtained, and through the reference corrugated area, the pixel point in the reference corrugated area can be determined, so that the type corresponding to each pixel point in the reference corrugated area is determined.
3042. Determine the type corresponding to the reference corrugated region according to the type corresponding to each pixel point in the region.
After the type corresponding to each pixel point in the reference ripple region is determined, the type corresponding to the reference ripple region can be determined through the type corresponding to each pixel point.
In one possible implementation, this step 3042 includes: determining, according to the type corresponding to each pixel point in the reference corrugated region, the type with the largest number of corresponding pixel points as the type corresponding to the region.
In the reference corrugated region, the types corresponding to different pixel points may differ; the number of pixel points corresponding to each type is therefore counted, and the type with the largest count is selected as the type corresponding to the region. For example, if, for a reference corrugated region, 10 pixel points correspond to the first type, 8 to the second type, and 80 to the third type, the third type is determined as the type corresponding to the region.
3043. Acquire the number of pixel points in the reference corrugated region whose type is the same as the type corresponding to the region.
That is, according to the type corresponding to each pixel point in the reference corrugated region, the number of pixel points whose type matches the type of the region can be counted.
3044. Take the ratio of this number to the total number of pixel points in the reference corrugated region as the confidence of the type corresponding to the region.
The confidence coefficient can represent the accuracy of the type corresponding to the reference corrugated region, and the greater the confidence coefficient is, the higher the accuracy is, and the smaller the confidence coefficient is, the lower the accuracy is.
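Steps 3041 to 3044 amount to a majority vote over pixel types followed by a purity ratio. A minimal sketch (illustrative; the names and the boolean-mask representation of a region are assumptions):

```python
import numpy as np

def region_type_and_confidence(type_map: np.ndarray,
                               region_mask: np.ndarray):
    """type_map: (H, W) per-pixel type indices (from the type feature).
    region_mask: (H, W) boolean mask of one reference corrugated region.

    Returns (region_type, confidence): the most frequent pixel type in
    the region, and the fraction of pixels in the region with that type.
    """
    types_in_region = type_map[region_mask]
    counts = np.bincount(types_in_region)
    region_type = int(np.argmax(counts))          # majority type (step 3042)
    confidence = counts[region_type] / types_in_region.size  # steps 3043-3044
    return region_type, confidence
```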
305. The computer device adjusts the plurality of reference corrugated regions according to their corresponding types to obtain a plurality of target corrugated regions.
Because the plurality of reference corrugated regions obtained from the dividing lines indicated by the dividing line feature may contain inaccurate regions, the plurality of reference corrugated regions are adjusted according to their corresponding types, so that every two adjacent target corrugated regions correspond to different types of corrugated surfaces, which improves the accuracy of the target corrugated regions.
In one possible implementation, this step 305 includes: in response to the types of any two adjacent reference corrugated regions being the same, merging the two regions to obtain one target corrugated region; and in response to the type of any reference corrugated region being different from those of its adjacent regions, determining that reference corrugated region as a target corrugated region.
The corrugated plate includes a plurality of types of corrugated surfaces, and any two adjacent corrugated surfaces correspond to different types. Therefore, after the types corresponding to the plurality of reference corrugated regions are determined, according to the positions of the regions and the type of each region: if two adjacent reference corrugated regions have the same type, they are merged into one target corrugated region; if a reference corrugated region differs in type from its adjacent regions, it is taken as one target corrugated region. This ensures the accuracy of the plurality of target corrugated regions. Fig. 6 includes 3 figures, each including a top view of a plurality of corrugated regions of a corrugated plate and the corresponding shape of the plate. In the first figure of fig. 6, the corrugated plate includes 5 corrugated regions, which are, from left to right according to the shape of the plate, concave, left-inclined, convex, right-inclined, and concave. The reference corrugated region 601 and the reference corrugated region 602 in the third figure of fig. 6 lie on the same concave surface, that is, they are over-segmented reference corrugated regions; they are merged into the target corrugated region 603, as shown in the first figure of fig. 6.
Optionally, in response to the types of any two adjacent reference corrugated regions being the same, the two regions are merged to obtain a merged corrugated region whose type is the same as that of the two regions. In response to the merged corrugated region differing in type from both of its adjacent reference corrugated regions, the merged corrugated region is taken as a target corrugated region; in response to the merged corrugated region having the same type as any adjacent reference corrugated region, it is merged with that adjacent region, and this repeats until the resulting merged corrugated region differs in type from its adjacent regions, at which point it is taken as a target corrugated region.
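A minimal sketch of this adjacent-merge rule, assuming regions are given left to right as (left_x, right_x, type) tuples (an illustrative representation, not from the patent):

```python
def merge_adjacent_regions(regions):
    """regions: list of (left_x, right_x, type) tuples for reference
    corrugated regions, ordered left to right.

    Repeatedly merges adjacent regions of the same type until every
    two adjacent target regions differ in type.
    """
    targets = []
    for left, right, rtype in regions:
        if targets and targets[-1][2] == rtype:
            # Same type as the previous region: extend it (merge).
            prev_left, _, _ = targets[-1]
            targets[-1] = (prev_left, right, rtype)
        else:
            targets.append((left, right, rtype))
    return targets

# Example: two adjacent "concave" regions collapse into one.
print(merge_adjacent_regions([(0, 3, "concave"), (3, 5, "concave"),
                              (5, 9, "left-inclined")]))
# -> [(0, 5, 'concave'), (5, 9, 'left-inclined')]
```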
In one possible implementation, the feature information further includes a keypoint feature, where the keypoint feature is used to indicate a keypoint located on the segmentation line in the corrugated plate image; this step 305 includes the following steps 3051-3053:
3051. In response to the confidence corresponding to any reference corrugated region being smaller than the reference confidence, determine at least two target key points located in the reference corrugated region according to the key point feature.
The reference confidence may be any preset confidence value, such as 0.8 or 0.9. The confidence corresponding to a reference corrugated region is the confidence of the type corresponding to the region, and indicates the accuracy of that type.
In this embodiment of the present application, if the confidence corresponding to any reference ripple region is less than the reference confidence, which indicates that the accuracy of the type corresponding to the reference ripple region is low, the reference ripple region needs to be adjusted subsequently, and therefore, at least two target keypoints located within the reference ripple region are determined, so that the reference ripple region is adjusted subsequently according to the at least two target keypoints.
In one possible implementation, this step 3051 includes: determining the at least two target key points in the reference corrugated region according to the positions of the plurality of key points in the key point feature and the position of the region in the corrugated plate image.
Optionally, the key point feature indicates the key points located on dividing lines in the corrugated plate image and their confidences. A plurality of reference key points located in the reference corrugated region are determined according to the positions of the key points in the key point feature and the position of the region in the corrugated plate image, and the at least two target key points are selected from the reference key points according to their confidences in the key point feature.
The target key points are key points on a dividing line that may exist within the reference corrugated region.
3052. Segment the reference corrugated region according to a dividing line formed by the at least two target key points to obtain target corrugated regions.
At least one dividing line is formed from the at least two target key points, and the reference corrugated region is segmented by the at least one dividing line to obtain at least two target corrugated regions.
In one possible implementation, this step 3052 includes: segmenting the reference corrugated region according to a dividing line formed by the at least two key points to obtain at least two segmented corrugated regions, and determining the type and the confidence of the type of each segmented corrugated region; in response to the confidence corresponding to any segmented corrugated region being greater than the reference confidence, the segmented region is taken as a target corrugated region, and in response to the confidence being smaller than the reference confidence, the above steps are repeated to continue segmenting that region. The reference corrugated region 604 in the second figure of fig. 6 needs to be segmented according to the target key points, and the segmentation yields the target corrugated region 605 and the target corrugated region 606, as shown in the first figure of fig. 6.
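A minimal sketch of this confidence-driven splitting (illustrative; the `classify` callable, the threshold value, and the choice of split point are assumptions, and `region_type_and_confidence` refers to the sketch above):

```python
REFERENCE_CONFIDENCE = 0.8  # assumed threshold, e.g. 0.8 or 0.9

def refine_region(region, keypoint_xs, classify):
    """region: (left_x, right_x) bounds of one reference corrugated region.
    keypoint_xs: x-coordinates of detected key points (candidate lines).
    classify: hypothetical callable (left, right) -> (type, confidence),
    e.g. built on region_type_and_confidence above.

    Recursively splits the region at interior key points until every
    resulting region's type confidence reaches the threshold.
    """
    left, right = region
    rtype, conf = classify(left, right)
    if conf >= REFERENCE_CONFIDENCE:
        return [(left, right, rtype)]
    interior = sorted(x for x in keypoint_xs if left < x < right)
    if not interior:
        return [(left, right, rtype)]  # nothing to split on; keep as-is
    mid = interior[len(interior) // 2]  # split at the middle key point
    return (refine_region((left, mid), keypoint_xs, classify) +
            refine_region((mid, right), keypoint_xs, classify))
```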
3053. In response to the confidence corresponding to any reference corrugated region being greater than the reference confidence, take the reference corrugated region as a target corrugated region.
If the confidence corresponding to any reference ripple region is greater than the reference confidence, the type corresponding to the reference ripple region is accurate, and the reference ripple region can be used as a target ripple region.
It should be noted that, in the embodiment of the present application, the plurality of reference corrugated regions are adjusted by determining the type of each pixel point and the type of each reference corrugated region; in another embodiment, steps 303 to 305 need not be executed, and other manners can be adopted to adjust the plurality of reference corrugated regions according to the type feature to obtain the plurality of target corrugated regions of the corrugated plate image.
It should be noted that in the embodiment of the present application, only the image segmentation processing is performed on the corrugated plate image to obtain a plurality of corrugated areas of the corrugated plate image, but in another embodiment, the point cloud data of the corrugated plate image can also be extracted, and then the point cloud data of the corrugated plate image is analyzed through the feature extraction model to obtain the feature information of the corrugated plate image, so as to obtain a plurality of corrugated areas of the corrugated plate image.
According to the method provided by the embodiment of the present application, corrugated regions are located in the acquired corrugated plate image by means of image processing and image segmentation techniques, which improves the accuracy and robustness of image segmentation. The feature extraction model includes a high-resolution network, a dividing line detection submodel, a type detection submodel, and a key point detection submodel, and is trained in a multi-task learning manner, so that the trained model can accurately output the key point feature, dividing line feature, and type feature of a corrugated plate image. By exploiting the mutual reinforcement among the three features and combining them, the region-positioning errors caused by using the dividing line feature alone are avoided, and the accuracy of the corrugated regions of the image is improved.
In the method provided by the embodiment of the present application, considering that adjacent corrugated surfaces of a corrugated plate belong to different types and that a dividing line lies between adjacent corrugated surfaces, the dividing line feature and type feature of the corrugated plate image are obtained when segmenting the image, so as to determine the dividing lines in the image and the corrugated surface type corresponding to each pixel point. This enriches the features of the corrugated plate image, so that in the plurality of corrugated regions obtained by dividing according to the dividing lines and the type corresponding to each pixel point, every two adjacent corrugated regions correspond to different types, thereby improving the accuracy of the corrugated regions.
Moreover, the corrugated plate image is segmented through the dividing line feature, the type feature, and the key point feature of the image, which enriches the features used and improves the accuracy of the corrugated regions.
Based on the above embodiments, fig. 7 shows a process of image segmentation on a corrugated plate image; the process includes:
1. The corrugated plate is photographed by a line-scan camera to obtain a corrugated plate image.
2. The obtained corrugated plate image is input into the feature extraction model; key point detection is performed on the image to obtain the key point feature, dividing line detection is performed to obtain the dividing line feature, and type detection is performed to obtain the type feature.
3. The corrugated plate image is segmented into a plurality of reference corrugated regions by the dividing lines indicated by the dividing line feature, and the reference corrugated regions are adjusted according to the key point feature and the type feature to obtain the plurality of target corrugated regions in the corrugated plate image and the type of each target corrugated region.
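Putting the stages together, a high-level driver might look as follows (an illustrative sketch only; `model` and `regions_between_lines` are hypothetical placeholders, and the other functions refer to the sketches above):

```python
import numpy as np

def segment_corrugated_plate(image: np.ndarray, model):
    """image: line-scan photograph of the corrugated plate, (H, W, 3).
    model: hypothetical feature extraction model returning the key point,
    dividing line, and type feature maps described above.
    """
    # Step 2: extract the three features.
    keypoint_feat, line_feat, type_feat = model(image)

    # Step 3a: dividing lines -> reference corrugated regions.
    lines = extract_dividing_lines(line_feat)   # sketched earlier
    regions = regions_between_lines(lines)      # hypothetical helper:
                                                # (left_x, right_x) per region

    # Step 3b: per-pixel types, then one type (+ confidence) per region.
    type_map = np.argmax(type_feat, axis=-1)
    typed = []
    for left, right in regions:
        mask = np.zeros(type_map.shape, dtype=bool)
        mask[:, left:right] = True              # columns of this region
        rtype, conf = region_type_and_confidence(type_map, mask)
        typed.append((left, right, rtype))      # conf drives optional splits

    # Step 3c: merge adjacent same-type regions into target regions.
    return merge_adjacent_regions(typed)
```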
It should be noted that the embodiments of the present application are described using the dividing line detection submodel, the type detection submodel, and the key point detection submodel; before these submodels are called, they need to be trained. In the embodiment of the present application, the three submodels are trained jointly in a multi-task manner, so that they reinforce one another, yielding correlated dividing line, type, and key point detection submodels. The training process for the three submodels is as follows:
1. and acquiring a sample characteristic diagram of the sample corrugated plate image, and a sample dividing line characteristic, a sample type characteristic and a sample key point characteristic corresponding to the sample corrugated plate image.
The sample dividing line feature, sample type feature, and sample key point feature are obtained by annotating the sample corrugated plate image: the sample dividing line feature indicates the real dividing lines in the sample image, the sample type feature indicates the real type corresponding to each pixel point in the sample image, and the sample key point feature indicates the real key points in the sample image.
2. Call the dividing line detection submodel to perform dividing line detection on the sample feature map to obtain a predicted dividing line feature, and train the submodel according to the predicted dividing line feature and the sample dividing line feature.
In a possible implementation manner, when the dividing line detection submodel is trained, a first loss value of the submodel is determined according to the predicted dividing line feature and the sample dividing line feature, and the submodel is trained according to the first loss value.
Optionally, the first loss value is a cross-entropy loss value.
Optionally, the predicted dividing line feature, the sample dividing line feature, and the first loss value satisfy the following relationship:

$$L_1 = -\frac{1}{N}\sum_{x,y}\sum_{c} M_{xyc}\,\log \hat{M}_{xyc}$$

where $L_1$ denotes the first loss value; $N$ denotes the total number of pixel points in the sample corrugated plate image; $x$ and $y$ denote the coordinates of a pixel point in the sample corrugated plate image and are used to index that pixel point; $c$ denotes the category of the pixel point, which indicates whether the pixel point lies on a dividing line; $M_{xyc}$ denotes the real probability that the pixel point with coordinates $x$ and $y$ belongs to category $c$; and $\hat{M}_{xyc}$ denotes the predicted probability that the pixel point with coordinates $x$ and $y$ belongs to category $c$.
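A minimal sketch of this cross-entropy loss (illustrative; the (C, H, W) tensor layout is an assumption):

```python
import torch

def dividing_line_loss(pred_logits: torch.Tensor,
                       true_probs: torch.Tensor) -> torch.Tensor:
    """pred_logits, true_probs: (C, H, W) tensors over categories c
    (on / not on a dividing line) and pixel coordinates (x, y).

    Implements L1 = -(1/N) * sum_{x,y} sum_c M_xyc * log(M-hat_xyc),
    with N = H * W the total number of pixel points.
    """
    log_pred = torch.log_softmax(pred_logits, dim=0)   # log M-hat_xyc
    n_pixels = pred_logits.shape[1] * pred_logits.shape[2]
    return -(true_probs * log_pred).sum() / n_pixels
```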
3. Call the type detection submodel to perform type detection on the sample feature map to obtain a predicted type feature, and train the submodel according to the predicted type feature and the sample type feature.
In a possible implementation manner, when the type detection submodel is trained, a second loss value of the type detection submodel is determined according to the predicted type feature and the sample type feature, and the type detection submodel is trained according to the second loss value.
4. Call the key point detection submodel to perform key point detection on the sample feature map to obtain a predicted key point feature, and train the submodel according to the predicted key point feature and the sample key point feature.
In a possible implementation manner, when the keypoint detection submodel is trained, a third loss value of the keypoint detection submodel is determined according to the predicted keypoint feature and the sample keypoint feature, and the keypoint detection submodel is trained according to the third loss value.
Optionally, the predicted key point feature, the sample key point feature, and the third loss value satisfy the following relationship:

$$L_2 = -\frac{1}{N}\sum_{x,y}\begin{cases}\left(1-\hat{Z}_{xy}\right)^{\alpha}\log \hat{Z}_{xy}, & Z_{xy}=1\\[4pt]\left(1-Z_{xy}\right)^{\beta}\left(\hat{Z}_{xy}\right)^{\alpha}\log\left(1-\hat{Z}_{xy}\right), & Z_{xy}\neq 1\end{cases}$$

where $L_2$ denotes the third loss value; $N$ denotes the total number of pixel points in the sample corrugated plate image; $x$ and $y$ denote the coordinates of a pixel point in the sample corrugated plate image and are used to index that pixel point; $Z_{xy}$ denotes the real probability that the pixel point with coordinates $x$ and $y$ is a key point, $Z_{xy}=1$ meaning the pixel point is a real key point and $Z_{xy}\neq 1$ meaning it is not; $\hat{Z}_{xy}$ denotes the predicted probability that the pixel point with coordinates $x$ and $y$ is a key point; and $\alpha$ and $\beta$ are adjusting parameters that can take any values.
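A minimal sketch of this key point loss (illustrative; the default values of the adjusting parameters are a common choice, not values from the patent):

```python
import torch

def keypoint_loss(pred: torch.Tensor, true_probs: torch.Tensor,
                  alpha: float = 2.0, beta: float = 4.0) -> torch.Tensor:
    """pred, true_probs: (H, W) tensors holding the predicted and real
    probabilities (Z-hat_xy and Z_xy) that each pixel is a key point.
    alpha, beta: adjusting parameters.
    """
    eps = 1e-6
    pred = pred.clamp(eps, 1.0 - eps)
    pos = true_probs.eq(1.0)  # Z_xy = 1: real key points
    pos_loss = (1.0 - pred) ** alpha * torch.log(pred)
    neg_loss = (1.0 - true_probs) ** beta * pred ** alpha * torch.log(1.0 - pred)
    loss = torch.where(pos, pos_loss, neg_loss)
    return -loss.sum() / true_probs.numel()  # divide by N = H * W
```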
In the embodiment of the present application, during the training of the dividing line detection submodel, the type detection submodel, and the key point detection submodel, the key points, dividing lines, and corrugated surfaces in the corrugated plate images are used simultaneously, which improves the robustness of the feature extraction model and avoids the unstable performance that would result from relying on a single kind of information. By adopting a multi-task learning strategy and training the three submodels with the key points, dividing lines, and corrugated surfaces of the corrugated plate images respectively, the information used by the submodels can reference and reinforce one another. Outputting the three kinds of information through a unified network structure reduces the space occupied by the network model, reduces redundant computation during feature extraction, and speeds up the feature extraction model. Within the feature extraction model, a high-resolution network repeatedly applies bottom-up and top-down processing to obtain multi-scale information of each input corrugated plate image, so that more accurate pixel-level predictions can be obtained, improving the accuracy of the model.
The image segmentation method provided by the above embodiments is applied to a scenario of analyzing defects of a corrugated plate. As shown in fig. 8, a corrugated plate image is acquired, and features of the acquired image are extracted based on the image segmentation method provided by the above embodiments to obtain the key point feature, dividing line feature, and type feature of the image. As shown in fig. 9, the corrugated plate image includes a plurality of key points 901 indicated by the key point feature; as shown in fig. 10, it includes a plurality of dividing lines 1001 indicated by the dividing line feature. The image is segmented through the key point feature, dividing line feature, and type feature, and a plurality of corrugated regions are located in the image; as shown in fig. 11, the image includes a plurality of located corrugated regions 1101. After the corrugated regions are located, defect analysis is subsequently performed on the corrugated plate by means of the located regions to determine the quality of the plate.
Fig. 12 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application, and as shown in fig. 12, the apparatus includes:
the feature extraction module 1201 is configured to perform feature extraction on a corrugated plate image to obtain feature information of the corrugated plate image, where the feature information includes a dividing line feature and a type feature, the dividing line feature is used to indicate dividing lines between different corrugated surfaces in the corrugated plate image, the type feature is used to indicate the type corresponding to each pixel point in the corrugated plate image, and the type is the type of the corrugated surface where the pixel point is located;
an image segmentation module 1202 for segmenting the corrugated plate image into a plurality of reference corrugated regions according to the segmentation lines indicated by the segmentation line characteristics;
the region adjusting module 1203 is configured to adjust the multiple reference corrugated regions according to the type characteristics to obtain multiple target corrugated regions of the corrugated plate image, so that types of corrugated surfaces corresponding to every two adjacent target corrugated regions are different.
In one possible implementation, as shown in fig. 13, the image segmentation module 1202 includes:
a position determining unit 1221, configured to determine positions of multiple dividing lines in the corrugated plate image according to the characteristics of the dividing lines;
the region determining unit 1222 is configured to determine a region between every two adjacent dividing lines as a reference moire region according to the positions of the dividing lines, so as to obtain a plurality of reference moire regions.
In another possible implementation, as shown in fig. 13, the area adjustment module 1203 includes:
a first determining unit 1231, configured to determine, according to the type feature, a type corresponding to each pixel point in the corrugated plate image;
a second determining unit 1232, configured to determine, according to the pixel points in each reference ripple region and the type corresponding to each pixel point, a type corresponding to each reference ripple region respectively;
the region adjusting unit 1233 is configured to adjust the multiple reference corrugated regions according to the types corresponding to the multiple reference corrugated regions, so as to obtain multiple target corrugated regions.
In another possible implementation manner, as shown in fig. 13, the area adjusting unit 1233 includes:
the region merging subunit 12331 is configured to merge any two adjacent reference ripple regions to obtain a target ripple region, in response to that the types of any two adjacent reference ripple regions are the same;
a first determining subunit 12332, configured to determine any one of the reference ripple regions as the target ripple region in response to a difference in type between the reference ripple region and an adjacent reference ripple region.
In another possible implementation manner, the second determining unit 1232 is configured to determine, according to the pixel point in each reference ripple region and the type corresponding to each pixel point, the type corresponding to each reference ripple region and the confidence of the type respectively.
In another possible implementation manner, the feature information further includes a keypoint feature, where the keypoint feature is used to indicate a keypoint located on the dividing line in the corrugated plate image;
as shown in fig. 13, the area adjusting unit 1233 includes:
a second determining subunit 12333, configured to, in response to that the confidence degree corresponding to any one of the reference moire regions is smaller than the reference confidence degree, determine, according to the keypoint features, at least two target keypoints located in the reference moire region;
and a dividing sub-unit 12334, configured to perform dividing processing on the reference moire area according to a dividing line formed by at least two key points, so as to obtain a target moire area.
In another possible implementation manner, the second determining subunit 12333 is configured to determine at least two target keypoints located within the reference corrugated region according to the positions of the plurality of keypoints in the keypoint feature and the position of the reference corrugated region in the corrugated plate image.
In another possible implementation manner, the keypoint features are used to indicate keypoints located on the segmentation line in the corrugated plate image and confidence degrees of the keypoints, and the second determining subunit 12333 is used to determine a plurality of reference keypoints located in the reference corrugated region according to positions of the plurality of keypoints in the keypoint features and positions of the reference corrugated region in the corrugated plate image; and selecting at least two target key points from the plurality of reference key points according to the confidence degrees of the plurality of reference key points in the key point features.
In another possible implementation manner, as shown in fig. 13, the area adjusting unit 1233 further includes:
a third determining subunit 12335, configured to, in response to that the confidence level corresponding to any one of the reference ripple regions is greater than the reference confidence level, take any one of the reference ripple regions as the target ripple region.
In another possible implementation manner, as shown in fig. 13, the second determining unit 1232 includes:
a fourth determining subunit 12321, configured to determine, for any reference ripple region, a type corresponding to each pixel point in the reference ripple region according to the pixel points in the reference ripple region and the type corresponding to each pixel point;
a fifth determining subunit 12322, configured to determine, according to the type corresponding to each pixel point in the reference moire area, the type corresponding to the reference moire area.
In another possible implementation manner, the fifth determining subunit 12322 is configured to determine, according to the type corresponding to each pixel point in the reference moire area, the type with the largest number of corresponding pixel points as the type corresponding to the reference moire area.
In another possible implementation manner, as shown in fig. 13, the apparatus further includes:
a number obtaining module 1204, configured to obtain the number of pixel points in the reference moire area, where the type of the pixel points is the same as the type of the pixel points corresponding to the reference moire area;
the confidence determining module 1205 is configured to use a ratio between the number of the pixel points and the total number of the pixel points in the reference moire area as a confidence of the type corresponding to the reference moire area.
In another possible implementation manner, the type feature includes a confidence that a corrugated surface where each pixel point in the corrugated plate image is located belongs to each type;
the first determining unit 1231 includes:
the type determining subunit 12311 is configured to, for any pixel point, determine, according to the confidence that the corrugated surface where the pixel point is located belongs to each type, the type corresponding to the maximum confidence as the type to which the corrugated surface where the pixel point is located belongs.
In another possible implementation manner, as shown in fig. 13, the feature extraction module 1201 includes:
the feature extraction unit 1211 is configured to invoke a feature extraction model, perform feature extraction on the corrugated plate image, and obtain feature information.
In another possible implementation manner, the feature extraction model comprises a feature extraction submodel, a scale conversion submodel and a feature detection submodel;
as shown in fig. 13, the feature extraction unit 1211 includes:
a first extraction subunit 12111, configured to invoke the feature extraction submodel, perform feature extraction on the corrugated plate image, to obtain a first feature map of the corrugated plate image;
a second extraction subunit 12112, configured to invoke the scale conversion sub-model, perform scale conversion on the first feature map, to obtain a second feature map of the corrugated plate image;
and a third extraction subunit 12113, configured to invoke the feature detection sub-model, perform feature detection on the second feature map, and obtain feature information.
In another possible implementation manner, the feature detection submodel includes a dividing line detection submodel and a type detection submodel;
a third extraction subunit 12113, configured to invoke the dividing line detection submodel and perform dividing line detection on the second feature map to obtain the dividing line feature; and invoke the type detection submodel and perform type detection on the second feature map to obtain the type feature.
In another possible implementation, the feature detection submodel further includes a key point detection submodel; the third extraction subunit 12113 is further configured to invoke a key point detection sub-model, and perform key point detection on the second feature map to obtain key point features.
In another possible implementation manner, the second extracting subunit 12112 is configured to invoke a scale conversion sub-model, perform scale conversion processing on the first feature map, and obtain reference feature maps of multiple scales corresponding to the first feature map; and carrying out fusion processing on the reference characteristic graphs of the multiple scales to obtain a second characteristic graph of the corrugated plate image.
It should be noted that: the image segmentation apparatus provided in the above embodiment is only exemplified by the division of the above functional modules, and in practical applications, the above functions can be distributed by different functional modules according to needs, that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the above described functions. In addition, the image segmentation apparatus and the image segmentation method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
The embodiment of the present application further provides a computer device, which includes a processor and a memory, where the memory stores at least one program code, and the at least one program code is loaded and executed by the processor to implement the operations executed in the image segmentation method of the foregoing embodiment.
Optionally, the computer device is provided as a terminal. Fig. 14 shows a block diagram of a terminal 1400 according to an exemplary embodiment of the present application. The terminal 1400 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The electronic device 1400 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
The electronic device 1400 includes: a processor 1401, and a memory 1402.
Processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). Processor 1401 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1401 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1401 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1402 may include one or more computer-readable storage media, which may be non-transitory. Memory 1402 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1402 is used to store at least one program code for execution by processor 1401 to implement the image segmentation method provided by the method embodiments herein.
In some embodiments, the electronic device 1400 may further include: a peripheral device interface 1403 and at least one peripheral device. The processor 1401, the memory 1402, and the peripheral device interface 1403 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral device interface 1403 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1404, a display 1405, a camera assembly 1406, audio circuitry 1407, a positioning assembly 1408, and a power supply 1409.
The peripheral device interface 1403 can be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1401 and the memory 1402. In some embodiments, the processor 1401, memory 1402, and peripheral interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or both of the processor 1401, the memory 1402, and the peripheral device interface 1403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1404 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1404 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 1404 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1404 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1404 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1405 is a touch display screen, the display screen 1405 also has the ability to capture touch signals at or above the surface of the display screen 1405. The touch signal may be input to the processor 1401 for processing as a control signal. At this point, the display 1405 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 1405 may be one, disposed on the front panel of the electronic device 1400; in other embodiments, the display 1405 may be at least two, respectively disposed on different surfaces of the electronic device 1400 or in a foldable design; in other embodiments, the display 1405 may be a flexible display disposed on a curved surface or a folded surface of the electronic device 1400. Even further, the display 1405 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1405 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 1406 is used to capture images or video. Optionally, camera assembly 1406 includes a front camera and a rear camera. The front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1406 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1407 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1401 for processing or inputting the electric signals to the radio frequency circuit 1404 to realize voice communication. For stereo capture or noise reduction purposes, the microphones may be multiple and disposed at different locations of the electronic device 1400. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is then used to convert electrical signals from the processor 1401 or the radio frequency circuit 1404 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuit 1407 may also include a headphone jack.
The positioning component 1408 is operable to locate the current geographic Location of the electronic device 1400 for navigation or LBS (Location Based Service). The positioning component 1408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1409 is used to power the various components of the electronic device 1400. The power source 1409 may be alternating current, direct current, disposable or rechargeable. When the power source 1409 comprises a rechargeable battery, the rechargeable battery can be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the electronic device 1400 further includes one or more sensors 1410. The one or more sensors 1410 include, but are not limited to: acceleration sensor 1411, gyroscope sensor 1412, pressure sensor 1413, fingerprint sensor 1414, optical sensor 1415, and proximity sensor 1416.
The acceleration sensor 1411 may detect the magnitude of acceleration on three coordinate axes of a coordinate system established with the electronic device 1400. For example, the acceleration sensor 1411 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1401 can control the display 1405 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1411. The acceleration sensor 1411 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1412 may detect a body direction and a rotation angle of the electronic device 1400, and the gyro sensor 1412 and the acceleration sensor 1411 may cooperate to collect a 3D motion of the user on the electronic device 1400. The processor 1401 can realize the following functions according to the data collected by the gyro sensor 1412: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensors 1413 may be disposed on the side frame of the electronic device 1400 and/or underneath the display 1405. When the pressure sensor 1413 is disposed on the side frame of the electronic device 1400, the user's holding signal of the electronic device 1400 can be detected, and the processor 1401 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1413. When the pressure sensor 1413 is disposed at the lower layer of the display screen 1405, the processor 1401 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1405. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1414 is used for collecting a fingerprint of a user, and the processor 1401 identifies the user according to the fingerprint collected by the fingerprint sensor 1414, or the fingerprint sensor 1414 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 1401 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for, and changing settings, etc. The fingerprint sensor 1414 may be disposed on a front, back, or side of the electronic device 1400. When a physical button or vendor Logo is provided on the electronic device 1400, the fingerprint sensor 1414 may be integrated with the physical button or vendor Logo.
The optical sensor 1415 is used to collect ambient light intensity. In one embodiment, processor 1401 may control the display brightness of display 1405 based on the ambient light intensity collected by optical sensor 1415. Specifically, when the ambient light intensity is high, the display luminance of the display screen 1405 is increased; when the ambient light intensity is low, the display brightness of the display screen 1405 is reduced. In another embodiment, the processor 1401 can also dynamically adjust the shooting parameters of the camera assembly 1406 according to the intensity of the ambient light collected by the optical sensor 1415.
A proximity sensor 1416, also known as a distance sensor, is disposed on the front panel of the electronic device 1400. The proximity sensor 1416 is used to capture the distance between the user and the front of the electronic device 1400. In one embodiment, when the proximity sensor 1416 detects that the distance between the user and the front of the electronic device 1400 gradually decreases, the processor 1401 controls the display 1405 to switch from the screen-on state to the screen-off state; when the proximity sensor 1416 detects that the distance gradually increases, the processor 1401 controls the display 1405 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in FIG. 14 is not intended to be limiting of the electronic device 1400 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Optionally, the computer device is provided as a server. Fig. 15 is a schematic structural diagram of a server 1500 according to an embodiment of the present application, where the server 1500 may generate a relatively large difference due to different configurations or performances, and may include one or more processors (CPUs) 1501 and one or more memories 1502, where at least one program code is stored in the memory 1502, and the at least one program code is loaded and executed by the processors 1501 to implement the methods provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input/output, and the server may also include other components for implementing the functions of the device, which are not described herein again.
The server 1500 may be configured to perform the steps performed by the server in the image segmentation method.
The embodiment of the present application further provides a computer-readable storage medium, in which at least one program code is stored, and the at least one program code is loaded and executed by a processor to implement the operations performed in the image segmentation method of the foregoing embodiment.
Embodiments of the present application also provide a computer program product or a computer program comprising computer program code stored in a computer readable storage medium. The processor of the computer apparatus reads the computer program code from the computer-readable storage medium, and the processor executes the computer program code, so that the computer apparatus realizes the operations performed in the image segmentation method as described above in the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only an alternative embodiment of the present application and should not be construed as limiting the present application, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method of image segmentation, the method comprising:
extracting features of a corrugated plate image to obtain feature information of the corrugated plate image, wherein the feature information comprises a partition line feature and a type feature, the partition line feature is used for indicating partition lines among different corrugated surfaces in the corrugated plate image, the type feature is used for indicating a type corresponding to each pixel point in the corrugated plate image, and the type is the type of the corrugated surface to which the pixel point belongs;
dividing the corrugated plate image into a plurality of reference corrugated areas according to the dividing lines indicated by the dividing line characteristics;
and adjusting the plurality of reference corrugated areas according to the type characteristics to obtain a plurality of target corrugated areas of the corrugated plate image, so that the types of the corresponding corrugated surfaces of every two adjacent target corrugated areas are different.
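For illustration only, a minimal Python sketch of the pipeline claimed above might look like the following. It assumes horizontal dividing lines indexed by image row and a per-pixel type map produced by some feature extractor; the helper names (majority_type, segment_corrugated_image) are inventions of this sketch, not part of the claims.

```python
import numpy as np

def majority_type(type_patch):
    # The type carried by the largest number of pixel points wins (cf. claims 8-9).
    values, counts = np.unique(type_patch, return_counts=True)
    return values[int(np.argmax(counts))]

def segment_corrugated_image(type_map, line_rows, image_height):
    # type_map: H x W array of per-pixel type labels (the type feature);
    # line_rows: sorted row indices of the dividing lines between surfaces.
    bounds = [0] + sorted(line_rows) + [image_height]
    regions = [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

    # Label each reference corrugated region with its majority type (cf. claim 3).
    labeled = [((top, bottom), majority_type(type_map[top:bottom]))
               for top, bottom in regions]

    # Merge adjacent reference regions of equal type (cf. claim 4) so that every
    # two adjacent target regions correspond to different corrugated-surface types.
    targets = []
    for region, label in labeled:
        if targets and targets[-1][1] == label:
            (top, _), _ = targets[-1]
            targets[-1] = ((top, region[1]), label)
        else:
            targets.append((region, label))
    return targets
```

On a plate whose surfaces already alternate in type, the merge step is a no-op; it only collapses spurious splits where the dividing line feature fired inside a single surface.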
2. The method according to claim 1, wherein the dividing the corrugated plate image into a plurality of reference corrugated regions according to the dividing lines indicated by the dividing line feature comprises:
determining positions of a plurality of dividing lines in the corrugated plate image according to the dividing line feature; and
determining, according to the positions of the plurality of dividing lines, a region between every two adjacent dividing lines as a reference corrugated region, to obtain the plurality of reference corrugated regions.
3. The method according to claim 1, wherein the adjusting the plurality of reference corrugated regions according to the type feature to obtain a plurality of target corrugated regions of the corrugated plate image comprises:
determining the type corresponding to each pixel point in the corrugated plate image according to the type feature;
determining a type corresponding to each reference corrugated region according to the pixel points in each reference corrugated region and the type corresponding to each pixel point; and
adjusting the plurality of reference corrugated regions according to the types corresponding to the plurality of reference corrugated regions to obtain the plurality of target corrugated regions.
4. The method according to claim 3, wherein the adjusting the plurality of reference corrugated regions according to the types corresponding to the plurality of reference corrugated regions to obtain the plurality of target corrugated regions comprises:
in response to the types corresponding to any two adjacent reference corrugated regions being the same, merging the two reference corrugated regions into one target corrugated region; and
in response to the type corresponding to any reference corrugated region being different from the types corresponding to its adjacent reference corrugated regions, determining the reference corrugated region as one target corrugated region.
5. The method according to claim 3, wherein the determining the type corresponding to each reference corrugated region according to the pixel points in each reference corrugated region and the type corresponding to each pixel point comprises:
determining, according to the pixel points in each reference corrugated region and the type corresponding to each pixel point, the type corresponding to each reference corrugated region and a confidence of the type.
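Purely as one plausible reading of claim 5 (the claim does not fix how the confidence is computed), the confidence could be the fraction of a region's pixel points that carry its majority type; the helper below assumes the numpy type map used in the earlier sketch.

```python
import numpy as np

def region_type_with_confidence(type_patch):
    # Majority type of a reference corrugated region, plus the share of its
    # pixel points voting for that type, used here as the confidence.
    values, counts = np.unique(type_patch, return_counts=True)
    best = int(np.argmax(counts))
    return values[best], counts[best] / type_patch.size
```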
6. The method according to claim 5, wherein the feature information further comprises a keypoint feature, the keypoint feature being used for indicating keypoints of the corrugated plate image that are located on the dividing lines;
the adjusting the plurality of reference corrugated regions according to the types corresponding to the plurality of reference corrugated regions to obtain the plurality of target corrugated regions comprises:
in response to the confidence corresponding to any reference corrugated region being smaller than a reference confidence, determining at least two target keypoints located within the reference corrugated region according to the keypoint feature; and
dividing the reference corrugated region according to a dividing line formed by the at least two target keypoints, to obtain target corrugated regions.
7. The method according to claim 6, wherein the determining at least two target keypoints located within the reference corrugated region according to the keypoint feature comprises:
determining the at least two target keypoints located within the reference corrugated region according to positions of a plurality of keypoints in the keypoint feature and a position of the reference corrugated region in the corrugated plate image.
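A hedged sketch of the re-splitting described in claims 6 and 7 follows. The (row, col) keypoint representation, the reference_confidence value, and the mean-row approximation of the dividing line are all assumptions made for illustration.

```python
def resplit_low_confidence_region(region, confidence, keypoints,
                                  reference_confidence=0.8):
    # region: (top, bottom) row bounds of a reference corrugated region;
    # keypoints: (row, col) positions taken from the keypoint feature.
    if confidence >= reference_confidence:
        return [region]
    top, bottom = region
    # Cf. claim 7: target keypoints are those whose position falls inside
    # the region's position within the corrugated plate image.
    inside = [(r, c) for r, c in keypoints if top < r < bottom]
    if len(inside) < 2:
        return [region]
    # The dividing line formed by the target keypoints is approximated here
    # by their mean row.
    split_row = round(sum(r for r, _ in inside) / len(inside))
    return [(top, split_row), (split_row, bottom)]
```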
8. The method according to claim 3, wherein the determining the type corresponding to each reference corrugated region according to the pixel points in each reference corrugated region and the type corresponding to each pixel point comprises:
for any reference corrugated region, determining the type corresponding to each pixel point in the reference corrugated region according to the pixel points in the reference corrugated region and the type corresponding to each pixel point; and
determining the type corresponding to the reference corrugated region according to the type corresponding to each pixel point in the reference corrugated region.
9. The method according to claim 8, wherein the determining the type corresponding to the reference corrugated region according to the type corresponding to each pixel point in the reference corrugated region comprises:
determining, among the types corresponding to the pixel points in the reference corrugated region, the type corresponding to the largest number of pixel points as the type corresponding to the reference corrugated region.
10. The method according to claim 1, wherein the extracting features from the corrugated plate image to obtain the feature information of the corrugated plate image comprises:
invoking a feature extraction model to extract features from the corrugated plate image to obtain the feature information.
11. The method according to claim 10, wherein the feature extraction model comprises a feature extraction submodel, a scale conversion submodel, and a feature detection submodel;
the invoking a feature extraction model to extract features from the corrugated plate image to obtain the feature information comprises:
invoking the feature extraction submodel to extract features from the corrugated plate image to obtain a first feature map of the corrugated plate image;
invoking the scale conversion submodel to perform scale conversion on the first feature map to obtain a second feature map of the corrugated plate image; and
invoking the feature detection submodel to perform feature detection on the second feature map to obtain the feature information.
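The claim names three submodels but fixes no architecture; the PyTorch-style skeleton below is therefore only an assumed arrangement, with placeholder layers and one detection head per kind of feature information (dividing lines, per-pixel types, keypoints).

```python
import torch.nn as nn

class FeatureExtractionModel(nn.Module):
    def __init__(self, num_types):
        super().__init__()
        # Feature extraction submodel: image -> first feature map.
        self.feature_extraction = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Scale conversion submodel: first -> second feature map.
        self.scale_conversion = nn.Sequential(
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        )
        # Feature detection submodel: second feature map -> feature information.
        self.dividing_line_head = nn.Conv2d(64, 1, 1)   # dividing line feature
        self.type_head = nn.Conv2d(64, num_types, 1)    # per-pixel type feature
        self.keypoint_head = nn.Conv2d(64, 1, 1)        # keypoint feature (claim 6)

    def forward(self, image):
        first = self.feature_extraction(image)
        second = self.scale_conversion(first)
        return (self.dividing_line_head(second),
                self.type_head(second),
                self.keypoint_head(second))
```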
12. The method according to claim 11, wherein the invoking the scale conversion submodel to perform scale conversion on the first feature map to obtain a second feature map of the corrugated plate image comprises:
invoking the scale conversion submodel to perform scale conversion processing on the first feature map to obtain reference feature maps of a plurality of scales corresponding to the first feature map; and
fusing the reference feature maps of the plurality of scales to obtain the second feature map of the corrugated plate image.
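As a sketch of the multi-scale conversion and fusion in this claim, under the assumptions that the scales are fixed downsampling factors, the fusion is an average, and bilinear resizing is acceptable (none of which the claim specifies):

```python
import torch
import torch.nn.functional as F

def scale_convert_and_fuse(first_feature_map, scales=(1.0, 0.5, 0.25)):
    # first_feature_map: N x C x H x W tensor. Build reference feature maps at
    # several scales, resize them back to H x W, and fuse by averaging.
    h, w = first_feature_map.shape[-2:]
    fused = torch.zeros_like(first_feature_map)
    for s in scales:
        ref = first_feature_map if s == 1.0 else F.interpolate(
            first_feature_map, scale_factor=s, mode="bilinear", align_corners=False)
        fused = fused + F.interpolate(ref, size=(h, w),
                                      mode="bilinear", align_corners=False)
    return fused / len(scales)
```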
13. An image segmentation apparatus, characterized in that the apparatus comprises:
a feature extraction module, configured to extract features from a corrugated plate image to obtain feature information of the corrugated plate image, wherein the feature information comprises a dividing line feature and a type feature, the dividing line feature is used for indicating dividing lines between different corrugated surfaces in the corrugated plate image, the type feature is used for indicating a type corresponding to each pixel point in the corrugated plate image, and the type is the type of the corrugated surface to which the pixel point belongs;
an image segmentation module, configured to divide the corrugated plate image into a plurality of reference corrugated regions according to the dividing lines indicated by the dividing line feature; and
a region adjustment module, configured to adjust the plurality of reference corrugated regions according to the type feature to obtain a plurality of target corrugated regions of the corrugated plate image, so that the types of the corrugated surfaces corresponding to every two adjacent target corrugated regions are different.
14. A computer device, comprising a processor and a memory, the memory having stored therein at least one program code, the at least one program code being loaded and executed by the processor to perform the operations performed in the image segmentation method according to any one of claims 1 to 12.
15. A computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to perform the operations performed in the image segmentation method according to any one of claims 1 to 12.
CN202011079909.2A 2020-10-10 2020-10-10 Image segmentation method and device, computer equipment and storage medium Pending CN112053360A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011079909.2A CN112053360A (en) 2020-10-10 2020-10-10 Image segmentation method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011079909.2A CN112053360A (en) 2020-10-10 2020-10-10 Image segmentation method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112053360A 2020-12-08

Family

ID=73606056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011079909.2A Pending CN112053360A (en) 2020-10-10 2020-10-10 Image segmentation method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112053360A (en)

Similar Documents

Publication Publication Date Title
CN108415705B (en) Webpage generation method and device, storage medium and equipment
CN111091132B (en) Image recognition method and device based on artificial intelligence, computer equipment and medium
CN111444749A (en) Method and device for identifying road surface guide mark and storage medium
CN111931877B (en) Target detection method, device, equipment and storage medium
CN111489378B (en) Video frame feature extraction method and device, computer equipment and storage medium
CN111325220A (en) Image generation method, device, equipment and storage medium
CN110807361A (en) Human body recognition method and device, computer equipment and storage medium
CN110738185A (en) Form object identification method and device and storage medium
CN112053360A (en) Image segmentation method and device, computer equipment and storage medium
CN111586279A (en) Method, device and equipment for determining shooting state and storage medium
CN109815150B (en) Application testing method and device, electronic equipment and storage medium
CN111104980B (en) Method, device, equipment and storage medium for determining classification result
CN111738365B (en) Image classification model training method and device, computer equipment and storage medium
CN111209377B (en) Text processing method, device, equipment and medium based on deep learning
CN113378705A (en) Lane line detection method, device, equipment and storage medium
CN112818979A (en) Text recognition method, device, equipment and storage medium
CN113392688A (en) Data processing method and device, computer equipment and storage medium
CN113343709A (en) Method for training intention recognition model, method, device and equipment for intention recognition
CN112163062A (en) Data processing method and device, computer equipment and storage medium
CN112749613A (en) Video data processing method and device, computer equipment and storage medium
CN113705677A (en) Image recognition method and device, electronic equipment and storage medium
CN110647881A (en) Method, device, equipment and storage medium for determining card type corresponding to image
CN112699906A (en) Method, device and storage medium for acquiring training data
CN111178343A (en) Multimedia resource detection method, device, equipment and medium based on artificial intelligence
CN111429106A (en) Resource transfer certificate processing method, server, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40035791

Country of ref document: HK