CN117975099A - Pixel-level target labeling method and device, electronic equipment and storage medium


Info

Publication number
CN117975099A
Authority
CN
China
Prior art keywords
pixel, target area, target, determining, edge line
Prior art date
2023-12-26
Legal status
Pending
Application number
CN202311811412.9A
Other languages
Chinese (zh)
Inventor
孔潮
Current Assignee
Suzhou Lingyunguang Industrial Intelligent Technology Co Ltd
Original Assignee
Suzhou Lingyunguang Industrial Intelligent Technology Co Ltd
Priority date
2023-12-26
Filing date
2023-12-26
Publication date
2024-05-03
Application filed by Suzhou Lingyunguang Industrial Intelligent Technology Co Ltd filed Critical Suzhou Lingyunguang Industrial Intelligent Technology Co Ltd
Priority to CN202311811412.9A
Publication of CN117975099A
Legal status: Pending (current)


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a pixel-level target labeling method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring description information of a target area in a sample image; determining, according to the description information, the pixel points associated with each edge line segment of the target area; and determining a labeling result of the target area according to the pixel points associated with each edge line segment. This technical scheme solves the problem of the low degree of refinement of target labeling on sample images; it achieves pixel-level target labeling, effectively improves the accuracy of sample-image labels, and meets the requirement for refined target labeling.

Description

Pixel-level target labeling method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer vision, and in particular, to a pixel-level target labeling method, a pixel-level target labeling device, an electronic device, and a storage medium.
Background
In the field of computer vision, tasks such as target detection and target segmentation that are realized by supervised vision processing algorithms generally require target labeling of the sample images in a data set, so as to generate labels matched with the sample images for supervised model training.
In the prior art, a sample image is mainly labeled by scaling the target area: the sample image is scaled step by step, and when edge distortion of the target area is detected in the scaled image, scaling stops and the target area is labeled. However, this scaling-based labeling can hardly cover all edge pixel points of the target area accurately, the labels matched with the sample images are prone to positional deviation, and the requirement for refined target labeling cannot be met.
Disclosure of Invention
The invention provides a pixel-level target labeling method and device, an electronic device, and a storage medium, so as to solve the problem of the low degree of refinement of target labeling on sample images, achieve pixel-level target labeling, effectively improve the accuracy of sample-image labels, and meet the requirement for refined target labeling.
According to an aspect of the present invention, there is provided a pixel-level target labeling method, the method comprising:
acquiring description information of a target area in a sample image;
determining pixel points associated with each edge line segment of the target area according to the description information;
and determining the labeling result of the target region according to the pixel points associated with each edge line segment.
According to another aspect of the present invention, there is provided a pixel-level target labeling apparatus, the apparatus comprising:
The description information acquisition module is used for acquiring description information of a target area in the sample image;
The pixel point determining module is used for determining pixel points associated with each edge line segment of the target area according to the description information;
And the labeling result determining module is used for determining the labeling result of the target area according to the pixel points associated with each edge line segment.
According to another aspect of the present invention, there is provided an electronic apparatus including:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the pixel-level target labeling method according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement the pixel-level target labeling method according to any one of the embodiments of the present invention when executed.
According to the technical scheme of the invention, description information of a target area in a sample image is acquired; the pixel points associated with each edge line segment of the target area are determined according to the description information; and the labeling result of the target area is determined according to the pixel points associated with each edge line segment. This technical scheme solves the problem of the low degree of refinement of target labeling on sample images, achieves pixel-level target labeling, effectively improves the accuracy of sample-image labels, and meets the requirement for refined target labeling.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of a pixel-level target labeling method according to a first embodiment of the present invention;
FIG. 2A is a flowchart of a pixel-level target labeling method according to a second embodiment of the present invention;
FIG. 2B is a schematic diagram of the edge vertex positions of a target area according to a second embodiment of the present invention;
FIG. 2C is a schematic diagram of the pixel points traversed by each edge line segment of a target area according to a second embodiment of the present invention;
FIG. 2D is a schematic diagram of the square area associated with each pixel point according to a second embodiment of the present invention;
FIG. 2E is a schematic diagram of a polygon area according to a second embodiment of the present invention;
FIG. 2F is a schematic diagram of a labeling result of a target area according to a second embodiment of the present invention;
FIG. 3 is a flowchart of a pixel-level target labeling method according to a third embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a pixel-level target labeling apparatus according to a fourth embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an electronic device implementing a pixel-level target labeling method according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description, the claims, and the above figures are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article, or apparatus. The acquisition, storage, use, and processing of data in the technical solution of the present application all comply with the relevant provisions of national laws and regulations.
Example 1
FIG. 1 is a flowchart of a pixel-level target labeling method according to a first embodiment of the present invention. This embodiment is applicable to refined target labeling scenarios, especially pixel-level target labeling that is difficult for the human eye to distinguish. The method may be performed by a pixel-level target labeling apparatus, which may be implemented in hardware and/or software and may be configured in an electronic device. As shown in FIG. 1, the method includes:
S110, acquiring description information of a target area in the sample image.
This scheme can be executed by an electronic device such as a computer or a server. The electronic device can acquire a data set to be labeled and obtain the description information of the target area of each sample image in the data set. The target area may be the area of the sample image in which the target to be identified is distributed; for example, if the target to be identified is a vehicle, the target area may be the area of the sample image where the vehicle is located. Each sample image may contain one or more target areas, and each target area may be matched with one piece of description information. The description information of a target area may include information such as the position and contour of the target area; for example, it may include the center position of the target area and the distance from the edge of the target area to the center.
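For illustration only, the following is a minimal Python sketch of one possible way to organize such description information, assuming the vertex-based form used in the second and third embodiments below; the class and field names are illustrative and not part of the disclosed method.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TargetRegionDescription:
    """Description information of one target area in a sample image.

    Here the area is described by its ordered edge-vertex coordinates,
    the form used in the second and third embodiments; a center position
    plus edge distances could be converted into this form.
    """
    vertices: List[Tuple[float, float]]  # ordered (x, y) edge vertices

# Illustrative example: a triangular target area like triangle ABC in the figures.
triangle = TargetRegionDescription(vertices=[(2.0, 1.0), (10.0, 3.0), (5.0, 9.0)])
```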
S120, determining pixel points associated with each edge line segment of the target area according to the description information.
Based on the description information of the target area, the electronic device may determine the vertex positions of the edges of the target area. According to these edge vertex positions, the electronic device can connect every two adjacent edge vertices into an edge line segment, thereby obtaining the edge line segments of the target area. The electronic device may then determine, based on a computer graphics algorithm, the pixel points traversed by each edge line segment of the target area.
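The pairing of adjacent vertices can be sketched as follows, assuming the vertices are given in order around the area and the boundary is closed by connecting the last vertex back to the first; the function name is illustrative.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def edge_segments(vertices: List[Point]) -> List[Tuple[Point, Point]]:
    """Connect every two adjacent edge vertices into an edge line segment.

    The last vertex is connected back to the first so that the target
    area's boundary is closed.
    """
    n = len(vertices)
    return [(vertices[i], vertices[(i + 1) % n]) for i in range(n)]

# For triangle ABC this yields the three edge line segments AB, BC and CA.
segments = edge_segments([(2.0, 1.0), (10.0, 3.0), (5.0, 9.0)])
```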
S130, determining the labeling result of the target area according to the pixel points associated with the edge line segments.
It can be understood that the electronic device can determine the pixel-level boundary of the target area based on the pixel points associated with each edge line segment, so as to obtain the labeling result of the target area.
According to the technical scheme of this embodiment, description information of a target area in a sample image is acquired; the pixel points associated with each edge line segment of the target area are determined according to the description information; and the labeling result of the target area is determined according to the pixel points associated with each edge line segment. This technical scheme solves the problem of the low degree of refinement of target labeling on sample images, achieves pixel-level target labeling, effectively improves the accuracy of sample-image labels, and meets the requirement for refined target labeling.
Example 2
FIG. 2A is a flowchart of a pixel-level target labeling method according to a second embodiment of the present invention, which is refined on the basis of the above embodiment. As shown in FIG. 2A, the method includes:
S210, obtaining the vertex positions of all edges of the target area in the sample image.
In this embodiment, the description information of the target area may include the edge vertex positions, for example the coordinate values of each edge vertex. FIG. 2B is a schematic diagram of the edge vertex positions of a target area according to the second embodiment of the present invention. As shown in FIG. 2B, the target area is the area enclosed by triangle ABC, and the edge vertex positions of the target area are the coordinates of the three vertices of triangle ABC.
S220, determining each edge line segment of the target area according to the vertex positions of each edge of the target area.
The electronic device may determine adjacent edge vertices according to the edge vertex positions, and connect each group of adjacent edge vertices with a straight line to obtain each edge line segment of the target area.
S230, determining pixel points associated with each edge line segment of the target area based on a preset straight line drawing model.
It is readily understood that, mathematically, a line segment consists of infinitely many points, whereas on a computer display it is represented by a finite number of pixels. The infinitely many points therefore need to be approximated by a finite number of pixel points, based on a straight line drawing model, in order to display the segment.
In this embodiment, optionally, the straight line drawing model includes one of a numerical differential model (Digital Differential Analyzer, DDA), a midpoint line drawing model, and a Bresenham straight line model. FIG. 2C is a schematic diagram of the pixel points traversed by each edge line segment of the target area according to the second embodiment of the present invention. As shown in FIG. 2C, according to the straight line drawing model, the electronic device may generate the pixel points traversed by each edge line segment of the target area.
It can be understood that the pixel points associated with two intersecting edge line segments may overlap. Therefore, after the pixel points associated with each edge line segment are obtained, the electronic device may de-duplicate the pixel points associated with the edge line segments of the target area, so as to reduce the workload of subsequent pixel-point processing.
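The sketch below illustrates this step with the Bresenham straight line model, one of the three drawing models named above, together with the set-based de-duplication just described. It is only one possible implementation: rounding the vertex coordinates to integer pixel coordinates and the helper names are assumptions made for the example.

```python
from typing import List, Set, Tuple

Pixel = Tuple[int, int]

def bresenham(x0: int, y0: int, x1: int, y1: int) -> List[Pixel]:
    """Pixel points traversed by the segment (x0, y0)-(x1, y1), Bresenham style."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pixels

def edge_pixels(segments) -> Set[Pixel]:
    """Pixel points associated with all edge line segments, de-duplicated by a set."""
    pixels: Set[Pixel] = set()
    for (xa, ya), (xb, yb) in segments:
        pixels.update(bresenham(round(xa), round(ya), round(xb), round(yb)))
    return pixels
```

Either of the other two straight line drawing models could replace the bresenham helper; they differ only in how the finite pixel approximation of each segment is generated.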
S240, using the pixel points associated with the edge line segments as reference points of a preset pixel model, and generating a polygon area matched with the target area.
The electronic device can build a pixel model in advance, for example simulating the area covered by a pixel with a circle, a square, or another shape. Taking each pixel point traversed by the edge line segments as a reference point of the pixel model, the electronic device generates a pixel model area matched with each pixel point. It can be understood that the reference point of the pixel model may be any point that describes the position of the pixel model, such as the center of a circle or a vertex of a square. Based on a polygon processing model, the electronic device can then obtain, from the pixel model areas matched with the pixel points, a polygon area matched with the target area.
S250, taking the region union of the target region and the polygonal region as a labeling result of the target region.
After obtaining the polygonal region matched with the target region, the electronic device can determine a region union set of the target region and the polygonal region, and take the region union set as a labeling result of the target region.
In one possible solution, the pixel model is a square with a preset side length, and the reference point is one of the square's vertices;
the generating a polygon area matched with the target area by taking the pixel points associated with each edge line segment as the reference points of a preset pixel model comprises the following steps:
the pixel points associated with the edge line segments are used as target vertexes of squares, and square areas associated with the pixel points are generated;
And generating a polygonal area matched with the target area according to the square area associated with each pixel point.
FIG. 2D is a schematic diagram of the square areas associated with the pixel points according to the second embodiment of the present invention. As shown in FIG. 2D, each pixel point is associated with at least one square area: a pixel point may serve as a vertex of only one square (for example, pixel point a is only a vertex of square 1), several pixel points may be associated with the same square area (for example, pixel points a and b are both vertices of square 1), and one pixel point may be associated with several square areas (for example, pixel point b is a vertex of both square 1 and square 2).
According to the square areas associated with the pixel points, the electronic device can draw the polygon area formed by these square areas. FIG. 2E is a schematic diagram of the polygon area according to the second embodiment of the present invention, and FIG. 2F is a schematic diagram of the labeling result of the target area according to the second embodiment of the present invention. Taking the union of the polygon area shown in FIG. 2E and the target area (triangle ABC) yields the labeling result of the target area shown in FIG. 2F.
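As a hedged illustration of S240 and S250, the sketch below materializes the labeling result as a binary mask: the interior of the target area (given by its vertices) is filled with a standard ray-casting point-in-polygon test, and a square of preset side length is then OR-ed into the mask at each edge pixel, taking the pixel as the square's top-left vertex. The mask representation, the one-pixel side length, the choice of the top-left vertex as the reference point, and the helper names are assumptions made for this example only.

```python
import numpy as np

def point_in_polygon(x: float, y: float, vertices) -> bool:
    """Ray-casting (even-odd) test: is the point (x, y) inside the polygon?"""
    inside = False
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        if (y0 > y) != (y1 > y):  # this edge straddles the horizontal ray
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < x_cross:
                inside = not inside
    return inside

def label_mask(vertices, boundary_pixels, height, width, side=1):
    """Labeling result as a binary mask: the union of the target area and the
    square area associated with each edge pixel (pixel = top-left square vertex).
    boundary_pixels can be, e.g., the set returned by edge_pixels above."""
    mask = np.zeros((height, width), dtype=bool)
    # Fill the interior of the target area (pixel centers at (x + 0.5, y + 0.5)).
    for y in range(height):
        for x in range(width):
            if point_in_polygon(x + 0.5, y + 0.5, vertices):
                mask[y, x] = True
    # OR a side x side square into the mask at every edge pixel.
    for px, py in boundary_pixels:
        mask[max(py, 0): py + side, max(px, 0): px + side] = True
    return mask
```

Because the union keeps every pixel that the boundary actually passes through, no edge pixel is lost to rounding, which is the refinement this embodiment aims at.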
According to the technical scheme of this embodiment, description information of a target area in a sample image is acquired; the pixel points associated with each edge line segment of the target area are determined according to the description information; and the labeling result of the target area is determined according to the pixel points associated with each edge line segment. This technical scheme solves the problem of the low degree of refinement of target labeling on sample images, achieves pixel-level target labeling, effectively improves the accuracy of sample-image labels, and meets the requirement for refined target labeling.
Example 3
FIG. 3 is a flowchart of a pixel-level target labeling method according to a third embodiment of the present invention, which is refined on the basis of the foregoing embodiments. As shown in FIG. 3, the method includes:
S310, obtaining the vertex positions of all edges of the target area in the sample image.
S320, determining each edge line segment of the target area according to each edge vertex position of the target area.
S330, determining pixel points associated with each edge line segment of the target area based on a preset straight line drawing model.
In this solution, optionally, the straight line drawing model includes one of a numerical differential model, a midpoint line drawing model, and a Bresenham straight line model.
S340, using the pixel points associated with the edge line segments as reference points of a preset pixel model, and generating a pixel model area associated with each pixel point.
It will be readily appreciated that the electronic device can build a pixel model in advance, for example simulating the area covered by a pixel with a circle, a square, or another shape. Taking each pixel point traversed by the edge line segments as a reference point of the pixel model, such as the center of a circle or a vertex of a square, the electronic device generates a pixel model area matched with each pixel point.
S350, determining the labeling result of the target area according to the target area and the pixel model area associated with each pixel point.
The electronic device can sequentially determine the union of the target area and the pixel model area associated with each pixel point, and determine the labeling result of the target area according to these region unions.
On the basis of this scheme, the pixel model is a square with a preset side length, and the reference point is one of the square's vertices;
the generating a pixel model area associated with each pixel point by taking the pixel point associated with each edge line segment as a reference point of a preset pixel model includes:
the pixel points associated with the edge line segments are used as target vertexes of squares, and square areas associated with the pixel points are generated;
the determining the labeling result of the target area according to the target area and the pixel model area associated with each pixel point comprises the following steps:
And sequentially determining the region union set of the square region and the target region associated with each pixel point, and determining the labeling result of the target region according to the region union set.
After obtaining the pixel model area associated with each pixel point, the electronic device may sequentially determine the union of each square area shown in FIG. 2D with the target area, and then take the union of all of these region unions to obtain the labeling result of the target area, for example the labeling result shown in FIG. 2F.
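A minimal sketch of this embodiment-three variant, under the same mask assumptions as the sketch in the second embodiment (one-pixel squares, pixel taken as the square's top-left vertex; the function name is illustrative): starting from the mask of the target area, the square of each pixel point is unioned in one at a time.

```python
import numpy as np

def sequential_label_mask(region_mask: np.ndarray, boundary_pixels, side=1) -> np.ndarray:
    """Start from the target-area mask and sequentially take the union with the
    square area associated with each edge pixel; the accumulated union is the
    labeling result of the target area."""
    mask = region_mask.copy()
    for px, py in boundary_pixels:
        square = np.zeros_like(mask)
        square[max(py, 0): py + side, max(px, 0): px + side] = True
        mask |= square  # region union of the current result and this square area
    return mask
```

Since set union is associative, this per-square accumulation produces the same labeling result as forming the polygon area first and taking a single union, as in the second embodiment.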
According to the technical scheme of this embodiment, description information of a target area in a sample image is acquired; the pixel points associated with each edge line segment of the target area are determined according to the description information; and the labeling result of the target area is determined according to the pixel points associated with each edge line segment. This technical scheme solves the problem of the low degree of refinement of target labeling on sample images, achieves pixel-level target labeling, effectively improves the accuracy of sample-image labels, and meets the requirement for refined target labeling.
Example 4
FIG. 4 is a schematic structural diagram of a pixel-level target labeling apparatus according to a fourth embodiment of the present invention. As shown in FIG. 4, the apparatus includes:
a description information obtaining module 410, configured to obtain description information of a target area in a sample image;
the pixel point determining module 420 is configured to determine, according to the description information, a pixel point associated with each edge line segment of the target area;
The labeling result determining module 430 is configured to determine a labeling result of the target area according to the pixel points associated with each edge line segment.
In this solution, optionally, the description information includes vertex positions of edges of the target area;
the pixel point determining module 420 is specifically configured to:
determining each edge line segment of the target area according to each edge vertex position of the target area;
and determining pixel points associated with each edge line segment of the target area based on a preset straight line drawing model.
On the basis of the scheme, optionally, the straight line drawing model comprises one of a numerical differential model, a midpoint line drawing model, and a Bresenham straight line model.
In one possible implementation, the labeling result determining module 430 includes:
the polygon area generating unit is used for generating a polygon area matched with the target area by taking the pixel points associated with the edge line segments as reference points of a preset pixel model;
and the first labeling result determining unit is used for taking the region union of the target region and the polygonal region as a labeling result of the target region.
On the basis of the scheme, the pixel model is a square with a preset side length; the reference point is one of the square's vertices;
The polygon area generating unit is specifically configured to:
the pixel points associated with the edge line segments are used as target vertexes of squares, and square areas associated with the pixel points are generated;
And generating a polygonal area matched with the target area according to the square area associated with each pixel point.
In another possible implementation, the labeling result determining module 430 includes:
the pixel model area generating unit is used for generating a pixel model area associated with each pixel point by taking the pixel point associated with each edge line segment as a reference point of a preset pixel model;
and the second labeling result determining unit is used for determining the labeling result of the target area according to the target area and the pixel model area associated with each pixel point.
On the basis of the scheme, optionally, the pixel model is a square with a preset side length; the reference point is one of the square's vertices;
the pixel model region generating unit is specifically configured to:
the pixel points associated with the edge line segments are used as target vertexes of squares, and square areas associated with the pixel points are generated;
The second labeling result determining unit is specifically configured to:
And sequentially determining the region union set of the square region and the target region associated with each pixel point, and determining the labeling result of the target region according to the region union set.
The pixel-level target labeling apparatus provided by this embodiment of the invention can execute the pixel-level target labeling method provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to the executed method.
Example 5
FIG. 5 shows a schematic diagram of an electronic device 510 that may be used to implement an embodiment of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in FIG. 5, the electronic device 510 includes at least one processor 511 and a memory communicatively connected to the at least one processor 511, such as a Read Only Memory (ROM) 512 and a Random Access Memory (RAM) 513. The memory stores a computer program executable by the at least one processor, and the processor 511 may perform various suitable actions and processes according to the computer program stored in the ROM 512 or loaded from the storage unit 518 into the RAM 513. The RAM 513 may also store various programs and data required for the operation of the electronic device 510. The processor 511, the ROM 512, and the RAM 513 are connected to each other by a bus 514. An input/output (I/O) interface 515 is also connected to the bus 514.
Various components in the electronic device 510 are connected to the I/O interface 515, including: an input unit 516 such as a keyboard, a mouse, etc.; an output unit 517 such as various types of displays, speakers, and the like; a storage unit 518 such as a magnetic disk, optical disk, etc.; and a communication unit 519 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 519 allows the electronic device 510 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks.
The processor 511 may be any of various general and/or special purpose processing components with processing and computing capabilities. Some examples of the processor 511 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 511 performs the various methods and processes described above, such as the pixel-level target labeling method.
In some embodiments, the pixel-level target labeling method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 518. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 510 via the ROM 512 and/or the communication unit 519. When the computer program is loaded into the RAM 513 and executed by the processor 511, one or more steps of the pixel-level target labeling method described above may be performed. Alternatively, in other embodiments, the processor 511 may be configured to perform the pixel-level target labeling method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable pixel level target labeling apparatus, such that the computer programs, when executed by the processor, cause the functions/operations specified in the flowchart and/or block diagram block or blocks to be performed. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A pixel-level target labeling method, the method comprising:
acquiring description information of a target area in a sample image;
determining pixel points associated with each edge line segment of the target area according to the description information;
and determining the labeling result of the target region according to the pixel points associated with each edge line segment.
2. The method of claim 1, wherein the descriptive information includes edge vertex positions of the target region;
the determining, according to the description information, the pixel point associated with each edge line segment of the target area includes:
determining each edge line segment of the target area according to each edge vertex position of the target area;
and determining pixel points associated with each edge line segment of the target area based on a preset straight line drawing model.
3. The method of claim 2, wherein the straight line drawing model comprises one of a numerical differential model, a midpoint line drawing model, and a Bresenham straight line model.
4. The method according to claim 2, wherein the determining the labeling result of the target area according to the pixel points associated with each edge line segment includes:
the pixel points associated with the edge line segments are used as reference points of a preset pixel model, and a polygon area matched with the target area is generated;
and taking the region union set of the target region and the polygonal region as a labeling result of the target region.
5. The method according to claim 4, wherein the pixel model is a square with a preset side length; the reference point is one of the square's vertices;
the generating a polygon area matched with the target area by taking the pixel points associated with each edge line segment as the reference points of a preset pixel model comprises the following steps:
the pixel points associated with the edge line segments are used as target vertexes of squares, and square areas associated with the pixel points are generated;
And generating a polygonal area matched with the target area according to the square area associated with each pixel point.
6. The method according to claim 2, wherein the determining the labeling result of the target area according to the pixel points associated with each edge line segment includes:
The pixel points associated with the edge line segments are used as reference points of a preset pixel model, and a pixel model area associated with each pixel point is generated;
And determining the labeling result of the target area according to the target area and the pixel model area associated with each pixel point.
7. The method of claim 6, wherein the pixel model is a square with a preset side length; the reference point is one of the square's vertices;
the generating a pixel model area associated with each pixel point by taking the pixel point associated with each edge line segment as a reference point of a preset pixel model includes:
the pixel points associated with the edge line segments are used as target vertexes of squares, and square areas associated with the pixel points are generated;
the determining the labeling result of the target area according to the target area and the pixel model area associated with each pixel point comprises the following steps:
And sequentially determining the region union set of the square region and the target region associated with each pixel point, and determining the labeling result of the target region according to the region union set.
8. A pixel-level target labeling apparatus, comprising:
The description information acquisition module is used for acquiring description information of a target area in the sample image;
The pixel point determining module is used for determining pixel points associated with each edge line segment of the target area according to the description information;
And the labeling result determining module is used for determining the labeling result of the target area according to the pixel points associated with each edge line segment.
9. An electronic device, the electronic device comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the pixel-level target labeling method of any one of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to implement the pixel-level target labeling method of any one of claims 1-7 when executed.
Application CN202311811412.9A, priority date 2023-12-26, filing date 2023-12-26: Pixel-level target labeling method and device, electronic equipment and storage medium. Status: Pending. Publication: CN117975099A.


Publications (1)

Publication Number: CN117975099A
Publication Date: 2024-05-03

Family

ID=90856529



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination