CN110796201A - Method for correcting label frame, electronic equipment and storage medium - Google Patents

Method for correcting label frame, electronic equipment and storage medium

Info

Publication number
CN110796201A
CN110796201A
Authority
CN
China
Prior art keywords
labeling
frames
overlapped
frame
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911053228.6A
Other languages
Chinese (zh)
Other versions
CN110796201B (en)
Inventor
王文琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Robotics Co Ltd
Original Assignee
Cloudminds Shenzhen Robotics Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shenzhen Robotics Systems Co Ltd filed Critical Cloudminds Shenzhen Robotics Systems Co Ltd
Priority to CN201911053228.6A priority Critical patent/CN110796201B/en
Publication of CN110796201A publication Critical patent/CN110796201A/en
Application granted granted Critical
Publication of CN110796201B publication Critical patent/CN110796201B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 - Validation; Performance evaluation; Active pattern learning techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Abstract

The embodiment of the invention relates to the field of data processing, and discloses a method for correcting labeling frames, an electronic device and a storage medium. In some embodiments of the present application, the method for correcting labeling frames includes: determining overlap information between the labeling frames of an image according to the labeling frame information of the image, wherein the overlap information indicates whether the labeling frames of the image overlap; and, according to the overlap information between the labeling frames of the image, if overlapped labeling frames exist in the image, adjusting the labeling frames of the image to reduce the overlapping area between the overlapped labeling frames. In this embodiment, the overlapping area between labeling frames can be reduced, which improves the learning effect of a deep-learning-based target detection algorithm and improves recognition accuracy.

Description

Method for correcting label frame, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the field of data processing, in particular to a method for correcting a label box, electronic equipment and a storage medium.
Background
A fisheye lens is a special lens with a larger viewing angle and a shorter focal length than a standard lens. Because of these structural characteristics, a fisheye image acquired by a fisheye lens is distorted: objects in the fisheye image are severely deformed, and the same object is deformed differently at different positions within the viewing angle of the lens, so its features cannot be used directly by traditional target detection and recognition algorithms. In industry, the fisheye lens is mainly used as a visual sensor in video surveillance scenarios. For images captured by a fisheye lens, the distorted fisheye image is first corrected with a distortion correction algorithm into a picture that matches the visual effect of the human eye, and the corrected fisheye images are then stitched together with a stitching algorithm. Because the correction algorithm directly determines the correction effect, and current correction algorithms have inherent limitations, the corrected fisheye image is still difficult to use directly with traditional target detection and recognition algorithms. The industry has therefore begun to adopt more accurate target detection and recognition algorithms based on deep learning. The principle of such algorithms is to train a detection and recognition model with prepared training data (pictures and the corresponding annotation documents) and then use the trained model to detect and recognize the objects in a picture. The training data is required to cover the individual features of a target object as completely as possible.
However, the inventors found at least the following problem in the prior art: at present, the training data of deep-learning-based image algorithms is collected with standard lenses, for which the optical distortion of objects in a picture is almost negligible. A deep-learning-based algorithm can easily learn the features of a given object, and the trained model detects and recognizes undistorted pictures well. When distorted fisheye images collected by a fisheye lens are used directly as training data to train a detection and recognition model, the optical distortion of the fisheye lens degrades how well the deep-learning-based target detection and recognition algorithm learns the features of objects of the same type in the distorted fisheye images.
Disclosure of Invention
An object of embodiments of the present invention is to provide a method for correcting labeling frames, an electronic device and a storage medium, which can reduce the overlapping area between labeling frames, improve the learning effect of a deep-learning-based target detection algorithm, and improve recognition accuracy.
To solve the above technical problem, an embodiment of the present invention provides a method for correcting labeling frames, including the following steps: determining overlap information between the labeling frames of an image according to the labeling frame information of the image, wherein the overlap information indicates whether the labeling frames of the image overlap; and, according to the overlap information between the labeling frames of the image, if overlapped labeling frames exist in the image, adjusting the labeling frames of the image to reduce the overlapping area between the overlapped labeling frames.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the correction method mentioned in the above embodiments.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for correcting labeling frames mentioned in the above embodiments.
Compared with the prior art, the embodiments of the present invention adjust the labeling frames in an image based on the overlap information between them, reducing the overlapping area between the labeling frames. Because the overlapping area between labeling frames in the image is reduced, the deep-learning-based target detection algorithm is less likely to learn the features of the object labeled by the smaller labeling frame as features of the object labeled by the larger labeling frame, which improves the learning effect of the algorithm and improves recognition accuracy.
In addition, adjusting the labeling frames of the image to reduce the overlapping area between the overlapped labeling frames specifically includes performing the following operations for any two overlapped labeling frames: determining the labeling frame with the larger area of the two overlapped labeling frames; and adjusting the boundary of the labeling frame with the larger area according to a preset adjustment rule, so as to reduce the overlapping area between the overlapped labeling frames.
In addition, before determining the labeling frame with the larger area of the two overlapped labeling frames, the method further includes: judging whether the overlapping area ratio between the two overlapped labeling frames is greater than a first threshold; and if so, executing the step of determining the labeling frame with the larger area. This improves processing efficiency and reduces the consumption of computing resources.
In addition, before judging whether the overlapping area ratio between the two overlapped labeling frames is greater than the first threshold, the method further includes: judging whether the area ratio between the two overlapped labeling frames is smaller than a second threshold; and if so, executing the step of judging whether the overlapping area ratio is greater than the first threshold. This improves processing efficiency and reduces the consumption of computing resources.
In addition, the preset adjustment rule is: shrinking the labeling frame with the larger area along its longest side, toward its center of gravity, so that the overlapping area ratio between the shrunk labeling frame and the labeling frame with the smaller area of the two overlapped labeling frames is less than or equal to a third threshold, where the third threshold is less than or equal to the first threshold.
In addition, the overlapping area ratio between the two overlapped labeling frames is equal to the overlapping area between the two overlapped labeling frames divided by the area of the smaller of the two labeling frames, and the first threshold is greater than or equal to 20% and less than or equal to 30%.
In addition, the area ratio between the two overlapped labeling frames is equal to the area of the smaller labeling frame divided by the area of the larger labeling frame, and the second threshold is greater than or equal to 20% and less than or equal to 30%.
In addition, after adjusting the boundary of the labeling frame with the larger area according to the preset adjustment rule to reduce the overlapping area between the overlapped labeling frames, the method further includes: updating the overlap information between the labeling frame with the larger area and the other labeling frames, where the other labeling frames are the labeling frames in the image other than the labeling frame with the larger area; or,
updating the overlap information between the labeling frame with the larger area and the other labeling frames that overlap it, where the other labeling frames are the labeling frames determined, according to the overlap information before updating, to overlap the labeling frame with the larger area, excluding the labeling frame with the smaller area of the two overlapped labeling frames.
In addition, the labeling frame information of the image is the position information of the labeling frames of the image.
Drawings
One or more embodiments are illustrated by way of example, and not by way of limitation, in the accompanying drawings, in which elements with the same reference numerals represent like elements throughout; the drawings are not to scale unless otherwise specified.
FIG. 1 is a flow chart of a method for correcting a label box according to a first embodiment of the present invention;
FIG. 2 is a flow chart of a method for correcting a label box according to a second embodiment of the present invention;
FIG. 3a is a schematic position diagram of the label frame 1 and the label frame 2 before adjustment according to the second embodiment of the present invention;
FIG. 3b is a schematic position diagram of the adjusted label box 1 and label box 2 according to the second embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a correction device for a label box according to a third embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in the embodiments to provide a better understanding of the present application; however, the technical solution claimed in the present application can be implemented without these technical details and with various changes and modifications based on the following embodiments.
Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
In the description of the present disclosure, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present disclosure, "a plurality" means two or more unless otherwise specified.
A first embodiment of the present invention relates to a method for correcting labeling frames, which is applied to an electronic device such as a terminal or a server. As shown in fig. 1, the method includes the following steps:
step 101: and determining overlapping information between the labeling frames of the images according to the labeling frame information of the images.
Specifically, the overlap information indicates whether or not the annotation frames of the images overlap. The image may be an image in the training data of the target detection algorithm based on the deep learning, and the label box information may be label information in the training data of the target detection algorithm based on the deep learning.
In one embodiment, the labeling frame information of the image is the position information of the labeling frames of the image. The position information of a labeling frame may include the coordinates of its upper-left corner, its x-axis length and its y-axis length. The electronic device determines the overlap information between any two labeling frames in the image based on the position information of the two labeling frames; the overlap information indicates whether the two labeling frames overlap.
For example, the overlap information is represented in binary: a value of 1 indicates that the two labeling frames overlap, and a value of 0 indicates that they do not. Assume that the position information of labeling frame 1 indicates: the upper-left corner of labeling frame 1 is at (1,1), its x-axis length is 10 unit lengths and its y-axis length is 20 unit lengths; and the position information of labeling frame 2 indicates: the upper-left corner is at (6,11), the x-axis length is 5 unit lengths and the y-axis length is 10 unit lengths. From the position information of labeling frame 1, labeling frame 1 can be determined to be the frame formed by connecting the coordinate points (1,1), (11,1), (11,21) and (1,21) in turn. The upper-left corner of labeling frame 2 lies inside labeling frame 1, so the overlap information of labeling frame 1 and labeling frame 2 is 1, that is, the two overlap.
As another example, the overlap information is the overlapping area between two labeling frames. If the overlapping area is 0, the overlap information indicates that the two labeling frames do not overlap; if it is not 0, the overlap information indicates that they overlap. Assume the same position information as above: the upper-left corner of labeling frame 1 is at (1,1) with an x-axis length of 10 unit lengths and a y-axis length of 20 unit lengths, and the upper-left corner of labeling frame 2 is at (6,11) with an x-axis length of 5 unit lengths and a y-axis length of 10 unit lengths. From the position information of the two labeling frames, their overlapping area can be determined to be 50 unit areas, so labeling frame 1 and labeling frame 2 overlap.
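As an editor-added illustration only (not part of the original patent text), the following minimal Python sketch shows one way the overlap information described above could be computed from the position information (upper-left corner plus x-axis and y-axis lengths); the box tuple format and function names are assumptions.

```python
# Editor's sketch, not from the patent: boxes are assumed to be (x, y, w, h),
# where (x, y) is the upper-left corner and w, h are the x- and y-axis lengths.

def overlap_area(box_a, box_b):
    """Overlapping area of two axis-aligned labeling frames (0 if disjoint)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    inter_w = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    inter_h = max(0, min(ay + ah, by + bh) - max(ay, by))
    return inter_w * inter_h

def overlap_info(box_a, box_b):
    """Binary overlap information: 1 if the two labeling frames overlap, else 0."""
    return 1 if overlap_area(box_a, box_b) > 0 else 0

# Example from the text: labeling frame 1 and labeling frame 2 overlap by 50 unit areas.
print(overlap_area((1, 1, 10, 20), (6, 11, 5, 10)))  # 50
print(overlap_info((1, 1, 10, 20), (6, 11, 5, 10)))  # 1
```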
It should be noted that, as can be understood by those skilled in the art, in practical applications, a parameter that can indicate whether a labeled box is overlapped or not can be selected as overlap information between two labeled boxes as needed, and the specific meaning of the overlap information is not limited in this embodiment.
Step 102: according to the overlap information between the labeling frames of the image, if overlapped labeling frames exist in the image, adjusting the labeling frames of the image to reduce the overlapping area between the overlapped labeling frames.
Specifically, if an image includes an overlapped annotation frame, the annotation frame of the image is adjusted. Optionally, if there is no overlapped annotation frame in the image, the annotation frame in the image is not adjusted.
The inventors found that when two objects are placed close together or one partially occludes the other, the labeling frame of the larger object in the image may wholly or partly cover the labeling frame of the smaller object. A target detection algorithm trained on training data built from such images will learn the small object as part of the large object, which affects recognition accuracy. Therefore, when the electronic device determines that two labeling frames overlap, it adjusts the labeling frames of the image according to the overlap information to reduce the overlapping area between the overlapped labeling frames, thereby improving the learning effect of the deep-learning-based target detection algorithm and improving recognition accuracy.
In one embodiment, the image is a fisheye image. The fisheye image is an image photographed through a fisheye lens. Due to optical distortion of fisheye images, labeling frames between different objects often overlap, which affects the learning effect of the target detection algorithm based on deep learning. In the embodiment, the learning effect of the target detection algorithm based on deep learning can be improved and the identification accuracy can be improved by correcting the labeling frame of the fisheye image.
The above description is only for illustrative purposes and does not limit the technical aspects of the present invention.
Compared with the prior art, the method for correcting labeling frames provided in this embodiment adjusts the labeling frames in the image based on the overlap information between them, reducing the overlapping area between the labeling frames. Because the overlapping area between labeling frames in the image is reduced, the deep-learning-based target detection algorithm is less likely to learn the features of the object labeled by the smaller labeling frame as features of the object labeled by the larger labeling frame, which improves the learning effect of the algorithm and improves recognition accuracy.
A second embodiment of the present invention relates to a method for correcting a label box. This embodiment is an example of the first embodiment, and specifically describes a process of adjusting an annotation frame of an image according to the first embodiment.
Specifically, in step 102, if the electronic device determines that there are overlapping annotation frames in the image according to the overlapping information between the annotation frames of the image, the electronic device performs the following operations for any two overlapping annotation frames:
step 201: and determining the labeling box with larger area in the two overlapped labeling boxes.
Specifically, the electronic device may determine the area of each labeling frame from its position information, and determine the labeling frame with the larger area by comparing the areas of the two overlapped labeling frames.
In one embodiment, before performing step 201, the electronic device determines whether the overlapping area ratio between the two overlapped labeling boxes is greater than a first threshold, and performs step 201 if it is determined that the overlapping area ratio between the two overlapped labeling boxes is greater than the first threshold, otherwise, does not adjust the two overlapped labeling boxes.
In one embodiment, the overlapping area ratio between the two overlapped labeling frames is equal to the overlapping area between the two overlapped labeling frames divided by the area of the smaller of the two labeling frames, and the first threshold is greater than or equal to 20% and less than or equal to 30%, for example 25%.
It should be noted that, as can be understood by those skilled in the art, in practical applications, the first threshold may be selected as needed, and the embodiment is not limited.
It is worth mentioning that when the overlapping area between the labeling frames of two objects is small, their mutual influence is small and adjustment makes little difference. In this embodiment, only overlapped labeling frames whose overlapping area ratio reaches the preset first threshold are adjusted, which improves processing efficiency and reduces the consumption of computing resources.
In another embodiment, before determining whether the overlapping area ratio between the two overlapping labeling boxes is greater than a first threshold, the electronic device determines whether the area ratio between the two overlapping labeling boxes is less than a second threshold; if so, executing a step of judging whether the overlapping area ratio between the two overlapped labeling frames is larger than a first threshold value, otherwise, not performing subsequent operation.
In one embodiment, the area ratio between the two overlapped labeling frames is equal to the area of the smaller labeling frame divided by the area of the larger labeling frame, and the second threshold is greater than or equal to 20% and less than or equal to 30%, for example 30%.
A large area ratio between the labeling frames of two objects indicates that the two objects are similar in size, in which case the overlapping part has little influence on the deep-learning-based target detection algorithm. In this embodiment, only the overlapped labeling frames of objects whose sizes differ greatly are adjusted, which improves adjustment efficiency and reduces the consumption of computing resources.
It should be noted that, as can be understood by those skilled in the art, in practical application, the second threshold may be selected according to needs, and the embodiment is not limited.
For example, with the first threshold set to 25% and the second threshold set to 30%, the electronic device reads the image and the labeling frame information of the image (the position information of the labeling frames), determines the overlap relationships between the labeling frames according to the labeling frame information, and then takes out each pair of overlapped labeling frames in turn and decides whether to adjust it. If adjustment is required, the two overlapped labeling frames are adjusted by executing step 201 and step 202; otherwise they are left unchanged. Suppose that in one such decision the pair taken out consists of labeling frame A and labeling frame B, with areas SA and SB, and define Scale = min(SA, SB)/max(SA, SB) and ScaleIou = S(A∩B)/min(SA, SB), where Scale is the area ratio between the two overlapped labeling frames, ScaleIou is the overlapping area ratio between them, and S(A∩B) is their overlapping area. If Scale is smaller than 30% and ScaleIou is greater than 25%, the pair is adjusted; otherwise it is not.
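Purely as an editor-added sketch of the screening flow just described (the published document presents it only as figures, omitted here), the decision could look as follows in Python; the threshold constants and the reuse of the overlap_area() helper sketched under step 101 are assumptions.

```python
# Editor's sketch of the screening decision described above; reuses the
# overlap_area() helper sketched under step 101.

FIRST_THRESHOLD = 0.25   # overlapping area ratio threshold (ScaleIou)
SECOND_THRESHOLD = 0.30  # area ratio threshold (Scale)

def needs_adjustment(box_a, box_b):
    """Return True if this pair of overlapped labeling frames should be adjusted."""
    sa = box_a[2] * box_a[3]
    sb = box_b[2] * box_b[3]
    scale = min(sa, sb) / max(sa, sb)                      # Scale = min(SA, SB) / max(SA, SB)
    if scale >= SECOND_THRESHOLD:
        return False                                       # similarly sized frames: no adjustment
    scale_iou = overlap_area(box_a, box_b) / min(sa, sb)   # ScaleIou = S(A ∩ B) / min(SA, SB)
    return scale_iou > FIRST_THRESHOLD
```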
it should be noted that, in practical applications, some screening conditions may be set before the label boxes are adjusted according to needs, and some overlapped label boxes are selectively adjusted, and the specific screening conditions are not limited in this embodiment.
Step 202: adjusting the boundary of the labeling frame with the larger area according to a preset adjustment rule, so as to reduce the overlapping area between the overlapped labeling frames.
In one embodiment, the preset adjustment rule is: the boundary of the labeling frame with the larger area that overlaps the labeling frame with the smaller area is moved along the longest side of the larger labeling frame, toward its center of gravity, until the overlapping area ratio between the adjusted larger labeling frame and the smaller labeling frame of the two overlapped labeling frames is less than or equal to a third threshold, where the third threshold is less than or equal to the first threshold.
It is worth mentioning that adjusting the labeling frame along the longest side of the larger labeling frame preserves more features of the object it labels, which makes it easier for the deep-learning-based target detection algorithm to learn the features of that object and improves detection accuracy.
It should be noted that, as can be understood by those skilled in the art, the adjustment rule may be set as needed in practical applications; for example, it may instead shrink the larger labeling frame along its shortest side, toward its center of gravity, to reduce the overlapping area ratio between the two overlapped labeling frames. The specific adjustment method of the labeling frame is not limited in this embodiment.
In one example, the third threshold is 0%, that is, the larger labeling frame is shrunk along its longest side until it reaches the boundary of the smaller labeling frame. Assume that labeling frame 1 and labeling frame 2 before adjustment are as shown in fig. 3a, where labeling frame 1 has the larger area and labeling frame 2 the smaller area, the area ratio between the two overlapped labeling frames is smaller than the second threshold, and the overlapping area ratio is larger than the first threshold. Labeling frame 1 therefore needs to be corrected: the short edge of labeling frame 1 that overlaps labeling frame 2 is moved along the long side of labeling frame 1 toward its center of gravity (as indicated by the arrow in fig. 3a), yielding the adjusted labeling frame 1 and labeling frame 2 shown in fig. 3b.
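The following editor-added sketch (an assumption, not the patent's reference implementation) illustrates this third-threshold = 0 case of step 202: the larger labeling frame is shrunk along its longest side, toward its center of gravity, until the overlap with the smaller frame is removed.

```python
# Editor's sketch of step 202 for the third-threshold = 0 case. Boxes are
# (x, y, w, h) as in the earlier sketches; the end of the longest side that
# lies nearer the smaller frame is pulled inward, toward the larger frame's
# center, until it reaches the smaller frame's boundary.

def shrink_larger_frame(large, small):
    lx, ly, lw, lh = large
    sx, sy, sw, sh = small
    if lw >= lh:
        # Longest side runs along x: move the overlapping vertical boundary inward.
        if sx + sw / 2 > lx + lw / 2:          # overlap lies toward the right end
            lw = max(0, sx - lx)               # pull the right edge back to the smaller frame
        else:                                  # overlap lies toward the left end
            new_left = sx + sw
            lw = max(0, lx + lw - new_left)
            lx = new_left
    else:
        # Longest side runs along y: move the overlapping horizontal boundary inward.
        if sy + sh / 2 > ly + lh / 2:          # overlap lies toward the far end in y
            lh = max(0, sy - ly)
        else:                                  # overlap lies toward the near end in y
            new_top = sy + sh
            lh = max(0, ly + lh - new_top)
            ly = new_top
    return (lx, ly, lw, lh)
```

The end to move is chosen relative to the larger frame's center, matching the "toward the center of gravity" rule; with a nonzero third threshold, an implementation could instead stop the shrinking as soon as the overlapping area ratio drops to that threshold rather than removing the overlap entirely.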
It should be noted that, as can be understood by those skilled in the art, in practical application, any value of the third threshold may be selected to be greater than or equal to 0 and less than or equal to the first threshold, and the specific value of the third threshold is not limited in this embodiment.
In one embodiment, after adjusting the labeling frame with the larger area of any two overlapped labeling frames, the electronic device selectively updates the overlap information of that labeling frame: either the overlap information between the adjusted labeling frame and all other labeling frames in the image (that is, all labeling frames other than the adjusted one) is updated; or only the overlap information between the adjusted labeling frame and the other labeling frames that overlap it is updated, where these other labeling frames are those determined, according to the overlap information before updating, to overlap the adjusted labeling frame, excluding the smaller labeling frame of the two overlapped labeling frames.
It is worth mentioning that updating the overlap information between the adjusted labeling frame and the other labeling frames avoids wasting computing resources: after the larger labeling frame is adjusted, its overlap with other labeling frames may have changed, and continuing to process according to the un-updated overlap information would be wasteful.
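As a further editor-added illustration (the data structure and helper names are assumed, not from the patent), the overlap records of the adjusted frame could be refreshed like this:

```python
# Editor's sketch: refresh the binary overlap information of the adjusted frame
# against the other labeling frames, so later screening uses the current geometry.

def refresh_overlap_info(adjusted_idx, boxes, overlap_matrix):
    for j in range(len(boxes)):
        if j == adjusted_idx:
            continue
        info = overlap_info(boxes[adjusted_idx], boxes[j])  # helper sketched under step 101
        overlap_matrix[adjusted_idx][j] = info
        overlap_matrix[j][adjusted_idx] = info
```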
The above description is only for illustrative purposes and does not limit the technical aspects of the present invention.
Compared with the prior art, the method for correcting labeling frames provided in this embodiment adjusts the labeling frames in the image based on the overlap information between them, reducing the overlapping area between the labeling frames. Because the overlapping area between labeling frames in the image is reduced, the deep-learning-based target detection algorithm is less likely to learn the features of the object labeled by the smaller labeling frame as features of the object labeled by the larger labeling frame, which improves the learning effect of the algorithm and improves recognition accuracy.
The steps of the above methods are divided for clarity of description. In implementation, they may be combined into one step, or a step may be split into multiple steps; as long as the same logical relationship is included, such variations are within the protection scope of this patent. Adding insignificant modifications to the algorithm or process, or introducing insignificant design changes that do not alter the core design of the algorithm or process, also falls within the protection scope of this patent.
A third embodiment of the present invention relates to a device for correcting a mark frame, as shown in fig. 4, including: a determination module 401 and an adjustment module 402. The determining module 401 is configured to determine overlap information between the annotation frames of the images according to the annotation frame information of the images, where the overlap information indicates whether the annotation frames of the images overlap with each other. The adjusting module 402 is configured to adjust the annotation frames of the image according to the overlapping information between the annotation frames of the image, so as to reduce the overlapping area between the overlapped annotation frames.
It should be understood that this embodiment is a system example corresponding to the first embodiment, and may be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the first embodiment.
It should be noted that each module in this embodiment is a logical module. In practical applications, a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, elements not closely related to solving the technical problem proposed by the present invention are not introduced in this embodiment, but this does not mean that no other elements exist in this embodiment.
A fourth embodiment of the present invention relates to an electronic apparatus, as shown in fig. 5, including: at least one processor 501; and a memory 502 communicatively coupled to the at least one processor 501; the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501, so that the at least one processor 501 can execute the method for modifying the label box as described in the above embodiments.
The electronic device includes one or more processors 501 and a memory 502, with one processor 501 taken as an example in fig. 5. The processor 501 and the memory 502 may be connected by a bus or by other means; fig. 5 takes the bus connection as an example. The memory 502, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The processor 501 executes the various functional applications and data processing of the device, that is, implements the above-described method for correcting labeling frames, by running the non-volatile software programs, instructions, and modules stored in the memory 502.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, memory 502 may optionally include memory located remotely from processor 501, which may be connected to an external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 502 and, when executed by the one or more processors 501, perform the method for correcting labeling frames in any of the method embodiments described above.
The above product can execute the method provided by the embodiments of the present application and has the corresponding functional modules and beneficial effects. For technical details not described in detail in this embodiment, reference may be made to the method provided by the embodiments of the present application.
A fifth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as can be understood by those skilled in the art, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. A method for correcting a label box is characterized by comprising the following steps:
determining overlapping information between the labeling frames of the images according to the labeling frame information of the images, wherein the overlapping information indicates whether the labeling frames of the images are overlapped;
and according to the overlapping information between the labeling frames of the image, if the overlapping labeling frames exist in the image, adjusting the labeling frames of the image to reduce the overlapping area between the overlapping labeling frames.
2. The method for modifying an annotation frame according to claim 1, wherein the adjusting the annotation frame of the image to reduce an overlapping area between the overlapped annotation frames specifically comprises:
aiming at any two overlapped labeling frames, the following operations are respectively carried out:
determining a labeling frame with a larger area in the two overlapped labeling frames;
and adjusting the boundary of the marking frame with the larger area according to a preset adjustment rule so as to reduce the overlapping area between the overlapped marking frames.
3. The method for modifying a label box according to claim 2, wherein before the determining the label box with a larger area in the two overlapped label boxes, the method further comprises:
judging whether the overlapping area ratio between the two overlapped labeling frames is larger than a first threshold value or not;
if yes, executing the step of determining the labeling box with larger area in the two overlapped labeling boxes.
4. The method for modifying a label box according to claim 3, wherein before the determining whether the ratio of the overlapping area between the two overlapped label boxes is greater than the first threshold, the method further comprises:
judging whether the area ratio between the two overlapped labeling frames is smaller than a second threshold value or not;
if yes, the step of judging whether the overlapping area proportion between the two overlapped labeling frames is larger than a first threshold value is executed.
5. The method for modifying a label box according to any one of claims 2 to 4, wherein the preset adjustment rule is: the length of the longest edge of the labeling frame with the larger area is reduced in the gravity center direction of the labeling frame with the larger area, so that the overlapping area proportion of the reduced labeling frame and the labeling frame with the smaller area in the two overlapped labeling frames is less than or equal to a third threshold, wherein the third threshold is less than or equal to the first threshold.
6. The method according to claim 3, wherein a ratio of an overlapping area between the two overlapping labeled boxes is equal to an overlapping area between the two overlapping labeled boxes/an area of the labeled box with a larger area, and the first threshold is greater than or equal to 20% and less than or equal to 30%.
7. The method according to claim 4, wherein an area ratio between the two overlapped labeling boxes is equal to an area of a smaller labeling box/an area of a larger labeling box, and the second threshold is greater than or equal to 20% and less than or equal to 30%.
8. The method for modifying a markup frame according to any one of claims 2 to 7, wherein after the adjusting the boundary of the markup frame with a larger area according to a preset adjustment rule to reduce the overlapping area between the overlapped markup frames, the method further comprises:
updating the overlapping information of the labeling frame with the larger area and other labeling frames; the other labeling frames are labeling frames in the image except the labeling frame with the larger area; or,
updating the overlapping information of other labeling frames which are overlapped with the labeling frame with the larger area and the labeling frame with the larger area; wherein the other labeling boxes are: and determining the labeling frames which are overlapped with the labeling frame with the larger area according to the overlapping information before updating, wherein the labeling frames except the labeling frame with the smaller area in the two overlapped labeling frames.
9. An electronic device, comprising: at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of modifying a label box of any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for modifying a markup box of any one of claims 1 to 8.
CN201911053228.6A 2019-10-31 2019-10-31 Correction method of annotation frame, electronic equipment and storage medium Active CN110796201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911053228.6A CN110796201B (en) 2019-10-31 2019-10-31 Correction method of annotation frame, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911053228.6A CN110796201B (en) 2019-10-31 2019-10-31 Correction method of annotation frame, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110796201A true CN110796201A (en) 2020-02-14
CN110796201B CN110796201B (en) 2023-07-11

Family

ID=69440811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911053228.6A Active CN110796201B (en) 2019-10-31 2019-10-31 Correction method of annotation frame, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110796201B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444945A (en) * 2020-03-20 2020-07-24 北京每日优鲜电子商务有限公司 Sample information filtering method and device, computer equipment and storage medium
CN112200064A (en) * 2020-09-30 2021-01-08 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and storage medium
CN112418263A (en) * 2020-10-10 2021-02-26 上海鹰瞳医疗科技有限公司 Medical image focus segmentation and labeling method and system
CN112508020A (en) * 2020-12-22 2021-03-16 深圳市商汤科技有限公司 Labeling method and device, electronic equipment and storage medium
CN113808186A (en) * 2021-03-04 2021-12-17 京东鲲鹏(江苏)科技有限公司 Training data generation method and device and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100894A (en) * 2014-08-26 2015-11-25 Tcl集团股份有限公司 Automatic face annotation method and system
CN105447839A (en) * 2015-11-20 2016-03-30 上海华力创通半导体有限公司 Rectangular frame merging method and system
CN109685870A (en) * 2018-11-21 2019-04-26 北京慧流科技有限公司 Information labeling method and device, tagging equipment and storage medium
CN109710148A (en) * 2018-12-19 2019-05-03 广州文远知行科技有限公司 Selection method, device, computer equipment and the storage medium of image labeling frame
US20190220506A1 (en) * 2018-01-15 2019-07-18 Trimble Solutions Corporation Fine-tuning automatically label locations in engineering drawings
CN110135407A (en) * 2018-02-09 2019-08-16 北京世纪好未来教育科技有限公司 Sample mask method and computer storage medium
CN110135417A (en) * 2018-02-09 2019-08-16 北京世纪好未来教育科技有限公司 Sample mask method and computer storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105100894A (en) * 2014-08-26 2015-11-25 Tcl集团股份有限公司 Automatic face annotation method and system
CN105447839A (en) * 2015-11-20 2016-03-30 上海华力创通半导体有限公司 Rectangular frame merging method and system
US20190220506A1 (en) * 2018-01-15 2019-07-18 Trimble Solutions Corporation Fine-tuning automatically label locations in engineering drawings
CN110135407A (en) * 2018-02-09 2019-08-16 北京世纪好未来教育科技有限公司 Sample mask method and computer storage medium
CN110135417A (en) * 2018-02-09 2019-08-16 北京世纪好未来教育科技有限公司 Sample mask method and computer storage medium
CN109685870A (en) * 2018-11-21 2019-04-26 北京慧流科技有限公司 Information labeling method and device, tagging equipment and storage medium
CN109710148A (en) * 2018-12-19 2019-05-03 广州文远知行科技有限公司 Selection method, device, computer equipment and the storage medium of image labeling frame

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444945A (en) * 2020-03-20 2020-07-24 北京每日优鲜电子商务有限公司 Sample information filtering method and device, computer equipment and storage medium
CN112200064A (en) * 2020-09-30 2021-01-08 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and storage medium
CN112200064B (en) * 2020-09-30 2021-07-30 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and storage medium
CN112418263A (en) * 2020-10-10 2021-02-26 上海鹰瞳医疗科技有限公司 Medical image focus segmentation and labeling method and system
CN112508020A (en) * 2020-12-22 2021-03-16 深圳市商汤科技有限公司 Labeling method and device, electronic equipment and storage medium
WO2022134390A1 (en) * 2020-12-22 2022-06-30 深圳市商汤科技有限公司 Labeling method and apparatus, electronic device, and storage medium
CN113808186A (en) * 2021-03-04 2021-12-17 京东鲲鹏(江苏)科技有限公司 Training data generation method and device and electronic equipment
CN113808186B (en) * 2021-03-04 2024-01-16 京东鲲鹏(江苏)科技有限公司 Training data generation method and device and electronic equipment

Also Published As

Publication number Publication date
CN110796201B (en) 2023-07-11

Similar Documents

Publication Publication Date Title
CN110796201A (en) Method for correcting label frame, electronic equipment and storage medium
CN108009543B (en) License plate recognition method and device
US11151723B2 (en) Image segmentation method, apparatus, and fully convolutional network system
US20200074646A1 (en) Method for obtaining image tracking points and device and storage medium thereof
CN108921161B (en) Model training method and device, electronic equipment and computer readable storage medium
CN109858309B (en) Method and device for identifying road route
EP3965054A1 (en) Image distortion correction method and apparatus
EP3319038A1 (en) Panoramic image generation method and apparatus for user terminal
US10733705B2 (en) Information processing device, learning processing method, learning device, and object recognition device
CN111738036B (en) Image processing method, device, equipment and storage medium
CN111985465A (en) Text recognition method, device, equipment and storage medium
CN113112542A (en) Visual positioning method and device, electronic equipment and storage medium
CN113947768A (en) Monocular 3D target detection-based data enhancement method and device
JPWO2019167238A1 (en) Image processing apparatus and image processing method
CN114897684A (en) Vehicle image splicing method and device, computer equipment and storage medium
CN112950528A (en) Certificate posture determining method, model training method, device, server and medium
US8929590B2 (en) Object tracking device capable of tracking object accurately, object tracking method, and storage medium
CN113592739A (en) Method and device for correcting lens shadow and storage medium
US11948287B2 (en) Image processing method and system
CN113096016A (en) Low-altitude aerial image splicing method and system
CN114821513B (en) Image processing method and device based on multilayer network and electronic equipment
CN109816709B (en) Monocular camera-based depth estimation method, device and equipment
CN110874814A (en) Image processing method, image processing device and terminal equipment
CN115937003A (en) Image processing method, image processing device, terminal equipment and readable storage medium
CN110751163A (en) Target positioning method and device, computer readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210207

Address after: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: CLOUDMINDS (SHENZHEN) ROBOTICS SYSTEMS Co.,Ltd.

CB02 Change of applicant information

Address after: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant after: Dayu robot Co.,Ltd.

Address before: 200245 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant before: Dalu Robot Co.,Ltd.

GR01 Patent grant