CN115063323B - Image processing method and device based on adaptive network - Google Patents


Info

Publication number
CN115063323B
CN115063323B (granted publication of application CN202210977909.7A)
Authority
CN
China
Prior art keywords
target area
image
corrected
distance
target
Prior art date
Legal status (assumption, not a legal conclusion)
Active
Application number
CN202210977909.7A
Other languages
Chinese (zh)
Other versions
CN115063323A (en)
Inventor
马潇 (Ma Xiao)
王昕煜 (Wang Xinyu)
Current Assignee (the listed assignee may be inaccurate)
Weihai Kaisi Information Technology Co ltd
Original Assignee
Weihai Kaisi Information Technology Co ltd
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Weihai Kaisi Information Technology Co ltd
Priority: CN202210977909.7A
Publication of CN115063323A
Application granted
Publication of CN115063323B
Current legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/80 Geometric correction
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20004 Adaptive image processing
    • G06T 2207/20081 Training; Learning

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an image processing method and device based on an adaptive network, in the technical field of image processing. The image processing method comprises the following steps: acquiring an image to be corrected; marking a target area in the image to be corrected; obtaining a processed image to be corrected according to the image to be corrected and a pre-trained detection model, wherein the detection model is used for marking a non-target area in the image, so that both the target area and the non-target area are marked in the processed image to be corrected; judging whether the target area and the non-target area have an overlapping portion; if they have an overlapping portion, correcting the target area based on the overlapping portion and the non-target area to obtain a corrected image; and if they do not have an overlapping portion, deleting the marking information of the non-target area from the processed image to be corrected to obtain the corrected image. By correcting the target area through the non-target area, the method improves the accuracy of the image detection result.

Description

Image processing method and device based on adaptive network
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus based on an adaptive network.
Background
In the prior art, image processing technology is used to detect objects. Object detection usually involves detecting a target area, which can be understood as the area where the object to be detected is located, or as the object to be detected itself.
In current image processing practice, a detection model is trained in advance, the image to be detected is input into the trained model, and the model judges whether the image contains a target area; if it does, the target area is usually marked directly.
This detection method depends entirely on the accuracy of the detection model: when the model is not accurate enough, the final labeling result deviates from the truth.
Disclosure of Invention
The embodiment of the invention aims to provide an image processing method and device based on an adaptive network, which are used for improving the accuracy of a detection result for realizing object detection based on an image.
In a first aspect, an embodiment of the present invention provides an image processing method based on an adaptive network, including: acquiring an image to be corrected; marking a target area in the image to be corrected; obtaining a processed image to be corrected according to the image to be corrected and a pre-trained detection model, wherein the detection model is used for marking a non-target area in an image, so that both the target area and the non-target area are marked in the processed image to be corrected; judging whether the target area and the non-target area have an overlapped portion; if they have an overlapped portion, correcting the target area based on the overlapped portion and the non-target area to obtain a corrected image; and if they do not have an overlapped portion, deleting the marking information of the non-target area from the processed image to be corrected to obtain a corrected image.
In the embodiment of the invention, the image marked with the target area is used as the image to be corrected, the pre-trained detection model is used for marking the non-target area in the image to be corrected, and then the target area in the image to be corrected is corrected by using the overlapped part of the target area and the non-target area. Compared with the prior art, the accuracy of the image detection result (namely the detected target area) is improved by correcting the target area based on the non-target area determined by the adaptive network.
As a possible implementation manner, the image processing method further includes: determining the area of the target region and the area of the non-target region; if the area of the target area is the same as that of the non-target area, identifying the image to be corrected as an uncorrectable image; and if the area of the target area is different from the area of the non-target area, correcting the target area based on the overlapped part and the non-target area to obtain a corrected image.
In the embodiment of the invention, if the areas of the target area and the non-target area are the same, the occupation proportions of the overlapped part in the two areas are the same, and at the moment, the target area cannot be effectively calibrated, and corresponding identification can be carried out; if not, the target area may be corrected using the overlapping portion and the non-target area. In this way, invalid corrections can be avoided, ensuring that the target area is effectively corrected.
As a possible implementation manner, the correcting the target region based on the overlapped portion and the non-target region to obtain a corrected image includes: determining a first occupation ratio of the overlapped portion in the target region and a second occupation ratio of the overlapped portion in the non-target region; comparing the first occupation ratio with the second occupation ratio; if the first occupation ratio is larger than the second occupation ratio, deleting the labeling information of the non-target area from the processed image to be corrected to obtain a corrected image; and if the first occupation ratio is smaller than the second occupation ratio, deleting the labeling information of the non-target area from the processed image to be corrected, and deleting the overlapped portion from the target area to obtain a corrected image.
In the embodiment of the invention, the occupation ratios of the overlapped parts in the two areas are respectively calculated, if the occupation ratio in the target area is larger than that in the non-target area, the target area does not need to be corrected, and the overlapped parts can be deleted from the non-target area; deleting the overlapped part from the target area if the occupation ratio in the target area is smaller than that in the non-target area; thereby realizing effective correction of the target area.
As a possible implementation manner, the correcting the target region based on the overlapped portion and the non-target region to obtain a corrected image includes: respectively determining the center position of the target region, the center position of the overlapped portion, and the center position of the non-target region; calculating a first distance between the center position of the target region and the center position of the overlapped portion; calculating a second distance between the center position of the target region and the center position of the non-target region; calculating a third distance between the center position of the overlapped portion and the center position of the non-target region; determining, according to the first distance, the second distance, and the third distance, the offset direction corresponding to the overlapped portion, the offset direction being either toward the target region or toward the non-target region; and correcting the target region according to the offset direction to obtain a corrected image.
In the embodiment of the present invention, whether the overlapped part is shifted to the target area or the non-target area is determined by calculating the first distance, the second distance, and the third distance, and further, the target area can be effectively corrected according to the shift direction.
As a possible implementation manner, the determining, according to the first distance, the second distance, and the third distance, an offset direction corresponding to the overlapped portion includes: determining an absolute value of a distance difference between the first distance and the second distance, and determining an absolute value of a distance difference between the second distance and the third distance; if the absolute value of the distance difference between the first distance and the second distance is greater than the absolute value of the distance difference between the second distance and the third distance, determining that the offset direction is offset towards the target area; and if the absolute value of the distance difference between the first distance and the second distance is smaller than the absolute value of the distance difference between the second distance and the third distance, determining that the offset direction is towards the non-target area.
In the embodiment of the invention, the effective and accurate determination of the offset direction can be realized by determining the absolute value of the distance difference between the first distance and the second distance, determining the absolute value of the distance difference between the second distance and the third distance, and then comparing the two absolute values of the distance difference.
As a possible implementation manner, the correcting the target area according to the offset direction to obtain a corrected image includes: if the deviation direction is towards the target area, deleting the labeling information of the non-target area from the processed image to be corrected, and deleting the overlapped part from the target area to obtain a corrected image; and if the deviation direction is the deviation towards the non-target area, deleting the marking information of the non-target area from the processed image to be corrected.
In the embodiment of the present invention, if the overlapped part is shifted toward the target region, the possibility that the overlapped part belongs to the non-target region is higher; otherwise, the probability of belonging to the target area is higher; further, in this way, effective correction of the target region is achieved.
As a possible implementation manner, the correcting the target region based on the overlapped portion and the non-target region to obtain a corrected image includes: determining a first matching degree between the overlapped portion and the target region, the first matching degree comprising a chroma matching degree and a luminance matching degree; determining a second matching degree between the overlapped portion and the non-target region, the second matching degree likewise comprising a chroma matching degree and a luminance matching degree; if the first matching degree is greater than the second matching degree, deleting the labeling information of the non-target area from the processed image to be corrected to obtain a corrected image; and if the first matching degree is smaller than the second matching degree, deleting the labeling information of the non-target area from the processed image to be corrected, and deleting the overlapped portion from the target area to obtain a corrected image.
In the embodiment of the invention, the matching degrees between the overlapped part and the target area and the non-target area are respectively determined, and the area to which the overlapped part belongs is determined according to the matching degrees, so that the effective and accurate correction of the target area is realized.
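The matching-degree comparison can be sketched in Python. The patent does not specify how the chroma and luminance components are computed or combined, so the formula below (1 / (1 + mean absolute difference), averaged over the two components) and the set-of-pixels region representation are purely illustrative assumptions:

```python
def mean_channel(pixels, values):
    """Average of a per-pixel value (e.g. chroma or luma) over a region."""
    return sum(values[p] for p in pixels) / len(pixels)

def matching_degree(region_a, region_b, chroma, luma):
    """Illustrative matching degree combining a chroma and a luminance term.

    `chroma` and `luma` map each (row, col) pixel to its channel value;
    the closer the regional means, the higher the degree (max 1.0).
    """
    dc = abs(mean_channel(region_a, chroma) - mean_channel(region_b, chroma))
    dl = abs(mean_channel(region_a, luma) - mean_channel(region_b, luma))
    return 0.5 * (1 / (1 + dc) + 1 / (1 + dl))

def correct_by_matching(target_px, non_target_px, chroma, luma):
    """Keep the overlap if it matches the target region better; cut it otherwise."""
    overlap = target_px & non_target_px
    first = matching_degree(overlap, target_px, chroma, luma)       # vs. target
    second = matching_degree(overlap, non_target_px, chroma, luma)  # vs. non-target
    return target_px if first > second else target_px - overlap
```

In either branch the non-target labeling information is discarded from the output; only the (possibly trimmed) target region survives.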
As a possible implementation manner, the first matching degree is a matching degree between the overlapping portion and the entire target region or a matching degree between the overlapping portion and a region where a first designated pixel point in the target region is located; the second matching degree is the matching degree between the overlapped part and the whole non-target area or the matching degree between the overlapped part and an area where a second designated pixel point in the non-target area is located; and matching degree between the first designated pixel point and the second designated pixel point meets a preset matching degree condition.
In the embodiment of the invention, the determined matching degree can be a global matching degree or a local matching degree, and can be selected by combining with an actual application scene, so that the flexibility of image processing is improved.
As a possible implementation manner, the image processing method further includes: correspondingly storing the corrected image and the processed image to be corrected into a preset correction data set; judging whether the data volume in the preset correction data set is larger than a preset number or not; if the data amount in the preset correction data set is larger than the preset amount, taking the correction data set as a training data set, and training an initial correction model to obtain a trained correction model; the trained correction model is used for correcting the image marked with the target area and the non-target area.
In the embodiment of the invention, if the target area is corrected, the image before correction and the image after correction are stored in the correction data set, when the data amount in the correction data set is greater than the preset amount, the correction data set is used as a training data set to train the correction model, and the trained correction model can directly output the correction result based on the image marked with the target area and the non-target area; thereby improving the applicability of the treatment method.
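The dataset-accumulation logic described above can be sketched as follows; the class name, the `train_fn` hook standing in for the unspecified training routine of the initial correction model, and the in-memory pair list are illustrative assumptions:

```python
class CorrectionDataset:
    """Stores (processed image, corrected image) pairs and triggers training
    of the correction model once the preset amount is exceeded."""

    def __init__(self, preset_amount, train_fn):
        self.preset_amount = preset_amount  # threshold on the data amount
        self.train_fn = train_fn            # placeholder training routine
        self.pairs = []
        self.trained_model = None

    def add(self, processed_image, corrected_image):
        """Store one before/after pair; train when the set grows large enough."""
        self.pairs.append((processed_image, corrected_image))
        if len(self.pairs) > self.preset_amount:
            # the correction data set doubles as the training data set
            self.trained_model = self.train_fn(self.pairs)
        return self.trained_model
```

Once `trained_model` is set, it can directly output a correction result for an image labeled with both a target and a non-target area, as the text describes.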
In a second aspect, an embodiment of the present invention provides an adaptive network-based image processing apparatus, including: functional modules for implementing the adaptive network-based image processing method described in the first aspect and any one of its possible implementation manners.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a processor and a memory communicatively coupled to the processor; wherein the memory stores instructions executable by the processor to enable the processor to perform the method for adaptive network-based image processing according to the first aspect and any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a computer, the computer program performs the first aspect and the adaptive network-based image processing method described in any one of the possible implementation manners of the first aspect.
The image processing method and device based on the adaptive network, the electronic device and the computer readable storage medium provided by the embodiment of the invention take the image marked with the target area as the image to be corrected, mark the non-target area in the image to be corrected by using the pre-trained detection model, and then correct the target area in the image to be corrected by using the overlapped part of the target area and the non-target area. Compared with the prior art, the target area is corrected based on the non-target area determined by the adaptive network, so that the accuracy of the image detection result (namely the detected target area) is improved.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered limiting of the scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a diagram illustrating a first example of a processed image to be corrected according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a second example of a processed image to be corrected according to an embodiment of the present invention;
FIG. 3 is a flowchart of an adaptive network-based image processing method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an adaptive network-based image processing apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Reference numerals: 400 - adaptive network based image processing apparatus; 410 - acquisition module; 420 - processing module; 500 - electronic device; 510 - processor; 520 - memory.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
The technical scheme provided by the embodiment of the invention can be applied to various application scenes for object detection based on the image processing technology and is used for correcting the object detection result.
In different application scenarios, the detected objects may not be the same. For example: assuming that the image is a road image, the detected object may be a vehicle, a pedestrian, a sign, or the like in the road image. Correspondingly, in the application scenario, if a vehicle needs to be detected, a road image may be input into the detection model, and the detection model may output a result of whether the vehicle is included in the image, for example: labeled with an image of the vehicle. For another example: assuming that the image is a device image, the detected object may be a defect of the device or a specific part, or the like. Correspondingly, in the application scenario, if a defect needs to be detected, the device image may be input into the detection model, and the detection model may output a result of whether the defect is included in the image, for example: the image of the defective area is marked.
In the object detection scenario described above, it can be understood that, assuming that an image is divided into a target region (e.g., a defect region) and a non-target region, if the target region and the non-target region are correctly labeled, there should be no overlapping portion between the target region and the non-target region. If there is an overlap between the target area and the non-target area, it indicates that the detection of the target area and/or the non-target area may be erroneous, such as: since the part that should belong to the target region is divided into the non-target region or the part that should belong to the non-target region is divided into the target region, the labeling result of the target region can be corrected based on the labeling result of the non-target region.
For ease of understanding, referring to fig. 1 and 2, for the same image, if the labels of the target area and the non-target area are both correct, it should be as shown in fig. 1, and if there is a problem with the label of the target area and/or the non-target area, it may be as shown in fig. 2.
The hardware running environment corresponding to the technical scheme provided by the embodiment of the invention can be an image processing device, such as: servers, computers, etc., without limitation.
In addition, the detection models involved in the following embodiments are all adaptive network models.
Based on the application scenario, referring to fig. 3, a flowchart of an image processing method based on an adaptive network according to an embodiment of the present invention is shown, where the image processing method includes:
step 310: and acquiring an image to be corrected. The image to be corrected is marked with a target area.
Step 320: and obtaining the processed image to be corrected according to the image to be corrected and the pre-trained detection model. The detection model is used for marking a non-target area in the image, and the processed image to be corrected is marked with the target area and the non-target area.
Step 330: it is determined whether the target region and the non-target region have an overlapping portion.
Step 340: and if the target area and the non-target area have an overlapped part, correcting the target area based on the overlapped part and the non-target area to obtain a corrected image.
Step 350: and if the target area and the non-target area do not have the overlapped part, deleting the marking information of the non-target area from the processed image to be corrected to obtain the corrected image.
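The flow of steps 330 to 350 can be sketched in Python. The set-of-pixel-coordinates region representation is an illustrative assumption, and the simplest possible correction (cutting the overlap out of the target region) stands in for the embodiment-specific step 340:

```python
def image_correction_flow(target_px, non_target_px):
    """Steps 330-350 at a high level.

    Regions are sets of (row, col) pixels: target_px is the labeled
    target area (step 310) and non_target_px comes from the pre-trained
    inverse detection model (step 320).  Returns the possibly corrected
    target region; the non-target labels are dropped in every branch.
    """
    overlap = target_px & non_target_px   # step 330: do the regions overlap?
    if overlap:
        # step 340 (placeholder): remove the overlap from the target region
        target_px = target_px - overlap
    # step 350 / tail of step 340: only the target labeling survives
    return target_px
```

In the detailed embodiments below, step 340 is refined so that the overlap is removed only when it more likely belongs to the non-target region.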
In the embodiment of the invention, the image marked with the target area is used as the image to be corrected, the pre-trained detection model is used for marking the non-target area in the image to be corrected, and then the target area in the image to be corrected is corrected by using the overlapped part of the target area and the non-target area. Compared with the prior art, the accuracy of the image detection result is improved by correcting the target area through the non-target area determined based on the self-adaptive network.
A detailed embodiment of the image processing method will be described below.
In step 310, the target region labeled in the image to be corrected may be a manually labeled target region; in that case, the finally corrected image can serve as training data for a detection model that detects target regions. Alternatively, the target region may have been labeled by such a detection model; for implementations of that model, refer to object detection techniques in the art.
The number of the images to be corrected can be one or more, and if the number of the images to be corrected is more than one, each image to be corrected is corrected according to the same correction mode.
In step 320, a processed image to be corrected is obtained according to the image to be corrected and a pre-trained detection model. The pre-trained detection model is used for labeling non-target areas in the image.
In some embodiments, the training process of the detection model includes: acquiring a training data set in which every image contains a target region, but only the non-target region is labeled and the target region is left unmarked; and training the initial detection model on this data set to obtain the trained detection model. Because the trained model has learned the features that do not belong to the target region, it can label non-target regions; in this sense it is an inverse (reverse) model.
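The construction of that reverse training set can be illustrated minimally; the tuple layout of the samples is an assumption, not part of the patent:

```python
def build_reverse_training_set(labeled_images):
    """Training data for the 'reverse' detection model (illustrative).

    Each element of labeled_images is (image, target_labels, non_target_labels);
    the target labels are discarded so the model only ever sees which parts
    do NOT belong to the target, and thus learns non-target features.
    """
    return [(image, non_target_labels)
            for image, _target_labels, non_target_labels in labeled_images]
```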
Therefore, in step 320, the image labeled with the target region is input into the trained detection model, and the detection model may further label the non-target region on the basis of the image labeled with the target region. However, since there is a possibility that there is a problem in labeling either a target region or a non-target region, the target region can be corrected by using the relationship between the two regions.
In step 330, it is determined whether the target region and the non-target region have an overlapping portion.
It is understood that the labels of the target area and the non-target area correspond to label information, such as: a label box, a label line, etc.
Therefore, in some embodiments, the coordinates of each pixel point in the target region and the non-target region may be determined through the labeling information, and then whether the pixel point coordinates of the two regions include the same pixel point coordinate or not is compared, and if so, a superposition portion exists; otherwise, there is no overlap.
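This coordinate-comparison check is a set-intersection test; a minimal sketch, assuming each region is given as a set of (row, col) coordinates recovered from its labeling information:

```python
def has_overlap(target_px, non_target_px):
    """Step 330 check: do the two labeled regions share any pixel coordinate?"""
    # isdisjoint is True when the coordinate sets share nothing,
    # so the regions overlap exactly when it is False
    return not target_px.isdisjoint(non_target_px)
```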
In other embodiments, if there is no overlapping portion, there should be only two regions on the image; if there is one, the labeling information of the two regions will enclose a new region. Therefore, it can be detected whether the boundary pixel points of the target region and the non-target region form a new region: if so, an overlapping portion exists; otherwise, there is none.
Further, if it is determined that the target region and the non-target region have an overlapping portion, in step 340, the target region is corrected based on the overlapping portion and the non-target region, and a corrected image is obtained.
If it is determined that the target region and the non-target region do not have an overlapped portion, in step 350, deleting the labeling information of the non-target region from the processed image to be corrected to obtain a corrected image; equivalently, the target area does not need to be corrected, and the labeling information of the non-target area is deleted, so that the target area can be conveniently applied in the following process.
In some embodiments, even if the target region and the non-target region overlap, effective correction may not be achievable. For example, when the two regions have the same area, the occupation ratio of the overlapping portion is the same in both, and in this case the target region may not be correctable.
Therefore, as an optional implementation manner, the image processing method further includes: determining the area of a target region and the area of a non-target region; if the area of the target area is the same as that of the non-target area, identifying the image to be corrected as an uncorrectable image; and if the area of the target area is different from the area of the non-target area, correcting the target area based on the overlapped part and the non-target area to obtain a corrected image.
In this embodiment, if the areas of the target region and the non-target region are the same, the occupation ratios of the overlapped part in the two regions are the same, and at this time, the target region cannot be effectively calibrated, and corresponding identification can be performed; if not, the target area can be corrected using the overlapping portion and the non-target area. In this way, invalid corrections can be avoided, ensuring that the target area is effectively corrected.
Of course, in some embodiments the equal-area case need not be treated specially; that is, correction of the target region can proceed without this precondition.
In the embodiment of the present invention, the target region is corrected based on the overlapped portion and the non-target region, and various embodiments may be adopted, and these embodiments are described below separately.
As a first alternative implementation, step 340 includes: determining a first occupation ratio of the overlapped part in the target area and determining a second occupation ratio of the overlapped part in the non-target area; comparing the first occupation ratio and the second occupation ratio; if the first occupation ratio is larger than the second occupation ratio, deleting the labeling information of the non-target area from the processed image to be corrected to obtain a corrected image; and if the first occupation proportion is smaller than the second occupation proportion, deleting the labeling information of the non-target area from the processed image to be corrected, and deleting the overlapped part from the target area to obtain a corrected image.
In this embodiment, following the overlap determination described above: once the overlapped portion is determined, the coordinates of its pixel points are known, so the areas of the overlapped portion, the target region, and the non-target region can all be computed. The first occupation ratio is then the ratio of the area of the overlapped portion to the area of the target region, and the second occupation ratio is the ratio of the area of the overlapped portion to the area of the non-target region.
Further, the first occupation ratio is compared with the second occupation ratio. If the first occupation ratio is larger, the overlapped portion is more likely to belong to the target area; the target area does not need to be corrected, and only the labeling information of the non-target area needs to be deleted from the processed image to be corrected.
If the first occupation ratio is smaller, the overlapped portion is more likely to belong to the non-target area, and the target area needs to be corrected: the labeling information of the non-target area is deleted from the processed image to be corrected, and the overlapped portion is deleted from the target area, yielding the corrected image.
In the embodiment of the invention, the occupation ratios of the overlapped parts in the two areas are respectively calculated, if the occupation ratio in the target area is larger than that in the non-target area, the target area does not need to be corrected, and the overlapped parts can be deleted from the non-target area; deleting the overlapped part from the target area if the occupation ratio in the target area is smaller than that in the non-target area; thereby realizing effective correction of the target area.
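The ratio comparison above can be sketched with boolean region masks. The function name, the mask representation, and the return convention here are illustrative assumptions, not part of the claimed method:

```python
import numpy as np

def correct_by_occupation_ratio(target_mask, non_target_mask):
    # target_mask / non_target_mask: boolean (H, W) arrays marking the
    # annotated target and non-target areas (illustrative representation)
    overlap = target_mask & non_target_mask              # overlapped part
    overlap_area = overlap.sum()
    first_ratio = overlap_area / target_mask.sum()       # ratio within the target area
    second_ratio = overlap_area / non_target_mask.sum()  # ratio within the non-target area

    if first_ratio > second_ratio:
        # overlap more likely belongs to the target area: keep the target
        # area as-is; only the non-target labeling would be deleted
        return target_mask
    # overlap more likely belongs to the non-target area:
    # delete the overlapped part from the target area
    return target_mask & ~overlap
```

Deleting the labeling information of the non-target area is then simply a matter of dropping its mask or annotation record from the processed image to be corrected.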
As a second alternative, step 340 includes: respectively determining the central position of a target area, the central position of an overlapped part and the central position of the non-target area; calculating a first distance between a center position of the target area and a center position of the overlapped part; calculating a second distance between the center position of the target region and the center position of the non-target region; calculating a third distance between the center position of the overlapped part and the center position of the non-target area; determining the offset direction corresponding to the overlapped part according to the first distance, the second distance and the third distance; the offset direction is towards the target area or towards the non-target area; and correcting the target area according to the offset direction to obtain a corrected image.
In this embodiment, the center positions of the target area, the non-target area, and the overlapped part are determined. If there were no partitioning error, the overlapped part would not occur, and the sum of the first distance and the third distance would equal the second distance. Therefore, whether the overlapped part is shifted toward the target area or toward the non-target area can be judged from the magnitude relation of these distances. If the overlapped part is shifted toward the target area, it is more likely to belong to the non-target area; if it is shifted toward the non-target area, it is more likely to belong to the target area.
Therefore, by calculating the first distance, the second distance, and the third distance, it is determined whether the overlapped part is shifted toward the target region or the non-target region, and further, the target region can be effectively corrected according to the shift direction.
In some embodiments, when the coordinates of each pixel point corresponding to the target area, the overlapped part, and the non-target area are known, the corresponding coordinates of the center position may be calculated, and then the first distance, the second distance, and the third distance may be calculated.
As an alternative implementation, determining the offset direction corresponding to the overlapped part according to the first distance, the second distance and the third distance includes: determining an absolute value of a distance difference between the first distance and the second distance, and determining an absolute value of a distance difference between the second distance and the third distance; if the absolute value of the distance difference between the first distance and the second distance is larger than the absolute value of the distance difference between the second distance and the third distance, determining that the offset direction is towards the target area; and if the absolute value of the distance difference between the first distance and the second distance is smaller than the absolute value of the distance difference between the second distance and the third distance, determining that the deviation direction is the deviation towards the non-target area.
In such an embodiment, the absolute value of the distance difference between the first distance and the second distance may represent the degree to which the overlapping portion is shifted toward the target region, and the absolute value of the distance difference between the second distance and the third distance may represent the degree to which the overlapping portion is shifted toward the non-target region. Therefore, if the absolute value of the distance difference between the first distance and the second distance is greater than the absolute value of the distance difference between the second distance and the third distance, determining that the offset direction is offset towards the target area; and if the absolute value of the distance difference between the first distance and the second distance is smaller than the absolute value of the distance difference between the second distance and the third distance, determining that the offset direction is towards the non-target area.
In the embodiment of the present invention, by determining the absolute value of the distance difference between the first distance and the second distance, determining the absolute value of the distance difference between the second distance and the third distance, and then comparing the two absolute values of the distance difference, effective and accurate determination of the offset direction can be achieved.
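As a sketch of the second alternative, the center positions can be taken as pixel-coordinate centroids, and the offset direction read off from the two absolute distance differences. The centroid choice and the string return values are assumptions made for illustration:

```python
import numpy as np

def offset_direction(target_mask, non_target_mask):
    # target_mask / non_target_mask: boolean (H, W) masks of the two areas
    overlap = target_mask & non_target_mask

    def center(mask):
        ys, xs = np.nonzero(mask)          # coordinates of the region's pixels
        return np.array([ys.mean(), xs.mean()])

    c_t, c_o, c_n = center(target_mask), center(overlap), center(non_target_mask)
    d1 = np.linalg.norm(c_t - c_o)   # first distance: target center to overlap center
    d2 = np.linalg.norm(c_t - c_n)   # second distance: target center to non-target center
    d3 = np.linalg.norm(c_o - c_n)   # third distance: overlap center to non-target center

    # compare |d1 - d2| with |d2 - d3| as described above
    if abs(d1 - d2) > abs(d2 - d3):
        return "toward_target"       # overlap likely belongs to the non-target area
    return "toward_non_target"       # overlap likely belongs to the target area
```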
Further, after determining the offset direction, as an optional implementation, correcting the target area according to the offset direction to obtain a corrected image, including: if the deviation direction is towards the target area, deleting the labeling information of the non-target area from the processed image to be corrected, and deleting the overlapped part from the target area to obtain a corrected image; and if the deviation direction is towards the non-target area, deleting the marking information of the non-target area from the processed image to be corrected.
In the foregoing description, if the overlapped part is shifted to the target area, it indicates that the overlapped part belongs to the non-target area, and at this time, the label information of the non-target area needs to be deleted, and the overlapped part needs to be deleted from the target area, so as to realize the correction of the target area. If the overlapped part deviates to the non-target area, the overlapped part belongs to the target area, and at this time, the labeling information of the non-target area is directly deleted from the processed image to be corrected, namely the default target area does not need to be corrected.
In the embodiment of the present invention, if the overlapped part is shifted toward the target region, the possibility that the overlapped part belongs to the non-target region is higher; otherwise, the probability of belonging to the target area is higher; further, in this way, effective correction of the target area is achieved.
As a third alternative, step 340 includes: determining a first degree of matching between the coincident portion and the target region; the first matching degree comprises a chroma matching degree and a brightness matching degree; determining a second matching degree between the overlapped part and the non-target area; the second matching degree comprises a chroma matching degree and a brightness matching degree; if the first matching degree is greater than the second matching degree, deleting the labeling information of the non-target area from the processed image to be corrected to obtain a corrected image; and if the first matching degree is smaller than the second matching degree, deleting the marking information of the non-target area from the processed image to be corrected, and deleting the overlapped part from the target area to obtain the corrected image.
In this embodiment, the chroma matching degree can be determined by obtaining the chroma information of each pixel point and then based on the chroma information of each pixel point; the brightness matching degree can be determined by obtaining the brightness information of each pixel point and then based on the brightness information of each pixel point.
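One possible way to compute a combined chroma/luma matching degree is to compare per-channel means between the overlapped part and the rest of a region. The YUV channel layout and the 1/(1 + diff) score below are assumptions for illustration only, not the mandated computation:

```python
import numpy as np

def matching_degree(image_yuv, region_mask, overlap_mask):
    # image_yuv: (H, W, 3) array, channel 0 = luma, channels 1-2 = chroma
    # region_mask / overlap_mask: boolean (H, W) masks
    region = image_yuv[region_mask & ~overlap_mask]   # region excluding the overlap
    overlap = image_yuv[overlap_mask]
    luma_diff = abs(overlap[:, 0].mean() - region[:, 0].mean())
    chroma_diff = np.abs(overlap[:, 1:].mean(axis=0)
                         - region[:, 1:].mean(axis=0)).mean()
    # smaller channel differences -> score closer to 1 (better match)
    return 1.0 / (1.0 + luma_diff + chroma_diff)
```

The first matching degree would then be `matching_degree(img, target_mask, overlap)` and the second `matching_degree(img, non_target_mask, overlap)`, with the larger score deciding which area the overlapped part belongs to.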
In addition to chroma matching and luma matching, in some embodiments, other matching information may be employed, such as: the hash value matching degree, the feature value matching degree, and the like, which are not limited herein.
In some embodiments, the first matching degree is a matching degree between the overlapping portion and the entire target region or a matching degree between the overlapping portion and a region where the first designated pixel point in the target region is located; the second matching degree is the matching degree between the overlapped part and the whole non-target area or the matching degree between the overlapped part and the area where the second designated pixel point in the non-target area is located; and the matching degree between the first designated pixel point and the second designated pixel point accords with a preset matching degree condition.
That is, in the embodiment of the present invention, the matching degree may be a global matching degree or a local matching degree, and different manners may be adopted in different application scenarios. For example: when the running performance of the processor is relatively strong, the global matching degree may be adopted; when the running performance of the processor is ordinary, the local matching degree may be adopted. For another example: if the identified object is strongly influenced by its global context, the global matching degree may be adopted; if it is only weakly influenced by the global context, the local matching degree may be adopted.
The preset matching degree condition may be: the brightness matching degree and the chroma matching degree are both smaller than a preset threshold value; or the luminance matching degree is smaller than the first preset threshold, and the chrominance matching degree is smaller than the second preset threshold, and the like.
In some embodiments, the first designated pixel point and the second designated pixel point may also be pixel points within a preset range of the central pixel point of the overlapped part, or may be selected in other manners, which are not limited herein.
In the embodiment of the invention, the determined matching degree can be a global matching degree or a local matching degree, and can be selected by combining with an actual application scene, so that the flexibility of image processing is improved.
Further, if the first matching degree is greater than the second matching degree, it indicates that the overlapped part is more likely to belong to the target area, and at this time, the label information of the non-target area is deleted from the processed image to be corrected, so as to obtain a corrected image. If the first matching degree is less than the second matching degree, the overlapped part is more likely to belong to the non-target area, at this time, the marking information of the non-target area is deleted from the processed image to be corrected, and the overlapped part is deleted from the target area, so that the corrected image is obtained.
In the embodiment of the invention, the matching degrees between the overlapped part and the target area and the non-target area are respectively determined, and the area to which the overlapped part belongs is determined according to the matching degrees, so that the effective and accurate correction of the target area is realized.
Whichever of the above correction embodiments is adopted, after the image to be corrected is corrected, a corrected image is obtained, and further applications can be made based on the corrected image.
As an optional implementation manner, the image processing method further includes: correspondingly storing the corrected image and the processed image to be corrected into a preset correction data set; judging whether the data volume in the preset correction data set is larger than a preset number or not; if the data amount in the preset correction data set is larger than the preset amount, taking the correction data set as a training data set, and training the initial correction model to obtain a trained correction model; the trained correction model is used for correcting the image marked with the target area and the non-target area.
Wherein, the preset correction data set includes: the corrected image and the processed image to be corrected are in one-to-one correspondence, and these images may be images obtained by the technical solution of the embodiment of the present invention, or may be obtained in other manners, which is not limited herein.
When the correction data set is large enough, it can be used for training the model. The preset number can be set flexibly, for example: 100, 200, etc., which is not limited herein.
Based on the correction data set, the initial correction model is trained, so that a trained correction model can be obtained, and the correction model can be directly used for correcting the image marked with the target region and the non-target region.
In the embodiment of the invention, if the target area is corrected, the image before correction and the image after correction are stored in a correction data set, when the data quantity in the correction data set is greater than the preset quantity, the correction data set is used as a training data set to train the correction model, and the trained correction model can directly output the correction result based on the image marked with the target area and the non-target area; thereby improving the applicability of the treatment method.
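A minimal sketch of this accumulate-then-train loop follows; the threshold value, the storage format, and the `train_correction_model` stub are all hypothetical placeholders for whatever training routine is actually used:

```python
PRESET_NUMBER = 100      # hypothetical preset quantity threshold
correction_dataset = []  # pairs of (processed image to be corrected, corrected image)

def train_correction_model(dataset):
    # placeholder: a real implementation would fit the initial correction
    # model on the (input, label) pairs accumulated in `dataset`
    return {"trained_on": len(dataset)}

def store_pair(processed_image, corrected_image):
    """Store one correction pair; trigger training once the data amount
    in the correction data set exceeds the preset number."""
    correction_dataset.append((processed_image, corrected_image))
    if len(correction_dataset) > PRESET_NUMBER:
        return train_correction_model(correction_dataset)
    return None  # not enough data yet; keep accumulating
```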
In the embodiment of the invention, when the target area cannot be effectively corrected, the target area can be fed back to the user, and the user can manually correct the target area to ensure that the corrected image can be finally output.
In addition, the training of each model mentioned above may employ a test data set, iterative training, and the like, to ensure the accuracy of the model, which is not described in detail here.
Based on the same inventive concept, referring to fig. 4, an embodiment of the present invention further provides an adaptive network-based image processing apparatus 400, including: an acquisition module 410 and a processing module 420.
The obtaining module 410 is configured to: acquiring an image to be corrected; and marking a target area in the image to be corrected. The processing module 420 is configured to: obtaining a processed image to be corrected according to the image to be corrected and a pre-trained detection model; the detection model is used for marking a non-target area in an image, and the target area and the non-target area are marked in the processed image to be corrected; judging whether the target area and the non-target area have a superposition part; if the target area and the non-target area have an overlapped part, correcting the target area based on the overlapped part and the non-target area to obtain a corrected image; and if the target area and the non-target area do not have the overlapped part, deleting the marking information of the non-target area from the processed image to be corrected to obtain a corrected image.
In this embodiment of the present invention, the processing module 420 is further configured to: determining the area of the target region and the area of the non-target region; if the area of the target area is the same as that of the non-target area, identifying the image to be corrected as an uncorrectable image; and if the area of the target area is different from the area of the non-target area, correcting the target area based on the overlapped part and the non-target area to obtain a corrected image.
In this embodiment of the present invention, the processing module 420 is specifically configured to: determining a first occupancy proportion of the coincident portion in the target region and a second occupancy proportion of the coincident portion in the non-target region; comparing the first occupancy proportion and the second occupancy proportion; if the first occupation ratio is larger than the second occupation ratio, deleting the labeling information of the non-target area from the processed image to be corrected to obtain a corrected image; and if the first occupation proportion is smaller than the second occupation proportion, deleting the marking information of the non-target area from the processed image to be corrected, and deleting the overlapped part from the target area to obtain a corrected image.
In this embodiment of the present invention, the processing module 420 is specifically configured to: respectively determining the central position of the target area, the central position of the overlapped part and the central position of the non-target area; calculating a first distance between a center position of the target area and a center position of the overlapped part; calculating a second distance between the center position of the target region and the center position of the non-target region; calculating a third distance between a center position of the coincident portion and a center position of the non-target region; determining the offset direction corresponding to the overlapped part according to the first distance, the second distance and the third distance; the offset direction is offset to the target area or offset to the non-target area; and correcting the target area according to the offset direction to obtain a corrected image.
In this embodiment of the present invention, the processing module 420 is specifically configured to: determining an absolute value of a distance difference between the first distance and the second distance, and determining an absolute value of a distance difference between the second distance and the third distance; if the absolute value of the distance difference between the first distance and the second distance is greater than the absolute value of the distance difference between the second distance and the third distance, determining that the offset direction is offset towards the target area; and if the absolute value of the distance difference between the first distance and the second distance is smaller than the absolute value of the distance difference between the second distance and the third distance, determining that the offset direction is offset towards the non-target area.
In this embodiment of the present invention, the processing module 420 is specifically configured to: if the deviation direction is towards the target area, deleting the labeling information of the non-target area from the processed image to be corrected, and deleting the overlapped part from the target area to obtain a corrected image; and if the deviation direction is the deviation towards the non-target area, deleting the marking information of the non-target area from the processed image to be corrected.
In this embodiment of the present invention, the processing module 420 is specifically configured to: determining a first degree of match between the coincident portion and the target region; the first matching degree comprises a chroma matching degree and a brightness matching degree; determining a second degree of match between the coincident portion and the non-target region; the second matching degree comprises a chroma matching degree and a brightness matching degree; if the first matching degree is greater than the second matching degree, deleting the labeling information of the non-target area from the processed image to be corrected to obtain a corrected image; and if the first matching degree is smaller than the second matching degree, deleting the marking information of the non-target area from the processed image to be corrected, and deleting the overlapped part from the target area to obtain a corrected image.
In this embodiment of the present invention, the processing module 420 is further configured to: correspondingly storing the corrected image and the processed image to be corrected into a preset correction data set; judging whether the data volume in the preset correction data set is larger than a preset number or not; if the data amount in the preset correction data set is larger than the preset amount, taking the correction data set as a training data set, and training an initial correction model to obtain a trained correction model; the trained correction model is used for correcting the image marked with the target area and the non-target area.
The image processing apparatus 400 based on the adaptive network corresponds to the image processing method described above, and therefore, the embodiments of the respective functional modules refer to the description in the foregoing embodiments, and are not described again here.
Referring to fig. 5, an embodiment of the invention provides an electronic device 500, and the electronic device 500 may serve as the execution subject of the image processing method.
The electronic device 500 includes: a processor 510 and a memory 520; processor 510 and memory 520 are communicatively coupled; the memory 520 stores instructions executable by the processor 510, and the instructions are executed by the processor 510 to enable the processor 510 to execute the image processing method in the foregoing embodiments.
The processor 510 and the memory 520 may be connected by a communication bus.
It is understood that the electronic device 500 may further include other commonly used modules as required, which are not described in the embodiments of the present invention.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a computer, the computer program executes the image processing method described in the foregoing embodiment.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present invention, and is not intended to limit the scope of the present invention, and it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. An image processing method based on an adaptive network is characterized by comprising the following steps:
acquiring an image to be corrected; marking a target area in the image to be corrected;
obtaining a processed image to be corrected according to the image to be corrected and a pre-trained detection model; the detection model is used for marking a non-target area in an image, and the target area and the non-target area are marked in the processed image to be corrected;
judging whether the target area and the non-target area have an overlapped part;
if the target area and the non-target area have an overlapped part, correcting the target area based on the overlapped part and the non-target area to obtain a corrected image;
if the target area and the non-target area do not have the overlapped part, deleting the labeling information of the non-target area from the processed image to be corrected to obtain a corrected image;
the correcting the target region based on the overlapped part and the non-target region to obtain a corrected image includes:
respectively determining the central position of the target area, the central position of the overlapped part and the central position of the non-target area;
calculating a first distance between a center position of the target area and a center position of the overlapped part;
calculating a second distance between the center position of the target region and the center position of the non-target region;
calculating a third distance between a center position of the coincident portion and a center position of the non-target region;
determining the offset direction corresponding to the overlapped part according to the first distance, the second distance and the third distance; the offset direction is offset to the target area or offset to the non-target area;
and correcting the target area according to the offset direction to obtain a corrected image.
2. The image processing method according to claim 1, characterized in that the image processing method further comprises:
determining the area of the target region and the area of the non-target region;
if the area of the target area is the same as that of the non-target area, identifying the image to be corrected as an uncorrectable image;
and if the area of the target area is different from the area of the non-target area, correcting the target area based on the overlapped part and the non-target area to obtain a corrected image.
3. The method according to claim 1, wherein the determining the offset direction corresponding to the overlapped part according to the first distance, the second distance and the third distance comprises:
determining an absolute value of a distance difference between the first distance and the second distance, and determining an absolute value of a distance difference between the second distance and the third distance;
if the absolute value of the distance difference between the first distance and the second distance is greater than the absolute value of the distance difference between the second distance and the third distance, determining that the offset direction is offset towards the target area;
and if the absolute value of the distance difference between the first distance and the second distance is smaller than the absolute value of the distance difference between the second distance and the third distance, determining that the offset direction is offset towards the non-target area.
4. The image processing method according to claim 1, wherein the correcting the target region according to the offset direction to obtain a corrected image comprises:
if the deviation direction is towards the target area, deleting the labeling information of the non-target area from the processed image to be corrected, and deleting the overlapped part from the target area to obtain a corrected image;
and if the deviation direction is the deviation towards the non-target area, deleting the marking information of the non-target area from the processed image to be corrected.
5. The image processing method according to claim 1, characterized in that the image processing method further comprises:
correspondingly storing the corrected image and the processed image to be corrected into a preset correction data set;
judging whether the data volume in the preset correction data set is larger than a preset number or not;
if the data amount in the preset correction data set is larger than the preset amount, taking the correction data set as a training data set, and training an initial correction model to obtain a trained correction model; the trained correction model is used for correcting the image marked with the target area and the non-target area.
6. An adaptive network-based image processing apparatus, comprising:
an acquisition module to: acquiring an image to be corrected; marking a target area in the image to be corrected;
a processing module to: obtaining a processed image to be corrected according to the image to be corrected and a pre-trained detection model; the detection model is used for marking a non-target area in an image, and the target area and the non-target area are marked in the processed image to be corrected; judging whether the target area and the non-target area have an overlapped part; if the target area and the non-target area have an overlapped part, correcting the target area based on the overlapped part and the non-target area to obtain a corrected image; if the target area and the non-target area do not have the overlapped part, deleting the labeling information of the non-target area from the processed image to be corrected to obtain a corrected image;
the processing module is further configured to: respectively determining the central position of the target area, the central position of the overlapped part and the central position of the non-target area; calculating a first distance between a center position of the target area and a center position of the overlapped part; calculating a second distance between the center position of the target region and the center position of the non-target region; calculating a third distance between a center position of the coincident portion and a center position of the non-target region; determining the offset direction corresponding to the overlapped part according to the first distance, the second distance and the third distance; the offset direction is offset to the target area or offset to the non-target area; and correcting the target area according to the offset direction to obtain a corrected image.
CN202210977909.7A 2022-08-16 2022-08-16 Image processing method and device based on adaptive network Active CN115063323B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210977909.7A CN115063323B (en) 2022-08-16 2022-08-16 Image processing method and device based on adaptive network


Publications (2)

Publication Number Publication Date
CN115063323A CN115063323A (en) 2022-09-16
CN115063323B true CN115063323B (en) 2022-11-15


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127747A (en) * 2016-06-17 2016-11-16 史方 Car surface damage classifying method and device based on degree of depth study
CN110033425A (en) * 2018-01-10 2019-07-19 富士通株式会社 Interference region detection device and method, electronic equipment
CN110569840A (en) * 2019-08-13 2019-12-13 浙江大华技术股份有限公司 Target detection method and related device
CN112989872A (en) * 2019-12-12 2021-06-18 华为技术有限公司 Target detection method and related device
CN114821513A (en) * 2022-06-29 2022-07-29 威海凯思信息科技有限公司 Image processing method and device based on multilayer network and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107212899B (en) * 2017-05-25 2020-12-11 上海联影医疗科技股份有限公司 Medical imaging method and medical imaging system
CN111242126A (en) * 2020-01-15 2020-06-05 上海眼控科技股份有限公司 Irregular text correction method and device, computer equipment and storage medium
CN113515981A (en) * 2020-05-22 2021-10-19 阿里巴巴集团控股有限公司 Identification method, device, equipment and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Adaptive selection of non-target cluster centers for K-means tracker;Hiroshi Oike等;《2008 19th International Conference on Pattern Recognition》;20090123;第1-4页 *
SMF-based texture target recognition method for remote sensing images; Chen Shaobin et al.; Journal of Huazhong University of Science and Technology (Natural Science Edition); 2010-11-15 (No. 11); pp. 33-36 *
UAV-based multi-target detection and tracking on the ground; Qin Yaolong; China Masters' Theses Full-text Database, Engineering Science and Technology II; 2019-05-15; C031-190 *

Also Published As

Publication number Publication date
CN115063323A (en) 2022-09-16

Similar Documents

Publication Publication Date Title
CN107464266B Correction method, device, equipment and storage medium for camera calibration parameters
US20230086961A1 (en) Parallax image processing method, apparatus, computer device and storage medium
CN111931864B (en) Method and system for multiple optimization of target detector based on vertex distance and cross-over ratio
US20220270204A1 (en) Image registration method, terminal, and computer storage medium
CN113971727A (en) Training method, device, equipment and medium of semantic segmentation model
CN115393815A (en) Road information generation method and device, electronic equipment and computer readable medium
CN111709884A (en) License plate key point correction method, system, equipment and storage medium
CN112580734A (en) Target detection model training method, system, terminal device and storage medium
CN110942455A Method and device for detecting missing cotter pins on power transmission lines, and computer equipment
US20220222859A1 (en) Difference detection apparatus, difference detection method, and program
CN112686835A (en) Road obstacle detection device, method and computer-readable storage medium
CN111724396A (en) Image segmentation method and device, computer-readable storage medium and electronic device
CN115063323B (en) Image processing method and device based on adaptive network
CN112001357B (en) Target identification detection method and system
CN114821513B (en) Image processing method and device based on multilayer network and electronic equipment
CN112286780B (en) Method, device, equipment and storage medium for testing recognition algorithm
CN112465886A (en) Model generation method, device, equipment and readable storage medium
US20240011792A1 (en) Method and apparatus for updating confidence of high-precision map
CN115272462A (en) Camera pose estimation method and device and electronic equipment
CN110197228B (en) Image correction method and device
CN116257273B (en) Updating method, terminal and computer storage medium of obstacle detection model
CN111161225A (en) Image difference detection method and device, electronic equipment and storage medium
CN114881908B (en) Abnormal pixel identification method, device and equipment and computer storage medium
US11636619B2 (en) System and method for generating basic information for positioning and self-positioning determination device
CN114202542B (en) Visibility inversion method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant