CN115937003A - Image processing method, image processing device, terminal equipment and readable storage medium - Google Patents
Image processing method, image processing device, terminal equipment and readable storage medium
- Publication number
- CN115937003A (application number CN202211361760.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- detected
- target object
- effective area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The embodiment of the invention provides an image processing method, an image processing device, terminal equipment and a readable storage medium. A target image is acquired; the target image is input into a pre-established image segmentation model to obtain the target object to be detected in the target image; mask information of the target object to be detected is determined from the target object; a perspective transformation matrix and a corresponding image deflection angle are determined according to the effective area and the target area to which the effective area is mapped; and the target image is corrected according to the perspective transformation matrix and the corresponding image deflection angle. The actual edge information of the object to be detected can be obtained accurately from the mask, and the deflection angle of the object and parameters such as the corresponding affine matrix are then calculated by a conventional affine transformation method, so that the object to be detected can be rectified even in a complex scene.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, a terminal device, and a readable storage medium.
Background
In the prior art, when a skewed image is corrected, the edge contour of the object to be corrected is obtained by methods such as image binarization and image sharpening, and the subsequent correction is then performed using that edge contour. However, when the object to be detected sits in an overly complex environment, its edge detection is prone to large deviations, so the final correction of the object has a large error or even fails; the normal operation of the correction function in a complex background environment therefore cannot be well guaranteed.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide an image processing method, apparatus, terminal device and readable storage medium that overcome or at least partially solve the above problems.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a target image;
inputting the target image into a pre-established image segmentation model to obtain a target object to be detected of the target image;
determining mask information of the target object to be detected according to the target object to be detected; the mask information is used for distinguishing an effective area of the target object to be detected from a background area;
determining a perspective transformation matrix and a corresponding image deflection angle according to the effective area and the target area to which the effective area is mapped;
and correcting the target image according to the perspective transformation matrix and the corresponding image deflection angle.
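As a concrete illustration of the deflection-angle part of these steps — a hedged sketch with hypothetical corner coordinates, since the patent does not spell out this calculation — the angle can be taken as the inclination of the effective region's top edge relative to the horizontal top edge of the target region:

```python
import math

# Hypothetical top-edge vertices of the detected effective region, in image
# coordinates (x grows right, y grows down); the target region's top edge
# is horizontal after correction.
top_left = (12.0, 40.0)
top_right = (300.0, 18.0)

# Deflection angle of the region's top edge relative to the horizontal.
dx = top_right[0] - top_left[0]
dy = top_right[1] - top_left[1]
deflection_deg = math.degrees(math.atan2(dy, dx))
```

A negative value here indicates a counter-clockwise tilt in image coordinates; the perspective matrix itself is obtained from the full four-corner correspondences.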
Optionally, the determining a perspective transformation matrix and a corresponding image deflection angle according to the effective region and the target region to which the effective region is mapped includes:
acquiring first vertex information of an effective area of the target object to be detected and second vertex information of the target area;
and calculating, according to the first vertex information, the second vertex information and a preset relational expression, the perspective transformation matrix and the corresponding image deflection angle required to correct the image.
Optionally, the inputting the target image into a pre-established image segmentation model to obtain a target object to be detected of the target image includes:
performing scaling preprocessing on the target image to obtain an input image;
and inputting the input image into a pre-established image segmentation model, and marking the target object to be detected in the target image by adopting a mask, wherein the pre-established image segmentation model is obtained after training based on a neural network.
Optionally, the acquiring first vertex information of the effective region of the target object to be detected includes:
and acquiring first vertex information of the effective area by adopting a corner detection algorithm.
Optionally, the method further comprises:
and if the target object to be detected is not detected in the target image, the target image is not corrected.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
the acquisition module is used for acquiring a target image;
the determining module is used for inputting the target image into a pre-established image segmentation model to obtain a target object to be detected of the target image;
the processing module is used for determining mask information of the target object to be detected according to the target object to be detected; the mask information is used for distinguishing an effective area of the target object to be detected from a background area;
the calculation module is used for determining a perspective transformation matrix and a corresponding image deflection angle according to the effective area and the target area to which the effective area is mapped;
and the correction module is used for correcting the target image according to the perspective transformation matrix and the corresponding image deflection angle.
Optionally, the computing module is configured to:
acquiring first vertex information of an effective area of the target object to be detected and second vertex information of the target area;
and calculating, according to the first vertex information, the second vertex information and a preset relational expression, the perspective transformation matrix and the corresponding image deflection angle required for image correction.
Optionally, the determining module is configured to:
performing scaling preprocessing on the target image to obtain an input image;
inputting the input image into a pre-established image segmentation model, and marking a target object to be detected in the target image by adopting a mask, wherein the pre-established image segmentation model is obtained after training based on a neural network.
Optionally, the computing module is configured to:
and acquiring first vertex information of the effective area by adopting a corner detection algorithm.
Optionally, the processing module is configured to:
and if the target object to be detected is not detected in the target image, the target image is not corrected.
In a third aspect, an embodiment of the present invention provides a terminal device, including: at least one processor and a memory;
the memory stores a computer program; the at least one processor executes the computer program stored by the memory to implement the image processing method provided by the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed, implements the image processing method provided in the first aspect.
The embodiment of the invention has the following advantages:
according to the image processing method, the image processing device, the terminal equipment and the readable storage medium, the target image is obtained; inputting the target image into a pre-established image segmentation model to obtain a target object to be detected of the target image; determining mask information of the target object to be detected according to the target object to be detected; the mask information is used for distinguishing an effective area of the target object to be detected from a background area; determining a transmission transformation matrix and a corresponding image deflection angle according to the effective area and the target area after the effective area is mapped; the method comprises the steps of correcting a target image according to a transmission transformation matrix and a corresponding image deflection angle, positioning the position of an object to be detected by using an image segmentation model, acquiring accurate outline mask information of the object, accurately acquiring actual edge information of the object to be detected based on the mask, calculating the deflection angle of the object and parameters such as a corresponding affine matrix according to a traditional affine transformation method, and correcting the object to be detected in a complex scene.
Drawings
FIG. 1 is a flow chart of the steps of an embodiment of an image processing method of the present invention;
FIG. 2 is a flowchart of the steps of the image segmentation model training of the present invention;
FIG. 3 is a flow chart of steps of yet another image processing method embodiment of the present invention;
FIG. 4 is an original image to be corrected according to the present invention;
FIG. 5 is an effective mask image of the present invention;
FIG. 6 is a schematic view of the rectified object of the present invention;
FIG. 7 is a schematic diagram of an active corner point of the present invention;
FIG. 8 is a block diagram of an embodiment of an image processing apparatus according to the present invention;
fig. 9 is a schematic structural diagram of a terminal device of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
An embodiment of the present invention provides an image processing method for performing correction processing on an image. The execution subject of the embodiment is an image processing apparatus deployed on a terminal device, where the terminal device includes at least a computer, a tablet terminal, and the like.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of an image processing method according to the present invention is shown, where the method may specifically include the following steps:
s101, acquiring a target image;
Specifically, an acquisition device (e.g., a document camera) captures the target image to be corrected and transmits it to the terminal device.
S102, inputting a target image into a pre-established image segmentation model to obtain a target object to be detected of the target image;
Specifically, an image segmentation model is established in advance on the terminal device; it is a deep-learning-based image segmentation model. As shown in fig. 2, pre-trained model parameters are loaded into the neural network, and the data layer then reads in the target image processed in the previous step.
S103, determining mask information of the target object to be detected according to the target object to be detected; the mask information is used for distinguishing an effective area of the target object to be detected from a background area;
Specifically, the object to be detected in the picture is marked out with a mask through the convolution operations of the image segmentation model, using a pixel-level discrimination computation in which (r, c) are the pixel coordinates, (H, W) are the image height and width, P_{G(r,c)} and P_{S(r,c)} respectively denote the ground-truth value and the predicted saliency-probability value at that pixel, and the fused output l_{fuse} finally computed is the final mask of the object to be detected. The object to be detected (the book or sheet of paper whose skew is to be corrected) is thereby separated from the complex background area, so that the effective area and the background area can be distinguished stably, providing the most stable guarantee for the subsequent correction.
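The exact formula is not reproduced in this text. Given the symbols just defined, a plausible reconstruction — an assumption, not confirmed by the source, but standard for saliency-segmentation networks that fuse multiple side outputs — is the pixel-wise binary cross-entropy:

```latex
l = -\sum_{(r,c)}^{(H,W)} \Big[ P_{G(r,c)} \log P_{S(r,c)} + \big(1 - P_{G(r,c)}\big) \log\big(1 - P_{S(r,c)}\big) \Big]
```

Under this reading, thresholding the fused saliency probability map trained against this criterion yields the final mask $l_{fuse}$.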
S104, determining a perspective transformation matrix and a corresponding image deflection angle according to the effective area and the target area to which the effective area is mapped;
and S105, correcting the target image according to the perspective transformation matrix and the corresponding image deflection angle.
Aiming at the problem that conventional image rectification is limited by a complex background environment, the embodiment of the invention locates the object to be detected in the image with a deep-learning-based image segmentation algorithm, stably and efficiently segments the object from the image background, and directly generates the corresponding mask image, facilitating the later accurate positioning of the corner-point and edge-line-segment information of the object. This well avoids the edge-detection interference and false detections that the original rectification methods suffer in a complex environment, and greatly improves the accuracy and effectiveness of image rectification.
According to the image processing method provided by the embodiment of the invention, a target image is acquired; the target image is input into a pre-established image segmentation model to obtain the target object to be detected in the target image; mask information of the target object to be detected is determined from the target object, the mask information being used to distinguish the effective area of the target object from the background area; a perspective transformation matrix and a corresponding image deflection angle are determined according to the effective area and the target area to which the effective area is mapped; and the target image is corrected according to the perspective transformation matrix and the corresponding image deflection angle. The image segmentation model locates the object to be detected and yields accurate contour mask information, from which the actual edge information of the object can be obtained precisely; the deflection angle of the object and parameters such as the corresponding affine matrix are then calculated by a conventional affine transformation method, so that the object to be detected can be rectified even in a complex scene.
The present invention further provides a supplementary description of the image processing method provided in the above embodiment.
As shown in fig. 3, the invention addresses the problem that image rectification has previously been limited by complex background environments: a deep-learning-based image segmentation algorithm locates the object to be detected in the image, stably and efficiently segments it from the background, and directly generates the corresponding mask image, facilitating the later accurate positioning of the corner-point and edge-line-segment information of the object. This well avoids the edge-detection interference and false detections of the original rectification methods in a complex environment, and greatly improves the accuracy and effectiveness of image rectification.
As shown in fig. 7, determining the perspective transformation matrix and the corresponding image deflection angle according to the effective area and the target area to which the effective area is mapped includes:
acquiring first vertex information of an effective area of a target object to be detected and second vertex information of the target area;
and calculating, according to the first vertex information, the second vertex information and a preset relational expression, the perspective transformation matrix and the corresponding image deflection angle required to correct the image.
Optionally, inputting the target image into a pre-established image segmentation model to obtain a target object to be detected of the target image, including:
performing scaling preprocessing on the target image to obtain an input image;
the input image is input into a pre-established image segmentation model, a mask is adopted to mark an object to be detected in a target image, and the pre-established image segmentation model is obtained after training based on a neural network.
The image is read in by the acquisition device (e.g., a document camera) and scaled to a suitable size (320 × 320) in preparation for the subsequent image segmentation model;
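As a dependency-free sketch of this scaling step (nearest-neighbour only; a real pipeline would more likely use bilinear interpolation via a library call such as OpenCV's resize, so treat the details as an assumption):

```python
def resize_nearest(img, out_h=320, out_w=320):
    """Scale a row-major image (list of rows) to the model's fixed
    320x320 input size by nearest-neighbour sampling."""
    h, w = len(img), len(img[0])
    return [[img[r * h // out_h][c * w // out_w] for c in range(out_w)]
            for r in range(out_h)]

# A dummy 480x640 single-channel frame standing in for the captured image.
frame = [[0] * 640 for _ in range(480)]
model_input = resize_nearest(frame)   # 320 rows of 320 pixels
```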
acquiring first vertex information of an effective area of a target object to be detected, wherein the first vertex information comprises:
and acquiring first vertex information of the effective area by adopting a corner detection algorithm.
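The patent does not name the corner detection algorithm; for a roughly upright document mask, a simple stand-in (hypothetical, not the claimed method) is the classic extremes-of-(x+y)/(x−y) heuristic:

```python
def mask_corners(mask):
    """Return 4 vertices (top-left, top-right, bottom-right, bottom-left)
    of the effective region in a binary mask, taking the extremes of
    x+y and x-y over all foreground pixels."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    tl = min(pts, key=lambda p: p[0] + p[1])   # smallest x+y
    br = max(pts, key=lambda p: p[0] + p[1])   # largest x+y
    tr = max(pts, key=lambda p: p[0] - p[1])   # largest x-y
    bl = min(pts, key=lambda p: p[0] - p[1])   # smallest x-y
    return [tl, tr, br, bl]

# Tiny synthetic mask: a filled rectangle spanning x in [2,7], y in [1,4].
mask = [[1 if 2 <= x <= 7 and 1 <= y <= 4 else 0 for x in range(10)]
        for y in range(6)]
corners = mask_corners(mask)   # [(2, 1), (7, 1), (7, 4), (2, 4)]
```

A production system would more likely use a dedicated detector such as Shi-Tomasi (OpenCV's goodFeaturesToTrack) on the mask boundary.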
Optionally, the method further comprises:
and if the target object to be detected is not detected in the target image, the target image is not corrected.
The deep-learning-based image segmentation model loads pre-trained model parameters, the data layer reads in the preprocessed picture from the previous step, and the convolution operations of the model mark out the object to be detected with a mask, using a pixel-level discrimination computation in which (r, c) are the pixel coordinates, (H, W) are the image height and width, P_{G(r,c)} and P_{S(r,c)} respectively denote the ground-truth value and the predicted saliency-probability value at that pixel, and the fused output l_{fuse} finally computed is the final mask of the object to be detected. The object to be detected (the book or paper whose skew is to be corrected) is thereby separated from the complex background area, so that the effective area and the background area can be distinguished stably, providing the most stable guarantee for the subsequent correction;
and judging the processing result of the previous step, and if an effective mask image is obtained, obtaining the corresponding image as shown in fig. 4-6.
Then continuing to perform subsequent deviation rectifying operation; otherwise, the picture cannot detect the effective object to be detected and cannot be corrected;
obtaining the effective mask image obtained above, extracting 4 effective corners (corner 1, corner 2, corner 3, corner 4) on the mask by using a corner detection algorithm, and calculating by using the relationship between the four corners and the mapped corresponding points according to the following relationship:
after the above matrix is expanded into an equation, 8 pixel points are needed for solving the equation, 4 four corner points are detected on the original image, meanwhile, coordinates of the eight pixel points are taken into four corner points (usually four vertexes of the corrected image) corresponding to the corrected image, so that a transmission transformation matrix and a corresponding image deflection angle needed by image correction can be calculated, the original image is corrected by using the parameters of the perspective transformation matrix obtained by the calculation, and a final corrected image can be obtained.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
According to the image processing method provided by the embodiment of the invention, a target image is acquired; the target image is input into a pre-established image segmentation model to obtain the target object to be detected in the target image; mask information of the target object to be detected is determined from the target object, the mask information being used to distinguish the effective area of the target object from the background area; a perspective transformation matrix and a corresponding image deflection angle are determined according to the effective area and the target area to which the effective area is mapped; and the target image is corrected according to the perspective transformation matrix and the corresponding image deflection angle. The image segmentation model locates the object to be detected and yields accurate contour mask information, from which the actual edge information of the object can be obtained precisely; the deflection angle of the object and parameters such as the corresponding affine matrix are then calculated by a conventional affine transformation method, so that the object to be detected can be rectified even in a complex scene.
Another embodiment of the present invention provides an image processing apparatus, configured to execute the image processing method provided in the foregoing embodiment.
Referring to fig. 8, a block diagram of an embodiment of an image processing apparatus according to the present invention is shown, and the apparatus may specifically include the following modules: an obtaining module 701, a determining module 702, a processing module 703, a calculating module 704 and a correcting module 705, wherein:
the obtaining module 701 is configured to obtain a target image;
the determining module 702 is configured to input the target image into a pre-established image segmentation model to obtain a target object to be detected of the target image;
the processing module 703 is configured to determine, according to the target object to be detected, mask information related to the target object to be detected; the mask information is used for distinguishing an effective area of the target object to be detected from a background area;
the calculation module 704 is configured to determine a perspective transformation matrix and a corresponding image deflection angle according to the effective region and the target region to which the effective region is mapped;
the correction module 705 is configured to perform correction processing on the target image according to the perspective transformation matrix and the corresponding image deflection angle.
The image processing device provided by the embodiment of the invention acquires a target image; inputs the target image into a pre-established image segmentation model to obtain the target object to be detected in the target image; determines mask information of the target object to be detected, the mask information being used to distinguish the effective area of the target object from the background area; determines a perspective transformation matrix and a corresponding image deflection angle according to the effective area and the target area to which the effective area is mapped; and corrects the target image according to the perspective transformation matrix and the corresponding image deflection angle. The image segmentation model locates the object to be detected and yields accurate contour mask information, from which the actual edge information of the object can be obtained precisely; the deflection angle of the object and parameters such as the corresponding affine matrix are then calculated by a conventional affine transformation method, so that the object to be detected can be rectified even in a complex scene.
The image processing apparatus according to the above embodiment is further described in an embodiment of the present invention.
Optionally, the calculation module is configured to:
acquiring first vertex information of an effective area of a target object to be detected and second vertex information of the target area;
and calculating, according to the first vertex information, the second vertex information and a preset relational expression, the perspective transformation matrix and the corresponding image deflection angle required for image correction.
Optionally, the determining module is configured to:
performing scaling preprocessing on the target image to obtain an input image;
the input image is input into a pre-established image segmentation model, a mask is adopted to mark an object to be detected in a target image, and the pre-established image segmentation model is obtained after training based on a neural network.
Optionally, the calculation module is configured to:
and acquiring first vertex information of the effective area by adopting a corner detection algorithm.
Optionally, the processing module is configured to:
and if the target object to be detected is not detected in the target image, the target image is not corrected.
It should be noted that the respective implementable modes in the embodiment may be implemented individually, or may be implemented in combination in any combination without conflict, and the present application is not limited thereto.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The image processing device provided by the embodiment of the invention acquires a target image; inputs the target image into a pre-established image segmentation model to obtain the target object to be detected in the target image; determines mask information of the target object to be detected, the mask information being used to distinguish the effective area of the target object from the background area; determines a perspective transformation matrix and a corresponding image deflection angle according to the effective area and the target area to which the effective area is mapped; and corrects the target image according to the perspective transformation matrix and the corresponding image deflection angle. The image segmentation model locates the object to be detected and yields accurate contour mask information, from which the actual edge information of the object can be obtained precisely; the deflection angle of the object and parameters such as the corresponding affine matrix are then calculated by a conventional affine transformation method, so that the object to be detected can be rectified even in a complex scene.
Still another embodiment of the present invention provides a terminal device, configured to execute the image processing method provided in the foregoing embodiment.
Fig. 9 is a schematic structural diagram of a terminal device of the present invention, and as shown in fig. 9, the terminal device includes: at least one processor 901 and memory 902;
the memory stores a computer program; at least one processor executes the computer program stored in the memory to implement the image processing method provided by the above-described embodiments.
The terminal device provided by this embodiment acquires a target image; inputs the target image into a pre-established image segmentation model to obtain the target object to be detected in the target image; determines mask information of the target object to be detected, the mask information being used to distinguish the effective area of the target object from the background area; determines a perspective transformation matrix and a corresponding image deflection angle according to the effective area and the target area to which the effective area is mapped; and corrects the target image according to the perspective transformation matrix and the corresponding image deflection angle. The image segmentation model locates the object to be detected and yields accurate contour mask information, from which the actual edge information of the object can be obtained precisely; the deflection angle of the object and parameters such as the corresponding affine matrix are then calculated by a conventional affine transformation method, so that the object to be detected can be rectified even in a complex scene.
Yet another embodiment of the present application provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed, the computer program implements the image processing method provided in any one of the above embodiments.
According to the computer-readable storage medium of this embodiment, a target image is acquired; the target image is input into a pre-established image segmentation model to obtain a target object to be detected in the target image; mask information of the target object to be detected is determined, the mask information being used to distinguish the effective area of the target object to be detected from the background area; a transmission transformation matrix and a corresponding image deflection angle are determined according to the effective area and the target area to which the effective area is mapped; and the target image is corrected according to the transmission transformation matrix and the corresponding image deflection angle. In this way, the image segmentation model locates the object to be detected and provides accurate contour mask information, from which the actual edge information of the object to be detected is obtained precisely; the deflection angle of the object and parameters such as the corresponding affine matrix are then calculated by a conventional affine transformation method, so that the object to be detected can be corrected even in complex scenes.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts, reference may be made between the embodiments.
As will be appreciated by one of skill in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, electronic devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing electronic device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing electronic device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing electronic devices to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing electronic device to cause a series of operational steps to be performed on the computer or other programmable electronic device to produce a computer implemented process such that the instructions which execute on the computer or other programmable electronic device provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "include", "including", or any other variations thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or electronic device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or electronic device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or electronic device that comprises the element.
The image processing method and image processing apparatus provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present invention, and the above description of the embodiments is intended only to help understand the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (10)
1. An image processing method, characterized in that the method comprises:
acquiring a target image;
inputting the target image into a pre-established image segmentation model to obtain a target object to be detected in the target image;
determining mask information of the target object to be detected according to the target object to be detected; the mask information is used for distinguishing an effective area of the target object to be detected from a background area;
determining a transmission transformation matrix and a corresponding image deflection angle according to the effective area and the target area to which the effective area is mapped;
and correcting the target image according to the transmission transformation matrix and the corresponding image deflection angle.
2. The method of claim 1, wherein the determining a transmission transformation matrix and a corresponding image deflection angle according to the effective area and the target area to which the effective area is mapped comprises:
acquiring first vertex information of an effective area of the target object to be detected and second vertex information of the target area;
and calculating a transmission transformation matrix and a corresponding image deflection angle required by image correction according to the first vertex information and the second vertex information and a preset relational expression.
3. The method according to claim 1, wherein the inputting the target image into a pre-established image segmentation model to obtain a target object to be detected in the target image comprises:
performing scaling preprocessing on the target image to obtain an input image;
inputting the input image into a pre-established image segmentation model, and marking the target object to be detected in the target image with a mask, wherein the pre-established image segmentation model is obtained by training a neural network.
4. The method according to claim 2, wherein the obtaining the first vertex information of the effective area of the target object to be detected comprises:
and acquiring first vertex information of the effective area by adopting a corner detection algorithm.
5. The method of claim 1, further comprising:
and if the target object to be detected is not detected in the target image, refraining from correcting the target image.
6. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a target image;
the determining module is used for inputting the target image into a pre-established image segmentation model to obtain a target object to be detected in the target image;
the processing module is used for determining mask information of the target object to be detected according to the target object to be detected; the mask information is used for distinguishing an effective area of the target object to be detected from a background area;
the calculation module is used for determining a transmission transformation matrix and a corresponding image deflection angle according to the effective area and the target area to which the effective area is mapped;
and the correction module is used for correcting the target image according to the transmission transformation matrix and the corresponding image deflection angle.
7. The apparatus of claim 6, wherein the computing module is configured to:
acquiring first vertex information of an effective area of the target object to be detected and second vertex information of the target area;
and calculating a transmission transformation matrix and a corresponding image deflection angle required by image correction according to the first vertex information and the second vertex information and a preset relational expression.
8. The apparatus of claim 6, wherein the determining module is configured to:
performing scaling preprocessing on the target image to obtain an input image;
inputting the input image into a pre-established image segmentation model, and marking the target object to be detected in the target image with a mask, wherein the pre-established image segmentation model is obtained by training a neural network.
9. A terminal device, comprising: at least one processor and memory;
the memory stores a computer program; the at least one processor executes the computer program stored by the memory to implement the image processing method of any one of claims 1-5.
10. A computer-readable storage medium, characterized in that a computer program is stored therein, which computer program, when executed, implements the image processing method of any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211361760.6A CN115937003A (en) | 2022-11-02 | 2022-11-02 | Image processing method, image processing device, terminal equipment and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211361760.6A CN115937003A (en) | 2022-11-02 | 2022-11-02 | Image processing method, image processing device, terminal equipment and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115937003A true CN115937003A (en) | 2023-04-07 |
Family
ID=86698403
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211361760.6A Pending CN115937003A (en) | 2022-11-02 | 2022-11-02 | Image processing method, image processing device, terminal equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115937003A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116777871A (en) * | 2023-06-20 | 2023-09-19 | 无锡日联科技股份有限公司 | Defect detection method, device, equipment and medium based on X-rays |
CN117392218A (en) * | 2023-10-10 | 2024-01-12 | 钛玛科(北京)工业科技有限公司 | Method and device for correcting deviation of curled material |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006318387A (en) * | 2005-05-16 | 2006-11-24 | Namco Bandai Games Inc | Program, information storage medium, and image forming system |
CN110866871A (en) * | 2019-11-15 | 2020-03-06 | 深圳市华云中盛科技股份有限公司 | Text image correction method and device, computer equipment and storage medium |
CN111860527A (en) * | 2019-10-24 | 2020-10-30 | 北京嘀嘀无限科技发展有限公司 | Image correction method, image correction device, computer device, and storage medium |
CN113033550A (en) * | 2021-03-15 | 2021-06-25 | 合肥联宝信息技术有限公司 | Image detection method and device and computer readable medium |
CN113343965A (en) * | 2020-03-02 | 2021-09-03 | 北京有限元科技有限公司 | Image tilt correction method, apparatus and storage medium |
CN113723399A (en) * | 2021-08-06 | 2021-11-30 | 浙江大华技术股份有限公司 | License plate image correction method, license plate image correction device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110135455B (en) | Image matching method, device and computer readable storage medium | |
JP6348093B2 (en) | Image processing apparatus and method for detecting image of detection object from input data | |
CN108320290B (en) | Target picture extraction and correction method and device, computer equipment and recording medium | |
US9519968B2 (en) | Calibrating visual sensors using homography operators | |
CN115937003A (en) | Image processing method, image processing device, terminal equipment and readable storage medium | |
CN108875731B (en) | Target identification method, device, system and storage medium | |
CN111401266B (en) | Method, equipment, computer equipment and readable storage medium for positioning picture corner points | |
US10430650B2 (en) | Image processing system | |
CN109934847B (en) | Method and device for estimating posture of weak texture three-dimensional object | |
CN111860489A (en) | Certificate image correction method, device, equipment and storage medium | |
CN112017231B (en) | Monocular camera-based human body weight identification method, monocular camera-based human body weight identification device and storage medium | |
CN107452028B (en) | Method and device for determining position information of target image | |
CN110796082A (en) | Nameplate text detection method and device, computer equipment and storage medium | |
CN111832371A (en) | Text picture correction method and device, electronic equipment and machine-readable storage medium | |
CN113962306A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN110807459A (en) | License plate correction method and device and readable storage medium | |
CN114143519A (en) | Method and device for automatically matching projection image with curtain area and projector | |
CN111353325A (en) | Key point detection model training method and device | |
CN111950554A (en) | Identification card identification method, device, equipment and storage medium | |
CN111798422A (en) | Checkerboard angular point identification method, device, equipment and storage medium | |
CN115082935A (en) | Method, apparatus and storage medium for correcting document image | |
CN112950528A (en) | Certificate posture determining method, model training method, device, server and medium | |
CN110991357A (en) | Answer matching method and device and electronic equipment | |
CN117115823A (en) | Tamper identification method and device, computer equipment and storage medium | |
CN111340040A (en) | Paper character recognition method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||