CN110458855B - Image extraction method and related product - Google Patents

Image extraction method and related product

Info

Publication number
CN110458855B
CN110458855B
Authority
CN
China
Prior art keywords
image
contour
edge
area
processed
Prior art date
Legal status
Active
Application number
CN201910611808.6A
Other languages
Chinese (zh)
Other versions
CN110458855A (en)
Inventor
王忍宝
王晓斐
Current Assignee
Anhui Toycloud Technology Co Ltd
Original Assignee
Anhui Toycloud Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Anhui Toycloud Technology Co Ltd filed Critical Anhui Toycloud Technology Co Ltd
Priority to CN201910611808.6A priority Critical patent/CN110458855B/en
Publication of CN110458855A publication Critical patent/CN110458855A/en
Application granted granted Critical
Publication of CN110458855B publication Critical patent/CN110458855B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/181 Segmentation; Edge detection involving edge growing; involving edge linking

Abstract

The embodiment of the application discloses an image extraction method and a related product, wherein the image extraction method is applied to the extraction of regional images corresponding to objects with parallel opposite sides, and comprises the following steps: acquiring an image to be processed, and partitioning the image to be processed to obtain a plurality of area block images; determining a scanning area size of each of a plurality of area block images; scanning each area block image according to the size of the scanning area of each area block image to obtain a contour extraction result corresponding to the image to be processed; and determining a target area image in the image to be processed according to the contour extraction result. According to the method and the device, the image to be processed is partitioned to obtain the plurality of area block images, each area block image is scanned to obtain the outline extraction result, and the extracted target area image is determined according to the outline extraction result. Therefore, threshold calculation can be reduced, the accuracy of effective contour information extraction is improved, and the efficiency of extracting the target area image is finally improved.

Description

Image extraction method and related product
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image extraction method and a related product.
Background
In image processing, the effective information of an image needs to be extracted, which in turn requires extracting the contour of the effective image region. This technology is applied to license plate recognition, face recognition, dynamic tracking and the like, where it reduces time consumption and brings great convenience.
Meanwhile, in image recognition, obtaining the real outline of the effective area reduces the amount of calculation and improves detection accuracy. Most existing contour detection methods mainly rely on operators such as sobel, roberts, prewitt, laplacian and canny, which are suited to simple backgrounds. These methods have several defects: the applicable scenes are limited; effective edge information cannot be obtained for non-closed image contours; and when the background is complex, the existing methods can only obtain all the contours in the background and cannot accurately extract the real contour of the image to be processed. Therefore, how to extract the real contour from a complex background, improve the accuracy of effective contour information extraction, reduce complex threshold calculation during processing, and obtain the contour with an adaptive threshold is a problem to be solved urgently.
Disclosure of Invention
The embodiment of the application provides an image extraction method and a related product, which aim to obtain a plurality of area block images by partitioning an image to be processed, scan each area block image to obtain a contour extraction result, and determine an extracted target area image according to the contour extraction result. Therefore, threshold calculation can be reduced, the accuracy of effective contour information extraction is improved, and the efficiency of extracting the target area image is finally improved.
In a first aspect, an embodiment of the present application provides an image extraction method, which is applied to extracting an image of a region corresponding to an object with parallel edges, where the method includes:
acquiring an image to be processed, and partitioning the image to be processed to obtain a plurality of area block images;
determining a scanning area size of each of the plurality of area block images;
scanning each area block image according to the size of the scanning area of each area block image to obtain a contour extraction result corresponding to the image to be processed;
and determining a target area image in the image to be processed according to the contour extraction result.
In a second aspect, an embodiment of the present application provides an image extraction apparatus, including:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an image to be processed and partitioning the image to be processed to obtain a plurality of area block images;
a first determination unit configured to determine a scanning area size of each of the plurality of area block images;
the scanning unit is used for scanning each area block image according to the size of the scanning area of each area block image to obtain a contour extraction result corresponding to the image to be processed;
and the second determining unit is used for determining a target area image in the image to be processed according to the contour extraction result.
In a third aspect, embodiments of the present application provide an electronic device, including a processor and a memory, and one or more programs, stored in the memory and configured to be executed by the processor, the program including instructions for performing the steps of any of the methods of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program for electronic data exchange, wherein the computer program causes a computer to execute the instructions of the steps of the method in the first aspect.
In a fifth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that the image extraction method and apparatus provided in the embodiment of the present application are applied to extracting the region image corresponding to the object with parallel edges, where the method includes: acquiring an image to be processed, and partitioning the image to be processed to obtain a plurality of area block images; determining a scanning area size of each of a plurality of area block images; scanning each area block image according to the size of the scanning area of each area block image to obtain a contour extraction result corresponding to the image to be processed; and determining a target area image in the image to be processed according to the contour extraction result. In the process, a plurality of area block images are obtained by partitioning an image to be processed, each area block image is scanned to obtain a contour extraction result, and the extracted target area image is determined according to the contour extraction result. Therefore, threshold calculation can be reduced, the accuracy of effective contour information extraction is improved, and the efficiency of extracting the target area image is finally improved.
Drawings
Reference will now be made in brief to the accompanying drawings, to which embodiments of the present application relate.
Fig. 1 is a schematic flowchart of an image extraction method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an image coordinate system provided by an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a scan area size setting according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an edge profile tracking scanning process provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an edge contour stitching scanning process provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of an initial background image provided by an embodiment of the present application;
fig. 7 is a to-be-processed image after a background image is removed according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a perfect edge profile extraction process provided by an embodiment of the present application;
FIG. 9 is a schematic flowchart of another image extraction method provided in the embodiments of the present application;
FIG. 10 is a schematic flowchart of another image extraction method provided in the embodiments of the present application;
FIG. 11 is a schematic flowchart of another image extraction method provided in the embodiments of the present application;
fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 13 is a block diagram of functional units of an image extraction apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image extraction method provided in an embodiment of the present application, applied to extracting the area image corresponding to an object with parallel opposite sides. As shown in fig. 1, the image extraction method includes the following steps:
101. and acquiring an image to be processed, and partitioning the image to be processed to obtain a plurality of area block images.
Specifically, the embodiment of the application is used for extracting the region image corresponding to an object with parallel opposite sides; that is, the image to be processed contains such an object, whose parallel opposite sides form a rectangle. The object may be a book, a rectangular desktop, a computer surface, a mobile phone surface, or another rectangular plane.
Optionally, the blocking of the image to be processed to obtain a plurality of region block images includes: acquiring the size of the image to be processed, the size comprising a width W and a height H; dividing the image to be processed into n² region block images, each of width b_W = [W/n] and height b_H = [H/n], where [W/n] denotes the integer division of W by n and [H/n] denotes the integer division of H by n; the n² region block images constitute the plurality of region block images.
Specifically, after the image to be processed is divided into a plurality of region block images, edge contour recognition may be performed in units of each region block image. Assume the size of the image to be processed is W × H, where W is the image width and H the image height. Dividing the width and the height into n equal parts each yields n² small blocks, whose width b_W and height b_H are obtained by b_W = [W/n] and b_H = [H/n], where [W/n] denotes W integer-divided by n. If W is evenly divisible by n, every block has width b_W; otherwise the first n − 1 blocks have width b_W and the last has width W % n (the remainder of W divided by n). Likewise, [H/n] denotes H integer-divided by n: if H is evenly divisible, every block has height b_H; otherwise the first n − 1 blocks have height b_H and the last has height H % n (the remainder of H divided by n).
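The partitioning rule above can be sketched as follows; this follows the text's literal description, including the W % n remainder for the last block when W is not evenly divisible:

```python
def block_sizes(W, H, n):
    """Block widths and heights for splitting a W x H image into n x n blocks.

    Every block is [W/n] wide when W divides evenly; otherwise the first
    n-1 blocks are [W/n] wide and the last one is W % n, as described in
    the text (taken literally, that last block need not cover every
    remaining pixel).  Heights follow the same rule.
    """
    bW, bH = W // n, H // n  # [W/n] and [H/n]: integer division
    widths = [bW] * n if W % n == 0 else [bW] * (n - 1) + [W % n]
    heights = [bH] * n if H % n == 0 else [bH] * (n - 1) + [H % n]
    return widths, heights
```

For the 640 × 480, n = 10 example used later, this gives ten columns of width 64 and ten rows of height 48.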
The value of n is related to the complexity of the image; a value of n that is too large or too small affects the subsequent contour-extraction search. The complexity of the image can be obtained jointly from the gray-level histogram and the image contrast:
(1) Acquire the gray-level histogram f(k) of the image to be processed, where f(k) is the number of pixel points with gray level k, k ∈ [0, 255];
(2) Calculate the variance of f(k) according to the following formula:

IMAGE_S = (1/256) · Σ_{k=0}^{255} (f(k) − IMAGE_M)²  (1)

where IMAGE_M = (1/256) · Σ_{k=0}^{255} f(k) is the mean of the histogram pixel counts f(k), and IMAGE_S is the variance of the histogram pixel counts f(k).
(3) Calculate the image contrast:

CR = Σ_δ δ(i, j)² · P_δ(i, j)  (2)

where δ(i, j)² = |i − j|² is the square of the gray-level difference between adjacent pixels, P_δ(i, j) is the pixel probability distribution of adjacent pixels whose squared gray difference is δ, and CR is the image contrast. The larger the contrast, the more obvious the visual contour effect of the image; the smaller the contrast, the weaker the effect.
When the histogram variance is large and the contrast is high, the image is complex and n can take a larger value; when the histogram variance is small and the contrast is low, the complexity is relatively low and n should take a smaller value.
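A sketch of the two complexity measures above, assuming the 1/256 averaging in eq. (1) and horizontally adjacent pixel pairs for the δ statistic in eq. (2) (the text fixes neither choice):

```python
import numpy as np

def complexity_stats(img):
    """Histogram variance (eq. (1)) and contrast (eq. (2)) of a grayscale image.

    img: 2-D uint8 array.  Returns (IMAGE_S, CR).
    """
    # gray-level histogram f(k), k in [0, 255]
    f = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    image_m = f.mean()                     # IMAGE_M: mean of f(k)
    image_s = ((f - image_m) ** 2).mean()  # IMAGE_S: variance of f(k)

    # CR = sum over delta of delta^2 * P_delta, using horizontal neighbors
    d2 = (img[:, 1:].astype(np.int64) - img[:, :-1].astype(np.int64)) ** 2
    vals, counts = np.unique(d2, return_counts=True)
    p = counts / counts.sum()              # probability of each squared difference
    cr = float((vals * p).sum())
    return float(image_s), cr
```

A flat image gives CR = 0 (no visible contour), while a checkerboard gives a large CR, matching the relation between contrast and contour visibility described above.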
102. Determining a scanning area size of each of the plurality of area block images.
Specifically, for the plurality of area block images into which the image to be processed is divided, the pixel mean and pixel variance of each area block image differ, indicating that the uniformity of each block's pixel distribution differs. To increase the scanning speed, the scanning area size is therefore set differently for area block images with different pixel means and pixel variances.
Optionally, the determining the size of the scanning area of each of the plurality of area block images includes: calculating the pixel mean and the pixel variance of each of the plurality of region block images; determining the complexity of each area block image according to the pixel mean and the pixel variance; determining the scanning area size according to the complexity, wherein the complexity is in inverse proportion to the scanning area size.
After the image to be processed is divided into a plurality of area block images, an image coordinate system is established to determine the coordinate of each area block image.
The following description takes an image size of 640 × 480 and n = 10 as an example (this size is used throughout the following description): W = 640; H = 480; b_W = 64; b_H = 48.
Referring to fig. 2, fig. 2 is a schematic diagram of an image coordinate system according to an embodiment of the present application. As shown in fig. 2, an image coordinate system x0y is established, where a unit length on the x axis represents the width of an area block, 0–10 corresponding to 0–639 of the real image width, and a unit length on the y axis represents the height of an area block, 0–10 corresponding to 0–479 of the real image height. At the same time, a coordinate system u0v is established for each region block to represent the coordinate relationship of the 64 × 48 pixel points within the block. The block coordinates (x, y), the in-block coordinates (u, v) and the real coordinates (i, j) of the image to be processed satisfy:

i = b_W · [x] + u = 64[x] + u,  j = b_H · [y] + v = 48[y] + v  (3)

where [x] and [y] denote rounding x and y down to integers; u is an integer in the range 0–63, and v is an integer in the range 0–47.
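The coordinate relationship above can be sketched directly (b_W = 64 and b_H = 48 from the running example):

```python
def block_to_real(x, y, u, v, bW=64, bH=48):
    """Map block index ([x],[y]) plus in-block coords (u,v) to the real
    image coords (i,j) of the image to be processed."""
    return bW * int(x) + u, bH * int(y) + v

def real_to_block(i, j, bW=64, bH=48):
    """Inverse map: real coords (i,j) -> block index [x],[y] and (u,v)."""
    return i // bW, j // bH, i % bW, j % bH
```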
Taking the coordinate axes u0v in fig. 2 as an example, the coordinates of the corresponding region block image in x0y are [x] = 1 and [y] = 2. The pixel mean M([x], [y]) and variance S([x], [y]) within the block are calculated according to the following formula:

M([x], [y]) = (1/(64 · 48)) · Σ_{u=0}^{63} Σ_{v=0}^{47} f(u, v),
S([x], [y]) = (1/(64 · 48)) · Σ_{u=0}^{63} Σ_{v=0}^{47} (f(u, v) − M([x], [y]))²  (4)

where f(u, v) is the pixel value of region block [x][y] at coordinate (u, v) in the coordinate system u0v; M([x], [y]) is the pixel mean of the block and S([x], [y]) is its pixel variance. A small variance indicates that the block's pixel values are uniformly distributed without obvious pixel differences; a large variance indicates that the pixel value distribution is not uniform and the pixel differences are large. To increase the processing speed, the scan area size is set differently for different block means and variances.
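A direct transcription of the block mean and variance above:

```python
import numpy as np

def block_stats(block):
    """Pixel mean M([x],[y]) and variance S([x],[y]) of one region block;
    `block` is the bW x bH array of pixel values f(u, v)."""
    m = float(block.mean())
    s = float(((block.astype(np.float64) - m) ** 2).mean())
    return m, s
```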
Referring to fig. 3, fig. 3 is a schematic view illustrating scan area size setting according to an embodiment of the present disclosure. As shown in fig. 3, the scan area (the thick rectangular area in fig. 3) has a size of M × N, where M is a divisor of the block width b_W = 64 and N is a divisor of the block height b_H = 48; M and N may take the same value or different values.

According to the mean and variance of each region block image, M and N can be combined differently. The smaller the variance, the lower the complexity of the region block image, which may be a peripheral image with a single background, so M and N may take larger values, e.g. both 16; that is, a larger scanning region is selected for each scan to improve scanning efficiency. The larger the variance, the higher the complexity of the region block image, which may be the target region image to be extracted, so M and N may take smaller values, e.g. both 4; that is, a smaller scanning region is selected for each scan so that the scanning result is more accurate. For a region block image of intermediate complexity, which may lie at the contour boundary, M = N = 8 may be taken.
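The variance-to-window mapping above can be sketched as follows. The inverse relation (higher complexity, smaller window) and the values 16/8/4 come from the text; the thresholds `low` and `high` are illustrative assumptions, since the text gives none:

```python
def scan_area_size(variance, low=50.0, high=500.0):
    """Pick the M x N scan window from a block's pixel variance.

    Each returned value divides both bW = 64 and bH = 48 evenly, as the
    text requires; the cutoffs `low`/`high` are hypothetical.
    """
    if variance < low:     # uniform block, likely single background
        return 16, 16
    if variance < high:    # intermediate, likely contour boundary
        return 8, 8
    return 4, 4            # complex block, likely the target region
```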
103. And scanning each area block image according to the size of the scanning area of each area block image to obtain a contour extraction result corresponding to the image to be processed.
Specifically, after the size of the scanning area of each area block image is determined, each area block image may be scanned to obtain a contour extraction result, and finally, the extracted target area image is determined. In the scanning process, an edge detection method, such as sobel, roberts, prewitt, laplacian, canny, or other operators, may be first used to detect an edge contour in each image block, and then each region block image is scanned to obtain a contour extraction result in each region block image.
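As one concrete instance of the per-block edge detection step above, a minimal Sobel gradient-magnitude detector; the threshold of 128 is illustrative, and any of the listed operators could be substituted:

```python
import numpy as np

def sobel_edges(img, thresh=128.0):
    """Boolean edge map from Sobel gradient magnitude (borders left False)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    g = img.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    # valid-region cross-correlation with the 3x3 kernels
    for di in range(3):
        for dj in range(3):
            patch = g[di:di + g.shape[0] - 2, dj:dj + g.shape[1] - 2]
            gx[1:-1, 1:-1] += kx[di, dj] * patch
            gy[1:-1, 1:-1] += ky[di, dj] * patch
    return np.hypot(gx, gy) > thresh
```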
Optionally, the scanning the image of each region block according to the size of the scanning region to obtain the contour extraction result corresponding to the image to be processed includes: performing edge detection on each region block image in the plurality of region block images by adopting an edge detection operator, and extracting an edge profile in each region block image; scanning each area block image according to the scanning area size of the area block image in a preset direction, and determining whether each area block image contains an edge profile; if the first area block image is determined to contain a first edge profile, taking the first edge profile as a starting profile, determining the inclination direction of the first edge profile, and tracking whether a next area block image in the inclination direction contains a second edge profile connected with the first edge profile; if the next area block image contains a second edge contour, repeating the process of tracking the edge contour of the next area block image in the inclined direction to obtain a connecting edge contour; if the next area block image does not contain the second edge profile, scanning from the adjacent area block image of the first area block image according to a preset direction, acquiring a new edge profile which is not in the connection edge profile as an initial profile, and repeating the process of tracking the edge profile until the scanning of each area block image is completed; and taking the connecting edge contour as a contour extraction result corresponding to the image to be processed.
Specifically, referring to fig. 4, fig. 4 is a schematic diagram of an edge contour tracking scanning process provided in an embodiment of the present application. As shown in fig. 4, the image to be processed is first divided into 10 × 10 region block images, and a contour detection operator is used to extract the contour in each region block image. The region block images are then scanned in a preset direction according to the previously obtained scanning area size, where the preset direction may be from top to bottom and from left to right: the first region block image is selected and the blocks are scanned in the order [x, y] = [1,1] → [1,2] → … → [1,10] → [2,1] → ….
When [1,5] is scanned, it is determined that this area block image includes an edge contour, and point A is taken as the starting contour. The edge contour tilts toward the lower left, so the adjacent area block images toward which the edge contour of [1,5] extends are each taken as the next area block image; the process of scanning and tracking the edge contour is repeated, and the connecting edge contour, i.e. line 1, is obtained.
After the scanning of line 1 is completed, there is no next area block image in the tilt direction, so a second scanning round is performed from [1,5] onward in the preset direction. Because [1,6]–[1,10] contain only the one edge contour that already belongs to the connecting edge contour, none of them can serve as the starting contour of a new edge contour. When the scan reaches [2,3], a contour other than the connecting edge contour is found, and point B is taken as the new starting contour. It has two tilt directions, toward the lower left and toward the lower right, so [2,4] and [3,3] are each tracked as the next block image, and line 2 is obtained.
As can be seen, in the embodiment of the present application, each area block image is scanned in a preset direction; the first edge contour of the first scanned area block image is used as the starting contour, and the next area block image is scanned along the tilt direction of the edge contour, so that a continuous connecting edge contour is obtained. After one edge contour is finished, scanning resumes from the area block images adjacent to the first area block image, a new edge contour not belonging to the connecting edge contour is taken as the new starting contour, and a new connecting edge contour is obtained in the same way until all area blocks have been scanned. In this process, because tracking proceeds along the tilt direction of the edge contour, continuous connecting edge contours are obtained; and when a new starting contour is sought, the area block images adjacent to the previous starting contour are scanned so that other edge contours present in blocks already crossed by the connecting edge contours are not missed. This method, on one hand, obtains continuous connecting edge contours efficiently and, on the other hand, prevents edge contours from being missed, improving the efficiency and accuracy of edge contour scanning overall.
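The tracking scan can be sketched at block granularity as follows. As a simplifying assumption, the tilt-direction step is replaced by following any 8-connected neighboring block that also contains an edge; the method proper follows the contour's actual tilt direction:

```python
def track_contours(has_edge):
    """Group edge-containing blocks into connecting contours.

    has_edge: n x n grid of booleans, True where a block contains an
    edge contour.  Blocks are visited top-to-bottom, left-to-right; an
    unvisited edge block starts a contour, and tracking follows into
    8-connected neighboring edge blocks (depth-first).
    """
    n = len(has_edge)
    seen = [[False] * n for _ in range(n)]
    contours = []
    for x in range(n):
        for y in range(n):
            if not has_edge[x][y] or seen[x][y]:
                continue
            stack, comp = [(x, y)], []
            seen[x][y] = True
            while stack:
                cx, cy = stack.pop()
                comp.append((cx, cy))
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        nx, ny = cx + dx, cy + dy
                        if 0 <= nx < n and 0 <= ny < n \
                                and has_edge[nx][ny] and not seen[nx][ny]:
                            seen[nx][ny] = True
                            stack.append((nx, ny))
            contours.append(sorted(comp))
    return contours
```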
Optionally, the scanning the image of each region block according to the size of the scanning region to obtain the contour extraction result corresponding to the image to be processed includes: performing edge detection on each region block image in the plurality of region block images by adopting an edge detection operator, and extracting an edge profile in each region block image; scanning each area block image according to the size of the scanning area of each area block image, and determining whether each area block image contains an edge profile; if the target area block image in each area block image contains the edge contour, recording the area block number and contour coordinates determined by the target area block image according to the scanning sequence; splicing the obtained edge profiles corresponding to the at least two area block numbers according to profile coordinates to obtain spliced edge profiles; and taking the spliced edge contour as a contour extraction result corresponding to the image to be processed.
Specifically, referring to fig. 5, fig. 5 is a schematic diagram of an edge contour stitching scanning process provided in the embodiment of the present application, as shown in fig. 5, an image to be processed is first divided into 10 × 10 region block images, and then a contour detection operator is used to extract a contour in each region block image. And then determining a first area block image in the plurality of area block images according to a preset direction, wherein the preset direction can be from top to bottom and from left to right.
When [1,5] is scanned, an edge contour is found to be included therein, and whether the edge contour is a straight line is determined. The judgment process is as follows: obtain each pixel coordinate corresponding to the edge contour, i.e. the (u, v) coordinates described in fig. 2, and determine whether these pixel coordinates satisfy the linear equation v = a · u + b. If they do, the edge contour is determined to be a straight line; the region block image number of the straight line, i.e. the [x, y] number described in fig. 2, is recorded, and then the contour coordinates of the straight line, i.e. the (i, j) coordinates described in fig. 2 of its first end and second end, are recorded as [i1, j1 : i2, j2]. Because these are the real coordinates of the image to be processed, all edge contours in the whole image to be processed can be located and stitched according to these coordinates.
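The straightness test can be sketched as an endpoint fit of v = a·u + b; the one-pixel tolerance is an illustrative assumption, since the text gives none:

```python
def is_straight(points, tol=1.0):
    """True when contour pixels (u, v) satisfy v = a*u + b.

    a and b are fitted from the two endpoints; every other pixel must lie
    within `tol` pixels of the fitted line.  A vertical run (constant u)
    is treated as straight as well.
    """
    (u1, v1), (u2, v2) = points[0], points[-1]
    if u1 == u2:  # vertical line: u is constant
        return all(u == u1 for u, _ in points)
    a = (v2 - v1) / (u2 - u1)
    b = v1 - a * u1
    return all(abs(v - (a * u + b)) <= tol for u, v in points)
```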
If only one edge contour among all the region block images is determined to be a straight line, that straight line is directly used as the contour extraction result. Otherwise, the straight lines corresponding to the obtained region block image numbers (at least two) are stitched according to their contour coordinates. For example, straight line l1 has region block image number [3,2] and contour coordinates [168,72:192,70], and straight line l2 has region block image number [4,2] and contour coordinates [192,70:256,65]; stitching l1 and l2 then yields a stitched edge contour with contour coordinates [168,72:256,65]. Stitching all the straight lines in fig. 5 likewise yields line 1 and line 2.
As can be seen, in the embodiment of the present application, each region block image is scanned to determine whether each region block image includes an edge contour, if so, further determine whether the edge contour is a straight line, if so, record the region image number and contour coordinates of the straight line, and splice all the obtained straight lines according to the contour coordinates to obtain a spliced edge contour. In the process, the edge contour in each area block image is determined through one-time overall scanning, repeated scanning is not needed, whether the edge contour is a straight line or not is judged, the straight lines are spliced, and the obtained spliced edge contour can be determined to be a continuous straight line type edge contour. The method of the embodiment improves the accuracy and efficiency of contour extraction on the whole.
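The endpoint-based stitching can be sketched as follows, assuming segments arrive in scan order with exactly matching shared endpoints, as in the [168,72:192,70] + [192,70:256,65] example:

```python
def stitch(segments):
    """Stitch straight segments recorded as ((i1, j1), (i2, j2)).

    Whenever a segment's first end equals the previous stitched contour's
    second end, the contour is extended; otherwise a new contour starts.
    """
    out = []
    for seg in segments:
        if out and out[-1][1] == seg[0]:
            out[-1] = (out[-1][0], seg[1])  # extend the previous contour
        else:
            out.append(seg)
    return out
```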
Optionally, the method is applied to an electronic device, the electronic device includes a camera, and before the image to be processed is partitioned, the method further includes: obtaining an initial background image through a camera; extracting the characteristics of the initial background image and the image to be processed to obtain the background characteristics corresponding to the initial background image and the characteristic to be processed of the image to be processed; matching the background features with the features to be processed, and determining a plurality of target features which are successfully matched in the background features and the features to be processed; when the number of the target features is larger than a first preset threshold value, determining that the image to be processed is successfully matched with the initial background image; determining a plurality of matching point coordinates of a plurality of target features corresponding to the initial background image and the image to be processed; carrying out image registration and crack removal treatment on the initial background image and the image to be processed according to the coordinates of the matching points to obtain updated coordinates of the matching points; acquiring a matching point coordinate in an image to be processed, and normalizing the peripheral area of the matching point coordinate; and removing isolated points of the peripheral area to obtain the image to be processed after the background image is removed.
Specifically, the method can be applied to a specific electronic device comprising a camera, a processor and a controller, where the controller can control the camera to acquire images according to a received user instruction. Generally, the more uniform the background is, the easier it is to extract the target region image where the object to be recognized is located in the image to be processed; however, a single uniform background is difficult to guarantee in real scenes. From this angle, if a background image not including the object to be recognized can be acquired in real time when the image to be processed including the object to be recognized is acquired, and the background in the image to be processed is then removed according to that background image, this greatly helps to eliminate the influence of the background on extracting the target region image corresponding to the object to be recognized.
For example, in the to-be-processed image of fig. 4, the object to be identified is a book, and the image includes the intersection line of a desktop and the floor, which affects the extraction result when the contour of the book is extracted. An initial background image that does not include the book is therefore taken, as shown in fig. 6 (fig. 6 is a schematic diagram of an initial background image provided in an embodiment of the present application). Feature extraction is performed on fig. 6 and fig. 4 to obtain the background features of fig. 6 and the to-be-processed features of fig. 4, and the background features are then matched against the to-be-processed features to obtain a plurality of successfully matched target features; the target features may be, for example, intersection coordinates, edge coordinates, or bright-point coordinates. When the number of target features is not greater than the first preset threshold, the similarity between the initial background image and the image to be processed is low, the image to be processed cannot be subjected to background removal according to the initial background image, and the initial background image can be photographed again and updated so as to obtain a background image more similar to the image to be processed. When the number of target features is greater than the first preset threshold, the background of the image to be processed matches the initial background image well, and the background in the image to be processed can be eliminated according to the initial background image. The specific process is as follows: the matching point coordinates of the target features in the initial background image and the image to be processed are determined; for example, points R and L in fig. 6 have corresponding matches R' and L' in fig. 4, and the matching point coordinates of these four points can be determined. Image registration and crack removal processing are then performed on the initial background image and the image to be processed according to the matching point coordinates, and the repeated image content shared by the image to be processed and the initial background image is eliminated; the peripheral areas of the matching point coordinates are then normalized and isolated points are removed, so that the image to be processed with the background image removed is obtained, as shown in fig. 7. In this way, the influence of the background image on the contour extraction result is eliminated, and the efficiency and accuracy of contour extraction are improved.
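The match-count gate described above can be sketched in a few lines. This is a minimal illustration, not the patent's actual implementation: a real system would use ORB/SIFT-style descriptors and a robust matcher, whereas here a feature is simply modeled as an (x, y, descriptor) tuple, and `match_features`, `background_removable`, the descriptor-distance limit, and the threshold value are all hypothetical names and numbers.

```python
def match_features(bg_feats, img_feats, max_dist=2.0):
    """Pair each background feature with the closest to-be-processed
    feature whose descriptor distance is within max_dist."""
    matches = []
    for bx, by, bdesc in bg_feats:
        best = None
        for ix, iy, idesc in img_feats:
            d = abs(bdesc - idesc)
            if d <= max_dist and (best is None or d < best[0]):
                best = (d, (bx, by), (ix, iy))
        if best is not None:
            matches.append((best[1], best[2]))
    return matches


def background_removable(bg_feats, img_feats, first_threshold=3):
    """Background elimination proceeds only when the number of
    successfully matched target features exceeds the first preset
    threshold; otherwise the background image should be retaken."""
    return len(match_features(bg_feats, img_feats)) > first_threshold
```

If the gate returns false, the embodiment retakes the initial background image rather than attempting registration with a dissimilar background.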
Optionally, after obtaining the connecting edge contour or the splicing edge contour, and before obtaining the contour extraction result corresponding to the image to be processed, the method further includes: acquiring the view angle 2α of the camera and the inclination angle θ of the camera, wherein the view angle of the camera represents the maximum angle that the camera can image; and calculating a first angle according to the view angle and the inclination angle of the camera, wherein the first angle is given by the following first formula:
[first formula image BDA0002122613870000101 in the original publication: the first angle δ expressed in terms of 2α and θ]
acquiring an included angle A between a first continuous edge contour and the vertical direction and an included angle B between a second continuous edge contour and the vertical direction, wherein the first continuous edge contour and the second continuous edge contour are non-adjacent edge contours, and a continuous edge contour is either a splicing edge contour or a connecting edge contour; determining whether A and B satisfy the second formula: 2δ = A + B; and if so, taking the straight lines on which the first edge contour and the second edge contour lie as new continuous edge contours.
Specifically, referring to fig. 8, fig. 8 is a schematic diagram of a complete edge contour extraction process provided in an embodiment of the present application. As shown in fig. 8, some images to be processed contain many interfering objects besides the target object, which may make the extracted contour lines inaccurate, or leave only a small section of contour line extracted, so that the extraction result is incomplete. The method of the present application can complete such incompletely extracted edge contours. The specific process is as follows: first, the view angle 2α of the camera and the inclination angle θ of the camera are obtained, where the view angle of the camera is the maximum angle the camera can image and is an inherent property of the camera, while the inclination angle of the camera can be adjusted for a specific scene; the first angle δ is then calculated from 2α and θ according to the first formula; the included angle A between a first continuous edge contour and the vertical direction and the included angle B between a second continuous edge contour and the vertical direction are acquired, where an opposite-side-parallel object in the image to be processed may be placed in any orientation, the first and second edge contours are non-adjacent edge contours, that is, opposite-side edge contours, and each continuous edge contour may be a splicing edge contour or a connecting edge contour obtained in the above embodiment; it is then determined whether A and B satisfy the second formula; if they do, the straight lines on which the first edge contour and the second edge contour lie are taken as new continuous edge contours, and the portion occluded by other objects in the middle can be completed automatically. For example, the thick dotted line in fig. 8 is the completed portion, and the occluded portion below can be automatically extended to the intersection point with the next continuous edge contour.
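The opposite-edge test based on the second formula can be sketched directly. Since the first formula is only reproduced as an image in this text, the first angle δ is taken as an input here; the function name and the angular tolerance are assumptions.

```python
def opposite_edges_parallel(delta_deg, angle_a_deg, angle_b_deg, tol_deg=1.0):
    """Second formula of the embodiment: two non-adjacent continuous edge
    contours are treated as opposite, parallel edges of the same object
    when 2*delta = A + B holds (within a small angular tolerance)."""
    return abs(2.0 * delta_deg - (angle_a_deg + angle_b_deg)) <= tol_deg
```

When the test passes, the lines carrying the two contours are promoted to new continuous edge contours, which is what allows the occluded middle section to be filled in.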
104. And determining a target area image in the image to be processed according to the contour extraction result.
Contour extraction is performed on the image to be processed in order to determine the target area image of the object to be identified in the image to be processed; the target area image is then used for the next stage of image analysis.
Optionally, determining the target area image in the image to be processed according to the contour extraction result includes: if the continuous edge contour in the contour extraction result forms a closed-loop contour, determining whether the continuous edge contour is an outermost periphery continuous edge contour, wherein the outermost periphery continuous edge contour is a continuous edge contour of which the periphery does not comprise the closed-loop contour, and the continuous edge contour comprises a connecting edge contour or a splicing edge contour; if the continuous edge contour is determined to be the outermost continuous edge contour, determining a region surrounded by the closed-loop contour to be a target region image; if the continuous edge contour in the contour extraction result does not form a closed-loop contour, acquiring the number of image edge intersection points of the continuous edge contour and the image to be processed; if the number of the intersection points is less than 2, determining that the image to be processed is a target area image; if the number of the intersection points is equal to 2, calculating pixel mean values of two sides of the continuous edge contour intersected with the image edge, and determining one side of the two sides of the continuous edge contour, which has a larger pixel mean value, as a target side; determining a region formed by the continuous edge contour, the image edge and the target side as a target region image; and if the number of the intersection points is more than 2, determining a region formed by the continuous edge contour and the image edge as a target region image.
Specifically, if the continuous edge contour in the contour extraction result forms a closed-loop contour, for example, the closed-loop contour formed by route 2 in fig. 4 or fig. 5, it is determined whether the closed-loop contour is the outermost contour, that is, whether the closed-loop contour is surrounded by other closed-loop contours, and if the closed-loop contour is the outermost contour, it may be determined that the region surrounded by the closed-loop contour is the target region image.
If the continuous edge contour in the contour extraction result does not form a closed-loop contour, the continuous edge contour will intersect the image edge of the image to be processed. If the number of intersection points is less than 2, only part of the object to be recognized was captured in the image to be processed, and no corner of the object was captured at all, so the whole image to be processed can be used directly as the target area image. If the number of intersection points is equal to 2, it must be determined which of the partial image areas bounded by the continuous edge contour contains the object to be recognized. Normally, the pixel complexity of the object to be recognized is higher than that of the background image, so the pixel means on the two sides of the continuous edge contour intersecting the image edge are calculated, the side with the larger pixel mean is determined as the target side, and the area enclosed by the continuous edge contour, the image edge, and the target side is taken as the target area image. If the number of intersection points is greater than 2, for example 3 or 4, the region enclosed by the continuous edge contour and the image edge can be used directly as the target area image.
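The decision logic for a non-closed contour can be summarized in one helper. The function name, the string labels, and the idea of passing in precomputed per-side pixel means are all illustrative assumptions; only the branching on the intersection count and the larger-mean rule come from the text above.

```python
def classify_open_contour_region(n_intersections, mean_side1=None, mean_side2=None):
    """Decide the target region when the continuous edge contour does not
    close: branch on how many times the contour meets the image border.
    With exactly 2 intersections, the side with the larger pixel mean
    (the more pixel-complex side) is taken as the object side."""
    if n_intersections < 2:
        # no corner of the object was captured: keep the whole image
        return "whole image"
    if n_intersections == 2:
        return "side 1" if mean_side1 >= mean_side2 else "side 2"
    # 3 or 4 intersections: the contour plus the image edge bound the region
    return "contour-and-edge region"
```

In a real pipeline the two means would be computed over the pixels on either side of the contour's intersection with the image border.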
It can be seen that, in the application embodiment, it is first determined whether the continuous edge contour in the contour extraction result forms a closed-loop contour, and if the closed-loop contour is formed and is the outermost contour, it is determined that the region surrounded by the closed-loop contour is the target region image; and if the closed-loop contour is not formed, determining the target area image according to the intersection points of the continuous edge contour and the image edge. In the process, the target area image is flexibly determined through the contour extraction result, the accuracy of obtaining the target area image can be improved, the data processing amount of the non-target area image is reduced, and the image processing efficiency is improved.
The image extraction method provided by the embodiment of the application is applied to the extraction of the region image corresponding to the object with parallel opposite sides, and the method comprises the following steps: acquiring an image to be processed, and partitioning the image to be processed to obtain a plurality of area block images; determining a scanning area size of each of a plurality of area block images; scanning each area block image according to the size of the scanning area of each area block image to obtain a contour extraction result corresponding to the image to be processed; and determining a target area image in the image to be processed according to the contour extraction result. In the process, a plurality of area block images are obtained by partitioning an image to be processed, each area block image is scanned to obtain a contour extraction result, and the extracted target area image is determined according to the contour extraction result. Therefore, threshold calculation can be reduced, the accuracy of effective contour information extraction is improved, and the efficiency of extracting the target area image is finally improved.
Referring to fig. 9, fig. 9 is a schematic flowchart of another image extraction method according to an embodiment of the present application, and as shown in fig. 9, the image extraction method includes the following steps:
201. obtaining an initial background image through the camera;
202. extracting features of the initial background image and the image to be processed to obtain a background feature corresponding to the initial background image and a feature to be processed of the image to be processed;
203. matching the background features with the features to be processed, and determining a plurality of target features which are successfully matched in the background features and the features to be processed;
204. when the number of the target features is larger than a first preset threshold value, determining that the image to be processed is successfully matched with the initial background image;
205. determining a plurality of matching point coordinates of the plurality of target features corresponding to the initial background image and the image to be processed;
206. carrying out image registration and crack removal processing on the initial background image and the image to be processed according to the plurality of matching point coordinates to obtain updated matching point coordinates;
207. acquiring a matching point coordinate in the image to be processed, and normalizing the peripheral area of the matching point coordinate;
208. removing isolated points of the peripheral area to obtain an image to be processed after the background image is removed;
209. acquiring the size of the image to be processed, wherein the size comprises a width W and a height H;
210. dividing the image to be processed into n² area block images according to the size of the image to be processed, wherein each area block has a width bW = [W/n] and a height bH = [H/n], [W/n] denotes W divided by n and rounded, [H/n] denotes H divided by n and rounded, and the n² area block images constitute the plurality of area block images;
211. calculating the pixel mean and the pixel variance of each of the plurality of region block images;
212. determining the complexity of each area block image according to the pixel mean and the pixel variance, and determining the size of the scanning area according to the complexity, wherein the complexity is in inverse proportion to the size of the scanning area;
213. scanning each area block image according to the size of the scanning area of each area block image to obtain a contour extraction result corresponding to the image to be processed;
214. and determining a target area image in the image to be processed according to the contour extraction result.
The detailed descriptions of steps 201 to 214 may refer to the corresponding descriptions of the image extraction methods described in steps 101 to 104, and are not repeated herein.
In the embodiment of the application, the initial background image is obtained through the camera, and background elimination is then performed on the image to be processed according to the similarity between the initial background image and the image to be processed, so that the influence of the background image on the contour extraction result can be eliminated, and the efficiency and accuracy of contour extraction are improved. When the image to be processed is partitioned, it is partitioned according to its actual size, ensuring that the obtained area block images are uniform in size. For each area block image, the corresponding pixel mean and pixel variance are calculated, the complexity of each area block image is determined, and a different scanning area is determined for each area block according to its complexity, which improves the scanning efficiency for low-complexity area block images and increases the scanning accuracy for high-complexity area block images. The embodiment of the application improves the efficiency and accuracy of the contour extraction process as a whole.
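Steps 209 to 212 above can be sketched as follows. The blocking follows the stated bW = [W/n], bH = [H/n] integer division; however, the exact mapping from pixel mean and variance to "complexity" is not specified in the text, so the scaling used here (the base size, the factor k, and the +1 guards) is a hypothetical choice that only preserves the stated inverse relation between complexity and scan-area size.

```python
def block_size(width, height, n):
    """Step 210: split a W x H image into n*n area blocks of size
    bW x bH, with bW = [W/n] and bH = [H/n] (integer division)."""
    return width // n, height // n


def scan_area_size(pixel_mean, pixel_variance, base=64, k=4.0):
    """Steps 211-212: complexity grows with the pixel variance, and the
    scan area shrinks as complexity grows (inverse proportion). The
    exact scaling is a hypothetical choice, clamped to >= 1 pixel."""
    complexity = 1.0 + k * pixel_variance / (pixel_mean + 1.0)
    return max(1, int(base / complexity))
```

A flat, low-variance block thus gets a large scan window (fast coarse scanning), while a busy block gets a small window (fine scanning).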
Referring to fig. 10, fig. 10 is a schematic flow chart of another image extraction method according to an embodiment of the present application, and as shown in fig. 10, the image extraction method includes the following steps:
301. acquiring an image to be processed, and partitioning the image to be processed to obtain a plurality of area block images;
302. determining a scanning area size of each of the plurality of area block images;
303. performing edge detection on each region block image in the plurality of region block images by adopting an edge detection operator, and extracting an edge profile in each region block image;
304. scanning each area block image according to the scanning area size of the area block image in a preset direction, and determining whether each area block image contains an edge profile;
305. if the first area block image is determined to contain the first edge contour, determining the inclination direction of the first edge contour, and tracking whether the next area block image in the inclination direction contains a second edge contour connected with the first edge contour;
306. if the next region block image contains a second edge profile, taking the second edge profile as a new first edge profile, and repeating the step 305 to obtain a connecting edge profile;
307. if the next area block image does not contain the second edge profile, scanning from the adjacent area block image of the first area block image according to a preset direction to obtain a new first edge profile which is not in the connection edge profile, and repeating the step 305 and the step 306;
308. finishing the scanning of each area block image, and taking the connecting edge contour as a contour extraction result corresponding to the image to be processed;
309. and determining a target area image in the image to be processed according to the contour extraction result.
The detailed descriptions of steps 301 to 309 may refer to the corresponding descriptions of the image extraction methods described in steps 101 to 104, and are not repeated herein.
In the embodiment of the application, each area block image is scanned to determine whether it contains an edge contour; if a first area block image contains a first edge contour, the inclination direction of that contour is determined, and the next area block image in the inclination direction is tracked for a connected second edge contour, the tracking being repeated until a connecting edge contour is obtained. In this process, edge contours are linked by following their inclination direction from block to block without repeated global scanning, so the obtained connecting edge contour can be determined to be a continuous edge contour. The method of the embodiment improves the accuracy and efficiency of contour extraction as a whole.
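The block-to-block tracking of steps 304 to 307 can be sketched abstractly. The `blocks` mapping (block index to its edge segment, or None when the block contains no connected segment) and the `next_block` callback standing in for the inclination-direction neighbour lookup are hypothetical interfaces, not the patent's data structures.

```python
def track_connecting_contour(blocks, start, next_block):
    """Starting from a block that contains an edge segment, follow the
    segment's inclination direction block by block, concatenating
    connected segments into one connecting edge contour. Tracking stops
    at a block that has no connected segment."""
    contour = []
    idx = start
    while idx in blocks and blocks[idx] is not None:
        contour.append(blocks[idx])
        idx = next_block(idx, blocks[idx])
    return contour
```

Step 307's fallback (resume scanning from a neighbouring block for a segment not yet in any connecting contour) would simply call this function again with a new `start`.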
Referring to fig. 11, fig. 11 is a schematic flowchart of another image extraction method according to an embodiment of the present application, and as shown in fig. 11, the image extraction method includes the following steps:
401. acquiring an image to be processed, and partitioning the image to be processed to obtain a plurality of area block images;
402. determining a scanning area size of each of the plurality of area block images;
403. performing edge detection on each region block image in the plurality of region block images by adopting an edge detection operator, and extracting an edge profile in each region block image;
404. scanning each area block image according to the size of the scanning area of each area block image, and determining whether each area block image contains an edge profile;
405. if the target area block image in each area block image contains the edge contour, determining whether the edge contour is a straight line;
406. if so, recording the area block image number of the straight line and its contour coordinates in the image coordinate system;
407. splicing straight lines corresponding to the acquired image numbers of the at least two region blocks according to the contour coordinates to acquire a spliced edge contour;
408. and taking the spliced edge contour as a contour extraction result corresponding to the image to be processed.
The detailed description of steps 401 to 408 may refer to the corresponding description of the image extraction method described in steps 101 to 104, and is not repeated herein.
In the embodiment of the application, whether each region block image contains an edge contour is determined by scanning each region block image, if yes, whether the edge contour is a straight line is further determined, if yes, the region image number and contour coordinates of the straight line are recorded, and all the obtained straight lines are spliced according to the contour coordinates to obtain a spliced edge contour. In the process, the edge contour in each area block image is determined through one-time overall scanning, repeated scanning is not needed, whether the edge contour is a straight line or not is judged, the straight lines are spliced, and the obtained spliced edge contour can be determined to be a continuous straight line type edge contour. The method of the embodiment improves the accuracy and efficiency of contour extraction on the whole.
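The straight-segment stitching of steps 405 to 407 can be sketched as follows, assuming each recorded segment carries its area block number and endpoint coordinates. The tuple layout and the exact-endpoint-match criterion are illustrative simplifications; a real implementation would tolerate small coordinate gaps between segments from adjacent blocks.

```python
def stitch_segments(segments):
    """Sort recorded straight segments by their start coordinates and
    merge segments whose endpoints touch into one spliced edge contour.
    Each segment is (block_no, (x1, y1), (x2, y2)); returns a list of
    point chains, one chain per spliced edge contour."""
    ordered = sorted(segments, key=lambda s: s[1])   # order by start point
    chains = [[ordered[0][1], ordered[0][2]]]
    for _, start, end in ordered[1:]:
        if start == chains[-1][-1]:                  # endpoints touch: extend
            chains[-1].append(end)
        else:                                        # gap: start a new contour
            chains.append([start, end])
    return chains
```

Each resulting chain is one continuous straight-line edge contour assembled from per-block segments, matching the "spliced edge contour" of step 407.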
In accordance with the above, please refer to fig. 12, fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 12, the electronic device includes a processor 501, a memory 502, a communication interface 503, and one or more programs 505, where the one or more programs are stored in the memory 502 and configured to be executed by the processor, and the programs include instructions for:
acquiring an image to be processed, and partitioning the image to be processed to obtain a plurality of area block images;
determining a scanning area size of each of the plurality of area block images;
scanning each area block image according to the size of the scanning area of each area block image to obtain a contour extraction result corresponding to the image to be processed;
and determining a target area image in the image to be processed according to the contour extraction result.
It can be seen that, in the embodiment of the application, the electronic device acquires an image to be processed, and blocks the image to be processed to obtain a plurality of region block images; determining a scanning area size of each of a plurality of area block images; scanning each area block image according to the size of the scanning area of each area block image to obtain a contour extraction result corresponding to the image to be processed; and determining a target area image in the image to be processed according to the contour extraction result. In the process, a plurality of area block images are obtained by partitioning an image to be processed, each area block image is scanned to obtain a contour extraction result, and the extracted target area image is determined according to the contour extraction result. Therefore, threshold calculation can be reduced, the accuracy of effective contour information extraction is improved, and the efficiency of extracting the target area image is finally improved.
In one possible example, in the aspect of blocking the image to be processed to obtain a plurality of region block images, the instructions in the program are specifically configured to perform the following operations:
acquiring the size of the image to be processed, wherein the size comprises a width W and a height H;
dividing the image to be processed into n² area block images, wherein each area block has a width bW = [W/n] and a height bH = [H/n], [W/n] denotes W divided by n and rounded, and [H/n] denotes H divided by n and rounded;
the n² area block images constitute the plurality of area block images.
In one possible example, in the determining the scan area size of each of the plurality of area block images, the instructions in the program are specifically configured to:
calculating the pixel mean and the pixel variance of each of the plurality of region block images;
determining the complexity of each area block image according to the pixel mean and the pixel variance;
determining the scanning area size according to the complexity, wherein the complexity is in inverse proportion to the scanning area size.
In a possible example, in terms of obtaining the contour extraction result corresponding to the image to be processed by scanning each area block image according to the size of the scanning area, the instructions in the program are specifically configured to perform the following operations:
performing edge detection on each region block image in the plurality of region block images by adopting an edge detection operator, and extracting an edge profile in each region block image;
scanning each area block image according to the scanning area size of the area block image in a preset direction, and determining whether each area block image contains an edge profile;
if the first area block image is determined to contain a first edge profile, taking the first edge profile as a starting profile, determining the inclination direction of the first edge profile, and tracking whether a next area block image in the inclination direction contains a second edge profile connected with the first edge profile;
if the next area block image contains a second edge contour, repeating the process of tracking the edge contour of the next area block image in the inclined direction to obtain a first connection edge contour;
if the next area block image does not contain the second edge contour, scanning from the adjacent area block image of the first area block image according to a preset direction, acquiring a new edge contour which is not in the connection edge contour as an initial contour, and repeating the process of tracking the edge contour to acquire a second connection edge contour;
and finishing the scanning of each area block image, and taking the obtained at least one connecting edge contour as a contour extraction result corresponding to the image to be processed.
In a possible example, in the step of scanning each area block image according to the size of the scanning area to obtain a contour extraction result corresponding to the image to be processed, the instructions in the program are specifically configured to perform the following operations:
performing edge detection on each region block image in the plurality of region block images by adopting an edge detection operator, and extracting an edge profile in each region block image;
scanning each area block image according to the size of the scanning area of each area block image, and determining whether each area block image contains an edge profile;
if the target area block image in each area block image contains the edge contour, determining whether the edge contour is a straight line;
if so, recording the area block image number of the straight line and its contour coordinates in the image coordinate system;
splicing straight lines corresponding to the acquired image numbers of the at least two region blocks according to the contour coordinates to acquire a spliced edge contour;
and taking the spliced edge contour as a contour extraction result corresponding to the image to be processed.
In one possible example, in the aspect of determining the target area image in the image to be processed according to the contour extraction result, the instructions in the program are specifically configured to perform the following operations:
if a continuous edge contour in the contour extraction result forms a closed-loop contour, determining whether the edge contour is an outermost periphery edge contour, wherein the outermost periphery edge contour is an edge contour whose periphery does not include the closed-loop contour, and the continuous edge contour includes the connection edge contour or the splicing edge contour;
if the continuous edge contour is determined to be the outermost periphery edge contour, determining the area surrounded by the closed loop contour to be a target area image;
if the continuous edge contour in the contour extraction result does not form a closed-loop contour, acquiring the number of image edge intersection points of the continuous edge contour and the image to be processed;
if the number of the intersection points is less than 2, determining that the image to be processed is a target area image;
if the number of the intersection points is equal to 2, calculating pixel mean values of two sides of the continuous edge contour intersected with the image edge, and determining one side of the two sides of the continuous edge contour, which has a larger pixel mean value, as a target side;
determining a region composed of the continuous edge contour, the image edge and the target side as the target region image;
and if the number of the intersection points is more than 2, determining that the area formed by the continuous edge outline and the image edge is the target area image.
In one possible example, the electronic device further includes a camera 504, and before the image to be processed is segmented, the program further includes instructions for performing the following steps:
obtaining an initial background image through the camera;
extracting features of the initial background image and the image to be processed to obtain a background feature corresponding to the initial background image and a feature to be processed of the image to be processed;
matching the background features with the features to be processed, and determining a plurality of target features which are successfully matched in the background features and the features to be processed;
when the number of the target features is larger than a first preset threshold value, determining that the image to be processed is successfully matched with the initial background image;
determining a plurality of matching point coordinates of the plurality of target features corresponding to the initial background image and the image to be processed;
carrying out image registration and crack removal processing on the initial background image and the image to be processed according to the plurality of matching point coordinates to obtain updated matching point coordinates;
acquiring a matching point coordinate in the image to be processed, and normalizing the peripheral area of the matching point coordinate;
and removing the isolated points of the peripheral area to obtain the image to be processed after the background image is removed.
In one possible example, after obtaining the connecting edge contour or the splicing edge contour, before obtaining a contour extraction result corresponding to the image to be processed, the program further includes instructions for performing the following steps:
acquiring the view angle 2α of the camera and the inclination angle θ of the camera, wherein the view angle of the camera represents the maximum angle that the camera can image;
calculating a first angle according to the view angle of the camera and the inclination angle of the camera, wherein the first angle is given by the following first formula:
[first formula image BDA0002122613870000191 in the original publication: the first angle δ expressed in terms of 2α and θ]
acquiring an included angle A between a first continuous edge contour and the vertical direction and an included angle B between a second continuous edge contour and the vertical direction, wherein the first continuous edge contour and the second continuous edge contour are non-adjacent edge contours, and a continuous edge contour is either a splicing edge contour or a connecting edge contour;
determining whether A and B satisfy a second formula: 2δ = A + B;
and if so, taking the straight lines on which the first edge contour and the second edge contour lie as a new continuous edge contour.
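The second-formula check may be sketched as follows; the function name, the use of degrees, and the tolerance are illustrative assumptions, and the first angle δ is taken as an input since the first formula appears only as an image in the original publication.

```python
import math

def can_merge(delta_deg, angle_a_deg, angle_b_deg, tol=1e-6):
    """Check the second formula from the text: two non-adjacent continuous
    edge contours, at angles A and B to the vertical, are treated as the
    projections of one pair of parallel edges when 2*delta = A + B."""
    return math.isclose(2.0 * delta_deg, angle_a_deg + angle_b_deg,
                        abs_tol=tol)
```

For example, with δ = 30°, contours at 20° and 40° to the vertical satisfy 2δ = A + B and would be merged into a new continuous edge contour.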
The above description has introduced the solution of the embodiments of the present application mainly from the perspective of the method-side implementation. It is understood that, in order to realize the above functions, the electronic device comprises corresponding hardware structures and/or software elements for performing the respective functions. Those skilled in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments provided herein may be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the electronic device may be divided into functional units according to the above method example; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division into units in the embodiments of the present application is schematic and is only a division by logical function; other division manners are possible in actual implementation.
Fig. 13 is a block diagram of functional unit components of the image extraction device 600 according to the embodiment of the present application. The image extraction apparatus 600 includes:
an obtaining unit 601, configured to obtain an image to be processed, and block the image to be processed to obtain a plurality of region block images;
a first determination unit 602 configured to determine a scanning area size of each of the plurality of area block images;
a scanning unit 603, configured to scan each region block image according to a scanning region size of each region block image, and obtain a contour extraction result corresponding to the image to be processed;
a second determining unit 604, configured to determine a target area image in the image to be processed according to the contour extraction result.
It can be seen that, in the embodiment of the application, the image extraction device acquires an image to be processed, and blocks the image to be processed to obtain a plurality of region block images; determining a scanning area size of each of a plurality of area block images; scanning each area block image according to the size of the scanning area of each area block image to obtain a contour extraction result corresponding to the image to be processed; and determining a target area image in the image to be processed according to the contour extraction result. In the process, a plurality of area block images are obtained by partitioning an image to be processed, each area block image is scanned to obtain a contour extraction result, and the extracted target area image is determined according to the contour extraction result. Therefore, threshold calculation can be reduced, the accuracy of effective contour information extraction is improved, and the efficiency of extracting the target area image is finally improved.
In one possible example, in terms of the obtaining a plurality of region block images by partitioning the image to be processed, the obtaining unit 601 is specifically configured to:
acquiring the size of the image to be processed, wherein the size comprises a width W and a height H;
dividing the image to be processed into n² region block images, each region block image having a width bW = [W/n] and a height bH = [H/n], where [W/n] denotes W divided by n and rounded, and [H/n] denotes H divided by n and rounded;
the n² region block images constitute the plurality of region block images.
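A minimal Python sketch of this division follows; the rounding convention for [W/n] and [H/n] (here, rounding to the nearest integer) is an assumption, since the text only says the quotient is rounded.

```python
def block_image(width, height, n):
    """Divide a width x height image into n*n region blocks.

    Returns the block width bW = [W/n], the block height bH = [H/n]
    (rounded to the nearest integer, an assumed convention), and the
    (x, y, w, h) rectangle of each of the n*n region block images.
    """
    bw, bh = round(width / n), round(height / n)
    blocks = [(col * bw, row * bh, bw, bh)
              for row in range(n) for col in range(n)]
    return bw, bh, blocks
```

For example, a 640x480 image with n = 4 yields sixteen region blocks of 160x120 pixels each.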
In one possible example, the first determining unit 602 is specifically configured to:
calculating the pixel mean and the pixel variance of each of the plurality of region block images;
determining the complexity of each area block image according to the pixel mean and the pixel variance;
determining the scanning area size according to the complexity, wherein the complexity is in inverse proportion to the scanning area size.
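The complexity measure itself is not specified in the text; the sketch below assumes variance normalized by mean and preserves only the stated inverse relation between complexity and scan-area size. The base window size and minimum size are illustrative parameters.

```python
import numpy as np

def scan_area_size(block, base=32, min_size=4):
    """Derive a scan-window size from a block's pixel mean and variance.

    Complexity grows with variance (normalized by mean, an assumed
    measure), and the scan-area size shrinks as complexity grows, per
    the inverse proportion stated in the text."""
    mean = float(block.mean())
    var = float(block.var())
    complexity = var / (mean + 1.0)            # assumed complexity measure
    return max(min_size, int(base / (1.0 + complexity)))
```

A flat, low-complexity block thus receives the full base window, while a high-variance block is scanned with a much smaller window.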
In one possible example, the scanning unit 603 is specifically configured to:
performing edge detection on each region block image in the plurality of region block images by adopting an edge detection operator, and extracting an edge profile in each region block image;
determining a first area block image in the plurality of area block images according to a preset direction, scanning the first area block image according to the size of a scanning area of the first area block image, and determining whether the first area block image contains an edge contour;
if the first area block image is determined to contain a first edge profile, determining the inclination direction of the first edge profile, and tracking whether the next area block image in the inclination direction contains a second edge profile;
if the next area block image contains a second edge contour, repeating the process of tracking the edge contour of the next area block image in the inclined direction to obtain a connecting edge contour;
if the next area block image does not contain the second edge profile, scanning from the adjacent area block image of the first area block image according to a preset direction, acquiring a new edge profile which is not in the connection edge profile as a new first edge profile, and repeating the step of tracking the inclination direction of the edge profile until the connection edge profile is acquired;
and finishing the scanning of each area block image, and taking the obtained connecting edge contour as a contour extraction result corresponding to the image to be processed.
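The block-to-block tracking loop may be sketched on a boolean grid that marks which region blocks contain an edge contour. Comparing actual contour endpoints between neighboring blocks is simplified away here, which is an assumption of this sketch.

```python
def track_connected_contour(has_edge, start, direction):
    """Follow edge-bearing blocks from `start` along `direction` (dr, dc),
    the tilt direction of the first edge contour, collecting the block
    coordinates that form one connected edge contour.  Tracking stops when
    the next block lies outside the grid or contains no edge."""
    rows, cols = len(has_edge), len(has_edge[0])
    r, c = start
    contour = []
    while 0 <= r < rows and 0 <= c < cols and has_edge[r][c]:
        contour.append((r, c))
        r, c = r + direction[0], c + direction[1]
    return contour
```

When tracking stops before the grid boundary, the text's fallback applies: scanning resumes from an adjacent block with a new starting edge contour.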
In one possible example, the scanning unit 603 is specifically configured to:
performing edge detection on each region block image in the plurality of region block images by adopting an edge detection operator, and extracting an edge profile in each region block image;
scanning each area block image according to the size of the scanning area of each area block image, and determining whether each area block image contains an edge profile;
if a target area block image in each area block image contains an edge contour, recording, in scanning order, the area block number and the contour coordinates determined from the target area block image;
splicing the edge contours corresponding to the acquired at least two area block numbers according to the contour coordinates to obtain a spliced edge contour;
and taking the spliced edge contour as a contour extraction result corresponding to the image to be processed.
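A sketch of the splicing step: segments recorded per block as (block number, start point, end point) are ordered by their contour coordinates and chained end-to-start into one spliced edge contour. The tuple layout and the sort key (by y, then x) are assumptions of this sketch.

```python
def stitch_segments(segments):
    """Splice straight-line segments, each recorded as
    (block_number, (x1, y1), (x2, y2)), into one ordered point list.
    Consecutive segments sharing an endpoint contribute that point once."""
    ordered = sorted(segments, key=lambda s: (s[1][1], s[1][0]))
    contour = []
    for _, p1, p2 in ordered:
        if not contour or contour[-1] != p1:
            contour.append(p1)
        contour.append(p2)
    return contour
```

For instance, segments recorded out of scanning order in blocks 1 and 2 are reordered by coordinate before being joined.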
In one possible example, the second determining unit 604 is specifically configured to:
if the edge contour in the contour extraction result forms a closed-loop contour, determining whether the edge contour is an outermost periphery edge contour, wherein the outermost periphery edge contour is an edge contour of which the periphery does not comprise the closed-loop contour;
if the edge contour is determined to be the outermost periphery edge contour, determining the area surrounded by the closed loop contour to be a target area image;
if the edge contour in the contour extraction result does not form a closed-loop contour, acquiring the number of image edge intersection points of the edge contour and the image to be processed;
if the number of the intersection points is less than 2, determining that the image to be processed is a target area image;
if the number of the intersection points is equal to 2, calculating the pixel mean values of the two sides of the edge contour intersecting the image edge, and determining the side with the larger pixel mean value as a target side;
determining a region composed of the edge contour, the image edge and the target side as the target region image;
and if the number of the intersection points is more than 2, determining that the area formed by the edge outline and the image edge is the target area image.
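The four branches above form a small decision table, sketched below. It returns a label only; extracting the actual pixels, and the pixel-mean comparison for the two-intersection case, are assumed to be handled by the caller.

```python
def classify_target_region(closed_loop, outermost, n_intersections):
    """Map the contour-extraction outcome to the target-region rule from
    the text, keyed on whether the edge contour closes on itself and on
    how many times it intersects the image edge."""
    if closed_loop:
        # Only the outermost closed-loop contour encloses the target.
        return "enclosed region" if outermost else "not a target"
    if n_intersections < 2:
        return "whole image"
    if n_intersections == 2:
        return "brighter side of the contour"   # side with larger pixel mean
    return "region formed by contour and image edge"
```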
In one possible example, the method is applied to an electronic device, the electronic device includes a camera, and before the image to be processed is blocked, the obtaining unit 601 is further configured to:
obtaining an initial background image through the camera;
extracting features of the initial background image and the image to be processed to obtain a background feature corresponding to the initial background image and a feature to be processed of the image to be processed;
matching the background features with the features to be processed, and determining a plurality of target features which are successfully matched in the background features and the features to be processed;
when the number of the target features is larger than a first preset threshold value, determining that the image to be processed is successfully matched with the initial background image;
determining a plurality of matching point coordinates of the plurality of target features corresponding to the initial background image and the image to be processed;
performing image registration and crack-removal processing on the initial background image and the image to be processed according to the plurality of matching point coordinates to obtain updated matching point coordinates;
acquiring a matching point coordinate in the image to be processed, and normalizing the peripheral area of the matching point coordinate;
and removing the isolated points of the peripheral area to obtain the image to be processed after the background image is removed.
In one possible example, the scanning unit 603 is further specifically configured to:
acquiring a viewing angle 2α of the camera and a tilt angle θ of the camera, wherein the viewing angle of the camera represents the maximum angle over which the camera can image;
calculating a first angle according to the viewing angle of the camera and the tilt angle of the camera, wherein the first angle δ is given by a first formula, which is rendered as an image in the original publication and expresses δ in terms of α and θ;
acquiring an included angle A between a first continuous edge contour and the vertical direction and an included angle B between a second continuous edge contour and the vertical direction among the continuous edge contours, wherein the first continuous edge contour and the second continuous edge contour are not adjacent edge contours, and the continuous edge contours comprise spliced edge contours or connected edge contours;
determining whether A and B satisfy a second formula: 2δ = A + B;
and if so, taking the straight lines on which the first edge contour and the second edge contour lie as a new continuous edge contour.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program enables a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes a mobile terminal.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising a mobile terminal.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and elements referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the above division into units is only a division by logical function, and other divisions may be used in practice; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash disk, ROM, RAM, magnetic or optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (9)

1. An image extraction method is applied to region image extraction corresponding to an object with parallel edges, and the method comprises the following steps:
acquiring an image to be processed, and partitioning the image to be processed to obtain a plurality of area block images;
determining a scanning area size of each of the plurality of area block images;
scanning each area block image according to the size of the scanning area of each area block image to obtain a contour extraction result corresponding to the image to be processed;
determining a target area image in the image to be processed according to the contour extraction result, specifically comprising:
if the continuous edge contour in the contour extraction result forms a closed-loop contour, determining whether the edge contour is an outermost periphery edge contour, wherein the outermost periphery edge contour is an edge contour of which the periphery does not comprise the closed-loop contour;
if the continuous edge contour is determined to be the outermost periphery edge contour, determining the area surrounded by the closed loop contour to be a target area image;
if the continuous edge contour in the contour extraction result does not form a closed-loop contour, acquiring the number of image edge intersection points of the continuous edge contour and the image to be processed;
if the number of the intersection points is less than 2, determining that the image to be processed is a target area image;
if the number of the intersection points is equal to 2, calculating the pixel mean values of the two sides of the continuous edge contour intersecting the image edge, and determining the side with the larger pixel mean value as a target side;
determining a region composed of the continuous edge contour, the image edge and the target side as the target region image;
and if the number of the intersection points is more than 2, determining that the area formed by the continuous edge outline and the image edge is the target area image.
2. The method according to claim 1, wherein the blocking the image to be processed to obtain a plurality of region block images comprises:
acquiring the size of the image to be processed, wherein the size comprises a width W and a height H;
dividing the image to be processed into n² region block images, each region block image having a width bW = [W/n] and a height bH = [H/n], where [W/n] denotes W divided by n and rounded, and [H/n] denotes H divided by n and rounded;
the n² region block images constitute the plurality of region block images.
3. The method of claim 1, wherein the determining the scan area size for each of the plurality of area block images comprises:
calculating the pixel mean and the pixel variance of each of the plurality of region block images;
determining the complexity of each area block image according to the pixel mean and the pixel variance;
determining the scanning area size according to the complexity, wherein the complexity is in inverse proportion to the scanning area size.
4. The method according to claim 1, wherein the scanning each area block image according to the size of the scanning area to obtain the contour extraction result corresponding to the image to be processed comprises:
performing edge detection on each region block image in the plurality of region block images by adopting an edge detection operator, and extracting an edge profile in each region block image;
scanning each area block image according to the scanning area size of the area block image in a preset direction, and determining whether each area block image contains an edge profile;
if the first area block image is determined to contain a first edge profile, taking the first edge profile as a starting profile, determining the inclination direction of the first edge profile, and tracking whether a next area block image in the inclination direction contains a second edge profile connected with the first edge profile;
if the next area block image contains a second edge contour, repeating the process of tracking the edge contour of the next area block image in the inclined direction to obtain a connecting edge contour;
if the next area block image does not contain the second edge profile, scanning from the adjacent area block image of the first area block image according to a preset direction, acquiring a new edge profile which is not in the connection edge profile as a new first edge profile, and repeating the step of tracking the inclination direction of the edge profile until the connection edge profile is acquired;
and finishing the scanning of each area block image, and taking the obtained connecting edge contour as a continuous edge contour in the contour extraction result corresponding to the image to be processed.
5. The method according to claim 1, wherein the scanning each area block image according to the size of the scanning area to obtain the contour extraction result corresponding to the image to be processed comprises:
performing edge detection on each region block image in the plurality of region block images by adopting an edge detection operator, and extracting an edge profile in each region block image;
scanning each area block image according to the size of the scanning area of each area block image, and determining whether each area block image contains an edge profile;
if the target area block image in each area block image contains the edge contour, determining whether the edge contour is a straight line;
if so, recording the area block image number and the contour coordinates of the straight line in an image coordinate system, wherein the image coordinate system is established, after the image to be processed is divided into the plurality of area block images, by determining the coordinates of each of the plurality of area block images;
splicing straight lines corresponding to the acquired image numbers of the at least two region blocks according to the contour coordinates to acquire a spliced edge contour;
and taking the spliced edge contour as a continuous edge contour in the contour extraction result corresponding to the image to be processed.
6. The method according to claim 4 or 5, wherein the method is applied to an electronic device, the electronic device comprises a camera, and before the image to be processed is blocked, the method further comprises:
obtaining an initial background image through the camera;
extracting features of the initial background image and the image to be processed to obtain a background feature corresponding to the initial background image and a feature to be processed of the image to be processed;
matching the background features with the features to be processed, and determining a plurality of target features which are successfully matched in the background features and the features to be processed;
when the number of the target features is larger than a first preset threshold value, determining that the image to be processed is successfully matched with the initial background image;
determining a plurality of matching point coordinates of the plurality of target features corresponding to the initial background image and the image to be processed;
performing image registration and crack-removal processing on the initial background image and the image to be processed according to the plurality of matching point coordinates to obtain updated matching point coordinates;
acquiring a matching point coordinate in the image to be processed, and normalizing the peripheral area of the matching point coordinate;
and removing the isolated points of the peripheral area to obtain the image to be processed after the background image is removed.
7. The method of claim 6, wherein after obtaining the continuous edge profile, the method further comprises:
acquiring a viewing angle 2α of the camera and a tilt angle θ of the camera, wherein the viewing angle of the camera represents the maximum angle over which the camera can image;
calculating a first angle according to the viewing angle of the camera and the tilt angle of the camera, wherein the first angle δ is given by a first formula, which is rendered as an image in the original publication and expresses δ in terms of α and θ;
acquiring an included angle A between a first continuous edge contour and the vertical direction and an included angle B between a second continuous edge contour and the vertical direction among the continuous edge contours, wherein the first continuous edge contour and the second continuous edge contour are not adjacent edge contours;
determining whether A and B satisfy a second formula: 2δ = A + B;
and if so, taking the straight lines on which the first continuous edge contour and the second continuous edge contour lie as a new continuous edge contour.
8. An image extraction device characterized by comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring an image to be processed and partitioning the image to be processed to obtain a plurality of area block images;
a first determination unit configured to determine a scanning area size of each of the plurality of area block images;
the scanning unit is used for scanning each area block image according to the size of the scanning area of each area block image to obtain a contour extraction result corresponding to the image to be processed;
a second determining unit, configured to determine a target area image in the image to be processed according to the contour extraction result, where the second determining unit specifically includes:
if the continuous edge contour in the contour extraction result forms a closed-loop contour, determining whether the edge contour is an outermost periphery edge contour, wherein the outermost periphery edge contour is an edge contour of which the periphery does not comprise the closed-loop contour;
if the continuous edge contour is determined to be the outermost periphery edge contour, determining the area surrounded by the closed loop contour to be a target area image;
if the continuous edge contour in the contour extraction result does not form a closed-loop contour, acquiring the number of image edge intersection points of the continuous edge contour and the image to be processed;
if the number of the intersection points is less than 2, determining that the image to be processed is a target area image;
if the number of the intersection points is equal to 2, calculating the pixel mean values of the two sides of the continuous edge contour intersecting the image edge, and determining the side with the larger pixel mean value as a target side;
determining a region composed of the continuous edge contour, the image edge and the target side as the target region image;
and if the number of the intersection points is more than 2, determining that the area formed by the continuous edge outline and the image edge is the target area image.
9. An electronic device comprising a processor and a memory, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-7.
CN201910611808.6A 2019-07-08 2019-07-08 Image extraction method and related product Active CN110458855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910611808.6A CN110458855B (en) 2019-07-08 2019-07-08 Image extraction method and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910611808.6A CN110458855B (en) 2019-07-08 2019-07-08 Image extraction method and related product

Publications (2)

Publication Number Publication Date
CN110458855A CN110458855A (en) 2019-11-15
CN110458855B true CN110458855B (en) 2022-04-05

Family

ID=68482325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910611808.6A Active CN110458855B (en) 2019-07-08 2019-07-08 Image extraction method and related product

Country Status (1)

Country Link
CN (1) CN110458855B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111079772B (en) * 2019-12-18 2024-01-26 深圳科瑞技术股份有限公司 Image edge extraction processing method, device and storage medium
CN111950464B (en) * 2020-08-13 2023-01-24 安徽淘云科技股份有限公司 Image retrieval method, server and scanning pen
CN112304239A (en) * 2020-10-16 2021-02-02 大连理工大学 Method for extracting contour and central feature of micro thread pair
CN112488037A (en) * 2020-12-15 2021-03-12 上海有个机器人有限公司 Method for identifying dangerous area in image recognition
CN113610866B (en) * 2021-07-28 2024-04-23 上海墨说科教设备有限公司 Method, device, equipment and storage medium for cutting calligraphy practicing image
CN113589985A (en) * 2021-07-30 2021-11-02 车主邦(北京)科技有限公司 Icon replacement method and device, terminal equipment and computer readable storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4525787B2 (en) * 2008-04-09 2010-08-18 富士ゼロックス株式会社 Image extraction apparatus and image extraction program
CN101727666B (en) * 2008-11-03 2013-07-10 深圳迈瑞生物医疗电子股份有限公司 Image segmentation method and device, and method for judging image inversion
CN104089575B (en) * 2014-07-02 2018-05-11 北京东方迈视测控技术有限公司 Intelligent plane detector and detection method
CN106295639A (en) * 2016-08-01 2017-01-04 乐视控股(北京)有限公司 A kind of virtual reality terminal and the extracting method of target image and device
CN106683076B (en) * 2016-11-24 2019-08-09 南京航空航天大学 The method of train wheel tread damage detection based on textural characteristics cluster
CN106682424A (en) * 2016-12-28 2017-05-17 上海联影医疗科技有限公司 Medical image adjusting method and medical image adjusting system
CN108009470B (en) * 2017-10-20 2020-06-16 深圳市朗形网络科技有限公司 Image extraction method and device
CN108596944B (en) * 2018-04-25 2021-05-07 普联技术有限公司 Method and device for extracting moving target and terminal equipment
CN109658429A (en) * 2018-12-21 2019-04-19 电子科技大学 A kind of infrared image cirrus detection method based on boundary fractal dimension

Also Published As

Publication number Publication date
CN110458855A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
CN110458855B (en) Image extraction method and related product
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
US9292759B2 (en) Methods and systems for optimized parameter selection in automated license plate recognition
EP3767520B1 (en) Method, device, equipment and medium for locating center of target object region
CN113109368B (en) Glass crack detection method, device, equipment and medium
CN109492642B (en) License plate recognition method, license plate recognition device, computer equipment and storage medium
US8526684B2 (en) Flexible image comparison and face matching application
CN105894464A (en) Median filtering image processing method and apparatus
CN111695540A (en) Video frame identification method, video frame cutting device, electronic equipment and medium
US20180144488A1 (en) Electronic apparatus and method for processing image thereof
CN105912912A (en) Method and system for user to log in terminal by virtue of identity information
JP6188452B2 (en) Image processing apparatus, image processing method, and program
CN108665495B (en) Image processing method and device and mobile terminal
CN112396050B (en) Image processing method, device and storage medium
CN111199197B (en) Image extraction method and processing equipment for face recognition
CN108960247B (en) Image significance detection method and device and electronic equipment
CN108257086B (en) Panoramic photo processing method and device
CN111881846B (en) Image processing method, image processing apparatus, image processing device, image processing apparatus, storage medium, and computer program
CN112686247A (en) Identification card number detection method and device, readable storage medium and terminal
CN109785367B (en) Method and device for filtering foreign points in three-dimensional model tracking
WO2021056501A1 (en) Feature point extraction method, movable platform and storage medium
CN115424181A (en) Target object detection method and device
CN112085683B (en) Depth map credibility detection method in saliency detection
CN113870292A (en) Edge detection method and device for depth image and electronic equipment
CN116012398A (en) Image stitching and tampering detection method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 230000 9, level 1, Tiandi science and Technology Park, 66 dive East Road, hi tech Zone, Hefei, Anhui.

Applicant after: Anhui taoyun Technology Co.,Ltd.

Address before: 230000 9, level 1, Tiandi science and Technology Park, 66 dive East Road, hi tech Zone, Hefei, Anhui.

Applicant before: ANHUI TAOYUN TECHNOLOGY Co.,Ltd.

GR01 Patent grant