CN111353957A - Image processing method, image processing device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN111353957A
CN111353957A
Authority
CN
China
Prior art keywords
image
edge line
line segment
edge
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010130485.1A
Other languages
Chinese (zh)
Inventor
朱理
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202010130485.1A
Publication of CN111353957A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/04
    • G06T5/77
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Abstract

The present disclosure relates to an image processing method, an image processing apparatus, a storage medium, and an electronic device. The image processing method includes: determining an edge transition region between a first region and a second region in an original image to obtain a first image; performing edge detection on the original image and determining an edge line between the first region and the second region to obtain a second image; superimposing the first image and the second image; and traversing pixel points in the superimposed image according to a preset traversal path, stopping the traversal if an edge point is detected on the preset traversal path, determining the traversed pixel points on the preset traversal path as pixel points in the first region, and determining the non-traversed pixel points on the preset traversal path as pixel points in the second region. The image processing method can improve the accuracy of image region division, so that image region replacement can be performed better.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
In daily life, more and more users wish to beautify their images. For example, for an image captured in cloudy or hazy weather, a user usually replaces the sky with a blue sky to beautify the image.
In the related art, sky replacement is mainly performed by fusing the high-level and low-level features extracted by a convolutional neural network on the basis of semantic segmentation, training a classifier model to separate sky pixels from non-sky pixels in the image, and finally replacing the separated sky pixels with the corresponding pixels of a blue-sky image. However, when the buildings and the sky in the image are segmented in this manner, the complexity of the surrounding environment means that segmentation at the building edges is not fine enough. In particular, segmentation is often difficult at boundaries where the building and the sky have similar appearances, such as an off-white sky against off-white walls, so the segmented building exhibits jagged or curved edge features that do not match its actual edges, degrading the subsequent sky-replacement effect.
Disclosure of Invention
The present disclosure aims to provide an image processing method, an image processing apparatus, a storage medium, and an electronic device that solve the problems in the related art and improve the accuracy of image area division, so that image area replacement can be performed more accurately and users' image-beautification needs are better met.
In order to achieve the above object, in a first aspect, the present disclosure provides an image processing method, the method comprising:
determining an edge transition area between a first area and a second area in an original image to obtain a first image;
performing edge detection on the original image, and determining an edge line between the first area and the second area to obtain a second image;
superimposing the first image and the second image;
and traversing pixel points in the superimposed image according to a preset traversal path, stopping the traversal if an edge point is detected on the preset traversal path, determining the traversed pixel points on the preset traversal path as pixel points in the first region, and determining the non-traversed pixel points on the preset traversal path as pixel points in the second region.
Optionally, the edge line includes a plurality of edge line segments that are not connected to each other, and the method further includes:
performing edge point filling on the plurality of edge line segments so that the plurality of edge line segments are connected into a coherent edge line, to obtain a third image;
superimposing the first image and the second image, comprising: superimposing the third image and the first image.
Optionally, the performing edge point filling on the plurality of edge line segments so that the plurality of edge line segments are connected into a coherent edge line, to obtain a third image, includes:
regarding the plurality of edge line segments, taking any one edge line segment as an initial target edge line segment, and executing the following operations:
for each determined target edge line segment, determining edge line segments meeting preset conditions in other edge line segments except the edge line segments determined as the target edge line segments;
and filling edge points of the target edge line segment and the edge line segment meeting the preset condition to obtain a connected line segment, and determining the connected line segment as a new target edge line segment until the plurality of edge line segments are connected into a coherent edge line.
Optionally, for each determined target edge line segment, determining an edge line segment that meets a preset condition from among other edge line segments except for the edge line segment that has been determined as the target edge line segment, including:
and for each determined target edge line segment, determining an edge line segment which is within a first preset range of the target edge line segment and is closest to the target edge line segment from other edge line segments except the edge line segment determined as the target edge line segment.
Optionally, for each determined target edge line segment, determining an edge line segment that meets a preset condition from among other edge line segments except for the edge line segment that has been determined as the target edge line segment, including:
for each of the plurality of edge line segments, fitting a curve according to the pixel points on that edge line segment;
and for each determined target edge line segment, determining, from the other edge line segments except those already determined as target edge line segments, an edge line segment that is within a second preset range of the target edge line segment and intersects the fitted curve of the target edge line segment.
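By way of a non-limiting illustration, the curve-fitting condition above can be sketched as follows. This is a hypothetical example: the function name, the quadratic fit degree, and the tolerance `tol` used to test "intersection" are assumptions, not taken from the disclosure.

```python
import numpy as np

def fits_curve_condition(target_pts, other_pts, tol=1.0):
    """Fit a quadratic to the target segment's pixel points and test
    whether any point of another segment lies on (within tol of) the
    fitted curve -- a hedged sketch of the 'intersection' condition."""
    xs, ys = zip(*target_pts)
    coeffs = np.polyfit(xs, ys, deg=2)          # fitted curve of the target segment
    ox, oy = zip(*other_pts)
    return bool(np.any(np.abs(np.polyval(coeffs, ox) - np.array(oy)) <= tol))

target = [(0, 0), (1, 1), (2, 2), (3, 3)]       # points along the line y = x
aligned = [(10, 10), (11, 11)]                  # continues the same trend
offset = [(10, 50), (11, 52)]                   # far from the fitted curve
print(fits_curve_condition(target, aligned))
print(fits_curve_condition(target, offset))
```

A segment that continues the target's trend satisfies the condition; one far from the fitted curve does not.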
Optionally, the determining an edge transition region between the first region and the second region in the original image includes:
performing semantic segmentation processing on the original image;
and respectively performing image dilation processing and image erosion processing on the semantically segmented image, performing pixel point difference calculation between the dilated image and the eroded image, and determining a transition band region between the first region and the second region in the semantically segmented image.
Optionally, the traversing pixel points in the superimposed image according to a preset traversal path includes:
removing, from the superimposed image, isolated edge points whose gray values differ from those of the other edge points in their neighborhood, to obtain a fourth image;
and traversing pixel points in the fourth image according to a preset traversal path.
Optionally, the preset traversal path includes at least one of: traversing the superimposed image leftward, rightward, downward, upward, diagonally from the top left to the bottom right, or diagonally from the top right to the bottom left.
In a second aspect, the present disclosure also provides an image processing apparatus, the apparatus comprising:
the first determining module is configured to determine an edge transition region between a first region and a second region in the original image to obtain a first image;
the second determining module is configured to perform edge detection on the original image, determine an edge line between the first area and the second area, and obtain a second image;
a superimposing module configured to superimpose the first image and the second image;
and the traversal module is configured to traverse pixel points in the superimposed image according to a preset traversal path, stop the traversal when an edge point is detected on the preset traversal path, determine the traversed pixel points on the preset traversal path as pixel points in the first region, and determine the non-traversed pixel points on the preset traversal path as pixel points in the second region.
In a third aspect, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any one of the first aspect.
In a fourth aspect, the present disclosure also provides an electronic device, including:
a memory having a computer program stored thereon;
a processor configured to execute the computer program in the memory to implement the steps of the method of any of the first aspects.
Through the above technical solution, in a scenario where a building area and other areas in an image are divided, a straight edge between the building and the other areas can be obtained through edge detection, so that the processed building avoids jagged or curved edge features that do not match its actual edges, improving the accuracy of image area division. In addition, through pixel point traversal, the image area division can be accurate to the pixel level, further improving the accuracy of image area division, so that image area replacement is performed more accurately and users' image-beautification needs are better met.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is an original image to be processed;
fig. 2 is an image obtained by processing the image shown in fig. 1 and performing image region replacement according to a technique in the related art;
FIG. 3 is a flow chart illustrating a method of image processing according to an exemplary embodiment of the present disclosure;
FIG. 4 is another raw image to be processed;
FIG. 5 is a semantic segmentation result diagram obtained by processing the image shown in FIG. 4 according to a semantic segmentation technique in the related art;
FIG. 6 is a transition band region diagram resulting from image processing of FIG. 5 according to an image processing method in an exemplary embodiment of the present disclosure;
FIG. 7 is a graph of results obtained after performing edge detection on FIG. 4 according to an image processing method in an exemplary embodiment of the disclosure;
FIG. 8 is a diagram illustrating an image processing method according to an exemplary embodiment of the present disclosure;
fig. 9 is a diagram illustrating a result of processing the image shown in fig. 4 and replacing pixels with sky areas in an image processing method according to an exemplary embodiment of the disclosure;
FIG. 10 is a flowchart illustrating an image processing method according to another exemplary embodiment of the present disclosure;
fig. 11 is a diagram illustrating a result of processing the image shown in fig. 1 and replacing pixels with sky regions according to another exemplary embodiment of the disclosure;
fig. 12 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment of the present disclosure;
FIG. 13 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment of the present disclosure;
fig. 14 is a block diagram illustrating an electronic device according to another exemplary embodiment of the present disclosure.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
In daily life, more and more users wish to beautify their images. For example, for an image captured in cloudy or hazy weather, a user usually replaces the sky with a blue sky to beautify the image. In the related art, sky replacement is mainly performed by fusing the high-level and low-level features extracted by a convolutional neural network on the basis of semantic segmentation, training a classifier model to separate sky pixels from non-sky pixels in the image, and finally replacing the separated sky pixels with the corresponding pixels of a blue-sky image. However, when the buildings and the sky in the image are segmented in this manner, the complexity of the surrounding environment means that segmentation at the building edges is not fine enough. In particular, segmentation is often difficult at boundaries where the building and the sky have similar appearances, such as an off-white sky against off-white walls, so the segmented building exhibits jagged or curved edge features that do not match its actual edges, degrading the subsequent sky-replacement effect.
For example, fig. 1 is an original image, and fig. 2 is the image obtained after the image shown in fig. 1 is processed by the related-art method described above (fusing the high-level and low-level convolutional-neural-network features on the basis of semantic segmentation and training a classifier model) and the sky is replaced. Comparing fig. 1 and fig. 2, it can be seen that pixel classification errors occur at the edge between the building and the sky in the processed image, producing a gap (shown by dashed box 20 in fig. 2) in an originally straight building edge. Moreover, because some pixel points along the building edge are mistakenly classified as sky pixels, those building pixel points are replaced with sky pixels during the subsequent sky replacement, causing replacement errors and failing to satisfy users' image-beautification needs.
In view of this, embodiments of the present disclosure provide an image processing method, an image processing apparatus, a storage medium, and an electronic device, so as to solve the problems in the related art, and improve the accuracy of dividing an image area, thereby achieving replacement of the image area more accurately and better meeting the requirement of users for beautifying the image.
First, it should be noted that the image processing method in the embodiments of the present disclosure may be applied to an electronic device having an image processing function, such as a camera, a video camera, a computer, a mobile phone, or a tablet, or may be applied to a server, which is not limited in the embodiments of the present disclosure. If the image processing method is applied to a server, the server may first receive an image sent by a client and then process the received image according to the image processing method of the embodiments of the present disclosure. It should be understood that the image in the embodiments of the present disclosure may be a picture captured by a camera of the client and stored in the client, a picture downloaded from a network and stored by the client, a frame captured from a video stored in the client, and the like, which is not limited in the embodiments of the present disclosure.
Fig. 3 is a flowchart illustrating an image processing method according to an exemplary embodiment of the present disclosure. Referring to fig. 3, the image processing method may include:
step S301, determining an edge transition region between a first region and a second region in the original image, to obtain a first image.
Step S302, edge detection is carried out on the original image, and an edge line between the first area and the second area is determined to obtain a second image.
Step S303 superimposes the first image and the second image.
Step S304, traversing pixel points in the superimposed image according to a preset traversal path, stopping the traversal if an edge point is detected on the preset traversal path, determining the traversed pixel points on the preset traversal path as pixel points in the first region, and determining the non-traversed pixel points on the preset traversal path as pixel points in the second region.
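By way of a non-limiting illustration, the traversal of step S304 can be sketched for a single left-to-right traversal path over one row of the superimposed image. The function name, the region labels 1 and 2, and the use of gray value 255 to mark an edge point are assumptions made for this sketch only.

```python
import numpy as np

def split_row_by_edge(mask_row, edge_value=255):
    """Traverse one row left to right; pixels visited before the first
    edge point are assigned to region 1, the remaining (non-traversed)
    pixels to region 2 -- a simplified sketch of step S304."""
    labels = np.full(mask_row.shape, 2, dtype=np.uint8)
    edge_idx = np.flatnonzero(mask_row == edge_value)
    stop = edge_idx[0] if edge_idx.size else mask_row.size
    labels[:stop] = 1                 # traversal stops at the edge point
    return labels

row = np.array([0, 0, 0, 255, 0, 0], dtype=np.uint8)  # 255 marks an edge point
print(split_row_by_edge(row))
```

In a full implementation, the same stop-at-edge rule would be applied along each of the preset traversal paths (leftward, rightward, downward, upward, or diagonal).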
Exemplarily, an original image is shown in fig. 1, and the original image includes a building area and a sky area. If the sky area is to be replaced to obtain an image of a blue sky background, so as to achieve an image beautification effect, semantic segmentation may be performed on the original image, for example, a building in the original image may be used as a first target object, and non-buildings such as the sky may be used as a second target object, so as to perform semantic segmentation, and obtain a first area corresponding to the building and a second area corresponding to the non-buildings. Then, steps 301 to 304 may be performed to finely divide the first region and the second region, so as to more accurately implement the replacement of the sky region.
For example, referring to fig. 11, after the original image shown in fig. 1 is processed and the sky is replaced according to the embodiment of the present disclosure, the result diagram shown in fig. 11 can be obtained. As can be seen from fig. 2 and fig. 11, in a scenario where a building area and other areas in an image are divided, a straight edge between the building area and the other areas can be obtained by the method in the embodiment of the present disclosure, so that the processed building avoids jagged or curved edge features that do not match the actual edge of the building area, improving the accuracy of image area division. In addition, through pixel point traversal, the image area division can be accurate to the pixel level, further improving the accuracy of image area division, so that image area replacement is performed more accurately and users' image-beautification needs are better met.
In order to make those skilled in the art understand the technical solutions provided by the embodiments of the present disclosure, the following detailed descriptions of the above steps are provided.
In a possible manner, determining the edge transition region between the first region and the second region in the original image in step 301 may be: first performing semantic segmentation processing on the original image, then performing image dilation processing and image erosion processing on the semantically segmented image, performing pixel point difference calculation between the dilated image and the eroded image, and determining a transition band region between the first region and the second region in the semantically segmented image.
For example, the semantic segmentation processing on the original image may use any semantic segmentation manner in the related art, which is not limited in the embodiments of the present disclosure. It should be understood that, when dividing the building area from the other image areas, since the building area usually occupies a large proportion of the image's pixels, dilated (atrous) convolution can be adopted in the semantic segmentation process to enlarge the receptive field after pooling and extract pixel features over a large range, so that semantic segmentation is performed more accurately.
In addition, when feature maps are concatenated in parallel, image-level features from global average pooling can be added, and feature maps at different sampling rates can be converted to a uniform scale for concatenation. For example, for feature maps of scales 32×32, 16×16 and 8×8 obtained during semantic segmentation, the 16×16 and 8×8 maps can be converted to 32×32 and then concatenated in parallel, or the 32×32 and 8×8 maps can be converted to 16×16 and then concatenated in parallel, and so on.
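By way of a non-limiting illustration, the scale-unification step above can be sketched with nearest-neighbour upsampling. The upsampling method and the channel count of 4 per map are assumptions for this sketch; the disclosure does not specify how the feature maps are rescaled.

```python
import numpy as np

def upsample_nn(fm, target):
    """Nearest-neighbour upsampling of an (H, W, C) feature map to
    (target, target, C), assuming target is a multiple of H = W."""
    factor = target // fm.shape[0]
    return np.repeat(np.repeat(fm, factor, axis=0), factor, axis=1)

# Feature maps at three sampling scales, 4 channels each (assumed)
f32 = np.random.rand(32, 32, 4)
f16 = np.random.rand(16, 16, 4)
f8 = np.random.rand(8, 8, 4)

# Convert the 16x16 and 8x8 maps to 32x32, then concatenate in parallel
merged = np.concatenate([f32, upsample_nn(f16, 32), upsample_nn(f8, 32)], axis=2)
print(merged.shape)
```

The concatenated result stacks all channels at the common 32×32 scale; converting everything to 16×16 instead would follow the same pattern with downsampling.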
For example, the original image shown in fig. 4 is subjected to semantic segmentation, so that a semantic segmentation result map shown in fig. 5 can be obtained. Referring to fig. 5, the original image shown in fig. 4 may be divided into a sky area a and a building area B through a semantic segmentation process. It should be understood that the embodiments of the present disclosure only focus on the division of the area between the building area and the sky area, and do not focus on the division of other areas, so that other non-building areas except the sky area may be regarded as building areas in fig. 5.
After the original image is semantically segmented, in order to determine the transition band region between the first region and the second region, image dilation processing and image erosion processing may be performed on the semantically segmented image, and then pixel point difference calculation may be performed between the dilated image and the eroded image.
The term "image dilation" means that the white area of an image is expanded, so that the white area in the result image is larger than in the original image; "image erosion" means that the white portion of the image is shrunk and thinned, so that the white area in the result image is smaller than in the original image. For example, after the semantic segmentation result map shown in fig. 5 is subjected to image dilation processing and image erosion processing respectively, the white area (building area B) becomes larger in the dilated image and smaller in the eroded image.
After the semantically segmented image is subjected to image dilation processing and image erosion processing, pixel point difference calculation can be performed between the dilated image and the eroded image. For example, the gray values of the pixel points at corresponding coordinates in the dilated image and the eroded image are subtracted in turn, to obtain the transition band between the first region and the second region. For example, after the semantic segmentation result map shown in fig. 5 is subjected to image dilation processing and image erosion processing and the pixel point difference between the dilated and eroded images is computed, the transition band region between the sky area and the building area shown in fig. 6 can be obtained. Compared with the region dividing line between the sky area and the building area in fig. 5, the width of the transition band region is increased.
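By way of a non-limiting illustration, the dilation/erosion difference can be sketched in NumPy as follows. This is a simplified sketch using a 3×3 structuring element and a toy binary mask; the function names, mask size, and border-padding choices are assumptions.

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any pixel in its 3x3
    neighbourhood is set."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole 3x3
    neighbourhood is set (border padded with 1 so the image frame
    itself is not eroded)."""
    p = np.pad(mask, 1, constant_values=1)
    out = np.ones_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

# Toy segmentation mask: rows 0-2 = sky (0), rows 3-6 = building (1)
seg = np.zeros((7, 7), dtype=np.uint8)
seg[3:, :] = 1
transition = dilate(seg) - erode(seg)  # pixel-wise difference -> transition band
print(transition)
```

The nonzero pixels of `transition` form a band straddling the original sky/building boundary, wider than the single dividing line of the segmentation mask.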
Simultaneously with or after determining the transition zone between the first area and the second area, the original image can be subjected to edge detection, and an edge line between the first area and the second area is determined, so that more accurate area division is realized for the first area and the second area by combining the transition zone and the edge detection result.
It should be understood that a part of an image where the brightness changes significantly within a local region is called an edge, and pixel points whose gray values change sharply within their neighborhood may be edge points. When dividing the building area from the other areas in an image, the building area has static, regular, and internally consistent appearance characteristics compared with the other areas, so the edge information of the building area can be extracted well through edge detection.
For example, the original image may be edge-detected by any edge detection manner in the related art, which is not limited in the embodiments of the present disclosure. For example, the edge detection result map shown in fig. 7 can be obtained by performing edge detection on the original image shown in fig. 4 with the Canny operator. Specifically, the image may be converted to grayscale and then smoothed with a Gaussian filter. In the filtered image, the local gradient and the edge direction can then be calculated with the Sobel operator, and points whose intensity is a local maximum along the gradient direction are taken as edge points. Finally, the gradient values of the edge points are compared along the gradient direction, and the edge point with the largest local gradient is retained to reduce the edge width. In addition, some of the remaining edge points in the Canny operator may be pixel points caused by noise or color changes rather than true edge points. Therefore, to improve the accuracy of the edge detection result, the Canny operator can filter out edge points whose gradient values are smaller than a low threshold, retain strong edge points whose gradient values are larger than a high threshold, and perform an edge neighborhood connection operation on the weak edge points whose gradient values lie between the low and high thresholds. The high and low thresholds may be set adaptively based on the interval of gradient values from the edge detection, which is not limited in the embodiments of the present disclosure. For example, the high threshold may be set at three quarters of the ascending gradient-value interval, and the low threshold at one quarter of it.
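By way of a non-limiting illustration, the adaptive double-threshold step can be sketched as follows, placing the low threshold at one quarter and the high threshold at three quarters of the gradient-magnitude range, as the text suggests. The gradient computation and the label values are simplified stand-ins for the full Canny pipeline, not the patented method itself.

```python
import numpy as np

def double_threshold(grad_mag):
    """Classify gradient magnitudes: 2 = strong edge (above the high
    threshold), 1 = weak candidate (between thresholds), 0 = discarded.
    Thresholds sit at 1/4 and 3/4 of the magnitude range (assumed)."""
    span = grad_mag.max() - grad_mag.min()
    lo = grad_mag.min() + 0.25 * span
    hi = grad_mag.min() + 0.75 * span
    labels = np.zeros(grad_mag.shape, dtype=np.uint8)
    labels[grad_mag > lo] = 1   # weak edge candidates, kept if neighbourhood-connected
    labels[grad_mag > hi] = 2   # strong edges, kept unconditionally
    return labels

# Synthetic image: dark "sky" above a bright "building"
img = np.zeros((6, 6))
img[3:, :] = 1.0
gy, gx = np.gradient(img)
mag = np.hypot(gx, gy)
labels = double_threshold(mag)
print(labels)
```

In the full algorithm, the weak candidates (label 1) would then undergo the edge neighborhood connection operation described above, being kept only when adjacent to a strong edge.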
In the above manner, by performing the edge neighborhood connection operation on the weak edge points between the high and low thresholds, a plurality of edge lines connecting the edge points can be obtained. However, the result may still include a plurality of edge line segments that are not connected to each other. In order to join these non-connected segments into neighborhood-connected edge line segments, edge point filling may be performed on the plurality of edge line segments so that they are connected into a coherent edge line, obtaining a third image. Accordingly, step 303 may be to superimpose the third image and the first image.
Further, performing edge point filling on the plurality of edge line segments so that they are connected into a coherent edge line to obtain the third image may be: regarding the plurality of edge line segments, taking any edge line segment as the initial target edge line segment and performing the following operations: for each determined target edge line segment, determining an edge line segment satisfying the preset condition from the other edge line segments except those already determined as target edge line segments; then performing edge point filling between the target edge line segment and the edge line segment satisfying the preset condition to obtain a connected line segment, and determining the connected line segment as the new target edge line segment, until the plurality of edge line segments are connected into a coherent edge line.
For example, the edge line includes 4 non-connected edge line segments, i.e., edge line segment 1, edge line segment 2, edge line segment 3, and edge line segment 4. In the embodiment of the present disclosure, any one of the 4 edge line segments may be used as the initial target edge line segment, for example, the edge line segment 1 is used as the initial target edge line segment. Then, for the initial target edge line segment (i.e., edge line segment 1), an edge line segment satisfying the preset condition may be determined from the remaining 3 edge line segments, for example, the edge line segment satisfying the preset condition is determined to be edge line segment 2. In this case, the edge point filling may be performed on the edge line segment 1 and the edge line segment 2, resulting in the connected line segment N1.
And, the connected line segment N1 may be used as a new target edge line segment, and an edge line segment meeting the preset condition is determined in the remaining 2 edge line segments, for example, if the edge line segment meeting the preset condition is determined to be the edge line segment 3, then the connected line segment N1 and the edge line segment 3 may be subjected to edge point filling, so as to obtain the connected line segment N2. Then, the connected line segment N2 may be used as a new target edge line segment, and if the remaining edge line segments 4 are edge line segments that satisfy the preset condition, the connected line segment N2 and the edge line segment 4 may be subjected to edge point filling. In this way, 4 edge line segments can be connected into one coherent edge line.
It should be understood that the foregoing is illustrative only and does not limit the present disclosure. In practice, with the same four edge line segments, edge point filling could equally start with edge line segments 2 and 3 to obtain a connected line segment, continue with that connected line segment and edge line segment 4 to obtain another connected line segment, and finish with edge line segment 1; or start with edge line segments 3 and 4, continue with edge line segment 1, and finish with edge line segment 2; and so on. Any order that eventually connects the four edge line segments into one coherent edge line is acceptable.
In other words, in the embodiment of the present disclosure, the process of filling pixel points between the edge line segments to obtain the third image may be: select any two edge line segments and fill edge points between them to obtain a neighborhood-connected edge line segment; if any remaining edge line segment is not yet neighborhood-connected to it, fill pixel points between the neighborhood-connected edge line segment and that remaining segment, and so on, until all edge line segments are connected into a single neighborhood-connected edge line.
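The iterative scheme above can be sketched as follows. This is an illustrative sketch only, not the disclosure's implementation: segments are held as point lists, the "preset condition" is assumed to be the nearest-endpoint rule described below, and the fill is a simple linear interpolation between the two closest end points. The function name `bridge_segments` is a hypothetical label.

```python
import numpy as np

def bridge_segments(segments):
    """Connect mutually disconnected edge line segments into one coherent
    edge line by repeatedly filling edge points between the current target
    segment and the nearest remaining segment. The returned list is a set
    of edge pixels; its Python ordering need not follow the geometry."""
    segments = [list(map(tuple, s)) for s in segments]
    target = segments.pop(0)            # any segment may serve as the initial target
    while segments:
        # nearest-endpoint rule standing in for the "preset condition"
        best = min(
            range(len(segments)),
            key=lambda i: min(
                np.hypot(px - qx, py - qy)
                for (px, py) in (target[0], target[-1])
                for (qx, qy) in (segments[i][0], segments[i][-1])
            ),
        )
        nxt = segments.pop(best)
        # fill intermediate edge points between the two closest end points
        (px, py), (qx, qy) = min(
            ((p, q) for p in (target[0], target[-1]) for q in (nxt[0], nxt[-1])),
            key=lambda pq: np.hypot(pq[0][0] - pq[1][0], pq[0][1] - pq[1][1]),
        )
        n = max(abs(qx - px), abs(qy - py))
        fill = [(round(px + (qx - px) * t / n), round(py + (qy - py) * t / n))
                for t in range(1, n)] if n > 1 else []
        target = target + fill + nxt    # the connected line becomes the new target
    return target
```

Applied to two collinear fragments with a one-pixel gap, the sketch inserts the missing edge point and returns a single connected point set.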
In one possible manner, for each determined target edge line segment, determining an edge line segment satisfying the preset condition among the edge line segments other than those already determined as a target edge line segment may be: determining, among those other edge line segments, the edge line segment that lies within a first preset range of the target edge line segment and is closest to it. The first preset range may be set according to the actual situation; for example, it may be a neighborhood of an end pixel point of the edge line segment, which is not limited in the embodiment of the present disclosure.
For example, taking the top, bottom, left, and right of the figure as the reference directions: in the neighborhood of end pixel point A of target edge line segment AB, the edge line segment located above AB and closest to it is determined, and the region between AB and that edge line segment is then filled with edge points — that is, edge points are added between the two closest end points — so that AB and that edge line segment are connected into one neighborhood-connected edge line segment. Similarly, in the neighborhood of end pixel point D of target edge line segment CD, the edge line segment located to the right of CD and closest to it is determined, and edge points are added between the two closest end points so that CD and that edge line segment are connected into one neighborhood-connected edge line segment, and so on.
In another possible manner, for each determined target edge line segment, determining an edge line segment satisfying the preset condition among the edge line segments other than those already determined as a target edge line segment may be: for each of the plurality of edge line segments, fit a curve through its pixel points; then, for the target edge line segment determined each time, determine, among those other edge line segments, the edge line segment that lies within a second preset range of the target edge line segment and whose fitted curve intersects the fitted curve of the target edge line segment. The second preset range may be set according to the actual situation; for example, it may be a neighborhood of an end pixel point of the edge line segment, which is not limited in the embodiment of the present disclosure.
It should be understood that an edge line segment is composed of a number of pixel points, and the mutually disconnected edge line segments may be quite short. To decide which two edge line segments can be neighborhood-connected, each segment may be extended via its fitted curve; if the extended segments — that is, their fitted curves — have an intersection point, the two segments are likely to lie on the same straight edge line, so edge point filling may be performed on them to connect them into one neighborhood-connected edge line segment.
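The intersection test of this second manner can be sketched as follows. For simplicity the fitted curve is assumed to be a degree-1 polynomial (a straight line) obtained with `np.polyfit`; the function name and the `radius` parameter standing in for the second preset range are illustrative assumptions, not the disclosure's implementation.

```python
import numpy as np

def curves_intersect_nearby(seg_a, seg_b, radius=10.0):
    """Fit a straight line to each segment's pixel points and test whether
    the two fitted lines intersect within `radius` pixels of an end pixel
    point of seg_a. Assumes the segments are not vertical (y is fitted as
    a function of x)."""
    ax, ay = np.asarray(seg_a, float).T
    bx, by = np.asarray(seg_b, float).T
    ka, ca = np.polyfit(ax, ay, 1)          # y = ka*x + ca
    kb, cb = np.polyfit(bx, by, 1)          # y = kb*x + cb
    if np.isclose(ka, kb):                  # parallel fits never intersect
        return False
    x = (cb - ca) / (ka - kb)               # intersection of the two fitted lines
    y = ka * x + ca
    # "second preset range": a neighborhood of seg_a's end pixel points
    return any(np.hypot(x - ex, y - ey) <= radius for ex, ey in (seg_a[0], seg_a[-1]))
```

Two short collinear-looking fragments whose extensions cross near an endpoint pass the test; parallel fragments do not.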
In this way, all the discrete edge points can be connected into a neighborhood-connected edge line segment, which facilitates the finer region division performed subsequently.
After the edge point filling, the edge-point-filled image and the image including the transition band region may be superimposed, and pixel point traversal may then be performed on the superimposed image along a preset traversal path.

For example, superimposing the edge-point-filled image and the image including the transition band region may mean adding the gray values of corresponding pixel points according to their coordinates. Because the gray value of the edge line in the edge-filled image differs considerably from the gray values of the pixel points in the transition band region, the superposition trims the transition band region and yields a more accurate image region division result. For instance, edge point filling is performed on the edge detection result shown in fig. 7 to obtain the third image, and the third image is superimposed with the second image including the transition band region shown in fig. 6, where the gray value of each pixel point on the edge line in the third image is 0 (shown as white) and the gray value of each pixel point in the transition band region is greater than 0 (shown as gray). Superimposing the third image with the second image then yields the result shown in fig. 8. Comparing fig. 6 with fig. 8, the transition band region between the sky region and the building region has been further trimmed, which improves the efficiency of the subsequent pixel traversal.
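The coordinate-wise addition of gray values can be sketched in a few lines of NumPy. The clipping to the 8-bit range is an added assumption to avoid overflow wrap-around and is not stated in the disclosure.

```python
import numpy as np

def superimpose(third_image, second_image):
    """Superimpose the edge-filled image and the transition-band image by
    adding gray values pixel-by-pixel according to coordinate
    correspondence; both inputs are assumed to be same-shaped uint8
    grayscale images."""
    a = third_image.astype(np.uint16)       # widen before adding to avoid wrap-around
    b = second_image.astype(np.uint16)
    return np.clip(a + b, 0, 255).astype(np.uint8)
```

The widening-then-clipping step matters: adding two `uint8` arrays directly would wrap around modulo 256 and corrupt exactly the high-contrast edge pixels the method relies on.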
In practical applications, the edge detection result may contain precision errors that affect the accuracy of the final image processing result. To avoid this, outlier edge points whose gray values differ from those of the other edge points in their neighborhood may be removed from the superimposed image before traversal, which further improves the accuracy of the result and the traversal efficiency. It should be understood that if the edge detection result is already sufficiently accurate, pixel traversal may be performed directly on the superimposed image.

For example, the preset traversal path may include at least one of: a leftward row-wise traversal, a rightward row-wise traversal, a downward column-wise traversal, an upward column-wise traversal, a diagonal traversal from the top-left corner to the bottom-right corner of the superimposed image, and a diagonal traversal from the top-right corner to the bottom-left corner of the superimposed image. It should be understood that, since the first region may be located above, below, to the left of, or to the right of the second region, at least one of the row-wise and column-wise traversals may be used in the image region division process. In addition, since the boundary between the regions may be convex — such as the boundary between the sky and the buildings in fig. 1 — or concave, the embodiment of the present disclosure may also adopt one of the diagonal traversals. In a specific implementation, the preset traversal path may be chosen according to the actual situation.
For example, pixel traversal is performed on the superposition result shown in fig. 8 along one of the preset traversal paths described above, where the first region may be the sky region and the second region the building region. If an edge point is detected on the preset traversal path, the traversal is stopped; the pixel points already traversed on the path are determined as pixel points in the sky region, and the pixel points not traversed are determined as pixel points in the building region, yielding the image region division result. After pixel replacement is performed on the sky region, the result shown in fig. 9 is obtained. As can be seen from figs. 4 and 9, when the image processing method of the embodiment of the present disclosure is used for image region division, the edge between the buildings and the sky is straight, with no edge features inconsistent with the actual building edge; the sky and building regions are divided more accurately, pixel replacement of the image region is performed more precisely, and the user's requirements for beautifying the image are better met.
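The stop-at-edge region division can be sketched for a downward column-wise path; the other preset paths (row-wise, diagonal) follow the same rule in a different direction. The function name, the 0/1 labels, and the `edge_value` marker (edge pixels assumed to be 255) are illustrative assumptions.

```python
import numpy as np

def divide_by_traversal(image, edge_value=255):
    """Traverse each column downward; pixel points visited before the first
    edge point are assigned to the first region (e.g. sky, label 0), all
    remaining pixel points to the second region (e.g. building, label 1)."""
    h, w = image.shape
    labels = np.ones((h, w), dtype=np.uint8)        # 1 = second region by default
    for col in range(w):
        for row in range(h):
            if image[row, col] == edge_value:       # edge point reached: stop this path
                break
            labels[row, col] = 0                    # 0 = first region (traversed)
    return labels
```

On a toy image with a horizontal edge line, everything above the line is labeled as the first region and the line itself plus everything below it as the second region.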
The image processing method in the present disclosure is explained below by another exemplary embodiment. Referring to fig. 10, the image processing method may include the steps of:
step 1001, semantic segmentation processing is performed on the original image.
Step 1002, performing image dilation processing and image erosion processing respectively on the semantically segmented image, performing pixel point difference calculation on the dilated image and the eroded image, and determining the transition band region between the first region and the second region in the semantically segmented image.
Step 1003, performing edge detection on the original image, and determining an edge line between the first area and the second area.
Step 1004, for each edge line segment of the plurality of edge line segments, fitting a curve through the pixel points on that edge line segment.
Step 1005, if there is an intersection point between the curve corresponding to the first edge line segment and the curve corresponding to the second edge line segment within the second preset range of the first edge line segment, performing edge point filling on the first edge line segment and the second edge line segment to connect the first edge line segment and the second edge line segment into an edge line segment with neighborhood communication.
Step 1006, overlapping the image after the edge point filling and the image including the transition zone area.
Step 1007, removing, from the superimposed image, outlier edge points whose gray values differ from those of the other edge points in their neighborhood.
Step 1008, performing pixel point traversal, according to the preset traversal path, on the image from which the outlier edge points have been removed.
Step 1009, if an edge point is detected on the preset traversal path, stopping traversal, determining a pixel point traversed on the preset traversal path as a pixel point in the first region, and determining a pixel point not traversed on the preset traversal path as a pixel point in the second region.
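Step 1002 — the dilation/erosion difference that yields the transition band — can be sketched in plain NumPy as follows. The square structuring element, its `radius`, and the function name are illustrative assumptions; a real implementation would more likely use a morphology library.

```python
import numpy as np

def transition_band(mask, radius=1):
    """Dilate and erode a binary segmentation mask with a (2*radius+1)^2
    square structuring element, then take the pixel-wise difference: pixels
    set in the dilation but not in the erosion form the transition band
    between the two regions. `mask` is assumed to be 0/1 with 1 marking
    the first region."""
    padded = np.pad(mask.astype(bool), radius, mode="edge")
    h, w = mask.shape
    # stack every shifted copy of the mask within the structuring element
    shifts = np.stack([
        padded[dy:dy + h, dx:dx + w]
        for dy in range(2 * radius + 1)
        for dx in range(2 * radius + 1)
    ])
    dilated = shifts.any(axis=0)    # image dilation: max over the neighborhood
    eroded = shifts.all(axis=0)     # image erosion: min over the neighborhood
    return (dilated & ~eroded).astype(np.uint8)
```

For a mask whose left two columns are the first region, the band straddles the boundary: the first-region pixels lost by erosion plus the second-region pixels gained by dilation.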
The above steps have been described in detail earlier and are not repeated here. It should also be noted that, for simplicity of description, the above method embodiments are presented as a series of action combinations, but those skilled in the art will appreciate that the present disclosure is not limited by the order of the actions described. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the present disclosure.
After an image is processed in the above manner, an accurate division of the first region and the second region is obtained, so that a more accurate result can be achieved when pixel points in the first region or the second region are replaced. For example, the image shown in fig. 1 is processed in the above manner and the resulting sky region is replaced with a blue sky, yielding the result shown in fig. 11. As can be seen from figs. 2 and 11, the above manner avoids edge features — such as jagged or curved building edges — that do not conform to the actual building edge, and obtains a more accurate edge between the sky and the buildings, thereby replacing the sky region more precisely and better meeting the user's requirements for beautifying the image.
Based on the same inventive concept, an embodiment of the present disclosure further provides an image processing apparatus, which may form part or all of an electronic device through software, hardware, or a combination of the two. Referring to fig. 12, the image processing apparatus 1200 includes:
a first determining module 1201, configured to determine an edge transition region between a first region and a second region in an original image, resulting in a first image;
a second determining module 1202, configured to perform edge detection on the original image, and determine an edge line between the first region and the second region to obtain a second image;
a superimposing module 1203 configured to superimpose the first image and the second image;
the traversal module 1204 is configured to perform pixel traversal according to a preset traversal path in the superimposed image, stop traversal when an edge point is detected on the preset traversal path, determine a pixel traversed on the preset traversal path as a pixel in the first region, and determine a pixel not traversed on the preset traversal path as a pixel in the second region.
Optionally, the edge line includes a plurality of edge line segments that are not connected to each other, and the apparatus 1200 further includes:
the filling module is configured to perform edge point filling on the edge line segments so that the edge line segments are connected into a coherent edge line to obtain a third image;
the overlay module 1203 is configured to: and superposing the third image and the second image.
Optionally, the filling module is configured to:
regarding the plurality of edge line segments, taking any one edge line segment as an initial target edge line segment, and executing the following operations:
for each determined target edge line segment, determining edge line segments meeting preset conditions in other edge line segments except the edge line segments determined as the target edge line segments;
and filling edge points of the target edge line segment and the edge line segment meeting the preset condition to obtain a connected line segment, and determining the connected line segment as a new target edge line segment until the plurality of edge line segments are connected into a coherent edge line.
Optionally, the filling module is configured to:
and for each determined target edge line segment, determining an edge line segment which is within a first preset range of the target edge line segment and is closest to the target edge line segment from other edge line segments except the edge line segment determined as the target edge line segment.
Optionally, the filling module is configured to:
fitting a curve according to pixel points on the edge line segments for each of the plurality of edge line segments;
and aiming at the target edge line segment determined each time, determining the edge line segment which is within a second preset range of the target edge line segment and has an intersection point with the fitting curve of the target edge line segment in other edge line segments except the edge line segment determined as the target edge line segment.
Optionally, the first determining module 1201 is configured to:
performing semantic segmentation processing on the original image;
and performing image dilation processing and image erosion processing respectively on the semantically segmented image, performing pixel point difference calculation on the dilated image and the eroded image, and determining the transition band region between the first region and the second region in the semantically segmented image.
Optionally, the traversal module 1204 is configured to:
removing, from the superimposed image, outlier edge points whose gray values differ from those of the other edge points in their neighborhood, to obtain a fourth image;
and traversing pixel points in the fourth image according to a preset traversal path.
Optionally, the preset traversal path includes at least one of: a leftward row-wise traversal, a rightward row-wise traversal, a downward column-wise traversal, an upward column-wise traversal, a diagonal traversal from the top-left corner to the bottom-right corner of the superimposed image, and a diagonal traversal from the top-right corner to the bottom-left corner of the superimposed image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Based on the same inventive concept, an embodiment of the present disclosure further provides an electronic device, including:
a memory having a computer program stored thereon;
a processor configured to execute the computer program in the memory to implement the steps of any of the image processing methods described above.
In one possible manner, a block diagram of the electronic device is shown in fig. 13. Referring to fig. 13, the electronic device 130 may include a processor 131 and a memory 132. The electronic device 130 may also include one or more of a multimedia component 133, an input/output (I/O) interface 134, and a communication component 135.
The processor 131 is configured to control the overall operation of the electronic device 130, so as to complete all or part of the steps in the image processing method. The memory 132 is used to store various types of data to support operation at the electronic device 130, such as instructions for any application or method operating on the electronic device 130, as well as application-related data, such as contact data, messaging, pictures, audio, video, and so forth. The Memory 132 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk.
The multimedia component 133 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 132 or transmitted through the communication component 135. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 134 provides an interface between the processor 131 and other interface modules, such as a keyboard, a mouse, or buttons, which may be virtual or physical. The communication component 135 is used for wired or wireless communication between the electronic device 130 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or the like, or a combination of one or more of them, which is not limited here; the corresponding communication component 135 may thus include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the electronic Device 130 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the image Processing method.
In another exemplary embodiment, there is also provided a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the image processing method described above. For example, the computer readable storage medium may be the memory 132 described above including program instructions that are executable by the processor 131 of the electronic device 130 to perform the image processing method described above.
In another possible approach, the electronic device may also be provided as a server. Referring to fig. 14, the electronic device 1400 may include a processor 1422, which may be one or more in number, and a memory 1432 for storing computer programs executable by the processor 1422. The computer programs stored in memory 1432 may include one or more modules each corresponding to a set of instructions. Further, the processor 1422 may be configured to execute the computer program to perform the image processing method described above.
Additionally, the electronic device 1400 may also include a power component 1426 and a communication component 1450; the power component 1426 may be configured to perform power management of the electronic device 1400, and the communication component 1450 may be configured to enable wired or wireless communication of the electronic device 1400. The electronic device 1400 may also include an input/output (I/O) interface 1458. The electronic device 1400 may operate based on an operating system stored in the memory 1432, such as Windows Server™, Mac OS X™, Unix™, or Linux™.
In another exemplary embodiment, there is also provided a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the image processing method described above. For example, the computer readable storage medium can be the memory 1432 described above that includes program instructions executable by the processor 1422 of the electronic device 1400 to perform the image processing methods described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the image processing method described above when executed by the programmable apparatus.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Various simple modifications may be made to the technical solution of the present disclosure within its technical concept, and such simple modifications all fall within the protection scope of the present disclosure.

It should further be noted that the specific technical features described in the above embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the possible combinations are not described separately.

In addition, the various embodiments of the present disclosure may also be combined with one another in any way, and such combinations should likewise be regarded as content disclosed by the present disclosure, as long as they do not depart from the spirit of the present disclosure.

Claims (11)

1. An image processing method, characterized in that the image processing method comprises:
determining an edge transition area between a first area and a second area in an original image to obtain a first image;
performing edge detection on the original image, and determining an edge line between the first area and the second area to obtain a second image;
superimposing the first image and the second image;
and traversing pixel points in the superposed image according to a preset traversal path, stopping traversal if an edge point is detected on the preset traversal path, determining the traversed pixel points on the preset traversal path as the pixel points in the first region, and determining the pixel points on the preset traversal path which are not traversed as the pixel points in the second region.
2. The image processing method according to claim 1, wherein the edge line includes a plurality of edge line segments that are not connected to each other, the method further comprising:
filling edge points of the edge line segments to enable the edge line segments to be communicated into a coherent edge line, and obtaining a third image;
superimposing the first image and the second image, comprising: and superposing the third image and the second image.
3. The image processing method according to claim 2, wherein the performing edge point filling on the plurality of edge line segments to connect the plurality of edge line segments into a coherent edge line, and obtaining a third image comprises:
regarding the plurality of edge line segments, taking any one edge line segment as an initial target edge line segment, and executing the following operations:
for each determined target edge line segment, determining edge line segments meeting preset conditions in other edge line segments except the edge line segments determined as the target edge line segments;
and filling edge points of the target edge line segment and the edge line segment meeting the preset condition to obtain a connected line segment, and determining the connected line segment as a new target edge line segment until the plurality of edge line segments are connected into a coherent edge line.
4. The image processing method according to claim 3, wherein the determining, for each determined target edge line segment, an edge line segment that satisfies a preset condition among the edge line segments other than the edge line segment that has been determined as the target edge line segment, comprises:
and for each determined target edge line segment, determining an edge line segment which is within a first preset range of the target edge line segment and is closest to the target edge line segment from other edge line segments except the edge line segment determined as the target edge line segment.
5. The image processing method according to claim 3, wherein the determining, for each determined target edge line segment, an edge line segment that satisfies a preset condition among the edge line segments other than the edge line segment that has been determined as the target edge line segment, comprises:
fitting a curve according to pixel points on the edge line segments for each of the plurality of edge line segments;
and aiming at the target edge line segment determined each time, determining the edge line segment which is within a second preset range of the target edge line segment and has an intersection point with the fitting curve of the target edge line segment in other edge line segments except the edge line segment determined as the target edge line segment.
6. The image processing method according to any one of claims 1 to 5, wherein the determining an edge transition region between a first region and a second region in the original image comprises:
performing semantic segmentation processing on the original image;
and performing image dilation processing and image erosion processing respectively on the semantically segmented image, performing pixel point difference calculation on the image after the image dilation processing and the image after the image erosion processing, and determining the transition band region between the first region and the second region in the semantically segmented image.
7. The image processing method according to any one of claims 1 to 5, wherein traversing pixel points in the superimposed image according to a preset traversal path includes:
removing, from the superimposed image, outlier edge points whose gray values differ from those of the other edge points in their neighborhood, to obtain a fourth image;
and traversing pixel points in the fourth image according to a preset traversal path.
8. The image processing method according to any one of claims 1 to 5, wherein the preset traversal path comprises at least one of: a leftward row-wise traversal, a rightward row-wise traversal, a downward column-wise traversal, an upward column-wise traversal, a diagonal traversal from the top-left corner to the bottom-right corner of the superimposed image, and a diagonal traversal from the top-right corner to the bottom-left corner of the superimposed image.
9. An image processing apparatus, characterized in that the apparatus comprises:
the first determining module is configured to determine an edge transition region between a first region and a second region in the original image to obtain a first image;
the second determining module is configured to perform edge detection on the original image, determine an edge line between the first area and the second area, and obtain a second image;
a superimposing module configured to superimpose the first image and the second image;
and the traversal module is configured to perform pixel traversal according to a preset traversal path in the superposed image, stop traversal when an edge point is detected on the preset traversal path, determine the traversed pixels on the preset traversal path as pixels in the first region, and determine the non-traversed pixels on the preset traversal path as pixels in the second region.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method of any one of claims 1 to 8.
11. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor configured to execute the computer program in the memory to implement the steps of the image processing method of any of claims 1-8.
CN202010130485.1A 2020-02-28 2020-02-28 Image processing method, image processing device, storage medium and electronic equipment Withdrawn CN111353957A (en)

Publications (1)

Publication Number Publication Date
CN111353957A (en) 2020-06-30

Family

ID=71195931

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010130485.1A Withdrawn CN111353957A (en) 2020-02-28 2020-02-28 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111353957A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101964108A * 2010-09-10 2011-02-02 China Agricultural University Real-time on-line system-based field leaf image edge extraction method and system
CN106373128A * 2016-09-18 2017-02-01 Shanghai Feixun Data Communication Technology Co., Ltd. Accurate lip positioning method and system
CN108805957A * 2018-06-07 2018-11-13 Qingdao Jiuwei Huadun Technology Research Institute Co., Ltd. Vector drawing generation method and system based on adaptive non-uniform sampling of bitmap images
CN109145922A * 2018-09-10 2019-01-04 Chengdu Pinguo Technology Co., Ltd. Automatic image matting system
CN109816677A * 2019-02-15 2019-05-28 New H3C Information Security Technology Co., Ltd. Information detection method and device
WO2019205290A1 * 2018-04-28 2019-10-31 Ping An Technology (Shenzhen) Co., Ltd. Image detection method and apparatus, computer device, and storage medium
US20200034972A1 * 2018-07-25 2020-01-30 Boe Technology Group Co., Ltd. Image segmentation method and device, computer device and non-volatile storage medium


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861873A * 2021-01-05 2021-05-28 Hangzhou Hikvision Digital Technology Co., Ltd. Method for processing image with cigarette case
CN112861873B * 2021-01-05 2022-08-05 Hangzhou Hikvision Digital Technology Co., Ltd. Method for processing image with cigarette case
CN113469923A * 2021-05-28 2021-10-01 Beijing Dajia Internet Information Technology Co., Ltd. Image processing method and device, electronic equipment and storage medium
CN114549993A * 2022-02-28 2022-05-27 Chengdu Xijiao Zhihui Big Data Technology Co., Ltd. Method, system and device for scoring line segment image in experiment and readable storage medium
CN114549993B * 2022-02-28 2022-11-11 Chengdu Xijiao Zhihui Big Data Technology Co., Ltd. Method, system and device for scoring line segment image in experiment and readable storage medium

Similar Documents

Publication Publication Date Title
CN111583097A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109753913B (en) Multi-mode video semantic segmentation method with high calculation efficiency
CN111353957A (en) Image processing method, image processing device, storage medium and electronic equipment
EP2916291B1 (en) Method, apparatus and computer program product for disparity map estimation of stereo images
CN112541876B (en) Satellite image processing method, network training method, related device and electronic equipment
CN112102340A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113034348A (en) Image processing method, image processing apparatus, storage medium, and device
CN109579857B (en) Method and equipment for updating map
CN111126108A (en) Training method and device of image detection model and image detection method and device
CN112750139A (en) Image processing method and device, computing equipment and storage medium
CN111563517B (en) Image processing method, device, electronic equipment and storage medium
CN109214996A (en) A kind of image processing method and device
CN111598088B (en) Target detection method, device, computer equipment and readable storage medium
CN112967381A (en) Three-dimensional reconstruction method, apparatus, and medium
CN113570626A (en) Image cropping method and device, computer equipment and storage medium
CN110910400A (en) Image processing method, image processing device, storage medium and electronic equipment
CN109753957B (en) Image significance detection method and device, storage medium and electronic equipment
CN113688832A (en) Model training and image processing method and device
CN113658196A (en) Method and device for detecting ship in infrared image, electronic equipment and medium
US20230222736A1 (en) Methods and systems for interacting with 3d ar objects from a scene
CN110689478A (en) Image stylization processing method and device, electronic equipment and readable medium
CN113436068B (en) Image splicing method and device, electronic equipment and storage medium
CN116485944A (en) Image processing method and device, computer readable storage medium and electronic equipment
CN116385415A (en) Edge defect detection method, device, equipment and storage medium
CN113256484B (en) Method and device for performing stylization processing on image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200630