CN110751668B - Image processing method, device, terminal, electronic equipment and readable storage medium
- Publication number: CN110751668B
- Application number: CN201910944150.0A
- Authority: CN (China)
- Prior art keywords: image, region, area, main body, target operation
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T7/194: Image analysis; segmentation; edge detection involving foreground-background segmentation
- G06T7/174: Image analysis; segmentation; edge detection involving the use of two or more images
- H04N23/80: Camera processing pipelines; components thereof
Abstract
The invention discloses an image processing method, an image processing apparatus, a terminal, an electronic device and a computer-readable storage medium. The method comprises: generating a disparity map from a first image and a second image obtained by imaging the same target object; when a preset operation performed by a user on the first image is detected, determining the target operation region on the first image at which the preset operation is aimed; and extracting the image corresponding to the target operation region from the first image based on the target operation region, the disparity map and the color values of the pixel points on the first image. The subject region corresponding to the target operation region can be determined from the disparity map and its image then extracted, so that the subject image is obtained with only a small number of user marks on the region to be extracted, improving the user experience.
Description
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to an image processing method, an image processing apparatus, a terminal, an electronic device, and a readable storage medium.
Background
Background replacement, in which a foreground subject region is extracted according to the user's interest and the background is replaced at will, is a popular form of image processing that extracts a partial image desired by the user from an original image. In the related art, however, the user is usually required to mark the subject region to be extracted, and even to mark its edges multiple times, before the extraction can be completed. This operation is cumbersome and not easy to perform, and when the foreground is extracted from a color-rich scene, the extracted subject often differs considerably from the subject the user specified, resulting in a poor user experience.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide an image processing method, apparatus, terminal, electronic device, and readable storage medium that overcome or at least partially solve the above problems.
In a first aspect of the embodiments of the present invention, an image processing method is provided, where the method includes:
generating a disparity map according to a first image and a second image obtained by image acquisition of the same target object;
when a preset operation performed on the first image by a user is detected, determining a target operation area on the first image, wherein the target operation area is aimed at by the preset operation;
and extracting an image corresponding to the target operation area from the first image based on the target operation area, the disparity map and color values of all pixel points on the first image.
Optionally, extracting, from the first image, an image corresponding to the target operation region based on the target operation region, the disparity map, and color values of pixels on the first image, includes:
determining a preset parallax range corresponding to the target operation area on the parallax map based on the target operation area and the parallax map;
based on the preset disparity range, the position of the target operation region on the first image and the color values of the pixel points of the target operation region on the first image, segmenting the disparity map to obtain a subject region and other regions except the subject region;
marking the main body area and other areas except the main body area to obtain a label graph;
and extracting an image marked as a main body area on the label graph from the first image.
Optionally, the other regions except the subject region include: a non-subject region and a possible subject region; marking the subject region and the other regions except the subject region to obtain a label map includes:
marking the non-subject region, the possible subject region and the subject region to obtain the label map.
Optionally, segmenting the disparity map based on the preset disparity range, the position of the target operation region on the first image, and the color value of each pixel point of the target operation region on the first image, includes:
determining each pixel point belonging to the preset parallax range in the parallax image;
determining, among the pixel points, a plurality of first pixel points whose color-value difference from the pixel points of the target operation region is within a preset color value range and whose positions are within a preset distance of the target operation region, and determining the region formed by the first pixel points as the subject region;
determining, among the pixel points other than the first pixel points, a plurality of second pixel points whose color-value difference from the pixel points of the target operation region is within the preset color value range or whose positions are within the preset distance of the target operation region, and determining the region formed by the second pixel points as the possible subject region;
determining the region formed by the remaining pixel points, other than the first pixel points and the second pixel points, as the possible non-subject region;
and determining the region formed by the pixel points of the disparity map that do not belong to the preset disparity range as the non-subject region.
Optionally, after the non-subject region, the possible subject region, and the subject region are marked to obtain a label map, the method further includes:
performing edge repairing according to the label graph and the first image to determine a repairing area belonging to the main body area;
re-marking the non-subject region, the possible subject region and the subject region based on the determined repair region belonging to the subject region;
extracting the image marked as the subject region on the label map from the first image includes:
extracting an image on the label graph relabeled as a main body area from the first image.
Optionally, extracting an image marked as a main body area on the label map from the first image includes:
determining the respective alpha values of the regions marked on the label map as the non-subject region, the possible subject region and the subject region; wherein the alpha value of the non-subject region characterizes it as a transparent region, the alpha value of the subject region characterizes it as an opaque region, and the alpha values of the possible non-subject region and the possible subject region characterize them as translucent regions;
obtaining an alpha image based on respective alpha values of the non-subject region, the possible subject region and the subject region on the label map;
and based on the color value of each pixel point on the first image and the alpha image, obtaining the image of the main body area in the first image.
In a second aspect of the embodiments of the present invention, there is provided an image processing apparatus, including:
the parallax map generation module is used for generating a parallax map according to a first image and a second image which are obtained by image acquisition of the same target object;
a target operation area determining module, configured to determine, when a preset operation performed on the first image by a user is detected, a target operation area on the first image to which the preset operation is directed;
and the image extraction module is used for extracting an image corresponding to the target operation area from the first image based on the target operation area, the disparity map and color values of all pixel points on the first image.
Optionally, the image extraction module comprises:
the parallax range determining unit is used for determining a preset parallax range corresponding to the target operation area on the parallax map based on the target operation area and the parallax map;
the region segmentation unit is used for segmenting the disparity map based on the preset disparity range, the position of the target operation region on the first image and the color value of each pixel point of the target operation region on the first image to obtain a main region and other regions except the main region;
the area marking unit is used for marking the main body area and other areas except the main body area to obtain a label graph;
and the main body area extracting unit is used for extracting the image marked as the main body area on the label graph from the first image.
Optionally, the other regions than the body region include: a non-subject region, a possible subject region; the region marking unit is specifically configured to mark the non-body region, the possible body region, and the body region to obtain a label map.
The region division unit includes:
a pixel point determining subunit, configured to determine each pixel point in the disparity map that belongs to the preset disparity range;
a subject region determining subunit, configured to determine, among the pixel points, a plurality of first pixel points whose color-value difference from the pixel points of the target operation region is within a preset color value range and whose positions are within a preset distance of the target operation region, and to determine the region formed by the first pixel points as the subject region;
a first unknown region determining subunit, configured to determine, among the pixel points other than the first pixel points, a plurality of second pixel points whose color-value difference from the pixel points of the target operation region is within the preset color value range or whose positions are within the preset distance of the target operation region, and to determine the region formed by the second pixel points as the possible subject region;
a second unknown region determining subunit, configured to determine, as the possible non-subject region, a region formed by remaining pixels, excluding the plurality of first pixels and the plurality of second pixels, in each of the pixels;
and the non-main body region determining subunit is used for determining a region formed by each pixel point which does not belong to the preset parallax range in the parallax image as the non-main body region.
Optionally, the apparatus further comprises:
the repairing module is used for performing edge repairing according to the label graph and the first image so as to determine a repairing area belonging to the main body area;
an updating module, configured to re-mark the non-body region, the possible body region, and the body region based on the determined repair region belonging to the body region;
the main area extracting unit is specifically configured to extract an image that is relabeled as a main area on the label map from the first image.
Optionally, the body region extraction unit includes:
an alpha value determining subunit, configured to determine the alpha values of the regions marked on the label map as the non-subject region, the possible subject region and the subject region; wherein the alpha value of the non-subject region characterizes it as a transparent region, the alpha value of the subject region characterizes it as an opaque region, and the alpha values of the possible non-subject region and the possible subject region characterize them as translucent regions;
an alpha image generation subunit, configured to obtain an alpha image based on respective alpha values of the non-subject region, the possible subject region, and the subject region on the label map;
and the image output subunit is used for extracting the image of the main body area from the first image based on the color value of each pixel point on the first image and the alpha image.
In a third aspect of the embodiments of the present invention, an image processing terminal is provided, which includes a display and an image processing device, where the display is configured to display a first image obtained by image capturing of a same target object, and the image processing device is configured to execute the image processing method.
In a fourth aspect of the embodiments of the present invention, an electronic device is provided, which includes a memory, a processor and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the image processing method is implemented.
In a fifth aspect of the embodiments of the present invention, there is provided a computer-readable storage medium storing a computer program for causing a processor to execute the image processing method.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, a disparity map is generated from a first image and a second image captured of the same target object. The image corresponding to the target operation region on the first image, that is, the target image the user wishes to extract, is then extracted based on the disparity map, the target operation region clicked by the user on the first image, and the color values of the pixel points on the first image. Because the target image corresponding to the target operation region can be obtained from the disparity map, the number of times the user must mark the subject region to be extracted is reduced, the match between the extracted image and the target image to which the user-specified target operation region belongs is improved, and the user experience is improved accordingly.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of the steps of an image processing method of an embodiment of the present invention;
FIG. 2-A is a first image of an example of a method of image processing according to an embodiment of the invention;
FIG. 2-B is a trimap diagram of an example of an image processing method of an embodiment of the invention;
FIG. 2-C is an alpha diagram of an example of an image processing method according to an embodiment of the invention;
fig. 3 is a flowchart of a step of extracting an image corresponding to the target operation area in an image processing method according to an embodiment of the present invention;
FIG. 4 is a flowchart of the steps of determining the subject region, the non-subject region and the other regions in an image processing method according to an embodiment of the present invention;
fig. 5 is a flowchart of a step of extracting an image marked as a main area on the label map from a first image in an image processing method according to an embodiment of the present invention;
FIG. 6 is an overall flow chart of an image processing method in an alternative example of the invention;
FIG. 7 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
In the related art, when the image of a subject region designated by the user is extracted from a target image, the user needs to click and confirm many times before the corresponding image can be extracted, and the match between the extracted image and the subject region designated by the user is not high. To solve these technical problems, the present disclosure provides an image processing method, an image processing apparatus, an electronic device and a computer-readable storage medium, which are described below.
Referring to fig. 1, a flowchart illustrating steps of an image processing method according to an embodiment of the present invention is shown, and as shown in fig. 1, the method may specifically include the following steps:
step S11, generating a parallax map according to a first image and a second image obtained by image acquisition of the same target object.
In this embodiment, the same target object refers to the target the user intends to photograph, which may be a building, a landscape, an object or a person. The target object may be captured with a dual-camera module, that is, an intelligent capture device equipped with two cameras, such as a mobile phone. The two cameras may be called the main camera and the secondary camera; the lenses they use may differ, for example a telephoto lens for the main camera and a wide-angle lens for the secondary camera. The image of the target object captured by the main camera is referred to as the first image, and the image captured by the secondary camera as the second image.
Here, the disparity map generated from the first image and the second image may be understood as an image that takes the first image as reference, has the same size as the first image, and whose element values are disparity values. The disparity value of each pixel point in the disparity map reflects the distance and direction between matched pixel points on the first image and the second image; disparity values within the same subject are similar, while disparity values of different subjects may differ considerably. For example, for a potted plant placed on a table, the disparity values of the pixel points of the plant in the corresponding disparity map differ little from one another, but differ greatly from the disparity values of the pixel points of the table.
In an optional example, the disparity map may be generated with a three-dimensional reconstruction algorithm from the related art. Optionally, the disparity map for the first image and the second image may be generated with the semi-global block matching (SGBM) algorithm, a lightweight three-dimensional reconstruction algorithm with good disparity quality and high speed, which improves the fluency of disparity-map generation.
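For illustration only, the following Python sketch shows how such a disparity map might be produced with OpenCV's SGBM implementation. The matcher parameters here are assumptions chosen for readability, not values prescribed by this embodiment.

```python
# A minimal sketch of disparity-map generation with OpenCV's SGBM matcher.
# All parameter values are illustrative assumptions, not patent requirements.
import cv2
import numpy as np

def compute_disparity(first_img, second_img):
    # SGBM matches single-channel images; the first image is the reference,
    # so the output has the same height and width as the first image.
    left = cv2.cvtColor(first_img, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(second_img, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,   # search range; must be divisible by 16
        blockSize=9,
        P1=8 * 9 * 9,        # penalty for small disparity changes
        P2=32 * 9 * 9,       # penalty for large disparity changes
        uniquenessRatio=10,
    )
    # OpenCV returns fixed-point disparities scaled by 16.
    return matcher.compute(left, right).astype(np.float32) / 16.0
```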
Step S12, when the preset operation of the user on the first image is detected, determining a target operation area on the first image, wherein the target operation area is aimed at by the preset operation.
In this embodiment, the first image may be stored and displayed and the second image stored, so that the displayed first image is available for the user to view and operate on. The preset operation may be a click performed with a mouse, a remote-control operation performed with a remote controller, or a touch with a finger. Specifically, when the user needs to extract the image of a certain region of the first image (hereinafter the target image), any area of that target image may be clicked using any of the preset operations, and the clicked area is then determined as the target operation region. The preset operation may be performed once or several times.
By way of example, fig. 2-A shows a first image captured of a target plant in a natural environment. If the user wants to matte out the plant image in the first image, any area of the plant image, such as a leaf or a branch, can be clicked with the mouse or touched with a single sliding gesture. The number of clicks may be as few as one.
And S13, extracting an image corresponding to the target operation area from the first image based on the target operation area, the disparity map and color values of all pixel points on the first image.
In the embodiment of the present invention, the target image may be extracted from the first image based on the clicked region, the disparity map and the color values of the pixel points on the first image. Because the target image corresponding to the target operation region can be extracted based on the disparity map, in practice the user only needs to click once on any region of the target image: the clicked region is determined immediately, and although the clicked region may be only part of a complete subject, the complete subject to which it belongs is determined from the characteristics of the disparity map and its image is extracted.
For example, taking fig. 2-A, assume the user clicks the trunk of the plant image once; the target operation region is then determined to be the trunk, which is part of the plant image in the first image. The complete plant to which the trunk belongs can be determined from the characteristics of the disparity map, and the plant image can then be extracted from fig. 2-A according to the color values of the pixel points it contains, that is, the color values of the plant image are restored from fig. 2-A, thereby realizing the extraction of the plant image.
According to the embodiment of the invention, the image corresponding to the target operation region is extracted based on the disparity map, so images belonging to the same subject, with similar disparity and similar color, are extracted together, while targets of similar color that belong to different subjects (such as a plant and a table) are prevented from being extracted as one target. This improves the match between the extracted image and the target image specified by the user, and thus the user experience. Since the subject image to which the target operation region belongs can be determined from a single clicked region, the number of target operation regions to be clicked, and hence the number of times the user must mark the target image to be extracted, is reduced, further improving the user experience.
Specifically, in an embodiment, as shown in fig. 3, a flowchart illustrating a step of extracting an image corresponding to the target operation region in step S13 is shown, and specifically, the method may include the following steps:
step S131, determining a preset disparity range corresponding to the target operation area on the disparity map based on the target operation area and the disparity map.
In the embodiment of the present invention, the disparity map is generated from the first image and the second image. In practice, the disparity map may be kept in the background while the first image is displayed at the front end for the user to operate on. After the target operation region is determined on the first image, its position on the first image may be converted into a position on the disparity map so as to determine the disparity value of the target operation region on the disparity map. Specifically, the position coordinates of the target operation region on the first image may be determined and then converted into coordinates in the disparity map according to the size ratio between the first image and the disparity map.
In practice, each pixel in the disparity map has its own disparity value, which reflects the pixel's distance and direction information. After the disparity value of the target operation region on the disparity map is determined, the preset disparity range around that value may be determined; specifically, the disparity value of every pixel point within the preset disparity range differs from the disparity value of the target operation region by less than a preset value. The preset disparity range may be called the reliable disparity range: the pixel points within it and the pixel points of the target operation region are correlated with each other in distance and direction, that is, in practice they may represent the same target image.
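As a hedged sketch of this step, the snippet below maps a click on the first image into disparity-map coordinates and builds the reliable-disparity mask; `max_diff` stands in for the preset value, and the function is a hypothetical companion to the earlier `compute_disparity` sketch.

```python
import numpy as np

def disparity_range_mask(disparity, click_xy, first_img_shape, max_diff=3.0):
    # Scale the click position on the first image into disparity-map
    # coordinates using the size ratio between the two images.
    h_img, w_img = first_img_shape[:2]
    h_d, w_d = disparity.shape[:2]
    x = int(click_xy[0] * w_d / w_img)
    y = int(click_xy[1] * h_d / h_img)
    d_click = disparity[y, x]
    # Pixels whose disparity differs from the clicked disparity by less
    # than the preset value form the reliable disparity range.
    return np.abs(disparity - d_click) < max_diff
```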
Step S132, based on the preset parallax range, the position of the target operation region on the first image, and the color value of each pixel point of the target operation region on the first image, segmenting the parallax map to obtain a main body region and other regions except the main body region.
In the embodiment of the invention, after the preset disparity range is determined, a number of pixel points that lie within the preset disparity range, are close in position to the target operation region and are similar in color can be determined in the disparity map; these pixel points are marked as the subject region, and the remaining pixel points as the other regions.
For example, if the user clicks the trunk, the preset disparity range of the trunk in the disparity map may be determined. Each pixel point within this range may belong to the same subject as the trunk; for instance, the pixel points of the leaves fall within the range, and the leaves and the trunk belong to the same plant. Then, according to the position of the trunk on the first image and the color values of its pixel points in fig. 2-A, the branches, leaves, roots and so on are determined and, together with the trunk, marked as the subject region, while the region other than the subject region is determined as the other regions.
Step S133, marking the main body region and other regions except the main body region to obtain a label map.
In practice, after the subject region and the other regions except the subject region are determined, the label map may be obtained using a graph-theory-based image segmentation algorithm.
Step S134, extracting an image marked as a main area on the label map from the first image.
In this embodiment, every pixel point in the obtained label map carries a mark distinguishing the subject region from the other regions. Since the pixel points of the subject region belong to the same target image as the pixel points of the target operation region, the image of the subject region can be extracted from the first image, achieving the goal that one click on the target operation region extracts the entire target image to which it belongs.
For example, taking fig. 2-a as an example, after the trunk clicked by the user is processed through the above steps, the main area in the label map is the plant image to which the trunk belongs, and the whole plant image in fig. 2-a can be extracted.
In an alternative embodiment, the other regions than the body region may include: a non-subject region, a possible subject region; accordingly, in an alternative embodiment, the step S133 may specifically be the following step S133':
step S133', mark the non-body region, the possible body region, and the body region to obtain a label map.
After the non-subject region, the possible subject region, and the subject region are determined, the label map may be obtained by using an image segmentation algorithm based on graph theory.
Alternatively, the label map may be obtained with GrabCut from the related art. Specifically, GrabCut takes as input the first image and the image marked with the non-subject region, the possible subject region and the subject region, and redistributes the region marks according to the relationships between the marked regions in the first image, thereby producing the label map. The label map produced by GrabCut is small, which improves the fluency and processing speed of this image processing on low-end image processing terminals.
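A rough sketch of this coarse segmentation with OpenCV's GrabCut is given below; the mapping of this embodiment's region marks onto GrabCut's mask constants is an assumption made for illustration.

```python
import cv2
import numpy as np

def coarse_label_map(first_img, subject, possible_subject, possible_non_subject):
    # Everything defaults to definite background (the non-subject region);
    # the boolean masks then overwrite the probable and definite marks.
    mask = np.full(first_img.shape[:2], cv2.GC_BGD, dtype=np.uint8)
    mask[possible_non_subject] = cv2.GC_PR_BGD
    mask[possible_subject] = cv2.GC_PR_FGD
    mask[subject] = cv2.GC_FGD
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    # INIT_WITH_MASK lets GrabCut redistribute the probable marks while
    # keeping the definite subject/non-subject marks fixed.
    cv2.grabCut(first_img, mask, None, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_MASK)
    return mask  # label map with GC_FGD / GC_PR_FGD / GC_PR_BGD / GC_BGD
```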
Referring to fig. 4, in the case where the other regions besides the subject region include a non-subject region, a possible non-subject region and a possible subject region, the step of determining the subject region and the other regions may specifically include the following steps:
and S21, determining each pixel point belonging to the preset parallax range in the parallax map.
In practice, the difference between the parallax value of each pixel point in the preset parallax range and the parallax value of the target operation area on the parallax map is smaller than a preset value. The preset value can be set according to actual requirements.
For example, taking fig. 2-A and assuming the user clicks the trunk, the pixel points lying within the preset disparity range corresponding to the trunk may be obtained in the disparity map. Since the pixel points within this disparity range may in practice represent the same target image, an initial plant image to which the trunk belongs can be determined preliminarily, and its pixel points taken as the pixel points belonging to the preset disparity range.
Step S22, determining, among the pixel points, a plurality of first pixel points whose color-value difference from the pixel points of the target operation region is within a preset color value range and whose positions are within a preset distance of the target operation region, and determining the region formed by the first pixel points as the subject region.
In practice, the color value refers to the value of a pixel's current color; the lowest color value of each channel is 0 and the highest is 255. The color values of similar colors, such as pink and red, differ little, while those of different hues, such as red and blue, differ greatly. Thus the pixel points whose color values lie within the preset color value range and the pixel points of the target operation region can represent pixel points of the same image on the first image, for example all belonging to the plant image.
The position of the target operation region here refers to its position coordinates on the disparity map; specifically, these coordinates may be determined from the position of the target operation region on the first image, after which the pixel points within the preset distance of the target operation region may be determined in the disparity map. In practice, a pixel point within the preset distance of the target operation region tends to represent the same target image as the target operation region; reflected in the first image, such pixel points and the pixel points of the target operation region all belong to the same target image, for example the plant image.
In this embodiment, after the pixel points belonging to the preset disparity range are determined, the accuracy of the extracted image can be improved by further selecting, among them, a plurality of first pixel points that are close to the target operation region in both color and position, for example the pixel points of the branch area determined after the trunk is clicked. The target image corresponding to the target operation region, namely the subject region, can then be determined.
Step S23, determining, among the pixel points other than the first pixel points, a plurality of second pixel points whose color-value difference from the pixel points of the target operation region is within the preset color value range, or whose positions are within the preset distance of the target operation region, and determining the region formed by the second pixel points as the possible subject region.
In practice, the pixel points within the preset disparity range do not necessarily all belong to the same subject (i.e., to the target image in the first image). Therefore, after the first pixel points have been determined, a plurality of second pixel points may be determined among the remaining pixel points in the preset disparity range whose color-value difference is within the preset color value range, or whose positions are within the preset distance of the target operation region, for example the pixel points of the leaf area. For some of the second pixel points, the color difference from the target operation region is within the preset color value range; for the others, the position is within the preset distance of the target operation region. The region formed by these second pixel points may be determined as the possible subject region, meaning that the second pixel points possibly belong to the same target image as the target operation region.
Step S24, determining a region formed by the remaining pixels of each pixel except the plurality of first pixels and the plurality of second pixels as the possible non-subject region.
The possible non-subject region represents that pixel points in the region may not belong to the same target image as the target operation region.
It should be noted that the possible non-subject region and the possible subject region may in practice lie between the subject region and the non-subject region and represent the outline band of the subject region. For example, in fig. 2-A, the possible non-subject region and the possible subject region form the contour band enclosing the plant image.
And step S25, determining an area formed by each pixel point which does not belong to the preset parallax range in the parallax image as the non-subject area.
The non-main body area represents that pixel points in the area and the target operation area do not belong to the same target image.
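The following sketch condenses steps S21 to S25 into a four-way classification. The color and distance tests are deliberately simplified (mean color of the clicked region, distance to its centroid), and `color_tol` and `dist_tol` are hypothetical stand-ins for the preset color value range and the preset distance.

```python
import numpy as np

def classify_regions(disparity, first_img, click_mask, d_mask,
                     color_tol=30, dist_tol=40):
    # d_mask: pixels inside the preset disparity range (step S21).
    h, w = disparity.shape
    ys, xs = np.nonzero(click_mask)
    ref_color = first_img[ys, xs].astype(np.float32).mean(axis=0)
    color_close = (np.abs(first_img.astype(np.float32) - ref_color)
                   .max(axis=2) < color_tol)
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.sqrt((yy - ys.mean()) ** 2 + (xx - xs.mean()) ** 2)
    pos_close = dist < dist_tol

    subject = d_mask & color_close & pos_close                        # S22
    possible_subject = d_mask & ~subject & (color_close | pos_close)  # S23
    possible_non_subject = d_mask & ~subject & ~possible_subject      # S24
    non_subject = ~d_mask                                             # S25
    return subject, possible_subject, possible_non_subject, non_subject
```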
After the non-subject region, the possible non-subject region, the possible subject region and the subject region are determined through steps S21 to S25, the image corresponding to the target operation region may be extracted through steps S133 to S134.
In an embodiment, referring to fig. 5, a flowchart illustrating a step of extracting an image marked as a main area on the label map from the first image in step S134 is shown, and specifically, the method may include the following steps:
step S31, determining the alpha values of the non-subject region, the possible subject region and the subject region marked on the label map.
Wherein the alpha value for the non-body region characterizes the non-body region as a transparent region, the alpha value for the body region characterizes the body region as an opaque region, and the alpha values for the possible non-body region and the possible body region characterize the possible non-body region and the possible body region as translucent regions.
The alpha value is the value of the alpha channel and represents the transparency of a color, ranging from 0 to 1; the larger the value, the more opaque. An alpha value of 0 means fully transparent; an alpha value of 1 means fully opaque, showing the true color; values between 0 and 1 indicate translucency.
In the embodiment of the present application, extracting the image of the subject region requires fully restoring the original colors of the subject region without showing the images of the other regions. Therefore the alpha value of the non-subject region may be set to 0, meaning completely transparent, and the alpha value of the subject region to 1, meaning completely opaque. The alpha values of the possible non-subject region and the possible subject region can be set to values between 0 and 1 by an image matting algorithm from the related art, characterizing these possible regions as translucent.
The matting algorithm computes alpha from the spatial distance and color distance between the possible regions and the subject and non-subject regions: the more similar the current pixel point is to the non-subject, the closer its alpha value is to 0. For example, the closer the alpha values of pixel points in the possible non-subject region tend to 0, the more transparent those pixel points become; the closer the alpha values of pixel points in the possible subject region tend to 1, the more opaque they become.
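To make the idea concrete, here is a toy alpha estimate for the unknown regions. It uses color distance only, whereas the matting algorithm described above also weighs spatial distance, so this is a simplification for illustration, not the embodiment's algorithm.

```python
import numpy as np

def estimate_alpha(first_img, subject, non_subject, unknown):
    img = first_img.astype(np.float32)
    fg_color = img[subject].mean(axis=0)      # mean subject color
    bg_color = img[non_subject].mean(axis=0)  # mean non-subject color
    alpha = np.zeros(img.shape[:2], np.float32)
    alpha[subject] = 1.0
    d_fg = np.linalg.norm(img - fg_color, axis=2)
    d_bg = np.linalg.norm(img - bg_color, axis=2)
    # The closer a pixel's color is to the non-subject color, the closer
    # its alpha is to 0; a pixel equally distant from both gets 0.5.
    alpha[unknown] = (d_bg / (d_fg + d_bg + 1e-6))[unknown]
    return alpha
```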
Step S32, obtaining an alpha image based on the alpha values of the non-main body region, the possible main body region and the main body region on the label map.
In the embodiment of the present invention, the alpha image may also be called an alpha channel map; specifically, it may be obtained with existing related techniques for generating alpha channel maps.
Step S33, extracting the image of the main body area from the first image based on the color value of each pixel point on the first image and the alpha image.
In practice, the alpha channel and the three RGB channels together form a four-channel image. When the image of the subject region is extracted, the alpha image can be fed into the alpha channel and the first image into the three RGB channels. Because the alpha value of the subject region is 1 and that of the non-subject region is 0, in the output four-channel image the color value of every pixel point of the subject region is the same as that displayed on the first image, that is, the subject region's colors are restored, while the non-subject region shows a masked image, its pixel points taking a background color such as white or green. The possible subject region and the possible non-subject region, whose alpha values lie between 0 and 1, are displayed semi-transparently. The final output therefore shows the image of the subject region, with its outline softened because the pixel points of the possible subject and possible non-subject regions are displayed semi-transparently.
In practice, after output through the alpha channel and the RGB channels, the front end displays the target image the user wanted to extract from the first image. The user can then replace the background of the matted image or tidy its edges as needed.
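A sketch of this four-channel output is shown below: the alpha image is attached as the fourth channel of the first image, so subject pixels keep their original colors while non-subject pixels become fully transparent. Saving to PNG is an assumption; any alpha-capable format would do.

```python
import cv2
import numpy as np

def composite_rgba(first_img, alpha):
    alpha_u8 = (np.clip(alpha, 0.0, 1.0) * 255).astype(np.uint8)
    b, g, r = cv2.split(first_img)          # OpenCV stores images as BGR
    rgba = cv2.merge([b, g, r, alpha_u8])   # attach alpha as channel four
    cv2.imwrite("subject.png", rgba)        # PNG keeps the alpha channel
    return rgba
```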
In one embodiment, after step S133 and before step S134, the following steps may be further included:
step S133', performing edge repairing according to the label map and the first image to determine a repaired area belonging to the main body area.
Step S133 ″, based on the determined repair region belonging to the main body region, the non-main body region, the possible main body region, and the main body region are marked again.
In the embodiment of the invention, a trimap can be constructed based on the label map. A trimap classifies the pixels of an image into three types: definite foreground, definite background and an unknown region, whose pixels are influenced by both foreground and background pixels. In this embodiment, the foreground region of the constructed trimap is the subject region, the background region is the non-subject region, and the unknown region comprises the possible non-subject region and the possible subject region. After the trimap is constructed, edge repair can be performed on the possible non-subject region and the possible subject region of the label map; specifically, edge repair may be based on the trimap and a shared-sampling-based matting algorithm, so that the parts of the possible non-subject region and the possible subject region that belong to the subject region (i.e., the repair region of the embodiment of the present invention) are determined as the subject region.
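Building the trimap from the label map is straightforward, as the minimal sketch below shows; the values 255, 0 and 128 for foreground, background and the unknown band follow common trimap conventions. The shared-sampling edge repair itself is a published matting algorithm and is not reproduced here.

```python
import numpy as np

def build_trimap(subject, non_subject, shape):
    trimap = np.full(shape, 128, dtype=np.uint8)  # default: unknown band
    trimap[subject] = 255                         # definite foreground
    trimap[non_subject] = 0                       # definite background
    return trimap
```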
With this technical scheme, constructing the trimap from the label map makes the edge contour of the subject region finer, so the contour of the finally extracted subject image is finer, further improving the match and fidelity between the extracted image and the target image on the original first image.
By way of example, taking fig. 2-A, the trimap constructed from the label map is shown in fig. 2-B, and the alpha image finally obtained after edge repair based on the trimap and the color image is shown in fig. 2-C. After edge repair, the edge contour of the alpha image in fig. 2-C is very fine, so the finally output plant image matches the plant image in the original fig. 2-A more closely and is restored with higher fidelity.
Accordingly, the step S134 may extract the image of the subject region by using the method described in the following steps:
step S134', an image on the label map that is relabeled as a main area is extracted from the first image.
The specific content of step S134' is similar to step S134, and specifically, reference may be made to step S134, which is not described herein again.
Referring to fig. 6, a complete flow chart of an alternative exemplary image processing method is shown, which is described by taking the example of extracting a plant image from the first image shown in fig. 2-a, wherein the second image taken for the target object in fig. 2-a is cached in the memory, and the specific process is as follows:
first, a disparity map is generated for the first image and the second image using a classical three-dimensional reconstruction algorithm, such as the SGBM algorithm.
Next, a target operation region clicked on the first image by the user is determined, and a subject region, a non-subject region, a region that is likely to be a subject (i.e., a likely subject region), and a region that is likely to be a non-subject (a likely non-subject region) are extracted.
And then, carrying out rough segmentation by adopting an image segmentation algorithm based on graph theory, such as Grabcut algorithm, so as to obtain a label graph.
Then, constructing a trimap based on the label map to obtain the trimap map shown in fig. 2-B, where a main region in the label map is a foreground region in the trimap map, a non-main region is a background region in the trimap map, and a possible main region and a possible non-main region are unknown regions in the trimap map, that is, grayscale regions in the trimap map.
Then, edge repair of the subject region is performed based on the trimap, and the repair regions belonging to the subject region within the possible subject region and the possible non-subject region are re-marked as the subject region; this repairs the unknown region of the trimap, i.e., performs the edge repair, and the alpha image is finally obtained from the repaired result. In the alpha image, the alpha value of the subject region is 1, that of the non-subject region is 0, and the alpha values of the unknown region (including the possible subject region and the possible non-subject region) are computed by image matting as values between 0 and 1.
Finally, the alpha image is fed into the alpha channel (i.e., output as the fourth channel of the color map, as described in fig. 6) and the first image into the RGB channels, so that the image of the subject region, namely the plant image of fig. 2-A, is displayed on the display screen.
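The hypothetical helpers from the earlier sketches can be strung together into one pipeline mirroring this flow; every name below comes from those illustrative snippets, not from the patent itself, and treating both of GrabCut's probable marks as the unknown band is a further simplification.

```python
import numpy as np

def extract_subject(first_img, second_img, click_xy):
    disparity = compute_disparity(first_img, second_img)
    d_mask = disparity_range_mask(disparity, click_xy, first_img.shape)
    click_mask = np.zeros(first_img.shape[:2], bool)
    click_mask[click_xy[1], click_xy[0]] = True
    subject, poss_subj, poss_non_subj, _ = classify_regions(
        disparity, first_img, click_mask, d_mask)
    labels = coarse_label_map(first_img, subject, poss_subj, poss_non_subj)
    unknown = (labels == 2) | (labels == 3)   # GC_PR_BGD / GC_PR_FGD
    alpha = estimate_alpha(first_img, labels == 1, labels == 0, unknown)
    return composite_rgba(first_img, alpha)
```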
It should be noted that for simplicity of description, the method embodiments are shown as a series of combinations of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 7, an image processing apparatus according to an embodiment of the present invention is shown, and the apparatus may specifically include the following modules:
a disparity map generation module 71, configured to generate a disparity map according to a first image and a second image obtained by image acquisition on the same target object;
a target operation area determining module 72, configured to determine, when a preset operation performed on the first image by a user is detected, a target operation area on the first image to which the preset operation is directed;
an image extracting module 73, configured to extract, based on the target operation area, the disparity map, and color values of pixels in the first image, an image corresponding to the target operation area from the first image.
In one embodiment, the image extraction module may include:
the parallax range determining unit is used for determining a preset parallax range corresponding to the target operation area on the parallax map based on the target operation area and the parallax map;
the region segmentation unit is used for segmenting the disparity map based on the preset disparity range, the position of the target operation region on the first image and the color value of each pixel point of the target operation region on the first image to obtain a main body region and other regions except the main body region;
the region marking unit is used for marking the main body region and other regions except the main body region to obtain a label graph;
and the main body area extracting unit is used for extracting the image marked as the main body area on the label graph from the first image.
In one embodiment, the other regions than the body region may include: a non-subject region, a possible subject region; the region marking unit is specifically configured to mark the non-body region, the possible body region, and the body region to obtain a label map.
The region division unit may include:
a pixel point determining subunit, configured to determine each pixel point in the disparity map that belongs to the preset disparity range;
a subject region determining subunit, configured to determine, among the pixel points, a plurality of first pixel points whose color-value difference from the pixel points of the target operation region is within a preset color value range and whose positions are within a preset distance of the target operation region, and to determine the region formed by the first pixel points as the subject region;
a first unknown region determining subunit, configured to determine, among the pixel points other than the first pixel points, a plurality of second pixel points whose color-value difference from the pixel points of the target operation region is within the preset color value range or whose positions are within the preset distance of the target operation region, and to determine the region formed by the second pixel points as the possible subject region;
a second unknown region determining subunit, configured to determine, as the possible non-subject region, a region formed by remaining pixels, excluding the plurality of first pixels and the plurality of second pixels, in each pixel;
and the non-main body region determining subunit is used for determining a region formed by each pixel point which does not belong to the preset parallax range in the parallax image as the non-main body region.
In one embodiment, the apparatus may further include the following modules:
the repairing module is used for performing edge repairing according to the label graph and the first image so as to determine a repairing area belonging to the main body area;
an updating module, configured to re-mark the non-body region, the possible body region, and the body region based on the determined repair region belonging to the body region;
the main area extracting unit is specifically configured to extract an image that is relabeled as a main area on the label map from the first image.
In one embodiment, the body region extraction unit may include:
an alpha value determining subunit, configured to determine the alpha values of the regions marked on the label map as the non-subject region, the possible subject region and the subject region; wherein the alpha value of the non-subject region characterizes it as a transparent region, the alpha value of the subject region characterizes it as an opaque region, and the alpha values of the possible non-subject region and the possible subject region characterize them as translucent regions;
an alpha image generation subunit, configured to obtain an alpha image based on respective alpha values of the non-subject region, the possible subject region, and the subject region on the label map;
and the image output subunit is used for extracting the image of the main body area from the first image based on the color value of each pixel point on the first image and the alpha image.
For the embodiment of the image processing apparatus, since it is basically similar to the embodiment of the image processing method, the description is relatively simple, and for relevant points, reference may be made to part of the description of the embodiment of the image processing method.
The embodiment of the invention also provides an image processing terminal, which comprises a display and an image processing device, wherein the display is used for displaying the first image obtained by image acquisition of the same target object, and the image processing device is used for executing the image processing method in the embodiment.
Referring to fig. 8, a schematic structural diagram of an electronic device 800 according to an embodiment of the present invention is shown, where the electronic device 800 may be used for image processing, and may include a memory 81, a processor 82, and a computer program stored in the memory 81 and executable on the processor, where the processor 82 is configured to execute the image processing method.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, so that a processor executes the image processing method.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual relationship or order between such entities or actions. Also, the terms "comprises", "comprising" and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional like elements in the process, method, article or terminal device that comprises the element.
The image processing method, apparatus, terminal, electronic device and readable storage medium provided by the present invention are described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the above embodiments is only intended to help readers understand the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, vary the specific embodiments and the scope of application; in summary, the content of this specification should not be construed as limiting the present invention.
Claims (9)
1. An image processing method, characterized in that the method comprises:
generating a disparity map according to a first image and a second image obtained by image acquisition of the same target object;
when a preset operation performed on the first image by a user is detected, determining a target operation area on the first image at which the preset operation is aimed;
performing edge repair according to a label map and the first image to determine a repair region belonging to a subject region, and re-marking a non-subject region, a possible subject region and the subject region based on the determined repair region, wherein performing edge repair according to the label map and the first image means performing edge repair on a possible non-subject region and the possible subject region of the label map;
extracting an image corresponding to the target operation area from the first image based on the target operation area, the disparity map and color values of the pixel points on the first image, the extracting comprising:
determining, based on the target operation area and the disparity map, a preset disparity range corresponding to the target operation area on the disparity map;
segmenting the disparity map based on the preset disparity range, the position of the target operation area on the first image and the color values of the pixel points of the target operation area on the first image, to obtain the subject region and regions other than the subject region;
marking the subject region and the other regions to obtain the label map;
and extracting, from the first image, the image marked as the subject region on the label map.
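For illustration only, the following Python sketch covers the first and third steps of claim 1: computing a disparity map from the first and second images, then deriving a preset disparity range from the disparities observed inside the target operation area. OpenCV's semi-global matcher, the matcher parameters and the `tolerance` slack are all assumptions; the claim fixes neither a disparity algorithm nor a range width.

```python
import cv2
import numpy as np

def compute_disparity(first_bgr, second_bgr, num_disp=64, block=5):
    """Disparity map from the first and second images (claim 1, first step)."""
    left = cv2.cvtColor(first_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(second_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=num_disp,
                                    blockSize=block)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    return matcher.compute(left, right).astype(np.float32) / 16.0

def preset_disparity_range(disparity, target_mask, tolerance=2.0):
    """Preset disparity range for the target operation area (claim 1,
    third step). 'tolerance' is an assumed slack around the observed
    disparities; the claim leaves its value unspecified."""
    values = disparity[target_mask]
    values = values[values >= 0]  # drop unmatched pixels (negative disparity)
    return float(values.min()) - tolerance, float(values.max()) + tolerance
```

In use, `target_mask` would be a boolean image marking the pixels the user's preset operation (for example a tap or a stroke) covers on the first image.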
2. The method of claim 1, wherein the regions other than the subject region comprise a non-subject region, a possible non-subject region and a possible subject region, and marking the subject region and the other regions to obtain the label map comprises:
marking the non-subject region, the possible non-subject region, the possible subject region and the subject region to obtain the label map.
3. The method of claim 2, wherein segmenting the disparity map based on the preset disparity range, the position of the target operation area on the first image and the color values of the pixel points of the target operation area on the first image comprises:
determining the pixel points in the disparity map that fall within the preset disparity range;
determining, among those pixel points, a plurality of first pixel points whose color-value difference from the pixel points of the target operation area is within a preset color value range and whose distance from the target operation area is within a preset distance, and determining the region formed by the first pixel points as the subject region;
determining, among those pixel points other than the first pixel points, a plurality of second pixel points whose color-value difference from the pixel points of the target operation area is within the preset color value range or whose distance from the target operation area is within the preset distance, and determining the region formed by the second pixel points as the possible subject region;
determining the region formed by the remaining pixel points, other than the first and second pixel points, as the possible non-subject region;
and determining the region formed by the pixel points in the disparity map that do not fall within the preset disparity range as the non-subject region.
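Claim 3's four-way split reads as per-pixel set operations: inside the disparity range and matching the target area in both color and distance is subject; matching in only one respect is possible subject; matching in neither is possible non-subject; outside the disparity range is non-subject. A hedged numpy sketch follows, where `color_tol` and `dist_tol` stand in for the unspecified "preset color value range" and "preset distance", and the color comparison is simplified to the target area's mean color rather than the claim's per-pixel comparison.

```python
import cv2
import numpy as np

# Label encoding for the label map (values are an assumption; the claims
# name the four categories but not their numeric codes).
NON_SUBJECT, POSSIBLE_NON_SUBJECT, POSSIBLE_SUBJECT, SUBJECT = 0, 1, 2, 3

def build_label_map(disparity, image_bgr, target_mask, disp_range,
                    color_tol=30.0, dist_tol=80.0):
    """Label every pixel per claim 3, yielding the label map of claim 1."""
    lo, hi = disp_range
    in_range = (disparity >= lo) & (disparity <= hi)

    # Color difference against the target area's mean color.
    mean_color = image_bgr[target_mask].mean(axis=0)
    color_ok = np.linalg.norm(
        image_bgr.astype(np.float32) - mean_color, axis=2) <= color_tol

    # Spatial distance to the target operation area: the distance
    # transform measures each pixel's distance to the nearest zero
    # pixel, so the target area is set to zero.
    outside = np.where(target_mask, 0, 255).astype(np.uint8)
    dist_ok = cv2.distanceTransform(outside, cv2.DIST_L2, 3) <= dist_tol

    labels = np.full(disparity.shape, NON_SUBJECT, dtype=np.uint8)
    labels[in_range] = POSSIBLE_NON_SUBJECT                      # remainder
    labels[in_range & (color_ok | dist_ok)] = POSSIBLE_SUBJECT   # second pixels
    labels[in_range & color_ok & dist_ok] = SUBJECT              # first pixels
    return labels
```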
4. The method of claim 2 or 3, wherein after marking the non-subject region, the possible non-subject region, the possible subject region and the subject region to obtain the label map, the method further comprises:
performing edge repair according to the label map and the first image to determine a repair region belonging to the subject region;
re-marking the non-subject region, the possible subject region and the subject region based on the determined repair region;
and extracting, from the first image, the image marked as the subject region on the label map comprises:
extracting, from the first image, the image re-marked as the subject region on the label map.
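Claim 4 refines the two uncertain bands against the first image's actual content before re-marking. The patent does not name a concrete repair algorithm, so the sketch below borrows OpenCV's GrabCut purely as a stand-in: its four-state mask (sure/probable background, sure/probable foreground) happens to mirror the label map's four regions, which makes the re-marking step a direct translation. The label constants are those of the previous sketch.

```python
import cv2
import numpy as np

NON_SUBJECT, POSSIBLE_NON_SUBJECT, POSSIBLE_SUBJECT, SUBJECT = 0, 1, 2, 3

def repair_and_remark(image_bgr, labels, iterations=3):
    """Edge repair on the possible (non-)subject bands, then re-marking
    (claim 4). GrabCut is an assumed stand-in for the unspecified repair."""
    mask = np.empty(labels.shape, np.uint8)
    mask[labels == NON_SUBJECT] = cv2.GC_BGD
    mask[labels == POSSIBLE_NON_SUBJECT] = cv2.GC_PR_BGD
    mask[labels == POSSIBLE_SUBJECT] = cv2.GC_PR_FGD
    mask[labels == SUBJECT] = cv2.GC_FGD
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    # Mask-initialized GrabCut refines the two "probable" bands against
    # the image content; the rect argument is ignored in this mode.
    cv2.grabCut(image_bgr, mask, None, bgd, fgd, iterations,
                cv2.GC_INIT_WITH_MASK)
    # Pixels resolved to (probable) foreground are re-marked as subject.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    SUBJECT, NON_SUBJECT).astype(np.uint8)
```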
5. The method of claim 2 or 3, wherein extracting, from the first image, the image marked as the subject region on the label map comprises:
determining respective alpha values for the non-subject region, the possible non-subject region, the possible subject region and the subject region on the label map, wherein the alpha value of the non-subject region characterizes it as a transparent region, the alpha value of the subject region characterizes it as an opaque region, and the alpha values of the possible non-subject region and the possible subject region characterize them as translucent regions;
obtaining an alpha image based on the respective alpha values of these regions on the label map;
and extracting the image of the subject region from the first image based on the color values of the pixel points on the first image and the alpha image.
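Claim 5 turns the label map into an alpha image (transparent outside the subject, opaque inside, translucent over the uncertain bands) and composites it with the first image's color values. A minimal sketch, with two assumptions the claim leaves open: the value 128 as the translucent midpoint, and a BGRA cut-out as the output rather than a blend against a new background.

```python
import numpy as np

NON_SUBJECT, POSSIBLE_NON_SUBJECT, POSSIBLE_SUBJECT, SUBJECT = 0, 1, 2, 3

# Per-region alpha values of claim 5; 128 for the uncertain bands is assumed.
ALPHA = {NON_SUBJECT: 0, POSSIBLE_NON_SUBJECT: 128,
         POSSIBLE_SUBJECT: 128, SUBJECT: 255}

def extract_subject(image_bgr, labels):
    """Alpha image from the label map, composited with the first image's
    color values into a BGRA cut-out of the subject region."""
    alpha = np.zeros(labels.shape, np.uint8)
    for label, value in ALPHA.items():
        alpha[labels == label] = value
    return np.dstack([image_bgr, alpha])
```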
6. An image processing apparatus, characterized in that the apparatus comprises:
a disparity map generation module, configured to generate a disparity map according to a first image and a second image obtained by image acquisition of the same target object;
a target operation area determination module, configured to determine, when a preset operation performed on the first image by a user is detected, a target operation area on the first image at which the preset operation is aimed;
a repair module, configured to perform edge repair according to a label map and the first image to determine a repair region belonging to a subject region, and to re-mark a non-subject region, a possible subject region and the subject region based on the determined repair region, wherein performing edge repair according to the label map and the first image means performing edge repair on a possible non-subject region and the possible subject region of the label map;
an image extraction module, configured to extract an image corresponding to the target operation area from the first image based on the target operation area, the disparity map and color values of the pixel points on the first image, the image extraction module comprising:
a disparity range determination unit, configured to determine, based on the target operation area and the disparity map, a preset disparity range corresponding to the target operation area on the disparity map;
a region segmentation unit, configured to segment the disparity map based on the preset disparity range, the position of the target operation area on the first image and the color values of the pixel points of the target operation area on the first image, to obtain the subject region and regions other than the subject region;
a region marking unit, configured to mark the subject region and the other regions to obtain the label map;
and a subject region extraction unit, configured to extract, from the first image, the image marked as the subject region on the label map.
7. An image processing terminal, characterized by comprising a display and an image processing apparatus, wherein the display is configured to display a first image obtained by image acquisition of a target object, and the image processing apparatus is configured to execute the image processing method according to any one of claims 1 to 5.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the image processing method according to any one of claims 1 to 5.
9. A computer-readable storage medium storing a computer program that, when executed, causes a processor to perform the image processing method according to any one of claims 1 to 5.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910944150.0A | 2019-09-30 | 2019-09-30 | Image processing method, device, terminal, electronic equipment and readable storage medium
Publications (2)

Publication Number | Publication Date
---|---
CN110751668A | 2020-02-04
CN110751668B | 2022-12-27
Family

ID=69277653

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201910944150.0A (CN110751668B, active) | Image processing method, device, terminal, electronic equipment and readable storage medium | 2019-09-30 | 2019-09-30

Country Status (1)

Country | Link
---|---
CN | CN110751668B
Families Citing this family (3)

Publication Number | Priority Date | Publication Date | Assignee | Title
---|---|---|---|---
CN111640123B | 2020-05-22 | 2023-08-11 | 北京百度网讯科技有限公司 | Method, device, equipment and medium for generating background-free image
CN111754521B | 2020-06-17 | 2024-06-25 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and storage medium
CN112150464B | 2020-10-23 | 2024-01-30 | 腾讯科技(深圳)有限公司 | Image detection method and device, electronic equipment and storage medium
Citations (7)

Publication Number | Priority Date | Publication Date | Assignee | Title
---|---|---|---|---
CN105046689A | 2015-06-24 | 2015-11-11 | 北京工业大学 | Method for fast segmenting interactive stereo image based on multilayer graph structure
CN106327473A | 2016-08-10 | 2017-01-11 | 北京小米移动软件有限公司 | Method and device for acquiring foreground images
CN106355583A | 2016-08-30 | 2017-01-25 | 成都丘钛微电子科技有限公司 | Image processing method and device
CN107301642A | 2017-06-01 | 2017-10-27 | 中国人民解放军国防科学技术大学 | Fully automatic foreground-background separation method based on binocular vision
CN107808137A | 2017-10-31 | 2018-03-16 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer-readable recording medium
CN108848367A | 2018-07-26 | 2018-11-20 | 宁波视睿迪光电有限公司 | Image processing method, device and mobile terminal
CN109389664A | 2017-08-04 | 2019-02-26 | 腾讯科技(深圳)有限公司 | Model texture map rendering method, device and terminal
Family Cites Families (5)

Publication Number | Priority Date | Publication Date | Assignee | Title
---|---|---|---|---
CN103955890B | 2014-05-29 | 2017-01-11 | 浙江工商大学 | Stereoscopic image restoration method
EP3070669A1 | 2015-03-18 | 2016-09-21 | Thomson Licensing | Method and apparatus for color smoothing in an alpha matting process
CN107977940B | 2017-11-30 | 2020-03-17 | Oppo广东移动通信有限公司 | Background blurring processing method, device and equipment
CN109727192B | 2018-12-28 | 2023-06-27 | 北京旷视科技有限公司 | Image processing method and device
CN110059212A | 2019-03-16 | 2019-07-26 | 平安科技(深圳)有限公司 | Image search method, device, equipment and computer readable storage medium
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant