CN115690130B - Image processing method and device - Google Patents

Image processing method and device


Publication number
CN115690130B (application CN202211714136.XA)
Authority
CN (China)
Prior art keywords
target, image, picture, graph, channel number
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211714136.XA
Other languages
Chinese (zh)
Other versions
CN115690130A (en)
Inventor
张伟
张靖
崔涛
贺扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yongliu Technology Co ltd
Original Assignee
Hangzhou Yongliu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Yongliu Technology Co ltd
Priority to CN202211714136.XA
Publication of CN115690130A
Application granted
Publication of CN115690130B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an image processing method and device. The image processing method comprises the following steps: performing target-region segmentation on an acquired picture to obtain the target region of the picture; extracting detail texture from the picture to obtain the detail texture of the target region; superimposing and fusing the target region with the detail texture to obtain a fused image; and adjusting the transparency of the fused image to obtain a photographing-posture template. The scheme provided by the invention helps users take pictures accurately and quickly.

Description

Image processing method and device
Technical Field
The present invention relates to the field of image recognition technologies, and in particular, to an image processing method and apparatus.
Background
In related applications, a user needs to take pictures in certain fixed postures so that the captured images are usable. To help the user align, some applications preset human-body contour lines; however, since every person's build is different, a rough contour line cannot guarantee that the user's posture overlaps the preset contour exactly, and the user has to adjust posture and shooting angle many times.
No effective solution has yet been proposed for this problem in the prior art: photographing is inefficient because the user's posture cannot be made to overlap the preset contour line exactly.
Disclosure of Invention
To solve the above technical problem, embodiments of the present invention provide an image processing method and apparatus, so as to at least solve the low photographing efficiency caused by the fact that, in the prior art, the user's posture cannot be guaranteed to overlap the preset contour line exactly.
The technical scheme of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, including: performing target-region segmentation on an acquired picture to obtain the target region of the picture; extracting detail texture from the picture to obtain the detail texture of the target region; superimposing and fusing the target region with the detail texture to obtain a fused image; and adjusting the transparency of the fused image to obtain a photographing-posture template.
Optionally, performing target-region segmentation on the acquired picture to obtain the target region of the picture includes: acquiring a picture from a user terminal, where the target region contains a human-body part; predicting, with a segmentation algorithm, the probability that each pixel in the picture belongs to a human-body part; and extracting the region formed by the pixels whose probability qualifies them as human-body pixels, to obtain the target region.
Further, optionally, extracting that region to obtain the target region includes: extracting the qualifying pixels to obtain a probability map; converting the probability map to gray scale and applying Gaussian smoothing to obtain a noise-reduced gray-scale map; and applying binary thresholding to the noise-reduced gray-scale map to obtain the target region.
Optionally, extracting the detail texture of the picture to obtain the detail texture of the target region includes: converting the picture to a gray-scale image; differentiating the gray-scale image in the x and y directions with an edge-detection operator to obtain an x-direction derivative map and a y-direction derivative map; superimposing and fusing the two derivative maps to obtain a fused map; and processing the fused map with a preset algorithm to obtain the target detail texture.
Further, optionally, superimposing and fusing the target region with the target detail texture to obtain the fused image includes: superimposing and fusing the target region, the target contour line, and the target detail texture to obtain the fused image.
Optionally, superimposing and fusing the target region, the target contour line, and the target detail texture to obtain the fused image includes: using the target region as a mask to extract target-region information from the detail texture; binarizing the target region to obtain the target contour line; and superimposing and fusing the target contour line with the detail texture to obtain the fused image.
Further, optionally, adjusting the transparency of the fused image to obtain the photographing-posture template includes: setting the foreground pixels of the fused image to white and their transparency to fully transparent; setting the background pixels of the fused image to zero and their transparency to semi-transparent; and taking the transparency-adjusted fused image as the photographing-posture template.
Optionally, the method further comprises: reducing the picture size of the fused image to a fixed value to obtain a reduced fused image; converting the texture map of the fused image into a map with a first channel number, and converting that map into a map with a second channel number; traversing the map with the second channel number, where a designated pixel smaller than a preset value is set to a first value with its transparency modified to a second value, while a designated pixel larger than the preset value keeps its pixel value with its transparency modified to a third value; and enlarging the traversed map back to the picture size of the fused image.
In a second aspect, an embodiment of the present invention provides an image processing apparatus including: a segmentation module, configured to perform target-region segmentation on an acquired picture to obtain the target region of the picture; an extraction module, configured to extract the detail texture of the picture to obtain the detail texture of the target region; a fusion module, configured to superimpose and fuse the target region with the detail texture to obtain a fused image; and an adjustment module, configured to obtain a photographing-posture template by adjusting the transparency of the fused image.
Optionally, the segmentation module includes: an acquisition unit, configured to acquire a picture from a user terminal when the target region contains a human-body part; a prediction unit, configured to predict, with a segmentation algorithm, the probability that each pixel in the picture belongs to a human-body part; and a segmentation unit, configured to extract the region formed by the qualifying pixels to obtain the target region.
The embodiment of the invention thus provides an image processing method and device: the acquired picture is segmented to obtain its target region; the detail texture of the picture is extracted to obtain the detail texture of the target region; the target region and the detail texture are superimposed and fused into a fused image; and the transparency of the fused image is adjusted to obtain a photographing-posture template, which helps the user take pictures accurately and quickly.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
fig. 1 is a flowchart of an image processing method according to a first embodiment of the present invention;
fig. 2 is a schematic diagram of superposition fusion in an image processing method according to a first embodiment of the present invention;
fig. 3 is a flowchart of another image processing method according to a first embodiment of the present invention;
fig. 4 is a schematic diagram of an image processing apparatus according to a second embodiment of the present invention.
Detailed Description
So that those skilled in the art may better understand the present invention, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, embodiments of the invention; all other embodiments obtained by those skilled in the art without inventive effort shall fall within the scope of the invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and in the drawings are used for distinguishing between different objects and not for limiting a particular order.
It should be noted that, the following embodiments of the present invention may be implemented separately or in combination with each other, and the embodiments of the present invention are not limited thereto.
Example 1
In a first aspect, an embodiment of the present invention provides an image processing method, and fig. 1 is a schematic flow chart of an image processing method provided in a first embodiment of the present invention; as shown in fig. 1, the image processing method provided in the embodiment of the present application includes:
step S102, dividing the target area of the acquired picture to obtain the target area of the picture;
optionally, in step S102, performing target-region segmentation on the acquired picture to obtain the target region includes: acquiring a picture from a user terminal, where the target region contains a human-body part; predicting, with a segmentation algorithm, the probability that each pixel in the picture belongs to a human-body part; and extracting the region formed by the qualifying pixels to obtain the target region.
Further, optionally, extracting that region includes: extracting the qualifying pixels to obtain a probability map; converting the probability map to gray scale and applying Gaussian smoothing to obtain a noise-reduced gray-scale map; and applying binary thresholding to the noise-reduced gray-scale map to obtain the target region.
Specifically, the embodiment of the present application is illustrated with a portrait. The target region may be a human-body region: the user first takes and uploads a picture containing a human-body part; the human-body region in the picture is extracted as the foreground, and everything outside it is treated as the background. The specific implementation is as follows:
(1) The user shoots a picture containing any human-body part, which serves as the original picture. Human-body region extraction is then performed: a segmentation algorithm (DeepLab V3+) predicts, for each pixel, the probability that it belongs to a human body. The probability ranges from 0 to 1; the closer a value is to 1, the more likely the pixel is a human-body pixel, and the closer to 0, the more likely it is background. The per-pixel predictions form the probability map referred to above;
(2) The probability map is converted to the 0-255 range and Gaussian blur is applied (kernel size 5x5) to reduce the influence of noise (i.e., the gray-scale conversion and Gaussian processing described above);
(3) Binary thresholding is applied to the noise-reduced gray-scale map with the threshold set to 125 (corresponding to the probability value 0.5 in step (1)); the thresholded image separates the person from the background;
(4) The human-body region of the picture is kept as the foreground, and non-human-body regions are filtered out as background.
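The post-processing in steps (2) through (4) can be sketched as follows. This is a minimal NumPy illustration only: the DeepLab V3+ model is assumed to have already produced the per-pixel probability map, the Gaussian sigma is an assumption (the text fixes only the 5x5 kernel size), and the helper name `extract_body_region` is invented for illustration.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel (5x5 by default, as in the text)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def extract_body_region(prob_map):
    """Turn a per-pixel body-probability map (values in [0, 1]) into a
    binary foreground mask: scale to 0-255, blur, threshold at 125."""
    gray = (prob_map * 255).astype(np.float32)        # step (2): map 0..1 -> 0..255
    k = gaussian_kernel(5, 1.0)
    padded = np.pad(gray, 2, mode='edge')
    blurred = np.zeros_like(gray)
    for dy in range(5):                               # direct 5x5 convolution
        for dx in range(5):
            blurred += k[dy, dx] * padded[dy:dy + gray.shape[0],
                                          dx:dx + gray.shape[1]]
    # step (3): threshold 125 corresponds to probability 0.5
    return (blurred > 125).astype(np.uint8) * 255
```

The returned mask (255 = body, 0 = background) is what step (4) uses to keep the foreground and filter the background.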
Step S104, extracting a target detail texture of the picture to obtain the target detail texture of the target area;
optionally, in step S104, extracting the detail texture of the picture to obtain the detail texture of the target region includes: converting the picture to a gray-scale image; differentiating the gray-scale image in the x and y directions with an edge-detection operator to obtain an x-direction derivative map and a y-direction derivative map; superimposing and fusing the two derivative maps to obtain a fused map; and processing the fused map with a preset algorithm to obtain the target detail texture.
Specifically, after the human-body part has been extracted as the foreground, edge extraction is applied to it with an edge-detection operator (Sobel) to obtain the detail texture of the human body (i.e., the target detail texture of this application). The specific implementation is as follows:
(1) The original image uploaded by the user is first converted to a gray-scale image;
(2) The Sobel operator computes the first derivative of the gray-scale image in the x and y directions separately, yielding two first-derivative maps;
(3) The two first-derivative maps are then superimposed and fused;
(4) The fused map is the detail-texture map produced by the Sobel algorithm; at this point it contains both the detail texture of the human body and the texture of the background.
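Steps (1) through (4) amount to the classic Sobel pipeline. The sketch below is illustrative: the 0.5/0.5 weighting used to fuse the two derivative maps is an assumption (the text does not fix the fusion formula), and `conv2d`/`detail_texture` are invented helper names.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T  # transpose gives the y-direction kernel

def conv2d(img, kernel):
    """'Same'-size 2-D cross-correlation with edge padding (kernel not flipped)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2), mode='edge')
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

def detail_texture(gray):
    """Steps (2)-(3): first derivatives in x and y, then weighted fusion."""
    gx = conv2d(gray.astype(np.float32), SOBEL_X)   # x-direction derivative map
    gy = conv2d(gray.astype(np.float32), SOBEL_Y)   # y-direction derivative map
    fused = 0.5 * np.abs(gx) + 0.5 * np.abs(gy)     # superimpose the two maps
    return np.clip(fused, 0, 255).astype(np.uint8)
```

On a vertical step edge the x-derivative dominates; on a horizontal edge the y-derivative does, which is why both maps are needed before fusion.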
Step S106, overlapping and fusing the target area and the target detail texture to obtain a fused image;
optionally, in step S106, superimposing and fusing the target region with the detail texture to obtain the fused image includes: superimposing and fusing the target region, the target contour line, and the target detail texture to obtain the fused image.
Optionally, this superimposing and fusing includes: using the target region as a mask to extract target-region information from the detail texture; binarizing the target region to obtain the target contour line; and superimposing and fusing the target contour line with the detail texture to obtain the fused image.
Specifically, fig. 2 is a schematic diagram of superposition and fusion in an image processing method according to the first embodiment of the present invention. The portrait used in fig. 2 comes from a published database, whose information is given in Table 1:
[Table 1: source information of the published image database]
it should be noted that the portrait shown in fig. 2 is used only to illustrate the superposition-and-fusion process of the image processing method provided in this embodiment, and does not limit the application.
As shown in fig. 2, the human-body region obtained by the segmentation algorithm, the human-body contour line (i.e., the target contour line), and the human-body detail-texture lines obtained by the edge-detection algorithm (i.e., the target detail texture) are superimposed, so that the rough contour and the fine texture of the body are merged into one whole. The specific implementation is as follows:
(1) Using the binarized image from step S102 as a mask over the original (possibly color) image, the human-body region information (i.e., the target-region information) is extracted from the detail-texture map of step S104; after this step only the detail texture of the human-body region is retained and the background is filtered out;
(2) The binarized picture from step S102 is binarized again to obtain the contour of the human body (note the distinction: the human-body region of step S102 is the slice of the picture occupied by the body, whereas the contour is only the body's outer outline in the image);
(3) The human-body contour line (rough texture) and the human-body texture lines (fine texture) are superimposed to obtain the complete texture representation of the body.
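A compact NumPy sketch of steps (1) through (3). The 4-neighbour contour extraction and the pixel-wise maximum used for superposition are illustrative choices, since the patent does not specify the exact operators, and `fuse_contour_and_texture` is an invented name.

```python
import numpy as np

def fuse_contour_and_texture(body_mask, texture):
    """Superimpose the rough body contour onto the fine detail texture.
    body_mask: uint8 binary mask (255 = body); texture: uint8 texture map."""
    # step (1): use the binarized region as a mask, keep texture inside the body
    body_texture = np.where(body_mask == 255, texture, 0).astype(np.uint8)
    # step (2): contour = body pixels with at least one non-body 4-neighbour
    m = body_mask == 255
    interior = np.zeros_like(m)
    interior[1:-1, 1:-1] = (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
                            & m[1:-1, :-2] & m[1:-1, 2:])
    contour = (m & ~interior).astype(np.uint8) * 255
    # step (3): fuse contour line (rough) and detail texture (fine) pixel-wise
    return np.maximum(body_texture, contour)
```

The result carries both the outer outline (bright contour pixels) and the interior texture, which is exactly the combined representation fig. 2 illustrates.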
Step S108, obtaining a photographing posture template by adjusting the transparency of the fused image.
Optionally, in step S108, adjusting the transparency of the fused image to obtain the photographing-posture template includes: setting the foreground pixels of the fused image to white and their transparency to fully transparent; setting the background pixels to zero and their transparency to semi-transparent; and taking the transparency-adjusted fused image as the photographing-posture template.
Optionally, the image processing method provided in the embodiment of the present application further includes: reducing the picture size of the fused image to a fixed value to obtain a reduced fused image; converting the texture map of the fused image into a map with a first channel number, and converting that map into a map with a second channel number; traversing the map with the second channel number, where a designated pixel smaller than a preset value is set to a first value with its transparency modified to a second value, while a designated pixel larger than the preset value keeps its value with its transparency modified to a third value; and enlarging the traversed map back to the picture size of the fused image.
Specifically, the foreground pixels are set to white with full transparency, the background pixels are set to zero with semi-transparency, and the resulting image serves as the user's personal, customized body outline, used as the posture template for subsequent photographing. The specific implementation steps are as follows:
(1) The picture is scaled down to a fixed size of 480 x 640 (the fixed value referred to above) to reduce the subsequent pixel-traversal time;
(2) The texture map (gray-scale map) fused in step S106 is converted into a 3-channel RGB map (the map with the first channel number), and then into a 4-channel map that includes a transparency channel (the map with the second channel number); the conversion code is image_edge.convert('RGBA'). The 4-channel RGBA map adds an alpha (transparency) channel to the 3 RGB channels, and subsequent image fusion also operates on this alpha channel.
(3) The converted picture is traversed:
a. if a pixel value is less than 50 (the preset value), the pixel is set to 255 (the first value) and its transparency is modified to 125 (the second value);
b. if a pixel value is greater than 50, the pixel value is kept unchanged and its transparency is modified to 255 (the third value);
The purpose of the traversal is to operate on the transparency value and pixel value of every pixel.
(4) After the traversal, the transparency map is enlarged back to the original size (the size before compression to 480 x 640) so that it can overlap the original picture exactly.
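Steps (2) and (3) can be sketched as a vectorized NumPy equivalent of the per-pixel traversal (the resize to and from 480 x 640 in steps (1) and (4) is omitted for brevity, the function name `make_pose_template` is an invented illustration, and pixels exactly equal to 50 fall into the "keep" branch here since the text only specifies the less-than and greater-than cases):

```python
import numpy as np

def make_pose_template(texture_gray):
    """Convert the fused gray texture map into an RGBA posture template:
    gray -> 3-channel RGB -> 4-channel RGBA, then apply rules (3)a and (3)b."""
    h, w = texture_gray.shape
    rgb = np.repeat(texture_gray[:, :, None], 3, axis=2)     # 1 -> 3 channels (RGB)
    alpha = np.full((h, w, 1), 255, dtype=np.uint8)          # add alpha channel
    rgba = np.concatenate([rgb, alpha], axis=2)              # 3 -> 4 channels (RGBA)
    dark = texture_gray < 50          # preset value 50: background / weak edges
    rgba[dark, :3] = 255              # rule a: pixel set to 255 (first value)
    rgba[dark, 3] = 125               # rule a: semi-transparent (second value)
    rgba[~dark, 3] = 255              # rule b: pixel kept, fully opaque (third value)
    return rgba
```

Operating on whole boolean masks instead of a Python loop gives the same result as the traversal described above while being far faster, which is also why the text shrinks the image before traversing.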
As shown in fig. 2, the photographing pose template may be ultimately used in a photographing function (or photographing APP) of the mobile terminal.
In summary, with reference to steps S102 to S108, fig. 3 is a flowchart of another image processing method according to the first embodiment of the present invention. As shown in fig. 3, the method specifically includes:
S1, the user takes and uploads a picture. The image to be processed may be a picture the user has already uploaded or a picture the user shoots in real time; the uploaded-picture case is used as the example here, and details are not repeated.
S2, the human-body region is segmented with a deep-learning algorithm (DeepLab V3+);
S3, the human-body detail texture is extracted with an edge-detection algorithm (Sobel);
S4, the human-body contour and the detail texture are superimposed and fused;
S5, the transparency of the human-body region is set to fully transparent, and the pixels outside the human-body region are set to zero and made semi-transparent;
S6, the transparency-adjusted image is used as the template for subsequent photographing postures.
The image processing method provided in this embodiment works not only under natural light but can also segment the human-body region and extract the body's detail texture accurately under special illumination (such as a Wood's lamp). Unlike the previously used preset contour frames, the contour obtained by this method is customized to the individual user, so it can overlap the user's posture 100%; the contour also carries the body's detail-texture information, making it more complete and easier for the user to align with. Because each user's contour template is personal, the user can align with the template posture quickly, without adjusting posture and shooting angle repeatedly. And unlike related techniques that assist photographing only under natural light, this method can assist photographing under any illumination, segmenting the body contour and extracting the detail texture at the same time, which makes it more universal in practice.
The embodiment of the invention thus provides an image processing method: the acquired picture is segmented to obtain its target region; the detail texture is extracted to obtain the detail texture of the target region; the target region and the detail texture are superimposed and fused into a fused image; and the transparency of the fused image is adjusted to obtain a photographing-posture template, helping the user take pictures accurately and quickly.
Example two
In a second aspect, an embodiment of the present invention provides an image processing apparatus; fig. 4 is a schematic diagram of the apparatus according to the second embodiment. As shown in fig. 4, the apparatus includes: a segmentation module 42, configured to perform target-region segmentation on an acquired picture to obtain the target region of the picture; an extraction module 44, configured to extract the detail texture of the picture to obtain the detail texture of the target region; a fusion module 46, configured to superimpose and fuse the target region with the detail texture to obtain a fused image; and an adjustment module 48, configured to obtain a photographing-posture template by adjusting the transparency of the fused image.
Optionally, the segmentation module 42 includes: an acquisition unit, configured to acquire a picture from a user terminal when the target region contains a human-body part; a prediction unit, configured to predict, with a segmentation algorithm, the probability that each pixel in the picture belongs to a human-body part; and a segmentation unit, configured to extract the region formed by the qualifying pixels to obtain the target region.
The embodiment of the invention thus provides an image processing device: the acquired picture is segmented to obtain its target region; the detail texture is extracted; the target region and the detail texture are superimposed and fused into a fused image; and the transparency of the fused image is adjusted to obtain a photographing-posture template, helping the user take pictures accurately and quickly.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the present invention.

Claims (9)

1. An image processing method, comprising:
carrying out target region segmentation on the acquired picture to obtain a target region of the picture;
extracting a target detail texture of the picture to obtain a target detail texture of the target area;
overlapping and fusing the target area and the target detail texture to obtain a fused image;
obtaining a photographing posture template by adjusting the transparency of the fused image;
wherein the method further comprises: reducing the picture size of the fused image to a fixed value to obtain a reduced fused image; converting the texture map of the fused image into a graph of a first channel number, and converting the graph of the first channel number into a graph of a second channel number; traversing the graph of the second channel number, wherein if a designated pixel in the graph of the second channel number is smaller than a preset value, the designated pixel is set to a first value and its transparency is modified to a second value, and if a designated pixel in the graph of the second channel number is larger than the preset value, the pixel value of the designated pixel is unchanged and its transparency is modified to a third value; and amplifying the traversed graph of the second channel number to the picture size of the fused image;
wherein the graph of the first channel number is an RGB graph, the graph of the second channel number is an RGBA graph, the preset value is 50, the first value is 255, the second value is 125, and the third value is 255.
2. The image processing method according to claim 1, wherein the performing target region segmentation on the acquired picture to obtain the target region of the picture includes:
acquiring a picture in a user terminal under the condition that the target area comprises a human body part;
predicting the probability of whether each pixel in the picture is a human body part or not through a segmentation algorithm;
and extracting the region where the pixels meeting the probability of the human body part are located to obtain the target region.
3. The image processing method according to claim 2, wherein extracting the region where the pixel satisfying the probability of the human body part is located, to obtain the target region includes:
extracting the region where the pixels meeting the probability of the human body part are located to obtain a probability map;
performing gray level conversion on the probability map, and performing Gaussian processing on the probability map after gray level conversion to obtain a gray level map after noise reduction;
and performing binarization threshold processing on the noise-reduced gray level image to obtain the target area.
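The grayscale conversion, Gaussian noise reduction, and binarization thresholding of claim 3 might be sketched as follows. This is an illustration only: the 3×3 kernel, the 0.5 probability threshold, and the edge padding are assumed values the patent does not specify.

```python
import numpy as np

def region_from_probability(prob, thresh=0.5):
    """Illustrative sketch of claim 3: turn a per-pixel human-body
    probability map into a binary target region via grayscale scaling,
    Gaussian smoothing, and threshold binarization."""
    gray = np.asarray(prob, dtype=np.float64) * 255.0   # gray conversion
    # 3x3 Gaussian kernel for noise reduction (assumed size)
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.float64) / 16.0
    padded = np.pad(gray, 1, mode="edge")
    smooth = sum(k[i, j] * padded[i:i + gray.shape[0], j:j + gray.shape[1]]
                 for i in range(3) for j in range(3))
    # binarization: pixels above the threshold form the target region
    return (smooth > thresh * 255.0).astype(np.uint8) * 255
```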
4. An image processing method according to any one of claims 1 to 3, wherein said extracting the target detail texture of the picture to obtain the target detail texture of the target region includes:
converting the picture into a gray scale map;
respectively deriving the gray level image in the x-axis direction and the y-axis direction by an edge detection algorithm to obtain a derivative image in the x-axis direction and a derivative image in the y-axis direction;
superposing and fusing the derivative image in the x-axis direction and the derivative image in the y-axis direction to obtain a fused graph;
and processing the fused graph through a preset algorithm to obtain the target detail texture.
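The x-axis/y-axis derivation and superposition fusion of claim 4 resemble a Sobel-style edge pass. A minimal NumPy sketch follows; the Sobel kernels and the equal 0.5/0.5 fusion weights are assumptions, since the patent names no specific edge operator or weights.

```python
import numpy as np

def detail_texture(gray):
    """Illustrative sketch of claim 4: derive the grayscale image along
    the x and y axes with Sobel-style kernels, then superpose and fuse
    the two derivative images by weighted addition."""
    g = np.asarray(gray, dtype=np.float64)
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    sy = sx.T                                    # y-direction kernel
    p = np.pad(g, 1, mode="edge")
    conv = lambda k: sum(k[i, j] * p[i:i + g.shape[0], j:j + g.shape[1]]
                         for i in range(3) for j in range(3))
    dx, dy = np.abs(conv(sx)), np.abs(conv(sy))  # derivative images
    fused = 0.5 * dx + 0.5 * dy                  # superposition fusion
    return np.clip(fused, 0, 255).astype(np.uint8)
```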
5. The method according to claim 4, wherein the performing superposition fusion on the target region and the target detail texture to obtain a fused image includes:
and superposing and fusing the target area, the target contour edge line and the target detail texture to obtain the fused image.
6. The image processing method according to claim 5, wherein the performing superposition fusion on the target area, the target contour edge line, and the target detail texture to obtain the fused image includes:
extracting target area information from the target detail texture by taking the target area as a mask;
performing binarization processing on the target area to obtain a target contour line;
and superposing and fusing the target contour line and the target detail texture to obtain the fused image.
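The masking and contour fusion of claim 6 can be sketched as below. The contour extraction here is a simple 4-neighborhood boundary difference, an assumption standing in for whatever contour tracing the patent actually uses; the fusion by per-pixel maximum is likewise illustrative.

```python
import numpy as np

def fuse_with_contour(region_mask, texture):
    """Illustrative sketch of claim 6: use the binary target region as a
    mask over the detail texture, derive a contour line from the region,
    and superpose the contour onto the masked texture."""
    mask = np.asarray(region_mask, dtype=bool)
    masked = np.where(mask, np.asarray(texture, dtype=np.uint8), 0)
    # interior = mask pixels whose 4 neighbors are all inside the mask
    interior = mask.copy()
    interior[1:, :] &= mask[:-1, :]; interior[:-1, :] &= mask[1:, :]
    interior[:, 1:] &= mask[:, :-1]; interior[:, :-1] &= mask[:, 1:]
    contour = mask & ~interior                  # boundary pixels only
    return np.maximum(masked, contour.astype(np.uint8) * 255)  # fuse
```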
7. The image processing method according to claim 5, wherein the obtaining a photographing posture template by adjusting transparency of the fused image includes:
setting the pixels of the foreground part of the fused image to white, and setting their transparency to fully transparent;
setting the background part of the fused image to zero, and setting its transparency to semitransparent;
and determining the fused image with the transparency adjusted as the photographing posture template.
8. An image processing apparatus, comprising:
the segmentation module is used for carrying out target region segmentation on the acquired picture to obtain a target region of the picture;
the extraction module is used for extracting the target detail texture of the picture to obtain the target detail texture of the target area;
the fusion module is used for carrying out superposition fusion on the target area and the target detail texture to obtain a fused image;
the adjusting module is used for obtaining a photographing posture template by adjusting the transparency of the fused image;
wherein the apparatus is further configured to: reduce the picture size of the fused image to a fixed value to obtain a reduced fused image; convert the texture map of the fused image into a graph of a first channel number, and convert the graph of the first channel number into a graph of a second channel number; traverse the graph of the second channel number, wherein if a designated pixel in the graph of the second channel number is smaller than a preset value, the designated pixel is set to a first value and its transparency is modified to a second value, and if a designated pixel in the graph of the second channel number is larger than the preset value, the pixel value of the designated pixel is unchanged and its transparency is modified to a third value; and amplify the traversed graph of the second channel number to the picture size of the fused image;
wherein the graph of the first channel number is an RGB graph, the graph of the second channel number is an RGBA graph, the preset value is 50, the first value is 255, the second value is 125, and the third value is 255.
9. The image processing apparatus of claim 8, wherein the segmentation module comprises:
an acquisition unit configured to acquire a picture in a user terminal in a case where the target area includes a human body part;
the prediction unit is used for predicting the probability of whether each pixel in the picture is a human body part or not through a segmentation algorithm;
and the segmentation unit is used for extracting the region where the pixels meeting the probability of the human body part are located to obtain the target region.
CN202211714136.XA 2022-12-30 2022-12-30 Image processing method and device Active CN115690130B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211714136.XA CN115690130B (en) 2022-12-30 2022-12-30 Image processing method and device


Publications (2)

Publication Number Publication Date
CN115690130A CN115690130A (en) 2023-02-03
CN115690130B true CN115690130B (en) 2023-06-27

Family

ID=85057576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211714136.XA Active CN115690130B (en) 2022-12-30 2022-12-30 Image processing method and device

Country Status (1)

Country Link
CN (1) CN115690130B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117095299B (en) * 2023-10-18 2024-01-26 浙江省测绘科学技术研究院 Grain crop extraction method, system, equipment and medium for crushing cultivation area

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104184934A (en) * 2013-05-23 2014-12-03 北京千橡网景科技发展有限公司 Method and apparatus for providing auxiliary reference for shooting
CN107742311A (en) * 2017-09-29 2018-02-27 北京易达图灵科技有限公司 A kind of method and device of vision positioning
CN110544258A (en) * 2019-08-30 2019-12-06 北京海益同展信息科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN111862110A (en) * 2020-06-30 2020-10-30 辽宁向日葵教育科技有限公司 Green curtain image matting method, system, equipment and readable storage medium
WO2020224424A1 (en) * 2019-05-07 2020-11-12 腾讯科技(深圳)有限公司 Image processing method and apparatus, computer readable storage medium, and computer device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080088718A1 (en) * 2006-10-17 2008-04-17 Cazier Robert P Template Creator For Digital Cameras
JP4834606B2 (en) * 2007-05-16 2011-12-14 キヤノン株式会社 Image processing method and image processing apparatus
CN101841642B (en) * 2010-04-22 2013-07-24 南京航空航天大学 Edge detection method based on fractional-order signal processing
CN106202086B (en) * 2015-05-04 2020-02-28 阿里巴巴集团控股有限公司 Picture processing and obtaining method, device and system
CN107358593B (en) * 2017-06-16 2020-06-26 Oppo广东移动通信有限公司 Image forming method and apparatus
CN109842757B (en) * 2017-11-29 2021-08-24 中兴通讯股份有限公司 Image processing method and device
CN109767401B (en) * 2019-01-15 2021-02-12 深圳看到科技有限公司 Picture optimization method, device, terminal and corresponding storage medium
CN110399849B (en) * 2019-07-30 2021-07-27 北京市商汤科技开发有限公司 Image processing method and device, processor, electronic device and storage medium
CN112330688A (en) * 2020-11-02 2021-02-05 腾讯科技(深圳)有限公司 Image processing method and device based on artificial intelligence and computer equipment
CN112866577B (en) * 2021-01-20 2022-05-27 腾讯科技(深圳)有限公司 Image processing method and device, computer readable medium and electronic equipment
CN113393430B (en) * 2021-06-09 2023-05-12 东方电气集团科学技术研究院有限公司 Thermal imaging image enhancement training method and device for fan blade defect detection




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Zhang Wei

Inventor after: Zhang Jing

Inventor after: Cui Tao

Inventor after: He Yang

Inventor before: Zhang Wei

Inventor before: Zhang Jing

Inventor before: Cui Tao

Inventor before: He Yang

GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: An image processing method and device

Granted publication date: 20230627

Pledgee: Hangzhou branch of Bank of Nanjing Co.,Ltd.

Pledgor: Hangzhou Yongliu Technology Co.,Ltd.

Registration number: Y2024980015252