CN117670868A - Image processing method, device, electronic equipment and storage medium - Google Patents

Image processing method, device, electronic equipment and storage medium

Info

Publication number
CN117670868A
Authority
CN
China
Prior art keywords
target image
transparent
image
obtaining
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311865203.2A
Other languages
Chinese (zh)
Inventor
肖启华
莫志坚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202311865203.2A priority Critical patent/CN117670868A/en
Publication of CN117670868A publication Critical patent/CN117670868A/en
Pending legal-status Critical Current

Abstract

The application discloses an image processing method, an image processing device, an electronic device, and a storage medium. The method includes: obtaining a transparency determination result corresponding to a target image, wherein the transparency determination result indicates whether a transparent region exists in the target image; and if the transparency determination result indicates that a transparent region exists in the target image, obtaining the non-transparent region in the target image by using a background image of the display region where the target image is located.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a storage medium.
Background
Currently, in some scenarios it is necessary to perform content conversion on an image, such as converting its style, lines, and the like.
However, when a transparent region exists in the image, part of the desktop region where the image is located (the desktop region visible through the transparent region of the image) may be recognized as image content during conversion, causing errors in the image conversion.
Therefore, a technical solution capable of identifying the non-transparent region in an image is needed.
Disclosure of Invention
In view of this, the present application provides an image processing method, apparatus, electronic device, and storage medium, as follows:
an image processing method, comprising:
obtaining a transparency determination result corresponding to a target image, wherein the transparency determination result indicates whether a transparent region exists in the target image;
and if the transparency determination result indicates that a transparent region exists in the target image, obtaining the non-transparent region in the target image by using a background image of the display region where the target image is located.
In the above method, preferably, obtaining the transparency determination result corresponding to the target image includes:
obtaining a predicted size of the target image;
comparing the predicted size with the actual size of the target image to obtain the transparency determination result, wherein the actual size is obtained from image attributes of the target image;
wherein if the difference between the predicted size and the actual size is greater than or equal to a first threshold, the transparency determination result indicates that a transparent region exists in the target image;
and if the difference between the predicted size and the actual size is smaller than the first threshold, the transparency determination result indicates that no transparent region exists in the target image.
In the above method, preferably, obtaining the predicted size of the target image includes:
reading pixel points in the target image;
and obtaining the predicted size of the target image according to the read pixel points.
In the above method, preferably, obtaining the transparency determination result corresponding to the target image includes:
obtaining the transparency determination result corresponding to the target image according to the target image and the background image.
In the above method, preferably, obtaining the transparency determination result corresponding to the target image according to the target image and the background image includes:
comparing the pixel points in the target image with the pixel points in the background image by pixel position to obtain a comparison result;
obtaining the transparency determination result corresponding to the target image according to the comparison result;
wherein, if the comparison result indicates that the overlapping pixel area between the target image and the background image is greater than or equal to a second threshold, the transparency determination result indicates that a transparent region exists in the target image;
if the comparison result indicates that the overlapping pixel area between the target image and the background image is smaller than the second threshold, the transparency determination result indicates that no transparent region exists in the target image;
the overlapping pixel area is the area formed by the pixel points that have the same value at the same pixel position in the two images.
In the above method, preferably, obtaining the non-transparent region in the target image by using the background image of the display region where the target image is located includes:
when a first background image is displayed in the display region, reading pixel points in the target image to obtain a first pixel point set;
when a second background image is displayed in the display region, reading pixel points in the target image to obtain a second pixel point set, wherein the second background image and the first background image have different pixel values at corresponding pixel positions;
and comparing the first pixel point set with the second pixel point set to obtain a target region formed by identical pixel points, wherein the target region is the non-transparent region in the target image.
In the above method, preferably, the method further comprises:
performing, according to target input content, image feature adjustment on the non-transparent region in the target image through an image processing model to obtain a new image.
An image processing apparatus comprising:
a transparency determination unit, configured to obtain a transparency determination result corresponding to a target image, wherein the transparency determination result indicates whether a transparent region exists in the target image;
and a non-transparent region obtaining unit, configured to obtain, if the transparency determination result indicates that a transparent region exists in the target image, the non-transparent region in the target image by using a background image of the display region where the target image is located.
An electronic device, comprising:
a memory for storing a computer program and data resulting from the execution of the computer program;
a processor for executing the computer program to implement: obtaining a transparency determination result corresponding to a target image, wherein the transparency determination result indicates whether a transparent region exists in the target image; and if the transparency determination result indicates that a transparent region exists in the target image, obtaining the non-transparent region in the target image by using a background image of the display region where the target image is located.
A storage medium storing a computer program that, when executed, implements:
obtaining a transparency determination result corresponding to a target image, wherein the transparency determination result indicates whether a transparent region exists in the target image;
and if the transparency determination result indicates that a transparent region exists in the target image, obtaining the non-transparent region in the target image by using a background image of the display region where the target image is located.
According to the above technical solution, in the image processing method, apparatus, electronic device, and storage medium disclosed in the application, the transparency determination result corresponding to the target image is obtained; when the transparency determination result indicates that a transparent region exists in the target image, the background image of the display region where the target image is located is used to obtain the non-transparent region in the target image, so that the corresponding image processing can be performed. Therefore, when a transparent region is determined to exist in the target image, the non-transparent region can be identified by using the background image, which avoids processing errors when the target image is processed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an image processing method according to a first embodiment of the present application;
FIG. 2 is a diagram of an example target image including transparent regions and non-transparent regions;
fig. 3 is a flowchart of obtaining a transparent judgment result in an image processing method according to a first embodiment of the present application;
fig. 4 is another flowchart of obtaining a transparency determination result in an image processing method according to the first embodiment of the present application;
fig. 5 is a flowchart of an image processing method according to an embodiment of the present application to obtain a non-transparent area;
fig. 6 is an example diagram of switching background images for a target image containing transparent and non-transparent regions;
FIG. 7 is another flowchart of an image processing method according to the first embodiment of the present application;
fig. 8 is a schematic structural diagram of an image processing apparatus according to a second embodiment of the present disclosure;
fig. 9 is another schematic structural diagram of an image processing apparatus according to a second embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an electronic device according to a third embodiment of the present application;
FIG. 11 is a flowchart of a scheme for implementing image processing by image size analysis and comparison in an image-to-image conversion scenario;
Fig. 12 is a flowchart of a scheme for implementing image processing by image background analysis in an image-to-image conversion scenario.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Referring to fig. 1, a flowchart of an implementation of an image processing method according to an embodiment of the present application is shown, and the method may be applied to an electronic device capable of performing image processing, such as a computer or a server. The technical scheme in the embodiment is mainly used for identifying the non-transparent area in the image so as to facilitate image processing.
Specifically, the method in this embodiment may include the following steps:
step 101: and obtaining a transparent judgment result corresponding to the target image.
The transparent judging result represents whether a transparent area exists in the target image.
The transparent region refers to an image region in which pixel values of pixel points in the target image are empty. For example, as shown in fig. 2, there are transparent regions and non-transparent regions in the target image.
Step 102: judging whether the transparency determination result indicates that a transparent region exists in the target image; if so, executing step 103.
Step 103: obtaining the non-transparent region in the target image by using the background image of the display area where the target image is located.
For example, as shown in fig. 2, the display area where the target image is located is the desktop area where the target image is placed, and the background image is the desktop image. Because the transparent region of the target image leaves part of the pixels of the background image unblocked, while the non-transparent region of the target image blocks part of the pixels of the background image, in this embodiment the transparent region and the non-transparent region in the target image can be obtained by using the background image.
As can be seen from the above technical solution, in the image processing method provided in the first embodiment of the present application, the transparency determination result corresponding to the target image is obtained; when the transparency determination result indicates that a transparent region exists in the target image, the background image of the display region where the target image is located is used to obtain the non-transparent region in the target image, so that the corresponding image processing can be performed. Therefore, in this embodiment, when a transparent region is determined to exist in the target image, the non-transparent region can be identified by using the background image, thereby avoiding processing errors when the target image is processed.
In one implementation, the transparency determination result corresponding to the target image in step 101 may be obtained as shown in fig. 3:
step 301: a predicted size of the target image is obtained.
The target image also has an actual size, which can be read from the image attributes of the target image, such as 2M or 10M.
It should be noted that the predicted size obtained in step 301 is different from the actual size of the target image: the predicted size is an estimate obtained by analyzing the image content. Whether the two sizes match depends on whether a transparent region exists in the target image. For example, if a transparent region exists in the target image, the predicted size of the target image will be noticeably larger than the actual size; if no transparent region exists, the difference between the predicted size and the actual size is small, i.e., smaller than a certain threshold that allows for the error of the size prediction.
Specifically, in step 301, first, a pixel point in a target image may be read, and then, according to the read pixel point, a predicted size of the target image may be obtained.
Step 302: comparing the predicted size with the actual size of the target image to obtain the transparency determination result.
If the difference between the predicted size and the actual size is greater than or equal to a first threshold, the transparency determination result indicates that a transparent region exists in the target image; if the difference between the predicted size and the actual size is less than the first threshold, the transparency determination result indicates that no transparent region exists in the target image.
For example, if the predicted size of the target image is 4.5M, which is significantly greater than the actual size of 3M (i.e., the difference of 1.5M exceeds the first threshold of 0.2M), a transparent region exists in the target image; if the predicted size is 2.9M or 3.1M, it differs from the actual size of 3M by less than 0.2M, and no transparent region exists in the target image.
It should be noted that, in this embodiment, the first threshold may be set according to the sensitivity requirement, taking into consideration the error of the size prediction.
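As a concrete illustration, the following is a minimal sketch of this size-comparison check in Python using Pillow and NumPy. The function name, the default threshold, and the assumed compression ratio used to turn the read pixel points into a predicted size are illustrative assumptions rather than part of the disclosed method; they show only one possible way to realize steps 301 and 302.

import os

import numpy as np
from PIL import Image

def has_transparent_region_by_size(image_path,
                                   first_threshold=0.2 * 1024 * 1024,
                                   assumed_compression_ratio=0.35):
    """Decide whether a transparent region exists via size comparison.

    The predicted size is estimated from the read pixel points (step 301);
    here it is the size of the decoded pixel buffer scaled by an assumed
    typical compression ratio. The actual size is taken from the image
    attributes, i.e. the stored file size.
    """
    pixels = np.asarray(Image.open(image_path))    # read the pixel points
    predicted_size = pixels.nbytes * assumed_compression_ratio
    actual_size = os.path.getsize(image_path)      # actual size attribute
    # A transparent region contributes almost no stored content, so the
    # actual size falls well below the predicted size (step 302).
    return (predicted_size - actual_size) >= first_threshold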
In another implementation, the transparency determination result corresponding to the target image in step 101 may also be obtained according to the target image and the background image, as shown in fig. 4:
Step 401: comparing the pixel points in the target image with the pixel points in the background image by pixel position to obtain a comparison result.
The comparison result indicates the region where the pixel points that coincide between the target image and the background image are located, i.e., the overlapping pixel area, which is the region formed by the pixel points that have the same value at the same pixel position in the target image and the background image. If a transparent region exists in the target image, the value of each pixel point in that transparent region equals the value of the pixel point at the corresponding position in the background image. Pixel points whose values differ between the target image and the background image at the same position belong to the non-transparent region; these pixel points do not coincide with the background image.
Step 402: obtaining the transparency determination result corresponding to the target image according to the comparison result.
If the comparison result indicates that the overlapping pixel area between the target image and the background image is greater than or equal to a second threshold, the transparency determination result indicates that a transparent region exists in the target image. If the comparison result indicates that the overlapping pixel area between the target image and the background image is smaller than the second threshold, the transparency determination result indicates that no transparent region exists in the target image.
Note that, in this embodiment, the pixel point comparison error is considered, and the second threshold is set according to the sensitivity requirement. If the pixel area coinciding between the target image and the background image is large, i.e. greater than or equal to the second threshold value, it may be determined that the target image has a transparent area, and if the pixel area coinciding between the target image and the background image is small, i.e. less than the second threshold value, it may be determined that the target image does not have a transparent area.
For example, as shown in fig. 2, the pixel points of the target image are compared with the pixel points at the same positions in the desktop image. The comparison yields: pixel points a, which have the same values at the same positions; pixel points b, which have different values at the same positions; and pixel points c, which lie at the remaining positions. The region formed by the pixel points a is the overlapping pixel area and corresponds to the transparent region in the target image; the region formed by the pixel points b does not coincide with the background image and corresponds to the non-transparent region in the target image; and the region formed by the pixel points c is the part of the background image outside the target image. If the area formed by the pixel points a exceeds the second threshold, a transparent region exists in the target image; if that area is small, no transparent region exists in the target image.
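A minimal sketch of this pixel-by-pixel comparison is given below, assuming the target image and the corresponding patch of the desktop background are already available as NumPy arrays of the same shape; the function name and the default threshold (expressed as a fraction of the total pixel count) are illustrative assumptions.

import numpy as np

def has_transparent_region_by_overlap(target_rgb, background_rgb,
                                      second_threshold=0.3):
    """Steps 401 and 402: compare the two images position by position and
    decide on transparency from the size of the overlapping pixel area."""
    if target_rgb.shape != background_rgb.shape:
        raise ValueError("target and background patches must align")
    # Pixel points with identical values at the same position form the
    # overlapping pixel area: where the target image is transparent, the
    # desktop background shows through unchanged (the pixel points a).
    overlapping = np.all(target_rgb == background_rgb, axis=-1)
    return overlapping.mean() >= second_threshold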
In one implementation, when the background image of the display area where the target image is located is used in step 103 to obtain the non-transparent region in the target image, this may be achieved as shown in fig. 5:
Step 501: when the first background image is displayed in the display area, reading the pixel points in the target image to obtain a first pixel point set.
The first pixel point set comprises the pixel points of the non-transparent region of the target image and, within the transparent region, the pixel points of the first background image.
Step 502: when the second background image is displayed in the display area, reading the pixel points in the target image to obtain a second pixel point set.
The second pixel point set comprises the pixel points of the non-transparent region of the target image and, within the transparent region, the pixel points of the second background image.
The second background image and the first background image have different pixel values at corresponding pixel positions. For example, the first background image may be an image whose pixel values are all 0 and the second background image an image whose pixel values are all 255, so that the two images differ at every pixel position.
Step 503: comparing the first pixel point set with the second pixel point set to obtain a target region formed by identical pixel points, wherein the target region is the non-transparent region in the target image.
Specifically, in step 503, the pixel values of corresponding pixel points in the first and second pixel point sets are compared at the same positions, and the pixel points whose values are identical are screened out; the region formed by these identical pixel points is the target region, i.e., the non-transparent region in the target image.
For example, taking the target image shown in fig. 2 as an example, as shown in fig. 6, the background image in the display area is switched so that a first pixel point set and a second pixel point set are read; both sets include the pixel points of the non-transparent region in the target image. Because the first background image and the second background image have different pixel values at corresponding positions, a pixel point whose value differs between the two sets at the same position is a pixel point a of the transparent region in the target image, whereas a pixel point whose value is identical in both sets at the same position is a pixel point b of the non-transparent region. The two sets are therefore compared position by position, and the pixel points b with identical values are screened out, yielding the non-transparent region of the target image composed of the pixel points b.
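A minimal sketch of steps 501 to 503 follows, assuming the display area has been captured twice as NumPy arrays, once over each background image; the helper name and the way the captures are obtained are illustrative assumptions.

import numpy as np

def non_transparent_mask(capture_over_bg1, capture_over_bg2):
    """Compare the first and second pixel point sets (the two captures) and
    keep the positions whose values did not change when the background was
    switched; these positions form the non-transparent region (pixel points b)."""
    unchanged = np.all(capture_over_bg1 == capture_over_bg2, axis=-1)
    return unchanged  # boolean mask: True marks the non-transparent region

# Usage: isolate the non-transparent content for later processing, zeroing out
# the positions that followed the background (the pixel points a).
# mask = non_transparent_mask(cap1, cap2)
# content = np.where(mask[..., None], cap1, 0)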
In one implementation, after step 103, the method in this embodiment may further include the following steps, as shown in fig. 7:
step 104: and according to the target input content, carrying out image characteristic adjustment on the non-transparent area in the target image through an image processing model so as to obtain a new image.
The target input content may include: text, pictures, speech, etc., such as "modify red to pink" etc. The image processing model may be a text-based graphics model.
Based on this, in this embodiment, the image processing model may be used to perform image feature adjustment on the target image, in order to avoid interference of the transparent area in the target image, in this embodiment, the non-transparent area in the target image is first identified, and then the image processing model may be used to adjust the non-transparent area in the target image according to the indication of the target input content, for example, adjust the value of the red pixel point to be the pixel value of pink, and do not process the transparent area in the target image, so as to obtain a new image. Therefore, after the non-transparent area in the target image is identified, the image characteristics of the non-transparent area are adjusted in a targeted manner, and interference cannot be generated due to the pixel points of the background image in the transparent area, so that the accuracy of image processing is improved.
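As an illustration of restricting the adjustment to the non-transparent region, the sketch below applies a simple colour replacement under the mask obtained above, standing in for the image processing model; the colour thresholds and the function name are assumptions for demonstration only.

import numpy as np

def adjust_red_to_pink(image_rgb, non_transparent):
    """Apply the requested feature adjustment ("modify red to pink") only
    inside the non-transparent region, leaving transparent positions alone."""
    out = image_rgb.copy()
    r, g, b = out[..., 0], out[..., 1], out[..., 2]
    red_like = (r > 180) & (g < 80) & (b < 80) & non_transparent
    out[red_like] = (255, 192, 203)  # pink
    return out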
Referring to fig. 8, which is a schematic structural diagram of an image processing apparatus according to a second embodiment of the present application, the apparatus may be configured in an electronic device capable of performing image processing, such as a computer or a server. The technical solution in this embodiment is mainly used for identifying the non-transparent region in an image so as to facilitate image processing.
Specifically, the apparatus in this embodiment may include the following units:
a transparency determination unit 801, configured to obtain a transparency determination result corresponding to a target image, wherein the transparency determination result indicates whether a transparent region exists in the target image;
and a non-transparent region obtaining unit 802, configured to obtain, if the transparency determination result indicates that a transparent region exists in the target image, the non-transparent region in the target image by using a background image of the display region where the target image is located.
As can be seen from the above technical solution, in the image processing apparatus provided in the second embodiment of the present application, the transparency determination result corresponding to the target image is obtained; when the transparency determination result indicates that a transparent region exists in the target image, the background image of the display region where the target image is located is used to obtain the non-transparent region in the target image, so that the corresponding image processing can be performed. Therefore, in this embodiment, when a transparent region is determined to exist in the target image, the non-transparent region can be identified by using the background image, thereby avoiding processing errors when the target image is processed.
In one implementation, the transparency determination unit 801 is specifically configured to: obtain a predicted size of the target image; and compare the predicted size with the actual size of the target image to obtain the transparency determination result, the actual size being obtained from image attributes of the target image; wherein if the difference between the predicted size and the actual size is greater than or equal to a first threshold, the transparency determination result indicates that a transparent region exists in the target image; and if the difference between the predicted size and the actual size is smaller than the first threshold, the transparency determination result indicates that no transparent region exists in the target image.
When obtaining the predicted size of the target image, the transparency determination unit 801 is specifically configured to: read pixel points in the target image; and obtain the predicted size of the target image according to the read pixel points.
In one implementation, the transparency determination unit 801 is specifically configured to obtain the transparency determination result corresponding to the target image according to the target image and the background image. For example, the pixel points in the target image are compared with the pixel points in the background image by pixel position to obtain a comparison result, and the transparency determination result corresponding to the target image is obtained according to the comparison result; if the comparison result indicates that the overlapping pixel area between the target image and the background image is greater than or equal to a second threshold, the transparency determination result indicates that a transparent region exists in the target image; if the comparison result indicates that the overlapping pixel area between the target image and the background image is smaller than the second threshold, the transparency determination result indicates that no transparent region exists in the target image; the overlapping pixel area is the area formed by the pixel points that have the same value at the same pixel position.
In one implementation, the non-transparent region obtaining unit 802 is specifically configured to: read the pixel points in the target image when a first background image is displayed in the display region, to obtain a first pixel point set; read the pixel points in the target image when a second background image is displayed in the display region, to obtain a second pixel point set, the second background image and the first background image having different pixel values at corresponding pixel positions; and compare the first pixel point set with the second pixel point set to obtain a target region formed by identical pixel points, the target region being the non-transparent region in the target image.
In one implementation, the following units may be further included in this embodiment, as shown in fig. 9:
and a feature adjustment unit 803, configured to perform image feature adjustment on the non-transparent area in the target image through an image processing model according to the target input content, so as to obtain a new image.
It should be noted that, the specific implementation manner of each unit in this embodiment may refer to the corresponding content in the foregoing, which is not described in detail herein.
Referring to fig. 10, a schematic structural diagram of an electronic device according to a third embodiment of the present application may include the following structures:
a memory 1001 for storing a computer program and data generated by the execution of the computer program;
a processor 1002 for executing the computer program to implement: obtaining a transparency determination result corresponding to a target image, wherein the transparency determination result indicates whether a transparent region exists in the target image; and if the transparency determination result indicates that a transparent region exists in the target image, obtaining the non-transparent region in the target image by using a background image of the display region where the target image is located.
Of course, the electronic device in this embodiment may also include other structures, such as an input/output device, a communication module, a bus, etc., for implementing the corresponding functions.
As can be seen from the above technical solution, in the electronic device provided in the third embodiment of the present application, the transparency determination result corresponding to the target image is obtained; when the transparency determination result indicates that a transparent region exists in the target image, the background image of the display region where the target image is located is used to obtain the non-transparent region in the target image, so that the corresponding image processing can be performed. Therefore, in this embodiment, when a transparent region is determined to exist in the target image, the non-transparent region can be identified by using the background image, thereby avoiding processing errors when the target image is processed.
In addition, the embodiment of the present application also provides a storage medium for storing a computer program, which when executed, implements the image processing method in the above embodiment.
Taking a scenario of image-to-image conversion with a text-to-image model as an example, the technical solution of the application is illustrated below:
At present, images are generally generated in either a text-to-image mode or an image-to-image mode. In image-to-image conversion, the input image needs to be analyzed, key information of the image is extracted, and a new image is then generated according to that key information. However, in the conventional scheme, when the background of the input image is transparent, the transparent part is affected by whatever lies behind it (the desktop or the software window behind the image), so that the recognition and analysis of the picture pick up, in addition to the image content itself, the information visible behind the transparent background, and the generated picture deviates excessively from the input.
In view of this, the present application proposes a new image processing scheme as follows:
scheme 1: image size analysis contrast scheme, as shown in fig. 11:
1.1, inputting an image to be processed, namely inputting an image processing device realized by the application;
1.2, after inputting the image, obtaining the actual size of the image, namely the image size information;
1.3, reading the content of the image pixel points and estimating the size from the read content, i.e., obtaining the estimated size;
1.4, comparing the estimated size with the actual size to determine whether a transparent region exists: if the estimated size is obviously larger than the actual size (the difference between them is greater than or equal to the first threshold), a transparent region exists and step 1.5 is executed; if the estimated size is approximately equal to the actual size (the difference is smaller than the first threshold), no transparent region exists and step 1.7 is executed;
1.5, transforming the desktop background where the image to be processed is located;
1.6, determining, from the change of the desktop background, which pixel points change and which do not;
1.7, determining a non-transparent area, and performing image analysis;
1.8, generating an image.
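The sketch below wires the numbered steps of scheme 1 together, reusing the hypothetical helpers from the earlier sketches (has_transparent_region_by_size and non_transparent_mask); the callbacks for capturing the display area, switching the desktop background, and generating the final image are placeholders, since the scheme does not prescribe particular APIs for them.

import numpy as np

def process_image_scheme_1(image_path, capture_display, set_desktop_background,
                           generate_image):
    """Steps 1.1 to 1.8 of scheme 1, under the assumptions stated above."""
    if has_transparent_region_by_size(image_path):       # steps 1.2-1.4
        set_desktop_background("background_a")           # step 1.5
        cap1 = capture_display()
        set_desktop_background("background_b")           # switch again
        cap2 = capture_display()
        mask = non_transparent_mask(cap1, cap2)          # step 1.6
        content = np.where(mask[..., None], cap1, 0)     # step 1.7
    else:
        content = capture_display()                      # no transparent region
    return generate_image(content)                       # step 1.8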
Scheme 2: image background analysis, as shown in fig. 12:
2.1, inputting the image to be processed, i.e., feeding it into the image processing apparatus implemented by the application;
2.2, analyzing the image to be processed;
2.3, analyzing the desktop background;
2.4, analyzing the overlapping pixel area between the desktop background and the image to be processed: if the overlapping pixel area is large, i.e., greater than or equal to the second threshold, executing step 2.5; if it is small, i.e., smaller than the second threshold, executing step 2.7;
2.5, transforming the desktop background where the image to be processed is located;
2.6, determining, from the change of the desktop background, which pixel points change and which do not;
2.7, determining a non-transparent area, and performing image analysis;
2.8, generating an image.
In summary, by identifying the transparent region and the non-transparent region in the image, the desktop background visible through the transparent region is prevented from interfering with the processing of the non-transparent region during image conversion, so that excessive deviation in the generated image is avoided and the accuracy of image generation is improved.
In the present specification, the embodiments are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image processing method, comprising:
obtaining a transparency determination result corresponding to a target image, wherein the transparency determination result indicates whether a transparent region exists in the target image;
and if the transparency determination result indicates that a transparent region exists in the target image, obtaining the non-transparent region in the target image by using a background image of the display region where the target image is located.
2. The method of claim 1, wherein obtaining the transparency determination result corresponding to the target image comprises:
obtaining a predicted size of the target image;
comparing the predicted size with the actual size of the target image to obtain the transparency determination result, wherein the actual size is obtained from image attributes of the target image;
wherein if the difference between the predicted size and the actual size is greater than or equal to a first threshold, the transparency determination result indicates that a transparent region exists in the target image;
and if the difference between the predicted size and the actual size is smaller than the first threshold, the transparency determination result indicates that no transparent region exists in the target image.
3. The method of claim 2, wherein obtaining the predicted size of the target image comprises:
reading pixel points in the target image;
and obtaining the predicted size of the target image according to the read pixel points.
4. The method of claim 1, wherein obtaining the transparency determination result corresponding to the target image comprises:
obtaining the transparency determination result corresponding to the target image according to the target image and the background image.
5. The method of claim 4, wherein obtaining the transparency determination result corresponding to the target image according to the target image and the background image comprises:
comparing the pixel points in the target image with the pixel points in the background image by pixel position to obtain a comparison result;
obtaining the transparency determination result corresponding to the target image according to the comparison result;
wherein, if the comparison result indicates that the overlapping pixel area between the target image and the background image is greater than or equal to a second threshold, the transparency determination result indicates that a transparent region exists in the target image;
if the comparison result indicates that the overlapping pixel area between the target image and the background image is smaller than the second threshold, the transparency determination result indicates that no transparent region exists in the target image;
the overlapping pixel area is the area formed by the pixel points that have the same value at the same pixel position.
6. The method according to claim 1 or 2, wherein obtaining the non-transparent region in the target image by using the background image of the display region where the target image is located comprises:
when a first background image is displayed in the display region, reading pixel points in the target image to obtain a first pixel point set;
when a second background image is displayed in the display region, reading pixel points in the target image to obtain a second pixel point set, wherein the second background image and the first background image have different pixel values at corresponding pixel positions;
and comparing the first pixel point set with the second pixel point set to obtain a target region formed by identical pixel points, wherein the target region is the non-transparent region in the target image.
7. The method according to claim 1 or 2, further comprising:
performing, according to target input content, image feature adjustment on the non-transparent region in the target image through an image processing model to obtain a new image.
8. An image processing apparatus comprising:
a transparency determination unit, configured to obtain a transparency determination result corresponding to a target image, wherein the transparency determination result indicates whether a transparent region exists in the target image;
and a non-transparent region obtaining unit, configured to obtain, if the transparency determination result indicates that a transparent region exists in the target image, the non-transparent region in the target image by using a background image of the display region where the target image is located.
9. An electronic device, comprising:
a memory for storing a computer program and data resulting from the execution of the computer program;
a processor for executing the computer program to implement: obtaining a transparency determination result corresponding to a target image, wherein the transparency determination result indicates whether a transparent region exists in the target image; and if the transparency determination result indicates that a transparent region exists in the target image, obtaining the non-transparent region in the target image by using a background image of the display region where the target image is located.
10. A storage medium storing a computer program that, when executed, implements:
obtaining a transparency determination result corresponding to a target image, wherein the transparency determination result indicates whether a transparent region exists in the target image;
and if the transparency determination result indicates that a transparent region exists in the target image, obtaining the non-transparent region in the target image by using a background image of the display region where the target image is located.
CN202311865203.2A 2023-12-29 2023-12-29 Image processing method, device, electronic equipment and storage medium Pending CN117670868A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311865203.2A CN117670868A (en) 2023-12-29 2023-12-29 Image processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117670868A (en)

Family

ID=90077146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311865203.2A Pending CN117670868A (en) 2023-12-29 2023-12-29 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117670868A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination