CN117575976A - Image shadow processing method, device, equipment and storage medium - Google Patents
- Publication number: CN117575976A (application CN202410049044.7A)
- Authority: CN (China)
- Prior art keywords: image, shadow, processed, pixel, candidate
- Legal status: Granted (the legal status is an assumption by Google Patents, not a legal conclusion)
Classifications
- G06T5/30 — Image enhancement or restoration using local operators; erosion or dilatation, e.g. thinning
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G06T7/90 — Image analysis; determination of colour characteristics
Abstract
An embodiment of the present application discloses an image shadow processing method, apparatus, device and storage medium, belonging to the technical field of image processing. The method comprises the following steps: acquiring an image to be processed, together with a first candidate shadow image and a second candidate shadow image corresponding to the image to be processed, where the first candidate shadow image is obtained by performing image processing on the image to be processed and the second candidate shadow image is obtained by performing image semantic feature extraction on the image to be processed; correcting the second candidate shadow image using the first candidate shadow image to obtain a target shadow image corresponding to the image to be processed; performing shadow feature enhancement processing on the shadow region of the image to be processed using the target shadow image to obtain a feature-enhanced image corresponding to the image to be processed; and performing shadow removal processing on the feature-enhanced image to obtain a shadow-free image corresponding to the image to be processed. The scheme provided by the application optimizes the image shadow removal effect and improves image shadow removal efficiency.
Description
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a method, a device, equipment and a storage medium for processing image shadows.
Background
Shadows are dark regions produced when an object that is not fully transparent blocks light. A shadow occupies the entire three-dimensional region behind the occluding object, which can degrade the quality of captured images; shadows in an image can also prevent the image from being used for a range of downstream computer vision tasks, so image shadow processing is an important processing step in computer vision technology.
In the related art, on one hand, shadow detection and removal are performed on an image with traditional computer vision techniques, which, lacking high-level semantic features, often misidentify shadows; on the other hand, because the illumination conditions that form shadows are complex, the feature of some colour component may be missing entirely, so even when shadow-region feature extraction is performed on the image, misidentification still occurs.
Disclosure of Invention
The embodiment of the application provides a processing method, device and equipment for image shadows and a storage medium, which can optimize the image shadow removal effect and improve the image shadow removal efficiency. The technical scheme is as follows.
In one aspect, an embodiment of the present application provides a method for processing an image shadow, where the method includes:
acquiring an image to be processed, and a first candidate shadow image and a second candidate shadow image corresponding to the image to be processed, wherein the first candidate shadow image is obtained by performing image processing on the image to be processed, and the second candidate shadow image is obtained by performing image semantic feature extraction on the image to be processed;
correcting the second candidate shadow image by using the first candidate shadow image to obtain a target shadow image corresponding to the image to be processed;
performing shadow feature enhancement processing on a shadow region in the image to be processed by using the target shadow image to obtain a feature enhancement image corresponding to the image to be processed;
and performing shadow removal processing on the characteristic enhanced image to obtain a shadowless image corresponding to the image to be processed.
In another aspect, an embodiment of the present application provides an apparatus for processing an image shadow, including:
the image acquisition module is used for acquiring an image to be processed, and a first candidate shadow image and a second candidate shadow image corresponding to the image to be processed, wherein the first candidate shadow image is obtained by performing image processing on the image to be processed, and the second candidate shadow image is obtained by performing image semantic feature extraction on the image to be processed;
The image correction module is used for correcting the second candidate shadow image by utilizing the first candidate shadow image to obtain a target shadow image corresponding to the image to be processed;
the feature enhancement module is used for carrying out shadow feature enhancement processing on a shadow region in the image to be processed by utilizing the target shadow image to obtain a feature enhancement image corresponding to the image to be processed;
and the shadow removing module is used for carrying out shadow removing processing on the characteristic enhanced image to obtain a shadowless image corresponding to the image to be processed.
In another aspect, embodiments of the present application provide a computer device comprising a processor and a memory; the memory stores at least one computer instruction for execution by the processor to implement the method of processing image shadows as described in the above aspects.
In another aspect, embodiments of the present application provide a computer readable storage medium having stored therein at least one computer instruction that is loaded and executed by a processor to implement a method of processing image shadows as described in the above aspects.
In another aspect, embodiments of the present application provide a computer program product comprising computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium, the processor executing the computer instructions, causing the computer device to perform the method of processing image shadows as described in the above aspect.
In the embodiment of the present application, the first candidate shadow image and the second candidate shadow image corresponding to the image to be processed are obtained by performing image processing and image semantic feature extraction on the image to be processed, respectively, so that correcting the second candidate shadow image with the first candidate shadow image yields the target shadow image corresponding to the image to be processed, improving the efficiency of image shadow detection. Shadow feature enhancement processing is then applied to the shadow region of the image to be processed using the target shadow image to obtain a feature-enhanced image corresponding to the image to be processed, and the feature-enhanced image, rather than the original image to be processed, undergoes shadow removal processing to produce a shadow-free image, which improves the accuracy of image shadow removal and optimizes the generation quality of the shadow-free image.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 illustrates a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application;
FIG. 2 illustrates a flow chart of a method for processing image shadows provided by an exemplary embodiment of the present application;
FIG. 3 illustrates a schematic diagram of generating a target shadow image provided by an exemplary embodiment of the present application;
FIG. 4 illustrates a flowchart of a method for processing image shadows provided by another exemplary embodiment of the present application;
FIG. 5 illustrates a schematic diagram of generating a first candidate shadow image provided by an exemplary embodiment of the present application;
FIG. 6 illustrates a schematic diagram of a shadow detection network provided in one exemplary embodiment of the present application;
FIG. 7 illustrates a network architecture diagram of a diffusion model provided by an exemplary embodiment of the present application;
FIG. 8 illustrates a schematic diagram of a shadow-free image based on an image shadow processing method according to an exemplary embodiment of the present application;
FIG. 9 illustrates a schematic diagram of a target illumination image provided by an exemplary embodiment of the present application;
FIG. 10 illustrates a schematic diagram of an image overlap region provided by an exemplary embodiment of the present application;
FIG. 11 is a flowchart illustrating a method for processing image shadows provided by another exemplary embodiment of the present application;
FIG. 12 is a block diagram illustrating an image shadow processing apparatus according to an exemplary embodiment of the present application;
fig. 13 shows a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic diagram of an implementation environment provided in one embodiment of the present application is shown. The implementation environment includes a terminal 120 and a server 140. Data communication between the terminal 120 and the server 140 is performed over a communication network; optionally, the communication network may be a wired or wireless network, and may be at least one of a local area network, a metropolitan area network, and a wide area network.
The terminal 120 is a computer device installed with an application program having an image shadow processing function. The image shadow processing function may be a function of an application native to the terminal 120, or a function of a third-party application. The terminal 120 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart television, a wearable device, a vehicle-mounted terminal, etc.; in fig. 1 the terminal 120 is shown as a desktop computer by way of example, but the present application is not limited thereto.
The server 140 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data, and artificial intelligence platforms. In the embodiment of the present application, the server 140 may be a background server of an application having an image shadow processing function.
In one possible implementation, as shown in fig. 1, there is data interaction between the server 140 and the terminal 120. After the terminal 120 obtains the image to be processed, it sends the image to the server 140. The server 140 performs image processing and image semantic feature extraction on the image to be processed to obtain the corresponding first candidate shadow image and second candidate shadow image, corrects the second candidate shadow image using the first candidate shadow image to obtain a target shadow image, performs shadow feature enhancement processing on the shadow region of the image to be processed using the target shadow image to obtain a feature-enhanced image, and then performs shadow removal processing on the feature-enhanced image to obtain a shadow-free image corresponding to the image to be processed, which it returns to the terminal 120.
Referring to fig. 2, a flowchart of a method for processing image shadows according to an exemplary embodiment of the present application is shown, where the method is used for a computer device, and the computer device may be the terminal 120 or the server 140 shown in fig. 1, and the method includes the following steps.
Step 201, obtaining an image to be processed, and a first candidate shadow image and a second candidate shadow image corresponding to the image to be processed, wherein the first candidate shadow image is obtained by performing image processing on the image to be processed, and the second candidate shadow image is obtained by performing image semantic feature extraction on the image to be processed.
In the related art, a shadow image corresponding to the image to be processed is obtained either only by performing image processing on the image to be processed, or only by performing image semantic feature extraction on it. The image processing approach lacks high-level image semantic features, so its accuracy in recognizing shadow regions is low; the semantic feature extraction approach, owing to the complexity of the conditions under which shadows form, is inaccurate at identifying image shadow boundaries. That is, a single shadow detection approach leads to low image shadow detection efficiency.
In the embodiment of the present application, the first candidate shadow image and the second candidate shadow image corresponding to the image to be processed are obtained by performing image processing and image semantic feature extraction on the image to be processed, respectively, which improves the shadow detection efficiency for the image to be processed.
In some embodiments, the computer device obtains the first candidate shadow image by performing image processing on the image to be processed. Optionally, the computer device may determine, according to the pixel value of each pixel point in the image to be processed, the first candidate shadow image corresponding to the image to be processed through binarization processing.
In some embodiments, the computer device obtains the second candidate shadow image by performing image semantic feature extraction on the image to be processed. Alternatively, the computer device may output the second candidate shadow image via the shadow detection network by inputting the image to be processed into the shadow detection network.
Illustratively, as shown in fig. 3, the computer device obtains a first candidate shadow image 302 by performing image processing on the image 301 to be processed, and obtains a second candidate shadow image 303 by performing image semantic feature extraction on the image 301 to be processed.
Step 202, correcting the second candidate shadow image by using the first candidate shadow image to obtain a target shadow image corresponding to the image to be processed.
In some embodiments, after obtaining the first candidate shadow image and the second candidate shadow image corresponding to the image to be processed, and considering that the image processing approach and the image semantic feature extraction approach each have different strengths in image shadow detection, the computer device may correct one candidate shadow image using the other in order to improve the accuracy of the shadow image.
In one possible implementation, considering that the first candidate shadow image obtained through image processing generally has more accurate shadow boundary features, while the second candidate shadow image obtained through image semantic feature extraction generally has more accurate shadow region features, the computer device may correct the second candidate shadow image using the first candidate shadow image to obtain the target shadow image corresponding to the image to be processed.
Illustratively, as shown in fig. 3, the computer device corrects the second candidate shadow image 303 by using the first candidate shadow image 302, so as to obtain a target shadow image 304 corresponding to the image 301 to be processed.
And 203, performing shadow feature enhancement processing on a shadow region in the image to be processed by using the target shadow image to obtain a feature enhancement image corresponding to the image to be processed.
In some embodiments, after obtaining the target shadow image corresponding to the image to be processed, and in order to improve the efficiency of removing the image shadow, the computer device may first perform shadow feature enhancement processing on the shadow region of the image to be processed using the target shadow image, obtaining a feature-enhanced image corresponding to the image to be processed.
Optionally, the shadow regions in the feature enhanced image have features that are more easily identified and extracted than non-shadow regions, thereby facilitating subsequent separate processing of the shadow regions in the feature enhanced image.
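The patent does not specify the enhancement operation itself; one minimal, hypothetical sketch of making shadow pixels "more easily identified and extracted" is to attach the target shadow mask as an extra channel of the image, so later stages can separate shadow from non-shadow pixels trivially (the function name and the channel-concatenation scheme are illustrative assumptions, not the patent's method):

```python
import numpy as np

def enhance_shadow_features(image, shadow_mask):
    """Hypothetical sketch: concatenate the (H, W) shadow mask as a fourth
    channel of the (H, W, 3) image so that shadow pixels are trivially
    separable in subsequent stages. The actual enhancement used by the
    patent is unspecified."""
    mask = (shadow_mask > 0).astype(image.dtype) * 255
    return np.concatenate([image, mask[..., None]], axis=-1)

img = np.zeros((4, 4, 3), dtype=np.uint8)
m = np.zeros((4, 4), dtype=np.uint8)
m[0, 0] = 7  # any non-zero value marks a shadow pixel
enhanced = enhance_shadow_features(img, m)  # shape (4, 4, 4)
```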
And 204, performing shadow removal processing on the characteristic enhanced image to obtain a shadowless image corresponding to the image to be processed.
In some embodiments, after obtaining the feature enhanced image corresponding to the image to be processed, the computer device may perform a shadow removal process on each shadow region in the feature enhanced image, thereby obtaining a shadow-free image corresponding to the image to be processed.
In one possible implementation, the computer device may adjust the pixel values of each pixel point in the shadow region in the feature enhanced image by using an image processing manner, so as to implement shadow removal.
In another possible implementation manner, the computer device may perform a shadow removal process on the feature enhanced image by using a shadow removal network in a manner of extracting semantic features of the image, so as to obtain a shadowless image.
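A crude sketch of the first implementation (direct pixel-value adjustment) follows. The single-channel input, the constant illumination gain, and the mean-matching rule are all assumptions for illustration, not details taken from the patent:

```python
import numpy as np

def remove_shadow_simple(gray, shadow_mask):
    """Hypothetical illustration: rescale shadow pixels so that their mean
    intensity matches the non-shadow mean (a crude constant-illumination
    assumption; real shadow removal is far more involved)."""
    img = gray.astype(np.float64)
    shadow = shadow_mask > 0
    if shadow.any() and (~shadow).any():
        gain = img[~shadow].mean() / max(img[shadow].mean(), 1e-6)
        img[shadow] *= gain
    return np.clip(img, 0, 255).astype(np.uint8)

gray = np.full((10, 10), 150, dtype=np.uint8)
gray[:, :5] = 50                       # darker left half simulates a shadow
mask = np.zeros((10, 10), dtype=np.uint8)
mask[:, :5] = 255                      # shadow mask covers the left half
restored = remove_shadow_simple(gray, mask)
```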
In summary, in the embodiment of the present application, by performing image processing and image semantic feature extraction on the image to be processed, the first candidate shadow image and second candidate shadow image corresponding to it are obtained, so that correcting the second candidate shadow image with the first candidate shadow image yields the target shadow image corresponding to the image to be processed, improving the efficiency of image shadow detection. Shadow feature enhancement processing is then applied to the shadow region of the image to be processed using the target shadow image to obtain a feature-enhanced image, and the feature-enhanced image, rather than the original image to be processed, undergoes shadow removal processing to produce a shadow-free image, which improves the accuracy of image shadow removal and optimizes the generation quality of the shadow-free image.
In some embodiments, the computer device may optimize the image shadow processing process from two aspects, respectively, including improving the accuracy of shadow detection and improving the accuracy of shadow removal. Wherein, in order to improve the accuracy of shadow detection, the computer device can optimize the image processing process and the image semantic feature extraction process; to improve the accuracy of shadow removal, the computer device may optimize the shadow feature enhancement process.
Referring to fig. 4, a flowchart of a method for processing image shadows according to another exemplary embodiment of the present application is shown, where the method is used for a computer device, and the computer device may be the terminal 120 or the server 140 shown in fig. 1, and the method includes the following steps.
Step 401, acquiring an image to be processed.
Step 402, performing color space conversion on the image to be processed to obtain the image to be processed after space conversion.
In some embodiments, considering that the illumination conditions that form shadows are very complex, when the image to be processed is an RGB image the feature components that shadow regions lose differ across color spaces, and for particularly dark shadow regions the feature of some color component may be lost entirely. Therefore, to improve the accuracy of shadow detection during image processing, the computer device may first perform color space conversion on the image to be processed, obtaining the space-converted image to be processed.
In one possible implementation, the computer device may convert the color space of the image to be processed from RGB to LAB space. In the LAB space, the image to be processed has three channels L, A, B, where the L channel is used to represent the luminance value of the pixel, the a channel is used to represent the component from green to red, and the B channel is used to represent the component from blue to yellow.
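The RGB-to-LAB conversion can be sketched in NumPy using the standard sRGB/D65 formulas (in practice a library routine such as OpenCV's `cv2.cvtColor` would be used; `rgb_to_lab` below is an illustrative helper, not code from the patent):

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an (H, W, 3) uint8 sRGB image to CIELAB (illustrative sketch,
    standard sRGB matrix and D65 reference white)."""
    rgb = rgb.astype(np.float64) / 255.0
    # sRGB -> linear RGB (inverse gamma)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (sRGB matrix, D65)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    xyz /= np.array([0.95047, 1.0, 1.08883])  # normalize by D65 white point
    # XYZ -> LAB
    eps, kappa = 216 / 24389, 24389 / 27
    f = np.where(xyz > eps, np.cbrt(xyz), (kappa * xyz + 16) / 116)
    L = 116 * f[..., 1] - 16          # lightness
    a = 500 * (f[..., 0] - f[..., 1])  # green-to-red component
    b = 200 * (f[..., 1] - f[..., 2])  # blue-to-yellow component
    return np.stack([L, a, b], axis=-1)

white = rgb_to_lab(np.full((1, 1, 3), 255, dtype=np.uint8))  # L ~ 100, a,b ~ 0
```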
Step 403, performing binarization, morphological opening and closing operations, and connected domain screening on the space-converted image to be processed to obtain a first candidate shadow image corresponding to the image to be processed.
In some embodiments, after obtaining the space-converted image to be processed, the computer device may sequentially apply binarization, opening and closing operations, and connected domain screening to it to obtain the first candidate shadow image corresponding to the image to be processed.
In one possible implementation, with the image to be processed converted into LAB space, the computer device may perform a threshold operation on it: according to the pixel values of the image in LAB space, pixel points with a pixel value greater than the pixel threshold are determined to be shadow-region pixel points and assigned a first pixel value, while pixel points with a pixel value less than or equal to the pixel threshold are determined to be non-shadow-region pixel points and assigned a second pixel value (for example, a first pixel value of 255 and a second pixel value of 0), thereby obtaining a binarized image.
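Following the rule exactly as stated above (values greater than the threshold marked as shadow with the first pixel value, all others given the second pixel value — the threshold itself and the channel it is applied to are not fixed by the patent), the thresholding step reduces to:

```python
import numpy as np

def binarize_shadow(channel, pixel_threshold, first_value=255, second_value=0):
    """Mark pixels above `pixel_threshold` as shadow (first pixel value) and
    all others as non-shadow (second pixel value), per the rule described in
    the text. Threshold value and channel choice are left open by the patent."""
    return np.where(channel > pixel_threshold, first_value, second_value).astype(np.uint8)

binary = binarize_shadow(np.array([[40, 200]]), 100)  # -> [[0, 255]]
```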
In one possible implementation, considering that image noise and other interfering pixels remain in the binarized image, the computer device needs to apply opening and closing operations to it. First, an opening operation is applied to the binarized image: the image is eroded to remove noise, and since erosion shrinks the foreground, a dilation is then applied so that the original shape is preserved while the image noise is removed. Next, a closing operation is applied: the image is dilated to remove small holes or small dark spots inside foreground objects, and then eroded. The binarized image can thus be optimally adjusted through the opening and closing operations.
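The opening and closing operations described above can be sketched with plain NumPy min/max filters (a 3×3 structuring element and simplified edge-replicate border handling are assumed; in practice OpenCV's `cv2.morphologyEx` would be used):

```python
import numpy as np

def dilate(mask, k=3):
    """Max filter over a k x k window (morphological dilation)."""
    p, (H, W) = k // 2, mask.shape
    padded = np.pad(mask, p, mode="edge")
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + H, dx:dx + W])
    return out

def erode(mask, k=3):
    """Min filter over a k x k window (morphological erosion)."""
    p, (H, W) = k // 2, mask.shape
    padded = np.pad(mask, p, mode="edge")
    out = np.full_like(mask, 255)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, padded[dy:dy + H, dx:dx + W])
    return out

def open_close(mask):
    opened = dilate(erode(mask))    # opening: erosion then dilation (removes specks)
    return erode(dilate(opened))    # closing: dilation then erosion (fills small holes)

mask = np.zeros((11, 11), dtype=np.uint8)
mask[2:9, 2:9] = 255   # a solid shadow blob...
mask[5, 5] = 0         # ...with a one-pixel hole
mask[0, 10] = 255      # and an isolated noise speck
cleaned = open_close(mask)  # speck removed, hole filled, blob preserved
```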
In one possible implementation, considering that each shadow region in the image is usually a single connected region — that is, one shadow region in the shadow image contains only one pixel value — the computer device, to determine shadow regions accurately, also performs connected domain screening on the binarized image after the binarization and opening/closing steps: it determines pixel points that have the same pixel value and are positionally adjacent, and treats the pixel region formed by pixel points in the same area as one connected domain. After the connected domain screening is complete, the first candidate shadow image corresponding to the image to be processed is obtained.
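Connected domain screening can be sketched as a flood-fill labelling pass that keeps only components above a minimum area (4-connectivity and the area threshold are assumptions for illustration; the patent does not give either):

```python
import numpy as np
from collections import deque

def filter_components(mask, min_area):
    """Keep only 4-connected foreground (255) components with area >= min_area."""
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=np.int32)
    out = np.zeros_like(mask)
    next_label = 0
    for sy in range(H):
        for sx in range(W):
            if mask[sy, sx] == 255 and labels[sy, sx] == 0:
                next_label += 1
                comp = [(sy, sx)]          # BFS flood fill collects the component
                labels[sy, sx] = next_label
                q = deque(comp)
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < H and 0 <= nx < W and \
                                mask[ny, nx] == 255 and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            comp.append((ny, nx))
                            q.append((ny, nx))
                if len(comp) >= min_area:  # screen out small (noise) components
                    for y, x in comp:
                        out[y, x] = 255
    return out

m = np.zeros((8, 8), dtype=np.uint8)
m[1:4, 1:4] = 255   # shadow blob, area 9
m[6, 6] = 255       # noise pixel, area 1
screened = filter_components(m, min_area=5)  # keeps the blob, drops the pixel
```

In practice OpenCV's `cv2.connectedComponentsWithStats` performs the same labelling far more efficiently.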
Schematically, as shown in fig. 5, the computer device performs color space conversion on the image 501 to be processed to obtain the space-converted image, and performs binarization to obtain a binarized image 502 corresponding to the image 501; the computer device then applies opening and closing operations and connected domain screening to the binarized image 502 to obtain a first candidate shadow image 503 corresponding to the image 501 to be processed.
And step 404, inputting the image to be processed into a shadow detection model to obtain a second candidate shadow image corresponding to the image to be processed output by the shadow detection model.
In some embodiments, the computer device may perform image semantic feature extraction on the image to be processed by using the shadow detection model by inputting the image to be processed into the shadow detection model, so as to obtain a second candidate shadow image corresponding to the image to be processed output by the shadow detection model.
Alternatively, a feature extraction network and a shadow detection network may be included in the shadow detection model. In one possible implementation manner, the computer device inputs the image to be processed into the shadow detection model, so that semantic information extraction is performed on the image to be processed through the feature extraction network in the shadow detection model, and image semantic information corresponding to the image to be processed is obtained. Alternatively, the feature extraction network may be a DINOv2 network, or may be another network capable of implementing image semantic feature extraction, which is not limited in the embodiment of the present application.
Furthermore, considering that the data dimension of the image semantic information output by the feature extraction network may not match the input data dimension of the shadow detection network, the computer device may, to improve shadow detection accuracy, add a dimension integration network between the feature extraction network and the shadow detection network in the shadow detection model; the dimension integration network adjusts the dimension of the image semantic information so that it is adapted to the shadow detection network in the shadow detection model.
In one possible implementation manner, after obtaining the image semantic information corresponding to the image to be processed, the computer device inputs the image semantic information into the dimension integration network, and performs dimension adjustment on the image semantic information through the dimension integration network, so as to obtain the image semantic information after dimension adjustment. And inputting the image semantic information subjected to dimension adjustment into a shadow detection network in a shadow detection model, and performing image shadow detection on the image semantic information through the shadow detection network, so as to obtain a second candidate shadow image corresponding to the image to be processed.
Referring to fig. 6, a schematic diagram of a shadow detection network according to an exemplary embodiment of the present application is shown. After the image semantic information 601 output by the feature extraction network is obtained, its data dimension does not match the input data dimension required by the shadow detection network. The computer device therefore needs to adjust the dimension of the image semantic information 601 through the dimension integration network 602, so that the adjusted image semantic information can be input into the shadow detection network. The shadow detection network comprises a combination of a plurality of convolutional layers 603, upsampling layers 604, and activation layers 605.
Optionally, in order to improve the shadow detection capability of the shadow detection model, the computer device also needs to train the shadow detection model in advance. In one possible implementation manner, the computer device first obtains a sample image and a shadow image truth value corresponding to the sample image, and inputs the sample image into the shadow detection model. Semantic information extraction is performed on the sample image through the feature extraction network to obtain sample image semantic information, dimension adjustment is performed on the sample image semantic information through the dimension integration network to obtain adjusted sample image semantic information, and shadow detection is then performed on the adjusted sample image semantic information through the shadow detection network to obtain a sample shadow image.
In one possible implementation, after obtaining the sample shadow image, the computer device may determine a shadow detection loss based on the sample shadow image and the shadow image truth value. In addition, in order to improve the accuracy of shadow detection, the computer device may directly reuse a pre-trained feature extraction network and dimension integration network, and train only the shadow detection network in the shadow detection model with the shadow detection loss, so as to obtain a trained shadow detection network.
And step 405, performing open-close operation processing on the first candidate shadow image and the second candidate shadow image respectively to obtain the processed first candidate shadow image and second candidate shadow image.
In some embodiments, to improve the efficiency of image correction, after obtaining the first candidate shadow image and the second candidate shadow image, the computer device may further perform an on-off operation on the first candidate shadow image and the second candidate shadow image, to obtain the processed first candidate shadow image and the processed second candidate shadow image, respectively.
In one possible implementation, the computer device performs an open-close operation on the first candidate shadow image: the image is first subjected to an erosion-then-dilation (opening) process, thereby removing image noise in the first candidate shadow image, and is then subjected to a dilation-then-erosion (closing) process, thereby closing small holes or black spots inside the foreground object in the first candidate shadow image.
In one possible implementation, the computer device performs an open-close operation on the second candidate shadow image: the image is first subjected to an erosion-then-dilation (opening) process, thereby removing image noise in the second candidate shadow image, and is then subjected to a dilation-then-erosion (closing) process, thereby closing small holes or black spots inside the foreground object in the second candidate shadow image.
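The opening and closing operations described in the two preceding paragraphs can be sketched in pure NumPy for a binary mask. The 4-neighbourhood structuring element and the function names are illustrative assumptions; the patent does not specify a kernel.

```python
import numpy as np

def dilate(mask):
    # Binary dilation with a 4-neighbourhood structuring element,
    # implemented as a union of shifted copies (zero padding at the border).
    p = np.pad(mask, 1)
    return p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]

def erode(mask):
    # Binary erosion: a pixel survives only if it and its 4 neighbours are set.
    p = np.pad(mask, 1)
    return p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]

def open_close(mask):
    # Opening (erode then dilate) removes isolated noise pixels;
    # closing (dilate then erode) fills small holes inside foreground regions.
    opened = dilate(erode(mask))
    return erode(dilate(opened))
```

An image-processing library's morphology routines would normally be used instead; the shifted-copy form only makes the two operation orders explicit.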
Step 406, matching the first candidate shadow region in the first candidate shadow image and the second candidate shadow region in the second candidate shadow image.
In some embodiments, after obtaining the processed first candidate shadow image and the second candidate shadow image, the computer device may further match the first candidate shadow region in the first candidate shadow image and the second candidate shadow region in the second candidate shadow image before performing image correction on the second candidate shadow image using the first candidate shadow image, in consideration of that a plurality of shadow regions may exist in one image, in order to improve image correction efficiency.
In one possible implementation, the computer device may set the first candidate shadow image and the second candidate shadow image in the same image coordinate system, so as to perform matching processing on the first candidate shadow region and the second candidate shadow region according to pixel coordinates of each pixel point in the first candidate shadow region and pixel coordinates of each pixel point in the second candidate shadow region.
Step 407, for the first candidate shadow area and the second candidate shadow area obtained by matching, performing boundary adjustment on the second area boundary of the second candidate shadow area by using the first area boundary of the first candidate shadow area, so as to obtain a target shadow image corresponding to the image to be processed.
In some embodiments, after obtaining the first candidate shadow area and the second candidate shadow area in one-to-one correspondence through area matching, the computer device may perform boundary adjustment on the second area boundary of the second candidate shadow area by using the first area boundary of the first candidate shadow area, so as to obtain the target shadow image corresponding to the image to be processed.
In one possible implementation, the computer device first sets the first candidate shadow image and the second candidate shadow image in the same image coordinate system, determines the pixel coordinates of each pixel point located at the edge of the region in the first candidate shadow region, and determines the pixel coordinates of each pixel point located at the edge of the region in the second candidate shadow region, so as to perform boundary adjustment on the boundary of the second region by comparing the pixel coordinates of each pixel point located at the edge of the region in the candidate shadow regions that are matched with each other.
Optionally, in the case that the region edge of the first candidate shadow region is contained by the second candidate shadow region, keeping the region boundary of the second candidate shadow region unchanged; in the case where the region edge of the first candidate shadow region is not included in the second candidate shadow region, it is necessary to perform expansion processing on the second candidate shadow region so that the region boundary of the second candidate shadow region and the region boundary of the first candidate shadow region are the same.
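The two boundary-adjustment cases above can be sketched on binary masks. Modelling the expansion as the union of the two masks is an assumption consistent with the stated goal that the adjusted second boundary should not fall inside the first region; the function name is hypothetical.

```python
import numpy as np

def adjust_boundary(first_region, second_region):
    # Case 1: if the first region is already contained in the second region,
    # keep the second region's boundary unchanged.
    if np.all(second_region[first_region > 0] > 0):
        return second_region
    # Case 2: otherwise expand the second region to cover the first region's
    # boundary (union of the two masks is an illustrative assumption).
    return second_region | first_region
```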
In step 408, shadow areas and non-shadow areas in the image to be processed are determined based on the target shadow image.
In some embodiments, after obtaining the target shadow image corresponding to the image to be processed, the computer device may determine the shadow area and the non-shadow area in the image to be processed according to the pixel coordinates of each pixel point in the shadow area and the non-shadow area in the target shadow image.
In one possible implementation, the computer device may place the target shadow image and the image to be processed in the same image coordinate system, and determine the shadow region and the non-shadow region in the image to be processed according to the coordinate correspondence between each pixel point in the target shadow image and each pixel point in the image to be processed.
Step 409, determining a target pixel loss based on the pixel difference between the shadow region and the non-shadow region and the pixel difference between the first edge region of the shadow region and the second edge region of the non-shadow region.
In some embodiments, in order to avoid causing a large color difference between a shadow region and a non-shadow region in an image after performing the shadow removal processing on the image to be processed, the computer device may further perform the shadow feature enhancement processing on the image to be processed first.
In some embodiments, the computer device may determine a target pixel loss based on a pixel difference between a shadow region and a non-shadow region in the image to be processed and a pixel difference between a first edge region of the shadow region and a second edge region of the non-shadow region, so as to perform a shadow feature enhancement process on the image to be processed with the target pixel loss.
In one possible implementation, to determine the pixel difference between the shadow and non-shadow regions in the image to be processed, the computer device may first determine a pixel mean value of the non-shadow region based on the pixel values of the first pixel points in the non-shadow region, thereby determining a first pixel loss, i.e., a pixel difference between the shadow and non-shadow regions in the image to be processed, based on the pixel values of the second pixel points in the shadow region and the pixel mean value.
In one possible implementation, to determine a pixel difference between a first edge region of a shadow region and a second edge region of a non-shadow region, a computer device may first determine first edge pixel points located in the first edge region within the shadow region and second edge pixel points surrounding the first edge pixel points within the non-shadow region, thereby determining a second pixel loss that may be indicative of a pixel difference between the first edge region of the shadow region and the second edge region of the non-shadow region based on a pixel difference between pixel values of the first edge pixel points and pixel means of the second edge pixel points.
Alternatively, the computer device may determine the second edge pixels located around the first edge pixels and located in the non-shadow region by sequentially traversing the first edge pixels located in the first edge region within the shadow region and centering around the first edge pixels.
Further, after obtaining the first pixel loss and the second pixel loss, the computer device may obtain the target pixel loss by means of a loss summation.
In one possible implementation, the pixel mean of the non-shadow region in the image to be processed may be expressed as μ, the shadow region may be denoted S, the pixel value of the i-th second pixel point in the shadow region may be expressed as I_i, the number of second pixel points in the shadow region may be expressed as N_S, the first edge region of the shadow region may be denoted E, the number of second pixel points in the first edge region may be expressed as N_E, and the pixel mean of the second edge pixel points around the j-th first edge pixel point in the non-shadow region may be expressed as μ_j, where I_j denotes the pixel value of the j-th second pixel point in the first edge region. The first pixel loss can thus be expressed as L1 = (1/N_S) Σ_{i∈S} |I_i − μ|, the second pixel loss can be expressed as L2 = (1/N_E) Σ_{j∈E} |I_j − μ_j|, and the target pixel loss can be expressed as L = L1 + L2.
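The two pixel losses described in the preceding paragraphs can be sketched as follows. The absolute-difference form of each loss, the 4-neighbour definition of edge pixels, and the 3x3 window used to collect surrounding non-shadow pixels are assumptions; the original formulas are not reproduced in the text.

```python
import numpy as np

def target_pixel_loss(img, shadow_mask):
    # First pixel loss: mean absolute difference between shadow pixels and
    # the pixel mean of the non-shadow region.
    non_shadow_mean = img[shadow_mask == 0].mean()
    l1 = np.abs(img[shadow_mask == 1] - non_shadow_mean).mean()
    # First edge pixels: shadow pixels with at least one non-shadow 4-neighbour.
    p = np.pad(shadow_mask, 1, constant_values=1)
    has_bg = ((p[:-2, 1:-1] == 0) | (p[2:, 1:-1] == 0) |
              (p[1:-1, :-2] == 0) | (p[1:-1, 2:] == 0))
    edge = (shadow_mask == 1) & has_bg
    # Second pixel loss: each edge pixel vs. the mean of the surrounding
    # non-shadow pixels in its 3x3 window.
    diffs = []
    for y, x in zip(*np.nonzero(edge)):
        win_img = img[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        win_mask = shadow_mask[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        bg = win_img[win_mask == 0]
        if bg.size:
            diffs.append(abs(img[y, x] - bg.mean()))
    l2 = float(np.mean(diffs)) if diffs else 0.0
    return float(l1 + l2)
```

When shadow pixels already match the non-shadow statistics the loss is zero, which is the fixed point the iterative adjustment drives toward.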
In one possible implementation, to enable feature enhancement processing to be performed on each shadow region in the image to be processed, the computer device may determine, for each shadow region in the image to be processed, a target pixel loss corresponding to each shadow region.
Step 410, performing shadow feature enhancement processing on the shadow region in the image to be processed based on the target pixel loss, to obtain a feature enhanced image corresponding to the image to be processed.
In some embodiments, after obtaining the target pixel loss corresponding to each shadow region in the image to be processed, the computer device may perform the shadow feature enhancement processing on the shadow region in the image to be processed through the corresponding target pixel loss, so as to obtain the feature enhanced image corresponding to the image to be processed.
In one possible implementation manner, in order to make the boundary between the shadow area and the non-shadow area in the feature enhanced image as smooth as possible, before performing the feature enhancement processing on the shadow area, the computer device may further perform an average filtering processing on the pixel points in the first edge area of the shadow area, so as to obtain the image to be processed after the filtering processing.
Alternatively, the computer device may perform the mean filtering process by convolving each pixel point of the first edge region with a mean filter, where the mean filter kernel may be a 3×3 kernel that assigns the pixel mean of the surrounding eight pixels to the center pixel.
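The eight-neighbour mean filter just described can be sketched as follows; restricting the filter to interior pixels (skipping the image border) and the function name are illustrative assumptions.

```python
import numpy as np

def mean_filter_edge(img, edge_mask):
    # Assign to each marked edge pixel the mean of its eight surrounding
    # pixels (interior pixels only; border handling is an assumption).
    out = img.astype(float).copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(edge_mask)):
        if 1 <= y < h - 1 and 1 <= x < w - 1:
            window = img[y - 1:y + 2, x - 1:x + 2].astype(float)
            out[y, x] = (window.sum() - float(img[y, x])) / 8.0
    return out
```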
In one possible implementation manner, after obtaining the image to be processed after the filtering processing, the computer device may iteratively adjust the second pixel value of each second pixel point in the shadow area based on the target pixel loss corresponding to each shadow area, so as to obtain the feature enhanced image corresponding to the image to be processed.
Optionally, the computer device may end the iterative adjustment of the second pixel point in the shadow region if the target pixel loss is minimized; the iterative adjustment of the second pixel point in the shadow area may also be ended when the iteration number reaches the threshold value, which is not limited in the embodiment of the present application.
In step 411, the feature enhanced image is input into a diffusion model, and the feature enhanced image is subjected to noise adding processing through a noise adding network in the diffusion model, so as to obtain a noise added image.
In some embodiments, after obtaining the target shadow image and the feature enhanced image corresponding to the image to be processed, the computer device may perform a shadow removal process on the feature enhanced image by using the diffusion model to improve the efficiency of image shadow removal.
Optionally, the diffusion model includes a noise adding network and a noise removing network. In one possible implementation, the computer device inputs the feature enhanced image into the diffusion model, and first performs a noise adding process on the feature enhanced image through a noise adding network in the diffusion model to obtain a noise added image.
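The noise-adding step can be sketched in the style of a standard denoising diffusion model. This is an illustrative assumption: the patent does not specify the noise schedule or the internals of the noise adding network, so the closed-form forward step below stands in for it.

```python
import numpy as np

def add_noise(x0, t, betas, rng):
    # Closed-form forward noising: x_t = sqrt(a_bar_t)*x0 + sqrt(1-a_bar_t)*eps,
    # where a_bar_t is the cumulative product of (1 - beta) up to step t.
    alphas = 1.0 - np.asarray(betas, dtype=float)
    alpha_bar = np.cumprod(alphas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
```

With a zero schedule the image passes through unchanged; as the schedule grows, the output approaches pure Gaussian noise, which the denoising network then inverts under the guidance described below.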
Step 412, denoising the denoised image through a denoising network in the diffusion model based on the target shadow image and the feature prompt word to obtain a shadowless image, wherein the feature prompt word characterizes the image processing target of the feature enhanced image.
In one possible implementation, after obtaining the noise-added image, the computer device may perform denoising processing on the noise-added image through a denoising network in the diffusion model based on the target shadow image and the feature prompt word, so as to obtain the shadow-free image.
Optionally, the computer device may input the target shadow image into a diffusion model and apply the target shadow image to a denoising network to improve the accuracy of the diffusion model in removing the shadow of the image.
Alternatively, the feature prompt word characterizes the image processing target of the feature enhanced image, and may include a forward prompt word and a reverse prompt word. The forward prompt word is used to represent a forward target in the image shadow removal process, such as high definition; the reverse prompt word is used to represent a reverse target in the image shadow removal process, such as blur and shadow.
Referring to fig. 7, a network structure diagram of a diffusion model according to an exemplary embodiment of the present application is shown. The diffusion model 701 includes a noise adding network 702 and a noise removing network 703. Firstly, the computer equipment performs noise adding processing on the feature enhanced image through the noise adding network 702 to obtain a noise added image, and further performs text coding and semantic information extraction on the feature prompt words, so that in the process of denoising the noise added image, the computer equipment can perform noise removing processing on the noise added image based on the feature prompt words and the target shadow image, and a shadow-free image is obtained.
Referring to fig. 8, a schematic diagram of an unshaded image obtained based on an image processing method according to an exemplary embodiment of the present application is shown. The left column includes a first image to be processed 801, a second image to be processed 802, and a third image to be processed 803, and by applying the image shadow processing method provided in the embodiment of the present application, a first shadow-free image 804, a second shadow-free image 805, and a third shadow-free image 806 shown in the right column can be obtained.
In some embodiments, considering that the purpose of image shadow removal is generally to prevent shadow areas corresponding to different light sources from overlapping when virtual light sources are subsequently added, which would reduce image quality, after performing shadow removal processing on the image to be processed to obtain a shadow-free image, the computer device can simulate illumination through a virtual light source and add a new illumination rendering effect to the shadow-free image.
In one possible implementation manner, the computer device may first obtain the illumination parameters of the virtual light source, where the illumination parameters may include the light source position, the illumination direction and the illumination intensity. A shadow map is then obtained by performing illumination rendering processing according to the illumination parameters and the shadow-free image, and by applying the shadow map to the shadow-free image, a target illumination image corresponding to the shadow-free image can be obtained. The shadow area in the target illumination image is generated based on the virtual light source; as the parameters of the virtual light source change, the shadow area in the target illumination image changes accordingly.
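Applying a shadow map to the shadow-free image can be sketched as a per-pixel attenuation. The multiplicative attenuation model tied to a single intensity parameter is an illustrative assumption; the patent does not describe how the shadow map is combined with the image.

```python
import numpy as np

def apply_shadow_map(shadowless, shadow_map, intensity=0.5):
    # Darken shadow-map pixels in proportion to the virtual light intensity;
    # the multiplicative attenuation model is an illustrative assumption.
    return shadowless * (1.0 - intensity * shadow_map)
```

Re-rendering with different light source positions only requires regenerating `shadow_map`; the shadow-free base image is reused unchanged.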
Referring to fig. 9, a schematic diagram of a target illumination image according to an exemplary embodiment of the present application is shown, where a shadow area generated based on virtual light source rendering exists in the target illumination image.
In the above embodiment, in the process of performing the image learning processing, the accuracy of generating the first candidate shadow image is improved by performing color space conversion on the image to be processed and sequentially performing binarization processing, open-close operation processing and connected domain screening processing; in the process of extracting the image semantic features, the semantic extraction network is utilized to extract the image semantic information, dimension adjustment is then carried out on the image semantic information, and the shadow detection network is further utilized to carry out image shadow detection, so that the accuracy of generating the second candidate shadow image is improved.
In the process of correcting the second candidate shadow image by using the first candidate shadow image, the first candidate shadow image and the second candidate shadow image are subjected to open-close operation processing respectively, then the first candidate shadow region and the second candidate shadow region are subjected to matching processing, and further region boundary adjustment is directly carried out according to the first candidate shadow region and the second candidate shadow region obtained by matching, so that the generation efficiency of the target shadow image is optimized.
In addition, in the process of carrying out shadow feature enhancement processing on the image to be processed by utilizing the target shadow image, by respectively determining the pixel differences between the shadow area and the non-shadow area and the pixel differences between the first edge area in the shadow area and the second edge area in the non-shadow area, a larger chromatic aberration between the shadow area and the non-shadow area is avoided, and meanwhile the problem that the feature enhancement effect is not obvious in the case of an excessively large shadow area is avoided.
In some embodiments, the image to be processed may be a large-resolution image; the image resolution of a satellite image, for example, can reach a very large pixel level. If image shadow processing is performed directly on the image to be processed, a large amount of computing resources needs to be consumed. Therefore, in order to improve the processing efficiency of the image to be processed, the computer device can also first perform image segmentation processing on the image to be processed, and then perform image shadow removal processing on each sub-image to be processed obtained by the segmentation processing.
In an illustrative example, in the case where the resolution of the image to be processed is large, the computer device may divide the image to be processed into a plurality of sub-images to be processed of smaller resolution.
In one possible implementation, the computer device obtains a plurality of sub-images to be processed by performing image segmentation on the image to be processed first, and an image overlapping area exists between adjacent sub-images to be processed.
Illustratively, as shown in fig. 10, the computer device divides the image to be processed into six sub-images to be processed, wherein for the first sub-image to be processed 1001 and the second sub-image to be processed 1002, there is an image overlap region 1003.
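The segmentation with overlapping strips can be sketched as follows. The function name and the tile/overlap parameters are illustrative assumptions, and the sketch assumes the image size divides evenly; real inputs would need edge padding.

```python
import numpy as np

def split_with_overlap(img, tile, overlap):
    # Split a 2-D image into tile x tile sub-images, adjacent tiles sharing
    # an overlap-pixel-wide strip. Assumes (size - overlap) is divisible by
    # (tile - overlap); real inputs would need edge padding.
    step = tile - overlap
    tiles = []
    for y in range(0, img.shape[0] - overlap, step):
        for x in range(0, img.shape[1] - overlap, step):
            tiles.append(((y, x), img[y:y + tile, x:x + tile]))
    return tiles
```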
Furthermore, after obtaining a plurality of sub-images to be processed, the computer device can respectively perform image shadow processing on each sub-image to be processed, thereby obtaining a shadow-free sub-image corresponding to each sub-image to be processed. The process of performing image shadow processing on each sub-image to be processed may refer to the above embodiments, and will not be described herein.
In one possible implementation manner, after obtaining the non-shadow sub-images corresponding to the sub-images to be processed, the computer device may perform image stitching processing on the non-shadow sub-images according to the image overlapping area between the adjacent sub-images to be processed, so as to obtain the complete non-shadow image corresponding to the image to be processed.
In one possible implementation, considering that each pixel point in the image overlapping area may correspond to different pixel values in two adjacent shadow-free sub-images, the computer device further needs to determine the pixel value of each pixel point in the image overlapping area in the image stitching process.
In one possible implementation, the computer device may determine the first pixel value weight of the third pixel point on the first shadow-free sub-image and the second pixel value weight of the third pixel point on the second shadow-free sub-image according to the pixel point position of each third pixel point in the image overlapping region in the adjacent sub-images to be processed.
Optionally, the closer the third pixel point is to the image edge of the sub-image to be processed, the lower the corresponding pixel value weight. For example, when the third pixel point is close to the image edge of the first sub-image to be processed and far from the image edge of the second sub-image to be processed, the first pixel value weight corresponding to the third pixel point is lower, and the second pixel value weight corresponding to the third pixel point is higher.
In one possible implementation, the computer device may set the sum of the weights of the first pixel value weight and the second pixel value weight of the same pixel point to 1, such that the first pixel value weight and the second pixel value weight are in a negative correlation.
Optionally, the computer device may determine the first pixel value weight and the second pixel value weight of the third pixel point according to a width of a horizontal axis of the image overlapping region in the image coordinate system and a pixel coordinate of the third pixel point.
For example, the width of the horizontal axis of the image overlapping area may be expressed as M pixel points. In the case where the left edge of the image overlapping area is the image edge of the second sub-image to be processed, the first pixel value weight corresponding to the third pixel point located at the left edge may be determined as 1, and the second pixel value weight corresponding to the third pixel point located at the left edge may be determined as 0. Alternatively, the computer device may take the left edge of the image overlapping region as the origin of the horizontal axis, so that the second pixel value weight may be expressed as w2 = x / M, where x is the horizontal axis coordinate of the third pixel point and M is the number of pixels in the horizontal axis direction of the image overlapping region; accordingly, the first pixel value weight is w1 = 1 − x / M.
For example, the vertical axis width of the image overlapping region may be expressed as N pixel points. In the case where the upper side edge of the image overlapping region is the image edge of the second sub-image to be processed, the first pixel value weight corresponding to the third pixel point located at the upper side edge may be determined as 1, and the second pixel value weight corresponding to the third pixel point located at the upper side edge may be determined as 0. Alternatively, the computer device may take the upper side edge of the image overlapping region as the origin of the vertical axis, so that the second pixel value weight may be expressed as w2 = y / N, where y is the vertical axis coordinate of the third pixel point and N is the number of pixels in the vertical axis direction of the image overlapping region; accordingly, the first pixel value weight is w1 = 1 − y / N.
Furthermore, the computer device may determine the target pixel value of the third pixel point according to the third pixel value and the first pixel value weight of the third pixel point on the first shadow-free sub-image, and the fourth pixel value and the second pixel value weight of the third pixel point on the second shadow-free sub-image.
Alternatively, the third pixel value of the third pixel point on the first shadow-free sub-image may be expressed as P1, the first pixel value weight may be expressed as w1, the fourth pixel value of the third pixel point on the second shadow-free sub-image may be expressed as P2, and the second pixel value weight may be expressed as w2, so that the target pixel value of the third pixel point can be expressed as P = w1 · P1 + w2 · P2.
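The weighted fusion of overlapping pixels can be sketched for two horizontally adjacent sub-images. The function name and the horizontal-only blending are illustrative assumptions; vertical overlaps follow the same pattern along the other axis.

```python
import numpy as np

def blend_overlap(left_tile, right_tile, overlap):
    # Blend the shared strip of two horizontally adjacent shadow-free
    # sub-images with linear weights w2 = x / overlap and w1 = 1 - w2,
    # so each tile dominates near its own interior and w1 + w2 = 1.
    a = left_tile[:, -overlap:].astype(float)
    b = right_tile[:, :overlap].astype(float)
    w2 = np.arange(overlap) / overlap
    return (1.0 - w2) * a + w2 * b
```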
Furthermore, after determining the target pixel value corresponding to each third pixel point in the image overlapping area, the computer equipment can perform image stitching processing on the adjacent non-shadow sub-images, so as to obtain the non-shadow image corresponding to the image to be processed.
Optionally, in the case that the image to be processed is a virtual satellite image, after obtaining a shadowless image corresponding to the virtual satellite image, the computer device may apply the shadowless image to a scene needing to be applied to the satellite image, such as a virtual game, virtual reality, map design, animation movie, and the like.
For example, the computer device may apply the shadow-free image corresponding to the satellite image to a flight simulation system. In one possible implementation manner, in order to change the illumination effect in the virtual satellite image in real time according to the virtual light source in the virtual flight scene of the flight simulation system, the computer device first needs to acquire the satellite image and sequentially perform image segmentation, shadow removal and image stitching on the satellite image to obtain a shadowless satellite image. The computer device can then obtain a shadow map through illumination rendering processing according to the illumination parameters corresponding to the virtual light source in the virtual flight scene, and by applying the shadow map to the shadowless satellite image, a virtual satellite image applied to the virtual flight scene can be obtained.
It should be noted that, in the embodiment of the present application, the image data such as the satellite image is acquired strictly according to the requirements of the related laws and regulations, and the subsequent data use and processing behaviors are developed within the authorized range of the laws and regulations.
In the above embodiment, for the to-be-processed image with larger resolution, the to-be-processed image is firstly subjected to segmentation processing, then each to-be-processed sub-image is respectively subjected to image shadow removal processing, after the corresponding non-shadow sub-image is obtained, the complete non-shadow image corresponding to the to-be-processed image is obtained through image stitching, so that a large amount of computing resources are avoided being consumed, and the shadow removal efficiency of the to-be-processed image is improved.
Referring to fig. 11, a flowchart of a method for processing image shadows according to another exemplary embodiment of the present application is shown.
For the image to be processed with larger resolution, in order to reduce the consumption of computing resources, the computer device can first perform image segmentation on the image to be processed to obtain a plurality of sub-images to be processed. A first candidate shadow image and a second candidate shadow image corresponding to each sub-image to be processed are obtained by respectively performing image learning processing and image semantic feature extraction on the sub-image to be processed. A target shadow image corresponding to the sub-image to be processed is then obtained by correcting the second candidate shadow image with the first candidate shadow image, and shadow feature enhancement processing is performed on the shadow area in the sub-image to be processed with the target shadow image to obtain a feature enhanced image corresponding to the sub-image to be processed. Shadow removal processing is performed on the feature enhanced image through the diffusion model to obtain a shadow-free sub-image corresponding to each sub-image to be processed, and finally a complete shadow-free image corresponding to the image to be processed is obtained through image stitching.
Referring to fig. 12, there is shown a block diagram of an apparatus for processing image shadows according to an exemplary embodiment of the present application, which includes the following modules.
The image obtaining module 1201 is configured to obtain an image to be processed, and a first candidate shadow image and a second candidate shadow image corresponding to the image to be processed, where the first candidate shadow image is obtained by performing an image processing on the image to be processed, and the second candidate shadow image is obtained by performing an image semantic feature extraction on the image to be processed;
an image correction module 1202, configured to correct the second candidate shadow image by using the first candidate shadow image to obtain a target shadow image corresponding to the image to be processed;
the feature enhancement module 1203 is configured to perform a shadow feature enhancement process on a shadow region in the image to be processed by using the target shadow image, so as to obtain a feature enhanced image corresponding to the image to be processed;
and the shadow removing module 1204 is configured to perform shadow removing processing on the feature enhanced image to obtain a non-shadow image corresponding to the image to be processed.
Optionally, the feature enhancement module 1203 includes:
A region determining unit configured to determine a shadow region and a non-shadow region in the image to be processed based on the target shadow image;
a loss determination unit configured to determine a target pixel loss based on a pixel difference between the shadow region and the non-shadow region, and a pixel difference between a first edge region of the shadow region and a second edge region of the non-shadow region;
and the characteristic enhancement unit is used for carrying out shadow characteristic enhancement processing on the shadow area in the image to be processed based on the target pixel loss to obtain the characteristic enhanced image corresponding to the image to be processed.
Optionally, the loss determination unit is configured to:
determining a pixel mean value of the non-shadow region based on a pixel value of a first pixel point in the non-shadow region;
determining a first pixel loss based on a pixel difference between a pixel value of a second pixel point in the shadow region and the pixel mean;
determining first edge pixel points in the shadow region and located in the first edge region, and second edge pixel points around the first edge pixel points in the non-shadow region;
Determining a second pixel loss based on a pixel difference between a pixel value of the first edge pixel point and a pixel mean of the second edge pixel point;
the target pixel loss is determined based on the first pixel loss and the second pixel loss.
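One plausible reading of the two-term loss above is sketched below (the edge-band width, the absolute-difference form, and the weights `alpha`/`beta` are assumptions; the text fixes only that the first term compares shadow pixels to the non-shadow mean and the second compares the two edge regions):

```python
import numpy as np

def _dilate(b, iterations):
    """Grow a boolean mask by 4-neighbour binary dilation."""
    for _ in range(iterations):
        g = b.copy()
        g[1:, :] |= b[:-1, :]
        g[:-1, :] |= b[1:, :]
        g[:, 1:] |= b[:, :-1]
        g[:, :-1] |= b[:, 1:]
        b = g
    return b

def target_pixel_loss(img, mask, edge_width=2, alpha=1.0, beta=1.0):
    """img: float image; mask: boolean shadow mask.
    First pixel loss: shadow pixels vs. the non-shadow pixel mean.
    Second pixel loss: shadow-side edge band vs. non-shadow-side edge band."""
    non_shadow_mean = img[~mask].mean()
    first = np.abs(img[mask] - non_shadow_mean).mean()
    first_edge = mask & _dilate(~mask, edge_width)    # shadow pixels near the boundary
    second_edge = ~mask & _dilate(mask, edge_width)   # non-shadow pixels near the boundary
    second = np.abs(img[first_edge].mean() - img[second_edge].mean())
    return alpha * first + beta * second

img = np.ones((10, 10))
mask = np.zeros((10, 10), dtype=bool)
img[:, :5] = 0.4          # left half darkened (shadow)
mask[:, :5] = True
loss = target_pixel_loss(img, mask)
```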
Optionally, before performing a shadow feature enhancement process on the shadow area in the image to be processed based on the target pixel loss to obtain the feature enhanced image corresponding to the image to be processed, the apparatus further includes:
the filtering processing module is used for carrying out mean value filtering processing on the pixel points in the first edge area of the shadow area to obtain the image to be processed after the filtering processing;
the feature enhancement unit is used for:
and based on the target pixel loss, carrying out iterative adjustment on second pixel values of all second pixel points in the shadow region in the image to be processed after filtering processing to obtain the feature enhanced image corresponding to the image to be processed.
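The iterative adjustment of second pixel values could, for example, be a simple fixed-step update that nudges shadow pixels toward the non-shadow mean until the pixel loss is small (an assumed update rule; the optimizer is not specified in the text):

```python
import numpy as np

def enhance_shadow(img, mask, lr=0.5, iters=20):
    """Iteratively adjust shadow-pixel values to shrink the
    |shadow_pixel - non_shadow_mean| term of the pixel loss."""
    out = img.astype(float).copy()
    target = out[~mask].mean()                   # non-shadow pixel mean
    for _ in range(iters):
        out[mask] += lr * (target - out[mask])   # move shadow pixels toward the mean
    return out

img = np.ones((8, 8))
mask = np.zeros((8, 8), dtype=bool)
img[2:6, 2:6] = 0.2
mask[2:6, 2:6] = True
enhanced = enhance_shadow(img, mask)
```

With this step size the residual halves each iteration, so the shadow region converges quickly to the non-shadow mean while non-shadow pixels are untouched.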
Optionally, the shadow removing module 1204 is configured to:
inputting the characteristic enhanced image into a diffusion model, and carrying out noise adding processing on the characteristic enhanced image through a noise adding network in the diffusion model to obtain a noise added image;
And denoising the noise-added image through a denoising network in the diffusion model based on the target shadow image and the feature prompt word to obtain the shadow-free image, wherein the feature prompt word characterizes an image processing target of the feature enhanced image.
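The "noise adding" step is commonly the closed-form forward process of a DDPM-style diffusion model; the generic form (not the patent's specific network) is sketched below, with the beta schedule assumed:

```python
import numpy as np

def add_noise(x0, t, betas, rng):
    """Closed-form forward diffusion:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,  eps ~ N(0, I)."""
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)[t]                 # cumulative product up to step t
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps, eps

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)            # assumed linear schedule
x0 = np.zeros((8, 8))                            # stand-in for the feature enhanced image
xt, eps = add_noise(x0, t=999, betas=betas, rng=rng)
```

The denoising network would then run this process in reverse, conditioned here on the target shadow image and the feature prompt word.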
Optionally, the image acquisition module 1201 is configured to:
acquiring the image to be processed;
performing color space conversion on the image to be processed to obtain the image to be processed after space conversion;
the first candidate shadow image corresponding to the image to be processed is obtained by performing binarization, opening and closing operations, and connected-component screening on the space-converted image to be processed;
and inputting the image to be processed into a shadow detection model to obtain the second candidate shadow image corresponding to the image to be processed, which is output by the shadow detection model.
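A numpy-only sketch of the conventional first-candidate pipeline follows (luminance conversion stands in for the unspecified color-space conversion; the threshold, structuring element, and minimum area are assumptions):

```python
import numpy as np
from collections import deque

def first_candidate_mask(rgb, thresh=0.35, min_area=20):
    # Color-space conversion: RGB -> luminance (stand-in for e.g. an HSV/Lab channel).
    lum = rgb[..., :3].astype(float) @ np.array([0.299, 0.587, 0.114])
    mask = lum < thresh                      # binarization: dark pixels as shadow candidates

    # Opening (erosion then dilation) with a 3x3 cross to remove speckle noise.
    def shift_or(b):
        g = b.copy()
        g[1:, :] |= b[:-1, :]; g[:-1, :] |= b[1:, :]
        g[:, 1:] |= b[:, :-1]; g[:, :-1] |= b[:, 1:]
        return g
    mask = shift_or(~shift_or(~mask))

    # Connected-component screening: keep 4-connected regions of at least min_area pixels.
    h, w = mask.shape
    seen = np.zeros_like(mask)
    out = np.zeros_like(mask)
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        comp, q = [], deque([(sy, sx)])
        seen[sy, sx] = True
        while q:                              # BFS flood fill of one component
            y, x = q.popleft()
            comp.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if len(comp) >= min_area:
            for y, x in comp:
                out[y, x] = True
    return out

rgb = np.ones((40, 40, 3))
rgb[5:15, 5:15] = 0.1                         # dark block: a real shadow candidate
rgb[30, 30] = 0.1                             # isolated dark speckle: noise
m = first_candidate_mask(rgb)
```

In practice a library such as OpenCV would supply the morphology and labeling; the logic is the same.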
Optionally, the image acquisition module 1201 is further configured to:
inputting the image to be processed into the shadow detection model, and extracting semantic information of the image to be processed through a feature extraction network in the shadow detection model to obtain image semantic information corresponding to the image to be processed;
Performing dimension adjustment on the image semantic information through a dimension integration network in the shadow detection model to obtain the image semantic information subjected to dimension adjustment, so that the data dimension of the image semantic information is adapted to the shadow detection network in the shadow detection model;
and performing image shadow detection on the image semantic information subjected to dimension adjustment through the shadow detection network in the shadow detection model to obtain the second candidate shadow image corresponding to the image to be processed.
Optionally, the apparatus further includes:
the sample acquisition module is used for acquiring a sample image and a shadow image true value corresponding to the sample image;
the image output module is used for inputting the sample image into the shadow detection model to obtain a sample shadow image output by the shadow detection model;
a loss determination module for determining a shadow detection loss based on the sample shadow image and the shadow image truth value;
the training module is used for training the shadow detection network in the shadow detection model by using the shadow detection loss to obtain the trained shadow detection network, and the feature extraction network and the dimension integration network in the shadow detection model are obtained by training in advance.
Optionally, the image correction module 1202 is configured to:
performing opening and closing operations on the first candidate shadow image and the second candidate shadow image respectively to obtain the processed first candidate shadow image and the processed second candidate shadow image;
matching a first candidate shadow region in the first candidate shadow image and a second candidate shadow region in the second candidate shadow image;
and carrying out boundary adjustment on the second region boundary of the second candidate shadow region by using the first region boundary of the first candidate shadow region for the first candidate shadow region and the second candidate shadow region obtained by matching, so as to obtain a target shadow image corresponding to the image to be processed.
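One assumed reading of the matching and boundary-adjustment steps: pair regions by intersection-over-union, and where a pair matches strongly, adopt the first candidate's boundary for the second (the IoU criterion and threshold are illustrative assumptions):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean region masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def correct_second_by_first(first_regions, second_regions, iou_thresh=0.5):
    """For each second-candidate region, if some first-candidate region overlaps
    it strongly, take the first region's boundary; otherwise keep it as-is."""
    corrected = []
    for s in second_regions:
        best = max(first_regions, key=lambda f: iou(f, s), default=None)
        if best is not None and iou(best, s) >= iou_thresh:
            corrected.append(best)       # boundary adopted from the first candidate
        else:
            corrected.append(s)
    return corrected

f1 = np.zeros((20, 20), dtype=bool); f1[:10, :10] = True      # first-candidate region
s1 = np.zeros((20, 20), dtype=bool); s1[1:10, 1:10] = True    # overlaps f1 strongly
s2 = np.zeros((20, 20), dtype=bool); s2[15:18, 15:18] = True  # no first-candidate match
corrected = correct_second_by_first([f1], [s1, s2])
```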
Optionally, the apparatus further includes:
the image segmentation module is used for carrying out image segmentation on the image to be processed to obtain a plurality of sub-images to be processed, and an image overlapping area exists between the adjacent sub-images to be processed;
the shadow removing module is used for performing image shadow processing on each sub-image to be processed respectively to obtain non-shadow images corresponding to the sub-images to be processed;
and the image stitching module is used for performing image stitching processing on the shadowless sub-images based on the image overlapping area between the adjacent sub-images to be processed to obtain shadowless images corresponding to the images to be processed.
Optionally, the image stitching module is configured to:
determining a first pixel value weight of each third pixel point on a first non-shadow image and a second pixel value weight of each third pixel point on a second non-shadow image based on the pixel point positions of each third pixel point in the image overlapping region in adjacent sub-images to be processed;
determining a target pixel value of the third pixel point based on a third pixel value of the third pixel point on the first non-shadow image and the first pixel value weight, and a fourth pixel value of the third pixel point on the second non-shadow image and the second pixel value weight;
and performing image stitching processing on the shadowless sub-images based on target pixel values corresponding to all third pixel points in the image overlapping region to obtain shadowless images corresponding to the images to be processed.
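A position-dependent linear feathering is one natural instantiation of the per-pixel weighting above (a 1-D sketch over a horizontal overlap; the actual weight function is not specified in the text):

```python
import numpy as np

def blend_overlap(left_tile, right_tile, overlap):
    """Blend two horizontally adjacent tiles whose last/first `overlap`
    columns cover the same image region; each tile's weight falls off
    linearly with distance into the overlap."""
    w = np.linspace(1.0, 0.0, overlap)           # weight of the left tile per column
    blended = left_tile[:, -overlap:] * w + right_tile[:, :overlap] * (1.0 - w)
    return np.concatenate(
        [left_tile[:, :-overlap], blended, right_tile[:, overlap:]], axis=1)

left = np.full((4, 10), 1.0)
right = np.full((4, 10), 0.0)
merged = blend_overlap(left, right, overlap=4)
```

The linear ramp makes the seam invisible: at each overlap pixel the two shadow-free sub-images contribute in proportion to how close the pixel is to each tile's interior.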
Optionally, the apparatus further includes:
the parameter acquisition module is used for acquiring illumination parameters of the virtual light source, wherein the illumination parameters at least comprise the position of the light source, the illumination direction and the illumination intensity;
the rendering module is used for performing illumination rendering processing based on the illumination parameters and the shadowless image to obtain a shadow map;
And the mapping application module is used for applying the shadow mapping to the non-shadow image to obtain a target illumination image corresponding to the non-shadow image.
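As a toy example of re-lighting the shadow-free result, a Lambert-style directional light could produce the shadow map and then be applied multiplicatively (the lighting model, surface normals, and parameter names are all hypothetical; the patent does not fix a rendering model):

```python
import numpy as np

def apply_virtual_light(shadow_free, normals, light_dir, intensity=1.0, ambient=0.2):
    """Render an illumination/shadow map from a directional virtual light
    (Lambert's cosine law) and apply it to the shadow-free image."""
    d = np.asarray(light_dir, dtype=float)
    d = d / np.linalg.norm(d)                        # normalize illumination direction
    ndotl = np.clip(normals @ d, 0.0, 1.0)           # per-pixel cos(angle to light)
    shade_map = ambient + intensity * ndotl          # the rendered shadow map
    return np.clip(shadow_free * shade_map[..., None], 0.0, 1.0)

normals = np.zeros((2, 2, 3)); normals[..., 2] = 1.0   # flat surface facing +z
img = np.full((2, 2, 3), 0.5)                          # shadow-free image
lit = apply_virtual_light(img, normals, light_dir=(0, 0, 1))
```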
In summary, in the embodiments of the present application, image processing and image semantic feature extraction are performed on the image to be processed to obtain the corresponding first candidate shadow image and second candidate shadow image. The first candidate shadow image is then used to correct the second candidate shadow image, yielding the target shadow image corresponding to the image to be processed and improving the efficiency of image shadow detection. Shadow feature enhancement processing is performed on the shadow region in the image to be processed by using the target shadow image to obtain a feature enhanced image corresponding to the image to be processed, and the feature enhanced image, instead of the original image to be processed, is subjected to shadow removal processing to obtain a shadow-free image. This improves the accuracy of image shadow removal and optimizes the generation quality of the shadow-free image.
Referring to fig. 13, a schematic structural diagram of a computer device according to an exemplary embodiment of the present application is shown. The computer device 1300 includes a central processing unit (Central Processing Unit, CPU) 1301, a system memory 1304 including a random access memory 1302 and a read-only memory 1303, and a system bus 1305 connecting the system memory 1304 and the central processing unit 1301. The computer device 1300 also includes a basic input/output (I/O) system 1306 to facilitate the transfer of information between the various devices within the computer, and a mass storage device 1307 for storing an operating system 1313, application programs 1314, and other program modules 1315.
The basic input/output system 1306 includes a display 1308 for displaying information, and an input device 1309, such as a mouse, keyboard, etc., for a user to input information. Wherein the display 1308 and the input device 1309 are connected to the central processing unit 1301 through an input output controller 1310 connected to the system bus 1305. The basic input/output system 1306 may also include an input/output controller 1310 for receiving and processing input from a keyboard, mouse, or electronic stylus, among a plurality of other devices. Similarly, the input output controller 1310 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 1307 is connected to the central processing unit 1301 through a mass storage controller (not shown) connected to the system bus 1305. The mass storage device 1307 and its associated computer-readable media provide non-volatile storage for the computer device 1300. That is, the mass storage device 1307 may include a computer-readable medium (not shown), such as a hard disk or optical drive.
The computer readable medium may include computer storage media and communication media without loss of generality. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media include random access memory (RAM), read-only memory (ROM), flash memory or other solid-state memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that computer storage media are not limited to the above. The system memory 1304 and the mass storage device 1307 described above may be collectively referred to as memory.
The memory stores one or more programs configured to be executed by the one or more central processing units 1301, the one or more programs containing instructions for implementing the above-described methods, the central processing unit 1301 executing the one or more programs to implement the methods of processing image shadows provided by the respective method embodiments described above.
According to various embodiments of the present application, the computer device 1300 may also operate through a remote computer connected to a network, such as the Internet. That is, the computer device 1300 may be connected to the network 1311 through a network interface unit 1312 coupled to the system bus 1305, or the network interface unit 1312 may be used to connect to other types of networks or remote computer systems (not shown).
The embodiment of the application further provides a computer readable storage medium, wherein at least one computer instruction is stored in the readable storage medium, and the at least one computer instruction is loaded and executed by a processor to implement the method for processing image shadows according to the embodiment.
Alternatively, the computer-readable storage medium may include: ROM, RAM, a solid state drive (SSD), an optical disc, or the like. The RAM may include, among other things, resistive random access memory (ReRAM) and dynamic random access memory (DRAM).
Embodiments of the present application provide a computer program product comprising computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the image shadow processing method described in the above embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description covers merely preferred embodiments of the present application and is not intended to limit it; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present application shall fall within its scope of protection.
Claims (15)
1. A method of processing an image shadow, the method comprising:
acquiring an image to be processed, and a first candidate shadow image and a second candidate shadow image corresponding to the image to be processed, wherein the first candidate shadow image is obtained by performing image processing on the image to be processed, and the second candidate shadow image is obtained by performing image semantic feature extraction on the image to be processed;
Correcting the second candidate shadow image by using the first candidate shadow image to obtain a target shadow image corresponding to the image to be processed;
performing shadow feature enhancement processing on a shadow region in the image to be processed by using the target shadow image to obtain a feature enhancement image corresponding to the image to be processed;
and performing shadow removal processing on the characteristic enhanced image to obtain a shadowless image corresponding to the image to be processed.
2. The method according to claim 1, wherein the performing, by using the target shadow image, a shadow feature enhancement process on a shadow region in the image to be processed to obtain a feature enhancement image corresponding to the image to be processed includes:
determining a shadow area and a non-shadow area in the image to be processed based on the target shadow image;
determining a target pixel loss based on pixel differences between the shadow region and the non-shadow region, and pixel differences between a first edge region of the shadow region and a second edge region of the non-shadow region;
and carrying out shadow feature enhancement processing on the shadow region in the image to be processed based on the target pixel loss to obtain the feature enhanced image corresponding to the image to be processed.
3. The method of claim 2, wherein the determining a target pixel loss based on a pixel difference between the shadow region and the non-shadow region, and a pixel difference between a first edge region of the shadow region and a second edge region of the non-shadow region, comprises:
determining a pixel mean value of the non-shadow region based on a pixel value of a first pixel point in the non-shadow region;
determining a first pixel loss based on a pixel difference between a pixel value of a second pixel point in the shadow region and the pixel mean;
determining first edge pixel points in the shadow region and located in the first edge region, and second edge pixel points around the first edge pixel points in the non-shadow region;
determining a second pixel loss based on a pixel difference between a pixel value of the first edge pixel point and a pixel mean of the second edge pixel point;
the target pixel loss is determined based on the first pixel loss and the second pixel loss.
4. The method according to claim 2, wherein the performing, based on the target pixel loss, a shadow feature enhancement process on the shadow region in the image to be processed, and before obtaining the feature enhanced image corresponding to the image to be processed, the method further includes:
carrying out mean value filtering processing on pixel points in the first edge region of the shadow region to obtain the image to be processed after the filtering processing;
and performing shadow feature enhancement processing on the shadow region in the image to be processed based on the target pixel loss to obtain the feature enhanced image corresponding to the image to be processed, including:
and based on the target pixel loss, carrying out iterative adjustment on second pixel values of all second pixel points in the shadow region in the image to be processed after filtering processing to obtain the feature enhanced image corresponding to the image to be processed.
5. The method according to claim 1, wherein the performing shadow removal processing on the feature enhanced image to obtain a non-shadow image corresponding to the image to be processed includes:
inputting the characteristic enhanced image into a diffusion model, and carrying out noise adding processing on the characteristic enhanced image through a noise adding network in the diffusion model to obtain a noise added image;
and denoising the noise-added image through a denoising network in the diffusion model based on the target shadow image and the feature prompt word to obtain the shadow-free image, wherein the feature prompt word characterizes an image processing target of the feature enhanced image.
6. The method of claim 1, wherein the acquiring the image to be processed and the first candidate shadow image and the second candidate shadow image corresponding to the image to be processed comprises:
acquiring the image to be processed;
performing color space conversion on the image to be processed to obtain the image to be processed after space conversion;
the first candidate shadow image corresponding to the image to be processed is obtained by performing binarization, opening and closing operations, and connected-component screening on the space-converted image to be processed;
and inputting the image to be processed into a shadow detection model to obtain the second candidate shadow image corresponding to the image to be processed, which is output by the shadow detection model.
7. The method of claim 6, wherein the inputting the image to be processed into a shadow detection model to obtain the second candidate shadow image corresponding to the image to be processed output by the shadow detection model comprises:
inputting the image to be processed into the shadow detection model, and extracting semantic information of the image to be processed through a feature extraction network in the shadow detection model to obtain image semantic information corresponding to the image to be processed;
Performing dimension adjustment on the image semantic information through a dimension integration network in the shadow detection model to obtain the image semantic information subjected to dimension adjustment, so that the data dimension of the image semantic information is adapted to the shadow detection network in the shadow detection model;
and performing image shadow detection on the image semantic information subjected to dimension adjustment through the shadow detection network in the shadow detection model to obtain the second candidate shadow image corresponding to the image to be processed.
8. The method of claim 7, wherein the method further comprises:
acquiring a sample image and a shadow image true value corresponding to the sample image;
inputting the sample image into the shadow detection model to obtain a sample shadow image output by the shadow detection model;
determining a shadow detection penalty based on the sample shadow image and the shadow image truth value;
training the shadow detection network in the shadow detection model by using the shadow detection loss to obtain the trained shadow detection network, wherein the feature extraction network and the dimension integration network in the shadow detection model are obtained by training in advance.
9. The method according to claim 1, wherein the correcting the second candidate shadow image by using the first candidate shadow image to obtain the target shadow image corresponding to the image to be processed includes:
performing opening and closing operations on the first candidate shadow image and the second candidate shadow image respectively to obtain the processed first candidate shadow image and the processed second candidate shadow image;
matching a first candidate shadow region in the first candidate shadow image and a second candidate shadow region in the second candidate shadow image;
and carrying out boundary adjustment on the second region boundary of the second candidate shadow region by using the first region boundary of the first candidate shadow region for the first candidate shadow region and the second candidate shadow region obtained by matching, so as to obtain a target shadow image corresponding to the image to be processed.
10. The method according to any one of claims 1 to 9, wherein the method further comprises:
image segmentation is carried out on the image to be processed to obtain a plurality of sub-images to be processed, and an image overlapping area exists between adjacent sub-images to be processed;
Respectively carrying out image shadow processing on each sub-image to be processed to obtain a shadow-free image corresponding to each sub-image to be processed;
and performing image stitching processing on the shadowless sub-images based on the image overlapping area between the adjacent sub-images to be processed to obtain shadowless images corresponding to the images to be processed.
11. The method according to claim 10, wherein the performing image stitching processing on the non-shadow sub-images based on the image overlapping area between the adjacent sub-images to be processed to obtain non-shadow images corresponding to the images to be processed includes:
determining a first pixel value weight of each third pixel point on a first non-shadow image and a second pixel value weight of each third pixel point on a second non-shadow image based on the pixel point positions of each third pixel point in the image overlapping region in adjacent sub-images to be processed;
determining a target pixel value of the third pixel point based on a third pixel value of the third pixel point on the first non-shadow image and the first pixel value weight, and a fourth pixel value of the third pixel point on the second non-shadow image and the second pixel value weight;
And performing image stitching processing on the shadowless sub-images based on target pixel values corresponding to all third pixel points in the image overlapping region to obtain shadowless images corresponding to the images to be processed.
12. The method according to any one of claims 1 to 9, further comprising:
obtaining illumination parameters of a virtual light source, wherein the illumination parameters at least comprise a light source position, an illumination direction and illumination intensity;
performing illumination rendering processing based on the illumination parameters and the shadowless image to obtain a shadow map;
and applying the shadow map to the non-shadow image to obtain a target illumination image corresponding to the non-shadow image.
13. An apparatus for processing an image shadow, the apparatus comprising:
the image acquisition module is used for acquiring an image to be processed, and a first candidate shadow image and a second candidate shadow image corresponding to the image to be processed, wherein the first candidate shadow image is obtained by performing image processing on the image to be processed, and the second candidate shadow image is obtained by performing image semantic feature extraction on the image to be processed;
the image correction module is used for correcting the second candidate shadow image by utilizing the first candidate shadow image to obtain a target shadow image corresponding to the image to be processed;
The feature enhancement module is used for carrying out shadow feature enhancement processing on a shadow region in the image to be processed by utilizing the target shadow image to obtain a feature enhancement image corresponding to the image to be processed;
and the shadow removing module is used for carrying out shadow removing processing on the characteristic enhanced image to obtain a shadowless image corresponding to the image to be processed.
14. A computer device, the computer device comprising a processor and a memory; the memory stores at least one computer instruction for execution by the processor to implement the method of processing image shadows according to any one of claims 1 to 12.
15. A computer readable storage medium having stored therein at least one computer instruction that is loaded and executed by a processor to implement the method of processing image shadows as claimed in any one of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410049044.7A CN117575976B (en) | 2024-01-12 | 2024-01-12 | Image shadow processing method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410049044.7A CN117575976B (en) | 2024-01-12 | 2024-01-12 | Image shadow processing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117575976A true CN117575976A (en) | 2024-02-20 |
CN117575976B CN117575976B (en) | 2024-04-19 |
Family
ID=89888415
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410049044.7A Active CN117575976B (en) | 2024-01-12 | 2024-01-12 | Image shadow processing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117575976B (en) |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060082578A1 (en) * | 2004-10-15 | 2006-04-20 | Nec Electronics Corporation | Image processor, image processing method, and image processing program product |
US20110170768A1 (en) * | 2010-01-11 | 2011-07-14 | Tandent Vision Science, Inc. | Image segregation system with method for handling textures |
CN103839286A (en) * | 2014-03-17 | 2014-06-04 | 武汉大学 | True-orthophoto optimization sampling method of object semantic constraint |
CN111462222A (en) * | 2020-04-03 | 2020-07-28 | 深圳前海微众银行股份有限公司 | Method, device, equipment and medium for determining reserve of object to be detected |
CN111462098A (en) * | 2020-04-03 | 2020-07-28 | 深圳前海微众银行股份有限公司 | Method, device, equipment and medium for detecting overlapping of shadow areas of object to be detected |
CN113763296A (en) * | 2021-04-28 | 2021-12-07 | 腾讯云计算(北京)有限责任公司 | Image processing method, apparatus and medium |
US20220014684A1 (en) * | 2019-03-25 | 2022-01-13 | Huawei Technologies Co., Ltd. | Image display method and device |
CN114359289A (en) * | 2020-09-28 | 2022-04-15 | 华为技术有限公司 | Image processing method and related device |
CN115272119A (en) * | 2022-07-27 | 2022-11-01 | 重庆西行纪网络科技有限公司 | Image shadow removing method and device, computer equipment and storage medium |
CN115496675A (en) * | 2022-07-21 | 2022-12-20 | 北京联合大学 | Shadow removal method based on Neighborwood attention mechanism |
CN115661505A (en) * | 2022-09-07 | 2023-01-31 | 杭州电子科技大学 | Semantic perception image shadow detection method |
WO2023024697A1 (en) * | 2021-08-26 | 2023-03-02 | 北京旷视科技有限公司 | Image stitching method and electronic device |
WO2023024096A1 (en) * | 2021-08-27 | 2023-03-02 | 深圳市大疆创新科技有限公司 | Image processing method, image processing device, photographing equipment, and readable storage medium |
CN115731442A (en) * | 2022-11-30 | 2023-03-03 | 北京信路威科技股份有限公司 | Image processing method, image processing device, computer equipment and storage medium |
CN115953663A (en) * | 2022-12-30 | 2023-04-11 | 杭州电子科技大学 | Weak supervision shadow detection method using line marking |
CN116012232A (en) * | 2021-10-18 | 2023-04-25 | 虹软科技股份有限公司 | Image processing method and device, storage medium and electronic equipment |
CN116152512A (en) * | 2023-02-28 | 2023-05-23 | 长光卫星技术股份有限公司 | Height measuring and calculating method based on building shadow restoration |
2024-01-12: Application CN202410049044.7A filed; granted as CN117575976B (status: Active)
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060082578A1 (en) * | 2004-10-15 | 2006-04-20 | Nec Electronics Corporation | Image processor, image processing method, and image processing program product |
US20110170768A1 (en) * | 2010-01-11 | 2011-07-14 | Tandent Vision Science, Inc. | Image segregation system with method for handling textures |
CN103839286A (en) * | 2014-03-17 | 2014-06-04 | 武汉大学 | True-orthophoto optimization sampling method of object semantic constraint |
US20220014684A1 (en) * | 2019-03-25 | 2022-01-13 | Huawei Technologies Co., Ltd. | Image display method and device |
CN111462222A (en) * | 2020-04-03 | 2020-07-28 | 深圳前海微众银行股份有限公司 | Method, device, equipment and medium for determining reserve of object to be detected |
CN111462098A (en) * | 2020-04-03 | 2020-07-28 | 深圳前海微众银行股份有限公司 | Method, device, equipment and medium for detecting overlapping of shadow areas of object to be detected |
CN114359289A (en) * | 2020-09-28 | 2022-04-15 | 华为技术有限公司 | Image processing method and related device |
CN113763296A (en) * | 2021-04-28 | 2021-12-07 | 腾讯云计算(北京)有限责任公司 | Image processing method, apparatus and medium |
WO2023024697A1 (en) * | 2021-08-26 | 2023-03-02 | 北京旷视科技有限公司 | Image stitching method and electronic device |
WO2023024096A1 (en) * | 2021-08-27 | 2023-03-02 | 深圳市大疆创新科技有限公司 | Image processing method, image processing device, photographing equipment, and readable storage medium |
CN116012232A (en) * | 2021-10-18 | 2023-04-25 | 虹软科技股份有限公司 | Image processing method and device, storage medium and electronic equipment |
CN115496675A (en) * | 2022-07-21 | 2022-12-20 | 北京联合大学 | Shadow removal method based on Neighborhood attention mechanism |
CN115272119A (en) * | 2022-07-27 | 2022-11-01 | 重庆西行纪网络科技有限公司 | Image shadow removing method and device, computer equipment and storage medium |
CN115661505A (en) * | 2022-09-07 | 2023-01-31 | 杭州电子科技大学 | Semantic perception image shadow detection method |
CN115731442A (en) * | 2022-11-30 | 2023-03-03 | 北京信路威科技股份有限公司 | Image processing method, image processing device, computer equipment and storage medium |
CN115953663A (en) * | 2022-12-30 | 2023-04-11 | 杭州电子科技大学 | Weak supervision shadow detection method using line marking |
CN116152512A (en) * | 2023-02-28 | 2023-05-23 | 长光卫星技术股份有限公司 | Height measuring and calculating method based on building shadow restoration |
Non-Patent Citations (3)
Title |
---|
Senxin Cai et al.: "A Study on the Combination of Image Preprocessing Method Based on Texture Feature and Segmentation Algorithm for Breast Ultrasound Images", 2022 2nd International Conference on Consumer Electronics and Computer Engineering, 31 December 2022 (2022-12-31), pages 760 - 764 * |
Wu Wen et al.: "Single-Image Shadow Removal Method Based on Low-Scale Detail Restoration", Acta Electronica Sinica, no. 07, 15 July 2020 (2020-07-15), pages 52 - 61 * |
Wei Mingqiang et al.: "Joint Bilateral Filtering Method for Image Texture Removal Based on Interval Gradient", Computer Science, no. 03, 15 March 2018 (2018-03-15), pages 35 - 40 * |
Also Published As
Publication number | Publication date |
---|---|
CN117575976B (en) | 2024-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10943145B2 (en) | Image processing methods and apparatus, and electronic devices | |
CN111292264B (en) | Image high dynamic range reconstruction method based on deep learning | |
CN110516577B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN111275034B (en) | Method, device, equipment and storage medium for extracting text region from image | |
CN111062854B (en) | Method, device, terminal and storage medium for detecting watermark | |
CN112884758B (en) | Defect insulator sample generation method and system based on style migration method | |
US10332262B2 (en) | Removal of background information from digital images | |
CN112101386B (en) | Text detection method, device, computer equipment and storage medium | |
CN109426773A (en) | Road recognition method and device | |
CN108985201A (en) | Image processing method, medium, device and calculating equipment | |
CN113012068A (en) | Image denoising method and device, electronic equipment and computer readable storage medium | |
US12051225B2 (en) | Generating alpha mattes for digital images utilizing a transformer-based encoder-decoder | |
CN113506305B (en) | Image enhancement method, semantic segmentation method and device for three-dimensional point cloud data | |
CN111815748B (en) | Animation processing method and device, storage medium and electronic equipment | |
CN112257729A (en) | Image recognition method, device, equipment and storage medium | |
CN117575976B (en) | Image shadow processing method, device, equipment and storage medium | |
Lv et al. | Low-light image haze removal with light segmentation and nonlinear image depth estimation | |
CN115713585A (en) | Texture image reconstruction method and device, computer equipment and storage medium | |
CN114387315A (en) | Image processing model training method, image processing device, image processing equipment and image processing medium | |
CN116977190A (en) | Image processing method, apparatus, device, storage medium, and program product | |
TWM625817U (en) | Image simulation system with time sequence smoothness | |
CN113627342A (en) | Method, system, device and storage medium for video depth feature extraction optimization | |
CN114299105A (en) | Image processing method, image processing device, computer equipment and storage medium | |
CN115222606A (en) | Image processing method, image processing device, computer readable medium and electronic equipment | |
CN112598043A (en) | Cooperative significance detection method based on weak supervised learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||