CN116797954A - Image processing method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116797954A
CN116797954A (application CN202210237251.6A)
Authority
CN
China
Prior art keywords
image
images
image set
determining
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210237251.6A
Other languages
Chinese (zh)
Inventor
黄海鹏
刘阳兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan TCL Group Industrial Research Institute Co Ltd
Original Assignee
Wuhan TCL Group Industrial Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan TCL Group Industrial Research Institute Co Ltd filed Critical Wuhan TCL Group Industrial Research Institute Co Ltd
Priority to CN202210237251.6A priority Critical patent/CN116797954A/en
Publication of CN116797954A publication Critical patent/CN116797954A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the application discloses an image processing method, an image processing device, an electronic device and a storage medium. The method comprises the following steps: the electronic device performs exposure detection on the images to be processed in the set of images to be processed to obtain a first image set of normally exposed images; performs similarity detection on the first image set, determines similar images in the first image set, and removes the similar images to obtain a second image set; performs definition detection on the second images in the second image set to obtain a third image set of blurred images; and determines, according to an image screening model, the images in the third image set that the user needs to keep. In the embodiment of the application, the electronic device screens out the low-quality pictures in the set of images to be processed and then organizes the set accordingly, which saves the time a user would spend manually sorting the image set and improves the efficiency of organizing a plurality of images.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, an electronic device, and a storage medium.
Background
With the development of smart devices, devices such as smartphones and tablet computers often store a large amount of image data, such as large numbers of pictures taken by users. When a user shoots photos, the captured pictures may be of low quality, for example blurred or abnormally exposed, due to movement of the subject, defocusing of the camera, poor light capture by the camera, or complex illumination changes.
After taking a large number of pictures, the user has to screen and sort them one by one, which requires much time and effort.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, an electronic device and a storage medium. The image processing method can save the time a user spends manually organizing image sets and improve the efficiency of organizing a plurality of images.
In a first aspect, an embodiment of the present application provides an image processing method, including:
performing exposure detection on the images to be processed in the set of images to be processed to obtain a first image set of normally exposed images;
performing similarity detection on the first image set, determining similar images in the first image set, and removing the similar images to obtain a second image set;
performing definition detection on the second images in the second image set to obtain a third image set of blurred images;
and determining, according to an image screening model, the images in the third image set that the user needs to keep.
In a second aspect, an embodiment of the present application provides an image processing apparatus including:
the first detection module is used for performing exposure detection on the images to be processed in the set of images to be processed to obtain a first image set of normally exposed images;
the second detection module is used for detecting the similarity of the first image set, determining similar images in the first image set, and removing the similar images to obtain a second image set;
the third detection module is used for detecting the definition of the second images in the second image set to obtain a third image set of blurred images;
and the processing module is used for determining, according to the image screening model, the images in the third image set that the user needs to keep.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements steps in the image processing method provided in the embodiment of the present application when the processor executes the program.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform steps in an image processing method provided by embodiments of the present application.
In the embodiment of the application, the electronic device performs exposure detection on the images to be processed in the set of images to be processed to obtain a first image set of normally exposed images; performs similarity detection on the first image set, determines similar images in the first image set, and removes the similar images to obtain a second image set; performs definition detection on the second images in the second image set to obtain a third image set of blurred images; and determines, according to the image screening model, the images in the third image set that the user needs to keep. In the embodiment of the application, the electronic device screens out the low-quality pictures in the set of images to be processed and then organizes the set accordingly, which saves the time a user would spend manually sorting the image set and improves the efficiency of organizing a plurality of images.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the application.
Fig. 2 is a schematic structural diagram of a similarity detection model according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a first processing unit according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a second processing unit according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an image screening model according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a bottleneck layer according to an embodiment of the present application.
Fig. 7 is a second schematic flow chart of an image processing method according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a basic model according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
With the development of network technology and the development of smart device technology, a large amount of image data, such as images shot by users, images downloaded from the internet and saved locally, or images cached by application programs, is often stored on various electronic devices, such as smart phones, smart watches, smart glasses, etc.
Among the images stored on an electronic device, the user often has to manually select and delete images of lower quality, such as images with abnormal exposure, motion blur, or out-of-focus blur. Screening and deleting a large number of images one by one is time-consuming and labor-intensive.
In order to solve the technical problems, the embodiment of the application provides an image processing method, an image processing device, electronic equipment and a storage medium. The image processing method can be applied to electronic equipment such as tablet computers, smart phones and televisions. The image processing method can improve the efficiency of sorting a plurality of images.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the application. The image processing method may include the steps of:
110. Perform exposure detection on the images to be processed in the set of images to be processed to obtain a first image set of normally exposed images.
In some embodiments, the electronic device may acquire the set of images to be processed before performing exposure detection on the images in it to obtain the normally exposed first images.
A plurality of initial images are stored on the electronic device, and an initial image set is formed by the plurality of initial images. The plurality of initial images may be images stored in the same storage path, such as the plurality of initial images all stored in one album. The plurality of initial images may also be images stored in different storage paths, such as the plurality of initial images stored in different folders.
After the initial image set is acquired, the electronic device may perform size adjustment on each initial image in the initial image set, so as to obtain a plurality of images to be processed with preset sizes.
For example, if an original image has an original size of 1920×1080 and a preset size of 64×64, the original image may be reduced to a to-be-processed image of 64×64.
Reducing each initial image to an image to be processed lowers the computing power the electronic device has to spend in subsequent processing and speeds that processing up. After the image to be processed corresponding to each initial image is obtained, the plurality of images to be processed can be determined as the set of images to be processed.
It should be noted that the computing performance of different electronic devices is different, for example, the computing performance of electronic devices using processors with different performances is different. Before each initial image in the initial image set is subjected to size adjustment to obtain a plurality of images to be processed with preset sizes, the electronic device can determine the corresponding preset size according to its own calculation performance, for example, under the condition that the calculation performance of the electronic device is high, the preset size to be adjusted of the initial image can be determined to be 400×300. In the case of a low computing performance of the electronic device, the preset size of the initial image to be adjusted may be determined to be 64×64.
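As an illustration of the resizing step above, the sketch below downscales an image to the preset working size by nearest-neighbor sampling. This is a minimal stand-in for whatever resizing method the device actually uses; the function name and the NumPy implementation are illustrative, not taken from the patent.

```python
import numpy as np

def resize_nearest(image: np.ndarray, size: tuple) -> np.ndarray:
    """Downscale an H x W x C image to (new_h, new_w) by nearest-neighbor sampling."""
    new_h, new_w = size
    h, w = image.shape[:2]
    # Map each output pixel back to the nearest source pixel.
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return image[rows[:, None], cols]

# Shrink a 1920x1080 frame to the 64x64 working size mentioned above.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
small = resize_nearest(frame, (64, 64))
print(small.shape)  # (64, 64, 3)
```

A device with more computing power could pass a larger preset size such as (300, 400) instead.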
In some embodiments, after obtaining the set of images to be processed, the electronic device may perform exposure detection on the images in the set to obtain the normally exposed first images.
Each image to be processed in the set is an image in the RGB color space. To detect the brightness of each image, the RGB (Red, Green, Blue) color space of each image to be processed can be converted into the HSV (Hue, Saturation, Value) color space, in which the V (Value) channel represents brightness.
And then the electronic equipment performs corresponding channel splitting on the HSV color space of each image to be processed so as to obtain a V channel value corresponding to each image to be processed. The V-channel value corresponding to each image to be processed may be a mean value corresponding to the V-channel corresponding to each image to be processed.
In some embodiments, it is determined whether the V-channel value corresponding to each image to be processed is within a preset range; if so, the image to be processed is determined to be a first image, that is, a first image with normal exposure. The plurality of such first images form the first image set.
The preset range can be set manually in advance or set by the electronic device according to actual needs. For example, the preset range may be 35-245: if the V-channel value is less than 35, the image to be processed is considered too dark; if the V-channel value is greater than 245, it is considered too bright. In both cases the image to be processed is considered abnormally exposed. If the V-channel value of the image to be processed is in the range of 35-245, the image is considered a normally exposed first image.
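The V-channel check described above can be sketched as follows. In 8-bit HSV (as used by common libraries such as OpenCV), the V value of a pixel is simply the maximum of its R, G and B values; the 35 and 245 bounds are the example thresholds from the text, and the function name is illustrative.

```python
import numpy as np

# Example thresholds from the text: mean V below 35 is too dark, above 245 too bright.
V_LOW, V_HIGH = 35, 245

def is_normally_exposed(rgb: np.ndarray) -> bool:
    """rgb: H x W x 3 uint8 image. The V channel of HSV is max(R, G, B) per pixel."""
    v = rgb.max(axis=2)           # per-pixel brightness (Value channel)
    v_mean = float(v.mean())      # mean V value over the whole image
    return V_LOW <= v_mean <= V_HIGH

dark = np.full((4, 4, 3), 10, dtype=np.uint8)    # mean V = 10  -> too dark
ok = np.full((4, 4, 3), 128, dtype=np.uint8)     # mean V = 128 -> normal exposure
print(is_normally_exposed(dark), is_normally_exposed(ok))  # False True
```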
In some embodiments, the electronic device may further acquire a gray image corresponding to each image to be processed and then determine, from the distribution of its pixels over gray values, whether the image to be processed is a normally exposed first image.
For example, after obtaining the gray image corresponding to an image to be processed, the electronic device may count the number of pixels at each gray value; if the pixels within a preset gray range account for more than fifty percent of the total number of pixels, the image to be processed is considered a normally exposed first image. For example, with a preset gray range of 35-245, if the pixels with gray values in 35-245 account for more than fifty percent of the image's total pixels, the image is considered a normally exposed first image.
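A minimal sketch of this grayscale-histogram variant, using the example 35-245 range and fifty-percent threshold from the text (the function name and defaults are illustrative):

```python
import numpy as np

def is_normal_by_histogram(gray: np.ndarray, lo: int = 35, hi: int = 245,
                           min_fraction: float = 0.5) -> bool:
    """gray: H x W uint8 grayscale image. The image is treated as normally exposed
    when more than min_fraction of its pixels fall inside the [lo, hi] gray range."""
    in_range = np.logical_and(gray >= lo, gray <= hi)
    return float(in_range.mean()) > min_fraction

# Three quarters of the pixels sit at gray 120, one quarter at 0 (deep shadow).
gray = np.full((8, 8), 120, dtype=np.uint8)
gray[:2, :] = 0
print(is_normal_by_histogram(gray))  # True
```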
In some embodiments, the electronic device may delete the abnormally exposed images to be processed from the set; or it may mark the abnormally exposed images; or it may move the abnormally exposed images into a separate album for the user to keep or delete.
120. Perform similarity detection on the first image set, determine similar images in the first image set, and remove the similar images to obtain a second image set.
In some embodiments, the electronic device may input the first image in the first image set into the similarity detection model, output the first image feature corresponding to the first image, and then determine the similar image in the first image set according to the first image feature corresponding to the first image.
Referring to fig. 2 together, fig. 2 is a schematic structural diagram of a similarity detection model according to an embodiment of the application.
The similarity detection model comprises a first processing unit, a second processing unit, a third processing unit and a fourth processing unit which are sequentially connected, wherein the second processing unit, the third processing unit and the fourth processing unit have the same structure.
The electronic device inputs the first image into a first processing unit for processing, and the first processing unit outputs a first feature vector of the first image. The first feature vector is then input into a second processing unit, which outputs a second feature vector. And inputting the second characteristic vector into a third processing unit, and outputting the third characteristic vector by the third processing unit. The third feature vector is input to a fourth processing unit, which outputs a fourth feature vector.
That is, in the similarity detection model, each processing unit processes the result output by the previous one; the fourth processing unit finally outputs the fourth feature vector, and the electronic device determines the fourth feature vector corresponding to each first image as that image's first image feature.
With continued reference to fig. 3, fig. 3 is a schematic structural diagram of a first processing unit according to an embodiment of the application.
As shown in fig. 3, the first processing unit includes a first convolution layer (Convolution) and an average pooling layer (AvgPool). Average pooling reduces the error caused by the increased variance of estimates over a limited neighborhood and retains more of the image's background information; it downsamples the overall feature map, contributing somewhat to reducing the parameter dimension while largely preserving the information passed through. It therefore reduces the amount of data the similarity detection model has to compute and speeds up subsequent detection.
When the first image is input into the first processing unit, it is first passed through the first convolution layer, which outputs the convolution features corresponding to the first image. These convolution features are then input into the average pooling layer, which outputs the first feature vector corresponding to the first image.
In some implementations, the first convolution layer may be a 5×5 convolution layer. The kernel size (ksize) of the first processing unit is 5; a ksize of 5 widens the field of view of the subsequent average pooling over the convolution features, ensuring that the first feature vector finally output by the first processing unit carries enough image features.
Conventionally, 24 input channels are often used; in this embodiment, the number of input channels of the first processing unit is halved to 12, which reduces the data the first processing unit has to handle and thereby improves computation speed. At the same time, 12 input channels still suffice for extracting the image features of the first image, so the detection accuracy of the similarity detection model is not affected.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a second processing unit according to an embodiment of the application.
In some embodiments, the second processing unit, the third processing unit, and the fourth processing unit have the same structure, so only the second processing unit is described here.
The second processing unit comprises a 1×1 convolution layer, a 3×3 depthwise convolution layer, and another 1×1 convolution layer, connected in sequence.
When the first feature vector output by the first processing unit is input into the second processing unit, it is first split along the channel dimension (channel split); one branch of the split data is then passed through the sequentially connected 1×1 convolution layer, 3×3 depthwise convolution layer, and 1×1 convolution layer, and the last 1×1 convolution layer outputs the corresponding processing result. The other branch of the split data is then combined with that result by concatenation (concat), channel shuffling (channel shuffle) is applied, and the second processing unit finally outputs the second feature vector.
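The channel split / concat / channel shuffle pipeline described here follows the ShuffleNet V2 building block. A hedged NumPy sketch of the channel-shuffle step alone (the surrounding convolutions are omitted; groups=2 matches the split into two branches):

```python
import numpy as np

def channel_shuffle(x: np.ndarray, groups: int = 2) -> np.ndarray:
    """x: N x C x H x W feature map. Interleave channels across `groups`
    so that information mixes between the split branches after concat."""
    n, c, h, w = x.shape
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)   # swap the group axis and per-group channel axis
    return x.reshape(n, c, h, w)

# With 4 channels and 2 groups, channel order 0,1,2,3 becomes 0,2,1,3.
x = np.arange(4).reshape(1, 4, 1, 1).astype(np.float32)
print(channel_shuffle(x).ravel().tolist())  # [0.0, 2.0, 1.0, 3.0]
```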
In the processing by the sequentially connected 1×1 convolution layer, 3×3 depthwise convolution layer, and 1×1 convolution layer: after the first 1×1 convolution layer extracts features from the channel-split data, the extracted features undergo batch normalization (Batch Normalization) and are then activated with the ReLU activation function, yielding the result output by the first 1×1 convolution layer.
The result output by the first 1×1 convolution layer is input to the 3×3 depthwise convolution layer, and the features it extracts are batch-normalized, yielding the result output by the 3×3 depthwise convolution layer.
The result output by the 3×3 depthwise convolution layer is input into the last 1×1 convolution layer; after the last 1×1 convolution layer extracts features from it, the extracted features undergo batch normalization (Batch Normalization) and ReLU activation, yielding the processing result output by the last 1×1 convolution layer.
In some embodiments, in the similarity detection model, the first feature vector output at the first processing unit is input into the second processing unit, and the second processing unit is provided with 58 input channels. The second feature vector output by the second processing unit is input to the third processing unit, and the third processing unit is provided with 116 input channels. The third feature vector output by the third processing unit is input to a fourth processing unit, and the fourth processing unit has 116 input channels.
Corresponding numbers of input channels are set for the first, second, third and fourth processing units respectively. In actual image processing, these channel counts are smaller than in the prior art, which reduces the data each processing unit has to handle and improves computation speed, while still sufficing for extracting the image features of the first images without affecting the detection accuracy of the similarity detection model.
In this way, the similarity detection model can rapidly detect the similarity of a plurality of first images during similarity detection.
In some embodiments, the electronic device may determine cosine similarity corresponding to each two first images according to the first image features, if the cosine similarity is greater than a preset similarity threshold, determine that the two first images are target first images, and finally determine similar images in the target first images.
For example, if the preset similarity threshold is 0.837 and the cosine similarity of two first images is 0.9, the cosine similarity exceeds the preset threshold of 0.837, and the two first images are determined to be target first images.
After the two target first images are determined, the electronic equipment can acquire the gray distribution condition of the pixels corresponding to each target first image, and the exposure condition corresponding to each target first image is determined according to the gray distribution condition.
For example, considering the gray distribution of the pixels of one target first image: if its pixels are distributed evenly over many gray values, rather than a large number of pixels clustering in a narrow gray range, the exposure of that target first image is considered good and its brightness distribution uniform.
That is, the electronic device can judge whether the exposure of a target first image is good according to whether the gray distribution of its pixels is uniform: the more uniform the gray distribution, the better the exposure of the target first image.
Of the two target first images, the one with worse exposure can be treated as the similar image; the electronic device then removes the similar image and keeps the target first image with good exposure, which can be determined to be a second image.
That is, after the electronic device performs similar image removal on all the first images, the electronic device may determine that the surviving first image is the second image. The plurality of second images forms a second image set.
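The pair-selection logic of this step might be sketched as follows, with 0.837 as the example threshold from the text. The histogram-uniformity measure (standard deviation of the normalized gray histogram, lower meaning flatter) is one plausible stand-in for the exposure comparison described above, not the patent's exact criterion; all names are illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def histogram_spread(gray: np.ndarray) -> float:
    """Std of the normalized gray histogram: lower means pixels are spread more
    evenly over gray levels, taken here as a proxy for good exposure."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    return float(np.std(hist / hist.sum()))

SIM_THRESHOLD = 0.837  # example threshold from the text

feat_a = np.array([1.0, 0.0, 1.0])   # fourth feature vector of image a
feat_b = np.array([0.9, 0.1, 1.0])   # fourth feature vector of image b
keep = None
if cosine_similarity(feat_a, feat_b) > SIM_THRESHOLD:
    # A similar pair: keep whichever image has the flatter gray histogram.
    img_a = np.full((8, 8), 200, dtype=np.uint8)          # pixels piled on one level
    img_b = np.arange(64, dtype=np.uint8).reshape(8, 8)   # pixels spread out
    keep = "a" if histogram_spread(img_a) < histogram_spread(img_b) else "b"
print(keep)  # b
```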
In some embodiments, there are different groups of similar images in the first image set: for example, a plurality of similar cat images may form one group and a plurality of similar person images another. Some images can then be deleted from each group of similar images, keeping only one image per group as a second image. For example, of three similar cat images, two may be deleted and the remaining cat image kept as a second image.
The electronic device may take the non-similar image in the first image as the second image and the image remaining in the similar image as the second image, thereby obtaining the second image set.
130. Perform definition detection on the second images in the second image set to obtain a third image set of blurred images.
In some embodiments, the electronic device may determine a gray level difference between each pixel and an adjacent pixel in the second image, and then determine a sharpness corresponding to the second image according to the gray level difference between each pixel and the adjacent pixel, so as to obtain a blurred third image.
Specifically, the electronic device multiplies the gray level difference between each pixel and the adjacent pixels to obtain a product result corresponding to each pixel in each second image, and then adds the product results corresponding to each pixel in the second image to obtain a definition value corresponding to the second image.
And finally, determining the definition corresponding to each second image according to the definition value, and determining the blurred third image according to the definition corresponding to the second image.
In some embodiments, the electronic device may normalize the sharpness values corresponding to each of the second images to obtain normalized sharpness values corresponding to each of the second images, then determine the sharpness corresponding to each of the second images according to the normalized sharpness values, and determine the blurred third image according to the sharpness corresponding to the second images.
For example, the electronic device may normalize the sharpness value corresponding to each of the second images to a range of 0 to 1, thereby obtaining a normalized sharpness value corresponding to each of the second images. And when the normalized definition value corresponding to a certain second image is lower than the preset normalized definition value, the second image is considered to be a blurred third image. The plurality of blurred third images constitutes a blurred third image set.
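The multiply-then-sum rule described above matches the SMD2 (gray-difference-product) focus measure. A sketch under that assumption, together with the min-max normalization to the 0-1 range used for thresholding (function names are illustrative):

```python
import numpy as np

def smd2_sharpness(gray: np.ndarray) -> float:
    """Sum over pixels of |f(x,y)-f(x+1,y)| * |f(x,y)-f(x,y+1)|, i.e. the product
    of each pixel's gray differences with its two neighbors, then summed."""
    g = gray.astype(np.float64)
    dx = np.abs(g[:-1, :-1] - g[1:, :-1])   # difference with the pixel below
    dy = np.abs(g[:-1, :-1] - g[:-1, 1:])   # difference with the pixel to the right
    return float((dx * dy).sum())

def normalize(values) -> np.ndarray:
    """Min-max normalize the sharpness values of a set to the 0-1 range."""
    v = np.asarray(values, dtype=np.float64)
    span = v.max() - v.min()
    return (v - v.min()) / span if span else np.zeros_like(v)

sharp = np.indices((8, 8)).sum(axis=0).astype(np.uint8) * 16  # strong gradients
flat = np.full((8, 8), 100, dtype=np.uint8)                   # no detail at all
scores = normalize([smd2_sharpness(sharp), smd2_sharpness(flat)])
print(scores.tolist())  # [1.0, 0.0] -> the flat image falls below any threshold
```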
In some embodiments, after the blurred third images are determined in the second image set, the second images other than the third images may be retained.
140. Determine, according to the image screening model, the images in the third image set that the user needs to keep.
In some embodiments, after the blurred third images are obtained, some of them may be images the user needs to keep, such as snapshots.
In some embodiments, the electronic device may input the third images in the third image set into the image screening model to output the images the user needs to retain. The electronic device may also actively delete the third images the user does not need to retain.
It should be noted that the image screening model may be generated from the user's historical habits in processing image sets. For example, over a historical period the user manually deleted some images in historical image sets and kept others. A basic network model corresponding to the image screening model can be trained on the images the user deleted and the images the user kept during that period, and the image screening model is obtained once training completes. The image screening model can then screen, from the third images, the images the user needs to keep according to the user's habits.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image screening model according to an embodiment of the application.
As shown in fig. 5, the image screening model includes a first processing layer, a first bottleneck processing layer, a second bottleneck processing layer, a third bottleneck processing layer, a fourth bottleneck processing layer, a second processing layer, an average pooling layer, and a third processing layer, connected in sequence.
The first Bottleneck processing layer, the second Bottleneck processing layer, the third Bottleneck processing layer and the fourth Bottleneck processing layer of the image screening model are all Bottleneck layers (Bottleneck) adopting a group normalization mode. The number of input channels corresponding to the first bottleneck processing layer is 8, the number of input channels corresponding to the second bottleneck processing layer is 12, the number of input channels corresponding to the third bottleneck processing layer is 16, and the number of input channels corresponding to the fourth bottleneck processing layer is 32.
The first processing layer, the second processing layer and the third processing layer of the image screening model are conv2d convolution layers, the number of input channels corresponding to the first processing layer is 16, the number of input channels corresponding to the second processing layer is 640, and the number of input channels of the third processing layer is 4.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a bottleneck layer according to an embodiment of the application.
As shown in fig. 6, the bottleneck layer includes a 1×1 convolution layer, a 3×3 depthwise convolution layer, and a 1×1 convolution layer connected in sequence. The features extracted by the first 1×1 convolution layer from the first image, the second image and the third image are subjected to group normalization (Group Normalization) and then to the Leaky ReLU activation function, finally obtaining the result output by the first 1×1 convolution layer.
The result output by the first 1×1 convolution layer is input to the 3×3 depthwise convolution layer, the features extracted by the 3×3 depthwise convolution layer are subjected to group normalization, and the Leaky ReLU activation function is then applied, finally obtaining the result output by the 3×3 depthwise convolution layer.
The result output by the 3×3 depthwise convolution layer is input to the last 1×1 convolution layer, and the features extracted by the 1×1 convolution layer are subjected to group normalization, finally obtaining the result output by the last 1×1 convolution layer.
The result output by the last 1×1 convolution layer and the input of the bottleneck layer together constitute the final output of the bottleneck layer.
In the embodiment of the application, adopting group normalization can reduce the amount of calculation the image screening model performs in actual processing tasks, thereby improving the processing speed of the image screening model. Adopting the Leaky ReLU activation function also gives the model better convergence. Meanwhile, the image screening model can adopt a cross-entropy loss function, which better represents the difference between the predicted result and the real result of the image screening model.
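The group normalization and Leaky ReLU operations described above can be sketched in plain Python. This is a minimal illustration, not the patent's implementation: it normalizes a flat list of per-channel values in equal groups (a real model would operate on C×H×W feature maps with a framework layer such as torch.nn.GroupNorm), and the 0.01 negative slope is an assumed default.

```python
import math

def group_norm(features, num_groups, eps=1e-5):
    # Split the channel values into equal groups and normalize each
    # group to zero mean / unit variance (toy stand-in for GroupNorm
    # over a C x H x W feature map).
    group_size = len(features) // num_groups
    out = []
    for g in range(num_groups):
        group = features[g * group_size:(g + 1) * group_size]
        mean = sum(group) / group_size
        var = sum((v - mean) ** 2 for v in group) / group_size
        out.extend((v - mean) / math.sqrt(var + eps) for v in group)
    return out

def leaky_relu(x, slope=0.01):
    # Leaky ReLU activation applied after the normalization.
    return x if x >= 0 else slope * x
```

Group normalization computes statistics per group of channels rather than per batch, which is one reason it remains stable at batch size 1 during on-device inference.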
In some embodiments, after the third images are input into the image screening model, the image screening model may perform screening and classification processing on the plurality of third images; if an image of a protected class exists among the third images, that image is regarded as an image that the user needs to retain. This prevents the deletion of images that the user needs to keep.
In some embodiments, after the images that the user needs to keep are obtained from the third image set, they may be saved together with the clear second images in the second image set, for example in one album, folder, or image set. In this way, the low-quality pictures in the image set to be processed are screened out, and the images that the user needs to retain are obtained.
In the embodiment of the application, the electronic equipment obtains a first non-exposure image set by performing exposure detection on the to-be-processed image in the to-be-processed image set; performing similarity detection on the first image set, determining similar images in the first image set, and removing the similar images to obtain a second image set; performing definition detection on the second image in the second image set to obtain a blurred third image set; and determining the images which the user needs to keep in the third image set according to the image screening model. In the embodiment of the application, the electronic equipment screens the low-quality pictures of the image set to be processed, and then sorts the image set to be processed according to the low-quality pictures, so that the time for manually sorting the image set by a user is saved, and the efficiency of sorting a plurality of images is improved.
Referring to fig. 7, fig. 7 is a second flow chart of an image processing method according to an embodiment of the application. The image processing method may include the steps of:
201. and acquiring an image set to be processed.
After the initial image set is acquired, the electronic device may perform size adjustment on each initial image in the initial image set, so as to obtain a plurality of images to be processed with preset sizes.
For example, if an initial image has an original size of 1920×1080 and the preset size is 64×64, the initial image may be reduced to a 64×64 image to be processed.
Reducing the initial image to an image to be processed lowers the computing power the electronic device needs to consume in subsequent processing and at the same time increases the processing speed. After the image to be processed corresponding to each initial image is obtained, the plurality of images to be processed can be determined as the image set to be processed.
It should be noted that the computing performance of different electronic devices differs; for example, electronic devices using processors of different performance levels compute at different speeds. Before resizing each initial image in the initial image set to obtain a plurality of images to be processed with the preset size, the electronic device can determine the preset size according to its own computing performance. For example, when the computing performance of the electronic device is high, the preset size to which the initial images are adjusted may be set to 400×300; when the computing performance is low, it may be set to 64×64.
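As a rough sketch of the resize step, the following downscales a row-major grayscale image by nearest-neighbour sampling. The source does not name an interpolation method, so nearest-neighbour is an assumption for illustration; production code would use an image library's resize (e.g. OpenCV or Pillow).

```python
def downscale_nearest(gray, width, height, new_w, new_h):
    # For each output pixel, pick the nearest source pixel by
    # integer scaling of the coordinates.
    out = []
    for y in range(new_h):
        sy = y * height // new_h
        for x in range(new_w):
            sx = x * width // new_w
            out.append(gray[sy * width + sx])
    return out
```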
202. And converting the RGB color space of the image to be processed into an HSV color space to obtain a V channel value corresponding to the image to be processed.
In some embodiments, each image to be processed in the set of images to be processed is an image in the RGB color space. To detect the brightness of each image, the RGB (Red, Green, Blue) color space of each image to be processed may be converted into the HSV (Hue, Saturation, Value) color space, where Hue is the hue, Saturation is the saturation, and Value is the brightness.
The electronic device then splits the HSV color space of each image to be processed into its channels to obtain the V-channel value corresponding to each image to be processed. The V-channel value corresponding to each image to be processed may be the mean of that image's V channel.
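Since the V channel of HSV is simply the maximum of the R, G and B components, the per-image V-channel mean can be sketched as follows (illustrative only; real code would use a conversion routine such as OpenCV's cvtColor):

```python
def v_channel_mean(pixels):
    # V (value/brightness) of an RGB pixel in HSV is max(R, G, B);
    # `pixels` is a list of 8-bit (R, G, B) tuples.
    values = [max(r, g, b) for (r, g, b) in pixels]
    return sum(values) / len(values)
```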
203. And determining a non-exposed first image according to the V-channel value.
In some embodiments, determining whether a V-channel value corresponding to each image to be processed is within a preset range, and if the V-channel value is within the preset range, determining the image to be processed corresponding to the V-channel value as a non-exposed first image.
The preset range can be set manually in advance or set by the electronic device according to actual needs. For example, the preset range may be 35-245: if the V-channel value is less than 35, the image to be processed is considered too dark; if the V-channel value is greater than 245, the image is considered too bright. In both cases the exposure of the image is considered abnormal. If the V-channel value of the image to be processed lies within the range of 35-245, the image is considered a non-exposed first image.
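The range check itself is then a one-liner; the 35-245 bounds below follow the example in the text and would in practice be tuned per device:

```python
def is_well_exposed(v_mean, low=35, high=245):
    # A to-be-processed image is kept as a non-exposed "first image"
    # when its mean V value lies inside the preset range.
    return low <= v_mean <= high
```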
204. And inputting the first image into a similarity detection model, and outputting first image features corresponding to the first image.
In some embodiments, the similarity detection model is trained by a basic model, as shown in fig. 8, and fig. 8 is a schematic structural diagram of the basic model according to an embodiment of the present application.
The basic model comprises a first processing unit, a second processing unit, a third processing unit, a fourth processing unit, a global pooling layer and a full connection layer which are sequentially connected.
The electronic device can train the base model using the ImageNet dataset, which can be a dataset made up of images of a uniform size, for example 400×300 images.
The dataset is then input into the basic model for training; cosine similarity can be adopted as the loss function during training, and the basic model is trained continuously until it converges.
In some embodiments, after the basic model training converges, the global pooling layer and the full connection layer may be cut off, and model parameters corresponding to the first processing unit, the second processing unit, the third processing unit, and the fourth processing unit in the trained basic model are reserved at the same time, so as to obtain a similarity detection model formed by the first processing unit, the second processing unit, the third processing unit, and the fourth processing unit which are sequentially connected.
Removing the global pooling layer and the full connection layer of the trained basic model and directly taking the fourth feature vector output by the fourth processing unit as the first image feature corresponding to the first image further reduces the amount of data the similarity detection model needs to process, thereby improving the processing speed of the similarity detection model. Meanwhile, the detection accuracy of the similarity detection model can still be ensured.
205. And determining similar images in the first image set according to the first image features corresponding to the first images, and removing the similar images to obtain a second image set.
In some embodiments, the electronic device may determine cosine similarity corresponding to each two first images according to the first image features, if the cosine similarity is greater than a preset similarity threshold, determine that the two first images are target first images, and finally determine similar images in the target first images.
For example, if the preset similarity threshold is set to 0.837 and the cosine similarity corresponding to two first images is 0.9, the cosine similarity is greater than the preset similarity threshold of 0.837, and the two first images are determined to be target first images.
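The pairwise comparison can be sketched as below; the 0.837 threshold is the example value from the text:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_target_pair(feat_a, feat_b, threshold=0.837):
    # Two first images whose feature similarity exceeds the preset
    # threshold are flagged as "target first images".
    return cosine_similarity(feat_a, feat_b) > threshold
```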
After the two target first images are determined, the electronic equipment can acquire the gray distribution condition of the pixels corresponding to each target first image, and the exposure condition corresponding to each target first image is determined according to the gray distribution condition.
For example, for the gray-scale distribution of the pixels of one target first image, if the pixels are distributed evenly over many gray values, rather than a large number of pixels clustering within a narrow gray range, the exposure of that target first image is considered good and its brightness distribution uniform.
That is, the electronic device can judge whether the exposure of a target first image is good according to whether the gray distribution of its pixels is uniform: the more uniform the gray distribution of the pixels, the better the exposure of the target first image.
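The text does not specify how "uniformity" of the gray distribution is measured; one common choice, shown here purely as an assumed illustration, is the normalized entropy of the gray-level histogram (near 1 when pixels spread evenly over gray levels, near 0 when they cluster):

```python
import math

def gray_histogram_uniformity(gray_pixels, bins=16):
    # Build a gray-level histogram and return its entropy divided by
    # the maximum possible entropy log(bins), giving a 0..1 score.
    hist = [0] * bins
    for g in gray_pixels:
        hist[min(g * bins // 256, bins - 1)] += 1
    total = len(gray_pixels)
    entropy = 0.0
    for count in hist:
        if count:
            p = count / total
            entropy -= p * math.log(p)
    return entropy / math.log(bins)
```

Of the two target first images, the one with the lower score would then be treated as the similar image to remove.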
In the two target first images, the target first image with poor exposure can be used as a similar image, then the electronic equipment removes the similar image, the target first image with good exposure is reserved, and the target first image with good exposure can be determined as a second image.
That is, after the electronic device performs similar image removal on all the first images, the electronic device may determine that the surviving first image is the second image. The plurality of second images forms a second image set.
206. A gray scale difference between each pixel and an adjacent pixel in the second image is determined.
In some embodiments, in a second image, a target pixel in the second image may be determined, then the gray value of the target pixel and the gray value of a pixel adjacent to the target pixel are determined, and then the gray difference between the target pixel and the adjacent pixel is obtained.
In this way, the gray scale difference between each pixel and the adjacent pixels in each second image can be determined.
207. And determining the definition corresponding to the second image according to the gray level difference between each pixel and the adjacent pixels, and determining the blurred third image according to the definition corresponding to the second image.
In some embodiments, the electronic device multiplies the gray differences between each pixel and the adjacent pixels to obtain a product result corresponding to each pixel in each second image, and then adds the product results corresponding to each pixel in each second image to obtain a sharpness value corresponding to each second image.
And finally, determining the definition corresponding to each second image according to the definition value so as to obtain a third image.
The following formula is used for calculation:
I_k(x, y) = [f_k(x, y) - f_k(x+1, y)] * [f_k(x, y) - f_k(x, y+1)]

wherein I_k(x, y) is the product result, f_k(x, y) is the gray value of the current pixel (x, y), f_k(x+1, y) is the gray value of the adjacent pixel (x+1, y), and f_k(x, y+1) is the gray value of the adjacent pixel (x, y+1).
A product result corresponding to each pixel of the second image is obtained in this way. The electronic device then sums all the product results to obtain the sharpness value corresponding to the second image.
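Putting the formula and the summation together, the sharpness value of one second image can be computed as follows (a direct transcription of the formula above; pixels in the last row or column, which lack a right or bottom neighbour, are simply skipped here, since the source does not say how the border is handled):

```python
def sharpness_value(gray, width, height):
    # I_k(x, y) = [f(x, y) - f(x+1, y)] * [f(x, y) - f(x, y+1)],
    # summed over all pixels that have both a right and a bottom
    # neighbour. `gray` is a row-major list of gray values.
    def f(x, y):
        return gray[y * width + x]
    total = 0
    for y in range(height - 1):
        for x in range(width - 1):
            total += (f(x, y) - f(x + 1, y)) * (f(x, y) - f(x, y + 1))
    return total
```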
In some embodiments, the electronic device may normalize the sharpness values corresponding to each of the second images to obtain normalized sharpness values corresponding to each of the second images, then determine the sharpness corresponding to each of the second images according to the normalized sharpness values, and determine the blurred third image according to the sharpness corresponding to each of the second images.
For example, the electronic device may normalize the sharpness value corresponding to each of the second images to a range of 0 to 1, thereby obtaining a normalized sharpness value corresponding to each of the second images. Specifically, a nonlinear activation function sigmoid can be adopted for normalization processing.
And when the normalized sharpness value of a certain second image is lower than the preset normalized sharpness value, the second image is considered to be a blurred third image.
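A minimal sketch of the sigmoid normalization and blur decision follows. The `scale` divisor and the 0.6 threshold are assumptions for illustration; the source only says a sigmoid is used and that a "preset normalized sharpness value" serves as the cut-off.

```python
import math

def normalized_sharpness(value, scale=1000.0):
    # Squash a raw sharpness sum into (0, 1) with a sigmoid; the
    # pre-division by `scale` is an assumed step to keep the input
    # of exp() in a reasonable range.
    return 1.0 / (1.0 + math.exp(-value / scale))

def is_blurred(value, threshold=0.6):
    # Second images whose normalized sharpness falls below the preset
    # normalized sharpness value are treated as blurred third images.
    return normalized_sharpness(value) < threshold
```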
208. And inputting the third image into an image screening model, and outputting the image which needs to be reserved by the user.
In some embodiments, the electronic device may input the first image, the second image, and the third image into the image screening model and perform classification processing through the image screening model, so as to determine the images that the user needs to retain.
In some implementations, the electronic device can group the images that the user needs to retain and the clear second images into a target image set.
When the user opens the target image set, the user may retain or delete photos in it; the image screening model can then be further trained according to the user's operation habits to obtain a retrained image screening model.
When the electronic device sorts an image set next time, the retrained image screening model can classify the images better, thereby determining the images that need to be retained.
In the embodiment of the application, low-quality image detection is performed on the image set to be processed: the first images with abnormal exposure are removed, the similar images are removed, the blurred third images that do not need to be retained are removed, and the remaining images are kept. This saves the user from manually sorting the image set to be processed and improves the efficiency of sorting multiple images.
In the embodiment of the application, the electronic equipment converts the RGB color space of each image to be processed into the HSV color space by acquiring the image set to be processed so as to obtain the V channel value corresponding to each image to be processed. And determining a non-exposed first image according to the V-channel value.
And then inputting the first image into a similarity detection model, and outputting first image features corresponding to the first image. And determining similar images in the first image set according to the first image features corresponding to the first images, and removing the similar images to obtain a second image set.
The gray scale difference between each pixel and the adjacent pixels in the second image is then determined. And determining the definition corresponding to the second image according to the gray level difference between each pixel and the adjacent pixels, and determining the blurred third image according to the definition corresponding to the second image.
And finally, inputting the third image into an image screening model, and outputting the image which needs to be reserved by the user. Therefore, the arrangement of a plurality of images in the image set is realized, and the time for manually arranging the image set by a user is saved.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the application. The image processing apparatus 300 includes:
the first detection module 310 is configured to perform exposure detection on the to-be-processed image in the to-be-processed image set, so as to obtain a non-exposed first image set.
Before performing exposure detection on the to-be-processed images in the to-be-processed image set to obtain a non-exposed first image, the first detection module 310 is further configured to obtain an initial image set, and perform size adjustment on each initial image in the initial image set to obtain a plurality of to-be-processed images with preset sizes; and generating a to-be-processed image set according to the to-be-processed image.
The first detection module 310 is further configured to convert an RGB color space of the image to be processed into an HSV color space, so as to obtain a V channel value corresponding to the image to be processed; and determining a non-exposure first image in the images to be processed according to the V-channel value.
The first detection module 310 is further configured to determine whether a V-channel value corresponding to the image to be processed is within a preset range; and if the V channel value is in the preset range, determining the image to be processed corresponding to the V channel value as a first image.
The second detection module 320 is configured to perform similarity detection on the first image set, determine similar images in the first image set, and remove the similar images to obtain a second image set.
The second detection module 320 is further configured to input the first image set into a similarity detection model, and output a similar image.
The second detection module 320 is further configured to input a first image in the first image set into the first processing units, each next processing unit processes a result output by the previous processing unit, and the fourth processing unit outputs a first image feature corresponding to the first image; and determining similar images in the first image set according to the first image features.
The second detection module 320 is further configured to input the first image into the first convolution layer, and output a convolution feature corresponding to the first image; and inputting the convolution characteristic corresponding to the first image into an average pooling layer, and outputting a first characteristic vector corresponding to the first image.
The second detection module 320 is further configured to determine cosine similarity corresponding to each two first images according to the first image features; if the cosine similarity is larger than a preset similarity threshold, determining the two first images as target first images; and determining similar images in the first image of the target.
And a third detection module 330, configured to perform sharpness detection on the second image in the second image set, so as to obtain a blurred third image set.
The third detection module 330 is further configured to determine a gray level difference between each pixel and an adjacent pixel in the second image; and determining the definition corresponding to the second image according to the gray level difference between each pixel and the adjacent pixels, and determining the third image according to the definition corresponding to the second image.
The third detection module 330 is further configured to multiply the gray differences between each pixel and the adjacent pixels to obtain a product result corresponding to each pixel in the second image; adding the product results corresponding to each pixel in the second image to obtain a definition value corresponding to the second image; and determining the definition corresponding to the second image according to the definition value, and determining a third image in the third image set according to the definition corresponding to the second image.
The third detection module 330 is further configured to normalize the sharpness values corresponding to each of the second images to obtain normalized sharpness values corresponding to each of the second images; and determining the second image with the normalized definition value lower than the preset normalized definition value as a third image.
And the processing module 340 is configured to determine an image that needs to be retained by the user from the third image.
The processing module 340 is further configured to input the third image into the image filtering model, and output an image that the user needs to keep.
The processing module 340 is further configured to input a third image into the first processing layer, each next processing layer processes a result output by the previous processing layer, and the third processing layer outputs a screening result; and determining the image according to the screening result.
In the embodiment of the application, the electronic equipment obtains a first non-exposure image set by performing exposure detection on the to-be-processed image in the to-be-processed image set; performing similarity detection on the first image set, determining similar images in the first image set, and removing the similar images to obtain a second image set; performing definition detection on the second image in the second image set to obtain a blurred third image set; and determining the images which the user needs to keep in the third image set according to the image screening model. In the embodiment of the application, the electronic equipment screens the low-quality pictures of the image set to be processed, and then sorts the image set to be processed according to the low-quality pictures, so that the time for manually sorting the image set by a user is saved, and the efficiency of sorting a plurality of images is improved.
Accordingly, an embodiment of the present application further provides an electronic device, as shown in fig. 10, where the electronic device may include a memory 401 including one or more computer readable storage media, an input unit 402, a display unit 403, a sensor 404, a processor 405 including one or more processing cores, and a power supply 406. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 10 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. Wherein:
The memory 401 may be used to store software programs and modules, and the processor 405 executes various functional applications and data processing by executing the software programs and modules stored in the memory 401. The memory 401 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required for at least one function, and the like; the storage data area may store data created according to the use of the electronic device (such as audio data, phonebooks, etc.), and the like. In addition, memory 401 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 401 may further include a memory controller to provide access to the memory 401 by the processor 405 and the input unit 402.
The input unit 402 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, in one particular embodiment, the input unit 402 may include a touch-sensitive surface, as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations thereon or thereabout by a user (e.g., operations thereon or thereabout by a user using any suitable object or accessory such as a finger, stylus, etc.), and actuate the corresponding connection means according to a predetermined program. Alternatively, the touch-sensitive surface may comprise two parts, a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into touch point coordinates, which are then sent to the processor 405, and can receive commands from the processor 405 and execute them. In addition, touch sensitive surfaces may be implemented in a variety of types, such as resistive, capacitive, infrared, and surface acoustic waves. In addition to the touch-sensitive surface, the input unit 402 may also include other input devices. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc.
The display unit 403 may be used to display information entered by a user or provided to a user as well as various graphical user interfaces of the electronic device, which may be composed of graphics, text, icons, video and any combination thereof. The display unit 403 may include a display panel, which may be optionally configured in the form of a liquid crystal display (LCD, liquid Crystal Display), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay a display panel, and upon detection of a touch operation thereon or thereabout, the touch-sensitive surface is passed to the processor 405 to determine the type of touch event, and the processor 405 then provides a corresponding visual output on the display panel based on the type of touch event. Although in fig. 10 the touch sensitive surface and the display panel are implemented as two separate components for input and output functions, in some embodiments the touch sensitive surface may be integrated with the display panel to implement the input and output functions.
The electronic device may also include at least one sensor 404, such as a light sensor, a motion sensor, and other sensors. In particular, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or backlight when the electronic device is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and the direction when the device is stationary, and the device can be used for recognizing the gesture of the electronic equipment (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured with the electronic device are not described in detail herein.
The processor 405 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 401, and calling data stored in the memory 401, thereby performing overall monitoring of the electronic device. Optionally, the processor 405 may include one or more processing cores; preferably, the processor 405 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, etc., and the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 405.
The electronic device also includes a power supply 406 (e.g., a battery) for powering the various components, which may be logically connected to the processor 405 via a power management system so as to perform functions such as managing charge, discharge, and power consumption via the power management system. The power supply 406 may also include one or more of any components, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the electronic device may further include a camera, a bluetooth module, etc., which will not be described herein. In particular, in the present embodiment, the processor 405 in the electronic device loads the computer program stored in the memory 401, and the processor 405 implements various functions by loading the computer program:
performing exposure detection on the images to be processed in the image set to be processed to obtain a non-exposed first image set;
performing similarity detection on the first image set, determining similar images in the first image set, and removing the similar images to obtain a second image set;
performing definition detection on the second image in the second image set to obtain a blurred third image set;
and determining the images which the user needs to keep in the third image set according to the image screening model.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer readable storage medium having stored therein a plurality of instructions capable of being loaded by a processor to perform the steps of any one of the image processing methods provided by the embodiments of the present application. For example, the instructions may perform the steps of:
performing exposure detection on the images to be processed in the image set to be processed, to obtain a properly exposed first image set;
performing similarity detection on the first image set, determining similar images in the first image set, and removing the similar images to obtain a second image set;
performing sharpness detection on the second images in the second image set, to obtain a third image set of blurred images;
and determining, according to an image screening model, the images in the third image set that the user needs to keep.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the storage medium may include: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disk, and the like.
The instructions stored in the storage medium may perform the steps in any image processing method provided by the embodiments of the present application, and thus can achieve the beneficial effects of any image processing method provided by the embodiments of the present application, which are detailed in the previous embodiments and are not described herein.
The foregoing has described in detail the image processing method, apparatus, electronic device, and storage medium provided by the embodiments of the present application, and specific examples have been applied herein to illustrate the principles and embodiments of the present application; the foregoing examples are only used to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make variations in the specific embodiments and the scope of application in light of the ideas of the present application. In view of the above, the contents of this description should not be construed as limiting the present application.

Claims (15)

1. An image processing method, comprising:
performing exposure detection on the images to be processed in the image set to be processed, to obtain a properly exposed first image set;
performing similarity detection on the first image set, determining similar images in the first image set, and removing the similar images to obtain a second image set;
performing sharpness detection on the second images in the second image set, to obtain a third image set;
and determining, according to an image screening model, the images in the third image set that the user needs to keep.
2. The image processing method according to claim 1, wherein the performing similarity detection on the first image set to determine similar images in the first image set includes:
and inputting the first image set into a similarity detection model, and outputting the similar images.
3. The image processing method according to claim 2, wherein the similarity detection model includes a first processing unit, a second processing unit, a third processing unit, and a fourth processing unit that are sequentially connected, the second processing unit, the third processing unit, and the fourth processing unit having the same structure;
The inputting the first image set into a similarity detection model, outputting the similar image, including:
inputting a first image in the first image set into the first processing unit, each subsequent processing unit processing the result output by the previous processing unit, and the fourth processing unit outputting a first image feature corresponding to the first image;
and determining the similar images in the first image set according to the first image features.
4. The image processing method according to claim 3, wherein the first processing unit includes a first convolution layer and an average pooling layer;
the inputting a first image of the first image set into the first processing unit includes:
inputting the first image into the first convolution layer, and outputting a convolution feature corresponding to the first image;
and inputting the convolution feature corresponding to the first image into the average pooling layer, and outputting a first feature vector corresponding to the first image.
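For illustration, the first processing unit of claim 4 (a convolution layer followed by an average pooling layer) can be sketched in plain Python; the valid-padding convolution, stride, and 2×2 pooling size below are illustrative assumptions, not values taken from the patent:

```python
def conv2d(image, kernel):
    # Valid convolution (no padding, stride 1) over a 2D list of pixel values.
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            row.append(sum(image[y + i][x + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def avg_pool(feature, size=2):
    # Non-overlapping average pooling over a 2D feature map.
    out = []
    for y in range(0, len(feature) - size + 1, size):
        row = []
        for x in range(0, len(feature[0]) - size + 1, size):
            vals = [feature[y + i][x + j] for i in range(size) for j in range(size)]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```

A real implementation would use a deep-learning framework; the sketch only shows the two operations the claim names, in the order the claim gives them.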
5. The image processing method according to claim 2, wherein the performing similarity detection on the first image set to determine similar images in the first image set includes:
determining a cosine similarity for each pair of first images according to the first image features;
if the cosine similarity is greater than a preset similarity threshold, determining the two first images as target first images;
and determining the similar image among the target first images.
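The pairwise cosine-similarity check of claim 5 can be sketched as follows; the 0.9 threshold is an illustrative placeholder for the patent's "preset similarity threshold":

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity of two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def find_similar_pairs(features, threshold=0.9):
    # Compare every pair of first-image features; pairs above the
    # threshold are the "target first images" of claim 5.
    pairs = []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if cosine_similarity(features[i], features[j]) > threshold:
                pairs.append((i, j))
    return pairs
```

From each flagged pair, one image would then be kept and the other treated as the similar image to remove.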
6. The image processing method according to claim 1, wherein the performing sharpness detection on the second images in the second image set to obtain a third image set of blurred images includes:
determining a gray-level difference between each pixel and its adjacent pixels in the second image;
and determining the sharpness corresponding to the second image according to the gray-level differences between each pixel and its adjacent pixels, and determining the third images in the third image set according to the sharpness corresponding to the second image.
7. The method according to claim 6, wherein the determining the sharpness corresponding to the second image according to the gray-level differences between each pixel and its adjacent pixels, and determining the third images in the third image set according to the sharpness corresponding to the second image, includes:
multiplying the gray-level differences between each pixel and its adjacent pixels to obtain a product result corresponding to each pixel in the second image;
adding the product results corresponding to the pixels in the second image to obtain a sharpness value corresponding to the second image;
and determining the sharpness corresponding to the second image according to the sharpness value, and determining the third image according to the sharpness corresponding to the second image.
8. The image processing method according to claim 7, wherein the determining the sharpness corresponding to the second image according to the sharpness value, and determining the third image according to the sharpness corresponding to the second image, includes:
normalizing the sharpness value corresponding to each second image to obtain a normalized sharpness value corresponding to each second image;
and determining a second image whose normalized sharpness value is lower than a preset normalized sharpness value as the third image.
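Claims 6–8 describe a focus measure in which, for each pixel, the gray-level differences to its horizontal and vertical neighbors are multiplied and the products summed over the image (this matches the SMD2 focus measure); min–max normalized values below a preset threshold then mark an image as blurred. A sketch under those assumptions, with the 0.5 threshold as an illustrative placeholder:

```python
def smd2_sharpness(gray):
    # gray: 2D list of gray levels. For each pixel, multiply the absolute
    # differences to the right and lower neighbors and sum the products.
    h, w = len(gray), len(gray[0])
    total = 0
    for y in range(h - 1):
        for x in range(w - 1):
            dx = abs(gray[y][x] - gray[y][x + 1])
            dy = abs(gray[y][x] - gray[y + 1][x])
            total += dx * dy
    return total

def blurred_images(sharpness_values, threshold=0.5):
    # Min-max normalize the per-image sharpness values and return the
    # indices of images below the preset normalized threshold (claim 8).
    lo, hi = min(sharpness_values), max(sharpness_values)
    span = (hi - lo) or 1
    normed = [(v - lo) / span for v in sharpness_values]
    return [i for i, n in enumerate(normed) if n < threshold]
```

A high-contrast checkerboard scores high, a flat patch scores zero, so blurred (low-gradient) images sort to the bottom after normalization.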
9. The image processing method according to claim 1, wherein the performing exposure detection on the images to be processed in the image set to be processed to obtain a properly exposed first image set includes:
converting the RGB color space of the image to be processed into the HSV color space, to obtain a V-channel value corresponding to the image to be processed;
and determining the properly exposed first image set among the images to be processed according to the V-channel value.
10. The image processing method according to claim 9, wherein the determining the properly exposed first image set among the images to be processed according to the V-channel value includes:
determining whether the V-channel value corresponding to the image to be processed is within a preset range;
and if the V-channel value is within the preset range, determining the image to be processed corresponding to the V-channel value as a properly exposed first image.
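The V channel of HSV equals max(R, G, B), so the exposure check of claims 9–10 can be sketched with the standard library's colorsys module; the [0.15, 0.85] range is an illustrative placeholder for the patent's "preset range", and using the mean V value per image is likewise an assumption:

```python
import colorsys

def mean_v_channel(pixels):
    # Mean HSV V-channel value of an image given as (r, g, b) tuples in [0, 255].
    vs = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[2] for r, g, b in pixels]
    return sum(vs) / len(vs)

def is_properly_exposed(pixels, lo=0.15, hi=0.85):
    # Keep the image only if its V-channel value falls inside the preset range;
    # values near 0 suggest underexposure, values near 1 suggest overexposure.
    return lo <= mean_v_channel(pixels) <= hi
```

Images passing this check would form the properly exposed first image set handed to the similarity stage.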
11. The image processing method according to claim 1, wherein the determining, according to the image screening model, the images in the third image set that the user needs to keep includes:
and inputting the third image set into the image screening model, and outputting the images that the user needs to keep.
12. The image processing method according to claim 11, wherein the image screening model includes a first processing layer, a first bottleneck processing layer, a second bottleneck processing layer, a third bottleneck processing layer, a fourth bottleneck processing layer, a second processing layer, an average pooling layer, and a third processing layer that are sequentially connected, wherein the first bottleneck processing layer, the second bottleneck processing layer, the third bottleneck processing layer, and the fourth bottleneck processing layer are all bottleneck layers adopting group normalization;
the inputting the third image set into an image screening model and outputting the images that the user needs to keep includes:
inputting a third image in the third image set into the first processing layer, each subsequent processing layer processing the result output by the previous processing layer, and the third processing layer outputting a screening result;
and determining the images that the user needs to keep according to the screening result.
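The distinguishing detail of claim 12 is that the bottleneck layers use group normalization: the channels are split into groups, and each group is normalized by its own mean and variance rather than by batch statistics. A minimal single-position sketch (the group count and epsilon are illustrative; a real model would use a framework layer such as a GroupNorm module):

```python
def group_norm(features, num_groups, eps=1e-5):
    # features: flat list of per-channel values at one spatial position,
    # a toy stand-in for the group-normalized bottleneck layers of claim 12.
    n = len(features)
    gsize = n // num_groups
    out = []
    for g in range(num_groups):
        grp = features[g * gsize:(g + 1) * gsize]
        mean = sum(grp) / gsize
        var = sum((v - mean) ** 2 for v in grp) / gsize
        # Normalize each group by its own mean and variance.
        out.extend((v - mean) / (var + eps) ** 0.5 for v in grp)
    return out
```

Because each group carries its own statistics, channels with very different magnitudes end up on a comparable scale, independent of batch size.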
13. An image processing apparatus, comprising:
a first detection module, configured to perform exposure detection on the images to be processed in the image set to be processed, to obtain a properly exposed first image set;
a second detection module, configured to perform similarity detection on the first image set, determine similar images in the first image set, and remove the similar images to obtain a second image set;
a third detection module, configured to perform sharpness detection on the second images in the second image set, to obtain a third image set of blurred images;
and a processing module, configured to determine, according to an image screening model, the images in the third image set that the user needs to keep.
14. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the image processing method according to any of claims 1-12 when the program is executed.
15. A computer readable storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor for performing the steps in the image processing method according to any of claims 1-12.
CN202210237251.6A 2022-03-11 2022-03-11 Image processing method, device, electronic equipment and storage medium Pending CN116797954A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210237251.6A CN116797954A (en) 2022-03-11 2022-03-11 Image processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210237251.6A CN116797954A (en) 2022-03-11 2022-03-11 Image processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116797954A true CN116797954A (en) 2023-09-22

Family

ID=88046552

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210237251.6A Pending CN116797954A (en) 2022-03-11 2022-03-11 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116797954A (en)

Legal Events

Date Code Title Description
PB01 Publication