CN111191606A - Image processing method and related product

Image processing method and related product

Info

Publication number
CN111191606A
Authority
CN
China
Prior art keywords
image
target
raindrop
preset
model
Prior art date
Legal status
Pending
Application number
CN201911418586.2A
Other languages
Chinese (zh)
Inventor
孙哲
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911418586.2A priority Critical patent/CN111191606A/en
Publication of CN111191606A publication Critical patent/CN111191606A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses an image processing method and a related product. The method includes: acquiring a first image, the first image including at least raindrops; performing target detection on the first image to obtain a first target area where a first target is located and a first target type; and, if the first target type is a preset type, performing raindrop removal processing on the first target area to obtain a second image, so that raindrop removal processing is performed on the target when the target appears.

Description

Image processing method and related product
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and a related product.
Background
In the technical field of video or image shooting, in a rainy environment, the shot video or image may contain raindrops that occlude the shooting target, so raindrop removal processing needs to be performed on the image. In addition, in the process of shooting a video, if the camera is continuously turned on, performing raindrop removal processing on the entire video stream involves a large amount of computation, which may result in a heavy workload.
Disclosure of Invention
The embodiment of the application provides an image processing method and a related product, which can improve raindrop removal processing efficiency and subsequent target recognition accuracy.
In a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
acquiring a first image, wherein the first image at least comprises raindrops;
performing target detection on the first image to obtain a first target area where a first target is located and a first target type;
and if the first target type is a preset type, carrying out raindrop removing treatment on the first target area to obtain a second image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an acquisition unit configured to acquire a first image including at least raindrops;
the detection unit is used for carrying out target detection on the first image to obtain a first target area where a first target is located and a first target type;
and the processing unit is used for carrying out raindrop removing processing on the first target area under the condition that the first target type is a preset type, to obtain a second image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
it can be seen that, in the image processing method and related product provided in the embodiments of the present application, a first image including at least raindrops is acquired; target detection is performed on the first image to obtain a first target area where the first target is located and a first target type; and, if the first target type is a preset type, raindrop removal processing is performed on the first target area to obtain a second image, so that raindrop removal processing is performed on the target when the target appears.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from these drawings by those skilled in the art without creative effort.
Fig. 1A is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 1B is a schematic flowchart of an image processing method according to an embodiment of the present application;
fig. 1C is a schematic illustration of determining a target movement trajectory of a first target according to an embodiment of the present application;
fig. 1D is a schematic illustration showing a first image being subjected to raindrop removing processing according to an embodiment of the present disclosure;
fig. 1E is a schematic illustration of another illustration for performing raindrop removal processing on a first image according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
FIG. 3 is a schematic flowchart of another image processing method provided in the embodiments of the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device related to the embodiments of the present application may include various handheld devices, vehicle-mounted devices, wearable devices (smart watches, smart bracelets, wireless headsets, augmented reality/virtual reality devices, smart glasses), computing devices or other processing devices connected to wireless modems, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), and the like, which have wireless communication functions. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
The following describes embodiments of the present application in detail.
Referring to fig. 1A, fig. 1A is a schematic structural diagram of an electronic device disclosed in an embodiment of the present application. The electronic device 100 includes a storage and processing circuit 110, and a sensor 170 connected to the storage and processing circuit 110, where:
the electronic device 100 may include control circuitry, which may include storage and processing circuitry 110. The storage and processing circuitry 110 may include memory, such as hard drive memory, non-volatile memory (e.g., flash memory or other electronically programmable read-only memory used to form a solid state drive, etc.), volatile memory (e.g., static or dynamic random access memory, etc.), and so on, and embodiments of the present application are not limited thereto. Processing circuitry in storage and processing circuitry 110 may be used to control the operation of electronic device 100. The processing circuitry may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuitry 110 may be used to run software in the electronic device 100, such as an Internet browsing application, a Voice Over Internet Protocol (VOIP) telephone call application, an email application, a media playing application, operating system functions, and so forth. Such software may be used to perform control operations such as, for example, camera-based image capture, ambient light measurement based on an ambient light sensor, proximity sensor measurement based on a proximity sensor, information display functionality based on status indicators such as status indicator lights of light emitting diodes, touch event detection based on a touch sensor, functionality associated with displaying information on multiple (e.g., layered) display screens, operations associated with performing wireless communication functionality, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button press event data, and other functions in the electronic device 100, to name a few.
The electronic device 100 may include input-output circuitry 150. The input-output circuit 150 may be used to enable the electronic device 100 to input and output data, i.e., to allow the electronic device 100 to receive data from an external device and also to allow the electronic device 100 to output data from the electronic device 100 to the external device. The input-output circuit 150 may further include a sensor 170. Sensor 170 may include the ultrasonic fingerprint identification module, may also include ambient light sensor, proximity sensor based on light and electric capacity, touch sensor (for example, based on light touch sensor and/or capacitanc touch sensor, wherein, touch sensor may be a part of touch display screen, also can regard as a touch sensor structure independent utility), acceleration sensor, and other sensors etc., the ultrasonic fingerprint identification module can be integrated in the screen below, or, the ultrasonic fingerprint identification module can set up in electronic equipment's side or back, do not do the restriction here, this ultrasonic fingerprint identification module can be used to gather the fingerprint image.
The sensor 170 may include an infrared (IR) camera and a visible-light camera. When the IR camera takes a picture, the pupil reflects infrared light, so the IR camera captures a pupil image more accurately than an RGB camera; the visible-light camera requires more subsequent processing for pupil detection, and its calculation precision, accuracy, and generality are better than those of the IR camera, but its computation amount is larger.
Input-output circuit 150 may also include one or more display screens, such as display screen 130. The display 130 may include one or a combination of liquid crystal display, organic light emitting diode display, electronic ink display, plasma display, display using other display technologies. The display screen 130 may include an array of touch sensors (i.e., the display screen 130 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by a transparent touch sensor electrode (e.g., an Indium Tin Oxide (ITO) electrode) array, or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure sensitive touch, resistive touch, optical touch, and the like, and the embodiments of the present application are not limited thereto.
The electronic device 100 may also include an audio component 140. The audio component 140 may be used to provide audio input and output functionality for the electronic device 100. The audio components 140 in the electronic device 100 may include a speaker, a microphone, a buzzer, a tone generator, and other components for generating and detecting sound.
The communication circuit 120 may be used to provide the electronic device 100 with the capability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio frequency signals and/or optical signals. The wireless communication circuitry in communication circuitry 120 may include radio-frequency transceiver circuitry, power amplifier circuitry, low noise amplifiers, switches, filters, and antennas. For example, the wireless Communication circuitry in Communication circuitry 120 may include circuitry to support Near Field Communication (NFC) by transmitting and receiving Near Field coupled electromagnetic signals. For example, the communication circuit 120 may include a near field communication antenna and a near field communication transceiver. The communications circuitry 120 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuitry and antenna, and so forth.
The electronic device 100 may further include a battery, power management circuitry, and other input-output units 160. The input-output unit 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light emitting diodes and other status indicators, and the like.
A user may input commands through input-output circuitry 150 to control the operation of electronic device 100, and may use output data of input-output circuitry 150 to enable receipt of status information and other outputs from electronic device 100.
Referring to fig. 1B, fig. 1B is a schematic flowchart of an image processing method according to an embodiment of the present application, applied to the electronic device shown in fig. 1A, as shown in fig. 1B, the image processing method includes:
101. a first image is acquired, the first image including at least raindrops.
The first image is an image taken in a rainy environment or in an environment where water drops are present. Specifically, the first image may be an image including raindrops captured by a camera of the electronic device in a rainy environment, or may be a video image in which any frame in a captured video includes raindrops when the electronic device captures the video in the rainy environment.
102. And carrying out target detection on the first image to obtain a first target area where the first target is located and a first target type.
In different shooting scenes, the electronic device shoots different targets, so the targets detected when target detection is performed on the first image also differ. For example, in some scenes a person is the shooting target, and target detection may be performed on the first image to determine the face in the first image; in some scenes an object, such as a billboard, a license plate, or a signboard, is the shooting target, and target detection may be performed to determine the target object in the first image; and in some scenes both person information and object information need to be detected, in which case face detection and object detection may be performed on the first image separately to determine the faces or objects included in the first image. For example, in a road shooting scene, both the vehicles and the people on the road need to be shot, so face detection and license plate detection may be performed on the first image separately to determine the faces or license plates included in the first image.
The first target type refers to a type of an object to which the first target belongs, and the first target type may be different in different shooting scenes, for example, in a road shooting scene, the first target type may be a person, a license plate, or the like, which is not limited herein.
Optionally, in the step 102, performing target detection on the first image to obtain a first target area where the first target is located and a first target type, which may include the following steps:
21. performing target detection on the first image according to a trained target detection model to obtain a first probability that a first target in the first image belongs to the first target type, wherein the target detection model is obtained by training according to a first sample image set in advance;
22. and if the first probability is larger than a preset probability threshold, determining a first target area where the first target is located.
The target detection model may be at least one of the following: a face detection model for performing face detection, an object detection model for performing object detection, and the like, where the object detection model may be, for example, a license plate detection model, a signboard detection model, or a tree detection model. The memory of the electronic device may store a preset target detection model in advance, and in a specific implementation, the electronic device may set different target detection models for different targets to be detected. For example, if vehicles and people on a road need to be monitored, the memory of the electronic device may store a preset face detection model and a preset license plate detection model in advance. Face detection may then be performed on the first image according to the preset face detection model to obtain a first probability that a first target in the first image belongs to a face, and if the first probability is greater than a preset probability threshold, the first target area where the face is located may be determined; alternatively, the first image may be detected according to the preset license plate detection model to obtain a first probability that the first target in the first image belongs to a license plate, and if the first probability is greater than the preset probability threshold, the first target area where the license plate is located may be determined. For another example, if a tree is to be taken as the shooting target, a preset tree detection model may be stored in the memory of the electronic device in advance, and the tree detection model is used to detect whether a tree exists in the first image and the first target area of the tree in the first image.
Optionally, the target detection model is obtained by training in advance according to a first sample image set, where the first sample image set includes a plurality of sample images for model training. Specifically, the sample images in the first sample image set differ according to the type of target to be detected: for example, if the target detection model is a face detection model, the sample images in the first sample image set all include faces; if the target detection model is a license plate detection model, the sample images all include license plates. Taking the license plate detection model as an example, a plurality of sample images including license plates may be acquired in advance, and a preset MobileNet model, which is a model used for target detection, may be trained on these sample images to obtain the trained target detection model. After the first image is acquired, it may be input into the target detection model to obtain a first probability that the first target belongs to a license plate, and if the first probability is greater than a preset probability threshold, the first target area where the first target is located is determined.
In this way, by performing target detection on the first image according to the pre-trained target detection model, whether a target of the type to be shot exists in the first image, and the first target area where that target is located, can be determined more accurately and quickly.
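For illustration only, the following Python sketch shows one possible implementation of steps 21 and 22 above; the detector object, its detect method, and the threshold value are assumptions for this example and are not defined in the present application.

    PRESET_PROBABILITY_THRESHOLD = 0.8  # assumed value, for illustration only

    def detect_first_target(first_image, detector, preset_type="license_plate"):
        # detector.detect is assumed to return (box, class_label, probability) tuples
        for box, class_label, probability in detector.detect(first_image):
            if class_label == preset_type and probability > PRESET_PROBABILITY_THRESHOLD:
                return box, class_label  # first target area (x, y, w, h) and first target type
        return None, None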
103. And if the first target type is a preset type, carrying out raindrop removing treatment on the first target area to obtain a second image.
The preset type may be set by a user or by default by the system, and may be flexibly set according to the shooting scene: the electronic device may set the preset type according to the target that needs to be detected. For example, if a license plate or a face needs to be detected in a road shooting scene, the preset type may be set to license plate or face, and when the first target type is a face or a license plate, raindrop removal processing is performed on the first target; in other shooting scenes, the preset type may be set to other types, which is not limited here. In this way, raindrop removal processing can be performed on a target when it appears, and since only the region where the target is located is processed, the raindrop removal efficiency and the subsequent target recognition accuracy can both be improved.
In a specific implementation, if the first target type is the preset type, it indicates that a target to be detected exists in the first image, and raindrop removal processing is performed on the first target area to obtain the second image, so that the target to be detected is subjected to raindrop removal processing; if the first target type is not the preset type, it indicates that the target to be shot does not exist in the first image, and the process may be terminated.
Therefore, the second image with the clearer first target area can be obtained by performing the raindrop removing treatment on the first target area.
Optionally, in the step 103, performing raindrop removing processing on the first target region to obtain the second image includes the following steps:
31. and carrying out raindrop removal treatment on the first target area according to the trained raindrop removal model to obtain the second image, wherein the raindrop removal model is obtained by training according to a second sample image set in advance.
The memory of the electronic device may pre-store a trained raindrop removal model, which is obtained by training in advance according to a second sample image set. The second sample image set may include a plurality of pre-acquired sample images, and the trained raindrop removal model may be obtained by training according to these sample images. In this way, raindrop removal processing can be performed on the first target area in the first image according to the pre-trained raindrop removal model to obtain a second image with a clearer first target area.
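For illustration only, the following Python sketch shows how raindrop removal processing might be applied to the first target area alone, assuming the image is a numpy array and that derain_model is a callable wrapping the trained raindrop removal model; these names are illustrative and not defined in the present application.

    import numpy as np

    def derain_target_area(first_image, target_box, derain_model):
        # first_image: H x W x C numpy array; target_box: (x, y, w, h)
        x, y, w, h = target_box
        crop = first_image[y:y + h, x:x + w]      # first target area only
        derained_crop = derain_model(crop)        # trained raindrop-removal model (assumed callable)
        second_image = first_image.copy()
        second_image[y:y + h, x:x + w] = derained_crop
        return second_image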
Optionally, in this embodiment of the present application, the following steps may also be included:
32. pre-acquiring a second sample image set, wherein the second sample image set comprises a plurality of sample images;
33. and training a preset raindrop removing model according to the plurality of sample images to obtain the trained raindrop removing model, wherein the trained raindrop removing model is used for carrying out raindrop removing treatment on the first target area.
In a specific implementation, the trained raindrop removal model is obtained according to a second sample image set. The second sample image set is acquired in advance and includes a plurality of sample images, all of which include raindrops, and the preset raindrop removal model is trained according to these sample images to obtain the trained raindrop removal model.
Optionally, in the step 33, the training of the preset raindrop removal model according to the plurality of sample images to obtain the trained raindrop removal model may include the following steps:
34. sequentially inputting each sample image in the plurality of sample images into the preset raindrop removal model to obtain a plurality of output images, wherein the plurality of output images correspond to the plurality of sample images one to one, and each sample image is an image obtained by adding raindrop processing to an original image;
35. and calculating image errors in sequence between each output image in the plurality of output images and the original image corresponding to that output image, and stopping training the preset raindrop removal model if a preset number of consecutively obtained image errors are all smaller than a preset error threshold, to obtain the trained raindrop removal model.
The preset raindrop removal model may be a generative adversarial network (GAN) model; the GAN model is a deep learning model, and the trained raindrop removal model used for raindrop removal processing may be obtained by training the GAN model with the plurality of sample images. Specifically, each sample image is an image including raindrops. Each of the plurality of sample images may be input into the preset raindrop removal model in turn to obtain a plurality of output images, where the output images correspond to the sample images one to one: one sample image is input and one output image is output. Each sample image may be an image including raindrops obtained by adding a raindrop effect to a corresponding original image, so an image error can be calculated between each output image and the corresponding raindrop-free original image. As the number of input sample images increases and the preset raindrop removal model is gradually trained, the image errors of the output images become smaller, and the output images corresponding to later sample images are clearer than those corresponding to earlier ones; that is, the output images become clearer as the preset raindrop removal model is trained. Therefore, when a preset number of consecutively obtained image errors are all smaller than a preset error threshold, the input of sample images can be terminated and the trained raindrop removal model is obtained.
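For illustration only, the following Python sketch shows one possible form of the training loop in steps 34 and 35, assuming paired rain-added and original images and an external train_step function; the error metric and the threshold values are assumptions, not parameters specified in the present application.

    import numpy as np

    PRESET_ERROR_THRESHOLD = 0.01   # assumed value
    PRESET_CONSECUTIVE_COUNT = 50   # assumed "preset number" of consecutive small errors

    def train_derain_model(model, rainy_samples, clean_originals, train_step):
        # rainy_samples[i] is clean_originals[i] with a raindrop effect added.
        # train_step is an assumed function that runs one optimisation step on
        # (model, rainy, clean) and returns the model's output image.
        consecutive_small_errors = 0
        for rainy, clean in zip(rainy_samples, clean_originals):
            output = train_step(model, rainy, clean)
            # image error: mean absolute pixel difference to the raindrop-free original
            image_error = float(np.mean(np.abs(output.astype(np.float32) - clean.astype(np.float32))))
            if image_error < PRESET_ERROR_THRESHOLD:
                consecutive_small_errors += 1
                if consecutive_small_errors >= PRESET_CONSECUTIVE_COUNT:
                    break                     # stop training
            else:
                consecutive_small_errors = 0
        return model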
Optionally, in a video shooting scene, the first image is any frame of video image in a video, and in this embodiment of the application, after performing target detection on the first image, the method may further include the following steps:
1031. if the first target area and the first target type of the first target are not detected, performing target detection on a video image of a next frame of the first image in the video;
1032. and if a second target area and a second target type of a second target are detected, and the second target type is the preset type, performing raindrop removal processing on the second target area to obtain a third image.
In the embodiment of the present application, in a video shooting scene, the shot video includes multiple frames of video images. Considering that not every frame contains a target to be shot, it is unnecessary to perform raindrop removal processing on all video images. Therefore, after the first image is detected, if the first target area and first target type of the first target are not detected, which indicates that no shooting target appears in the first image, raindrop removal processing is not performed on the first image; instead, target detection is performed on the next frame of video image after the first image in the video. If a second target area and a second target type of a second target are detected, and the second target type is the preset type, raindrop removal processing is performed on the second target area to obtain a third image. In this way, raindrop removal processing is performed only on video images containing a target of the preset type and not on video images without such a target, which reduces the computation amount and workload of raindrop removal processing and increases the speed of performing raindrop removal processing on the shot video.
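For illustration only, the following Python sketch shows the per-frame gating described above, reusing the detect_first_target and derain_target_area sketches given earlier; frames without a preset-type target are passed through unprocessed.

    def process_video_frames(frames, detector, derain_model, preset_type="license_plate"):
        processed = []
        for frame in frames:
            box, target_type = detect_first_target(frame, detector, preset_type)
            if box is not None and target_type == preset_type:
                # a preset-type target appears in this frame: de-rain its area only
                frame = derain_target_area(frame, box, derain_model)
            # frames without a preset-type target are kept unchanged
            processed.append(frame)
        return processed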
Optionally, the first image is any frame of video image in a video, and in this embodiment of the application, after performing target detection on the first image, the method may further include the following steps:
1033. determining a target moving track of the first target according to a plurality of video images which are subjected to target detection in the video;
1034. estimating a third target area of the first target in the next frame of video image according to the target moving track;
1035. and carrying out raindrop removing treatment on the third target area to obtain a fourth image.
In a specific implementation, referring to fig. 1C, fig. 1C is a schematic diagram of determining a target movement trajectory of a first target. The target area of the first target in each of a plurality of video images on which target detection has already been performed may be determined, so as to obtain a plurality of target areas, and the target movement trajectory of the first target may then be determined according to these target areas. For example, in a road monitoring scene, a vehicle is travelling on a road, and the target movement trajectory of the vehicle may be determined according to the plurality of video images on which target detection has been performed, where the first target may be a vehicle or a license plate. Further, a third target area of the first target in the next frame of video image may be estimated according to the target movement trajectory, and raindrop removal processing may be performed on that third target area, so that the user can clearly see the specific information of the third target area when viewing the video.
For example, in a road monitoring scene, a target moving track of a vehicle or a license plate may be determined according to a plurality of video images that have been subjected to target detection, then a third target area of the vehicle or the license plate in a next frame of video image may be estimated according to the target moving track, and finally, raindrop removal processing may be performed on the third target area in the next frame of video image, so that a user may see clear vehicle or license plate information during road monitoring.
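For illustration only, the following Python sketch estimates the third target area in the next frame by linearly extrapolating the last two detected target areas; the present application does not prescribe a particular motion model, so this is only one possible choice. The estimated area can then be passed to the derain_target_area sketch given earlier to obtain the fourth image.

    def estimate_next_target_area(previous_boxes):
        # previous_boxes: target areas (x, y, w, h) from frames already detected
        (x1, y1, w1, h1), (x2, y2, w2, h2) = previous_boxes[-2], previous_boxes[-1]
        dx, dy = x2 - x1, y2 - y1                 # per-frame displacement of the box
        return (x2 + dx, y2 + dy, w2, h2)         # estimated third target area in the next frame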
Optionally, after performing raindrop removing processing on the first target region in the step 103 to obtain a second image, the method may further include the following steps:
1036. if the preset type is a license plate, performing license plate detection on the second image to obtain a target license plate number of the license plate; and searching a preset vehicle information base according to the target license plate number to obtain target vehicle information corresponding to the target license plate number, wherein the vehicle information base comprises a plurality of license plate numbers and a plurality of pieces of vehicle information corresponding to the license plate numbers one by one.
1037. If the preset type is a human face, matching the second image with a human face sample image in a preset human face sample image library to obtain a target human face sample image successfully matched with the second image; and searching a preset identity information base according to the target face sample image to obtain target identity information corresponding to the target face sample image, wherein the identity information base comprises a plurality of face sample images and a plurality of identity information corresponding to the face sample images one to one.
In a road shooting scene, it may be necessary to monitor whether there are vehicle violations or personnel violations, which requires determining the target license plate number of a vehicle and the target identity information of the person corresponding to a captured face. Specifically, the captured first image may include a license plate; raindrop removal processing is performed on the first image to obtain the second image, and license plate detection is then performed on the second image to obtain the target license plate number of the license plate. Referring to fig. 1D, fig. 1D is a schematic diagram of performing raindrop removal processing on the first image according to an embodiment of the present application, where the first image includes a license plate occluded by raindrops; after raindrop removal processing, a second image with a clearer license plate is obtained, and license plate detection may be performed on the second image to obtain the target license plate number. The memory of the electronic device may also pre-store a preset vehicle information base, so that the vehicle information base can be searched according to the target license plate number to obtain the target vehicle information corresponding to the target license plate number, thereby assisting road traffic management personnel in handling vehicle violations. Similarly, the captured first image may include a face; raindrop removal processing is performed on the first image to obtain the second image, and the second image is matched with the face sample images in a preset face sample image library to obtain a target face sample image successfully matched with the second image. The memory of the electronic device may also pre-store a preset identity information base. Referring to fig. 1E, fig. 1E is another schematic diagram of performing raindrop removal processing on the first image according to an embodiment of the present application, where the first image includes a face occluded by raindrops; after raindrop removal processing, a second image with a clearer face is obtained, and face detection may be performed on the second image, so that the identity information base can be searched according to the target face sample image to obtain the target identity information corresponding to the target face sample image, thereby assisting road traffic management personnel in handling personnel violations.
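For illustration only, the following Python sketch shows the database lookups of steps 1036 and 1037, assuming recognisers recognise_plate_number and match_face_sample and dictionary-based information bases; all of these names are assumptions for this example and are not defined in the present application.

    def lookup_vehicle_info(second_image, vehicle_info_base, recognise_plate_number):
        plate_number = recognise_plate_number(second_image)   # target license plate number
        return vehicle_info_base.get(plate_number)            # target vehicle information, or None

    def lookup_identity_info(second_image, identity_info_base, match_face_sample):
        sample_id = match_face_sample(second_image)           # best-matching face sample image
        return identity_info_base.get(sample_id)              # target identity information, or None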
It can be seen that, in the embodiment of the present application, a first image including at least raindrops is acquired; target detection is performed on the first image to obtain a first target area where the first target is located and a first target type; and, if the first target type is a preset type, raindrop removal processing is performed on the first target area to obtain a second image, so that raindrop removal processing can be performed on the target when the target appears.
Referring to fig. 2, fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure, applied to an electronic device shown in fig. 1A, the method including:
201. a second sample image set is pre-acquired, the second sample image set comprising a plurality of sample images.
202. And carrying out raindrop adding effect processing on each sample image in the plurality of sample images to obtain a plurality of sample images.
203. And training a preset raindrop removing model according to the plurality of sample images to obtain the trained raindrop removing model.
204. A first image is acquired, the first image including at least raindrops.
205. And performing target detection on the first image according to the trained target detection model to obtain a first probability that a first target in the first image belongs to the first target type, wherein the target detection model is obtained by training according to a first sample image set in advance.
206. And if the first probability is larger than a preset probability threshold, determining a first target area where the first target is located.
207. And if the first target type is a preset type, carrying out raindrop removing treatment on the first target area according to the preset raindrop removing model after training to obtain the second image.
The specific implementation process of steps 201-207 may refer to the corresponding description in steps 101-103, and will not be described herein again.
As can be seen, in the embodiment of the present application, a second sample image set including a plurality of sample images is acquired in advance; raindrop-adding effect processing is performed on each sample image to obtain a plurality of rain-added sample images; a preset raindrop removal model is trained according to these sample images to obtain the trained raindrop removal model; a first image including at least raindrops is acquired; target detection is performed on the first image to obtain a first target area where the first target is located and a first target type; and, if the first target type is the preset type, raindrop removal processing is performed on the first target area to obtain a second image. In this way, raindrop removal processing can be performed when a target appears; since only the region where the target is located is processed, the raindrop removal efficiency and the subsequent target recognition accuracy can be improved; in addition, performing raindrop removal processing according to the pre-trained raindrop removal model makes the second image clearer.
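For illustration only, the following Python sketch strings the second embodiment together, from building rain-added training pairs (step 202) to processing a first image (steps 204 to 207), reusing the sketches given earlier; add_raindrop_effect is an assumed augmentation function, not one defined in the present application.

    def build_training_pairs(original_images, add_raindrop_effect):
        # step 202: add a raindrop effect to each original image
        rainy_samples = [add_raindrop_effect(image) for image in original_images]
        return rainy_samples, original_images

    def process_first_image(first_image, detector, derain_model, preset_type):
        # steps 205-207: detect, check the target type, then de-rain the target area only
        box, target_type = detect_first_target(first_image, detector, preset_type)
        if box is None or target_type != preset_type:
            return first_image                     # no preset-type target: leave the image unchanged
        return derain_target_area(first_image, box, derain_model)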
Referring to fig. 3, in accordance with the aforementioned fig. 1B, fig. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application, where the method includes:
In a video shooting scene, as shown in fig. 3, multiple consecutive frames of video images in a shot video may be processed. Considering that a target to be shot does not exist in every frame of video image, it is unnecessary to perform raindrop removal processing on all video images. Therefore, video images may be acquired in sequence from the multiple frames, and target detection may be performed on each video image (e.g., the first image). If a target area and a target type are detected, and the target type is the preset type, indicating that a shooting target appears in the video image, raindrop removal processing is performed on the target area in that video image. If the target area and target type are not detected, indicating that no shooting target appears in the video image, raindrop removal processing is not performed on it; target detection is instead performed on the next frame of video image in the video, and when a target area and a target type of the preset type are detected, raindrop removal processing is performed on that target area to obtain a de-rained image, and so on. In this way, raindrop removal processing is performed only on video images containing a target of the preset type and not on video images without such a target, which reduces the computation amount and workload of raindrop removal processing and increases the speed of performing raindrop removal processing on the shot video.
It can be seen that in the embodiment of the application, the raindrop removing processing can be performed only on the video image with the target of the preset type, and the raindrop removing processing is not performed on the video image without the target of the preset type, so that the calculation amount and the workload of the raindrop removing processing can be reduced, and the speed of the raindrop removing processing on the shot video can be improved.
The following is a device for implementing the image processing method, specifically as follows:
in accordance with the above, please refer to fig. 4, where fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, the electronic device includes: a processor 410, a communication interface 430, and a memory 420; and one or more programs 421, the one or more programs 421 stored in the memory 420 and configured to be executed by the processor, the programs 421 including instructions for:
acquiring a first image, wherein the first image at least comprises raindrops;
performing target detection on the first image to obtain a first target area where a first target is located and a first target type;
and if the first target type is a preset type, carrying out raindrop removing treatment on the first target area to obtain a second image.
In one possible example, in the aspect of performing object detection on the first image to obtain a first object region where the first object is located and a first object type, the program 421 includes instructions for performing the following steps:
performing target detection on the first image according to a trained target detection model to obtain a first probability that a first target in the first image belongs to the first target type, wherein the target detection model is obtained by training according to a first sample image set in advance;
and if the first probability is larger than a preset probability threshold, determining a first target area where the first target is located.
In one possible example, in said performing a raindrop removal process on said first target area resulting in a second image, said program 421 comprises instructions for:
and carrying out raindrop removing treatment on the first target area according to a preset trained raindrop removing model to obtain the second image, wherein the trained raindrop removing model is obtained by training according to a second sample image set in advance.
In one possible example, the program 421 further includes instructions for performing the steps of:
pre-acquiring a second sample image set, wherein the second sample image set comprises a plurality of sample images;
performing raindrop adding effect processing on each sample image in the plurality of sample images to obtain a plurality of sample images;
and training a preset raindrop removing model according to the plurality of sample images to obtain the trained raindrop removing model, wherein the trained raindrop removing model is used for carrying out raindrop removing treatment on the first target area.
In one possible example, in the training of the preset raindrop removal model according to the plurality of sample images to obtain the trained raindrop removal model, the program 421 includes instructions for:
sequentially inputting each sample image in the plurality of sample images into the preset raindrop removal model to obtain a plurality of output images, wherein the plurality of output images correspond to the plurality of sample images one to one;
and calculating image errors in sequence according to the image data of each output image in the plurality of output images and the sample image corresponding to the output image, and stopping training the preset raindrop removal model if a preset number of consecutively obtained image errors are all smaller than a preset error threshold, to obtain the trained raindrop removal model.
In one possible example, the first image is any frame of a video image in a video, and the program 421 further includes instructions for:
if the first target area and the first target type of the first target are not detected, performing target detection on a video image of a next frame of the first image in the video;
and if a second target area and a second target type of a second target are detected, and the second target type is the preset type, performing raindrop removal processing on the second target area to obtain a third image.
In one possible example, the program 421 further includes instructions for performing the steps of:
if the preset type is a license plate, performing license plate detection on the second image to obtain a target license plate number of the license plate; and searching a preset vehicle information base according to the target license plate number to obtain target vehicle information corresponding to the target license plate number, wherein the vehicle information base comprises a plurality of license plate numbers and a plurality of pieces of vehicle information corresponding to the license plate numbers one by one.
If the preset type is a human face, matching the second image with a human face sample image in a preset human face sample image library to obtain a target human face sample image successfully matched with the second image; and searching a preset identity information base according to the target face sample image to obtain target identity information corresponding to the target face sample image, wherein the identity information base comprises a plurality of face sample images and a plurality of identity information corresponding to the face sample images one to one.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an image processing apparatus 500 applied to an electronic device according to the present embodiment, where the apparatus 500 includes an obtaining unit 501, a detecting unit 502 and a processing unit 503, wherein,
the acquiring unit 501 is configured to acquire a first image, where the first image at least includes raindrops;
the detection unit 502 is configured to perform target detection on the first image to obtain a target area and a target type of a target;
the processing unit 503 is configured to perform raindrop removal processing on the first target area to obtain a second image when the target type is a preset type.
Optionally, in the aspect of performing the target detection on the first image to obtain a first target region where the first target is located and a first target type, the detection unit 502 is specifically configured to:
performing target detection on the first image according to a trained target detection model to obtain a first probability that a first target in the first image belongs to the first target type, wherein the target detection model is obtained by training according to a first sample image set in advance;
and if the first probability is larger than a preset probability threshold, determining a first target area where the first target is located.
Optionally, in terms of performing raindrop removal processing on the first target region to obtain a second image, the processing unit 503 is specifically configured to:
and carrying out raindrop removing treatment on the first target area according to a preset trained raindrop removing model to obtain the second image, wherein the trained raindrop removing model is obtained by training according to a second sample image set in advance.
Optionally, the obtaining unit 501 is further configured to obtain a second sample image set in advance, where the second sample image set includes a plurality of sample images;
the processing unit 503 is further configured to perform raindrop adding processing on each of the plurality of sample images to obtain a plurality of sample images; and training a preset raindrop removing model according to the plurality of sample images to obtain the trained raindrop removing model, wherein the trained raindrop removing model is used for carrying out raindrop removing treatment on the first target area.
Optionally, in the aspect that a preset raindrop removing model is trained according to the plurality of sample images to obtain the trained raindrop removing model, the processing unit 503 is specifically configured to:
sequentially inputting each sample image in the plurality of sample images into the preset raindrop removal model to obtain a plurality of output images, wherein the plurality of output images correspond to the plurality of sample images one to one;
and calculating image errors in sequence according to the image data of each output image in the plurality of output images and the sample image corresponding to the output image, and stopping training the preset raindrop removal model if a preset number of consecutively obtained image errors are all smaller than a preset error threshold, to obtain the trained raindrop removal model.
Optionally, the first image is any frame of video image in a video, and the detection unit 502 is further configured to perform target detection on a next frame of video image of the first image in the video if the first target area and the first target type of the first target are not detected;
the processing unit 503 is further configured to, if a second target area and a second target type of a second target are detected, and the second target type is the preset type, perform raindrop removal processing on the second target area to obtain a third image.
Optionally, the detecting unit 502 is further configured to perform license plate detection on the second image to obtain a target license plate number of the license plate if the preset type is a license plate;
the processing unit 503 is further configured to search a preset vehicle information base according to the target license plate number to obtain target vehicle information corresponding to the target license plate number, where the vehicle information base includes a plurality of license plates and a plurality of pieces of vehicle information corresponding to the license plates one to one;
the processing unit 503 is further configured to, if the preset type is a human face, match the second image with a human face sample image in a preset human face sample image library to obtain a target human face sample image successfully matched with the second image; and searching a preset identity information base according to the target face sample image to obtain target identity information corresponding to the target face sample image, wherein the identity information base comprises a plurality of face sample images and a plurality of identity information corresponding to the face sample images one to one.
It can be seen that the image processing apparatus described in the embodiment of the present application acquires a first image including at least raindrops; performs target detection on the first image to obtain a first target area where the first target is located and a first target type; and, if the first target type is a preset type, performs raindrop removal processing on the first target area to obtain a second image, so that raindrop removal processing can be performed on the target when the target appears.
It is to be understood that the functions of each program module of the image processing apparatus of this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods described in the above method embodiments. The computer program product may be a software installation package, and the computer includes an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative: the division of the units is only one type of division of logical functions, and other divisions may be adopted in practice; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program; the program may be stored in a computer-readable memory, and the memory may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as a limitation on the present application.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring a first image, wherein the first image at least comprises raindrops;
performing target detection on the first image to obtain a first target area where a first target is located and a first target type;
and if the first target type is a preset type, performing raindrop removal processing on the first target area to obtain a second image.
2. The method according to claim 1, wherein the performing the target detection on the first image to obtain a first target area where the first target is located and a first target type comprises:
performing target detection on the first image according to a trained target detection model to obtain a first probability that a first target in the first image belongs to the first target type, wherein the target detection model is obtained by training according to a first sample image set in advance;
and if the first probability is larger than a preset probability threshold, determining a first target area where the first target is located.
3. The method according to claim 1 or 2, wherein the performing raindrop removal processing on the first target area to obtain a second image comprises:
performing raindrop removal processing on the first target area according to the trained raindrop removal model to obtain the second image, wherein the raindrop removal model is obtained by training in advance according to a second sample image set.
4. The method of claim 3, further comprising:
pre-acquiring a second sample image set, wherein the second sample image set comprises a plurality of sample images;
and training a preset raindrop removal model according to the plurality of sample images to obtain the trained raindrop removal model, wherein the trained raindrop removal model is used for performing raindrop removal processing on the first target area.
5. The method of claim 4, wherein the training a preset raindrop removal model according to the plurality of sample images to obtain the trained raindrop removal model comprises:
sequentially inputting each sample image in the plurality of sample images into the preset raindrop removal model to obtain a plurality of output images, wherein the plurality of output images correspond to the plurality of sample images one to one, and each sample image is an image obtained by adding raindrops to an original image;
and sequentially calculating image errors according to each output image in the plurality of output images and the original image corresponding to the output image, and stopping training the preset raindrop removal model if a preset number of consecutively obtained image errors are all smaller than a preset error threshold value, so as to obtain the trained raindrop removal model.
6. The method according to any one of claims 1-5, wherein the first image is any frame of video image in a video, the method further comprising:
if the first target area and the first target type of the first target are not detected, performing target detection on the next frame of video image following the first image in the video;
and if a second target area and a second target type of a second target are detected, and the second target type is the preset type, performing raindrop removal processing on the second target area to obtain a third image.
7. The method according to any one of claims 1-5, wherein the first image is any frame of video image in a video, the method further comprising:
determining a target moving track of the first target according to a plurality of video images which are subjected to target detection in the video;
estimating a third target area of the first target in the next frame of video image according to the target moving track;
and performing raindrop removal processing on the third target area to obtain a fourth image.
8. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition unit configured to acquire a first image including at least raindrops;
the detection unit is used for carrying out target detection on the first image to obtain a first target area where a first target is located and a first target type;
and the processing unit is used for performing raindrop removal processing on the first target area to obtain a second image under the condition that the first target type is a preset type.
9. An electronic device, comprising a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs comprise instructions for performing the steps of the method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-7.
CN201911418586.2A 2019-12-31 2019-12-31 Image processing method and related product Pending CN111191606A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911418586.2A CN111191606A (en) 2019-12-31 2019-12-31 Image processing method and related product

Publications (1)

Publication Number Publication Date
CN111191606A true CN111191606A (en) 2020-05-22

Family

ID=70707951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911418586.2A Pending CN111191606A (en) 2019-12-31 2019-12-31 Image processing method and related product

Country Status (1)

Country Link
CN (1) CN111191606A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020579A (en) * 2011-09-22 2013-04-03 上海银晨智能识别科技有限公司 Face recognition method and system, and removing method and device for glasses frame in face image
CN104331865A (en) * 2014-10-22 2015-02-04 中国科学院深圳先进技术研究院 Video raindrop detection and removing method based on naive Bayesian probability model
US20170278546A1 (en) * 2016-03-25 2017-09-28 Samsung Electronics Co., Ltd. Method and device for processing multimedia information
CN107516295A (en) * 2016-06-15 2017-12-26 诺基亚技术有限公司 The method and apparatus for removing the noise in image
KR101845816B1 (en) * 2017-08-31 2018-05-18 케이에스아이 주식회사 Parking management syste and method for recognizing licence plate removing snowfall/rainfall
CN109035304A (en) * 2018-08-07 2018-12-18 北京清瑞维航技术发展有限公司 Method for tracking target, calculates equipment and device at medium
CN109325538A (en) * 2018-09-29 2019-02-12 北京京东尚科信息技术有限公司 Object detection method, device and computer readable storage medium
CN109348088A (en) * 2018-11-22 2019-02-15 Oppo广东移动通信有限公司 Image denoising method, device, electronic equipment and computer readable storage medium
CN110189290A (en) * 2019-04-08 2019-08-30 广东工业大学 Metal surface fine defects detection method and device based on deep learning
CN110248107A (en) * 2019-06-13 2019-09-17 Oppo广东移动通信有限公司 Image processing method and device
CN110390261A (en) * 2019-06-13 2019-10-29 北京汽车集团有限公司 Object detection method, device, computer readable storage medium and electronic equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111752450A (en) * 2020-05-28 2020-10-09 维沃移动通信有限公司 Display method and device and electronic equipment
CN112085680A (en) * 2020-09-09 2020-12-15 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and storage medium
CN112085680B (en) * 2020-09-09 2023-12-12 腾讯科技(深圳)有限公司 Image processing method and device, electronic equipment and storage medium
CN113221920A (en) * 2021-05-20 2021-08-06 北京百度网讯科技有限公司 Image recognition method, device, equipment, storage medium and computer program product
CN113221920B (en) * 2021-05-20 2024-01-12 北京百度网讯科技有限公司 Image recognition method, apparatus, device, storage medium, and computer program product

Similar Documents

Publication Publication Date Title
CN109241859B (en) Fingerprint identification method and related product
CN106407984B (en) Target object identification method and device
CN109977859B (en) Icon identification method and related device
CN109614865B (en) Fingerprint identification method and related product
CN107657218B (en) Face recognition method and related product
CN111191606A (en) Image processing method and related product
CN107480496A (en) Solve lock control method and Related product
CN110245607B (en) Eyeball tracking method and related product
CN110099219B (en) Panoramic shooting method and related product
CN110427741B (en) Fingerprint identification method and related product
CN109376781B (en) Training method of image recognition model, image recognition method and related device
CN107451446A (en) Solve lock control method and Related product
EP3869389A1 (en) Electronic device, and fingerprint image processing method and related product
CN112703534B (en) Image processing method and related product
CN109086761A (en) Image processing method and device, storage medium, electronic equipment
CN107770478A (en) video call method and related product
CN109213897A (en) Video searching method, video searching apparatus and video searching system
CN110363702B (en) Image processing method and related product
CN110198421B (en) Video processing method and related product
CN108650466A (en) The method and electronic equipment of photo tolerance are promoted when a kind of strong light or reversible-light shooting portrait
CN110162264B (en) Application processing method and related product
CN107330867A (en) Image combining method, device, computer-readable recording medium and computer equipment
CN110796673B (en) Image segmentation method and related product
CN110222576B (en) Boxing action recognition method and device and electronic equipment
CN112989878A (en) Pupil detection method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination