WO2020001197A1 - Image processing method, electronic device, and computer-readable storage medium - Google Patents

Image processing method, electronic device, and computer-readable storage medium

Info

Publication number
WO2020001197A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
scene
label
target
processor
Prior art date
Application number
PCT/CN2019/087588
Other languages
English (en)
French (fr)
Inventor
陈岩 (Chen Yan)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2020001197A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/35 - Categorising the entire scene, e.g. birthday party or wedding scene
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes

Definitions

  • the present application relates to the field of computer technology, and in particular, to an image processing method, an electronic device, and a computer-readable storage medium.
  • scene detection and target detection can be performed on the image through image recognition technologies such as neural networks, so that the image can be optimized based on the detection results.
  • an image processing method, an electronic device, and a computer-readable storage medium are provided.
  • An image processing method includes: performing scene detection on an image to obtain a scene label of the image; when the scene label includes a backlight scene label, performing a light normalization process on the image, the light normalization process being a process of eliminating a change in brightness of the image; and performing target detection on the processed image.
  • An electronic device includes a memory and a processor. The memory stores a computer program that, when executed by the processor, causes the processor to perform the following operations: performing scene detection on an image to obtain a scene label of the image; when the scene label includes a backlight scene label, performing a light normalization process on the image, the light normalization process being a process of eliminating a change in brightness of the image; and performing target detection on the processed image.
  • A computer-readable storage medium stores a computer program thereon. When the computer program is executed by a processor, the following operations are implemented: performing scene detection on an image to obtain a scene label of the image; when the scene label includes a backlight scene label, performing a light normalization process on the image, the light normalization process being a process of eliminating a change in brightness of the image; and performing target detection on the processed image.
  • the image processing method, electronic device, and computer-readable storage medium provided in the embodiments of the present application can process an image when it is detected that the image contains a backlit scene, and then perform target detection on the processed image, which can improve the accuracy of image target detection.
  • FIG. 1 is a schematic diagram of an internal structure of an electronic device in one or more embodiments.
  • FIG. 2 is a flowchart of an image processing method in one or more embodiments.
  • FIG. 3 is a flowchart of scene detection on an image in one or more embodiments.
  • FIG. 4 is a flowchart of performing light normalization processing on an image in one or more embodiments.
  • FIG. 5 is a flowchart of performing a brightness enhancement process on a backlight region in an image in one or more embodiments.
  • FIG. 6 is a flowchart of an image processing method in one or more embodiments.
  • FIG. 7 is a structural block diagram of an image processing apparatus in one or more embodiments.
  • FIG. 8 is a schematic diagram of an image processing circuit in one or more embodiments.
  • FIG. 1 is a schematic diagram of an internal structure of an electronic device in an embodiment.
  • the electronic device includes a processor, a memory, and a network interface connected through a system bus.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the memory is used to store data, programs, and the like. At least one computer program is stored on the memory, and the computer program can be executed by a processor to implement the image processing method applicable to the electronic device provided in the embodiments of the present application.
  • the memory may include a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and a computer program.
  • the computer program can be executed by a processor to implement an image processing method provided by each of the following embodiments.
  • the internal memory provides a cached running environment for the operating system and computer programs in the non-volatile storage medium.
  • the network interface may be an Ethernet card or a wireless network card, and is used to communicate with external electronic devices.
  • the electronic device may be a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
  • FIG. 2 is a flowchart of an image processing method according to an embodiment.
  • the image processing method in this embodiment is described by taking the electronic device shown in FIG. 1 as an example.
  • the image processing method includes operations 202 to 206.
  • Operation 202 Perform scene detection on the image to obtain a scene label of the image.
  • An image is an image captured by an electronic device through a camera.
  • the image may also be an image stored locally on the electronic device, or may be an image downloaded from the network by the electronic device.
  • scene recognition models can be trained according to deep learning algorithms such as VGG (Visual Geometry Group), CNN (Convolutional Neural Network), SSD (Single Shot MultiBox Detector), and decision trees.
  • the trained scene recognition model then performs scene recognition on the image.
  • the scene recognition model generally includes an input layer, a hidden layer, and an output layer; the input layer is used to receive the input image; the hidden layer is used to process the received image; and the output layer is used to output the final result, that is, the scene recognition result of the image.
  • the scene of the image can be landscape, beach, blue sky, green grass, snow, night, dark, backlight, sunset, fireworks, spotlight, indoor, macro, etc.
  • the scene label of an image refers to the scene classification mark of the image.
  • the electronic device may determine a scene label of the image based on a scene recognition result of the image. For example, when the scene recognition result of the image is blue sky, the scene label of the image is blue sky.
  • the electronic device may perform scene recognition on the image of the electronic device according to the scene recognition model, and determine a scene label of the image according to the scene recognition result.
  • Operation 204: When the scene label includes a backlight scene label, the image is subjected to a light normalization process; the light normalization process is a process of eliminating a change in image brightness.
  • Backlight refers to a situation where the subject is underexposed when the subject is located between the light source and the camera of the electronic device, and the brightness of the foreground area (that is, the subject) in the image is lower than the brightness of the background area.
  • when the scene label of the image contains the backlight scene label, the brightness of the foreground area in the image is lower than that of the background area.
  • the light normalization process is a process to eliminate changes in image brightness. Specifically, the light normalization process is performed on an image containing a backlight scene label to enhance the brightness of the foreground area and eliminate the brightness variation between the foreground area and the background area. Electronic devices can use methods such as histogram equalization or an affine-based varying-illumination model to perform light normalization.
  • Operation 206: Target detection is performed on the processed image.
  • Object detection refers to a method of identifying the type of an object in an image and calibrating the position of the object in the image according to the characteristics reflected in the image information.
  • the image feature information of the image can be matched with the feature information corresponding to the stored target tag, and the successfully matched target tag can be obtained as the target tag of the image.
  • the target tags pre-stored in the electronic device may include: portrait, baby, cat, dog, food, text, blue sky, green grass, beach, fireworks, and so on.
  • When the electronic device performs target detection on an image and there is only one target label, that label is used as the target label of the image; when there are multiple target labels in the image to be detected, the electronic device may select one or more of them as the target labels. For example, the electronic device may select, from the multiple candidates, the label whose target region has the largest area, or the label whose target region has the highest definition, as the target label of the image.
  • In the image processing method above, a scene label of an image is obtained by performing scene detection on the image. When the scene label includes a backlight scene label, the image is subjected to light normalization processing, which can eliminate the brightness change of the image caused by the backlight. Performing target detection on the processed image can then improve the accuracy of image target detection.
  • In one embodiment, the process of performing scene detection on an image to obtain a scene label of the image further includes operations 302 to 306. Among them:
  • Operation 302 Perform scene detection on the image to obtain an initial result of scene recognition.
  • the electronic device can train a scene recognition model according to deep learning algorithms such as VGG, SSD, and decision tree, and perform scene detection on the image according to the scene recognition model to obtain the initial results of scene recognition.
  • the initial result of scene recognition may include the initial category of scene detection and the confidence level corresponding to the initial category.
  • the initial scene recognition results for the image can be green grass: 70% confidence, blue sky: 80% confidence, backlight: 75% confidence.
  • Operation 304 Acquire a shooting time of the image.
  • the shooting time refers to the time when the electronic device collects the image through the camera.
  • the electronic device records the acquisition time when acquiring images.
  • when the electronic device subsequently needs the shooting time of the image, it can directly read the recorded acquisition time.
  • Operation 306: Correct the initial result of scene detection according to the shooting time, and obtain a scene label of the image according to the correction result.
  • according to the shooting time, the probability of certain scenes appearing in the image can be estimated and then combined with the initial result of scene detection for correction.
  • the electronic device can pre-store scene categories corresponding to different shooting times and weight values corresponding to those categories. Specifically, these may be obtained by statistical analysis of a large number of image materials, matching each shooting time interval with its corresponding scene categories and weight values.
  • For example, when the shooting time is between 20:00 and 21:00, the weight of "night scene" is 9, the weight of "blue sky" is -5, and the weight of "backlight" is 5; when the shooting time is between 18:00 and 19:00, the weight of "night scene" is -2, the weight of "blue sky" is 6, and the weight of "backlight" is 8. The weight values range over [-10, 10].
  • the larger the weight value, the greater the probability of the scene appearing in the image; the smaller the weight value, the smaller that probability. Each unit of weight above 0 increases the confidence of the corresponding scene by one percentage point; similarly, each unit below 0 decreases it by one percentage point.
  • the electronic device can correct the initial results of image scene recognition according to the scene categories corresponding to different shooting times and the weights corresponding to the scene categories, adjust the initial categories and corresponding confidence levels in the initial results, and obtain the final confidence levels corresponding to each category.
  • the scene category with the highest corrected confidence is used as the scene label of the image, which can improve the accuracy of scene detection.
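The correction described above can be sketched as follows. This is an illustrative reading, not the patent's implementation: the hour ranges and weights reuse the example values from the text, the one-percentage-point-per-unit rule follows the description, and the function and table names are invented.

```python
# Sketch of shooting-time correction. The hour ranges and weights below
# reuse the example values from the text; all names are illustrative.
TIME_WEIGHTS = {
    (20, 21): {"night scene": 9, "blue sky": -5, "backlight": 5},
    (18, 19): {"night scene": -2, "blue sky": 6, "backlight": 8},
}

def correct_scene_result(initial, shot_hour):
    """initial: {scene: confidence in percent}. Returns (corrected, label)."""
    corrected = dict(initial)
    for (start, end), weights in TIME_WEIGHTS.items():
        if start <= shot_hour < end:
            for scene, weight in weights.items():
                if scene in corrected:
                    # each weight unit shifts confidence by one point
                    corrected[scene] += weight
    # the category with the highest corrected confidence becomes the label
    label = max(corrected, key=corrected.get)
    return corrected, label
```

With the initial result from the text (green grass 70%, blue sky 80%, backlight 75%) and a shot taken between 20:00 and 21:00, backlight rises to 80 and blue sky falls to 75, so "backlight" becomes the scene label.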
  • In one embodiment, the process of performing light normalization processing on an image in the provided image processing method includes operations 402 to 406. Among them:
  • Operation 402 Obtain a pixel gray value corresponding to each pixel in the image.
  • An image is made up of multiple pixels.
  • the image may be an RGB image composed of the three RGB (Red, Green, Blue) channels, or a monochrome image composed of one channel. If the image is an RGB image, each pixel in the image has three corresponding RGB channel values.
  • the electronic device can obtain the color value of each pixel in the image, that is, the RGB value, and then convert the RGB value of the pixel to a gray value. Specifically, the average-value method can be used to obtain the gray value of a pixel, or a weighted method can be used to obtain it. In one embodiment, the electronic device may obtain pixel gray values corresponding to each of the three RGB channels.
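The two conversion methods just mentioned can be sketched as follows. The luminosity coefficients in the weighted variant are the common ITU-R BT.601 values, shown as one plausible reading of the alternative method, not necessarily the one the patent intends.

```python
def gray_average(r, g, b):
    """Average method: the mean of the three RGB channel values."""
    return (r + g + b) // 3

def gray_weighted(r, g, b):
    """Weighted (luminosity) method, a common alternative that tracks
    perceived brightness more closely than the plain average."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)
```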
  • Operation 404 Obtain a conversion value corresponding to each pixel according to the equalization function and the pixel gray value.
  • the equalization function is a function that is single-valued and monotonically increasing, and it keeps the dynamic range of the gray values consistent before and after the transformation.
  • the equilibrium function may be a cumulative distribution function (CDF).
  • Operation 406 Process the pixels of the image according to the conversion value.
  • the electronic device processes each pixel in the image according to the obtained conversion value of the pixel.
  • the electronic device can obtain pixel conversion values of the three channels of RGB to process the pixels.
  • after this processing, the gray levels occupied by a large number of pixels in the image are widened, and the gray levels occupied by few pixels are compressed, making the image clearer. The brightness difference between the foreground area and the background area of a backlit image can thereby be eliminated, and the clarity of the foreground area increased.
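Operations 402 to 406 amount to classic histogram equalization. Below is a minimal sketch on a flat list of gray values; the CDF-based lookup-table formula is the textbook construction, offered as one way operations 404 and 406 could be realized, not the patent's exact implementation.

```python
def equalize(gray, levels=256):
    """Histogram equalization: build a histogram, accumulate it into a
    CDF (the equalization function), and map each pixel through the
    resulting lookup table."""
    n = len(gray)
    hist = [0] * levels
    for p in gray:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    span = max(n - cdf_min, 1)
    lut = [round((c - cdf_min) / span * (levels - 1)) for c in cdf]
    return [lut[p] for p in gray]
```

A narrow band of gray values such as [100, 100, 101, 101] is stretched to [0, 0, 255, 255]: the heavily populated levels are widened exactly as described above.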
  • In one embodiment, the provided image processing method further includes operations 502 to 506. Among them:
  • Operation 502 Obtain a backlight region corresponding to the backlight scene label.
  • An image detection model such as a neural network can output the scene label of the image and the position corresponding to the scene label after detecting the image.
  • the scene label of the image may be one or more, and the electronic device may obtain a backlight region corresponding to the backlight scene label in the image. For example, when the image includes a backlight tag and a blue sky tag, the electronic device may obtain a corresponding position of the backlight tag in the image as the backlight region.
  • Operation 504: Brightness enhancement processing is performed on the backlight region.
  • the electronic device may pre-store a brightness increase value corresponding to different brightness average values.
  • the electronic device can obtain the brightness value of each pixel in the backlight area, calculate the average brightness value of the backlight area according to the brightness values and the number of pixels, obtain the corresponding brightness increase value according to the average, and then subject each pixel of the backlight area to brightness enhancement processing.
  • Operation 506: Target detection is performed on the processed image.
  • the electronic device obtains the backlight area corresponding to the backlight scene label and performs brightness enhancement processing on it, which can increase the brightness of the backlight area and make it clearer; performing target detection on the processed image can then improve the accuracy of target detection.
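A sketch of the brightness step in operations 502 to 506. The increment table below is invented for illustration (the text only says that lower averages map to higher increments), and the region is modeled as a flat list of brightness values.

```python
# Hypothetical lookup: (upper bound on average brightness, increment).
# The text only states that darker regions get larger increments.
BRIGHTNESS_STEPS = [(64, 60), (128, 40), (192, 20), (256, 0)]

def enhance_backlight_region(region):
    """Raise every pixel in the backlit region by an increment chosen
    from the region's average brightness, clamping at 255."""
    avg = sum(region) / len(region)
    increment = next(inc for bound, inc in BRIGHTNESS_STEPS if avg < bound)
    return [min(p + increment, 255) for p in region]
```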
  • In one embodiment, the provided image processing method further includes: performing target detection on the image to obtain multiple target labels of the image and their corresponding confidences; and using a preset number of target labels, selected in descending order of confidence, as the target labels of the image.
  • confidence is the degree to which the measured value of a measured parameter can be trusted.
  • the preset number can be set according to actual needs; for example, it can be one, two, or three, but is not limited thereto.
  • the electronic device can perform target detection on the image to identify and locate the target subject in it. When performing target detection, the image feature information of the image can be matched with the feature information corresponding to the stored target labels to obtain multiple target labels of the image and their corresponding confidences.
  • the electronic device can sort the target labels by confidence from high to low, and take a preset number of them as the target labels of the image.
  • the target tags stored in the electronic device may include: portrait, baby, cat, dog, food, text, blue sky, green grass, beach, fireworks and so on.
  • the preset number is 2
  • the multiple target labels output by the electronic device for the image are: "blue sky" with 90% confidence, "food" with 85% confidence, and "beach" with 80% confidence
  • then the two target labels selected in descending order of confidence are blue sky and food, which are used as the target labels of the image.
  • the provided image processing method further includes: adjusting a confidence level corresponding to multiple target labels of the image according to the backlight scene label; and using the target label with the highest confidence level as the target label of the image.
  • the electronic device may pre-store the weight corresponding to each target label for the case where the scene label of the image is a backlight scene label. For example, statistical analysis of a large number of image materials may find that, when the scene label is a backlight scene label, the weight of "beach" is 7, the weight of "grassland" is 4, the weight of "blue sky" is 6, and the weight of "food" is -8, with values ranging over [-10, 10]. Each unit of weight above 0 increases the confidence of the corresponding label by one percentage point; similarly, each unit below 0 decreases it by one percentage point.
  • for example, after adjustment, the confidences corresponding to the target labels of the image may be blue sky: 95.4%, food: 78.5%, and beach: 85.6%. The electronic device may then use the label with the highest confidence, blue sky, as the target label of the image, or use the two labels with the highest confidence, blue sky and beach, as the target labels of the image.
  • the electronic device can adjust the confidences corresponding to multiple target labels of the image according to the backlight scene label, and use the target label with the highest confidence, or a preset number of labels selected in descending order of confidence, as the target labels of the image, which can improve the accuracy of image target detection.
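The backlight-conditioned adjustment can be sketched the same way. The weight table reuses the example values from this description (the blue-sky weight is an assumption, since one label in the example is garbled), and one weight unit again shifts confidence by one percentage point.

```python
# Per-label weights applied when the scene label is "backlight".
# Values follow the example in the text; the blue-sky entry is assumed.
BACKLIGHT_WEIGHTS = {"beach": 7, "grassland": 4, "blue sky": 6, "food": -8}

def adjust_for_backlight(confidences):
    """Shift each label's confidence by its backlight weight and return
    (adjusted confidences, label with the highest adjusted confidence)."""
    adjusted = {label: conf + BACKLIGHT_WEIGHTS.get(label, 0)
                for label, conf in confidences.items()}
    return adjusted, max(adjusted, key=adjusted.get)
```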
  • In one embodiment, the provided image processing method further includes operations 602 to 606. Among them:
  • Operation 602 Obtain a target label and a corresponding label area obtained after the image is subjected to target detection.
  • after the electronic device performs target detection on the image, it can output the target label of the image and the label position corresponding to the target label.
  • the target label of the image may be one or more, and the corresponding label area may also be one or more.
  • Operation 604 Acquire a corresponding tag processing parameter according to the target tag.
  • the electronic device can pre-store tag processing parameters corresponding to different target tags.
  • the label processing parameters may include color processing parameters, saturation processing parameters, brightness processing parameters, contrast processing parameters, and the like, but are not limited thereto.
  • for example, for one target label the corresponding label processing parameter may be a parameter for increasing saturation, while for another the corresponding label processing parameter may be a parameter for reducing contrast and increasing brightness.
  • Operation 606 Process the label area according to the label processing parameters.
  • the electronic device processes each pixel of the label area according to the label processing parameter.
  • the electronic device can process different label regions according to the label processing parameters corresponding to different target labels. Therefore, the image can be locally processed, and the effect of image processing can be improved.
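A sketch of operations 602 to 606 on a grayscale label region. The parameter table and the brightness/contrast formulas are illustrative stand-ins: the text names the kinds of parameters but not their values, and color and saturation handling is omitted here for brevity.

```python
# Hypothetical per-label processing parameters; the text names the
# parameter kinds (brightness, contrast, ...) but not their values.
LABEL_PARAMS = {
    "portrait": {"brightness": 1.1, "contrast": 0.9},
    "text": {"brightness": 1.0, "contrast": 1.3},
}

def process_label_region(pixels, target_label):
    """Apply the label's brightness/contrast parameters to every pixel
    of its region (pixels: grayscale values 0-255)."""
    params = LABEL_PARAMS.get(target_label, {})
    brightness = params.get("brightness", 1.0)
    contrast = params.get("contrast", 1.0)
    out = []
    for v in pixels:
        v = (v - 128) * contrast + 128  # contrast pivots around mid-gray
        v *= brightness                 # overall brightness scale
        out.append(max(0, min(255, round(v))))
    return out
```

Because each label area is processed with its own parameters, different regions of the same image can receive different local adjustments, matching the local-processing idea above.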
  • an image processing method is provided, and specific operations for implementing the method are as follows:
  • the electronic device performs scene detection on the image to obtain a scene label of the image.
  • the electronic device performs scene recognition on the image.
  • the scene recognition model can be trained according to deep learning algorithms such as VGG, CNN, SSD, and decision tree, and the scene can be recognized based on the scene recognition model.
  • the scene of the image can be landscape, beach, blue sky, green grass, snow, night, dark, backlight, sunset, fireworks, spotlight, indoor, macro, etc.
  • the electronic device may perform scene recognition on the image of the electronic device according to the scene recognition model, and determine a scene label of the image according to the scene recognition result.
  • the electronic device performs scene detection on the image to obtain the initial result of scene recognition, obtains the shooting time of the image, corrects the initial result of scene detection according to the shooting time, and obtains a scene label of the image according to the correction result.
  • the electronic device can correct the initial results of image scene recognition according to the scene categories corresponding to different shooting times and the weights corresponding to the scene categories, adjust the initial categories and corresponding confidence levels in the initial results, and obtain the final confidence levels corresponding to each category.
  • the most confident scene category is used as the scene label of the image, which can improve the accuracy of scene detection.
  • the electronic device performs a light normalization process on the image, and the light normalization process is a process of eliminating a change in brightness of the image.
  • Backlighting refers to the situation where the brightness of the foreground area in the image is lower than the brightness of the background area when the subject is between the light source and the camera of the electronic device, resulting in insufficient exposure of the subject.
  • the light normalization process is a process to eliminate changes in image brightness. Specifically, the light normalization process is performed on an image containing a backlight scene label to enhance the brightness of the foreground area and eliminate the brightness variation between the foreground area and the background area.
  • the electronic device obtains a pixel gray value corresponding to each pixel in the image, obtains a conversion value corresponding to each pixel according to the equalization function and the pixel gray value, and processes the pixels of the image according to the conversion value.
  • An image is made up of multiple pixels.
  • the electronic device obtains a conversion value corresponding to each pixel according to the equalization function and processes each pixel in the image.
  • the electronic device can obtain pixel conversion values of the three channels of RGB to process the pixels.
  • the electronic device obtains a backlight region corresponding to the backlight scene label, and performs brightness enhancement processing on the backlight region.
  • An image detection model such as a neural network can output the scene label of the image and the position corresponding to the scene label after detecting the image.
  • the electronic device can pre-store brightness increments corresponding to different brightness averages. The smaller the average brightness value, the higher the corresponding brightness increase value, and the larger the average brightness value, the lower the corresponding brightness increase value.
  • the electronic device can obtain the brightness value of each pixel in the backlight area, calculate the average brightness value of the backlight area according to the brightness values and the number of pixels, obtain the corresponding brightness increase value according to the average, and then subject each pixel of the backlight area to brightness enhancement processing.
  • the electronic device performs target detection on the processed image.
  • the image feature information of the image can be matched with the feature information corresponding to the stored target tag, and the successfully matched target tag can be obtained as the target tag of the image.
  • the target tags pre-stored in the electronic device may include: portrait, baby, cat, dog, food, text, blue sky, green grass, beach, fireworks, and so on.
  • the electronic device performs target detection on the image to obtain multiple target labels of the image and their corresponding confidences; a preset number of target labels selected in descending order of confidence are used as the target labels of the image.
  • the image feature information of the image can be matched with the feature information corresponding to the stored target tag to obtain multiple target tags and corresponding confidence of the image.
  • the electronic device can sort the target labels by confidence from high to low, and take a preset number of them as the target labels of the image.
  • the electronic device adjusts the confidence level corresponding to multiple target labels of the image according to the backlight scene label; and uses the target label with the highest confidence level as the target label of the image.
  • the electronic device can adjust the confidences corresponding to multiple target labels of the image according to the backlight scene label, and use the target label with the highest confidence, or a preset number of labels selected in descending order of confidence, as the target labels of the image, which can improve the accuracy of image target detection.
  • the electronic device obtains the target label and the corresponding label area obtained after the image is subjected to target detection, obtains corresponding label processing parameters according to the target label, and processes the label area according to the label processing parameters.
  • the electronic device can pre-store tag processing parameters corresponding to different target tags.
  • the label processing parameters may include color processing parameters, saturation processing parameters, brightness processing parameters, contrast processing parameters, and the like, but are not limited thereto.
  • the electronic device processes each pixel of the label area according to the label processing parameter, and can locally process the image, thereby improving the effect of image processing.
  • FIG. 7 is a structural block diagram of an image processing apparatus according to an embodiment.
  • In one embodiment, an image processing apparatus includes a scene detection module 720, an image processing module 740, and a target detection module 760. Among them:
  • a scene detection module 720 is configured to perform scene detection on an image to obtain a scene label of the image.
  • An image processing module 740 is configured to perform a light normalization process on an image when a scene label includes a backlit scene label.
  • the light normalization process is a process to eliminate a change in image brightness.
  • the target detection module 760 is configured to perform target detection on the processed image.
  • the scene detection module 720 may be further configured to perform scene detection on an image to obtain an initial result of scene recognition, acquire the shooting time of the image, correct the initial result of scene detection according to the shooting time, and obtain a scene label of the image according to the correction result.
  • the image processing module 740 may be further configured to obtain a pixel gray value corresponding to each pixel in the image, obtain a conversion value corresponding to each pixel according to the equalization function and the pixel gray value, and perform a conversion on the image according to the conversion value. Pixels for processing.
  • the image processing module 740 may be further configured to obtain a backlight region corresponding to the backlight scene label, and perform brightness enhancement processing on the backlight region.
  • the target detection module 760 may be further configured to perform target detection on an image to obtain multiple target labels of the image and their corresponding confidences, and to use a preset number of target labels, selected in descending order of confidence, as the target labels of the image.
  • the target detection module 760 may be further configured to adjust the confidence corresponding to multiple target labels of the image according to the backlight scene label, and use the target label with the highest confidence as the target label of the image.
  • the image processing module 740 may be further configured to obtain the target label and the corresponding label area obtained after target detection is performed on the image, obtain corresponding label processing parameters according to the target label, and process the label area according to the label processing parameters.
  • the above image processing apparatus can perform scene detection on an image to obtain a scene label of the image. When the scene label includes a backlight scene label, the image is subjected to light normalization processing to eliminate image brightness changes, and target detection is performed on the processed image. Since the image can be processed when it is detected to contain a backlit scene, and target detection can then be performed on the processed image, the accuracy of image target detection can be improved.
  • the division of the modules in the above image processing apparatus is for illustration only. In other embodiments, the image processing apparatus may be divided into different modules as needed to implement all or part of its functions.
  • Each module in the image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof.
  • the above modules may be embedded, in hardware form, in or independent of the processor in the computer device, or stored, in software form, in the memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
  • each module in the image processing apparatus provided in the embodiments of the present application may be in the form of a computer program.
  • the computer program can be run on a terminal or a server.
  • the program module constituted by the computer program can be stored in the memory of the terminal or server.
  • when the computer program is executed by a processor, the operations of the methods described in the embodiments of the present application are implemented.
  • An embodiment of the present application further provides a computer-readable storage medium.
  • One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the operations of the image processing method.
  • a computer program product containing instructions that, when run on a computer, causes the computer to perform an image processing method.
  • An embodiment of the present application further provides an electronic device.
  • the above electronic device includes an image processing circuit.
  • the image processing circuit may be implemented by hardware and / or software components, and may include various processing units that define an ISP (Image Signal Processing) pipeline.
  • FIG. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 8, for ease of description, only aspects of the image processing technology related to the embodiments of the present application are shown.
  • the image processing circuit includes an ISP processor 840 and a control logic 850.
  • the image data captured by the imaging device 810 is first processed by the ISP processor 840, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 810.
  • the imaging device 810 may include a camera having one or more lenses 812 and an image sensor 814.
  • the image sensor 814 may include a color filter array (such as a Bayer filter).
  • the image sensor 814 may obtain the light intensity and wavelength information captured by each imaging pixel of the image sensor 814 and provide a set of raw image data that may be processed by the ISP processor 840.
  • the sensor 820 (such as a gyroscope) may provide image processing parameters for the captured image (such as image stabilization parameters) to the ISP processor 840 based on the interface type of the sensor 820.
  • the sensor 820 interface may use a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the foregoing interfaces.
  • the image sensor 814 may also send the original image data to the sensor 820, and the sensor 820 may provide the original image data to the ISP processor 840 based on the interface type of the sensor 820, or the sensor 820 stores the original image data in the image memory 830.
  • the ISP processor 840 processes the original image data pixel by pixel in a variety of formats.
  • each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 840 may perform one or more image processing operations on the original image data and collect statistical information about the image data.
  • the image processing operations may be performed with the same or different bit depth accuracy.
  • the ISP processor 840 may also receive image data from the image memory 830.
  • the sensor 820 interface sends the original image data to the image memory 830, and the original image data in the image memory 830 is then provided to the ISP processor 840 for processing.
  • the image memory 830 may be a part of a memory device, a storage device, or a separate dedicated memory in an electronic device, and may include a DMA (Direct Memory Access) feature.
  • upon receiving raw image data from the image sensor 814 interface, the sensor 820 interface, or the image memory 830, the ISP processor 840 may perform one or more image processing operations, such as temporal filtering.
  • the processed image data may be sent to the image memory 830 for further processing before being displayed.
  • the ISP processor 840 receives processed data from the image memory 830 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces.
  • the image data processed by the ISP processor 840 may be output to a display 870 for viewing by a user and / or further processed by a graphics engine or a GPU (Graphics Processing Unit).
  • the output of the ISP processor 840 may also be sent to the image memory 830, and the display 870 may read image data from the image memory 830.
  • the image memory 830 may be configured to implement one or more frame buffers.
  • the output of the ISP processor 840 may be sent to an encoder / decoder 860 to encode / decode image data.
  • the encoded image data can be saved and decompressed before being displayed on the display 870.
  • the encoder / decoder 860 may be implemented by a CPU or a GPU or a coprocessor.
  • the statistical data determined by the ISP processor 840 may be sent to the control logic 850 unit.
  • the statistical data may include image sensor 814 statistical information such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 812 shading correction.
  • the control logic 850 may include a processor and/or a microcontroller that executes one or more routines (such as firmware), and the one or more routines may determine control parameters of the imaging device 810 and control parameters of the ISP processor 840 according to the received statistical data.
  • control parameters of the imaging device 810 may include sensor 820 control parameters (e.g., gain, integration time for exposure control, image stabilization parameters), camera flash control parameters, lens 812 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters.
  • ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and lens 812 shading correction parameters.
  • the electronic device can implement the image processing method described in the embodiment of the present application according to the image processing technology.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which is used as external cache memory.
  • RAM is available in various forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Abstract

An image processing method includes: performing scene detection on an image to obtain a scene label of the image; when the scene label includes a backlit scene label, performing illumination normalization on the image to eliminate image brightness variation; and performing target detection on the processed image.

Description

图像处理方法、电子设备、计算机可读存储介质
相关申请的交叉引用
本申请要求于2018年06月29日提交中国专利局、申请号为2018106950557、发明名称为“图像处理方法和装置、电子设备、计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,特别是涉及一种图像处理方法、电子设备、计算机可读存储介质。
背景技术
随着计算机技术的快速发展,使用移动设备拍摄照片的现象越来越频繁。在拍照的过程中或在拍照之后,可以通过神经网络等图像识别技术对图像进行场景检测和目标检测,从而根据检测结果对图像进行优化处理。然而,传统技术中存在目标检测准确性低的问题。
发明内容
根据本申请的各种实施例提供一种图像处理方法、电子设备、计算机可读存储介质。
一种图像处理方法,包括:
对图像进行场景检测,得到所述图像的场景标签;
当所述场景标签中包含逆光场景标签时,对所述图像进行光照归一化处理,所述光照归一化处理是消除图像亮度变化的处理;及
对处理后的图像进行目标检测。
一种电子设备,包括存储器及处理器,所述存储器中储存有计算机程序,所述计算机程序被所述处理器执行时,使得所述处理器执行如下操作:
对图像进行场景检测,得到所述图像的场景标签;
当所述场景标签中包含逆光场景标签时,对所述图像进行光照归一化处理,所述光照归一化处理是消除图像亮度变化的处理;及
对处理后的图像进行目标检测。
一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现如下操作:
对图像进行场景检测,得到所述图像的场景标签;
当所述场景标签中包含逆光场景标签时,对所述图像进行光照归一化处理,所述光照归一化处理是消除图像亮度变化的处理;及
对处理后的图像进行目标检测。
本申请实施例提供的图像处理方法、电子设备和计算机可读存储介质,可以在检测到图像包含逆光场景时对图像进行处理,再对处理后的图像进行目标检测,可以提高图像目标检测的准确性。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本发明的其它特征、目的和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根 据这些附图获得其他的附图。
图1为一个或多个实施例中电子设备的内部结构示意图。
图2为一个或多个实施例中图像处理方法的流程图。
图3为一个或多个实施例中对图像进行场景检测的流程图。
图4为一个或多个实施例中对图像进行光照归一化处理的流程图。
图5为一个或多个实施例中对图像中逆光区域进行亮度增强处理的流程图。
图6为一个或多个实施例中图像处理方法的流程图。
图7为一个或多个实施例中图像处理装置的结构框图。
图8为一个或多个实施例中图像处理电路的示意图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
图1为一个实施例中电子设备的内部结构示意图。如图1所示,该电子设备包括通过系统总线连接的处理器、存储器和网络接口。其中,该处理器用于提供计算和控制能力,支撑整个电子设备的运行。存储器用于存储数据、程序等,存储器上存储至少一个计算机程序,该计算机程序可被处理器执行,以实现本申请实施例中提供的适用于电子设备的图像处理方法。存储器可包括非易失性存储介质及内存储器。非易失性存储介质存储有操作系统和计算机程序。该计算机程序可被处理器所执行,以用于实现以下各个实施例所提供的一种图像处理方法。内存储器为非易失性存储介质中的操作系统计算机程序提供高速缓存的运行环境。网络接口可以是以太网卡或无线网卡等,用于与外部的电子设备进行通信。该电子设备可以是手机、平板电脑或者个人数字助理或穿戴式设备等。
图2为一个实施例中图像处理方法的流程图。本实施例中的图像处理方法,以运行于图1中的电子设备上为例进行描述。如图2所示,图像处理方法包括操作202至操作206。
操作202,对图像进行场景检测,得到图像的场景标签。
图像是指电子设备通过摄像头采集的图像。在一个实施例中,图像也可以是存储在电子设备本地的图像,还可以是电子设备从网络下载的图像等。具体地,对图像进行场景识别,可以根据VGG(Visual Geometry Group)、CNN(Convolutional Neural Network)、SSD(single shot multibox detector)、决策树(Decision Tree)等深度学习算法训练场景识别模型,根据场景识别模型对图像进行场景识别。场景识别模型一般包括输入层、隐层和输出层;输入层用于接收图像的输入;隐层用于对接收到的图像进行处理;输出层用于输出对图像处理的最终结果即输出图像的场景识别结果。
图像的场景可以是风景、海滩、蓝天、绿草、雪景、夜景、黑暗、逆光、日落、烟火、聚光灯、室内、微距等。图像的场景标签是指图像的场景分类标记。具体地,电子设备可以将图像的场景识别结果确定图像的场景标签。例如,当图像的场景识别结果为蓝天时,则图像的场景标签为蓝天。电子设备可以根据场景识别模型对电子设备的图像进行场景识别,并根据场景识别结果确定图像的场景标签。
操作204,当所述场景标签中包含逆光场景标签时,对图像进行光照归一化处理,光照归一化处理是消除图像亮度变化的处理。
逆光是指当被拍摄的主体位于光源与电子设备的摄像头之间时,造成的被拍摄的主体曝光不充分而导致图像中前景区域(即被拍摄的主体)亮度低于背景区域亮度的情况。图像中的场景标签包含逆光场景标签则说明图像中出现前景区域的亮度低于背景区域的情况。光照归一化处理是消除图像亮度变化的处理,具体地,对包含逆光场景标签的图像进 行光照归一化处理,可以使图像中前景区域的亮度增强,消除前景区域和背景区域之间的亮度变化。电子设备可以采用直方图均衡法、基于仿射变化光照模型等方法进行光照归一化处理。
操作206,对处理后的图像进行目标检测。
目标检测是指根据图像信息反映的特征辨识图像中物体的类别并标定图像中物体的位置的方法。电子设备在对图像进行目标检测时,可将图像的图像特征信息与已存储的目标标签对应的特征信息进行匹配,获取匹配成功的目标标签作为图像的目标标签。电子设备中预存的目标标签可包括:人像、婴儿、猫、狗、美食、文本、蓝天、绿草、沙滩、烟火等。电子设备在对待图像进行目标检测时,当图像中仅存在一个目标标签时,则将上述目标标签作为图像的目标标签;当上述待检测图像中存在多个目标标签,则电子设备可从多个目标标签中选取一个或多个作为目标标签。其中,电子设备可从多个目标标签中选取对应的目标区域面积较大的目标标签作为图像的目标标签;电子设备也可从多个目标标签中选取对应的目标区域清晰度较高的目标标签作为图像的目标标签等。
本申请提供的实施例中,通过对图像进行场景检测,得到图像的场景标签,当场景标签中包含逆光场景标签时,对图像进行光照归一化处理,可以消除因逆光造成的图像亮度变化,再对处理后的图像进行目标检测,可以提高图像目标检测的准确性。
如图3所示,在一个实施例中,提供的图像处理方法中对图像进行场景检测,得到图像的场景标签的过程还包括操作302至操作306。其中:
操作302,对图像进行场景检测,得到场景识别的初始结果。
电子设备可以根据VGG、SSD、决策树等深度学习算法训练场景识别模型,根据场景识别模型对图像进行场景检测,得到场景识别的初始结果。场景识别的初始结果可以包括场景检测的初始类别及初始类别对应的置信度。例如,图像的场景识别初始结果可以为绿草:置信度为70%、蓝天:置信度为80%、逆光:置信度为75%。
操作304,获取图像的拍摄时间。
拍摄时间是指电子设备通过摄像头采集图像的时间。一般情况下,电子设备在采集图像时会对采集时间进行记录。电子设备在获取具有相同场景标签的图像时,可以直接读取该具有相同场景标签的图像的拍摄时间。
操作306,根据拍摄时间对场景检测的初始结果进行校正,根据校正结果得到图像的场景标签。
根据拍摄时间可以得到图像中出现某些场景的概率，再结合场景检测的初始结果进行校正。电子设备可以预存不同的拍摄时间对应的场景类别及场景类别对应的权值。具体地，可以是根据对大量的图像素材进行统计学分析后得出的结果，根据结果相应地为不同的拍摄时间区间匹配对应的场景类别及场景类别对应的权值。例如：拍摄时间为20时至21时之间，“夜景”的权值为9、“蓝天”权值为-5、“逆光”的权值为5，拍摄时间为18时至19时之间，“夜景”的权值为-2、“蓝天”的权值为6、“逆光”的权值为8，权值的取值范围为[-10,10]。权值越大说明在该图像中出现该场景的概率就越大，权值越小说明在该图像中出现该场景的概率就越小。权值从0开始每增加1，则对应场景的置信度增加1%，同样的，权值从0开始每减少1，则对应的场景的置信度减少1%。
电子设备可以根据不同拍摄时间对应的场景类别及场景类别对应的权值对图像场景识别的初始结果进行校正,调整初始结果中初始类别及对应的置信度并获取各个类别对应的最终置信度,将置信度最高的场景类别作为图像的场景标签,可以提高场景检测的准确度。
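The time-based correction described above can be sketched in code. Below is a minimal Python sketch; the weight table, hour ranges, and label names are illustrative assumptions taken from the worked example, not values mandated by the method, and the "+1 weight point = +1% confidence" rule mirrors the rule stated above.

```python
# Hypothetical hour-range -> (scene label -> weight) table; values follow the
# example above and are assumptions, not part of the claimed method.
TIME_WEIGHTS = {
    (20, 21): {"night": 9, "blue_sky": -5, "backlight": 5},
    (18, 19): {"night": -2, "blue_sky": 6, "backlight": 8},
}

def correct_scene_result(initial, shot_hour):
    """Adjust the initial scene confidences (label -> percent) by the weights
    for the shooting hour; each weight point shifts confidence by one percent.
    Returns (best_label, corrected_confidences)."""
    weights = {}
    for (start, end), table in TIME_WEIGHTS.items():
        if start <= shot_hour <= end:
            weights = table
            break
    corrected = {label: conf + weights.get(label, 0)
                 for label, conf in initial.items()}
    # the label with the highest corrected confidence becomes the scene label
    return max(corrected, key=corrected.get), corrected

# Initial result: night 70%, blue sky 80%, backlight 75%; shot at 20:00.
label, corrected = correct_scene_result(
    {"night": 70.0, "blue_sky": 80.0, "backlight": 75.0}, shot_hour=20)
# backlight wins after correction
```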
如图4所示,在一个实施例中,提供的图像处理方法中对图像进行光照归一化处理的过程包括操作402至操作406。其中:
操作402,获取图像中各像素点对应的像素灰度值。
图像是由多个像素点组成的。图像可以是由RGB(Red、Green、Blue,红、绿、蓝)三通道构成的RGB图像,也可以是由一个通道构成的单色图像。若图像为RGB图像时,则图像中的每一个像素点都有对应的RGB三个通道值。电子设备可以获取图像中各个像素点的颜色值即RGB值,再将像素点的RGB值转化为灰度值,具体地,可以采用平均值法获取像素点的灰度值、也可以用整数方法获取像素点的像素值。在一个实施例中,电子设备可以分别获取RGB三通道对应的像素点灰度值。
操作404,根据均衡函数与像素灰度值得到各像素点对应的转化值。
均衡函数是满足单值单增并且变化前后灰度值动态范围一致的函数。具体地,均衡函数可以是累计分布函数(cumulative distribution function,CDF)。电子设备可以将根据均衡函数与各像素点对应的像素灰度值直接得到各像素点对应的转化值。
操作406,根据转化值对图像的像素点进行处理。
电子设备根据获取到的像素点的转化值对图像中各个像素点进行处理。当图像为RGB图像时,电子设备可以分别获取RGB三个通道的像素点转化值对像素点进行处理。
通过获取图像中各像素点的像素灰度值,并根据均衡函数得到各像素点对应的转化值,并根据转化值对图像的像素点进行处理,可以对图像中像素点个数多的灰度进行展宽,而对图像中像素点个数少的灰度进行压缩,从而使图像更加清晰,可以消除逆光图像中前景区域和背景区域的亮度值差值,增加前景区域的清晰度。
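The equalization step above, using the cumulative distribution function as the equalization function, can be sketched as follows. This is a minimal single-channel sketch in pure Python (grayscale image as a list of rows of 0–255 integers); for an RGB image the same transform would be applied per channel, as noted above.

```python
def equalize(gray):
    """Histogram-equalize a grayscale image, using the cumulative
    distribution function (CDF) as the equalization function."""
    pixels = [p for row in gray for p in row]
    n = len(pixels)
    hist = [0] * 256                       # histogram of gray levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [0] * 256, 0              # cumulative distribution
    for level in range(256):
        total += hist[level]
        cdf[level] = total
    cdf_min = next(c for c in cdf if c > 0)

    def transform(p):                      # conversion value for one pixel
        return round((cdf[p] - cdf_min) / max(n - cdf_min, 1) * 255)

    return [[transform(p) for p in row] for row in gray]

# Gray levels occupied by many pixels are stretched apart, sparse ones
# compressed, which narrows the foreground/background brightness gap.
equalized = equalize([[50, 100], [150, 200]])
```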
如图5所示,在一个实施例中,提供的图像处理方法包括操作502至操作506。其中:
操作502,获取逆光场景标签所对应的逆光区域。
神经网络等图像检测模型对图像进行检测后可以输出图像的场景标签及场景标签对应的位置。图像的场景标签可以是1个或多个,电子设备可以获取图像中逆光场景标签对应的逆光区域。例如,当图像中包含逆光标签、蓝天标签时,则电子设备可以获取逆光标签在图像中对应的位置作为逆光区域。
操作504,对逆光区域进行亮度增强处理。
具体地,电子设备可以预存不同亮度均值对应的亮度增值。亮度均值越小,对应的亮度增值越高,亮度均值越大,对应的亮度增值越低。电子设备可以获取逆光区域中各个像素点的亮度值,根据各像素点的亮度值和像素点的数量计算图像逆光区域的亮度均值,并根据亮度均值获取对应的亮度增值,根据亮度增值对逆光区域的各个像素点进行亮度增强处理。
操作506,对处理后的图像进行目标检测。
当图像的场景标签包含逆光场景标签时,电子设备获取逆光场景标签对应的逆光区域,对逆光区域进行亮度增强处理,可以提高图像中逆光区域的亮度值,使逆光区域更加清晰,再对处理后的图像进行目标检测,可以提高目标检测的准确性。
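The mean-brightness-based enhancement above can be sketched as follows; the brightness breakpoints and increment values are illustrative assumptions (the method only requires that a smaller mean brightness maps to a larger increment).

```python
def brightness_boost(mean):
    """Map a region's mean brightness (0-255) to a brightness increment:
    the darker the region, the larger the boost (breakpoints are assumptions)."""
    if mean < 64:
        return 60
    if mean < 128:
        return 30
    return 10

def enhance_region(image, box):
    """Brighten every pixel of a grayscale image inside the backlit
    region box = (top, left, bottom, right)."""
    top, left, bottom, right = box
    region = [image[y][x] for y in range(top, bottom)
              for x in range(left, right)]
    boost = brightness_boost(sum(region) / len(region))
    for y in range(top, bottom):
        for x in range(left, right):
            image[y][x] = min(255, image[y][x] + boost)  # clamp to 255
    return image

# A dark backlit region (mean 10) receives the largest boost.
brightened = enhance_region([[10, 10], [10, 10]], (0, 0, 2, 2))
```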
在一个实施例中,提供的图像处理方法还包括:对图像进行目标检测,得到图像的多个目标标签及对应的置信度;将按照置信度从高到低选取的预设数量的目标标签作为图像的目标标签。
置信度是被测量参数的测量值的可信程度。预设数量可以根据实际需求进行设定,例如可以是1个、2个、3个等不限于此。电子设备可以对图像进行目标检测,识别并定位图像中目标主体。电子设备在对图像进行目标检测时,可将图像的图像特征信息与已存储的目标标签对应的特征信息进行匹配,得到图像的多个目标标签及对应的置信度,电子设备可以将目标标签按照置信度从高到低进行排序,获取预设数量的目标标签作为图像的目标标签。电子设备中已存储的目标标签可包括:人像、婴儿、猫、狗、美食、文本、蓝天、绿草、沙滩、烟火等。例如,当预设数量为2个时,若电子设备输出图像对应的多个目标标签为:“蓝天”置信度90%,“美食”置信度85%,“海滩”置信度80%,则按照置信度从高到低选取的2个目标标签为蓝天和美食,则将蓝天和美食作为该图像的目标标 签。
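The selection of a preset number of target labels in descending order of confidence is a simple ranking; a sketch, using the worked example above:

```python
def top_labels(confidences, n):
    """Return the n target labels with the highest confidence, best first."""
    ranked = sorted(confidences.items(), key=lambda kv: kv[1], reverse=True)
    return [label for label, _ in ranked[:n]]

# With a preset number of 2: "blue sky" 90%, "food" 85%, "beach" 80%
chosen = top_labels({"blue_sky": 90, "food": 85, "beach": 80}, 2)
```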
在一个实施例中,提供的图像处理方法还包括:根据逆光场景标签调整所述图像的多个目标标签对应的置信度;将置信度最高的目标标签作为所述图像的目标标签。
电子设备可以预存当图像的场景标签为逆光场景标签时,各目标标签对应的权值。例如,根据对大量的图像素材进行统计学分析后得出,当图像的场景标签为逆光场景标签时,则“海滩”的权值为7,“草地”的权值为4,“蓝天”的权值为6,“美食”的权值为-8,权值的取值范围为[-10,10]。权值从0开始每增加1,则对应场景的置信度增加1%,同样的,权值从0开始每减少1,则对应的场景的置信度减少1%。则在上述例子中,对图像中的目标标签进行调整后可以得到图像的目标标签对应的置信度分别为蓝天:95.4%、美食:78.5%、海滩:85.6%,则电子设备可以将置信度最高的蓝天作为图像的目标标签,也可以将置信度最高的2个目标标签即蓝天和海滩作为图像的目标标签。
电子设备可以根据逆光场景标签调整图像的多个目标标签对应的置信度,将置信度较高的目标标签作为图像的目标标签,也可以将按照置信度从高到低选取的预设数量的目标标签作为图像的目标标签,可以提高图像目标检测的准确性。
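The backlight-driven confidence adjustment can be sketched as follows; the weight table entries are the example values given above, and the "+1 weight point = +1% confidence" mapping follows the stated rule.

```python
# Example target-label weights for an image whose scene label is backlit;
# the values follow the example above and are assumptions.
BACKLIGHT_WEIGHTS = {"beach": 7, "grass": 4, "blue_sky": 6, "food": -8}

def adjust_for_backlight(confidences):
    """Shift each target label's confidence by its backlight weight
    (one weight point = one percentage point) and pick the best label.
    Returns (best_label, adjusted_confidences)."""
    adjusted = {label: conf + BACKLIGHT_WEIGHTS.get(label, 0)
                for label, conf in confidences.items()}
    return max(adjusted, key=adjusted.get), adjusted

best, adjusted = adjust_for_backlight({"blue_sky": 90, "food": 85, "beach": 80})
# "food" is demoted in a backlit scene, "blue_sky" stays on top
```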
如图6所示，在一个实施例中，提供的图像处理方法还包括操作602至操作606。其中：
操作602,获取图像进行目标检测后得到的目标标签及对应的标签区域。
电子设备对图像进行目标检测后可以输出图像的目标标签及目标标签对应的标签位置。图像的目标标签可以是1个或多个,则对应的标签区域也可以是1个或多个。
操作604,根据目标标签获取对应的标签处理参数。
电子设备可以预存不同目标标签对应的标签处理参数。标签处理参数可以包括色彩处理参数、饱和度处理参数、亮度处理参数、对比度处理参数等不限于此。例如,当目标标签为“美食”时,对应的标签处理参数为提高饱和度的参数;当目标标签为“人像”时,对应的标签处理参数可以为减小对比度、增加亮度的参数等。目标标签对应的标签处理参数可以有多个。
操作606,根据标签处理参数对标签区域进行处理。
具体地,电子设备根据标签处理参数对标签区域的各个像素点进行处理。电子设备可以根据不同目标标签对应的标签处理参数对不同标签区域进行处理。从而,可以对图像进行局部处理,提高图像处理的效果。
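The per-label local processing can be sketched as follows; the parameter table and the gain-only processing are illustrative assumptions (real label processing parameters would also cover color, saturation, and contrast, as listed above).

```python
# Hypothetical label -> parameter table; names and values are assumptions.
LABEL_PARAMS = {
    "food": {"gain": 1.25},      # e.g. brighten/vivify food regions
    "portrait": {"gain": 1.05},  # e.g. gentle brightening for portraits
}

def process_label_regions(image, detections):
    """Apply each detected label's processing parameters to its label region
    of a grayscale image.
    detections: iterable of (label, (top, left, bottom, right))."""
    for label, (top, left, bottom, right) in detections:
        gain = LABEL_PARAMS.get(label, {}).get("gain", 1.0)
        for y in range(top, bottom):
            for x in range(left, right):
                image[y][x] = min(255, round(image[y][x] * gain))
    return image

# Only the first row lies inside the "food" label region and is processed.
result = process_label_regions([[100, 100], [100, 100]],
                               [("food", (0, 0, 1, 2))])
```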
在一个实施例中,提供了一种图像处理方法,实现该方法的具体操作如下所述:
首先,电子设备对图像进行场景检测,得到图像的场景标签。电子设备对图像进行场景识别,可以根据VGG、CNN、SSD、决策树等深度学习算法训练场景识别模型,根据场景识别模型对图像进行场景识别。图像的场景可以是风景、海滩、蓝天、绿草、雪景、夜景、黑暗、逆光、日落、烟火、聚光灯、室内、微距等。电子设备可以根据场景识别模型对电子设备的图像进行场景识别,并根据场景识别结果确定图像的场景标签。
可选地,电子设备对图像进行场景检测,得到场景识别的初始结果,获取图像的拍摄时间,根据拍摄时间对场景检测的初始结果进行校正,根据校正结果得到图像的场景标签。电子设备可以根据不同拍摄时间对应的场景类别及场景类别对应的权值对图像场景识别的初始结果进行校正,调整初始结果中初始类别及对应的置信度并获取各个类别对应的最终置信度,将置信度最高的场景类别作为图像的场景标签,可以提高场景检测的准确度。
接着,当所述场景标签中包含逆光场景标签时,电子设备对图像进行光照归一化处理,光照归一化处理是消除图像亮度变化的处理。逆光是指当被拍摄的主体位于光源与电子设备的摄像头之间时,造成的被拍摄的主体曝光不充分而导致图像中前景区域亮度低于背景区域亮度的情况。光照归一化处理是消除图像亮度变化的处理,具体地,对包含逆光场景标签的图像进行光照归一化处理,可以使图像中前景区域的亮度增强,消除前景区域 和背景区域之间的亮度变化。
可选地,电子设备获取图像中各像素点对应的像素灰度值,根据均衡函数与像素灰度值得到各像素点对应的转化值,根据转化值对图像的像素点进行处理。图像是由多个像素点组成的。电子设备根据均衡函数获取各个像素点对应的转化值对图像中各个像素点进行处理。当图像为RGB图像时,电子设备可以分别获取RGB三个通道的像素点转化值对像素点进行处理。
可选地,电子设备获取逆光场景标签所对应的逆光区域,对逆光区域进行亮度增强处理。神经网络等图像检测模型对图像进行检测后可以输出图像的场景标签及场景标签对应的位置。电子设备可以预存不同亮度均值对应的亮度增值。亮度均值越小,对应的亮度增值越高,亮度均值越大,对应的亮度增值越低。电子设备可以获取逆光区域中各个像素点的亮度值,根据各像素点的亮度值和像素点的数量计算图像逆光区域的亮度均值,并根据亮度均值获取对应的亮度增值,根据亮度增值对逆光区域的各个像素点进行亮度增强处理。
接着,电子设备对处理后的图像进行目标检测。电子设备在对图像进行目标检测时,可将图像的图像特征信息与已存储的目标标签对应的特征信息进行匹配,获取匹配成功的目标标签作为图像的目标标签。电子设备中预存的目标标签可包括:人像、婴儿、猫、狗、美食、文本、蓝天、绿草、沙滩、烟火等。
可选地,电子设备对图像进行目标检测,得到图像的多个目标标签及对应的置信度;将按照置信度从高到低选取的预设数量的目标标签作为图像的目标标签。电子设备在对图像进行目标检测时,可将图像的图像特征信息与已存储的目标标签对应的特征信息进行匹配,得到图像的多个目标标签及对应的置信度,电子设备可以将目标标签按照置信度从高到低进行排序,获取预设数量的目标标签作为图像的目标标签。
可选地,电子设备根据逆光场景标签调整所述图像的多个目标标签对应的置信度;将置信度最高的目标标签作为所述图像的目标标签。电子设备可以根据逆光场景标签调整图像的多个目标标签对应的置信度,将置信度较高的目标标签作为图像的目标标签,也可以将按照置信度从高到低选取的预设数量的目标标签作为图像的目标标签,可以提高图像目标检测的准确性。
可选地,电子设备获取图像进行目标检测后得到的目标标签及对应的标签区域,根据目标标签获取对应的标签处理参数,根据标签处理参数对标签区域进行处理。电子设备可以预存不同目标标签对应的标签处理参数。标签处理参数可以包括色彩处理参数、饱和度处理参数、亮度处理参数、对比度处理参数等不限于此。电子设备根据标签处理参数对标签区域的各个像素点进行处理,可以对图像进行局部处理,提高图像处理的效果。
应该理解的是,虽然图2-6的流程图中的各个操作按照箭头的指示依次显示,但是这些操作并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些操作的执行并没有严格的顺序限制,这些操作可以以其它的顺序执行。而且,图2-6中的至少一部分操作可以包括多个子操作或者多个阶段,这些子操作或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子操作或者阶段的执行顺序也不必然是依次进行,而是可以与其它操作或者其它操作的子操作或者阶段的至少一部分轮流或者交替地执行。
图7为一个实施例的图像处理装置的结构框图。如图7所示,一种图像处理装置包括:场景检测模块720、图像处理模块740和目标检测模块760。其中:
场景检测模块720,用于对图像进行场景检测,得到图像的场景标签。
图像处理模块740,用于当场景标签中包含逆光场景标签时,对图像进行光照归一化处理,光照归一化处理是消除图像亮度变化的处理。
目标检测模块760,用于对处理后的图像进行目标检测。
在一个实施例中,场景检测模块720还可以用于对图像进行场景检测,得到场景识别的初始结果,获取图像的拍摄时间,根据拍摄时间对场景检测的初始结果进行校正,根据校正结果得到图像的场景标签。
在一个实施例中,图像处理模块740还可以用于获取图像中各像素点对应的像素灰度值,根据均衡函数与像素灰度值得到各像素点对应的转化值,根据转化值对图像的像素点进行处理。
在一个实施例中,图像处理模块740还可以用于获取逆光场景标签所对应的逆光区域,对逆光区域进行亮度增强处理。
在一个实施例中,目标检测模块760还可以用于对图像进行目标检测,得到图像的多个目标标签及对应的置信度,将按照置信度从高到低选取的预设数量的目标标签作为图像的目标标签。
在一个实施例中,目标检测模块760还可以用于根据逆光场景标签调整图像的多个目标标签对应的置信度,将置信度最高的目标标签作为图像的目标标签。
在一个实施例中,图像处理模块740还可以用于获取图像进行目标检测后得到的目标标签及对应的标签区域,根据目标标签获取对应的标签处理参数,根据标签处理参数对标签区域进行处理。
上述图像处理装置,可以对图像进行场景检测,得到图像的场景标签,当场景标签中包含逆光场景标签时,对图像进行消除图像亮度变化的光照归一化处理,对处理后的图像进行目标检测。由于可以在检测到图像包含逆光场景时对图像进行处理,再对处理后的图像进行目标检测,可以提高图像目标检测的准确性。
上述图像处理装置中各个模块的划分仅用于举例说明,在其他实施例中,可将图像处理装置按照需要划分为不同的模块,以完成上述图像处理装置的全部或部分功能。
关于图像处理装置的具体限定可以参见上文中对于图像处理方法的限定,在此不再赘述。上述图像处理装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
本申请实施例中提供的图像处理装置中的各个模块的实现可为计算机程序的形式。该计算机程序可在终端或服务器上运行。该计算机程序构成的程序模块可存储在终端或服务器的存储器上。该计算机程序被处理器执行时,实现本申请实施例中所描述方法的操作。
本申请实施例还提供了一种计算机可读存储介质。一个或多个包含计算机可执行指令的非易失性计算机可读存储介质,当所述计算机可执行指令被一个或多个处理器执行时,使得所述处理器执行图像处理方法的操作。
一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行图像处理方法。
本申请实施例还提供一种电子设备。上述电子设备中包括图像处理电路,图像处理电路可以利用硬件和/或软件组件实现,可包括定义ISP(Image Signal Processing,图像信号处理)管线的各种处理单元。图8为一个实施例中图像处理电路的示意图。如图8所示,为便于说明,仅示出与本申请实施例相关的图像处理技术的各个方面。
如图8所示,图像处理电路包括ISP处理器840和控制逻辑器850。成像设备810捕捉的图像数据首先由ISP处理器840处理,ISP处理器840对图像数据进行分析以捕捉可用于确定和/或成像设备810的一个或多个控制参数的图像统计信息。成像设备810可包括具有一个或多个透镜812和图像传感器814的照相机。图像传感器814可包括色彩滤镜阵列(如Bayer滤镜),图像传感器814可获取用图像传感器814的每个成像像素捕捉的光强度和波长信息,并提供可由ISP处理器840处理的一组原始图像数据。传感器820(如陀螺仪)可基于传感器820接口类型把采集的图像处理的参数(如防抖参数)提供给ISP 处理器840。传感器820接口可以利用SMIA(Standard Mobile Imaging Architecture,标准移动成像架构)接口、其它串行或并行照相机接口或上述接口的组合。
此外,图像传感器814也可将原始图像数据发送给传感器820,传感器820可基于传感器820接口类型把原始图像数据提供给ISP处理器840,或者传感器820将原始图像数据存储到图像存储器830中。
ISP处理器840按多种格式逐个像素地处理原始图像数据。例如,每个图像像素可具有8、10、12或14比特的位深度,ISP处理器840可对原始图像数据进行一个或多个图像处理操作、收集关于图像数据的统计信息。其中,图像处理操作可按相同或不同的位深度精度进行。
ISP处理器840还可从图像存储器830接收图像数据。例如，传感器820接口将原始图像数据发送给图像存储器830，图像存储器830中的原始图像数据再提供给ISP处理器840以供处理。图像存储器830可为存储器装置的一部分、存储设备、或电子设备内的独立的专用存储器，并可包括DMA（Direct Memory Access，直接存储器存取）特征。
当接收到来自图像传感器814接口或来自传感器820接口或来自图像存储器830的原始图像数据时,ISP处理器840可进行一个或多个图像处理操作,如时域滤波。处理后的图像数据可发送给图像存储器830,以便在被显示之前进行另外的处理。ISP处理器840从图像存储器830接收处理数据,并对所述处理数据进行原始域中以及RGB和YCbCr颜色空间中的图像数据处理。ISP处理器840处理后的图像数据可输出给显示器870,以供用户观看和/或由图形引擎或GPU(Graphics Processing Unit,图形处理器)进一步处理。此外,ISP处理器840的输出还可发送给图像存储器830,且显示器870可从图像存储器830读取图像数据。在一个实施例中,图像存储器830可被配置为实现一个或多个帧缓冲器。此外,ISP处理器840的输出可发送给编码器/解码器860,以便编码/解码图像数据。编码的图像数据可被保存,并在显示于显示器870设备上之前解压缩。编码器/解码器860可由CPU或GPU或协处理器实现。
ISP处理器840确定的统计数据可发送给控制逻辑器850单元。例如,统计数据可包括自动曝光、自动白平衡、自动聚焦、闪烁检测、黑电平补偿、透镜812阴影校正等图像传感器814统计信息。控制逻辑器850可包括执行一个或多个例程(如固件)的处理器和/或微控制器,一个或多个例程可根据接收的统计数据,确定成像设备810的控制参数及ISP处理器840的控制参数。例如,成像设备810的控制参数可包括传感器820控制参数(例如增益、曝光控制的积分时间、防抖参数等)、照相机闪光控制参数、透镜812控制参数(例如聚焦或变焦用焦距)、或这些参数的组合。ISP控制参数可包括用于自动白平衡和颜色调整(例如,在RGB处理期间)的增益水平和色彩校正矩阵,以及透镜812阴影校正参数。
电子设备根据上述图像处理技术可以实现本申请实施例中所描述的图像处理方法。
本申请所使用的对存储器、存储、数据库或其它介质的任何引用可包括非易失性和/或易失性存储器。合适的非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM),它用作外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDR SDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种图像处理方法,包括:
    对图像进行场景检测,得到所述图像的场景标签;
    当所述场景标签中包含逆光场景标签时,对所述图像进行光照归一化处理,所述光照归一化处理是消除图像亮度变化的处理;及
    对处理后的图像进行目标检测。
  2. 根据权利要求1所述的方法,其特征在于,所述对图像进行场景检测,得到所述图像的场景标签,包括:
    对所述图像进行场景检测,得到场景识别的初始结果;
    获取所述图像的拍摄时间;及
    根据所述拍摄时间对所述场景检测的初始结果进行校正,根据校正结果得到所述图像的场景标签。
  3. 根据权利要求1所述的方法,其特征在于,所述对所述图像进行光照归一化处理,包括:
    获取所述图像中各像素点对应的像素灰度值;
    根据均衡函数与所述像素灰度值得到各像素点对应的转化值;及
    根据所述转化值对所述图像的像素点进行处理。
  4. 根据权利要求1所述的方法,其特征在于,还包括:
    获取所述逆光场景标签所对应的逆光区域;
    对所述逆光区域进行亮度增强处理;及
    对处理后的图像进行目标检测。
  5. 根据权利要求1所述的方法,其特征在于,所述对处理后的图像进行目标检测,包括:
    对所述图像进行目标检测,得到所述图像的多个目标标签及对应的置信度;及
    将按照置信度从高到低选取的预设数量的目标标签作为所述图像的目标标签。
  6. 根据权利要求5所述的方法,其特征在于,还包括:
    根据所述逆光场景标签调整所述图像的多个目标标签对应的置信度;及
    将置信度最高的目标标签作为所述图像的目标标签。
  7. 根据权利要求1所述的方法,其特征在于,还包括:
    获取所述图像进行目标检测后得到的目标标签及对应的标签区域;
    根据所述目标标签获取对应的标签处理参数;及
    根据所述标签处理参数对所述标签区域进行处理。
  8. 一种电子设备,包括存储器及处理器,所述存储器中储存有计算机程序,所述计算机程序被所述处理器执行时,使得所述处理器执行如下操作:
    对图像进行场景检测,得到所述图像的场景标签;
    当所述场景标签中包含逆光场景标签时,对所述图像进行光照归一化处理,所述光照归一化处理是消除图像亮度变化的处理;及
    对处理后的图像进行目标检测。
  9. 根据权利要求8所述的电子设备,其特征在于,所述处理器执行所述对图像进行场景检测,得到所述图像的场景标签时,还执行如下操作:
    对所述图像进行场景检测,得到场景识别的初始结果;
    获取所述图像的拍摄时间;及
    根据所述拍摄时间对所述场景检测的初始结果进行校正,根据校正结果得到所述图像的场景标签。
  10. 根据权利要求8所述的电子设备,其特征在于,所述处理器执行所述对所述图像进行光照归一化处理时,还执行如下操作:
    获取所述图像中各像素点对应的像素灰度值;
    根据均衡函数与所述像素灰度值得到各像素点对应的转化值;及
    根据所述转化值对所述图像的像素点进行处理。
  11. 根据权利要求8所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行如下操作:
    获取所述逆光场景标签所对应的逆光区域;
    对所述逆光区域进行亮度增强处理;及
    对处理后的图像进行目标检测。
  12. 根据权利要求8所述的电子设备,其特征在于,所述处理器执行所述对处理后的图像进行目标检测时,还执行如下操作:
    对所述图像进行目标检测,得到所述图像的多个目标标签及对应的置信度;及
    将按照置信度从高到低选取的预设数量的目标标签作为所述图像的目标标签。
  13. 根据权利要求12所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行如下操作:
    根据所述逆光场景标签调整所述图像的多个目标标签对应的置信度;及
    将置信度最高的目标标签作为所述图像的目标标签。
  14. 根据权利要求8所述的电子设备,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行如下操作:
    获取所述图像进行目标检测后得到的目标标签及对应的标签区域;
    根据所述目标标签获取对应的标签处理参数;及
    根据所述标签处理参数对所述标签区域进行处理。
  15. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如下操作:
    对图像进行场景检测,得到所述图像的场景标签;
    当所述场景标签中包含逆光场景标签时,对所述图像进行光照归一化处理,所述光照归一化处理是消除图像亮度变化的处理;及
    对处理后的图像进行目标检测。
  16. 根据权利要求15所述的计算机可读存储介质,其特征在于,所述处理器执行所述对图像进行场景检测,得到所述图像的场景标签时,还执行如下操作:
    对所述图像进行场景检测,得到场景识别的初始结果;
    获取所述图像的拍摄时间;及
    根据所述拍摄时间对所述场景检测的初始结果进行校正,根据校正结果得到所述图像的场景标签。
  17. 根据权利要求15所述的计算机可读存储介质,其特征在于,所述处理器执行所述对所述图像进行光照归一化处理时,还执行如下操作:
    获取所述图像中各像素点对应的像素灰度值;
    根据均衡函数与所述像素灰度值得到各像素点对应的转化值;及
    根据所述转化值对所述图像的像素点进行处理。
  18. 根据权利要求15所述的计算机可读存储介质,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行如下操作:
    获取所述逆光场景标签所对应的逆光区域;
    对所述逆光区域进行亮度增强处理;及
    对处理后的图像进行目标检测。
  19. 根据权利要求15所述的计算机可读存储介质,其特征在于,所述处理器执行所述对处理后的图像进行目标检测时,还执行如下操作:
    对所述图像进行目标检测,得到所述图像的多个目标标签及对应的置信度;及
    将按照置信度从高到低选取的预设数量的目标标签作为所述图像的目标标签。
  20. 根据权利要求19所述的计算机可读存储介质,其特征在于,所述计算机程序被所述处理器执行时,使得所述处理器还执行如下操作:
    根据所述逆光场景标签调整所述图像的多个目标标签对应的置信度;及
    将置信度最高的目标标签作为所述图像的目标标签。
PCT/CN2019/087588 2018-06-29 2019-05-20 图像处理方法、电子设备、计算机可读存储介质 WO2020001197A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810695055.7 2018-06-29
CN201810695055.7A CN108805103B (zh) 2018-06-29 2018-06-29 图像处理方法和装置、电子设备、计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2020001197A1 true WO2020001197A1 (zh) 2020-01-02

Family

ID=64073079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/087588 WO2020001197A1 (zh) 2018-06-29 2019-05-20 图像处理方法、电子设备、计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN108805103B (zh)
WO (1) WO2020001197A1 (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667012A (zh) * 2020-06-10 2020-09-15 创新奇智(广州)科技有限公司 分类结果修正方法、装置、修正设备及可读存储介质
CN111798389A (zh) * 2020-06-30 2020-10-20 中国工商银行股份有限公司 自适应图像增强方法及装置
CN112164032A (zh) * 2020-09-14 2021-01-01 浙江华睿科技有限公司 一种点胶方法、装置、电子设备及存储介质
CN112732553A (zh) * 2020-12-25 2021-04-30 北京百度网讯科技有限公司 图像测试方法、装置、电子设备及存储介质
CN112966639A (zh) * 2021-03-22 2021-06-15 新疆爱华盈通信息技术有限公司 车辆检测方法、装置、电子设备及存储介质
CN113298829A (zh) * 2021-06-15 2021-08-24 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN113673268A (zh) * 2021-08-11 2021-11-19 广州爱格尔智能科技有限公司 一种用于不同亮度下的识别方法、系统及设备
CN114760422A (zh) * 2022-03-21 2022-07-15 展讯半导体(南京)有限公司 一种逆光检测方法及系统、电子设备及存储介质
CN115086566A (zh) * 2021-03-16 2022-09-20 广州视源电子科技股份有限公司 图片场景检测方法、装置
CN116559181A (zh) * 2023-07-07 2023-08-08 杭州灵西机器人智能科技有限公司 基于光度立体视觉的缺陷检测方法、系统、装置及介质

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805103B (zh) * 2018-06-29 2020-09-11 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN111368587B (zh) * 2018-12-25 2024-04-16 Tcl科技集团股份有限公司 场景检测方法、装置、终端设备及计算机可读存储介质
CN111753599B (zh) * 2019-03-29 2023-08-08 杭州海康威视数字技术股份有限公司 人员操作流程检测方法、装置、电子设备及存储介质
CN111652207B (zh) * 2019-09-21 2021-01-26 深圳久瀛信息技术有限公司 定位式数据加载装置和方法
CN110765525B (zh) * 2019-10-18 2023-11-10 Oppo广东移动通信有限公司 生成场景图片的方法、装置、电子设备及介质
CN111127476B (zh) * 2019-12-06 2024-01-26 Oppo广东移动通信有限公司 一种图像处理方法、装置、设备及存储介质
CN114118114A (zh) * 2020-08-26 2022-03-01 顺丰科技有限公司 一种图像检测方法、装置及其存储介质
CN112822413B (zh) * 2020-12-30 2024-01-26 Oppo(重庆)智能科技有限公司 拍摄预览方法、装置、终端和计算机可读存储介质
CN112950635A (zh) * 2021-04-26 2021-06-11 Oppo广东移动通信有限公司 灰点检测方法、灰点检测装置、电子设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102013006A (zh) * 2009-09-07 2011-04-13 泉州市铁通电子设备有限公司 一种基于逆光环境的人脸自动检测识别的方法
CN102447815A (zh) * 2010-10-09 2012-05-09 中兴通讯股份有限公司 视频图像的处理方法及装置
CN103617432A (zh) * 2013-11-12 2014-03-05 华为技术有限公司 一种场景识别方法及装置
CN104778674A (zh) * 2015-04-30 2015-07-15 武汉大学 一种基于时间序列的顺逆光交通图像自适应增强方法
CN107742274A (zh) * 2017-10-31 2018-02-27 广东欧珀移动通信有限公司 图像处理方法、装置、计算机可读存储介质和电子设备
CN107784315A (zh) * 2016-08-26 2018-03-09 深圳光启合众科技有限公司 目标对象的识别方法和装置,及机器人
CN108805103A (zh) * 2018-06-29 2018-11-13 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845549B (zh) * 2017-01-22 2020-08-21 珠海习悦信息技术有限公司 一种基于多任务学习的场景与目标识别的方法及装置
CN107622281B (zh) * 2017-09-20 2021-02-05 Oppo广东移动通信有限公司 图像分类方法、装置、存储介质及移动终端

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102013006A (zh) * 2009-09-07 2011-04-13 泉州市铁通电子设备有限公司 一种基于逆光环境的人脸自动检测识别的方法
CN102447815A (zh) * 2010-10-09 2012-05-09 中兴通讯股份有限公司 视频图像的处理方法及装置
CN103617432A (zh) * 2013-11-12 2014-03-05 华为技术有限公司 一种场景识别方法及装置
CN104778674A (zh) * 2015-04-30 2015-07-15 武汉大学 一种基于时间序列的顺逆光交通图像自适应增强方法
CN107784315A (zh) * 2016-08-26 2018-03-09 深圳光启合众科技有限公司 目标对象的识别方法和装置,及机器人
CN107742274A (zh) * 2017-10-31 2018-02-27 广东欧珀移动通信有限公司 图像处理方法、装置、计算机可读存储介质和电子设备
CN108805103A (zh) * 2018-06-29 2018-11-13 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667012A (zh) * 2020-06-10 2020-09-15 创新奇智(广州)科技有限公司 分类结果修正方法、装置、修正设备及可读存储介质
CN111798389A (zh) * 2020-06-30 2020-10-20 中国工商银行股份有限公司 自适应图像增强方法及装置
CN111798389B (zh) * 2020-06-30 2023-08-15 中国工商银行股份有限公司 自适应图像增强方法及装置
CN112164032A (zh) * 2020-09-14 2021-01-01 浙江华睿科技有限公司 一种点胶方法、装置、电子设备及存储介质
CN112164032B (zh) * 2020-09-14 2023-12-29 浙江华睿科技股份有限公司 一种点胶方法、装置、电子设备及存储介质
CN112732553A (zh) * 2020-12-25 2021-04-30 北京百度网讯科技有限公司 图像测试方法、装置、电子设备及存储介质
CN115086566A (zh) * 2021-03-16 2022-09-20 广州视源电子科技股份有限公司 图片场景检测方法、装置
CN115086566B (zh) * 2021-03-16 2024-03-29 广州视源电子科技股份有限公司 图片场景检测方法、装置
CN112966639A (zh) * 2021-03-22 2021-06-15 新疆爱华盈通信息技术有限公司 车辆检测方法、装置、电子设备及存储介质
CN112966639B (zh) * 2021-03-22 2024-04-26 新疆爱华盈通信息技术有限公司 车辆检测方法、装置、电子设备及存储介质
CN113298829B (zh) * 2021-06-15 2024-01-23 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN113298829A (zh) * 2021-06-15 2021-08-24 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN113673268B (zh) * 2021-08-11 2023-11-14 广州爱格尔智能科技有限公司 一种用于不同亮度下的识别方法、系统及设备
CN113673268A (zh) * 2021-08-11 2021-11-19 广州爱格尔智能科技有限公司 一种用于不同亮度下的识别方法、系统及设备
CN114760422A (zh) * 2022-03-21 2022-07-15 展讯半导体(南京)有限公司 一种逆光检测方法及系统、电子设备及存储介质
CN116559181A (zh) * 2023-07-07 2023-08-08 杭州灵西机器人智能科技有限公司 基于光度立体视觉的缺陷检测方法、系统、装置及介质
CN116559181B (zh) * 2023-07-07 2023-10-10 杭州灵西机器人智能科技有限公司 基于光度立体视觉的缺陷检测方法、系统、装置及介质

Also Published As

Publication number Publication date
CN108805103B (zh) 2020-09-11
CN108805103A (zh) 2018-11-13

Similar Documents

Publication Publication Date Title
WO2020001197A1 (zh) 图像处理方法、电子设备、计算机可读存储介质
WO2019233263A1 (zh) 视频处理方法、电子设备、计算机可读存储介质
CN108764208B (zh) 图像处理方法和装置、存储介质、电子设备
WO2019233393A1 (zh) 图像处理方法和装置、存储介质、电子设备
US11138478B2 (en) Method and apparatus for training, classification model, mobile terminal, and readable storage medium
CN108764370B (zh) 图像处理方法、装置、计算机可读存储介质和计算机设备
CN110149482B (zh) 对焦方法、装置、电子设备和计算机可读存储介质
US11178324B2 (en) Focusing method and device, electronic device and computer-readable storage medium
US11233933B2 (en) Method and device for processing image, and mobile terminal
WO2019233271A1 (zh) 图像处理方法、计算机可读存储介质和电子设备
WO2019233262A1 (zh) 视频处理方法、电子设备、计算机可读存储介质
WO2019237887A1 (zh) 图像处理方法、电子设备、计算机可读存储介质
WO2020001196A1 (zh) 图像处理方法、电子设备、计算机可读存储介质
CN108961302B (zh) 图像处理方法、装置、移动终端及计算机可读存储介质
CN108198152B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
CN108897786B (zh) 应用程序的推荐方法、装置、存储介质及移动终端
WO2019233260A1 (zh) 广告信息推送方法和装置、存储介质、电子设备
CN108804658B (zh) 图像处理方法和装置、存储介质、电子设备
CN109712177B (zh) 图像处理方法、装置、电子设备和计算机可读存储介质
WO2019223513A1 (zh) 图像识别方法、电子设备和存储介质
CN108848306B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
WO2020034776A1 (zh) 图像处理方法和装置、终端设备、计算机可读存储介质
CN111277699B (zh) 闪光灯色温校准方法、装置、电子设备和可读存储介质
WO2023125750A1 (zh) 一种图像去噪方法、装置和存储介质
CN107454317B (zh) 图像处理方法、装置、计算机可读存储介质和计算机设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19826917

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19826917

Country of ref document: EP

Kind code of ref document: A1