WO2021098609A1 - Image detection method, device and electronic equipment - Google Patents
- Publication number
- WO2021098609A1 (PCT/CN2020/128786)
- Authority: WO (WIPO/PCT)
- Prior art keywords: image, detected, light source, area, foreground
Classifications
- G06V10/98 — Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; evaluation of the quality of the acquired patterns
- G06T7/0002 — Inspection of images, e.g. flaw detection
- G06T7/11 — Region-based segmentation
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/194 — Segmentation involving foreground-background segmentation
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/60 — Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
- G06T2200/24 — Indexing scheme involving graphical user interfaces [GUIs]
- G06T2207/10004 — Still image; photographic image
- G06T2207/10024 — Color image
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30168 — Image quality inspection
- G06T2207/30196 — Human being; person
- G06T2207/30201 — Face
Description
- This application belongs to the field of electronic technology, and in particular relates to image detection methods, devices and electronic equipment based on artificial intelligence (AI) terminals.
- Existing mobile terminals can only detect whether there is a backlight during the shooting process, but cannot detect whether the captured image has backlight blur.
- the embodiments of the present application provide an image detection method, device, and electronic equipment based on an artificial intelligence terminal, which can effectively detect whether an image taken under backlight conditions is blurred.
- an image detection method including: acquiring an image to be detected; determining a light source area of the image to be detected and a foreground area of the image to be detected; and determining the blur degree of the image to be detected based on the light source area and the foreground area.
- the light source area detection and foreground area detection are performed on the image to be detected to determine the influence of the light source on the sharpness of the image to be detected, thereby effectively detecting whether the photographed image is blurred under backlight conditions.
- the determining the light source area of the image to be detected includes: performing color space conversion on the image to be detected and obtaining the brightness value of each pixel of the converted image; the area of the pixels whose brightness values are greater than a preset brightness threshold is determined as the light source area of the image to be detected.
- the to-be-detected image is converted to HSV (Hue, Saturation, Value) color space or LAB (CIELab color model) color space through color space conversion.
- the image to be detected can also be converted into other color spaces to determine the brightness value of each pixel of the image to be detected.
- the light source area of the image to be detected can be determined based on the threshold segmentation method, which can accurately and quickly determine the light source area and improve the efficiency of image detection.
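- As an illustrative sketch only (the patent does not specify an implementation), the threshold segmentation described above can be prototyped with OpenCV: convert the image to the HSV color space, treat the V channel as the per-pixel brightness value, and keep the pixels above a preset brightness threshold as the light source area. The threshold value of 230 and the file name are assumed placeholders.

```python
import cv2
import numpy as np

def light_source_mask(bgr_image: np.ndarray, brightness_threshold: int = 230) -> np.ndarray:
    """Return a binary mask of pixels brighter than the preset threshold.

    The image is converted to the HSV color space so the V channel can
    serve as the per-pixel brightness value; pixels whose V exceeds the
    threshold are treated as belonging to the light source area.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    v_channel = hsv[:, :, 2]
    return (v_channel > brightness_threshold).astype(np.uint8)

# Usage: count the light source pixels of an image ("to_detect.jpg" is a placeholder).
image = cv2.imread("to_detect.jpg")
num_light_pixels = int(light_source_mask(image).sum())
```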
- the determining the foreground area of the image to be detected includes: detecting a foreground target of the image to be detected; determining the position of the foreground target in the image to be detected, and determining the position of the foreground target in the image to be detected as the foreground area of the image to be detected.
- the foreground target may refer to a target with dynamic characteristics in the image to be detected, such as humans, animals, etc.; the foreground target may also refer to a scene that is closer to the viewer and has static characteristics, such as flowers, food, etc.
- the determining the blur degree of the image to be detected based on the light source area and the foreground area includes:
- the blur degree of the image to be detected is determined based on the number of all pixels in the light source area and the number of all pixels in the foreground area.
- the determining the blur degree of the image to be detected based on the number of all pixels in the light source area and the number of all pixels in the foreground area includes:
- if the function value of the increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area is greater than a predetermined threshold, it is determined that the image to be detected is a blurred image.
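- A minimal sketch of this decision rule, assuming the identity as the strictly increasing function and an illustrative threshold (both are left to parameter tuning in the text):

```python
def is_blurred(num_light_pixels: int, num_foreground_pixels: int,
               predetermined_threshold: float = 0.25) -> bool:
    """Blur decision: f(light pixels / foreground pixels) > threshold.

    f is taken as the identity purely for illustration; any strictly
    increasing f preserves the ordering, so only the threshold changes.
    """
    if num_foreground_pixels == 0:
        raise ValueError("foreground area is empty")
    blur_degree = num_light_pixels / num_foreground_pixels
    return blur_degree > predetermined_threshold
```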
- the determining the blur degree of the image to be detected based on the light source area and the foreground area includes:
- the blur degree of the image to be detected is determined based on the number of all pixels in the light source area and the area of the foreground area.
- the acquiring the image to be detected includes: acquiring a preview frame image in the first shooting mode.
- the determining the blur degree of the image to be detected based on the light source area and the foreground area includes: if it is determined based on the light source area and the foreground area that the preview frame image is a blurred image, the current shooting mode is switched from the first shooting mode to the second shooting mode, where the first shooting mode and the second shooting mode are different shooting modes.
- the first shooting mode is Torch mode
- the second shooting mode is Flash mode.
- the switching of the current shooting mode from the first shooting mode to the second shooting mode is specifically: the electronic device sends a control instruction to the flash module of the electronic device to switch the flash from the always-on mode to the mode that flashes once during shooting.
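- The control instruction itself is device-specific; the sketch below only illustrates the described mode switch in pseudo-driver form, where FlashModule and its methods are hypothetical stand-ins for the actual flash driver interface:

```python
class FlashModule:
    """Hypothetical stand-in for the device's flash driver interface."""

    def set_always_on(self) -> None:      # Torch mode: flash stays lit
        ...

    def set_single_flash(self) -> None:   # Flash mode: fires once per shot
        ...

def switch_to_second_shooting_mode(flash: FlashModule) -> None:
    # The described control instruction: leave the always-on (Torch) mode
    # and have the flash fire only once at the moment of shooting (Flash mode).
    flash.set_single_flash()
```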
- an image detection device including:
- the image acquisition module is used to acquire the image to be detected
- the first determining module is configured to determine the light source area of the image to be detected and the foreground area of the image to be detected;
- the second determining module is configured to determine the blur degree of the image based on the light source area and the foreground area.
- an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the image detection method described in the foregoing first aspect are implemented.
- embodiments of the present application provide a computer-readable storage medium that stores a computer program which, when executed by a processor, implements the steps of the image detection method described in the first aspect.
- the embodiments of the present application provide a computer program product, which when the computer program product runs on a terminal device, causes the terminal device to execute the image detection method described in any one of the above-mentioned first aspects.
- the embodiments of the present application have the beneficial effect that light source area detection and foreground area detection of the image to be detected are used to determine the influence of the light source on the clarity of the image to be detected, thereby effectively detecting whether an image taken under backlight conditions is blurred.
- FIG. 1a is a schematic diagram of an image taken in the Torch mode of an existing electronic device without a backlight light source;
- FIG. 1b is a schematic diagram of an image taken in the Torch mode of an existing electronic device under a backlight light source with a light intensity of 23 lux and a light source area of 50%;
- FIG. 1c is a schematic diagram of an image taken in the Flash mode of an existing electronic device without a backlight light source;
- FIG. 1d is a schematic diagram of an image taken in the Flash mode of an existing electronic device under a backlight light source with a light intensity of 23 lux and a light source area of 50%;
- FIG. 2 is a schematic structural diagram of an electronic device to which the image detection method provided by an embodiment of the present application is applicable;
- FIG. 3 is a schematic diagram of the software architecture of an electronic device to which the image detection method provided by an embodiment of the present application is applicable;
- Figure 4a is a schematic diagram of a set of display interfaces provided by an embodiment of the present application.
- Figure 4b is a schematic diagram of another set of display interfaces provided by an embodiment of the present application.
- FIG. 5 is a schematic flowchart of an image detection method provided by an embodiment of the present application.
- Figure 6a is an image diagram of the image to be detected in the current color space
- Fig. 6b is a schematic diagram of an image after light source segmentation is performed on the image to be detected
- FIG. 7 is a schematic diagram of a foreground target provided by an embodiment of the present application.
- Fig. 8a is a schematic diagram of an image in which the foreground target is a foam box, provided by the present application.
- Fig. 8b is a schematic diagram of another image in which the foreground target is a foam box, provided by the present application.
- FIG. 9 is an image schematic diagram of an image to be detected in an image detection method provided by an embodiment of the present application.
- FIG. 10 is a schematic structural diagram of an image detection device provided by an embodiment of the present application.
- FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- the term “if” can be construed as “when”, “once”, “in response to determining”, or “in response to detecting”.
- the phrase “if determined” or “if [described condition or event] is detected” can be interpreted, depending on the context, as “once determined”, “in response to determining”, “once [described condition or event] is detected”, or “in response to detecting [described condition or event]”.
- the backlight referred to in this application refers to the situation where the subject is located between the light source and the camera. This situation can easily cause insufficient exposure of the subject, so that the captured image cannot clearly reflect facial skin details such as fine lines, nasolabial folds, dark circles, red areas, acne, pores, pigmentation, and blackheads. Images captured in different shooting modes will show different degrees of clarity in the presence of a backlight source.
- the above-mentioned shooting modes include a first shooting mode (Torch mode) in which the flash is always on, and a second shooting mode (Flash mode) in which the flash fires once during shooting.
- Figures 1a and 1c are images taken by an existing electronic device in Torch mode and Flash mode, respectively, without a backlight light source;
- Figures 1b and 1d are images taken by an existing electronic device in Torch mode and Flash mode, respectively, under a backlight light source with the same light intensity (23 lux) and the same light source area (50%). It can be seen from Figure 1a and Figure 1c that, in the absence of a backlight, both Torch mode and Flash mode can clearly capture detailed features (acne). Comparing Fig. 1c and Fig. 1d, it can be seen that the backlight light source does not have much influence on the image taken in Flash mode, and the detailed features (acne) can still be captured. Comparing Fig. 1a and Fig. 1b, it can be seen that the backlight light source blurs the image taken in Torch mode, so that the detailed features cannot be clearly captured.
- the image detection method in the embodiments of the present application is mainly used to detect images taken in Torch mode, and can switch the current shooting mode from the Torch mode to the Flash mode when the image to be detected is detected as a blurred image.
- shooting experiments with different light intensities and different light source areas were carried out using a bisection method, determining a light intensity of 23 lux and a light source area of 25% as the critical values for whether an image taken in Torch mode is blurred.
- the area of the light source is the most important factor affecting whether the foreground is blurred. For the same foreground (the same face in different photos can be approximated as the same foreground), the larger the light source area, the greater the impact on the clarity of the foreground. If the foreground is large enough and the light source area is small enough, the influence of the light source area on the clarity of the foreground can be ignored. Therefore, the important parameter in detecting whether the feature details of the image to be detected are blurred is the relationship between the area of the light source and the area of the foreground. Based on light source area detection and foreground area detection, it is possible to determine whether the image to be detected has backlight blurring.
- if the image to be detected has backlight blurring, the image is not used as the object of facial skin condition evaluation, thereby improving the accuracy of the facial skin condition evaluation.
- the foreground refers to the person or object in the lens that is in front of, or near the front of, the subject.
- the foreground in the embodiments of the present application may include various object types, such as people, vehicles, plants, animals, buildings, ground, sky, tables, chairs, door frames and other objects.
- the image detection method provided by the embodiments of this application can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, and personal digital assistants (PDAs); the embodiments of this application do not impose any restrictions on the specific type of terminal device.
- the wearable device may also be a general term for devices designed intelligently for daily wear using wearable technology, such as glasses, gloves, watches, clothing, and shoes.
- a wearable device is a portable device that is directly worn on the body or integrated into the user's clothes or accessories.
- Wearable devices are not only a kind of hardware device, but also realize powerful functions through software support, data interaction, and cloud interaction.
- wearable smart devices include full-featured, large-sized devices that can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used in conjunction with other devices such as smartphones, for example, various smart bracelets and smart jewelry for vital sign monitoring.
- FIG. 2 shows a block diagram of a part of the structure of an electronic device provided by an embodiment of the present application.
- the electronic device includes: a radio frequency (RF) circuit 110, a memory 120, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a wireless fidelity (WiFi) module 170, and a processor 180, power supply 190, camera 191 and other components.
- the structure shown in FIG. 2 does not constitute a limitation on the electronic device, which may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
- the RF circuit 110 can be used to receive and send signals during information transmission and reception or during a call. In particular, downlink information from the base station is received and handed to the processor 180 for processing, and uplink data is sent to the base station.
- the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
- the RF circuit 110 may also communicate with the network and other devices through wireless communication.
- the above-mentioned wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile Communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (Code Division) Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE)), Email, Short Messaging Service (SMS), etc.
- the memory 120 may be used to store software programs and modules.
- the processor 180 executes various functional applications and data processing of the electronic device by running the software programs and modules stored in the memory 120.
- the memory 120 may mainly include a program storage area and a data storage area.
- the program storage area may store an operating system and an application program required by at least one function (such as a sound playback function, an image playback function, etc.); the data storage area may store data (such as audio data, a phone book, etc.) created according to the use of the electronic device.
- the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
- the input unit 130 may be used to receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the electronic device 100.
- the input unit 130 may include a touch panel 131 and other input devices 132.
- the touch panel 131, also known as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 131 using a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
- the touch panel 131 may include two parts: a touch detection device and a touch controller.
- the touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and then sends it To the processor 180, and can receive and execute the commands sent by the processor 180.
- the touch panel 131 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
- the input unit 130 may also include other input devices 132.
- the other input device 132 may include, but is not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick.
- the display unit 140 may be used to display information input by the user or information provided to the user and various menus of the electronic device.
- the display unit 140 may include a display panel 141.
- the display panel 141 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc.
- the touch panel 131 can cover the display panel 141. When the touch panel 131 detects a touch operation on or near it, it transmits the operation to the processor 180 to determine the type of the touch event, and the processor 180 then provides a corresponding visual output on the display panel 141 according to the type of the touch event.
- the touch panel 131 and the display panel 141 are used as two independent components to realize the input and output functions of the electronic device, but in some embodiments, the touch panel 131 and the display panel 141 can be integrated to realize the input and output functions of the electronic device.
- the electronic device 100 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors.
- the light sensor may include an ambient light sensor and a proximity sensor.
- the ambient light sensor can adjust the brightness of the display panel 141 according to the brightness of the ambient light.
- the proximity sensor can turn off the display panel 141 and/or the backlight when the electronic device is moved to the ear.
- the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), and can detect the magnitude and direction of gravity when it is stationary.
- the audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the electronic device.
- on one hand, the audio circuit 160 can transmit the electrical signal converted from received audio data to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data. After being processed by the processor 180, the audio data is sent, for example, to another electronic device through the RF circuit 110, or output to the memory 120 for further processing.
- WiFi is a short-distance wireless transmission technology. Electronic devices can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 170. It provides users with wireless broadband Internet access.
- although FIG. 2 shows the WiFi module 170, it is understandable that it is not a necessary component of the electronic device 100 and can be omitted as needed without changing the essence of the invention.
- the processor 180 is the control center of the electronic device. It uses various interfaces and lines to connect the various parts of the entire electronic device, runs or executes the software programs and/or modules stored in the memory 120, and calls data stored in the memory 120 to perform the various functions of the electronic device and process data, so as to monitor the electronic device as a whole.
- the processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may also not be integrated into the processor 180.
- the electronic device 100 also includes a power supply 190 (such as a battery) for supplying power to the various components. The power supply can be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging, and power consumption management are realized through the power management system.
- the electronic device 100 may also include a camera 191.
- the camera 191 is used to capture still images or videos.
- an object generates an optical image through the lens, which is projected onto the photosensitive element.
- the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP, which converts it into a digital image signal.
- the ISP outputs the digital image signal to the DSP for processing.
- the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
- the position of the camera on the electronic device 100 may be front or rear, which is not limited in the embodiment of the present application.
- the electronic device 100 may include a single camera, a dual camera, or a triple camera, etc., which is not limited in the embodiment of the present application.
- the electronic device 100 may include three cameras, of which one is a main camera, one is a wide-angle camera, and one is a telephoto camera.
- the multiple cameras may be all front-mounted, or all rear-mounted, or partly front-mounted and some rear-mounted, which is not limited in the embodiment of the present application.
- the electronic device 100 may also include a flash module, etc., which will not be repeated here.
- the electronic device 100 may also include a Bluetooth module, etc., which will not be repeated here.
- the electronic device 100 may also include a neural-network (NN) computing processor (NPU).
- the NPU can quickly process input information and continuously self-learn. Through the NPU, applications such as intelligent cognition of the electronic device 100 can be realized, such as image recognition, face recognition, speech recognition, and text understanding.
- the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
- FIG. 3 is a schematic diagram of the software structure of the electronic device 100 according to an embodiment of the present application.
- in the layered architecture, the Android system is divided into four layers, namely the application layer, the application framework layer (framework, FWK), the system layer, and the hardware abstraction layer, and the layers communicate with each other through software interfaces.
- the application layer can be a series of application packages, which can include applications such as short message, calendar, camera, video, navigation, gallery, and call.
- the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
- the application framework layer may include some predefined functions, such as functions for receiving events sent by the application framework layer.
- the application framework layer can include a window manager, a resource manager, and a notification manager.
- the window manager is used to manage window programs.
- the window manager can obtain the size of the display, determine whether there is a status bar, lock the screen, take a screenshot, etc.
- the content provider is used to store and retrieve data and make these data accessible to applications.
- the data may include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
- the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
- the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can disappear automatically after a short stay without user interaction.
- the notification manager is used to notify download completion, message reminders, and so on.
- the notification manager can also present a notification that appears in the status bar at the top of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt sound is made, the electronic device vibrates, or the indicator light flashes.
- the application framework layer can also include:
- a view system: the view system includes visual controls, such as controls for displaying text and controls for displaying pictures.
- the view system can be used to build applications.
- the display interface can be composed of one or more views.
- a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
- the phone manager is used to provide the communication function of the electronic device 100. For example, the management of the call status (including connecting, hanging up, etc.).
- the system layer can include multiple functional modules. For example: sensor service module, physical state recognition module, 3D graphics processing library (for example: OpenGL ES), etc.
- the sensor service module is used to monitor sensor data uploaded by various sensors at the hardware layer and determine the physical state of the electronic device 100;
- the physical state recognition module is used to analyze and recognize user gestures, faces, etc.
- the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
- the system layer can also include:
- the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
- the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
- the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
- the hardware abstraction layer is the layer between hardware and software.
- the hardware abstraction layer can include display drivers, camera drivers, sensor drivers, etc., used to drive related hardware at the hardware layer, such as display screens, cameras, sensors, and so on.
- the system layer can also include an image processing library.
- the camera application can obtain the to-be-detected image collected by the camera of the electronic device, and perform light source area detection and foreground area detection on the image to be detected to determine the influence of the light source on the sharpness of the image, and then determine whether the image to be detected has a backlight blur problem.
- the following embodiments can be implemented on the electronic device 100 having the above hardware structure/software structure.
- the following embodiments will take the electronic device 100 as an example to describe the image detection method provided by the embodiments of the present application.
- Fig. 4a shows a graphical user interface (GUI) of the electronic device (such as a mobile phone) as shown in Fig. 2 and Fig. 3.
- the GUI may be the desktop 401 of the electronic device.
- after the electronic device detects that the user clicks the icon 402 of the facial skin condition evaluation application (APP) on the desktop 401, the facial skin condition evaluation application can be started.
- Fig. 4b shows another GUI, which may be called the processing interface 403 of the facial skin condition evaluation APP.
- when the electronic device detects that the user clicks the icon 404 for obtaining a photo on the processing interface 403, the electronic device can open the album, and the user selects a picture from the album to be loaded into the image frame 405. Alternatively, when the user clicks the icon 404, the camera is started, an image is shot through the camera in the first shooting mode, and the captured image is then loaded into the image frame 405.
- the image frame 405 can also display the preview screen in the first shooting mode after the camera is started.
- the electronic device can automatically start to detect the image to be detected in the image frame 405, and detect whether the image to be detected is blurred.
- the above processing interface 403 may also include a first control used to trigger blur detection.
- when the electronic device 100 detects that the user clicks the first control, the electronic device performs blur detection on the image to be detected in the image frame 405 in response. Specifically, the light source area of the image to be detected and the foreground area of the image to be detected can be determined, the blur degree of the image is determined based on the light source area and the foreground area, and whether the image is blurred is determined based on the blur degree of the image.
- the electronic device 100 converts the color space of the image to be detected to obtain the brightness value of each pixel in the converted image, compares the brightness value of each pixel with the preset brightness threshold, and determines the number of pixels exceeding the preset brightness threshold to determine the light source area.
- the feature point detection method is used to determine the foreground target of the image to be detected, and the foreground area is determined by determining the position of the foreground target in the image to be detected.
- the blur degree of the image to be detected is calculated based on the number of all pixels in the light source area and the number of all pixels in the foreground area, and the blur degree of the image to be detected is compared with a predetermined threshold. If the blur degree of the image is greater than or equal to the predetermined threshold, it is determined that the image to be detected has the problem of backlight blur; if the blur degree of the image is less than the predetermined threshold, it is determined that the image to be detected does not have the problem of backlight blur.
- if the image to be detected has the problem of backlight blur, the current shooting mode is switched from the first shooting mode to the second shooting mode, where the first shooting mode and the second shooting mode are different shooting modes; or, if it is detected that the image to be detected obtained from the album is blurred under backlight conditions, the user is prompted to select another image or take another image.
- FIG. 5 is a schematic implementation flowchart of an image detection method provided by an embodiment of the present application.
- the method can be implemented in an electronic device (such as a mobile phone or a tablet computer) as shown in FIG. 2 and FIG. 3.
- the method may include the following steps:
- the above-mentioned image to be detected may be an image currently captured, or a certain frame of image in the preview screen after starting the camera, or an image in an album.
- when the user clicks the icon 404 shown in FIG. 4b, the electronic device opens the album for the user to select a photo that has been taken to obtain the image to be detected, and loads the obtained image to be detected into the image frame 405; or when the user clicks the icon 404, the camera is started and an image is captured by the camera in the first shooting mode to obtain the image to be detected, and the image to be detected is then loaded into the image frame 405.
- the preview image in the first shooting mode after starting the camera is displayed in the image frame 405, and a certain frame of the preview image is used as the image to be detected.
- the acquired image should contain the face image.
- the part of the image that contains only the face may be cropped from the original image and used as the image to be detected; that is, the image to be detected acquired in this embodiment may refer to an image containing a face image.
- S102 Determine the light source area of the image to be detected and the foreground area of the image to be detected.
- the light source area of the image to be detected is determined by performing light source area detection on the image to be detected.
- the brightness of all pixels in the image after the color space conversion is determined by performing color space conversion on the image to be detected, and the light source area is determined based on the brightness of all pixels.
- the color space converted image is converted back to the original color space, and the foreground target of the image to be detected is detected based on the feature point detection mode, and the position of the foreground target in the image to be detected is determined to determine the foreground area.
- the electronic device 100 when the electronic device 100 detects that the image to be detected has been loaded into the image frame 405, it will automatically perform blur detection on the image to be detected loaded in the image frame 405.
- the electronic device 100 will perform light source area detection on the image to be detected to determine the light source area.
- the light source area detection can be obtained by converting the color space of the image to obtain the brightness value of each pixel, and determining the light source area based on the threshold segmentation method, and then re-converting the color space converted image to the original color space.
- the foreground object of the image to be detected is detected, and the position of the foreground object in the image to be detected is determined, so as to determine the foreground area.
- the foregoing determining the light source area of the image to be detected includes:
- the area of the pixel point whose brightness value is greater than the preset brightness threshold is determined as the light source area of the image to be detected.
- the light source area can be determined using the threshold segmentation method.
- the image is converted to the HSV (Hue, Saturation, Value) color space or the LAB (CIELab) color space through color space conversion.
- the brightness value of each pixel in the image can be obtained. It is understandable that the image can also be converted to other color spaces to determine the brightness value of each pixel of the image, which is not limited here.
- the aforementioned preset brightness threshold may be determined by viewing the segmented light source map generated through the mask generation method.
- Fig. 6a is the image in the current color space
- Fig. 6b is the image after light source segmentation.
- the brightness value of each pixel of the light-source-segmented image is compared with the preset brightness threshold. If the brightness value of a pixel is greater than the preset brightness threshold, the pixel is determined to be a pixel of the light source area. Therefore, the area of the pixels whose brightness values are greater than the preset brightness threshold is determined as the light source area of the image to be detected.
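- The same segmentation can be done in the LAB color space mentioned above, using the L channel as the brightness value. A sketch follows (the threshold of 220 is assumed), which also produces a Fig. 6b-style visualization by blacking out everything outside the light source area:

```python
import cv2
import numpy as np

def segment_light_source_lab(bgr_image: np.ndarray, threshold: int = 220):
    """Threshold the L (lightness) channel of the LAB color space.

    Returns the binary light source mask and a Fig. 6b-style visualization
    in which everything outside the light source area is blacked out.
    """
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l_channel = lab[:, :, 0]  # OpenCV scales L to 0..255 for 8-bit images
    mask = (l_channel > threshold).astype(np.uint8)
    visualization = cv2.bitwise_and(bgr_image, bgr_image, mask=mask)
    return mask, visualization
```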
- the foregoing determining the foreground area of the image to be detected includes:
- the position of the foreground object in the image to be detected is determined, and the position of the foreground object in the image to be detected is determined as the foreground area of the image to be detected.
- the aforementioned foreground target may refer to a target with dynamic characteristics in the image to be detected, such as humans, animals, etc.; the foreground target may also refer to a scene that is closer to the viewer and has static characteristics, such as flowers, food, etc.
- the trained foreground detection model can be used to detect the foreground target in the image to be detected.
- the foreground detection model may be a model with a foreground target detection function, such as Single Shot Multibox Detection (SSD).
- other foreground detection methods can also be used, such as detecting whether there is a foreground target (such as a human face) in the image to be detected through a pattern recognition algorithm, and after detecting the presence of the foreground target, using a target positioning algorithm or a target tracking algorithm to determine the position of the foreground target in the image to be detected.
- the method of adjusting parameters includes, but is not limited to, the stochastic gradient descent algorithm, the momentum update algorithm, etc.
- the foreground target is determined by performing foreground target detection on the image to be detected converted to the original color space based on the foreground detection model, and then the foreground area is determined based on the position of the foreground target in the image to be detected.
- the aforementioned foreground target is a face image region
- the foreground detection model may be based on the HIA276 feature point detection method to perform face feature detection, and then frame the foreground region containing the face.
- the above-mentioned foreground area detection can be realized by the neural-network (NN) computing processor (NPU) of the above-mentioned electronic device 100.
- the NPU performs face recognition on the image to be detected and uses a rectangular frame to select the foreground area containing the human face (foreground target), automatically outputting the area of the target rectangular region.
- the rectangular area shown in FIG. 7 is the foreground area, and the specific position of the foreground area can be determined by determining the position coordinates, in the image to be detected, of the pixels at the four vertices of the rectangle. It should be understood that the foreground area containing the human face (foreground target) can also be selected by a circle, a triangle, or various other shapes such as a pentagon.
- after the foreground area is determined, the area of the foreground area can be determined. For example, the length and width of the rectangular area can be determined, and then the area of the rectangular area, that is, the area of the foreground area, can be calculated.
- the aforementioned foreground target may also be another item.
- the foreground target is a foam box.
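- The text leaves the choice of foreground detector open (SSD, feature-point detection, etc.). As a stand-in for such a trained model, the sketch below uses OpenCV's bundled Haar cascade face detector to obtain the rectangle framing the face (foreground target) and computes the foreground area from the rectangle's width and height:

```python
import cv2
import numpy as np

def foreground_rect_area(bgr_image: np.ndarray):
    """Detect a face as the foreground target; return (rectangle, area).

    The Haar cascade is only a stand-in for the trained foreground
    detection model (e.g. SSD or a feature-point detector) in the text.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, 0
    x, y, w, h = faces[0]                  # rectangle framing the foreground target
    return (x, y, w, h), int(w) * int(h)   # area = length x width of the rectangle
```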
- S103 Determine the blur degree of the image to be detected based on the light source area and the foreground area.
- the blur degree of the image to be detected is determined based on the number of all pixels in the light source area and the number of all pixels in the foreground area.
- the function value of a strictly increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area can be used as the measure of whether the backlight light source will blur the image to be detected; that is, the function value of the strictly increasing function of the ratio of the number of pixels whose brightness values are greater than the preset brightness threshold to the number of all pixels in the foreground area is the blur degree S of the image to be detected.
- alternatively, the blur degree of the image to be detected is determined based on the number of all pixels in the light source area and the area of the foreground area; that is, the function value of a strictly increasing function of the ratio of the number of all pixels in the light source area to the area of the foreground area is taken as the blur degree S of the image to be detected.
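- Writing N_light for the number of pixels in the light source area, N_fg for the number of pixels in the foreground area, and A_fg for the area of the foreground region, the two variants above can be summarized as follows, with f any strictly increasing function and T the predetermined threshold (a paraphrase; the patent does not fix f):

```latex
S = f\!\left(\frac{N_{\mathrm{light}}}{N_{\mathrm{fg}}}\right)
\quad\text{or}\quad
S = f\!\left(\frac{N_{\mathrm{light}}}{A_{\mathrm{fg}}}\right),
\qquad \text{blurred} \iff S \geq T .
```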
- the foregoing S103 includes:
- the function value of the increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area is greater than a predetermined threshold, it is determined that the image to be detected is a blurred image.
- the blur degree of the image to be detected (that is, the function value of the increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area) is compared with a predetermined threshold. If the blur degree of the image is greater than or equal to the predetermined threshold, the detection result is that the image to be detected is a blurred image; if the blur degree of the image is less than the predetermined threshold, the detection result is that the image to be detected is a clear image.
- the above-mentioned predetermined threshold may be obtained by tuning parameters after training and testing on a large number of sample images; that is, if the blur degree of the image to be detected is greater than or equal to the predetermined threshold, it can be determined that the image to be detected is a blurred image.
- the image classifier can be constructed based on the KNN (K-Nearest Neighbor) classifier.
- the principle of the KNN classifier is to compare the image under test with each image in the training set and assign the label of the most similar training image to the image under test. For example, in CIFAR-10, 32x32x3 pixel blocks are compared. The simplest method is to compare pixel by pixel and sum all the difference values to obtain the difference between the two images, and then determine the category of the image according to a classification decision rule (such as majority voting).
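- A minimal sketch of that pixel-wise comparison: a k-nearest-neighbor vote using the summed per-pixel (L1) difference as the distance, written against CIFAR-10-style arrays:

```python
import numpy as np
from collections import Counter

def knn_predict(train_images: np.ndarray, train_labels: np.ndarray,
                test_image: np.ndarray, k: int = 5):
    """Classify one image by pixel-wise L1 distance to every training image.

    train_images: (N, 32, 32, 3); test_image: (32, 32, 3). The label is
    decided by majority vote among the k most similar training images.
    """
    flat_train = train_images.reshape(len(train_images), -1).astype(np.int64)
    flat_test = test_image.reshape(-1).astype(np.int64)
    distances = np.abs(flat_train - flat_test).sum(axis=1)  # summed pixel differences
    nearest = np.argsort(distances)[:k]
    return Counter(train_labels[nearest].tolist()).most_common(1)[0][0]
```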
- the initial model parameters and the initial predetermined threshold of the image classifier are given in advance. Training and testing the image classifier is the process of adjusting the initial model parameters and the predetermined threshold so as to obtain an image classifier that meets the classification accuracy requirement.
- the image classifier is trained and tested based on the sample image to determine the predetermined threshold.
- the aforementioned sample images include training set sample images and test set sample images; the training set sample images include first training set sample images and second training set sample images, and the test set sample images include first test set sample images and second test set sample images.
- the first training set sample images are training set sample images labeled "clear"; the second training set sample images are training set sample images labeled "blurred"; the first test set sample images are test set sample images labeled "clear"; the second test set sample images are test set sample images labeled "blurred".
- For example, 100 sample images are divided into two groups: the first group is 50 sample images labeled "clear", and the second group is 50 sample images labeled "blurred". Then 30 of the 50 sample images labeled "clear" are used as the first training set sample images and the remaining 20 as the first test set sample images, while 30 of the 50 sample images labeled "blurred" are used as the second training set sample images and the remaining 20 as the second test set sample images.
- there are many methods for training image classifiers, such as the AdaBoost method based on Haar features, the SVM (Support Vector Machine) method, and so on. Specifically, the training set sample images and the test set sample images are first constructed; then, based on the training set sample images, traversal optimization is performed to find the trainer or combination of trainers with the best classification effect, and that trainer or combination of trainers is used as the image classifier of this embodiment; finally, the test set sample images are used to verify the classification accuracy of the selected image classifier, which can be used once it reaches the accuracy requirement (for example, an accuracy rate of 70%). If the requirement is not met, the initial model parameters and the predetermined threshold are adjusted, and the whole process is repeated until the accuracy requirement is reached, so that the classifier's predetermined threshold is determined.
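- A hedged sketch of that tuning loop, under the assumption that the classifier reduces to thresholding the blur degree: sweep candidate thresholds over labeled training ratios, keep the best one, and accept it only if it reaches the required accuracy (for example 70%) on the test set:

```python
import numpy as np

def tune_threshold(train_ratios: np.ndarray, train_blurred: np.ndarray,
                   test_ratios: np.ndarray, test_blurred: np.ndarray,
                   required_accuracy: float = 0.70):
    """Pick the blur-degree threshold from labeled sample images.

    *_ratios hold light-source-to-foreground pixel ratios, *_blurred hold
    boolean labels (True = blurred). Returns the chosen threshold, or None
    if the accuracy requirement is not met on the test set.
    """
    candidates = np.unique(train_ratios)
    accuracies = [np.mean((train_ratios >= t) == train_blurred) for t in candidates]
    best = candidates[int(np.argmax(accuracies))]
    if np.mean((test_ratios >= best) == test_blurred) >= required_accuracy:
        return best
    return None  # adjust parameters and repeat, as described above
```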
- the image detection method provided in this embodiment can not only be used for the backlight blur detection of a human face, but also be used for the backlight blur detection of other objects.
- the images of FIGS. 8a and 8b are respectively loaded into the image frame 405 and then the first control is clicked to trigger the image detection operation.
- the detection result corresponding to FIG. 8a is: the image is clear (the image does not have the problem of backlight blur); the detection result corresponding to FIG. 8b is: the image is blurred (the image has the problem of backlight blur).
- the electronic device can automatically perform backlight blur detection on the acquired image to be detected after acquiring the image to be detected. It can also perform backlight blur detection after the user clicks the first control to output the corresponding detection result.
- if the detection result is that the image is blurred, the user is prompted to reacquire the image, for example by selecting another image or switching to Flash mode to shoot again, so as to improve the accuracy of the facial skin evaluation.
- the image to be detected in FIG. 9 is loaded into the image frame 405, and the first control is then clicked to trigger the detection operation on the image to be detected.
- the detection result corresponding to FIG. 9 is: the image is blurred (the image has the problem of backlight blur).
- acquiring the image to be detected above refers to acquiring the preview frame image in the first shooting mode.
- determining the degree of blur of the image to be detected based on the light source area and the foreground area includes:
- the current shooting mode is switched from the first shooting mode to the second shooting mode, wherein the first shooting mode and the The second shooting mode is a different shooting mode.
- the acquired image to be detected is the preview frame image in the first shooting mode. If the preview frame image in the first shooting mode is a blurred image, an image captured in the first shooting mode will also be blurred, so the current shooting mode needs to be switched from the first shooting mode to the second shooting mode.
- the first shooting mode is Torch mode
- the second shooting mode is Flash mode.
- the switching of the above-mentioned current shooting mode from the first shooting mode to the second shooting mode is specifically: the electronic device 100 sends a control instruction to the flash module of the electronic device to switch the flash from the always-on mode to the mode that flashes once during shooting.
- the image detection method provided by the embodiments of the present application, based on an artificial intelligence terminal, determines the influence of the light source on image clarity by detecting the light source area and the foreground area of the image, thereby effectively detecting whether an image taken under backlight conditions is blurred.
- FIG. 10 shows a structural block diagram of an image detection device provided in an embodiment of the present application. For ease of description, only parts related to the embodiment of the present application are shown.
- the image detection device includes:
- the image acquisition module 11 is used to acquire an image to be detected
- the first determining module 12 is configured to determine the light source area of the image to be detected and the foreground area of the image to be detected;
- the second determining module 13 is configured to determine the degree of blur of the image based on the light source area and the foreground area.
- the foregoing first determining module 12 includes:
- a conversion unit configured to perform color space conversion on the image to be detected, and obtain the brightness value of each pixel of the image after the color space conversion
- the first determining unit is configured to determine an area of a pixel with a brightness value greater than a preset brightness threshold as the light source area of the image to be detected.
- the foregoing first determining module 12 includes:
- the detection unit is used to detect the foreground target of the image to be detected
- the second determining unit is configured to determine the position of the foreground object in the image to be detected, and determine the position of the foreground object in the image to be detected as the foreground area of the image to be detected.
- the foregoing second determining module 13 includes:
- the third determining unit is configured to determine the blur degree of the image to be detected based on the number of all pixels in the light source area and the number of all pixels in the foreground area.
- the foregoing third determining unit includes:
- a judging unit for judging whether the function value of the increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area is greater than a predetermined threshold
- the fourth determining unit is configured to determine that the image to be detected is a blurred image if the function value of the increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area is greater than a predetermined threshold.
- the above-mentioned third determining unit is further configured to determine the blur degree of the image to be detected based on the number of all pixels in the light source area and the area of the foreground area.
- the above-mentioned image acquisition module 11 includes:
- the first image acquisition unit is configured to acquire a preview frame image in the first shooting mode.
- the above-mentioned second determining module 13 is further configured to switch the current shooting mode from the first shooting mode to the second shooting mode if it is determined based on the light source area and the foreground area that the preview frame image is a blurred image, where the first shooting mode and the second shooting mode are different shooting modes.
- the image detection device provided in this embodiment can also determine the influence of the light source on the clarity of the image by detecting the light source area and the foreground area of the image, thereby determining whether the image has the problem of backlight blur and improving the accuracy of the facial skin condition evaluation result.
- FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of this application.
- the electronic device 11 of this embodiment includes: at least one processor 110 (only one is shown in FIG. 11), a memory 111, and a computer program 112 stored in the memory 111 and executable on the at least one processor 110.
- when the processor 110 executes the computer program 112, the steps in any of the foregoing image detection method embodiments are implemented.
- the electronic device 11 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
- the electronic device may include, but is not limited to, a processor 110 and a memory 111.
- FIG. 11 is only an example of the electronic device 11 and does not constitute a limitation on the electronic device 11; it may include more or fewer components than those shown in the figure, or combine certain components, or use different components; for example, it may also include input and output devices, network access devices, and so on.
- the so-called processor 110 may be a central processing unit (Central Processing Unit, CPU); the processor 110 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the memory 111 may be an internal storage unit of the electronic device 11 in some embodiments, such as a hard disk or a memory of the electronic device 11. In other embodiments, the memory 111 may also be an external storage device of the electronic device 11, for example a plug-in hard disk equipped on the electronic device 11, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), etc. Further, the memory 111 may also include both an internal storage unit of the electronic device 11 and an external storage device. The memory 111 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 111 can also be used to temporarily store data that has been output or will be output.
- An embodiment of the present application also provides a network device, which includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor; when the processor executes the computer program, the steps in any of the foregoing method embodiments are implemented.
- the embodiments of the present application also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
- the embodiments of the present application provide a computer program product; when the computer program product runs on a mobile terminal, the steps in the foregoing method embodiments are realized when the mobile terminal executes it.
- if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the foregoing method embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a computer-readable storage medium, and when it is executed by a processor, the steps of the foregoing method embodiments can be implemented.
- the computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, or some intermediate form.
- the computer-readable medium may at least include: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a floppy disk, or a CD-ROM. In some jurisdictions, according to legislation and patent practice, computer-readable media cannot be electric carrier signals and telecommunications signals.
- the disclosed apparatus/network equipment and method may be implemented in other ways.
- the device/network device embodiments described above are only illustrative.
- the division of the modules or units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
Abstract
An image detection method and apparatus based on an artificial intelligence (AI) terminal, and an electronic device, comprising: acquiring an image to be detected (S101); determining a light source area of the image to be detected and a foreground area of the image to be detected (S102); and determining the degree of blur of the image to be detected based on the light source area and the foreground area (S103). By performing light source area detection and foreground area detection on the image to be detected, the influence of the light source on the clarity of the image is determined, so that whether an image captured under backlight conditions is blurred can be effectively detected.
Description
This application claims priority to Chinese Patent Application No. 201911159693.8, entitled "Image detection method, apparatus, and electronic device", filed with the State Intellectual Property Office on November 22, 2019, which is incorporated herein by reference in its entirety.
This application belongs to the field of electronic technologies, and in particular relates to an image detection method and apparatus based on an artificial intelligence (AI) terminal, and an electronic device.
When taking pictures, existing mobile terminals (such as mobile phones, tablets, and cameras) can only detect whether a backlight source is present during shooting; they cannot detect whether the captured image suffers from backlight blur.
Summary of the Invention
The embodiments of this application provide an image detection method and apparatus based on an artificial intelligence terminal, and an electronic device, which can effectively detect whether an image captured under backlight conditions is blurred.
In a first aspect, an embodiment of this application provides an image detection method, comprising:
acquiring an image to be detected;
determining a light source area of the image to be detected and a foreground area of the image to be detected;
determining the degree of blur of the image to be detected based on the light source area and the foreground area.
In the first aspect, the influence of the light source on the clarity of the image to be detected is determined by performing light source area detection and foreground area detection on the image, so that whether an image captured under backlight conditions is blurred can be effectively detected.
In a possible implementation of the first aspect, determining the light source area of the image to be detected comprises:
performing color space conversion on the image to be detected, and obtaining the brightness value of each pixel of the color-space-converted image;
determining the area formed by pixels whose brightness value is greater than a preset brightness threshold as the light source area of the image to be detected.
Exemplarily, the image to be detected is converted into the HSV (Hue, Saturation, Value) color space or the LAB (CIELab color model) color space through color space conversion.
It should be understood that the image to be detected may also be converted into other color spaces to determine the brightness value of each of its pixels.
After the brightness of each pixel is determined through color space conversion of the image to be detected, the light source area can be determined based on threshold segmentation; the light source area can thus be determined accurately and quickly, improving the efficiency of image detection.
In a possible implementation of the first aspect, determining the foreground area of the image to be detected comprises:
detecting a foreground target of the image to be detected;
determining the position of the foreground target in the image to be detected, and determining the position of the foreground target in the image to be detected as the foreground area of the image to be detected.
It should be understood that the foreground target may refer to a target with dynamic characteristics in the image to be detected, such as a person or an animal; the foreground target may also refer to a static scene relatively close to the viewer, such as flowers or food.
In a possible implementation of the first aspect, determining the degree of blur of the image to be detected based on the light source area and the foreground area comprises:
determining the degree of blur of the image to be detected based on the number of all pixels in the light source area and the number of all pixels in the foreground area.
In combination with the foregoing possible implementation, determining the degree of blur of the image to be detected based on the number of all pixels in the light source area and the number of all pixels in the foreground area comprises:
judging whether the function value of an increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area is greater than a predetermined threshold;
if the function value of the increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area is greater than the predetermined threshold, determining that the image to be detected is a blurred image.
In a possible implementation of the first aspect, determining the degree of blur of the image to be detected based on the light source area and the foreground area comprises:
determining the degree of blur of the image to be detected based on the number of all pixels in the light source area and the area of the foreground area.
In a possible implementation of the first aspect, acquiring the image to be detected comprises:
in a first shooting mode, acquiring a preview frame image;
correspondingly, determining the degree of blur of the image to be detected based on the light source area and the foreground area comprises:
if the preview frame image is determined to be a blurred image based on the light source area and the foreground area, switching the current shooting mode from the first shooting mode to a second shooting mode, wherein the first shooting mode and the second shooting mode are different shooting modes.
Exemplarily, the first shooting mode is a Torch mode, and the second shooting mode is a Flash mode. The switch from the first shooting mode to the second shooting mode is specifically: the electronic device sends a control instruction to its flash module to switch the flash from the always-on mode to a mode in which the flash fires once at the moment of capture.
In a second aspect, an embodiment of this application provides an image detection apparatus, comprising:
an image acquisition module, configured to acquire an image to be detected;
a first determining module, configured to determine a light source area of the image to be detected and a foreground area of the image to be detected;
a second determining module, configured to determine the degree of blur of the image based on the light source area and the foreground area.
In a third aspect, an embodiment of this application provides a terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the image detection method of the first aspect are implemented.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the image detection method of the first aspect are implemented.
In a fifth aspect, an embodiment of this application provides a computer program product which, when run on a terminal device, causes the terminal device to execute the image detection method of any one of the implementations of the first aspect.
It can be understood that, for the beneficial effects of the second to fifth aspects, reference may be made to the relevant description of the first aspect, which is not repeated here.
Compared with the prior art, the embodiments of this application have the beneficial effect that light source area detection and foreground area detection are performed on the image to be detected to determine the influence of the light source on its clarity, so that whether an image captured under backlight conditions is blurred can be effectively detected.
FIG. 1a is a schematic diagram of an image captured by an existing electronic device in Torch mode without a backlight source;
FIG. 1b is a schematic diagram of an image captured by an existing electronic device in Torch mode under a backlight source with a light intensity of 23 lux and a light source area of 50%;
FIG. 1c is a schematic diagram of an image captured by an existing electronic device in Flash mode without a backlight source;
FIG. 1d is a schematic diagram of an image captured by an existing electronic device in Flash mode under a backlight source with a light intensity of 23 lux and a light source area of 50%;
FIG. 2 is a schematic structural diagram of an electronic device to which the image detection method provided by an embodiment of this application is applicable;
FIG. 3 is a schematic diagram of the software architecture of an electronic device to which the image detection method provided by an embodiment of this application is applicable;
FIG. 4a is a schematic diagram of a group of display interfaces provided by an embodiment of this application;
FIG. 4b is a schematic diagram of another group of display interfaces provided by an embodiment of this application;
FIG. 5 is a schematic flowchart of the image detection method provided by an embodiment of this application;
FIG. 6a is a schematic diagram of an image to be detected in its current color space;
FIG. 6b is a schematic diagram of the image to be detected after light source segmentation;
FIG. 7 is a schematic diagram of a foreground target provided by an embodiment of this application;
FIG. 8a is a schematic diagram of an image provided by this application in which the foreground target is a foam box;
FIG. 8b is a schematic diagram of another image provided by this application in which the foreground target is a foam box;
FIG. 9 is a schematic diagram of an image to be detected in the image detection method provided by an embodiment of this application;
FIG. 10 is a schematic structural diagram of the image detection apparatus provided by an embodiment of this application;
FIG. 11 is a schematic structural diagram of the electronic device provided by an embodiment of this application.
In the following description, specific details such as particular system structures and technologies are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of this application. However, it should be clear to those skilled in the art that this application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, apparatuses, circuits, and methods are omitted so that unnecessary details do not obscure the description of this application.
It should be understood that, when used in the specification and the appended claims of this application, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the term "and/or" used in the specification and the appended claims of this application refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in the specification and the appended claims of this application, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In addition, in the description of the specification and the appended claims of this application, the terms "first", "second", "third", etc. are used only to distinguish the descriptions and cannot be understood as indicating or implying relative importance.
References in this specification to "one embodiment" or "some embodiments" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of this application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in yet other embodiments", etc. appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "comprising", "including", "having", and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
It should be noted that backlight as referred to in this application is the situation in which the photographed subject is located exactly between the light source and the camera. This situation easily causes underexposure of the subject, so that the captured image cannot clearly show details of facial skin such as fine lines, nasolabial folds, dark circles, red areas, acne, pores, pigmented spots, and blackheads. Images captured in different shooting modes show different degrees of clarity in the presence of a backlight source. The shooting modes include a first shooting mode in which the flash is always on (Torch mode) and a second shooting mode in which the flash fires at the moment of capture (Flash mode).
FIG. 1a and FIG. 1c are images captured by an existing electronic device without a backlight source in Torch mode and Flash mode, respectively; FIG. 1b and FIG. 1d are images captured by an existing electronic device in Torch mode and Flash mode, respectively, under a backlight source with the same light intensity (23 lux) and the same light source area (50%). As can be seen from FIG. 1a and FIG. 1c, without a backlight source both Torch mode and Flash mode can capture the detail features (acne) clearly. Comparing FIG. 1c and FIG. 1d, the backlight source has little influence on the image captured in Flash mode, which can still capture the detail features (acne). However, comparing FIG. 1a and FIG. 1b, the backlight source has an obvious influence on the clarity of the image captured in Torch mode and blurs the detail features (acne). Therefore, the image detection method of the embodiments of this application is mainly used to detect images captured in Torch mode, and can switch the current shooting mode from Torch mode to Flash mode when the image to be detected is detected to be blurred.
In practical applications, shooting was performed at different light intensities and different light source areas using a bisection method, and a light intensity of 23 lux and a light source area of 25% were determined as the critical values for whether an image captured in Torch mode is blurred. Among these, the light source area is the most important factor affecting whether the foreground is blurred: for the same foreground (the same face in different photos can be approximately regarded as the same foreground), the larger the light source area, the greater the influence on the clarity of the foreground; if the foreground is large enough and the light source area small enough, the influence of the light source area on the clarity of the foreground can be ignored. Therefore, the key parameter for detecting whether the detail features of the image to be detected are blurred is the relationship between the light source area and the foreground area. Based on the detection of the light source area and the foreground area, it can be determined whether the image to be detected suffers from backlight blur; if so, the image is not used as the object of facial skin condition evaluation, thereby improving the accuracy of the facial skin condition evaluation. It should be noted that the foreground refers to the person or object in the frame located in front of, or close to the front of, the subject. The foreground in the embodiments of this application may include many object types, such as people, vehicles, plants, animals, buildings, the ground, the sky, tables, chairs, door frames, and other objects.
The image detection method provided by the embodiments of this application can be applied to terminal devices such as electronic devices, tablet computers, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA); the embodiments of this application place no restriction on the specific type of terminal device.
By way of example and not limitation, when the electronic device is a wearable device, the wearable device may also be a general term for wearable devices developed by applying wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device worn directly on the body or integrated into the user's clothes or accessories. A wearable device is not only a hardware device; it also realizes powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-sized devices that can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used together with other devices such as smartphones, for example various smart bracelets and smart jewelry for monitoring physical signs.
FIG. 2 shows a block diagram of part of the structure of the electronic device provided by an embodiment of this application. Referring to FIG. 2, the electronic device includes components such as a radio frequency (RF) circuit 110, a memory 120, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a wireless fidelity (WiFi) module 170, a processor 180, a power supply 190, and a camera 191. Those skilled in the art can understand that the electronic device structure shown in FIG. 2 does not constitute a limitation on the electronic device, which may include more or fewer components than shown, or combine certain components, or arrange the components differently.
The components of the electronic device are described in detail below with reference to FIG. 2:
The RF circuit 110 can be used to receive and send signals during the sending and receiving of information or during a call; in particular, after receiving the downlink information of the base station, it passes it to the processor 180 for processing, and it also sends uplink data to the base station. Typically, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 110 can also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to the Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), etc.
The memory 120 can be used to store software programs and modules; the processor 180 executes the various functional applications and data processing of the electronic device by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function (such as a sound playback function, an image playback function, etc.), and the data storage area may store data created according to the use of the electronic device (such as audio data, a phone book, etc.). In addition, the memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other solid-state storage device.
The input unit 130 can be used to receive input numeric or character information and to generate key signal inputs related to the user settings and function control of the electronic device 100. Specifically, the input unit 130 may include a touch panel 131 and other input devices 132. The touch panel 131, also called a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 131 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch panel 131 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 180, and can receive and execute commands sent by the processor 180. In addition, the touch panel 131 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 131, the input unit 130 may also include other input devices 132, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, etc.
The display unit 140 can be used to display information input by the user or provided to the user, as well as the various menus of the electronic device. The display unit 140 may include a display panel 141, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 131 may cover the display panel 141; when the touch panel 131 detects a touch operation on or near it, it transmits it to the processor 180 to determine the type of the touch event, and the processor 180 then provides corresponding visual output on the display panel 141 according to the type of the touch event. Although in FIG. 2 the touch panel 131 and the display panel 141 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 131 and the display panel 141 may be integrated to implement the input and output functions of the electronic device.
The electronic device 100 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 141 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 141 and/or the backlight when the electronic device is moved to the ear. As one type of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used in applications that recognize the posture of the electronic device (such as switching between landscape and portrait orientation, related games, magnetometer posture calibration) and in vibration-recognition-related functions (such as a pedometer and tapping). Other sensors that can also be configured on the electronic device, such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, are not described here.
The audio circuit 160, a speaker 161, and a microphone 162 can provide an audio interface between the user and the electronic device. The audio circuit 160 can transmit the electrical signal converted from received audio data to the speaker 161, which converts it into a sound signal for output; conversely, the microphone 162 converts the collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data; after the audio data is output to the processor 180 for processing, it is sent via the RF circuit 110 to, for example, another electronic device, or the audio data is output to the memory 120 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 170, the electronic device can help the user send and receive e-mail, browse web pages, access streaming media, etc.; it provides the user with wireless broadband Internet access. Although FIG. 2 shows the WiFi module 170, it can be understood that it is not an essential component of the electronic device 100 and can be omitted as needed without changing the essence of the invention.
The processor 180 is the control center of the electronic device; it connects the various parts of the entire electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 120 and calling the data stored in the memory 120, thereby monitoring the electronic device as a whole. Optionally, the processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 180.
The electronic device 100 also includes a power supply 190 (such as a battery) that supplies power to the various components; preferably, the power supply can be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging, and power consumption management are handled through the power management system.
The electronic device 100 may also include a camera 191 for capturing still images or video. An object passes through the lens to generate an optical image projected onto the photosensitive element, which may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. Optionally, the camera may be located on the front or the back of the electronic device 100; the embodiments of this application place no restriction on this.
Optionally, the electronic device 100 may include a single camera, dual cameras, or triple cameras, etc.; the embodiments of this application place no restriction on this.
For example, the electronic device 100 may include three cameras: one main camera, one wide-angle camera, and one telephoto camera.
Optionally, when the electronic device 100 includes multiple cameras, they may all be front-facing, or all rear-facing, or some front-facing and the others rear-facing; the embodiments of this application place no restriction on this.
In addition, although not shown, the electronic device 100 may also include a flash module and the like, which is not described here.
In addition, although not shown, the electronic device 100 may also include a Bluetooth module and the like, which is not described here.
In addition, although not shown, the electronic device 100 may also include a neural-network (NN) processing unit (NPU), which processes input information quickly by drawing on the structure of biological neural networks, for example the transmission pattern between neurons in the human brain, and can also continuously learn by itself. Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, for example image recognition, face recognition, speech recognition, and text understanding.
The software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. FIG. 3 is a schematic diagram of the software structure of the electronic device 100 according to an embodiment of this application. Taking the Android operating system as an example, in some embodiments the Android system is divided into four layers: the application layer, the application framework layer (FWK), the system layer, and the hardware abstraction layer, with the layers communicating through software interfaces.
As shown in FIG. 3, the application layer may include a series of application packages, such as SMS, calendar, camera, video, navigation, gallery, and calls.
The application framework layer provides an application programming interface (API) and a programming framework for the applications of the application layer. The application framework layer may include some predefined functions, for example functions for receiving events sent by the application framework layer.
As shown in FIG. 3, the application framework layer may include a window manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs; it can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc. A content provider is used to store and retrieve data and make it accessible to applications; the data may include videos, images, audio, calls made and received, browsing history and bookmarks, a phone book, and the like.
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar; it can be used to convey informational messages that disappear automatically after a short stay without user interaction, for example to notify of download completion or provide message reminders. The notification manager may also present notifications in the top status bar of the system in the form of charts or scroll-bar text, such as notifications from applications running in the background, or notifications that appear on the screen in the form of dialog windows. For example, text information is displayed in the status bar, a prompt tone sounds, the electronic device vibrates, or an indicator light flashes.
The application framework layer may also include:
a view system, which includes visual controls such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. A display interface may be composed of one or more views; for example, a display interface including an SMS notification icon may include a view for displaying text and a view for displaying pictures.
A telephony manager is used to provide the communication functions of the electronic device 100, for example managing call states (including connected, hung up, etc.).
The system layer may include multiple functional modules, for example a sensor service module, a physical state recognition module, and a 3D graphics processing library (for example, OpenGL ES).
The sensor service module monitors the sensor data uploaded by the various sensors of the hardware layer and determines the physical state of the electronic device 100;
the physical state recognition module analyzes and recognizes user gestures, faces, and the like;
the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, compositing, layer processing, and the like.
The system layer may also include:
a surface manager, which manages the display subsystem and provides the fusion of 2D and 3D layers for multiple applications;
a media library, which supports the playback and recording of audio and video in multiple common formats, as well as static image files. The media library can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The hardware abstraction layer is the layer between hardware and software. It may include a display driver, a camera driver, a sensor driver, etc., used to drive the related hardware of the hardware layer, such as the display screen, camera, and sensors.
The system layer may also include an image processing library. After the camera application is started, it can obtain the image to be detected captured by the camera of the electronic device, and perform light source area detection and foreground area detection on the image to determine the influence of the light source on its clarity, thereby determining whether the image suffers from backlight blur.
The following embodiments can be implemented on an electronic device 100 having the above hardware/software structure. Taking the electronic device 100 as an example, the image detection method provided by the embodiments of this application is described below.
FIG. 4a shows a graphical user interface (GUI) of the electronic device (such as a mobile phone) shown in FIG. 2 and FIG. 3; the GUI may be the desktop 401 of the electronic device. When the electronic device detects that the user clicks the icon 402 of the facial skin condition evaluation application (APP) on the desktop 401, it can start the facial skin condition evaluation application. FIG. 4b shows another GUI, which may be called the processing interface 403 of the facial skin condition evaluation APP. When the electronic device detects that the user clicks the photo-acquisition icon 404 on the processing interface 403, it can open the photo album so that the user can select a picture from it to load into the image frame 405. Alternatively, after the user clicks the icon 404, the camera is started and, in the first shooting mode, an image is captured through the camera and loaded into the image frame 405; the preview picture in the first shooting mode after the camera is started can also be displayed in the image frame 405.
After the image to be detected is loaded into the image frame 405, the electronic device can automatically begin detecting the image in the image frame 405 to determine whether it is blurred.
In one possible implementation, the processing interface 403 further includes a first control for triggering blur detection. When the electronic device 100 detects that the user clicks the first control, it detects whether the image to be detected in the image frame 405 is blurred. Specifically, this can be done by determining the light source area and the foreground area of the image to be detected, determining the degree of blur of the image based on the light source area and the foreground area, and judging whether the image is blurred based on the degree of blur.
Specifically, during blur detection, the electronic device 100 performs color space conversion on the image to be detected to obtain the brightness value of each pixel of the converted image, compares the brightness value of each pixel with a preset brightness threshold, and determines the light source area from the pixels exceeding the preset brightness threshold. It then determines the foreground target of the image through a feature point detection method and determines the foreground area by locating the foreground target in the image. The degree of blur of the image is calculated based on the number of all pixels in the light source area and the number of all pixels in the foreground area and is compared with a predetermined threshold: if the degree of blur is greater than or equal to the predetermined threshold, the image to be detected is determined to suffer from backlight blur; if it is smaller than the predetermined threshold, the image is determined not to suffer from backlight blur.
If an image captured under backlight conditions, or a frame of the preview picture under backlight conditions, is detected to be blurred, the current shooting mode is switched from the first shooting mode to the second shooting mode, where the first shooting mode and the second shooting mode are different shooting modes. Alternatively, if the image to be detected obtained from the photo album is detected to be blurred under backlight conditions, the user is prompted to select another image or to retake one.
Referring to FIG. 5, FIG. 5 is a schematic implementation flowchart of the image detection method provided by an embodiment of this application; the method can be implemented in an electronic device (such as a mobile phone or tablet computer) as shown in FIG. 2 and FIG. 3. As shown in FIG. 5, the method may include the following steps:
S101: Acquire an image to be detected.
The image to be detected may be a currently captured image, a frame of the preview picture after the camera is started, an image in the photo album, etc.
Exemplarily, when the user clicks the icon 404 shown in FIG. 4b, the electronic device opens the photo album for the user to select an already-taken photo as the image to be detected, which can then be loaded into the image frame 405. Alternatively, after the user clicks the icon 404, the camera is started and an image is captured in the first shooting mode as the image to be detected and loaded into the image frame 405; the preview picture in the first shooting mode can also be displayed in the image frame 405, and a frame of the preview picture can be taken as the image to be detected.
Specifically, in order to evaluate the facial skin condition, the acquired image should contain a face image. After an image containing a face is acquired, the partial image containing only the face can be cropped out as the image to be detected; that is, the image to be detected acquired in this embodiment may refer to an image containing a face image.
S102: Determine the light source area of the image to be detected and the foreground area of the image to be detected.
Specifically, light source area detection is performed on the image to be detected to determine its light source area: color space conversion is performed on the image, the brightness of all pixels of the converted image is determined, and the light source area is determined based on the brightness of all pixels. The color-space-converted image is then converted back to the original color space, the foreground target of the image is detected based on a feature point detection mode, and the position of the foreground target in the image is determined, thereby determining the foreground area.
Exemplarily, when the electronic device 100 detects that the image to be detected has been loaded into the image frame 405, it automatically performs blur detection on the image loaded in the image frame 405.
Exemplarily, when the user clicks the first control, blur detection of the image to be detected loaded in the image frame 405 can be triggered. The electronic device 100 then performs light source area detection on the image to determine the light source area. Specifically, the image is color-space converted to obtain the brightness value of each pixel, and the light source area is determined by threshold segmentation; the converted image is then converted back to the original color space, the foreground target of the image is detected based on a feature point detection mode, and the position of the foreground target in the image is determined, thereby determining the foreground area.
In one embodiment, determining the light source area of the image to be detected includes:
performing color space conversion on the image to be detected, and obtaining the brightness value of each pixel of the color-space-converted image;
determining the area formed by pixels whose brightness value is greater than a preset brightness threshold as the light source area of the image to be detected.
Specifically, the light source area can be determined from the brightness values of the pixels of the image using threshold segmentation. First, the image is converted through color space conversion into the HSV (Hue, Saturation, Value) color space or the LAB (CIELab color model) color space, which yields the brightness value of every pixel in the image. It can be understood that the image may also be converted into other color spaces to determine the brightness value of each pixel, and no limitation is placed on this.
Specifically, if the image is converted into the HSV color space, the V value (Value) of each pixel is that pixel's brightness value; if the image is converted into the LAB color space, the L value of each pixel is that pixel's brightness value. It should be noted that the brightness value may also be a monotonically increasing/decreasing mathematical transformation of the original brightness value, such as L' = 2L.
Specifically, the preset brightness threshold can be determined by inspecting the segmented light source map generated by a mask-generation method. As shown in FIG. 6a and FIG. 6b, FIG. 6a is the image in its current color space, and FIG. 6b is the image after light source segmentation. The brightness value of each pixel of the segmented image is compared with the preset brightness threshold; if a pixel's brightness value is greater than the preset brightness threshold, the pixel is determined to belong to the light source area. Therefore, it suffices to determine the area formed by pixels whose brightness value is greater than the preset brightness threshold as the light source area of the image to be detected.
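As a minimal illustrative sketch of this thresholding step (not taken from the patent: OpenCV and the concrete threshold value 230 are assumptions made here for illustration), the light source pixel count might be obtained as follows:

```python
import cv2

def light_source_pixel_count(image_bgr, v_threshold=230):
    # Convert to HSV; the V channel is the per-pixel brightness value.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]
    # Threshold segmentation: pixels brighter than the preset
    # brightness threshold are treated as light source pixels.
    mask = v > v_threshold
    return int(mask.sum()), mask  # pixel count a, and the light source mask
```

Converting to the LAB color space and thresholding the L channel would work analogously, since only a per-pixel brightness value is needed.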
In one embodiment, determining the foreground area of the image to be detected includes:
detecting the foreground target of the image to be detected;
determining the position of the foreground target in the image to be detected, and determining the position of the foreground target in the image to be detected as the foreground area of the image to be detected.
It should be noted that the foreground target may refer to a target with dynamic characteristics in the image to be detected, such as a person or an animal; the foreground target may also refer to a static scene relatively close to the viewer, such as flowers or food.
In this embodiment, a trained foreground detection model can be used to detect the foreground target in the image to be detected. Exemplarily, the foreground detection model may be a model with foreground target detection capability such as Single Shot Multibox Detection (SSD). Of course, other foreground detection approaches may also be used, for example detecting through a pattern recognition algorithm whether a foreground target (such as a face) exists in the image to be detected, and, after the foreground target is detected, determining its position in the image through a target localization algorithm or a target tracking algorithm.
It should be noted that other foreground target detection schemes that those skilled in the art can easily conceive within the technical scope disclosed by the present invention shall also fall within the protection scope of the present invention and are not enumerated here.
Taking the use of a trained foreground detection model to detect the foreground target in the image to be detected as an example, the specific training process of the foreground detection model is as follows (a minimal sketch of the loop is given after these steps):
sample pictures and the detection results corresponding to the sample pictures are obtained in advance, where the detection result of a sample picture includes the category and position of each foreground target in that sample picture;
the initial foreground detection model is used to detect the foreground targets in the sample pictures, and the detection accuracy of the initial foreground detection model is calculated against the pre-obtained detection results corresponding to the sample pictures;
if the detection accuracy is lower than a preset first detection threshold, the parameters of the initial foreground detection model are adjusted and the sample pictures are detected again with the parameter-adjusted model, until the detection accuracy of the adjusted model is greater than or equal to the first detection threshold, at which point the model is taken as the trained foreground detection model. Parameter adjustment methods include, but are not limited to, stochastic gradient descent and momentum update algorithms.
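A minimal sketch of this training loop follows; the `model.detect` and `optimizer.step` interfaces are hypothetical placeholders, since the patent does not fix a concrete framework:

```python
def train_foreground_detector(model, samples, truths, optimizer,
                              first_detection_threshold=0.9, max_rounds=100):
    # Repeat detection and parameter adjustment until the detection
    # accuracy reaches the preset first detection threshold.
    for _ in range(max_rounds):
        results = [model.detect(img) for img in samples]
        accuracy = sum(r == t for r, t in zip(results, truths)) / len(samples)
        if accuracy >= first_detection_threshold:
            break
        optimizer.step(model, samples, truths)  # e.g. stochastic gradient descent
    return model
```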
The foreground target is determined by performing foreground target detection, based on the above foreground detection model, on the image converted back to the original color space; the foreground area is then determined from the position of the foreground target in the image.
Exemplarily, the foreground target is a face image area; the foreground detection model may perform facial feature detection based on the HIA276 feature point detection method and then box out the foreground area containing the face. In this embodiment, the foreground area detection can be implemented by the neural-network (NN) processing unit (NPU) of the electronic device 100: the NPU performs face recognition on the image to be detected, frames the foreground area containing the face (the foreground target) with a rectangular box, and automatically outputs the area of the target rectangular region.
Exemplarily, the rectangular region shown in FIG. 7 is the foreground area; the specific position of the foreground area can be determined from the position coordinates, in the image to be detected, of the pixels at the four vertices of the rectangle. It should be understood that the foreground area containing the face (the foreground target) may also be framed by a circle, a triangle, a pentagon, or various other shapes.
The area of the foreground area can be obtained by calculating the area of the rectangular region: the position coordinates of the four vertices of the rectangle determine its length and width, and hence its area, which is the area of the foreground area.
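To illustrate the bounding-rectangle step, here is a sketch only: the HIA276 feature-point detector named above is not publicly available, so OpenCV's stock Haar-cascade face detector stands in for it here as an assumed substitute:

```python
import cv2

def foreground_region(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Take the largest detected face as the foreground target; the
    # rectangle's width times height is the foreground area b.
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])
    return (x, y, w, h), int(w) * int(h)
```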
It should be noted that the foreground target may also be another object. Exemplarily, as shown in FIG. 8a and FIG. 8b, the foreground target is a foam box.
S103: Determine the degree of blur of the image to be detected based on the light source area and the foreground area.
Specifically, the degree of blur of the image to be detected is determined based on the number of all pixels in the light source area and the number of all pixels in the foreground area.
Specifically, since the size of the light source area is the most important factor affecting whether the image to be detected is blurred, and the size of the foreground area is also an important factor, the function value of a strictly increasing function of the ratio of all pixels in the light source area to all pixels in the foreground area can be taken as the degree to which the backlight source would blur the image to be detected; that is, the function value of a strictly increasing function of the ratio of the number of pixels whose brightness value is greater than the preset brightness threshold to the number of all pixels in the foreground area is taken as the degree of blur S of the image to be detected.
In one embodiment, the degree of blur S is calculated as S = f(a, b) = a/b, where a is the number of all pixels in the light source area, b is the number of all pixels in the foreground area, and S is the degree of blur.
In another embodiment, the degree of blur S is calculated as S = f(a, b) = log(a/b), where a is the number of all pixels in the light source area, b is the number of all pixels in the foreground area, and S is the degree of blur.
It should be noted that as long as the degree of blur is a strictly increasing function of a/b, it can reflect the influence of the light source area and the foreground area on the degree of blur; the above f(a, b) therefore has infinitely many variants, such as f(a, b) = log2(a/b), which are not enumerated here.
In another embodiment, the degree of blur of the image to be detected is determined based on the number of all pixels in the light source area and the area of the foreground area: the function value of a strictly increasing function of the ratio of the number of all pixels in the light source area to the area of the foreground area is taken as the degree of blur S of the image to be detected.
The degree of blur S is calculated as S = f(a, b) = a/b, where a is the number of all pixels in the light source area, b is the area of the foreground area, and S is the degree of blur.
In another embodiment, the degree of blur S is calculated as S = f(a, b) = log(a/b), where a is the number of all pixels in the light source area, b is the area of the foreground area, and S is the degree of blur.
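A sketch of this computation (the base of the logarithm is not specified in this description, so base 10 is assumed here purely for illustration):

```python
import math

def blur_degree(a, b, use_log=False):
    # a: number of pixels in the light source area
    # b: number of pixels in (or area of) the foreground area
    # S is a strictly increasing function of the ratio a/b.
    ratio = a / b
    return math.log10(ratio) if use_log else ratio
```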
In one embodiment, S103 includes:
judging whether the function value of the increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area is greater than a predetermined threshold;
if the function value of the increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area is greater than the predetermined threshold, determining that the image to be detected is a blurred image.
Specifically, the degree of blur of the image to be detected (that is, the function value of the increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area) is compared with the predetermined threshold: if the degree of blur is greater than or equal to the predetermined threshold, the detection result is that the image to be detected is a blurred image; if the degree of blur is smaller than the predetermined threshold, the detection result is that the image to be detected is a clear image.
It should be noted that the predetermined threshold can be obtained by parameter tuning after training and testing on a large number of sample images; that is, if the degree of blur of the image to be detected is greater than or equal to this predetermined threshold, the image can be determined to be a blurred image.
Specifically, an image classifier can be built based on a KNN (K-Nearest Neighbor) classifier. The principle of a KNN classifier is to compare each image in the test set with every image in the training set and assign to the test image the label of the most similar training image. For example, in CIFAR-10, 32x32x3 pixel blocks are compared; the simplest method is pixel-by-pixel comparison, finally summing all the differences to obtain the difference value between two images, and the image category is then decided according to a classification decision rule (such as majority voting). When the image classifier is built, initial model parameters and an initial predetermined threshold are assigned to it in advance; training and testing the classifier is the process of tuning the initial model parameters and the predetermined threshold, which yields an image classifier that meets the classification accuracy requirement.
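As an illustration of the pixel-difference KNN principle described above (a generic sketch, not the tuned classifier of this embodiment; it assumes all images share the same shape, as in the 32x32x3 CIFAR-10 example):

```python
import numpy as np

def knn_classify(test_img, train_imgs, train_labels, k=5):
    # Compare the test image with every training image by summed
    # per-pixel absolute difference, then take a majority vote over
    # the labels of the k most similar training images.
    diffs = [int(np.abs(test_img.astype(np.int32) - t.astype(np.int32)).sum())
             for t in train_imgs]
    nearest = np.argsort(diffs)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```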
Specifically, it is also possible to traverse and optimize over the training set sample images to find the trainer, or combination of trainers, with the best classification performance, and take that trainer or combination as the image classifier of this embodiment.
Specifically, the image classifier is trained and tested on sample images to determine the predetermined threshold.
Specifically, the sample images include training set sample images and test set sample images; the training set sample images include first training set sample images and second training set sample images, and the test set sample images include first test set sample images and second test set sample images. The first training set sample images are training set sample images labeled "blurred", the second training set sample images are training set sample images labeled "clear", the first test set sample images are test set sample images labeled "blurred", and the second test set sample images are test set sample images labeled "clear". For example, 100 sample images are divided into two groups: the first group is 50 sample images labeled "blurred", and the second group is 50 sample images labeled "clear". Then 30 of the 50 images labeled "blurred" are taken as the first training set sample images and 20 of them as the first test set sample images, while 30 of the 50 images labeled "clear" are taken as the second training set sample images and 20 of them as the second test set sample images.
There are many ways to train an image classifier, such as the AdaBoost method based on Haar features, the SVM (Support Vector Machine) method, and so on. Specifically: first, the training set and test set sample images are constructed; then traversal optimization is performed on the training set sample images to find the trainer or combination of trainers with the best classification performance, which is taken as the image classifier of this embodiment; finally, the classification accuracy of the found image classifier is verified on the test set sample images, and the classifier can be used once it meets the accuracy requirement (for example, an accuracy of 70%). If the requirement is not met, the initial model parameters and the predetermined threshold are adjusted, and the whole process is repeated until the accuracy requirement is finally met, so that the classifier can determine the predetermined threshold.
In this embodiment, after the above parameter tuning, the predetermined threshold is determined to be 0.12 when the degree of blur S = a/b, and the preset blur threshold is determined to be -0.3269 when the degree of blur S = log(a/b).
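Putting these tuned thresholds together with the blur degree above (reusing the hypothetical `blur_degree` sketch; the values 0.12 and -0.3269 are the ones reported for this embodiment):

```python
def is_backlight_blurred(a, b, use_log=False):
    # Predetermined thresholds after parameter tuning:
    # 0.12 for S = a/b, and -0.3269 for S = log(a/b).
    threshold = -0.3269 if use_log else 0.12
    return blur_degree(a, b, use_log) >= threshold
```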
It should be noted that the image detection method provided by this embodiment can be used not only for backlight blur detection of faces but also for backlight blur detection of other objects.
Exemplarily, as shown in FIG. 8a and FIG. 8b, the images of FIG. 8a and FIG. 8b are each loaded into the image frame 405 and the first control is clicked to trigger detection. The detection result corresponding to FIG. 8a is: the image is clear (the image does not suffer from backlight blur); the detection result corresponding to FIG. 8b is: the image is blurred (the image suffers from backlight blur).
In this embodiment, after acquiring the image to be detected, the electronic device can automatically perform backlight blur detection on it, or perform backlight blur detection after the user clicks the first control, and output the corresponding detection result. If backlight blur is detected, the user is prompted to re-acquire an image, for example by selecting another image or switching to Flash mode and shooting again, thereby improving the accuracy of the facial skin evaluation.
Exemplarily, as shown in FIG. 9, after the image of FIG. 9 is loaded into the image frame 405 and the first control is clicked to trigger detection, the detection result corresponding to FIG. 9 is: the image is blurred (the image suffers from backlight blur).
In one embodiment, acquiring the image to be detected means acquiring a preview frame image in the first shooting mode. Correspondingly, determining the degree of blur of the image to be detected based on the light source area and the foreground area includes:
if the preview frame image is determined to be a blurred image based on the light source area and the foreground area, switching the current shooting mode from the first shooting mode to the second shooting mode, where the first shooting mode and the second shooting mode are different shooting modes.
In this embodiment, the acquired image to be detected is a preview frame image in the first shooting mode. Since the preview frame image in the first shooting mode is blurred, an image captured in the first shooting mode would necessarily be blurred as well, so the current shooting mode needs to be switched from the first shooting mode to the second shooting mode.
Exemplarily, the first shooting mode is Torch mode, and the second shooting mode is Flash mode. The switch of the current shooting mode from the first shooting mode to the second shooting mode is specifically: the electronic device 100 sends a control instruction to its flash module to switch the flash from the always-on mode to a mode in which the flash fires once at the moment of capture.
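A sketch of the preview-frame check and mode switch, chaining the earlier sketches; the `CameraController` interface is hypothetical, since the real control path is a device-specific instruction to the flash module:

```python
class CameraController:
    def set_flash_mode(self, mode):
        # Placeholder: a real device would send a control instruction to
        # the flash module here ("torch" = always on, "flash" = fire once
        # at the moment of capture).
        print(f"flash mode -> {mode}")

def check_preview_frame(camera, preview_bgr):
    a, _ = light_source_pixel_count(preview_bgr)
    region = foreground_region(preview_bgr)
    if region is None:
        return  # no foreground target detected; nothing to judge
    _, b = region  # treating the rectangle area as b
    if is_backlight_blurred(a, b):
        # Preview frame judged blurred: switch Torch mode -> Flash mode.
        camera.set_flash_mode("flash")
```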
The image detection method provided by the embodiments of this application can, on an artificial intelligence terminal, effectively detect whether an image captured under backlight conditions is blurred: light source area detection and foreground area detection are performed on the image to determine the influence of the light source on the clarity of the image.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.
Corresponding to the image detection method described in the above embodiments, FIG. 10 shows a structural block diagram of the image detection apparatus provided by an embodiment of this application; for ease of description, only the parts related to the embodiments of this application are shown.
Referring to FIG. 10, the image detection apparatus includes:
an image acquisition module 11, configured to acquire an image to be detected;
a first determining module 12, configured to determine the light source area of the image to be detected and the foreground area of the image to be detected;
a second determining module 13, configured to determine the degree of blur of the image based on the light source area and the foreground area.
Optionally, the first determining module 12 includes:
a conversion unit, configured to perform color space conversion on the image to be detected and obtain the brightness value of each pixel of the color-space-converted image;
a first determining unit, configured to determine the area formed by pixels whose brightness value is greater than a preset brightness threshold as the light source area of the image to be detected.
Optionally, the first determining module 12 includes:
a detection unit, configured to detect the foreground target of the image to be detected;
a second determining unit, configured to determine the position of the foreground target in the image to be detected, and to determine the position of the foreground target in the image to be detected as the foreground area of the image to be detected.
Optionally, the second determining module 13 includes:
a third determining unit, configured to determine the degree of blur of the image to be detected based on the number of all pixels in the light source area and the number of all pixels in the foreground area.
Optionally, the third determining unit includes:
a judging unit, configured to judge whether the function value of the increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area is greater than a predetermined threshold;
a fourth determining unit, configured to determine that the image to be detected is a blurred image if the function value of the increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area is greater than the predetermined threshold.
Optionally, the third determining unit is further configured to determine the degree of blur of the image to be detected based on the number of all pixels in the light source area and the area of the foreground area.
Optionally, the image acquisition module 11 includes:
a first image acquisition unit, configured to acquire a preview frame image in the first shooting mode.
Correspondingly, the second determining module 13 is further configured to switch the current shooting mode from the first shooting mode to the second shooting mode if the preview frame image is determined to be a blurred image based on the light source area and the foreground area, where the first shooting mode and the second shooting mode are different shooting modes.
It should be noted that, since the information exchange between the above modules/units, the execution process, and so on are based on the same concept as the method embodiments of this application, their specific functions and technical effects can be found in the method embodiment section and are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division of the above functional units and modules is used only as an example. In practical applications, the above functions can be assigned to different functional units and modules as needed; that is, the internal structure of the apparatus is divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Therefore, the image detection apparatus provided by this embodiment can likewise determine the influence of the light source on the clarity of the image by performing light source area detection and foreground area detection on the image, thereby determining whether the image suffers from backlight blur and improving the accuracy of the facial skin condition evaluation result.
FIG. 11 is a schematic structural diagram of the electronic device provided by an embodiment of this application. As shown in FIG. 11, the electronic device 11 of this embodiment includes: at least one processor 110 (only one is shown in FIG. 11), a memory 111, and a computer program 112 stored in the memory 111 and executable on the at least one processor 110; when the processor 110 executes the computer program 112, the steps in any of the foregoing image detection method embodiments are implemented.
The electronic device 11 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. The electronic device may include, but is not limited to, the processor 110 and the memory 111. Those skilled in the art can understand that FIG. 11 is only an example of the electronic device 11 and does not constitute a limitation on the electronic device 11; it may include more or fewer components than shown, or combine certain components, or use different components; for example, it may also include input and output devices, network access devices, and so on.
The so-called processor 110 may be a central processing unit (Central Processing Unit, CPU); the processor 110 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In some embodiments, the memory 111 may be an internal storage unit of the electronic device 11, such as a hard disk or memory of the electronic device 11. In other embodiments, the memory 111 may also be an external storage device of the electronic device 11, for example a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the electronic device 11. Further, the memory 111 may include both an internal storage unit of the electronic device 11 and an external storage device. The memory 111 is used to store the operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program; the memory 111 may also be used to temporarily store data that has been output or will be output.
An embodiment of this application also provides a network device, including: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor; when the processor executes the computer program, the steps in any of the foregoing method embodiments are implemented.
An embodiment of this application also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps in each of the foregoing method embodiments are implemented.
An embodiment of this application provides a computer program product; when the computer program product runs on a mobile terminal, the steps in each of the foregoing method embodiments are implemented when the mobile terminal executes it.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application can be completed by a computer program instructing the relevant hardware; the computer program can be stored in a computer-readable storage medium, and when it is executed by a processor, the steps of each of the foregoing method embodiments can be implemented. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may at least include: any entity or apparatus capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, according to legislation and patent practice, computer-readable media may not be electric carrier signals and telecommunication signals.
In the above embodiments, the descriptions of the various embodiments each have their own emphasis; for parts not detailed or described in one embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the apparatus/network device embodiments described above are only illustrative; for example, the division of the modules or units is only a logical function division, and there may be other division ways in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
The above embodiments are only used to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments or substitute equivalents for some of the technical features therein; and these modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all be included within the protection scope of this application.
Claims (10)
- An image detection method, characterized by comprising: acquiring an image to be detected; determining a light source area of the image to be detected and a foreground area of the image to be detected; and determining the degree of blur of the image to be detected based on the light source area and the foreground area.
- The image detection method according to claim 1, characterized in that determining the light source area of the image to be detected comprises: performing color space conversion on the image to be detected, and obtaining the brightness value of each pixel of the color-space-converted image; and determining the area formed by pixels whose brightness value is greater than a preset brightness threshold as the light source area of the image to be detected.
- The image detection method according to claim 1, characterized in that determining the foreground area of the image to be detected comprises: detecting a foreground target of the image to be detected; and determining the position of the foreground target in the image to be detected, and determining the position of the foreground target in the image to be detected as the foreground area of the image to be detected.
- The image detection method according to claim 1, characterized in that determining the degree of blur of the image to be detected based on the light source area and the foreground area comprises: determining the degree of blur of the image to be detected based on the number of all pixels in the light source area and the number of all pixels in the foreground area.
- The image detection method according to claim 4, characterized in that determining the degree of blur of the image to be detected based on the number of all pixels in the light source area and the number of all pixels in the foreground area comprises: judging whether the function value of an increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area is greater than a predetermined threshold; and if the function value of the increasing function of the ratio of the number of all pixels in the light source area to the number of all pixels in the foreground area is greater than the predetermined threshold, determining that the image to be detected is a blurred image.
- The image detection method according to claim 1, characterized in that determining the degree of blur of the image to be detected based on the light source area and the foreground area comprises: determining the degree of blur of the image to be detected based on the number of all pixels in the light source area and the area of the foreground area.
- The image detection method according to any one of claims 1 to 6, characterized in that acquiring the image to be detected comprises: acquiring a preview frame image in a first shooting mode; and correspondingly, determining the degree of blur of the image to be detected based on the light source area and the foreground area comprises: if the preview frame image is determined to be a blurred image based on the light source area and the foreground area, switching the current shooting mode from the first shooting mode to a second shooting mode, wherein the first shooting mode and the second shooting mode are different shooting modes.
- An image detection apparatus, characterized by comprising: an image acquisition module, configured to acquire an image to be detected; a first determining module, configured to determine a light source area of the image to be detected and a foreground area of the image to be detected; and a second determining module, configured to determine the degree of blur of the image based on the light source area and the foreground area.
- An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the image detection method according to any one of claims 1 to 7.
- A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the image detection method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/778,469 US20230245441A9 (en) | 2019-11-22 | 2020-11-13 | Image detection method and apparatus, and electronic device |
EP20890350.0A EP4047549A4 (en) | 2019-11-22 | 2020-11-13 | METHOD AND DEVICE FOR IMAGE DETECTION AND ELECTRONIC DEVICE |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911159693.8A CN112950525B (zh) | 2019-11-22 | 2019-11-22 | Image detection method, apparatus, and electronic device |
CN201911159693.8 | 2019-11-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021098609A1 true WO2021098609A1 (zh) | 2021-05-27 |
Family
ID=75980830
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/128786 WO2021098609A1 (zh) | Image detection method, apparatus, and electronic device | 2019-11-22 | 2020-11-13 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230245441A9 (zh) |
EP (1) | EP4047549A4 (zh) |
CN (1) | CN112950525B (zh) |
WO (1) | WO2021098609A1 (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112950525B (zh) | 2019-11-22 | 2024-09-17 | Huawei Technologies Co., Ltd. | Image detection method, apparatus, and electronic device |
KR102187123B1 (ko) * | 2020-02-12 | 2020-12-04 | KakaoBank Corp. | Server providing a hologram detection service, and hologram detection method |
CN115880300B (zh) * | 2023-03-03 | 2023-05-09 | 北京网智易通科技有限公司 | Image blur detection method and apparatus, electronic device, and storage medium |
CN116818798A (zh) * | 2023-05-31 | 2023-09-29 | 成都瑞波科材料科技有限公司 | Rainbow-pattern detection apparatus and method for a coating process, and coating process equipment |
CN118038310B (zh) * | 2024-01-12 | 2024-10-11 | 广东机电职业技术学院 | Video background elimination method, system, device, and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6529355B2 (ja) * | 2015-06-18 | 2019-06-12 | Canon Inc. | Imaging control apparatus, control method therefor, and program |
US9516237B1 (en) * | 2015-09-01 | 2016-12-06 | Amazon Technologies, Inc. | Focus-based shuttering |
CN106973236B (zh) * | 2017-05-24 | 2020-09-15 | 湖南盘子女人坊文化科技股份有限公司 | Shooting control method and apparatus |
CN107958231B (zh) * | 2017-12-25 | 2022-01-11 | 深圳云天励飞技术有限公司 | Light field image filtering method, face analysis method, and electronic device |
CN108764040B (zh) * | 2018-04-24 | 2021-11-23 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Image detection method, terminal, and computer storage medium |
CN112950525B (zh) | 2019-11-22 | 2024-09-17 | Huawei Technologies Co., Ltd. | Image detection method, apparatus, and electronic device |
2019
- 2019-11-22 CN CN201911159693.8A patent/CN112950525B/zh active Active
2020
- 2020-11-13 WO PCT/CN2020/128786 patent/WO2021098609A1/zh unknown
- 2020-11-13 EP EP20890350.0A patent/EP4047549A4/en active Pending
- 2020-11-13 US US17/778,469 patent/US20230245441A9/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011076198A (ja) * | 2009-09-29 | 2011-04-14 | Mitsubishi Electric Corp | Image processing apparatus, image processing program, and image processing method |
CN103177422A (zh) * | 2011-12-20 | 2013-06-26 | Fujitsu Limited | Backlight compensation method and system |
CN103646392A (zh) * | 2013-11-21 | 2014-03-19 | Huawei Technologies Co., Ltd. | Backlight detection method and device |
CN107451969A (zh) * | 2017-07-27 | 2017-12-08 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Image processing method and apparatus, mobile terminal, and computer-readable storage medium |
CN108734676A (zh) * | 2018-05-21 | 2018-11-02 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Image processing method and apparatus, electronic device, and computer-readable storage medium |
CN110111281A (zh) * | 2019-05-08 | 2019-08-09 | Beijing SenseTime Technology Development Co., Ltd. | Image processing method and apparatus, electronic device, and storage medium |
Non-Patent Citations (1)
Title |
---|
See also references of EP4047549A4 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113298801A (zh) * | 2021-06-15 | 2021-08-24 | 浙江大豪明德智控设备有限公司 | Detection method, apparatus, and system for an integrated seaming machine |
CN117726788A (zh) * | 2023-05-16 | 2024-03-19 | Honor Device Co., Ltd. | Image region positioning method and electronic device |
Also Published As
Publication number | Publication date |
---|---|
US20230245441A9 (en) | 2023-08-03 |
CN112950525A (zh) | 2021-06-11 |
CN112950525B (zh) | 2024-09-17 |
EP4047549A4 (en) | 2022-12-28 |
EP4047549A1 (en) | 2022-08-24 |
US20230005254A1 (en) | 2023-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021098609A1 (zh) | Image detection method, apparatus, and electronic device | |
WO2021135601A1 (zh) | Auxiliary photographing method and apparatus, terminal device, and storage medium | |
CN110147805B (zh) | Image processing method and apparatus, terminal, and storage medium | |
CN111541907B (zh) | Article display method, apparatus, device, and storage medium | |
WO2019052329A1 (zh) | Face recognition method and related product | |
WO2019105457A1 (zh) | Image processing method, computer device, and computer-readable storage medium | |
CN108269530A (zh) | Brightness adjustment method for an AMOLED display screen and related product | |
CN108921941A (zh) | Image processing method and apparatus, storage medium, and electronic device | |
WO2020048392A1 (zh) | Virus detection method and apparatus for an application program, computer device, and storage medium | |
CN111857793B (zh) | Network model training method, apparatus, device, and storage medium | |
CN110839128B (zh) | Photographing behavior detection method, apparatus, and storage medium | |
CN116826892B (zh) | Charging method, charging apparatus, electronic device, and readable storage medium | |
CN111353946B (zh) | Image restoration method, apparatus, device, and storage medium | |
CN110807769B (zh) | Image display control method and apparatus | |
CN111984803A (zh) | Multimedia resource processing method and apparatus, computer device, and storage medium | |
CN111556248B (zh) | Shooting method and apparatus, storage medium, and mobile terminal | |
CN111275607A (zh) | Interface display method and apparatus, computer device, and storage medium | |
CN115775395A (zh) | Image processing method and related apparatus | |
US20240233083A1 (en) | Image fusion method and apparatus, storage medium and mobile terminal | |
CN113518171B (zh) | Image processing method and apparatus, terminal device, and medium | |
CN109816047B (zh) | Method, apparatus, and device for providing labels, and readable storage medium | |
CN112308104A (zh) | Anomaly identification method and apparatus, and computer storage medium | |
CN114756149B (zh) | Method and apparatus for presenting data labels, electronic device, and storage medium | |
CN110458289B (zh) | Multimedia classification model construction method, multimedia classification method, and apparatus | |
WO2024012354A1 (zh) | Display method and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20890350; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2020890350; Country of ref document: EP; Effective date: 20220516 |
| NENP | Non-entry into the national phase | Ref country code: DE |