CN115760931A - Image processing method and electronic device

Image processing method and electronic device

Info

Publication number
CN115760931A
Authority
CN
China
Prior art keywords
image
registered
electronic device
camera
detection result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111032943.9A
Other languages
Chinese (zh)
Inventor
郜文美 (Gao Wenmei)
胡宏伟 (Hu Hongwei)
卢曰万 (Lu Yuewan)
梅苑 (Mei Yuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111032943.9A
Priority to PCT/CN2022/116270 (published as WO2023030398A1)
Publication of CN115760931A
Legal status: Pending

Classifications

    • G06N 3/04: Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T 3/00: Geometric image transformation in the plane of the image
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/13: Image analysis; segmentation; edge detection
    • G06T 7/30: Image analysis; determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/90: Image analysis; determination of colour characteristics

Abstract

The application discloses an image processing method and an electronic device. The method comprises the following steps: the electronic device acquires images captured at the same moment by an RGB camera and a UV camera, namely a first image and a second image; the electronic device registers the first image and the second image; the electronic device performs face detection, feature point positioning, and similar algorithmic processing on the registered first image to obtain a target area; within the target area, the electronic device performs sunscreen detection on the registered second image; and the electronic device fuses the sunscreen detection result with the registered first image. On one hand, the method addresses the poor clarity and unattractive appearance of the second image; on the other hand, the face can be accurately detected and recognized, so that upper-layer applications built on face detection, feature point positioning, and face recognition, such as detection of the region where sunscreen has been applied, can be constructed.

Description

Image processing method and electronic device
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and an electronic device.
Background
With the rapid development of computer technologies, the functions of electronic devices such as smartphones and tablet computers have become increasingly diverse. An ultraviolet (UV) camera can be introduced into an electronic device and combined with a red-green-blue (RGB) camera to photograph and present sunscreen applied to a user's face.
However, a UV camera images only the absorption and reflection of ultraviolet light by objects and can therefore present only a gray-scale image that reproduces detail poorly. As a result, images captured by the UV camera (also referred to as UV images) are neither clear nor visually appealing.
Disclosure of Invention
The embodiments of the present application provide an image processing method and an electronic device, which can solve the problem in the prior art that UV images are unclear and unattractive.
In a first aspect, an embodiment of the present application provides an image processing method, which may include: the electronic device acquires a first image captured by an RGB camera and a second image captured by a UV camera, the first image and the second image being captured at the same moment; the electronic device registers the first image and the second image, so that pixel points of the registered first image correspond one-to-one to pixel points of the registered second image; the electronic device fuses the registered first image and the registered second image to obtain a fused image; and the electronic device displays the fused image.
The method may be carried out by an electronic device (for example, a mobile phone, tablet, computer, notebook, smart mirror, or smart watch) that includes both a red-green-blue (RGB) camera and an ultraviolet (UV) camera, or by a system composed of a photographing device and a display device. By registering the first image and the second image, the registered images can be fused; the fused image combines the clarity of the first image with the information of the second image, so the result is clearer and more attractive.
In some embodiments, the electronic device may identify regions such as a face and a face mask in the registered first image. Because the first image and the second image are registered so that their pixel points correspond one-to-one, the recognition result obtained on the first image can be mapped onto the second image; the recognition result can thus be applied to the second image to detect a sunscreen product within a specific region. The sunscreen product may be sunscreen cream, sunscreen spray, or the like; sunscreen cream is used as the example throughout the embodiments of the present application.
In the solution provided by this application, an electronic device that includes an RGB camera and a UV camera may perform hardware registration of the two cameras before leaving the factory.
Specifically, a transformation relationship between the images of the RGB camera and the UV camera is determined from camera parameters such as focal length, resolution, camera position, and rotation direction; the image of one camera is transformed through this relationship, achieving hardware registration, after which the two images share the same scale and field of view, providing a basis for their fusion.
With reference to the first aspect, in some embodiments, the RGB camera and the UV camera on the electronic device may capture images at the same moment.
With reference to the first aspect, in some embodiments, the electronic device may perform algorithmic registration of the first image captured by the RGB camera and the second image captured by the UV camera. The algorithmic registration may include: the electronic device extracts feature lines from the first image and from the second image; the electronic device matches the feature lines of the first image against those of the second image; the electronic device determines transformation parameters between the two images based on the matched feature lines; and the electronic device transforms the first image through the transformation parameters and registers the transformed first image with the second image, or transforms the second image through the transformation parameters and registers the transformed second image with the first image. Algorithmic registration improves the quality of the subsequent fusion of the two images.
In one possible implementation, the feature lines may be edge lines, and matching the feature lines of the first image and the second image includes: the electronic device determines edge lines in the first image and the second image whose inclination angles and positions are closest to each other as matched edge lines.
In another possible implementation, the feature lines may be face contours, and matching the feature lines of the first image and the second image includes: among the face contours of the second image, the electronic device determines the one whose contour features are closest to those of the face contour of the first image as the matched face contour. A sketch of such contour matching follows.
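As an illustration only, the Python sketch below uses Hu-moment shape matching (OpenCV's matchShapes) as a stand-in for the contour features that the text leaves unspecified; the synthetic blobs stand in for real face contours, so this is an assumption-laden sketch rather than the claimed implementation.

```python
# Hedged sketch: among the contours of the second image, pick the one whose
# shape is closest to the reference contour of the first image. Hu-moment
# matching via cv2.matchShapes is an illustrative stand-in for the
# unspecified "contour features"; the blobs below are synthetic.
import cv2
import numpy as np

img1 = np.zeros((200, 200), np.uint8)
cv2.ellipse(img1, (100, 100), (50, 70), 0, 0, 360, 255, -1)   # "face" in first image
img2 = np.zeros((200, 200), np.uint8)
cv2.ellipse(img2, (110, 95), (52, 72), 5, 0, 360, 255, -1)    # similar candidate
cv2.rectangle(img2, (10, 10), (45, 45), 255, -1)              # dissimilar candidate

contours1, _ = cv2.findContours(img1, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours2, _ = cv2.findContours(img2, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
ref = contours1[0]                        # face contour of the first image
best = min(contours2,                     # closest contour in the second image
           key=lambda c: cv2.matchShapes(ref, c, cv2.CONTOURS_MATCH_I1, 0))
```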
With reference to the first aspect, in some embodiments, determining the transformation parameters of the first image and the second image based on the matched feature lines may include: the matched feature lines comprise a first feature line located on the first image and a second feature line located on the second image; the electronic device selects a plurality of first coordinate points on the first feature line and the corresponding second coordinate points on the second feature line; and the electronic device calculates the transformation parameters of the two images from the plurality of first coordinate points and the plurality of second coordinate points, as sketched below.
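The following is a minimal end-to-end sketch of this feature-line registration, assuming edge lines as the feature lines. The Canny/Hough line extraction, the angle-plus-position matching cost, and cv2.estimateAffinePartial2D are illustrative choices, and the two synthetic frames stand in for the RGB and UV captures.

```python
# Hedged sketch of feature-line registration: extract edge lines, match by
# closest inclination angle and position, sample corresponding points on the
# matched lines, estimate transformation parameters, and warp the second image.
import cv2
import numpy as np

def edge_lines(img):
    """Return (angle, mid_x, mid_y, p1, p2) for each detected edge segment."""
    edges = cv2.Canny(img, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=5)
    lines = []
    for x1, y1, x2, y2 in (segs.reshape(-1, 4) if segs is not None else []):
        p1, p2 = sorted([(x1, y1), (x2, y2)])          # consistent endpoint order
        angle = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])
        lines.append((angle, (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2, p1, p2))
    return lines

def match_lines(lines1, lines2):
    """Pair each line of image 1 with the line of image 2 of closest angle/position."""
    return [(l1, min(lines2, key=lambda l2: abs(l1[0] - l2[0]) +
                     0.01 * np.hypot(l1[1] - l2[1], l1[2] - l2[2])))
            for l1 in lines1]

# Synthetic stand-ins: the same rectangle, shifted in the second image.
img1 = np.zeros((200, 200), np.uint8)
cv2.rectangle(img1, (40, 40), (160, 160), 255, 2)
img2 = np.zeros((200, 200), np.uint8)
cv2.rectangle(img2, (50, 45), (170, 165), 255, 2)

pairs = match_lines(edge_lines(img1), edge_lines(img2))
pts1 = np.float32([p for l1, _ in pairs for p in (l1[3], l1[4])])  # first coordinate points
pts2 = np.float32([p for _, l2 in pairs for p in (l2[3], l2[4])])  # corresponding points
M, _ = cv2.estimateAffinePartial2D(pts2, pts1)     # transformation parameters
registered2 = cv2.warpAffine(img2, M, (200, 200))  # second image registered to the first
```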
With reference to the first aspect, in some embodiments, fusing the registered first image and the registered second image to obtain the fused image may include: the electronic device identifies a target area in the registered first image; the electronic device performs sunscreen detection on the registered second image to obtain a detection result within the target area; and the electronic device superimposes the detection result within the target area onto the registered first image to obtain the fused image.
The fused image combines the first image and the second image, making the result clearer and more attractive. Moreover, because the pixel points of the registered first image and the registered second image correspond one-to-one, the two images form a mutual mapping relationship.
Specifically, a target area obtained from the first image can be mapped onto the second image, and a sunscreen detection result obtained within the target area of the second image can be mapped onto the first image.
With reference to the first aspect, in one possible implementation, the detection result may include at least one of the application thickness and the effectiveness of the sunscreen corresponding to each pixel point.
With reference to the first aspect, in a possible implementation, the pixel value of a first pixel point within the target area of the fused image is the pixel value corresponding to the detection result of that pixel point, or a weighted sum of that value and the pixel value of the same pixel point in the registered first image; the pixel value of a second pixel point outside the target area of the fused image is the pixel value of that pixel point in the registered first image. A sketch of this rule follows.
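The Python sketch below illustrates this fusion rule under stated assumptions: the per-pixel detection result is encoded as a colour map, ALPHA is an illustrative blending weight, and the images and target mask are synthetic.

```python
# Hedged sketch of the fusion rule: inside the target area, each pixel is a
# weighted sum of the colour-coded detection result and the registered first
# image; outside it, the registered first image is kept unchanged.
import cv2
import numpy as np

h, w = 240, 320
rgb_reg = np.full((h, w, 3), 180, np.uint8)          # registered first image (synthetic)
detection = np.zeros((h, w), np.uint8)               # per-pixel detection result, 0..255
cv2.circle(detection, (160, 120), 60, 200, -1)       # pretend sunscreen detected here
mask = np.zeros((h, w), np.uint8)
cv2.rectangle(mask, (80, 40), (240, 200), 255, -1)   # target area (e.g. a face mask)

ALPHA = 0.6                                          # illustrative blending weight
det_color = cv2.applyColorMap(detection, cv2.COLORMAP_JET)   # colour-code the result
blend = cv2.addWeighted(det_color, ALPHA, rgb_reg, 1 - ALPHA, 0)

fused = rgb_reg.copy()
fused[mask > 0] = blend[mask > 0]   # weighted sum applies only inside the target area
```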
With reference to the first aspect, in a possible implementation, performing sunscreen detection on the registered second image to obtain the detection result within the target area includes: performing sunscreen detection only on the part of the registered second image that lies within the target area.
With reference to the first aspect, in another possible implementation, it includes: performing sunscreen detection on the entire registered second image to obtain a full-image detection result, and then extracting the detection result within the target area from the full-image result.
With reference to the first aspect, in some embodiments, identifying the target area in the registered first image may include: the electronic device identifies a face region in the registered first image; the electronic device identifies the eye, eyebrow, and mouth regions within the face; and the electronic device removes the eye, eyebrow, and mouth regions from the face region to obtain the target area, as sketched below.
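A minimal sketch of building such a target area is given below; the hard-coded polygons are stand-ins for the face contour and feature-point regions that face detection and feature-point positioning would return on the registered first image.

```python
# Hedged sketch: fill the face region, then carve out the eye, eyebrow, and
# mouth regions; the remaining mask is the target area. All polygons are
# hard-coded stand-ins for real landmark output.
import cv2
import numpy as np

h, w = 300, 300
face = np.array([[60, 80], [240, 80], [250, 200], [150, 280], [50, 200]], np.int32)
left_brow  = np.array([[95, 110], [140, 110], [140, 122], [95, 122]], np.int32)
right_brow = np.array([[160, 110], [205, 110], [205, 122], [160, 122]], np.int32)
left_eye   = np.array([[100, 130], [135, 130], [135, 150], [100, 150]], np.int32)
right_eye  = np.array([[165, 130], [200, 130], [200, 150], [165, 150]], np.int32)
mouth      = np.array([[120, 210], [180, 210], [180, 235], [120, 235]], np.int32)

mask = np.zeros((h, w), np.uint8)
cv2.fillPoly(mask, [face], 255)                                   # face region
cv2.fillPoly(mask, [left_brow, right_brow, left_eye, right_eye, mouth], 0)
# `mask` now marks the target area used for sunscreen detection and fusion.
```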
With reference to the first aspect, in some embodiments, before the electronic device acquires the first image captured by the RGB camera and the second image captured by the UV camera, the method further includes: in response to a detected first operation, the electronic device displays a user interface that includes a preview area and starts the RGB camera and the UV camera. Displaying the fused image then comprises: the electronic device displays the fused image in the preview area.
In one possible implementation, an electronic device includes the RGB camera and the UV camera.
In another possible implementation, acquiring the first image through the RGB camera and the second image through the UV camera includes: the electronic device receives the first image and the second image from a photographing device that includes an RGB camera and a UV camera, the photographing device having captured the first image with the RGB camera and the second image with the UV camera.
In a second aspect, an embodiment of the present application provides an electronic device, including one or more processors and a memory; the memory is coupled to the one or more processors and stores computer program code comprising computer instructions; the one or more processors invoke the computer instructions to cause the electronic device to perform:
acquiring a first image acquired by an RGB camera and a second image acquired by a UV camera; the first image and the second image are acquired simultaneously;
registering the first image and the second image, so that pixel points of the registered first image correspond one-to-one to pixel points of the registered second image;
fusing the registered first image and the registered second image to obtain a fused image;
and displaying the fused image.
In one possible implementation manner, the electronic device provided by the second aspect includes the RGB camera and the UV camera.
In another possible implementation, the electronic device provided in the second aspect acquires the first image through the RGB camera and the second image through the UV camera by receiving the first image and the second image from a photographing device that includes an RGB camera and a UV camera, the photographing device having captured the first image with the RGB camera and the second image with the UV camera.
With reference to the second aspect, in some embodiments, to register the first image and the second image, the one or more processors perform:
respectively extracting characteristic lines of the first image and the second image;
matching the characteristic line of the first image with the characteristic line of the second image;
determining transformation parameters of the first image and the second image based on the matched characteristic lines;
and transforming the first image through the transformation parameters to obtain a transformed first image, which is registered with the second image; or transforming the second image through the transformation parameters to obtain a transformed second image, which is registered with the first image.
In one possible implementation, the feature lines may be edge lines, and the one or more processors perform matching the feature lines of the first image and the feature lines of the second image, including performing:
determining edge lines in the first image and the second image whose inclination angles and positions are closest as matched edge lines.
In another possible implementation, the feature lines may be face contours, and the one or more processors perform matching the feature lines of the first image and the feature lines of the second image, including performing:
determining, among the face contours of the second image, the face contour whose contour features are closest to those of the face contour of the first image as the matched face contour.
With reference to the second aspect, in some embodiments, the one or more processors perform determining the transformation parameters of the first image and the second image based on the matched feature lines, including performing:
the matched feature lines comprise a first feature line located on the first image and a second feature line located on the second image;
selecting a plurality of first coordinate points on the first feature line;
selecting the corresponding second coordinate points on the second feature line;
and calculating the transformation parameters of the first image and the second image from the plurality of first coordinate points and the plurality of second coordinate points.
With reference to the second aspect, in some embodiments, the one or more processors perform fusing the registered first image and the registered second image to obtain a fused image, including performing:
identifying a target region in the registered first image;
performing sunscreen detection on the registered second image to obtain a detection result in the target area;
and superimposing the detection result within the target area onto the registered first image to obtain the fused image.
With reference to the second aspect, in a possible implementation, the one or more processors perform sunscreen detection on the registered second image to obtain the detection result within the target area, including performing:
sunscreen detection on the part of the registered second image that lies within the target area.
With reference to the second aspect, in another possible implementation, the one or more processors perform sunscreen detection on the registered second image to obtain the detection result within the target area, including performing:
sunscreen detection on the entire registered second image to obtain a full-image detection result;
and extraction of the detection result within the target area from the full-image result.
With reference to the second aspect, in some embodiments, the one or more processors perform identifying the target region in the registered first image, including performing:
identifying a face region in the registered first image;
identifying an eye region, an eyebrow region and a mouth region in the human face;
and removing the eye region, the eyebrow region and the mouth region in the face region to obtain a target region.
With reference to the second aspect, in a possible implementation, the detection result includes at least one of the application thickness and the effectiveness of the sunscreen corresponding to each pixel point.
With reference to the second aspect, in a possible implementation manner, the pixel value of the first pixel point in the target region in the fused image is a pixel value corresponding to the detection result of the first pixel point, or is a weighted sum of the pixel value corresponding to the detection result of the first pixel point and the pixel value of the first pixel point in the registered first image;
and the pixel value of a second pixel point outside the target area in the fused image is the pixel value of the second pixel point in the registered first image.
With reference to the second aspect, in some embodiments, before acquiring the first image captured by the RGB camera and the second image captured by the UV camera, the one or more processors further perform:
responding to the detected first operation, displaying a user interface, and starting an RGB camera and a UV camera, wherein the user interface comprises a preview area;
displaying the fused image includes:
and displaying the fused image in a preview area.
In one possible implementation, the electronic device provided in the second aspect includes a display, and the one or more processors execute displaying the fused image, including:
and displaying the fused image through a display.
In another possible implementation manner, the electronic device provided in the second aspect includes a communication interface, and the one or more processors execute displaying the fused image, including:
and sending the fused image to display equipment through a communication interface, wherein the display equipment is used for displaying the fused image.
In a third aspect, an embodiment of the present application provides a chip applied to an electronic device; the chip includes one or more processors configured to invoke computer instructions to cause the electronic device to perform the method described in the first aspect and any possible implementation thereof.
In a fourth aspect, embodiments of the present application provide a computer program product including instructions, which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any one of the possible implementation manners of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes instructions that, when executed on an electronic device, cause the electronic device to perform the method described in the first aspect and any possible implementation manner of the first aspect.
It is understood that the electronic device provided by the second aspect, the chip provided by the third aspect, the computer program product provided by the fourth aspect, and the computer-readable storage medium provided by the fifth aspect are all configured to execute the methods provided by the embodiments of the present application. For the beneficial effects they achieve, reference may be made to the beneficial effects of the corresponding methods; details are not repeated here.
Drawings
Fig. 1A is a schematic diagram of a system including a shooting device and a display device according to an embodiment of the present application;
fig. 1B is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present disclosure;
fig. 1C to fig. 1D are schematic diagrams of an arrangement in which the two cameras provided in embodiments of the present application are front cameras;
fig. 1E is a schematic diagram of a software structure of an electronic device according to an embodiment of the present application;
FIGS. 2A-2D are schematic diagrams of a set of user interfaces provided by embodiments of the present application;
FIGS. 3A-3D are schematic diagrams of another set of user interfaces provided by embodiments of the present application;
FIGS. 4A-4B are schematic flow charts of a set of image processing methods provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware registration method provided in an embodiment of the present application;
fig. 6A is a schematic diagram of an algorithm registration method based on edge line detection according to an embodiment of the present application;
fig. 6B is a schematic flowchart of an algorithm registration method based on edge line detection according to an embodiment of the present application;
Figs. 7A-7B are schematic flowcharts of a set of algorithm registration methods based on face contour detection according to an embodiment of the present application;
Figs. 8A-8B are schematic flowcharts of a set of image fusion methods according to an embodiment of the present application;
fig. 8C is a schematic diagram of generating a face mask according to an embodiment of the present application;
figs. 9A-9B are a set of schematic diagrams illustrating the presentation of sunscreen detection results according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings.
To facilitate understanding of embodiments of the present application, some terms referred to in the present application are described below.
1. RGB camera and UV camera
An RGB camera, i.e., an RGB colour camera, is generally the main camera of an electronic device and may be equipped with a white-light flash or a dual-colour-temperature flash. Images captured by an RGB camera (also referred to as RGB images) are colour images. Moreover, current algorithms for face detection, feature point positioning, face recognition, and the like generally operate on RGB images, so the positions of faces, eyes, eyebrows, and so on can be accurately detected and recognized in RGB images.
The key components of a UV camera, i.e., an ultraviolet camera, are an imaging sensor that is sensitive to ultraviolet light and a filter coating on the lens. The imaging sensor is composed of photodiodes arranged in an array; the photodiodes may be complementary metal-oxide-semiconductor (CMOS) devices that perform photoelectric conversion, converting optical signals into electrical signals. The filter coating allows only ultraviolet light to pass through the lens and reach the imaging sensor; visible light, infrared light, and other light outside the ultraviolet band cannot pass through the lens and therefore cannot form an image on the sensor. Typically, a UV camera produces a gray-scale image: where the sensor senses ultraviolet light, the image appears white or grayish white; where it does not, the image appears black.
In the embodiments of the present application, the electronic device employs multiple cameras, including at least one RGB camera and at least one UV camera. The RGB camera and the UV camera are located on the same side of the electronic device, as close together as possible, arranged either one above the other or side by side. Both cameras may be front cameras or rear cameras.
2. Hardware registration
Because the RGB camera and the UV camera differ in mounting position, field of view, distortion, and so on, the same photographed object does not fall at the same coordinates in the two images. Hardware registration can be performed to eliminate or reduce this coordinate difference.
Hardware registration means designing the positions of the RGB camera and the UV camera before the device leaves the factory so that the two are as close as possible. Further, a transformation relationship between the two cameras' images is determined from camera parameters such as focal length, resolution, camera position, and rotation direction, and the image of one camera is transformed through this relationship. After hardware registration, the two images share the same scale and field of view, providing a basis for their fusion. For the specific method, reference may be made to the description of hardware registration in the registration method embodiments below; details are not repeated here. A sketch of deriving such a fixed mapping from camera parameters follows.
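The Python sketch below derives an image-to-image mapping from illustrative factory parameters under a strong simplification: if the two cameras differ only by a rotation (or the scene is far away), the mapping is the infinite homography H = K_rgb · R · K_uv⁻¹. All numbers are assumptions; a real factory calibration would also model the translation between the lenses and lens distortion.

```python
# Hedged sketch: a fixed UV-to-RGB pixel mapping from factory camera
# parameters, assuming a pure rotation between the two cameras (a reasonable
# approximation only for distant scenes). All values are illustrative.
import numpy as np

def intrinsics(fx, fy, cx, cy):
    return np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])

K_rgb = intrinsics(1500, 1500, 960, 540)   # assumed RGB camera parameters
K_uv  = intrinsics(1400, 1400, 640, 360)   # assumed UV camera parameters
theta = np.deg2rad(0.5)                    # assumed small mounting rotation about y
R = np.array([[np.cos(theta),  0, np.sin(theta)],
              [0,              1, 0            ],
              [-np.sin(theta), 0, np.cos(theta)]])

H = K_rgb @ R @ np.linalg.inv(K_uv)        # maps UV pixels into the RGB frame
H /= H[2, 2]
# H can be computed once at the factory and applied to every UV frame
# (e.g. with cv2.warpPerspective) so both images share scale and field of view.
```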
3. Algorithm registration
Algorithm registration may be performed after hardware registration, or may on its own achieve the registration of the two images (i.e., the images of the RGB camera and the UV camera), that is, determine the transformation relationship between them.
Algorithm registration may be implemented based on an edge detection algorithm or a contour detection algorithm. For the specific method, reference may be made to the description of algorithm registration in the registration method embodiments below; details are not repeated here.
Because a UV camera images only the absorption and reflection of ultraviolet light by objects, it can present only a gray-scale image that reproduces detail poorly, so images captured by the UV camera (also called UV images) are neither clear nor attractive. In addition, because UV images have poor clarity and existing algorithms such as face detection, feature point positioning, and face recognition operate on RGB images, these algorithms cannot detect and recognize accurately on UV images; consequently, upper-layer applications based on these algorithms, such as detection of the sunscreen application region, cannot be built on UV images alone.
The embodiments of the present application provide an image processing method: an RGB image and a UV image of the same scene, captured at the same moment by an RGB camera and a UV camera, are registered so that their pixel points correspond one-to-one, i.e., the same photographed object has consistent coordinates in both images; the registered RGB image and UV image are then fused.
Through this registration and fusion, the fused image combines the clarity of the RGB image with the ultraviolet information of the UV image, yielding a result that is clearer and more attractive.
In some embodiments, fusing the registered RGB image and UV image may be implemented as follows: identify target areas, such as the face and a face mask, in the registered RGB image; perform sunscreen detection on the registered UV image to obtain detection results within the face mask, such as sunscreen thickness and effectiveness; and superimpose the detection results within the face mask onto the registered RGB image to obtain the fused image.
In this embodiment, regions such as the face and the face mask can be identified in the registered RGB image; because the RGB image is registered with the UV image so that their pixel points correspond one-to-one, the recognition results of the RGB image can be mapped onto the UV image, enabling sunscreen detection within a specific region.
The following describes apparatuses and systems related to embodiments of the present application.
The image fusion method in the embodiments of the present application may be implemented by a single electronic device that includes an RGB camera and a UV camera, or by a system composed of a photographing device and a display device.
Fig. 1A is a schematic diagram of a system including a shooting device and a display device according to an embodiment of the present application.
As shown in fig. 1A, the system includes a shooting device 11 and a display device 12, which can establish a connection to transmit data. The connection may be a short-range wireless connection such as Bluetooth or WLAN, a mobile communication connection, or a wired connection.
In one implementation, the shooting device 11 may include an RGB camera 11a, a UV camera 11b, and an ultraviolet flash 11c. When the shooting device 11 detects a user operation that triggers shooting, in response the shooting device 11 simultaneously turns on the RGB camera 11a and the UV camera 11b; captures images at the same moment through the two cameras to obtain an RGB image and a UV image; registers and fuses the RGB image and the UV image; and transmits the fused image to the display device 12 connected to it. The display device 12 receives and displays the fused image.
The ultraviolet flash 11c supplements ultraviolet light when the ambient ultraviolet intensity is very weak. For example, when the UV camera 11b captures images indoors, the indoor ultraviolet intensity is too weak to obtain a sufficiently clear ultraviolet image, so the ultraviolet flash 11c is turned on to supplement ultraviolet light and guarantee the quality of the indoor ultraviolet image. Outdoors in the daytime, the ultraviolet intensity in sunlight is sufficient, so the ultraviolet flash 11c is not turned on and no supplementary ultraviolet light is needed.
In another implementation, the shooting device 11 may transmit the RGB image and the UV image to the display device 12; the display device 12, which may include a display 11d, receives the two images, registers and fuses them, and displays the fused image on the display 11d.
Fig. 1B is a schematic diagram of a hardware structure of an electronic device 100 according to an embodiment of the present disclosure. The electronic device 100 may be an electronic device including an RGB camera and a UV camera, or a photographing device 11, and may also be the display device 12 described above.
The electronic device 100 may be a portable electronic device such as a mobile phone, a tablet, a smart watch, a smart mirror, or a standalone ultraviolet (UV) camera. Exemplary embodiments of portable electronic devices include, but are not limited to, devices running iOS, Android, Microsoft, or other operating systems. The portable electronic device may also be another kind of portable electronic device, such as a laptop with a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad). It should also be understood that in some embodiments, the electronic device may instead be a desktop computer.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, at least two cameras 193, a display screen 194, a subscriber identity module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
For example, when the electronic device 100 is a smart mirror, the sensor module 180, the mobile communication module 150, the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, the keys 190, the motor 191, the indicator 192, etc. may not be necessary components of the smart mirror.
Processor 110 may include one or more processing units, such as: the Processor 110 may include an Application Processor (AP), a modem Processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural-Network Processing Unit (NPU), among others. Wherein, the different processing units may be independent devices or may be integrated in one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
It is understood that an automatic exposure (AE) system may also be included in the processor 110. The AE system may be specifically provided in the ISP and can be used to automatically adjust exposure parameters. Alternatively, the AE system may be integrated in another processor chip. This is not limited in the embodiments of the present application.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices 100, such as AR devices and the like.
The charging management module 140 is configured to receive charging input from a charger. The charging management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power Amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (WLANs) such as wireless fidelity (Wi-Fi) networks, Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves through the antenna 2 for radiation.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques.
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement the acquisition function via the ISP, camera 193, video codec, GPU, display screen 194, application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image or video visible to the naked eye. The ISP can also carry out algorithm optimization on noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a Complementary Metal-Oxide-Semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image or video signal. And the ISP outputs the digital image or video signal to the DSP for processing. The DSP converts the digital image or video signal into image or video signal in standard RGB, YUV and other formats.
In the embodiment of the present application, the electronic device 100 may include at least two cameras 193, and the at least two cameras 193 may include an RGB camera and a UV camera.
The digital signal processor is used for processing digital signals, and can process digital images or video signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it processes input information quickly and can also continuously self-learn. Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, for example image recognition, face recognition, speech recognition, and text understanding.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image and video playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into analog audio signals for output, and also used to convert analog audio inputs into digital audio signals.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal.
The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. The electronic device 100 may be provided with at least one microphone 170C.
The earphone interface 170D is used to connect a wired earphone.
The sensor module 180 may include 1 or more sensors, which may be of the same type or different types. It is understood that the sensor module 180 shown in fig. 1B is only an exemplary division manner, and other division manners are possible, and the present application is not limited thereto.
The pressure sensor 180A is used for sensing a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. When a touch operation is applied to the display screen 194, the electronic apparatus 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic apparatus 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but have different touch operation intensities may correspond to different operation instructions.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of electronic device 100 about three axes (i.e., x, y, and z axes) may be determined by gyroscope sensor 180B. The gyro sensor 180B may be used for photographing anti-shake.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C to assist in positioning and navigation.
The magnetic sensor 180D includes a hall sensor. The electronic device 100 may detect the opening and closing of the flip holster using the magnetic sensor 180D.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. The method can also be used for identifying the posture of the electronic equipment 100, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, taking a picture of a scene, electronic device 100 may utilize range sensor 180F to range for fast focus.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light to the outside through the light emitting diode. The electronic device 100 detects infrared reflected light from nearby objects using a photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there are no objects near the electronic device 100.
The ambient light sensor 180L is used to sense ambient light brightness.
The fingerprint sensor 180H is used to acquire a fingerprint.
The temperature sensor 180J is used to detect temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys. Or may be touch keys. The electronic apparatus 100 may receive a key input, and generate a key signal input related to user setting and function control of the electronic apparatus 100.
The motor 191 may generate vibration cues. The motor 191 may be used for incoming-call vibration cues as well as touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing) may correspond to different vibration feedback effects, as may touch operations applied to different areas of the display screen 194. Different application scenarios (e.g., time reminders, receiving messages, alarm clocks, games) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be inserted into or pulled out of the SIM card interface 195 to attach it to or detach it from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card, which is embedded in the electronic device 100 and cannot be separated from it.
Fig. 1A, 1C, and 1D are schematic structural diagrams of some electronic devices according to embodiments of the present disclosure.
As shown in fig. 1C, 1D, the electronic device may include an RGB camera 11a, a UV camera 11b, an ultraviolet flash 11C, and a display screen 11D. The RGB camera 11a and the UV camera 11b may be front cameras, i.e. located on the same side as the display screen.
In another implementation, the RGB camera 11a and the UV camera 11b of the electronic device may be rear cameras, i.e., opposite the display screen.
As shown in fig. 1A, the photographing apparatus may include an RGB camera 11A and a UV camera 11b, and the display apparatus may include a display screen.
The smaller the distance d between the RGB camera and the UV camera, the better; for example, 0 < d ≤ 3 cm. The positional relationship between the two may be left-right or up-down. In addition, the RGB camera and the UV camera may be front cameras or rear cameras.
Fig. 1E is a schematic diagram of a software structure of an electronic device 100 according to an embodiment of the present disclosure.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the system is divided into four layers, an application layer, an application framework layer, a Runtime (Runtime) and system library, and a kernel layer, from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 1E, the application package may include applications (also referred to as applications) such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, UV camera, video, short message, etc.
The Application framework layer provides an Application Programming Interface (API) and a Programming framework for the Application program of the Application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 1E, the application framework layer may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, determine whether a status bar exists, lock the screen, capture screenshots, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and so on. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog interface. Examples include prompting text information in the status bar, sounding a prompt tone, vibrating the electronic device, and flashing an indicator light.
The Runtime (Runtime) includes a core library and a virtual machine. Runtime is responsible for scheduling and management of the system.
The core library comprises two parts: one part is the functions that need to be called by the programming language (for example, the Java language), and the other part is the core library of the system.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes programming files (e.g., java files) of the application layer and the application framework layer as binary files. The virtual machine is used for performing the functions of object life cycle management, stack management, thread management, safety and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface Manager (Surface Manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), two-dimensional graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provides a fusion of two-Dimensional (2-Dimensional, 2D) and three-Dimensional (3-Dimensional, 3D) layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing 3D graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, a sensor driver and a virtual card driver.
The following describes some scenarios provided by the present application for accurately detecting and presenting information of sun cream application.
As will be appreciated, a "user interface" is a media interface for interaction and information exchange between an application or operating system and a user that enables conversion between an internal form of information and a form that is acceptable to the user. A common presentation form of the user interface is a Graphical User Interface (GUI), which refers to a user interface related to computer operations and displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
The user interface 21 shown in fig. 2A is an exemplary diagram of a main interface of the electronic device. As shown in fig. 2A, the user interface 21 includes icons of a plurality of applications (e.g., an icon 210 of a camera, an icon of a UV camera, an icon of a weather application, an icon of a calendar application, an icon of a photo album application, an icon of an application store application, a setting application icon, a browser application icon, an information application icon, a dial application icon, and the like). The content displayed on the user interface 21 is not limited in the embodiment of the present application.
1. UV camera as a shooting mode in camera applications (FIGS. 2A-2D)
When the electronic apparatus 100 detects a user operation (such as a touch/click operation) by the user on the camera application icon 210, the electronic apparatus 100 may display the photographing interface 22 as shown in fig. 2B in response to the operation. The capture interface 22 may be a default capture mode user interface on which the user may complete the capture.
As shown in fig. 2B, the capture interface 22 may include a parameter adjustment area 220, a preview area 221, a capture mode selection area 222, an album shortcut control 2231, a shutter control 2232, and a camera flip control 2233.
The parameter adjustment area 220 may include a flash button, a setup button, etc., which may enable the flash to be turned on or off. The setting button can realize the adjustment of shooting parameters. The parameter adjustment area 220 may also include other controls, for example only.
Preview area 221 may be used to display a preview image. The preview image is a fusion image obtained by registering and fusing RGB images and UV images respectively acquired by an RGB camera and a UV camera at the same time.
One or more shooting modes may be displayed in the shooting mode selection area 222. The one or more photographing modes may include: night view mode 2221, smart portrait mode 2222, photograph mode 2223, video mode 2224, UV mode 2225, and more 2226.
The album shortcut control 2231 may be used to open the album. In response to a user operation, such as a touch operation, acting on the album shortcut control 2231, the electronic device 100 may open the album.
The shutter control 2232 can be used to take pictures or record video. The electronic device 100 can detect a user operation on the shutter control 2232, in response to which the electronic device 100 can save the preview image in the preview area 221. In addition, the electronic device 100 may also display thumbnails of the saved images in the album shortcut control 2231.
The camera roll-over control 2233 can be used to implement a roll-over camera. Upon detecting a user operation, such as a touch operation, acting on the camera flip control 2233, the electronic apparatus 100 can flip the camera used for shooting in response to the operation, such as switching the rear camera to the front camera or switching the front camera to the rear camera.
When a user operation acting on the photographing mode is detected, the electronic apparatus 100 may turn on the photographing mode selected by the user. For example, when a user operation acting on UV mode 2225 is detected, electronic device 100 may turn on the UV mode, displaying shooting interface 23 shown in fig. 2C, i.e., turn on the RGB camera and the UV camera; respectively collecting images at the same time through an RGB camera and an UV camera to obtain an RGB image and a UV image; the RGB image and the UV image are then registered and fused, and the fused image is displayed in a preview area 231, as shown in fig. 2C.
Alternatively, the fused image may be obtained by superimposing an RGB image and a UV image at a ratio of 50% transparency, and it should be understood that the RGB image and the UV image may also be fused at other suitable ratios, which is not limited in this embodiment of the present application.
Upon detecting a user operation on the shutter control 2332 on the shooting interface 23, the electronic device 100 may save the currently displayed fused image, and may also store it in an album, in response to the operation.
Upon detecting a user operation on the album shortcut control 2331, the electronic device 100 may open the album and display the merged image that the user has last saved, e.g., display the image browsing interface 24 as shown in fig. 2D.
Because it incorporates the RGB image, the fused image is clearer and more attractive. It is also suitable for other algorithms such as face detection and feature point positioning, and upper-layer applications based on these algorithms can be built on it.
2. UV camera as a separate application (FIGS. 3A-3D)
Likewise, as shown in the user interface 31 of fig. 3A, the electronic device 100 may detect a user operation (such as a touch/click operation) on the icon 310 of the UV camera by the user, and in response to the operation, the electronic device 100 may turn on the RGB camera and the UV camera at the same time, and display the photographing interface 32 shown in fig. 3B or display the photographing interface 33 shown in fig. 3C. The photographing interface 32 or the photographing interface 33 may be a user interface of a default photographing mode of the UV camera, on which the user can complete photographing.
As shown in fig. 3B and 3C, the shooting interface 32, 33 may include parameter adjustment areas 320, 330, preview areas 321, 331, shooting mode selection areas 322, 332, album shortcut controls 3231, 3331, and shutter controls 3232, 3332.
The parameter adjustment regions 320, 330 may include a flash button, a setup button, etc., which may enable the flash to be turned on or off. The setting button can realize the adjustment of shooting parameters. The parameter adjustment regions 320, 330 may also include other controls, for example only.
The preview areas 321, 331 may be used to display preview images. The preview image is a fusion image obtained by registering and fusing RGB images and UV images respectively acquired by the RGB camera and the UV camera at the same time.
One or more shooting modes may be displayed in the shooting mode selection areas 322, 332. The one or more shooting mode options may include the exemplary UV photo mode 3221, UV video mode 3222, and more 3223 of fig. 3B; the exemplary UV mode 3321 and further 3322 of fig. 3C may also be included.
When a user operation acting on the photographing mode is detected, the electronic apparatus 100 may turn on the photographing mode selected by the user.
For example, as shown in fig. 3B, when detecting a user operation acting on the UV photographing mode 3221, the electronic device 100 may turn on the UV photographing mode, and display the photographing interface 32 shown in fig. 3B, that is, turn on the RGB camera and the UV camera at the same time; the RGB camera and the UV camera respectively collect images at the same time to obtain an RGB image and a UV image; and then, registering and fusing the RGB image and the UV image, and further displaying the fused image.
When a user operation acting on the shutter control 3232 on the photographing interface 32 is detected, the electronic device 100 can perform photographing based on the current shooting mode. For example, when the user touches or clicks the shutter control 3232, if the current shooting mode is the UV photo mode, the electronic device saves the current preview image; if the current shooting mode is the UV video mode, the electronic device starts recording and stops recording when the shutter control 3232 is clicked again.
For another example, as shown in fig. 3C, when a user operation acting on the UV mode 3321 is detected, the electronic device 100 may turn on the UV mode, display the shooting interface 33 shown in fig. 3C, and turn on both the RGB camera and the UV camera. In one possible implementation, in response to a short press operation on the shutter control 3332, the electronic device 100 may take a picture, saving the current preview image; in response to a long press on shutter control 3332, electronic device 100 may take a picture and record a video, saving the captured video.
Alternatively, the fused image may be obtained by superimposing an RGB image and a UV image at a ratio of 50% transparency, and it should be understood that the RGB image and the UV image may also be fused at other suitable ratios, which is not limited by the embodiment of the present application.
The album shortcut controls 3231, 3331 may be used to open the album. For example, in response to a user operation, such as a touch operation, acting on the album shortcut controls 3231, 3331, the electronic device 100 may open the album, displaying the last captured image, such as the image browsing interface 34 shown in FIG. 3D.
Each user interface described above may further include more or fewer controls, which is not limited in this embodiment of the application.
The following describes an image processing method according to an embodiment of the present application, and as exemplarily shown in fig. 4A and 4B, the method is a flowchart of the image processing method. The method may include, but is not limited to, the steps of:
S401: The electronic device acquires an RGB image captured by the RGB camera and a UV image captured by the UV camera; the RGB image and the UV image are captured at the same time.
Alternatively, the RGB camera and the UV camera may be hardware registered.
S402: the electronic equipment registers the RGB image UV image, and pixel points of the RGB image after registration correspond to pixel points of the UV image after registration one by one.
Here, the registration may refer to hardware registration, algorithm registration, or a combination of the two. For example, if hardware registration has been performed on the electronic device before it leaves the factory, algorithm registration is performed at this point. Specifically, the electronic device extracts feature lines of the RGB image and the UV image respectively; the electronic device matches the feature lines of the RGB image with the feature lines of the UV image; the electronic device determines the transformation parameters of the RGB image and the UV image based on the matched feature lines; the electronic device transforms the RGB image through the transformation parameters to obtain a transformed RGB image, or transforms the UV image through the transformation parameters to obtain a transformed UV image. The transformed RGB image is registered with the UV image, and the transformed UV image is registered with the RGB image. The feature line can be an edge line or a face contour. For the specific methods of hardware registration and algorithm registration, reference may be made to the description of the registration method embodiments below, which is not repeated here.
The registration process is a process of calculating transformation parameters between the RGB image and the UV image and transforming one of the images through those parameters. The transformation parameters comprise a transformation parameter M1 for converting the RGB image to the UV image and/or a transformation parameter M2 for converting the UV image to the RGB image. When the transformation parameter is M1, the RGB image is transformed according to M1 to obtain a transformed RGB image, which is the image registered with the UV image (also called the registered RGB image); in this case the registered UV image is the original UV image. When the transformation parameter is M2, the UV image is transformed according to M2 to obtain a transformed UV image, which is the image registered with the RGB image (also called the registered UV image); in this case the registered RGB image is the original RGB image.
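For illustration, the following is a minimal Python/OpenCV sketch of this transform step, assuming M1 has been estimated as a 2×3 affine matrix; the names rgb_img, uv_img, and M1 are placeholders, not from the original disclosure:

import cv2
import numpy as np

def apply_registration(rgb_img: np.ndarray, uv_img: np.ndarray,
                       M1: np.ndarray):
    """Warp the RGB image into the UV image's coordinate frame.

    M1 is assumed to be a 2x3 affine matrix mapping RGB coordinates to
    UV coordinates, so after warping, pixel (i, j) of the registered
    RGB image corresponds to pixel (i, j) of the UV image.
    """
    h, w = uv_img.shape[:2]
    registered_rgb = cv2.warpAffine(rgb_img, M1, (w, h))
    return registered_rgb, uv_img  # the registered UV image is the original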
S403: and the electronic equipment fuses the registered RGB image and the registered UV image to obtain a fused image.
For a specific fusion method, reference may be made to the following description related to the image fusion method embodiment, which is not described herein again.
For example, the electronic device identifies a target region in the registered RGB image; the electronic equipment performs sunscreen detection on the UV image after registration to obtain a detection result in the target area; and the electronic equipment superimposes the detection result in the target area on the RGB image after registration to obtain a fused image.
Optionally, the target region may be a face mask, that is, a region obtained by removing an eye region, an eyebrow region, and a mouth region from a face region.
For another example, after registering the RGB image and the UV image, the electronic device may directly fuse the registered RGB image and the registered UV image, and then display the fused image. Specifically, the pixel value of the pixel point of the fusion image is the weighted sum of the pixel value of the pixel point of the RGB image and the pixel value of the pixel point of the UV image.
For example, the fused image may be an RGB image and a UV image each superimposed at a 50% transparency ratio. The two images may also be fused in other proportions, which is not limited in the embodiment of the present application.
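A minimal sketch of this direct weighted fusion, assuming the two images are already registered to the same size; the 0.5 weights mirror the 50%-transparency example above, and all names are illustrative:

import cv2

def fuse_preview(registered_rgb, registered_uv, alpha=0.5):
    # If the UV image is single-channel, expand it so the shapes match
    if registered_uv.ndim == 2:
        registered_uv = cv2.cvtColor(registered_uv, cv2.COLOR_GRAY2BGR)
    # Pixel-wise weighted sum: fused = alpha * RGB + (1 - alpha) * UV
    return cv2.addWeighted(registered_rgb, alpha, registered_uv, 1.0 - alpha, 0)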
S404: and the electronic equipment displays the fused image.
The electronic device may display the fused image on its display screen, or may send the fused image to another device for display by the other device, for example, after the shooting device executes the above steps S401 to S403, the fused image is sent to the display device, and the display device displays the fused image.
The following describes a registration method provided in an embodiment of the present application.
The electronic device 100 provided by the embodiment of the application includes at least one RGB camera and at least one UV camera. To fuse the RGB image captured by the RGB camera with the UV image captured by the UV camera, synchronization and pixel-level alignment of the RGB image and the UV image must first be ensured. "Synchronization" means that the two cameras shoot objects in the same scene at the same time. "Pixel-level alignment" means that any point on the photographed subject has the same coordinates on the RGB image and the UV image.
However, due to the influence of factors such as differences in the installation positions of the two cameras, differences in field angle, and distortion, the key points of the human face do not lie at the same coordinate positions on the acquired RGB image and UV image. For example, assume there are two face key points (e.g., the center points of the two eyes) on an RGB image, with coordinates (x1, y1) and (x2, y2). Taking the upper-left corner as the origin (0, 0) of both the RGB image and the corresponding UV image, the coordinates of these two face key points on the UV image are not (x1, y1) and (x2, y2), but rather (x1 + δx1, y1 + δy1) and (x2 + δx2, y2 + δy2), where (δx1, δy1) and (δx2, δy2) are the coordinate offsets of the two points on the UV image relative to the RGB image.
Since the coordinate offset is an unknown quantity, the RGB camera and the UV camera need to be registered to be determined, and the registration method may be hardware registration, algorithm registration, or a combination of hardware registration and algorithm registration. The following are introduced separately:
1. Hardware registration
Hardware registration is to determine coordinate offset of a point on a UV image relative to an RGB image based on physical positions of two cameras and parameters such as internal reference and external reference. The hardware registration may be completed by the electronic device 100 before shipment.
Specifically, the principle of registration may be as follows: there is a transformation relationship between the coordinate points of a UV image and an RGB image taken at the same time, namely

[x', y'] = [x, y] × R + T (1)
Where R is a transformation matrix for converting RGB image coordinates to UV image coordinates, T is a bias vector, (x ', y') represents coordinates of a point on the UV image, and (x, y) represents coordinates of the same point on the RGB image. The purpose of hardware calibration is to calculate R and T and realize the mapping of RGB image and UV image.
In one implementation, as shown in fig. 5, the mobile phone may capture a calibration image through its RGB camera and UV camera to obtain an RGB image and a UV image. Based on parameters such as the positions, focal lengths, and resolutions of the two cameras, two key points are taken from the UV image and the RGB image; their coordinates on the RGB image are denoted A1(x1, y1) and A2(x2, y2), and their coordinates on the UV image are denoted B1(x1', y1') and B2(x2', y2'). R and T can be calculated by substituting the coordinates of the two key points into equation (1).
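A minimal sketch of solving equation (1) for R and T from matched key points by least squares. Note that a full 2×2 R plus a 1×2 T has six unknowns, so three or more non-collinear point pairs are needed for a unique solution; the two-point example above implicitly assumes a constrained transform. All names are illustrative:

import numpy as np

def solve_rt(rgb_pts: np.ndarray, uv_pts: np.ndarray):
    """Least-squares solution of [x', y'] = [x, y] @ R + T.

    rgb_pts, uv_pts: matched points of shape (n, 2); n >= 3 non-collinear
    pairs determine R and T uniquely.
    """
    n = rgb_pts.shape[0]
    # Stack [x, y, 1] rows so R and T are solved jointly:
    # [x, y, 1] @ [[r11, r12], [r21, r22], [t1, t2]] = [x', y']
    A = np.hstack([rgb_pts, np.ones((n, 1))])
    params, *_ = np.linalg.lstsq(A, uv_pts, rcond=None)
    return params[:2, :], params[2:3, :]  # R (2x2), T (1x2)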
The requirement of hardware registration is that the images produced by the UV camera and the RGB camera (i.e., the UV image and the RGB image) have a consistent scale and field angle.
When the UV camera and the RGB camera shoot, the scale and field angle of the captured images may differ. When the image scale and field angle of the RGB image and the UV image are different, the images need to be cropped to obtain images of the same scale. The adjustment of image scale and field angle may be completed before the electronic device 100 leaves the factory.
Through the above hardware calibration process of the RGB camera and the UV camera on the same electronic device 100, the quality of the fused image of the RGB image and the UV image can be improved.
2. Algorithm registration
After the RGB camera and the UV camera on the electronic device each acquire an image, algorithm registration can be performed on the RGB image and the UV image, which can improve the subsequent fusion quality of the two images. The algorithm registration can be implemented based on an edge detection algorithm or a contour detection algorithm, described respectively as follows:
Registering based on an edge detection algorithm:
fig. 6A is a schematic flowchart illustrating an algorithm registration process of an RGB image and a UV image based on an edge line detection algorithm.
S601: the electronic device extracts edge lines of the RGB image and the UV image, respectively.
The edge lines of the RGB image and the UV image may be extracted by an edge detection algorithm, such as the Sobel operator or the Canny operator.
As shown in fig. 6B, edge lines A1, A2, and A3 are extracted on the RGB image, and edge lines B1, B2, and B3 are extracted on the UV image.
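A minimal sketch of the extraction step with the Canny operator, followed by fitting straight segments with a probabilistic Hough transform; the Canny thresholds and Hough parameters are illustrative values, not from the original disclosure:

import cv2
import numpy as np

def extract_edge_lines(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
    # Canny hysteresis thresholds are illustrative
    edges = cv2.Canny(gray, 50, 150)
    # Fit straight segments (x1, y1, x2, y2) to the edge map
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=30, maxLineGap=5)
    return [] if lines is None else [tuple(l[0]) for l in lines]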
S602: and matching the edge line of the RGB image with the edge line of the UV image.
Whether the edge lines of the two images match can be determined by their inclination angles and positions in the images, where the position of an edge line may be the position of its geometric center in the image, or the position of the intersection of the edge line's extension with the image border. Two edge lines with the same inclination angle and position correspond to each other. Specifically, the electronic device 100 may calculate the inclination angles and positions of the edge lines in the RGB image and the inclination angles and positions of the edge lines in the UV image, and determine the two edge lines whose inclination angles and positions are closest as the matched pair.
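A sketch of this criterion: compute each segment's inclination angle and geometric center, then pair segments whose angle and position are closest. The weighting used to combine angle and position distance is an assumption, as the original does not specify one:

import math

def line_features(seg):
    # seg = (x1, y1, x2, y2)
    x1, y1, x2, y2 = seg
    angle = math.atan2(y2 - y1, x2 - x1)          # inclination angle
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)   # geometric center
    return angle, center

def _cost(ang_a, cx_a, cy_a, b, w_angle):
    ang_b, (cx_b, cy_b) = line_features(b)
    return w_angle * abs(ang_a - ang_b) + math.hypot(cx_a - cx_b, cy_a - cy_b)

def match_lines(rgb_lines, uv_lines, w_angle=100.0):
    matches = []
    for a in rgb_lines:
        ang_a, (cx_a, cy_a) = line_features(a)
        # Pick the UV segment minimizing the combined angle/position distance
        best = min(uv_lines, key=lambda b: _cost(ang_a, cx_a, cy_a, b, w_angle))
        matches.append((a, best))
    return matches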
S603: the RGB image and the UV image are aligned based on the matched edge lines.
In one possible implementation, the electronic device may select N pairs of matched edge lines from the matched edge lines, N being a positive integer. For example, 3 pairs of matched edge lines are selected: edge lines A1, A2, and A3 in the RGB image match edge lines B1, B2, and B3 in the UV image, respectively. The mutual transformation parameters of the RGB image and the UV image can be calculated based on the position and inclination angle of each pair of edge lines, and the transformation parameters calculated from the 3 pairs of matched edge lines are averaged to obtain the final transformation parameters.
In another possible implementation, the electronic device 100 may select (or randomly select) a set of coordinate points (x1, y1), …, (xn, yn) on an edge line of the RGB image, and correspondingly select a set of coordinate points (x1', y1'), …, (xn', yn') on the matched edge line of the UV image, where i and n are positive integers, i ≤ n, and (xi, yi) corresponds to (xi', yi'). Further, the electronic device may calculate the transformation parameters of the RGB image and the UV image from the two sets of coordinate points.
The registration method based on the contour line algorithm comprises the following steps:
fig. 7A and 7B are schematic diagrams illustrating a flowchart of performing algorithm registration on an RGB image and a UV image based on a contour detection algorithm.
S701: and extracting the face contour on the RGB image.
The electronic device may detect key points of a face, such as key points of a face contour, on the RGB image based on an existing face key point detection algorithm, so as to obtain the face contour on the RGB image. The face key point detection algorithm can be an image segmentation model, a semantic segmentation model or a neural network model for extracting a face contour.
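As one concrete possibility, the sketch below uses the open-source dlib 68-point landmark model to obtain the contour points; the original disclosure does not name a specific detector, so this substitution and the model file path are assumptions:

import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# The 68-point model file name is an assumption; it is downloaded separately
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def rgb_face_contour(rgb_img: np.ndarray) -> np.ndarray:
    """Return the jawline contour points of the first detected face."""
    faces = detector(rgb_img, 1)
    if not faces:
        return np.empty((0, 2), dtype=int)
    shape = predictor(rgb_img, faces[0])
    # In the 68-point convention, indices 0-16 trace the face outline
    return np.array([(shape.part(i).x, shape.part(i).y) for i in range(17)])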
S702: and extracting the face contour on the UV image.
The electronic device extracts the face contour on the UV image using a face contour extraction model. The face contour extraction model can be an image segmentation model, a semantic segmentation model, or a neural network model for extracting a face contour. Due to the limited accuracy of model extraction, the extraction operation on the UV image may yield multiple candidate face contours.
S703: and matching the face contour of the UV image with the face contour of the RGB image.
The electronic device 100 may extract contour features of the face contour of the UV image and of the face contour of the RGB image, respectively, using a contour feature extraction model; the face contour of the UV image whose contour features are closest to those of the face contour of the RGB image is the matched face contour. The contour feature extraction model can be an active contour model (also called a SNAKE model) or a model obtained by convolutional neural network training. The contour feature may be at least one of the area, perimeter, and center of the contour.
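A minimal sketch of this feature comparison using OpenCV contour statistics (area, perimeter, centroid); the normalization used to combine the features into one distance is an assumption:

import cv2
import numpy as np

def contour_features(cnt):
    area = cv2.contourArea(cnt)
    perimeter = cv2.arcLength(cnt, True)
    m = cv2.moments(cnt)
    cx = m["m10"] / (m["m00"] + 1e-9)   # centroid x
    cy = m["m01"] / (m["m00"] + 1e-9)   # centroid y
    return np.array([area, perimeter, cx, cy])

def best_match(rgb_contour, uv_contours):
    ref = contour_features(rgb_contour)
    # Normalize each feature by the reference so area does not dominate
    return min(uv_contours,
               key=lambda c: np.linalg.norm((contour_features(c) - ref) / (ref + 1e-9)))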
S704: and aligning the RGB image and the UV image based on the matched face contour.
The electronic device 100 may calculate transformation parameters for the RGB image and the UV image, and may transform the RGB image and the UV image according to the transformation parameters for registration.
Specifically, the electronic device 100 may randomly choose a set of coordinate points (x1, y1), …, (xn, yn) on the contour of the RGB image, and simultaneously select a corresponding set of coordinate points (x1', y1'), …, (xn', yn') on the matched contour of the UV image. The electronic device 100 may calculate the transformation parameters of the RGB image and the UV image from the two sets of coordinate points.
The transformation parameters in the above algorithm registration include a transformation parameter M1 for converting the RGB image to the UV image and/or a transformation parameter M2 for converting the UV image to the RGB image. Transforming the RGB image according to M1 yields a transformed RGB image, which is the image registered with the UV image (also called the registered RGB image); in this case the registered UV image is the original UV image. Transforming the UV image according to M2 yields a transformed UV image, which is the image registered with the RGB image (also called the registered UV image); in this case the registered RGB image is the original RGB image.
The image transformation may be an affine transformation, a non-rigid transformation, or the like.
3. Accurately registering the RGB image and the UV image by combining hardware registration and algorithm registration.
The embodiment of the application provides a method for accurately registering the RGB image and the UV image by combining hardware registration and algorithm registration. After the RGB camera and the UV camera have been registered by production-line hardware registration, some deviation may still exist. To achieve a better fused presentation of the RGB image and the UV image, algorithm registration may further be performed on them.
In the application, the pixels on the registered RGB image and the UV image correspond one to one.
The image fusion method provided by the embodiment of the present application is described as follows.
After the RGB image and the UV image are registered, the registered RGB image and UV image may be fused, and the fused image may be displayed. The following describes an image fusion method according to an embodiment of the present application, such as the flowchart shown in fig. 8A and 8B, which may include, but is not limited to, the following steps:
S801: Acquire a target area based on the registered RGB image.
The target area can be a face, a cheek, a forehead, a face mask and the like, and can also be other areas. The embodiments of the present application take the target area as a face mask as an example for illustration.
For example, when a finger is detected based on the RGB image, the position to which the finger points is acquired, and a target region is generated centered on that position. The target area may be a circular, rectangular, or triangular area centered at the position; the target area may also be the facial area at that position. For example, when the finger points to a position on the forehead, the target area is the area where the forehead is located.
In one implementation, the target region is a face mask, and one implementation of recognizing the face mask based on the RGB image may be as follows. First, the electronic device 100 identifies a face region in the RGB image based on a face detection algorithm; second, the electronic device 100 identifies the facial features in the face region using a feature point location (landmark) algorithm, that is, identifies the eye region, eyebrow region, mouth region, and so on, and then removes the eye region, eyebrow region, and mouth region from the face region; the remaining region is the face mask. In the example face mask of fig. 8C, the white area is the face mask.
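A minimal sketch of assembling the face mask, assuming the face outline and the eye, eyebrow, and mouth regions are each available as point polygons; the parameter names are placeholders:

import cv2
import numpy as np

def build_face_mask(img_shape, face_poly, eye_polys, brow_polys, mouth_poly):
    """Return a binary mask: face region minus eyes, eyebrows, and mouth."""
    mask = np.zeros(img_shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.asarray(face_poly, dtype=np.int32)], 255)
    # Carve out the regions that are excluded from the face mask
    for poly in [*eye_polys, *brow_polys, mouth_poly]:
        cv2.fillPoly(mask, [np.asarray(poly, dtype=np.int32)], 0)
    return mask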
S802: and carrying out sunscreen detection on the UV image in the target area range to obtain a sunscreen detection result.
The principle of sunscreen detection is as follows: chemical sunscreen absorbs ultraviolet light. For a region of the face to which chemical sunscreen has been applied, a large amount of ultraviolet light is absorbed, little ultraviolet light is reflected back to the UV camera, and the corresponding imaging area appears black. Conversely, a region of the face without chemical sunscreen reflects more ultraviolet light back to the UV camera, and its corresponding imaging area appears white. Different sunscreen thicknesses reflect different amounts of ultraviolet light to the UV camera, so the corresponding imaging areas show different colors: the thicker the chemical sunscreen, the darker the imaging color.
Taking the target area as the face mask as an example, in a possible implementation manner, the electronic device may perform sunscreen detection on an image within the range of the face mask in the UV image.
In another possible implementation, the electronic device may also perform full-image sunscreen detection on the UV image, and then use the face mask to filter the detection result, that is, remove the detection results for areas outside the face mask from the full-image detection result, to obtain the detection result within the face mask range.
Specifically, sunscreen detection is performed on each pixel point according to its data on the UV image, and the detection result of each pixel point is determined. The detection result may include at least one of the application thickness and the effectiveness of the sunscreen corresponding to the pixel point, which is not limited in this embodiment of the application.
The electronic device 100 may determine, based on the gray value corresponding to each pixel point, which pixel points on the UV image have sunscreen applied, as well as the application thickness and effectiveness of the sunscreen corresponding to each pixel point. The application thickness or effectiveness corresponding to a pixel point can be obtained from the pixel value (for example, the gray value) of that pixel point; the smaller the gray value, the greater the thickness. The effectiveness of the sunscreen is either effective or ineffective. If the gray value of a pixel point is lower than a first threshold, a sufficient amount of sunscreen has been applied at the corresponding position, and the sunscreen is effective; if the gray value is higher than the first threshold but lower than a second threshold, sunscreen has been applied at the corresponding position but the amount is insufficient, and the sunscreen is ineffective; if the gray value is higher than the second threshold, no sunscreen has been applied at the corresponding position.
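A minimal sketch of this per-pixel rule; the two gray thresholds are illustrative values, as the original does not give numbers:

import numpy as np

def classify_sunscreen(uv_gray, mask, t1=80, t2=180):
    """Classify each masked pixel of the UV gray image.

    Returns an int map: 2 = sufficient amount (effective), 1 = applied but
    insufficient (ineffective), 0 = no sunscreen. Thresholds t1 < t2 are
    assumptions, not values from the original disclosure.
    """
    result = np.zeros(uv_gray.shape, dtype=np.uint8)
    inside = mask > 0
    result[inside & (uv_gray < t1)] = 2
    result[inside & (uv_gray >= t1) & (uv_gray < t2)] = 1
    return result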
S803: and fusing the detection results of the registered RGB images and the sunscreen cream to obtain a fused image, as shown in FIG. 9A.
The detection result of the sunscreen comprises the smearing thickness and effectiveness of the sunscreen corresponding to each pixel point in the target area.
In a specific implementation of S803, a detection image generated based on a detection result of the sunscreen cream is used as a foreground image, the registered RGB image is used as a background image, and the detection result of the sunscreen cream is superimposed on the registered RGB image, so that the obtained image is a fused image.
The detection image comprises all pixel points in the target area; the pixel value of a pixel point on the detection image is the pixel value corresponding to the detection result of that pixel point, obtained from the sunscreen detection result. For example, purple may represent pixel points coated with sunscreen: the greater the thickness, the darker the purple, and the smaller the thickness, the lighter the purple. As another example, a positive color (e.g., green) may represent pixel points with a sufficient amount of sunscreen applied, and a warning color (e.g., red) may represent pixel points with an insufficient amount applied.
Furthermore, the marking mode of the detection image may also include edge lines obtained by outlining the effective area of the sunscreen, where a darker edge line color indicates a greater application thickness of the sunscreen in that area.
Wherein, the superposition may include, but is not limited to, the following two implementations, as shown in fig. 9A, for example:
implementation mode one
The detection image may be overlaid over the registered RGB image. At this time, the pixel value of the pixel point in the target area (e.g., face mask) in the obtained fused image is the pixel value corresponding to the detection result of the pixel point. And the pixel value of the pixel point in the image after fusion except the face mask is the pixel value of the pixel point in the RGB image after registration.
Implementation mode two
The detection image and the registered RGB image can be fused with different proportions of transparency. At this time, the obtained pixel value of the pixel point in the target area (e.g., face mask) in the fused image is the weighted sum of the pixel value corresponding to the detection result of the pixel point and the pixel value of the pixel point in the RGB image after registration, as shown in formula (2); and the pixel value of a pixel point in the fused image except the face mask is the pixel value of the pixel point in the RGB image after registration.
P(i, j) = a × p1(i, j) + b × p2(i, j) (2)

In formula (2), (i, j) is the position coordinate of a pixel point within the target region of the fused image; a is the opacity of the detection image at the pixel point, and b is the opacity of the registered RGB image, where 0 < a < 1, 0 < b < 1, and a + b = 1. P(i, j) is the pixel value of the pixel point with position coordinate (i, j) within the target region of the fused image; p1(i, j) is the pixel value of the pixel point with position coordinate (i, j) within the target region of the detection image; p2(i, j) is the pixel value of the pixel point with position coordinate (i, j) within the target region of the registered RGB image.
For example, if the transparency of the sunscreen detection result is 80%, the R, G, and B channel values of a pixel point in the target area (e.g., the face mask) of the fused image are each equal to 0.2 times the pixel value corresponding to the detection result of that pixel point plus 0.8 times the pixel value of that pixel point in the registered RGB image.
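A minimal sketch of implementation two, applying formula (2) only inside the target region; the default a = 0.2 follows the 80%-transparency example above, and all names are illustrative:

import numpy as np

def fuse_with_mask(rgb_img, detect_img, mask, a=0.2):
    """Blend detection image and registered RGB image inside the mask.

    Outside the mask, the fused image keeps the registered RGB pixels,
    matching implementation two above.
    """
    b = 1.0 - a
    fused = rgb_img.astype(np.float32).copy()
    inside = mask > 0
    fused[inside] = a * detect_img[inside].astype(np.float32) + b * fused[inside]
    return fused.astype(np.uint8)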
An example of a fused image provided by the present application is presented below.
As shown in fig. 9B, the fused image exhibits the thickness of the sunscreen.
In one implementation, the electronic device 100 may superimpose the application thickness of the sunscreen corresponding to each pixel point onto the RGB image for fused presentation. The electronic device 100 may determine the application thickness of the sunscreen on the UV image based on the gray value corresponding to each pixel point, mark the application thickness of each pixel point with different colors, and superimpose the color-marked thickness onto the RGB image, thereby visually presenting the application thickness of the sunscreen. The electronic device 100 may indicate the application thickness by adjusting the shade of the color of the pixel point on the RGB image; for example, the lighter the color, the thinner the applied sunscreen at that pixel point, and the darker the color, the thicker the applied sunscreen.
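A sketch of one way to render the thickness as a color shade using an OpenCV colormap; the choice of colormap and blend weight are assumptions, not specified in the original:

import cv2
import numpy as np

def thickness_overlay(rgb_img, uv_gray, mask, alpha=0.6):
    # Lower gray value means thicker sunscreen; invert so thick areas map
    # to the strong end of the colormap
    thickness = 255 - uv_gray
    colored = cv2.applyColorMap(thickness, cv2.COLORMAP_JET)
    out = rgb_img.copy()
    inside = mask > 0
    blended = cv2.addWeighted(colored, alpha, rgb_img, 1 - alpha, 0)
    out[inside] = blended[inside]  # overlay only inside the target region
    return out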
It should be understood that the pixel value of the pixel point in the face mask in the fused image is the pixel value corresponding to the smearing thickness of the sunscreen cream of the pixel point, or is the weighted sum of the pixel value corresponding to the smearing thickness of the sunscreen cream of the pixel point and the pixel value of the pixel point in the registered RGB image; and the pixel values of the pixel points except the face mask in the fused image are the pixel values of the same pixel point in the registered RGB image.
As shown in fig. 9B, the fused image exhibits sunscreen effectiveness.
In yet another implementation, as shown in fig. 9B, the electronic device 100 may fuse and present the effectiveness of sunscreen application at each pixel point on the RGB image. In some embodiments, the electronic device 100 may indicate the effectiveness by filling different colors or by outlining in different colors. The effectiveness of sunscreen may include no sunscreen applied or cleanly removed, an insufficient amount applied (poor sun protection), and a sufficient amount applied (good sun protection). As shown in fig. 9B, when it is detected that a pixel point has no sunscreen applied, no color mark is made on the RGB image; when it is detected that a pixel point has sunscreen applied but in an insufficient amount, that is, the sun-protection effect is poor, a warning color (for example, red) may be marked; when it is detected that a pixel point has a sufficient amount of sunscreen, that is, the sun-protection effect is good, a positive color (for example, green or blue) may be marked. Finally, the electronic device superimposes the marked colors on the RGB image. For the pixel value corresponding to the effectiveness of sunscreen application of a pixel point within the face mask in the fused image, refer to the description of the pixel value corresponding to the application thickness above, which is not repeated here.
The application is not limited to the labeling mode and the labeling color of the detection result of the sunscreen.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Furthermore, the terms "first," "second," and the like are used to distinguish between different objects, and are not used to describe a particular order. The term "plurality" means two or more than two.
It should be further appreciated that reference throughout this application to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by a person skilled in the art that the embodiments described herein can be combined with other embodiments.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (25)

1. An image processing method applied to an electronic device, the method comprising:
the electronic equipment acquires a first image acquired through the RGB camera and a second image acquired through the UV camera; the first image and the second image are acquired simultaneously;
the electronic equipment registers the first image and the second image, and pixel points of the registered first image correspond to pixel points of the registered second image one by one;
the electronic equipment fuses the registered first image and the registered second image to obtain a fused image;
and the electronic equipment displays the fused image.
2. The method of claim 1, wherein the electronic device registers the first image and the second image, comprising:
the electronic equipment respectively extracts characteristic lines of the first image and the second image;
the electronic equipment matches the characteristic line of the first image with the characteristic line of the second image;
the electronic equipment determines transformation parameters of the first image and the second image based on the matched characteristic lines;
the electronic equipment transforms the first image through the transformation parameters to obtain a transformed first image, or transforms the second image through the transformation parameters to obtain a transformed second image, the transformed first image is registered with the second image, and the transformed second image is registered with the first image.
3. The method of claim 2, wherein the feature line is an edge line, and wherein the matching, by the electronic device, the feature line of the first image and the feature line of the second image comprises:
the electronic equipment determines the edge line with the closest inclination angle and position in the first image and the second image as the matched edge line.
4. The method according to claim 2, wherein the characteristic line is a human face contour, and the electronic device matches the characteristic line of the first image with the characteristic line of the second image, and the method comprises:
and the electronic equipment determines the face contour with the contour feature closest to the contour feature of the face contour of the first image in the face contours of the second image as the matched face contour.
5. The method of any of claims 2-4, wherein the electronic device determines transformation parameters for the first image and the second image based on the matched feature lines, comprising:
the matched characteristic lines comprise a first characteristic line and a second characteristic line, the first characteristic line is positioned on the first image, and the second characteristic line is positioned on the second image;
the electronic equipment selects a plurality of first coordinate points on the first characteristic line;
the electronic equipment selects second coordinate points corresponding to the first coordinate points on the second characteristic line;
the electronic device calculates transformation parameters of the first image and the second image according to the plurality of first coordinate points and the plurality of second coordinate points.
6. The method of claim 1, wherein the electronic device fusing the registered first image and the registered second image to obtain a fused image, comprising:
the electronic device identifies a target region in the registered first image;
the electronic equipment performs sunscreen detection on the registered second image to obtain a detection result in the target area;
and the electronic equipment superimposes the detection result in the target area on the registered first image to obtain the fused image.
7. The method according to claim 6, wherein the electronic device performs sunscreen detection on the registered second image to obtain a detection result in the target area, and the method comprises:
and carrying out sunscreen cream detection on the image in the target area in the second image after registration to obtain a detection result in the target area.
8. The method according to claim 6, wherein the electronic device performs sunscreen detection on the registered second image to obtain a detection result in the target area, and the method comprises:
performing sunscreen detection on the registered second image to obtain a detection result in the whole image range;
and obtaining the detection result in the target area from the detection result in the full-image range.
9. The method according to any one of claims 6-8, wherein the electronic device identifies a target region in the registered first image, including:
the electronic equipment identifies a face region in the registered first image;
the electronic equipment identifies an eye region, an eyebrow region and a mouth region in the face region;
and the electronic equipment removes the eye region, the eyebrow region and the mouth region in the face region to obtain the target region.
10. The method according to any one of claims 6-9, wherein the detection result comprises: and the pixel points correspond to at least one of the smearing thickness and the effectiveness of the sunscreen cream.
11. The method according to any one of claims 6 to 10, wherein a pixel value of a first pixel point in the target region in the fused image is a pixel value corresponding to a detection result of the first pixel point, or is a weighted sum of the pixel value corresponding to the detection result of the first pixel point and the pixel value of the first pixel point in the registered first image;
and the pixel value of a second pixel point outside the target area in the fused image is the pixel value of the second pixel point in the registered first image.
12. The method of any of claims 1-11, wherein prior to the electronic device acquiring the first image captured by the RGB camera and the second image captured by the UV camera, the method further comprises:
in response to a detected first operation, the electronic device displays a user interface and starts the RGB camera and the UV camera, the user interface comprising a preview area;
the electronic device displaying the fused image comprises: and the electronic equipment displays the fused image in the preview area.
13. An electronic device comprising one or more memories and one or more processors, wherein the one or more memories are configured to store data and instructions, and wherein the one or more processors are configured to call the data and instructions stored by the memories, and perform:
acquiring a first image acquired by an RGB camera and a second image acquired by a UV camera; the first image and the second image are acquired simultaneously;
registering the first image and the second image, wherein pixel points of the registered first image correspond to pixel points of the registered second image one by one;
fusing the registered first image and the registered second image to obtain a fused image;
and displaying the fused image.
14. The electronic device of claim 13, wherein the one or more processors perform the registering the first image and the second image comprises performing:
respectively extracting characteristic lines of the first image and the second image;
matching the characteristic line of the first image with the characteristic line of the second image;
determining transformation parameters of the first image and the second image based on the matched characteristic lines;
and transforming the first image through the transformation parameters to obtain a transformed first image, or transforming the second image through the transformation parameters to obtain a transformed second image, registering the transformed first image with the transformed second image, and registering the transformed second image with the first image.
15. The electronic device of claim 13 or 14, wherein the feature line is an edge line, and wherein the one or more processors perform the matching of the feature line of the first image and the feature line of the second image, comprising performing:
and determining edge lines with the same inclination angle and position in the first image and the second image as matched edge lines.
16. The electronic device of any of claims 13-15, wherein the feature lines are human face contours, and wherein the one or more processors perform the matching of the feature lines of the first image and the feature lines of the second image, comprising performing:
and determining the face contour with the contour feature closest to the contour feature of the face contour of the first image in the face contour of the second image as the matched face contour.
17. The electronic device of any of claims 13-16, wherein the one or more processors perform the determining transformation parameters for the first image and the second image based on the matched feature lines comprises performing:
the matched characteristic lines comprise a first characteristic line and a second characteristic line, the first characteristic line is positioned on the first image, and the second characteristic line is positioned on the second image;
selecting a plurality of first coordinate points on the first characteristic line;
selecting second coordinate points corresponding to the first coordinate points on the second characteristic line;
calculating transformation parameters of the first image and the second image according to the plurality of first coordinate points and the plurality of second coordinate points.
18. The electronic device of any of claims 13-17, wherein the one or more processors perform the fusing of the registered first image and the registered second image to obtain a fused image, comprising performing:
identifying a target region in the registered first image;
performing sunscreen detection on the registered second image to obtain a detection result in the target area;
and superposing the detection result in the target area to the registered first image to obtain the fused image.
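(Illustrative only: a sketch of the claim 18 pipeline; `detect_sunscreen` and `colorize` are hypothetical callables standing in for the unspecified detection model and its visualization.)

```python
import numpy as np

def fuse(rgb_registered, uv_registered, target_mask, detect_sunscreen, colorize):
    detection = detect_sunscreen(uv_registered)  # per-pixel detection result
    overlay = colorize(detection)                # map the result to RGB colors
    fused = rgb_registered.copy()
    m = target_mask.astype(bool)
    fused[m] = overlay[m]   # superpose inside the target region only
    return fused
```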
19. The electronic device of claim 18, wherein the performing sunscreen detection on the registered second image to obtain a detection result within the target region comprises:
performing sunscreen detection on the image within the target region of the registered second image to obtain the detection result within the target region.
20. The electronic device of claim 18, wherein the performing sunscreen detection on the registered second image to obtain a detection result within the target region comprises:
performing sunscreen detection on the registered second image to obtain a detection result over the full image;
and obtaining the detection result within the target region from the detection result over the full image.
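(Illustrative only: claim 20's variant, assuming the full-image detection result and the target-region mask are arrays of the same height and width.)

```python
import numpy as np

def detection_in_target(full_detection, target_mask):
    # Keep only the detection results that fall inside the target region;
    # everything outside the region is zeroed out.
    result = np.zeros_like(full_detection)
    m = target_mask.astype(bool)
    result[m] = full_detection[m]
    return result
```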
21. The electronic device of any one of claims 18 to 20, wherein the identifying a target region in the registered first image comprises:
identifying a face region in the registered first image;
identifying an eye region, an eyebrow region and a mouth region in the face region;
and removing the eye region, the eyebrow region and the mouth region from the face region to obtain the target region.
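(Illustrative only: building the target-region mask from face landmarks by filling the face hull and carving out the eyes, eyebrows and mouth; the landmark source, e.g. a 68-point face detector, is an assumption.)

```python
import cv2
import numpy as np

def target_region_mask(shape_hw, face_hull, eye_polys, brow_polys, mouth_poly):
    # All polygons are int32 arrays of (x, y) landmark points.
    mask = np.zeros(shape_hw, dtype=np.uint8)
    cv2.fillConvexPoly(mask, face_hull, 255)       # whole face region
    for poly in [*eye_polys, *brow_polys, mouth_poly]:
        cv2.fillPoly(mask, [poly], 0)              # remove excluded regions
    return mask
```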
22. The electronic device of any one of claims 18 to 21, wherein the detection result comprises at least one of a sunscreen application thickness and a sunscreen effectiveness corresponding to each pixel point.
23. The electronic device of any one of claims 18 to 22, wherein a pixel value of a first pixel point within the target region in the fused image is a pixel value corresponding to the detection result of the first pixel point, or is a weighted sum of the pixel value corresponding to the detection result of the first pixel point and the pixel value of the first pixel point in the registered first image;
and a pixel value of a second pixel point outside the target region in the fused image is the pixel value of the second pixel point in the registered first image.
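(Illustrative only: the weighted-sum variant of claim 23 as per-pixel alpha blending inside the target region; the weight `alpha` is an assumed parameter.)

```python
import numpy as np

def blend_in_region(rgb_registered, detection_colors, target_mask, alpha=0.6):
    # Inside the target region: weighted sum of the detection-result color
    # and the registered RGB pixel; outside it: the RGB pixel unchanged.
    m = target_mask.astype(bool)
    out = rgb_registered.astype(np.float32)
    out[m] = alpha * detection_colors[m] + (1.0 - alpha) * out[m]
    return out.astype(np.uint8)
```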
24. The electronic device of any one of claims 13 to 23, wherein before the acquiring the first image captured by the RGB camera and the second image captured by the UV camera, the one or more processors further perform:
in response to detecting a first operation, displaying a user interface and starting the RGB camera and the UV camera, wherein the user interface comprises a preview area;
and wherein the displaying the fused image comprises:
displaying the fused image in the preview area.
25. A computer-readable storage medium comprising computer instructions which, when executed on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 12.
CN202111032943.9A 2021-09-03 2021-09-03 Image processing method and electronic device Pending CN115760931A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111032943.9A CN115760931A (en) 2021-09-03 2021-09-03 Image processing method and electronic device
PCT/CN2022/116270 WO2023030398A1 (en) 2021-09-03 2022-08-31 Image processing method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111032943.9A CN115760931A (en) 2021-09-03 2021-09-03 Image processing method and electronic device

Publications (1)

Publication Number Publication Date
CN115760931A true CN115760931A (en) 2023-03-07

Family

ID=85332589

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111032943.9A Pending CN115760931A (en) 2021-09-03 2021-09-03 Image processing method and electronic device

Country Status (2)

Country Link
CN (1) CN115760931A (en)
WO (1) WO2023030398A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023198073A1 (en) * 2022-04-15 2023-10-19 华为技术有限公司 Facial feature detection method, and readable medium and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040125996A1 (en) * 2002-12-27 2004-07-01 Unilever Home & Personal Care Usa, Division Of Conopco, Inc. Skin diagnostic imaging method and apparatus
CN109064465A (en) * 2018-08-13 2018-12-21 上海试美网络科技有限公司 It is a kind of that labeling method is merged with the skin characteristic of natural light based on UV light
CN213522136U (en) * 2020-12-16 2021-06-22 北京优彩科技有限公司 Image sensor and imaging device
CN113034354B (en) * 2021-04-20 2021-12-28 北京优彩科技有限公司 Image processing method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
WO2023030398A1 (en) 2023-03-09

Similar Documents

Publication Publication Date Title
WO2020168956A1 (en) Method for photographing the moon and electronic device
WO2021136050A1 (en) Image photographing method and related apparatus
CN109191549B (en) Method and device for displaying animation
WO2020077511A1 (en) Method for displaying image in photographic scene and electronic device
WO2021078001A1 (en) Image enhancement method and apparatus
CN112328130B (en) Display processing method and electronic equipment
WO2022017261A1 (en) Image synthesis method and electronic device
CN112287852B (en) Face image processing method, face image display method, face image processing device and face image display equipment
WO2023284715A1 (en) Object reconstruction method and related device
CN110248037B (en) Identity document scanning method and device
WO2020113534A1 (en) Method for photographing long-exposure image and electronic device
WO2022179604A1 (en) Method and apparatus for determining confidence of segmented image
CN115272138B (en) Image processing method and related device
CN112749613A (en) Video data processing method and device, computer equipment and storage medium
US20230005277A1 (en) Pose determining method and related device
CN113538227B (en) Image processing method based on semantic segmentation and related equipment
CN110138999B (en) Certificate scanning method and device for mobile terminal
WO2022156473A1 (en) Video playing method and electronic device
EP4276760A1 (en) Pose determination method and related device
WO2023030398A1 (en) Image processing method and electronic device
CN114979457B (en) Image processing method and related device
WO2024007715A1 (en) Photographing method and related device
US20240012451A1 (en) Display method and related apparatus
CN113591514B (en) Fingerprint living body detection method, fingerprint living body detection equipment and storage medium
CN115150542B (en) Video anti-shake method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination