CN108564540B - Image processing method and device for removing lens reflection in image and terminal equipment - Google Patents
- Publication number
- CN108564540B (application CN201810179783.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- area
- human eye
- structured light
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G06T5/77—
- G06T5/73—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The application discloses an image processing method and device for removing lens reflections in an image, and a terminal device. The image processing method for removing lens reflections in an image comprises the following steps: generating a color image and a depth image; extracting a face region from the color image based on a face recognition algorithm; determining a light-reflection area in the face region according to the color image and the depth image; and performing image correction processing on the light-reflection area. By generating a color image and a depth image, extracting the face region from the color image based on a face recognition algorithm, determining the reflection area in the face region according to the color image and the depth image, and performing image correction processing on that area, the method, device, and terminal device can accurately identify the reflection area, remove the reflections in it, and improve the image effect.
Description
Technical Field
The present application relates to the field of information processing technologies, and in particular, to an image processing method and apparatus for removing reflection of a lens in an image, and a terminal device.
Background
With continuing advances in science and technology, mobile phone cameras have become increasingly powerful: pixel counts keep rising and images keep getting sharper, and people can take pictures anytime and anywhere with their phones. However, when photographing portraits, the loss of eye detail caused by reflections from eyeglass lenses remains a difficult problem. Existing de-reflection techniques fall into two categories. The first applies preprocessing before the photograph is taken, such as adding a polarizing filter, adjusting the angles of the light source and the lens, or even taking separate photographs with and without glasses and compositing them. The second removes reflections through image post-processing after the photograph is taken, for example extracting the blue channel of the eye region and differencing it against the other channels to eliminate blue glare. The first approach complicates the photographing process, and adding a polarizer reduces imaging quality. The second approach struggles to locate the correct eye region accurately, and because specular spots on eyeglass lenses are complex, with sizes, colors, and distributions that are hard to predict, post-processing is inefficient and costly.
Summary
The application provides an image processing method and device for removing lens reflections in an image, and a terminal device, aiming to solve the prior-art problem that eyeglass lenses reflect light during portrait photography and cause the loss of eye detail.
The embodiment of the application provides an image processing method for removing reflection of a lens in an image, which comprises the following steps: generating a color image and a depth image;
Extracting a face region from the color image based on a face recognition algorithm;
Determining a light reflection area in the face area according to the color image and the depth image; and
And performing image correction processing on the light reflecting area.
Optionally, generating a color image and a depth image includes:
Capturing a color image shot by a camera through an image sensor;
A depth image generated with structured light is acquired by a structured light sensor.
Optionally, determining a light reflection region in the face region according to the color image and the depth image includes:
Acquiring intensity information of structured light when the depth image is generated, and determining a lens area according to the intensity information;
Acquiring a human eye area from the lens area according to the depth image;
Determining a non-human eye area in the lens area according to the lens area and the human eye area;
And carrying out highlight detection on the non-human eye area to determine a light reflecting area.
Optionally, acquiring a depth image generated by using the structured light through a structured light sensor includes:
Projecting structured light to a human face through a structured light projector, and acquiring a structured light image passing through the human face through the structured light sensor;
And performing decoding processing on the structured light image to generate the depth image.
Optionally, performing decoding processing on the structured light image to generate the depth image, includes:
Decoding phase information corresponding to the deformed position pixels in the structured light image;
Converting the phase information into height information;
And generating the depth image according to the height information.
Optionally, the image correction processing on the light reflection area includes:
Acquiring skin feature information of the face area;
And carrying out image correction processing on the light reflecting area according to the skin characteristic information of the face area.
Optionally, performing highlight detection on the non-human eye region to determine a light reflection region, including:
Acquiring brightness information of pixels in the non-human eye region;
When the brightness information of the pixels in the non-human eye area is larger than a preset brightness value, determining highlight pixels in the non-human eye area;
And taking the set of the high-brightness pixels as the light reflection area.
Optionally, after a human eye region is obtained from the lens region according to the depth image, the human eye region is sharpened by using an edge enhancement algorithm.
Another embodiment of the present application provides an image processing apparatus for removing reflection of a lens in an image, including: a generation module for generating a color image and a depth image;
The extraction module is used for extracting a face area from the color image based on a face recognition algorithm;
The determining module is used for determining a light reflecting area in the face area according to the color image and the depth image; and
And the processing module is used for performing image correction processing on the light reflecting area.
Optionally, the generating module is configured to:
Capturing a color image shot by a camera through an image sensor;
A depth image generated with structured light is acquired by a structured light sensor.
Optionally, the determining module is configured to:
Acquiring intensity information of structured light when the depth image is generated, and determining a lens area according to the intensity information;
Acquiring a human eye area from the lens area according to the depth image;
Determining a non-human eye area in the lens area according to the lens area and the human eye area;
And carrying out highlight detection on the non-human eye area to determine a light reflecting area.
Optionally, the generating module is configured to:
Projecting structured light to a human face through a structured light projector, and acquiring a structured light image passing through the human face through the structured light sensor;
And performing decoding processing on the structured light image to generate the depth image.
Optionally, the generating module is configured to:
Decoding phase information corresponding to the deformed position pixels in the structured light image;
Converting the phase information into height information;
And generating the depth image according to the height information.
Optionally, the processing module is configured to:
Acquiring skin feature information of the face area;
And carrying out image correction processing on the light reflecting area according to the skin characteristic information of the face area.
Optionally, the determining module is configured to:
Acquiring brightness information of pixels in the non-human eye region;
When the brightness information of the pixels in the non-human eye area is larger than a preset brightness value, determining highlight pixels in the non-human eye area;
And taking the set of the high-brightness pixels as the light reflection area.
Optionally, the determining module is further configured to:
After a human eye region is obtained from the lens region according to the depth image, sharpening is carried out on the human eye region by utilizing an edge enhancement algorithm.
A further embodiment of the present application provides a non-transitory computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements an image processing method for removing lens reflections in an image according to the embodiment of the first aspect of the present application.
In another embodiment of the present application, a terminal device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the image processing method for removing lens reflections in an image according to the embodiment of the first aspect of the present application is implemented.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
By generating a color image and a depth image, extracting a face region from the color image based on a face recognition algorithm, determining a reflection region in the face region according to the color image and the depth image, and performing image correction processing on the reflection region, the reflection region can be accurately recognized, reflection in the reflection region is removed, and the image effect is improved.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of an image processing method for removing lens glints in an image according to one embodiment of the present application;
FIG. 2 is a flow diagram of decoding a structured light image to generate a corresponding depth image according to one embodiment of the present application;
FIG. 3 is a schematic view of a scene of structured light measurements according to one embodiment of the present application;
FIG. 4 is a flow diagram of determining a light reflection area in a face region from a color image and a depth image according to one embodiment of the present application;
FIG. 5 is a schematic view of a scene for determining a lens region according to an embodiment of the present application;
FIG. 6 is a schematic view of a scene defining a lens region and a retro-reflective region according to one embodiment of the present application;
FIG. 7 is a flow diagram of determining a light reflection area in a face region from a color image and a depth image according to another embodiment of the present application;
FIG. 8 is a block diagram of an image processing apparatus for removing lens reflections from an image according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The following describes an image processing method, an image processing device and a terminal device for removing lens reflection in an image according to the present application with reference to the drawings.
FIG. 1 is a flowchart of an image processing method for removing lens glints in an image according to one embodiment of the present application.
As shown in fig. 1, the image processing method for removing the reflection of the lens in the image includes:
S101, generating a color image and a depth image.
Mobile terminal technology is now mature, and people can take pictures anytime and anywhere with a smartphone. However, when photographing portraits, the loss of eye detail caused by eyeglass reflections remains a difficult problem. To solve this problem, the present application provides an image processing method for removing lens reflections in an image.
In this embodiment, a color image taken by the camera may be captured by the image sensor, and a depth image generated using structured light may be acquired by the structured light sensor.
Specifically, structured light can be projected to a human face through a structured light projector, a structured light image passing through the human face is acquired through a structured light sensor, and then the structured light image is decoded, so that a corresponding depth image is generated. The process of decoding the structured-light image to generate the corresponding depth image, as shown in fig. 2, may further include:
S201, decoding phase information corresponding to the deformed position pixel in the structured light image.
S202, converting the phase information into height information.
And S203, generating a depth image according to the height information.
That is, the acquisition of information related to the three-dimensional model of the face of the user may be performed by projecting structured light to the face of the user. The structured light is infrared light, and can be laser stripes, Gray codes, sine stripes or patterns such as non-uniform speckles. After the patterns pass through the face, the contour and depth information of the face, such as the height of the nose, the face shape and the like, can be acquired.
The following description takes the widely used fringe projection technique as an example. As shown in fig. 3, when surface structured light is used for projection, a sinusoidal fringe pattern is generated by computer programming and projected onto the measured object through a projection device; a camera captures the fringes as they are bent by the modulation of the object's surface; the bent fringes are demodulated to obtain a phase, and the phase is then converted into a height. A critical step is system calibration, including the registration between the camera and the projection device, which is a likely source of error. Specifically, subtracting the phase of the reference fringes from the phase of the bent fringes gives a phase difference, which represents the height of the measured object relative to the reference plane; substituting the phase difference into the phase-to-height conversion formula yields the three-dimensional model of the measured object. It should be understood that, in practical applications, the structured light used in the embodiments of the present application may be any pattern other than fringes, depending on the application scenario.
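The phase-difference-to-height conversion described above can be sketched as follows. This is a simplified triangulation model, not the patent's own formula, and the default parameter values (reference-plane distance, projector-camera baseline, fringe pitch) are hypothetical placeholders that a real system would obtain through calibration:

```python
import numpy as np

def phase_to_height(phase_obj, phase_ref, L=0.5, d=0.1, p=0.005):
    """Convert a fringe phase difference into height per pixel.

    L, d, p are assumed system parameters: camera-to-reference-plane
    distance, projector-camera baseline, and fringe pitch on the
    reference plane. Real systems obtain them via calibration.
    """
    dphi = phase_obj - phase_ref  # phase of bent fringes minus reference phase
    # Common fringe-projection-profilometry approximation, valid for h << L:
    # h = L * dphi / (dphi + 2*pi*d/p)
    return L * dphi / (dphi + 2 * np.pi * d / p)
```

Under this approximation a zero phase difference maps to zero height, and a small positive difference maps to a small positive height, consistent with measuring relief relative to the reference plane.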
And S102, extracting a face region from the color image based on a face recognition algorithm.
Specifically, the position and size of the face in the color image can be identified based on a face recognition algorithm, and the face region is then extracted according to this information.
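The patent does not commit to a particular face recognition algorithm. As a purely illustrative stand-in, the sketch below locates a bounding box of skin-toned pixels with a crude RGB rule; production code would use a trained detector instead:

```python
import numpy as np

def extract_face_region(rgb):
    """Toy stand-in for a face detector: bounding box of skin-toned pixels.

    The crude RGB skin rule below is an assumption for illustration only;
    the patent only requires *some* face recognition algorithm.
    Returns (top, left, bottom, right) or None if no skin pixels found.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    ys, xs = np.nonzero(skin)
    if ys.size == 0:
        return None
    return (ys.min(), xs.min(), ys.max(), xs.max())
```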
And S103, determining a light reflecting area in the human face area according to the color image and the depth image.
The light-reflection area within the face area is the area of the eyeglass lens that reflects light when the glasses are illuminated.
In an embodiment of the present application, as shown in fig. 4, determining a retroreflective area in the face area according to the color image and the depth image may further include the following steps:
S401, acquiring intensity information of the structured light when the depth image is generated, and determining a lens area according to the intensity information.
S402, acquiring a human eye area from the lens area according to the depth image.
And S403, determining a non-human eye area in the lens area according to the lens area and the human eye area.
S404, highlight detection is carried out on the non-human eye area, and a light reflection area is determined.
Specifically, luminance information of pixels in the non-human eye region can be acquired. When the brightness information of the pixels in the non-human eye area is larger than the preset brightness value, the high-brightness pixels in the non-human eye area can be determined. Then, the set of highlight pixels is taken as a light reflection area.
As shown in fig. 5, a color camera captures a color image of the face while an infrared emitter (the structured light projection device) emits structured light. When infrared ray (structured light) a1 strikes an eyeglass lens, part of the light is reflected by the mirror-like lens surface and part is transmitted through the lens onto the surface of the eye. Infrared ray (structured light) a2 strikes the facial skin directly without passing through a lens; since it is not partially reflected away by a lens, the infrared signal of a2 received by the infrared sensor (structured light receiver) is stronger. Because the intensity difference between the two is large, the lens area A1 can be determined from the intensity information of the infrared light received by the infrared sensor, as shown in fig. 6. The structured light can also acquire the depth information of the face, with recognition precision at the millimeter level, so the infrared light can be decoded to obtain the depth image of the eye part and its edge contour information, from which the eye area A2 is determined. Removing the eye area A2 from the lens area A1 leaves the non-eye area. The brightness information of the pixels in that area is then detected, and the part whose brightness exceeds a preset brightness value is the light-reflection area. The preset brightness value can be set according to the actual situation; because glare pixels receive more light, the preset value is always greater than the brightness corresponding to normal exposure.
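Steps S401 to S404 reduce to per-pixel mask arithmetic once the infrared intensity map, the eye mask decoded from the depth image, and the color image's luminance are available. The sketch below assumes all inputs are normalized to [0, 1]; the two thresholds are illustrative assumptions, since the patent leaves the preset values to the implementation:

```python
import numpy as np

def find_reflection_mask(ir_intensity, eye_mask, luminance,
                         lens_ir_max=0.5, highlight_min=0.9):
    """Sketch of S401-S404 as boolean masks (thresholds are assumptions).

    Lens pixels return a weaker IR signal than bare skin, because part of
    the structured light is reflected away or transmitted by the lens, so
    a low-intensity threshold selects the lens area. The eye mask (from
    the depth image) is removed, and bright pixels in the remaining
    non-eye lens area are taken as the reflection area.
    """
    lens_mask = ir_intensity < lens_ir_max        # S401: weak IR return -> lens
    non_eye = lens_mask & ~eye_mask               # S402-S403: lens minus eyes
    return non_eye & (luminance > highlight_min)  # S404: highlight detection
```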
And S104, performing image correction processing on the light reflecting area.
In an embodiment of the present application, skin feature information of the face region may be acquired, and image correction processing may then be performed on the light-reflection area according to that information. For example, the missing detail in the reflection area can be filled in using the color, brightness, and other characteristics of the facial skin around the eyeglass frame.
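A minimal version of this correction fills each glare pixel with the mean color of the surrounding non-glare facial skin. This is only one way to use the skin feature information; the patent's wording also allows richer statistics such as texture or brightness:

```python
import numpy as np

def correct_reflection(image, reflection_mask, face_mask):
    """Replace glare pixels with the mean color of non-glare face skin.

    A minimal stand-in for the patent's skin-feature-based correction;
    assumes the face mask contains at least one non-glare pixel.
    """
    out = image.astype(float).copy()
    skin = face_mask & ~reflection_mask       # surrounding, unaffected skin
    mean_skin = out[skin].mean(axis=0)        # average skin color (per channel)
    out[reflection_mask] = mean_skin
    return out.astype(image.dtype)
```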
In addition, in order to further improve the effect of the correction, as shown in fig. 7, the method may further include the steps of:
S405, after a human eye area is obtained from the lens area according to the depth image, sharpening is carried out on the human eye area through an edge enhancement algorithm.
Due to the photographing angle, some reflected light may also occlude the eye area, causing loss of eye detail. Therefore, an edge enhancement algorithm can be used to sharpen the eye area, enhancing the display effect of the eye area and weakening the influence of the reflection.
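The patent names edge enhancement without fixing an algorithm; unsharp masking is one common choice. The sketch below applies it only inside the eye mask, using a 3x3 box blur so the example stays dependency-free:

```python
import numpy as np

def sharpen_eye_region(gray, eye_mask, amount=1.0):
    """Unsharp masking restricted to the eye region.

    One possible edge-enhancement choice, not necessarily the patent's;
    `amount` controls how strongly high frequencies are boosted.
    """
    h, w = gray.shape
    padded = np.pad(gray.astype(float), 1, mode="edge")
    # 3x3 box blur via shifted-neighborhood averaging.
    blur = sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    sharp = gray + amount * (gray - blur)     # boost high-frequency detail
    out = gray.astype(float).copy()
    out[eye_mask] = sharp[eye_mask]           # sharpen only the eye area
    return np.clip(out, 0, 255)
```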
According to the image processing method of this embodiment, a color image and a depth image are generated, the face area is extracted from the color image based on a face recognition algorithm, the light-reflection area in the face area is determined according to the color image and the depth image, and image correction processing is performed on that area, so that the reflection area can be accurately identified, the reflections in it removed, and the image effect improved.
In order to implement the above embodiments, the present application further provides an image processing apparatus for removing reflection of a lens in an image.
Fig. 8 is a block diagram of an image processing apparatus for removing mirror reflection in an image according to an embodiment of the present application.
As shown in fig. 8, the apparatus includes a generation module 810, an extraction module 820, a determination module 830, and a processing module 840.
The generating module 810 is configured to generate a color image and a depth image.
And an extracting module 820, configured to extract a face region from the color image based on a face recognition algorithm.
And a determining module 830, configured to determine a light reflection area in the face area according to the color image and the depth image.
The processing module 840 is configured to perform image correction processing on the light reflection area.
It should be noted that the foregoing explanation of the image processing method for removing lens reflections in an image also applies to the image processing apparatus of the embodiment of the present application; details not disclosed in this embodiment are not repeated here.
By generating a color image and a depth image, extracting the face area from the color image based on a face recognition algorithm, determining the reflection area in the face area according to the color image and the depth image, and performing image correction processing on that area, the image processing device for removing lens reflections in an image provided by the embodiment of the application can accurately identify the reflection area, remove the reflections in it, and improve the image effect.
In order to achieve the above embodiments, the present application also proposes a non-transitory computer readable storage medium having stored thereon a computer program, which when executed by a processor is capable of implementing the image processing method of removing lens glints in an image as in the foregoing embodiments.
In order to implement the above embodiment, the present application further provides a terminal device.
As shown in fig. 9, the terminal device 90 includes: a processor 91, a memory 92, and an image processing circuit 93.
Wherein the memory 92 is used for storing executable program code. The processor 91 implements the image processing method for removing lens reflections in an image of the foregoing embodiments by reading the executable program code stored in the memory 92, with the image processing circuit 93 processing the image, so as to perform the following steps:
S101', a color image and a depth image are generated.
S102', extracting a face region from the color image based on a face recognition algorithm.
And S103', determining a light reflection area in the human face area according to the color image and the depth image.
And S104', performing image correction processing on the light reflecting area.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Claims (16)
1. An image processing method for removing lens reflection in an image, comprising:
Generating a color image and a depth image;
Extracting a face region from the color image based on a face recognition algorithm;
Determining a light reflection area in the face area according to the color image and the depth image, wherein the intensity information of structured light when the depth image is generated is obtained, a lens area is determined according to the intensity information, a human eye area is obtained from the lens area according to the depth image, a non-human eye area in the lens area is determined according to the lens area and the human eye area, highlight detection is carried out on the non-human eye area, and the light reflection area is determined; and
And performing image correction processing on the light reflecting area.
2. The method of claim 1, wherein generating color and depth images comprises:
Capturing a color image shot by a camera through an image sensor;
A depth image generated with structured light is acquired by a structured light sensor.
3. The method of claim 2, wherein acquiring the depth image generated with structured light by a structured light sensor comprises:
Projecting structured light to a human face through a structured light projector, and acquiring a structured light image passing through the human face through the structured light sensor;
And performing decoding processing on the structured light image to generate the depth image.
4. The method of claim 3, wherein decoding the structured-light image to generate the depth image comprises:
Decoding phase information corresponding to the deformed position pixels in the structured light image;
Converting the phase information into height information;
And generating the depth image according to the height information.
5. The method according to claim 1, wherein the image correction processing of the light reflection area includes:
Acquiring skin feature information of the face area;
And carrying out image correction processing on the light reflecting area according to the skin characteristic information of the face area.
6. The method of claim 1, wherein highlight detecting the non-human eye region, determining a light reflection region, comprises:
Acquiring brightness information of pixels in the non-human eye region;
When the brightness information of a pixel in the non-human eye region is greater than a preset brightness value, determining the pixel as a high-brightness pixel; and
Taking the set of high-brightness pixels as the light reflection region.
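Claim 6's highlight detection reduces to a brightness threshold applied inside the non-human-eye region. A minimal sketch, assuming Rec. 601 luma weights and an arbitrary preset threshold of 220 (the patent does not specify either):

```python
import numpy as np

def luminance(rgb):
    # Rec. 601 luma weights for an RGB image (H x W x 3, float or uint8)
    return rgb @ np.array([0.299, 0.587, 0.114])

def detect_reflection(rgb, non_eye_mask, threshold=220.0):
    # A pixel belongs to the reflection region when it lies inside the
    # non-human-eye lens region AND its brightness exceeds the preset value
    return non_eye_mask & (luminance(rgb) > threshold)
```

The returned boolean mask is the "set of high-brightness pixels" of the claim; masking with `non_eye_mask` prevents genuine eye highlights (catchlights) from being treated as lens glare.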
7. The method of claim 1, further comprising:
After a human eye region is obtained from the lens region according to the depth image, sharpening is carried out on the human eye region by utilizing an edge enhancement algorithm.
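Claim 7 names only "an edge enhancement algorithm" without detail. An unsharp-mask sketch (the 3x3 box blur and the `amount` parameter are illustrative assumptions, not the patented method), restricted to the eye-region mask so only the eyes are sharpened:

```python
import numpy as np

def sharpen_eye_region(image, mask, amount=1.0):
    """Unsharp masking on a 2-D grayscale image, applied only where mask is True.

    out = original + amount * (original - blurred), clipped to [0, 255].
    """
    # 3x3 box blur via edge padding and neighbourhood averaging (no SciPy)
    padded = np.pad(image.astype(float), 1, mode='edge')
    h, w = image.shape
    blurred = sum(padded[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    out = image.astype(float)
    # Add back the high-frequency detail only inside the eye region
    out[mask] += amount * (image.astype(float) - blurred)[mask]
    return np.clip(out, 0, 255)
```

Flat regions are unchanged (the blur equals the original there), while edges inside the mask gain contrast.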
8. An image processing apparatus for removing reflection from a lens in an image, comprising:
A generation module for generating a color image and a depth image;
The extraction module is used for extracting a face area from the color image based on a face recognition algorithm;
The determination module is used for determining a light reflection region in the face region according to the color image and the depth image, wherein the determination module acquires intensity information of the structured light used when generating the depth image, determines a lens region according to the intensity information, acquires a human eye region from the lens region according to the depth image, determines a non-human-eye region in the lens region according to the lens region and the human eye region, and performs highlight detection on the non-human-eye region to determine the light reflection region; and
The processing module is used for performing image correction processing on the light reflection region.
9. The apparatus of claim 8, wherein the generation module is to:
Capturing, through an image sensor, a color image shot by a camera; and
Acquiring, through a structured light sensor, a depth image generated with structured light.
10. The apparatus of claim 9, wherein the generation module is to:
Projecting structured light onto a human face through a structured light projector, and acquiring, through the structured light sensor, a structured light image modulated by the human face;
And performing decoding processing on the structured light image to generate the depth image.
11. The apparatus of claim 10, wherein the generation module is to:
Decoding phase information corresponding to pixels at deformed positions in the structured light image;
Converting the phase information into height information;
And generating the depth image according to the height information.
12. The apparatus of claim 8, wherein the processing module is to:
Acquiring skin feature information of the face area;
And carrying out image correction processing on the light reflection area according to the skin feature information of the face area.
13. The apparatus of claim 8, wherein the determination module is to:
Acquiring brightness information of pixels in the non-human eye region;
When the brightness information of a pixel in the non-human eye region is greater than a preset brightness value, determining the pixel as a high-brightness pixel; and
Taking the set of high-brightness pixels as the light reflection region.
14. The apparatus of claim 8, wherein the determination module is further configured to:
After a human eye region is obtained from the lens region according to the depth image, sharpening is carried out on the human eye region by utilizing an edge enhancement algorithm.
15. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the image processing method for removing lens reflection in an image according to any one of claims 1 to 7.
16. A terminal device, comprising a memory and a processor, wherein the memory stores computer readable instructions, and the instructions, when executed by the processor, cause the processor to execute the image processing method for removing lens reflection in an image according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810179783.2A CN108564540B (en) | 2018-03-05 | 2018-03-05 | Image processing method and device for removing lens reflection in image and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108564540A CN108564540A (en) | 2018-09-21 |
CN108564540B true CN108564540B (en) | 2020-07-17 |
Family
ID=63532316
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810179783.2A Expired - Fee Related CN108564540B (en) | 2018-03-05 | 2018-03-05 | Image processing method and device for removing lens reflection in image and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108564540B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111582005B (en) * | 2019-02-18 | 2023-08-15 | Oppo广东移动通信有限公司 | Image processing method, device, computer readable medium and electronic equipment |
WO2021115550A1 (en) * | 2019-12-09 | 2021-06-17 | Huawei Technologies Co., Ltd. | Separation of first and second image data |
CN112995529B (en) * | 2019-12-17 | 2022-07-26 | 华为技术有限公司 | Imaging method and device based on optical flow prediction |
CN113055579B (en) * | 2019-12-26 | 2022-02-01 | 深圳市万普拉斯科技有限公司 | Image processing method and device, electronic equipment and readable storage medium |
CN111464745B (en) * | 2020-04-14 | 2022-08-19 | 维沃移动通信有限公司 | Image processing method and electronic equipment |
CN113542580B (en) * | 2020-04-22 | 2022-10-28 | 华为技术有限公司 | Method and device for removing light spots of glasses and electronic equipment |
CN112001296B (en) * | 2020-08-20 | 2024-03-29 | 广东电网有限责任公司清远供电局 | Three-dimensional safety monitoring method and device for transformer substation, server and storage medium |
CN113486714B (en) * | 2021-06-03 | 2022-09-02 | 荣耀终端有限公司 | Image processing method and electronic equipment |
CN114565531A (en) * | 2022-02-28 | 2022-05-31 | 上海商汤临港智能科技有限公司 | Image restoration method, device, equipment and medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103020579A (en) * | 2011-09-22 | 2013-04-03 | 上海银晨智能识别科技有限公司 | Face recognition method and system, and removing method and device for glasses frame in face image |
CN103093210A (en) * | 2013-01-24 | 2013-05-08 | 北京天诚盛业科技有限公司 | Method and device for glasses identification in face identification |
CN105117695A (en) * | 2015-08-18 | 2015-12-02 | 北京旷视科技有限公司 | Living body detecting device and method |
CN105959543A (en) * | 2016-05-19 | 2016-09-21 | 努比亚技术有限公司 | Shooting device and method of removing reflection |
EP3073313A1 (en) * | 2015-03-23 | 2016-09-28 | Seiko Epson Corporation | Light guide device, head-mounted display, and method of manufacturing light guide device |
CN106326828A (en) * | 2015-11-08 | 2017-01-11 | 北京巴塔科技有限公司 | Eye positioning method applied to face recognition |
CN106503644A (en) * | 2016-10-19 | 2017-03-15 | 西安理工大学 | Glasses attribute detection method based on edge projection and color characteristic |
EP3217210A1 (en) * | 2016-03-11 | 2017-09-13 | Valeo Vision | Image projector comprising a screen and a light source with electroluminescent rods |
CN107301392A (en) * | 2017-06-20 | 2017-10-27 | 华天科技(昆山)电子有限公司 | Wafer level image harvester |
CN107493411A (en) * | 2017-08-09 | 2017-12-19 | 广东欧珀移动通信有限公司 | Image processing system and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108564540B (en) | Image processing method and device for removing lens reflection in image and terminal equipment | |
US10997696B2 (en) | Image processing method, apparatus and device | |
CN107563304B (en) | Terminal equipment unlocking method and device and terminal equipment | |
CN107564050B (en) | Control method and device based on structured light and terminal equipment | |
KR101259835B1 (en) | Apparatus and method for generating depth information | |
US9811729B2 (en) | Iris recognition via plenoptic imaging | |
CN107517346B (en) | Photographing method and device based on structured light and mobile device | |
US11138740B2 (en) | Image processing methods, image processing apparatuses, and computer-readable storage medium | |
CN107370951B (en) | Image processing system and method | |
CN107392874B (en) | Beauty treatment method and device and mobile equipment | |
CN107705278B (en) | Dynamic effect adding method and terminal equipment | |
CN107491675B (en) | Information security processing method and device and terminal | |
CN108924426B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN107610171B (en) | Image processing method and device | |
CN107480615B (en) | Beauty treatment method and device and mobile equipment | |
CN107613239B (en) | Video communication background display method and device | |
CN107592491B (en) | Video communication background display method and device | |
JP5857712B2 (en) | Stereo image generation apparatus, stereo image generation method, and computer program for stereo image generation | |
CN107370952B (en) | Image shooting method and device | |
US20200045246A1 (en) | Image processing method, electronic device, and computer-readable storage medium | |
CN107734266B (en) | Image processing method and apparatus, electronic apparatus, and computer-readable storage medium | |
CN107705276B (en) | Image processing method and apparatus, electronic apparatus, and computer-readable storage medium | |
CN108760245A (en) | Optical element detection method and device, electronic equipment, readable storage medium storing program for executing | |
CN109120846B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN107563302B (en) | Face restoration method and device for removing glasses |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860; Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860; Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20200717 |