WO2023160075A1 - Image restoration method and apparatus, device and medium - Google Patents

Image restoration method and apparatus, device and medium

Info

Publication number
WO2023160075A1
WO2023160075A1 (application PCT/CN2022/134873)
Authority
WO
WIPO (PCT)
Application number
PCT/CN2022/134873
Other languages
English (en)
Chinese (zh)
Inventor
邵昌旭
许亮
李轲
Original Assignee
上海商汤智能科技有限公司
Application filed by 上海商汤智能科技有限公司 filed Critical 上海商汤智能科技有限公司
Publication of WO2023160075A1


Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T5/00 Image enhancement or restoration
            • G06T5/77 Retouching; Inpainting; Scratch removal
          • G06T7/00 Image analysis
            • G06T7/10 Segmentation; Edge detection
              • G06T7/12 Edge-based segmentation
              • G06T7/13 Edge detection
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/30 Subject of image; Context of image processing
              • G06T2207/30196 Human being; Person
                • G06T2207/30201 Face

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular to an image restoration method, apparatus, device and medium.
  • the face image in the vehicle can be analyzed to obtain the attributes and states of objects such as drivers or passengers.
  • the fatigue detection algorithm is likely to output wrong state detection results due to reflections on the glasses. Missed detections or false positives regarding the driver's state increase driving risk and degrade the user experience.
  • the embodiments of the present disclosure provide at least an image restoration method, apparatus, device and medium.
  • an image restoration method comprising:
  • the image of the lens area is repaired to obtain a repaired target image.
  • the determining of the reflective area in the lens area image according to the matching result of the lens area image and the environment image includes: matching the lens area image with the environment image to determine a marked area in the environment image that matches the lens area image; extracting the feature contour of the marked area; and segmenting the lens area image using the feature contour of the marked area to obtain the reflective area.
  • the method includes: performing glasses recognition on the face image, and determining the lens area image of the glasses worn by the target object in the face image.
  • the determining of the reflective area in the lens area image according to the matching result of the lens area image and the environment image includes: in response to determining that there is a reflection phenomenon in the lens area image, determining the reflective area in the lens area image according to the matching result of the lens area image and the environment image.
  • the determining that there is a reflection phenomenon in the lens area image includes: in response to the existence of an image area in which the lens area image and the environment image match successfully, determining that there is a reflection phenomenon in the lens area image.
  • the determining that there is a reflection phenomenon in the lens area image includes: determining a first area in the lens area image whose pixel brightness values are greater than or equal to a preset brightness threshold; and in response to the area ratio of the first area to the lens area image satisfying a preset area condition, determining that there is a reflection phenomenon in the lens area image.
  • the determining that there is a reflection phenomenon in the lens area image includes: determining a first area in the lens area image whose pixel brightness values are greater than or equal to a preset brightness threshold; and in response to the eye area in the lens area being blocked by the first area, determining that there is a reflection phenomenon in the lens area image.
  • the acquiring of the face image of the target object and the environment image including the surrounding environment of the target object includes: acquiring the face image of the target object in the vehicle captured by the first camera; and acquiring the environment image captured by the second camera, the environment image including an image of the environment outside the vehicle.
  • the method further includes: identifying the state of the target object based on the target image.
  • an image restoration device comprising:
  • An image acquisition module configured to acquire a face image of the target object and an environment image including the surrounding environment of the target object, the face image including a lens area image of glasses worn by the target object;
  • a reflective area determining module configured to determine the reflective area in the lens area image according to the matching result of the lens area image and the environment image;
  • the image processing module is configured to repair the image of the lens area according to the reflective area to obtain a repaired target image.
  • the reflective area determination module is specifically configured to: match the lens area image with the environment image to determine a marked area in the environment image that matches the lens area image; extract the feature contour of the marked area; and segment the lens area image using the feature contour of the marked area to obtain the reflective area.
  • the image acquisition module is further configured to: perform glasses recognition on the face image, and determine the lens area image of the glasses worn by the target object in the face image.
  • the reflective area determination module is specifically configured to: in response to determining that there is a reflection phenomenon in the lens area image, determine the reflective area in the lens area image according to the matching result of the lens area image and the environment image.
  • when used to determine that there is a reflection phenomenon in the lens area image, the reflective area determination module is specifically configured to: in response to the existence of an image area in which the lens area image and the environment image match successfully, determine that there is a reflection phenomenon in the lens area image.
  • when used to determine that there is a reflection phenomenon in the lens area image, the reflective area determination module is specifically configured to: determine a first area in the lens area image whose pixel brightness values are greater than or equal to a preset brightness threshold; and in response to the area ratio of the first area to the lens area image satisfying a preset area condition, determine that there is a reflection phenomenon in the lens area image.
  • when used to determine that there is a reflection phenomenon in the lens area image, the reflective area determination module is specifically configured to: determine a first area in the lens area image whose pixel brightness values are greater than or equal to a preset brightness threshold; and in response to the eye area in the lens area being blocked by the first area, determine that there is a reflection phenomenon in the lens area image.
  • the image acquisition module is specifically configured to: acquire the face image of the target object in the vehicle captured by the first camera; and acquire the environment image captured by the second camera, the environment image including an image of the environment outside the vehicle.
  • the device further includes a state recognition module configured to, after the lens area image is repaired and the repaired target image is obtained, identify the state of the target object based on the target image.
  • in a third aspect, an electronic device includes a memory and a processor; the memory is used to store computer instructions executable on the processor, and the processor is used to implement the image restoration method described in any embodiment of the present disclosure when executing the computer instructions.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the image restoration method described in any embodiment of the present disclosure is implemented.
  • the image restoration method provided by the technical solutions of the embodiments of the present disclosure can accurately locate the reflective area on the glasses by matching the lens area image of the glasses worn by the target object with the environment image of the surrounding environment, so that the lens area image can be repaired based on information such as the position and shape of the reflective area, thereby better weakening or eliminating the reflection in the lens area image.
  • the occluded face information of the target object is restored in the repaired target image, which reduces the impact of lens reflection on downstream algorithms and improves their accuracy.
  • Fig. 1 is a flowchart of an image restoration method shown in at least one embodiment of the present disclosure
  • Fig. 2 is a flowchart of another image restoration method shown in at least one embodiment of the present disclosure
  • Fig. 3 is a flowchart of another image restoration method shown in at least one embodiment of the present disclosure.
  • Fig. 4 is a block diagram of an image restoration device shown in at least one embodiment of the present disclosure.
  • Fig. 5 is a block diagram of another image restoration device shown in at least one embodiment of the present disclosure.
  • Fig. 6 is a schematic diagram of a hardware structure of an electronic device according to at least one embodiment of the present disclosure.
  • although the terms first, second, third, etc. may be used in this specification to describe various kinds of information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of this specification, first information may also be called second information, and similarly, second information may also be called first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • FIG. 1 is a flowchart of an image restoration method shown in at least one embodiment of the present disclosure, and the method may include the following steps:
  • step 102 a face image of the target object and an environment image including the surrounding environment of the target object are acquired.
  • the face image includes a lens area image of glasses worn by the target object.
  • the target object wears glasses
  • the lens area image is an image of the area where the lens in the glasses is located.
  • the environment image captured by at least one camera can be obtained.
  • the multiple environment images can be images of environments in different orientations around the target object.
  • This embodiment does not limit the manner of acquiring the face image of the target object and the environment image of the surrounding environment of the target object.
  • different acquisition methods may be used. The following are examples of several acquisition methods:
  • for example, the face image of the target object can be based on an image of the target object's face collected by the front camera of a mobile phone, and the environment image can be based on an image of the environment in front of the target object captured by the rear camera of the phone.
  • as another example, both the face image and the environment image of the target object may be acquired based on an image captured by a single camera.
  • when the target object is inside a vehicle, the face image of the target object can be collected by a camera inside the vehicle, and the environment image outside the vehicle can be collected by a camera outside the vehicle or facing the outside of the vehicle.
  • the collection time of the face image and the environment image may be at the same time or different time.
  • the face image and the environment image with the same or adjacent acquisition times may be used.
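As a sketch of this pairing rule (the function and field names are hypothetical, and the 0.1 s gap is an assumed tolerance, not a value from the disclosure):

```python
def pick_env_frame(face_ts, env_frames, max_gap=0.1):
    """Pick the environment frame whose capture time is closest to the
    face image's capture time; reject pairs that are too far apart."""
    best = min(env_frames, key=lambda f: abs(f["ts"] - face_ts))
    if abs(best["ts"] - face_ts) > max_gap:
        return None  # no environment frame close enough in time
    return best

frames = [{"ts": 0.00, "id": "a"}, {"ts": 0.05, "id": "b"}, {"ts": 0.30, "id": "c"}]
print(pick_env_frame(0.06, frames)["id"])  # nearest frame within 0.1 s: "b"
```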
  • step 104 the reflective area in the lens area image is determined according to the matching result of the lens area image and the environment image.
  • the lens in the glasses worn by the target object has reflective phenomenon
  • the image of the scene in the surrounding environment reflected by the lens will appear in the lens area image, and the area where the image is located in the lens area image is the reflective area.
  • the lens region image can be matched with the environment image to find a region with high similarity in the two images, and then determine the reflective region in the lens region image according to the region with high similarity.
  • when the lens reflects the surrounding scenery, due to factors such as the reflection angle, the curvature of the lens, and the material of the lens, the image of the scene in the reflective area will generally differ in shape, color, etc. from the corresponding scene in the environment image.
  • the similarity condition may be a similarity requirement on the shape of the region, the pixel value, and the like.
  • the reflective area in the lens area image is determined according to a matching result between the lens area image and the environment image. In this implementation, it may first be determined whether there is a reflection phenomenon in the lens area image; the reflective area is determined only when a reflection phenomenon exists, so as to reduce unnecessary consumption of computing resources.
  • the following examples illustrate several methods for determining whether there is a reflection phenomenon in the image of the lens area, but it can be understood that the specific implementation is not limited to the following examples:
  • in one method, the lens area image is first matched with the environment image. If the similarity between part or all of the lens area image and a local area of the environment image satisfies the set condition, it is determined that there is an image area in which the lens area image and the environment image match successfully, and therefore that there is reflection in the lens area image. The reflective area can then be determined from the successfully matched image area.
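The disclosure does not fix a concrete matching algorithm; one plausible sketch uses zero-mean normalized cross-correlation as the similarity measure, sliding the lens area image over the environment image. The function names and the 0.8 threshold are illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_lens_to_env(lens_img, env_img, threshold=0.8):
    """Slide the lens area image over the environment image; return the
    top-left corner of the best-matching window when its similarity
    satisfies the set condition, else None (no reflection match)."""
    lh, lw = lens_img.shape
    eh, ew = env_img.shape
    best_score, best_pos = -1.0, None
    for y in range(eh - lh + 1):
        for x in range(ew - lw + 1):
            score = ncc(env_img[y:y + lh, x:x + lw], lens_img)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos if best_score >= threshold else None

rng = np.random.default_rng(42)
env = rng.random((12, 12))
lens = env[3:7, 5:9].copy()            # here the reflection is an exact crop
print(match_lens_to_env(lens, env))    # locates the crop at (3, 5)
```

In practice a library routine such as OpenCV's template matching would replace the hand-rolled double loop; the sketch only shows the similarity-threshold logic the text describes.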
  • the image area of the lens due to reflection is likely to have a higher brightness value than other parts without reflection.
  • the first area in the image of the lens area whose pixel brightness value is greater than or equal to a preset brightness threshold may be determined.
  • when the area ratio of the first area to the lens area image satisfies the preset area condition, for example, when the ratio is greater than or equal to 10%, it indicates that there is a reflection phenomenon in the lens area image.
  • if the area ratio of the first area to the lens area image does not satisfy the preset area condition, there may be no reflection on the lens, or the reflective area is too small for the reflection to matter. In other examples, whether there is a reflection phenomenon in the lens area image may also be determined directly from the size of the first area.
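A minimal sketch of the brightness-threshold check described above; the 220 brightness threshold and the 10% area ratio are example values (only the 10% figure appears in the text), and the function name is hypothetical:

```python
import numpy as np

def has_reflection(lens_gray, brightness_thresh=220, area_ratio_thresh=0.10):
    """Mark the 'first area' (pixels at or above the brightness threshold)
    and report a reflection when it covers enough of the lens area image."""
    first_area = lens_gray >= brightness_thresh
    ratio = float(first_area.mean())      # first area / lens area image
    return ratio >= area_ratio_thresh, ratio

lens = np.full((10, 10), 80, dtype=np.uint8)   # dim lens pixels
lens[:2, :] = 255                              # bright glare strip: 20% of pixels
flag, ratio = has_reflection(lens)
print(flag, ratio)                             # True 0.2
```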
  • the first area in the lens area image whose pixel brightness values are greater than or equal to a preset brightness threshold may be determined.
  • if the reflection on the lens does not affect the image of the eye behind the lens, the reflection can be ignored; if the reflection on the lens blocks the eye area, it cannot be ignored.
  • step 106 the lens area image is repaired according to the reflective area to obtain a repaired target image.
  • image restoration technology can be used to repair the reflective area in the lens area image, eliminating or weakening the impact of the reflection and obtaining a reflection-free lens area image, that is, the repaired target image.
  • the image reflected by the lens may be superimposed and mixed with the image of the face area of the target object behind the lens, making it difficult to identify the face information of the target object.
  • information such as the structural shape of the reflective area and the edge color of the reflective area can be used to infer the information content of the reflective area, and then the reflective area can be filled.
  • the image information in the area where the reflective area matches the environment image can be used as a reference to repair the reflective area and restore the image behind the lens.
  • This embodiment does not limit the restoration algorithm used in the above process; examples include image quality enhancement algorithms, image completion algorithms, and super-resolution techniques.
  • a neural network model for image restoration can be pre-trained; the reflective area, the lens area image, and the environment image are input into the model, which outputs the inpainted target image.
  • by learning the image information of the non-reflective area around the reflective area and by learning from batches of image samples, the neural network can predict the face information of the target object in the lens area image.
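The concrete restoration algorithm is left open by the disclosure. The toy diffusion-style fill below only illustrates the idea of restoring the reflective area from the surrounding non-reflective pixels; it is not the neural network model described, and all names are hypothetical:

```python
import numpy as np

def repair_reflective_area(img, mask, iters=200):
    """Toy diffusion inpainting: masked (reflective) pixels are repeatedly
    replaced by the average of their four neighbours, so surrounding
    non-reflective content 'flows' into the hole."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()              # coarse initial guess
    for _ in range(iters):
        neighbours = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                      np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = neighbours[mask]           # update only the reflective area
    return out

img = np.full((8, 8), 100.0)                   # uniform skin-tone background
img[3:5, 3:5] = 255.0                          # simulated glare
mask = img > 200                               # reflective area from detection
repaired = repair_reflective_area(img, mask)
print(float(repaired[3, 3]))                   # back near the background value
```

A production system would instead use a trained inpainting model or an established algorithm (e.g. Telea or Navier-Stokes inpainting), optionally conditioned on the matched environment region as the text suggests.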
  • the image restoration method provided by the technical solutions of the embodiments of the present disclosure can accurately locate the reflective area on the glasses by matching the lens area image of the glasses worn by the target object with the environment image of the surrounding environment, so that the lens area image can be repaired based on information such as the position and shape of the reflective area.
  • in traditional reflection repair methods, since the specific position of the reflective area cannot be determined, the lens area image can only be roughly repaired as a whole. Compared with such methods, this method better weakens or eliminates the reflection in the lens area image.
  • the occluded face information of the target object is restored in the repaired target image, especially the information of the eye area, which reduces the impact of the reflection on the lens on the algorithm in the subsequent application and improves the accuracy of the subsequent algorithm.
  • Fig. 2 shows another image restoration method provided by at least one embodiment of the present disclosure. The method may include the following processing, where steps that are the same as in the flow of Fig. 1 are not described again in detail.
  • step 202 a face image of the target object and an environment image including the surrounding environment of the target object are acquired.
  • the face image includes a lens area image of glasses worn by the target object.
  • the image restoration method in this embodiment can be used as a preprocessing step of various image recognition algorithms, for example, it can be applied to various cabin vision algorithms.
  • step 204 the lens area image is matched with the environment image, and a marked area in the environment image that matches the lens area image is determined.
  • the lens area image can be matched with the environment image; if the similarity between a certain part of the lens area image and a certain area in the environment image satisfies certain conditions, that area in the environment image is determined to be the marked area matching the lens area image.
  • the marked area contains scenes in the known environment, such as houses, trees, and so on.
  • step 206 the feature contour of the marked area is extracted.
  • the feature contour is one or several groups of interconnected curves outlining the scene in the marked area; these curves are composed of a series of edge points.
  • the marked area includes a tree and a utility pole
  • the extracted feature contours include the outline of the tree and the outline of the utility pole.
  • This embodiment does not limit the specific manner of extracting the feature contour of the marked area; for example, image segmentation, edge detection, or other methods may be used.
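As an illustration of the edge-detection option, a crude finite-difference gradient can stand in for a real detector such as Canny; the function name and the threshold are assumptions for the sketch:

```python
import numpy as np

def edge_map(gray, thresh=50.0):
    """Rough edge detection via central finite differences: returns a
    boolean map marking candidate contour points of the scene."""
    g = gray.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]    # horizontal gradient
    gy[1:-1, :] = g[2:, :] - g[:-2, :]    # vertical gradient
    return np.hypot(gx, gy) >= thresh

scene = np.zeros((8, 8))
scene[:, 4:] = 200.0                      # a vertical step edge (e.g. a pole)
cols = np.flatnonzero(edge_map(scene).any(axis=0))
print(cols)                               # the edge fires at columns 3 and 4
```

The edge points produced this way would then be linked into the connected curves that the text calls the feature contour.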
  • step 208 the image of the lens area is segmented using the characteristic contour of the marked area to obtain the reflective area.
  • This embodiment does not limit the specific manner used for region segmentation.
  • the extracted feature contour can be used as a region mask, and the region mask covers the region within the range of the feature contour.
  • the feature contour combining the outline of the tree and the outline of the utility pole can be used as one area mask, or as two separate area masks, that is, one corresponding to the tree and one corresponding to the utility pole.
  • according to the shape of the area covered by the area mask, an area with a similar shape is fitted in the lens area image and segmented out as the reflective area.
  • multiple reflective areas can be divided.
  • the feature contour can be used as the detection target, and the target detection is performed in the lens region image to obtain the region contour with the highest confidence, and the reflective region is segmented from the lens region image according to the region contour.
  • in another possible method, the matched part in the lens area image is directly determined as the reflective area.
  • Such a method is faster than the method of this embodiment in speed, but the accuracy of the determined reflective area is not as high as that of the method of this embodiment.
  • the image presented in the lens area image is a mixture of the environment image reflected by the lens and the image of the target object's face area behind the lens, so it is difficult to separate the real reflective area from the lens area image when matching directly.
  • step 210 the lens area image is repaired according to the reflective area to obtain a repaired target image.
  • the image restoration method finds the marked area in the environment image that matches the lens area image by matching the lens area image of the glasses worn by the target object with the environment image including the surrounding environment. Since the marked area is an area in the accurate and clear environment image, the feature contour extracted from the marked area is clearer, more reliable, and closer to the shape of the actual scene in the environment, and what the lens reflects is exactly that scene. The reflective area obtained through feature contour segmentation is therefore closer to the actual reflective area on the lens, so that the lens area image can be repaired based on information such as the position and shape of the reflective area.
  • the occluded face information of the target object is restored in the repaired target image, especially the information of the eye area, which reduces the impact of the reflection on the lens on the algorithm in the subsequent application and improves the accuracy of the subsequent algorithm.
  • Fig. 3 provides an image restoration method of another embodiment of the present disclosure. This method can be applied in the field of intelligent cabins; for example, it can be executed by a DMS (Driver Monitoring System), an OMS (Occupancy Monitoring System), an intelligent driving system, or the cloud. It includes the following processing, where steps that are the same as those shown in Fig. 1 and Fig. 2 are not described again in detail.
  • step 302 the face image of the target object in the vehicle captured by the first camera is acquired.
  • the first camera may be a camera facing the interior of the vehicle; it captures images inside the vehicle, and image analysis is performed on the captured interior image to obtain the face image of the target object.
  • Target objects can be drivers, passengers, safety officers, etc. in the vehicle.
  • step 304 glasses recognition is performed on the face image, and an image of a lens area in glasses worn by the target object in the face image is determined.
  • glasses detection may be performed on the face image first, and if glasses are detected, glasses recognition may be further performed on the face image to obtain an image of the lens area in the glasses worn by the target object.
  • as another example, glasses recognition may be performed directly on the face image to determine whether the target object wears glasses and, when it is determined that the target object wears glasses, to obtain the lens area image of the glasses worn by the target object.
  • step 306 the environment image captured by the second camera is acquired, and the environment image includes an external environment image of the vehicle.
  • the second camera may be a camera facing the outside of the vehicle, which captures the environment outside the vehicle to obtain the environment image.
  • the environment image may contain data from a single camera or from multiple cameras.
  • the method in this embodiment can be used to restore the driver's face image collected by the DMS.
  • the second camera can be the vehicle's front-facing camera and/or side-view camera, and the captured environment images include the external environment in front of and/or beside the vehicle.
  • a surround-view camera facing the outside of the vehicle can be used, and the captured environment image includes a panoramic image outside the vehicle.
  • step 308 the lens area image is matched with the environment image, and a marked area in the environment image that matches the lens area image is determined.
  • if the lens area image matches a certain area of the vehicle-exterior part of the environment image, that area is marked as the marked area.
  • step 310 the feature contour of the marked region is extracted to obtain a region mask.
  • the feature contour of the marked region is extracted, and the feature contour is used as a region mask.
  • step 312 the region image of the lens region is segmented by using the region mask to obtain the reflective region.
  • step 314 the lens area image is repaired according to the reflective area to obtain a repaired target image.
  • step 316 the state of the target object is identified based on the target image.
  • the state of the target object may represent the emotional or physical state of the target object, specifically, may include at least one of the following: normal state, fatigue state, and distraction state.
  • the target image, that is, the repaired lens area image
  • the state recognition model can be a pre-trained neural network model that recognizes the state of the target object based on features such as eye closure, eyelid distance, blinking speed, gaze direction, and gaze jump movements.
  • the target image may be filled into the facial image to obtain a repaired facial image.
  • the face image is input into the state recognition model, which can combine eye features with other facial features, such as yawning of the mouth and changes in facial expression, to recognize the state of the target object.
  • the eye-related state of the target object can be identified based on the repaired target image. Specifically, eye features can be extracted from the target image to identify the gaze direction or the eye open/closed state of the target object, and the duration for which the gaze stays in one direction or the eyes remain closed can be measured from the video stream to determine whether the target object is distracted or fatigued, or to determine the degree of distraction or fatigue.
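A minimal sketch of the eye-closure duration check over a video stream; the per-frame open/closed flags, the 30 fps rate, and the 1 s threshold are all illustrative assumptions, not values from the disclosure:

```python
def fatigue_from_closure(eye_open_flags, fps=30, min_closed_s=1.0):
    """Scan per-frame eye open/closed flags and report whether any
    continuous eye-closure run exceeds the fatigue threshold."""
    longest = run = 0
    for is_open in eye_open_flags:
        run = 0 if is_open else run + 1
        longest = max(longest, run)
    closed_s = longest / fps
    return closed_s >= min_closed_s, closed_s

# 40 consecutive closed frames at 30 fps ≈ 1.33 s of continuous closure
flags = [True] * 10 + [False] * 40 + [True] * 10
print(fatigue_from_closure(flags))
```

The same run-length scan applies unchanged to gaze-direction flags for the distraction check.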
  • the image can be pre-processed with the lens reflection elimination technique and then fed into the algorithm module, improving the accuracy and availability of the recognition algorithm.
  • otherwise, detection accuracy would be significantly reduced due to the loss of feature information in the image, that is, false positives or missed detections would occur.
  • the method of this embodiment can segment and restore the lens reflection according to the outline of the scene outside the vehicle, restoring the key information required by the fatigue monitoring algorithm and improving the accuracy of the in-cabin vision algorithm.
  • FIG. 4 the figure is a block diagram of an image restoration device shown in at least one embodiment of the present disclosure, and the device includes:
  • the image acquisition module 41 is configured to acquire a face image of the target object and an environment image including the surrounding environment of the target object, the face image including a lens area image of glasses worn by the target object.
  • the reflective area determination module 42 is configured to determine the reflective area in the lens area image according to the matching result of the lens area image and the environment image.
  • the image processing module 43 is configured to repair the image of the lens area according to the reflective area to obtain a repaired target image.
  • the reflective area determination module 42 is specifically configured to: match the lens area image with the environment image to determine a marked area in the environment image that matches the lens area image; extract the feature contour of the marked area; and segment the lens area image using the feature contour of the marked area to obtain the reflective area.
  • the image acquisition module 41 is further configured to: after acquiring the face image of the target object, perform glasses recognition on the face image and determine the lens area image of the glasses worn by the target object in the face image.
  • the reflective area determination module 42 is specifically configured to: determine the lens area according to the matching result between the lens area image and the environment image in response to determining that there is a reflective phenomenon in the lens area image Reflective areas in the image.
  • When used to determine that a reflection phenomenon exists in the lens area image, the reflective area determination module 42 is specifically configured to: in response to the existence of an image area in which the lens area image and the environment image are successfully matched, determine that a reflection phenomenon exists in the lens area image.
  • When used to determine that a reflection phenomenon exists in the lens area image, the reflective area determination module 42 is specifically configured to: determine a first area in the lens area image whose pixel brightness values are greater than or equal to a preset brightness threshold; and in response to the area ratio of the first area in the lens area image meeting a preset area condition, determine that a reflection phenomenon exists in the lens area image.
  • When used to determine that a reflection phenomenon exists in the lens area image, the reflective area determination module 42 is specifically configured to: determine a first area in the lens area image whose pixel brightness values are greater than or equal to a preset brightness threshold; and in response to the eye area in the lens area being blocked by the first area, determine that a reflection phenomenon exists in the lens area image.
  • The image acquisition module 41 is specifically configured to: acquire the face image of the target object in a vehicle captured by a first camera; and acquire the environment image captured by a second camera, the environment image including an image of the environment outside the vehicle.
  • The device further includes a state recognition module 44, configured to, after the lens area image is repaired and the repaired target image is obtained, recognize the state of the target object based on the target image.
  • An embodiment of the present disclosure also provides an electronic device. As shown in FIG., the device 62 is configured to implement the image restoration method described in any embodiment of the present disclosure when executing the computer instructions.
  • An embodiment of the present disclosure further provides a computer program product, the product includes a computer program/instruction, and when the computer program/instruction is executed by a processor, the image restoration method described in any embodiment of the present disclosure is implemented.
  • An embodiment of the present disclosure further provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the image restoration method described in any embodiment of the present disclosure is implemented.
  • Since the device embodiment basically corresponds to the method embodiment, for related parts, please refer to the description of the method embodiment.
  • The device embodiments described above are only illustrative. The modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules; that is, they may be located in one place, or they may be distributed across multiple network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in this specification. This can be understood and implemented by those skilled in the art without creative effort.
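The reflection-detection logic described for module 42 (a "first area" of bright pixels whose area ratio triggers the reflection determination) can be sketched as follows. This is a minimal NumPy sketch assuming a grayscale lens area image; the function name and the specific threshold values are illustrative, not taken from the disclosure:

```python
import numpy as np

def detect_reflection(lens_gray, brightness_thresh=200, area_ratio_thresh=0.15):
    """Return (has_reflection, mask) for a grayscale lens-area image.

    Pixels with brightness >= brightness_thresh form the candidate
    reflective ("first") area; if that area covers at least
    area_ratio_thresh of the lens area image, a reflection is reported.
    Both thresholds are illustrative placeholders.
    """
    mask = lens_gray >= brightness_thresh   # first area: over-bright pixels
    ratio = float(mask.mean())              # area ratio within the lens image
    return ratio >= area_ratio_thresh, mask

# Example: a dark lens image with one large bright patch
lens = np.full((40, 40), 80, dtype=np.uint8)
lens[0:20, 0:20] = 230                      # 400 / 1600 = 25% bright area
has_reflection, mask = detect_reflection(lens)
```

The eye-occlusion variant described in the following bullet could reuse the same `mask`, testing overlap against a detected eye region instead of the global area ratio.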
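The matching and repair steps described for modules 42 and 43 can likewise be sketched in NumPy. The sum-of-squared-differences matching criterion and the median fill below are crude stand-ins for the disclosure's matching and inpainting steps, chosen only to keep the sketch self-contained; all names are illustrative:

```python
import numpy as np

def match_environment(lens, env):
    """Slide a lens-sized window over the environment image and return the
    top-left corner of the best match (minimal sum of squared differences).
    A stand-in for the disclosure's matching step."""
    lh, lw = lens.shape
    eh, ew = env.shape
    best_ssd, best_pos = None, (0, 0)
    for y in range(eh - lh + 1):
        for x in range(ew - lw + 1):
            patch = env[y:y + lh, x:x + lw].astype(np.int64)
            ssd = np.sum((patch - lens.astype(np.int64)) ** 2)
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (y, x)
    return best_pos

def repair_lens(lens, mask):
    """Repair the reflective area: replace masked pixels with the median of
    the unmasked lens pixels (a crude stand-in for real inpainting)."""
    out = lens.copy()
    if (~mask).any():
        out[mask] = np.median(lens[~mask])
    return out
```

In practice the matching, contour extraction, and repair would more likely use library routines (e.g. template matching, edge detection, and dedicated inpainting) rather than this exhaustive search and constant fill.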

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

According to embodiments, the present invention provides an image restoration method and apparatus, a device, and a medium. The method includes: acquiring a face image of a target object and an environment image including the surrounding environment of the target object, the face image including a lens area image of glasses worn by the target object; determining a reflective area in the lens area image according to a matching result of the lens area image and the environment image; and repairing the lens area image according to the reflective area to obtain a repaired target image. With this method, the lens area image can be repaired on the basis of the reflective area, thereby achieving a better reflection removal effect.
PCT/CN2022/134873 2022-02-28 2022-11-29 Procédé et appareil de retouche d'image, et dispositif et support WO2023160075A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210188238.6A CN114565531A (zh) 2022-02-28 2022-02-28 一种图像修复方法、装置、设备和介质
CN202210188238.6 2022-02-28

Publications (1)

Publication Number Publication Date
WO2023160075A1 true WO2023160075A1 (fr) 2023-08-31

Family

ID=81716112

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/134873 WO2023160075A1 (fr) 2022-02-28 2022-11-29 Procédé et appareil de retouche d'image, et dispositif et support

Country Status (2)

Country Link
CN (1) CN114565531A (fr)
WO (1) WO2023160075A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565531A (zh) * 2022-02-28 2022-05-31 上海商汤临港智能科技有限公司 一种图像修复方法、装置、设备和介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018011681A (ja) * 2016-07-20 2018-01-25 富士通株式会社 視線検出装置、視線検出プログラムおよび視線検出方法
CN108564540A (zh) * 2018-03-05 2018-09-21 广东欧珀移动通信有限公司 去除图像中镜片反光的图像处理方法、装置和终端设备
CN111582005A (zh) * 2019-02-18 2020-08-25 Oppo广东移动通信有限公司 图像处理方法、装置、计算机可读介质及电子设备
CN113055579A (zh) * 2019-12-26 2021-06-29 深圳市万普拉斯科技有限公司 图像处理方法、装置、电子设备及可读存储介质
CN114565531A (zh) * 2022-02-28 2022-05-31 上海商汤临港智能科技有限公司 一种图像修复方法、装置、设备和介质

Also Published As

Publication number Publication date
CN114565531A (zh) 2022-05-31

Similar Documents

Publication Publication Date Title
CN107862299B (zh) 一种基于近红外与可见光双目摄像头的活体人脸检测方法
WO2021196738A1 (fr) Procédé et appareil de détection de l'état d'un enfant, dispositif électronique et support de stockage
WO2021016873A1 (fr) Procédé de détection d'attention basé sur un réseau neuronal en cascade, dispositif informatique et support d'informations lisible par ordinateur
WO2022083504A1 (fr) Modèle d'apprentissage automatique, procédés et systèmes d'élimination de personnes indésirables à partir de photographies
EP3338217A1 (fr) Détection et masquage de caractéristique dans des images sur la base de distributions de couleurs
CN111062292B (zh) 一种疲劳驾驶检测装置与方法
CN111652082B (zh) 人脸活体检测方法和装置
CN108416291B (zh) 人脸检测识别方法、装置和系统
JP2003015816A (ja) ステレオカメラを使用した顔・視線認識装置
Yuen et al. On looking at faces in an automobile: Issues, algorithms and evaluation on naturalistic driving dataset
CN111914748B (zh) 人脸识别方法、装置、电子设备及计算机可读存储介质
WO2023160075A1 (fr) Procédé et appareil de retouche d'image, et dispositif et support
CN110047059B (zh) 图像处理方法、装置、电子设备及可读存储介质
CN111158457A (zh) 一种基于手势识别的车载hud人机交互系统
CN111222444A (zh) 一种考虑驾驶员情绪的增强现实抬头显示方法和系统
CN111814603A (zh) 一种人脸识别方法、介质及电子设备
CN113781421A (zh) 基于水下的目标识别方法、装置及系统
CN114663863A (zh) 图像处理方法、装置、电子设备和计算机存储介质
CN112183200B (zh) 一种基于视频图像的眼动追踪方法和系统
CN111738241B (zh) 基于双摄像头的瞳孔检测方法及装置
WO2018051836A1 (fr) Dispositif de détection d'iris, procédé de détection d'iris, programme de détection d'iris et support d'enregistrement sur lequel un programme de détection d'iris est enregistré
KR20130126386A (ko) 적응적 피부색 검출 방법, 그리고 이를 이용한 얼굴 검출 방법 및 그 장치
Jacob Comparison of popular face detection and recognition techniques
JPH07311833A (ja) 人物の顔の検出装置
CN114692775A (zh) 模型训练、目标检测及渲染方法、存储介质、程序产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22928335

Country of ref document: EP

Kind code of ref document: A1