CN116132732A - Video processing method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116132732A
CN116132732A (application CN202310093799.2A)
Authority
CN
China
Prior art keywords
target
area
video
target object
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310093799.2A
Other languages
Chinese (zh)
Inventor
蒋羽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202310093799.2A
Publication of CN116132732A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The disclosure relates to a video processing method, an apparatus, an electronic device and a storage medium. The video processing method includes: acquiring a video image, where the video image includes a target object to be processed and an occlusion region that occludes the target object; and performing image processing on the portion of the target object that is not occluded by the occlusion region, to obtain a processed video image. The video processing method, apparatus, electronic device and storage medium can solve the problem of poor display quality in a processed video image caused by an obstruction in front of the target object: the target object can be processed without affecting the occlusion region, so abnormal display of non-target objects in the occlusion region is avoided and the display quality of the processed video image is improved.

Description

Video processing method, device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of video processing, and in particular relates to a video processing method, a video processing device, electronic equipment and a storage medium.
Background
With the continuous development of video technology, target objects in video images, such as live or recorded video, can be processed, for example by adjusting the color or brightness of a local object or applying a deformation to it. When a target object is processed, an obstruction may block it; for example, in a live-video scene, when a deformation is applied to a person's face, an obstruction such as a headset or a hand may cover part of the face.
However, in related video processing technology, when an obstruction is present in the scene, it is processed together with the target object, which can cause deformation distortion, abnormal display and other artifacts on non-target objects, resulting in poor display quality of the processed video image.
Disclosure of Invention
The disclosure provides a video processing method, an apparatus, an electronic device and a storage medium, so as to at least solve the problem in the related art of poor display quality of a processed video image caused by an obstruction in front of the target object. The technical solution of the present disclosure is as follows:
According to a first aspect of embodiments of the present disclosure, there is provided a video processing method, including: acquiring a video image, where the video image includes a target object to be processed and an occlusion region occluding the target object; and performing image processing on the portion of the target object that is not occluded by the occlusion region, to obtain a processed video image.
Optionally, performing image processing on the portion of the target object not occluded by the occlusion region to obtain a processed video image includes: determining a first target area in the video image, where the first target area is an area containing the target object; determining the occlusion region in the first target area; performing image processing on the portion of the target object in the first target area that is not occluded by the occlusion region, to obtain a second target area; and obtaining the processed video image based on the second target area.
Optionally, performing image processing on the portion of the target object in the first target area that is not occluded by the occlusion region, to obtain a second target area, includes: performing image processing on both the non-occluded portion of the target object in the first target area and the occlusion region, to obtain a candidate second target area; and superposing the unprocessed occlusion region on the processed occlusion region in the candidate second target area, to obtain the second target area.
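This process-then-restore variant can be sketched in a few lines. The following is a minimal illustration (not the patent's actual implementation) using NumPy, assuming the first target area is a grayscale array, the occlusion region is a boolean mask, and the image processing is an arbitrary callable:

```python
import numpy as np

def process_region_restore_occlusion(region, occlusion_mask, image_op):
    """Apply image_op to the whole first target area (candidate second
    target area), then paste the original, unprocessed occlusion pixels
    back on top. `occlusion_mask` is a boolean array marking occluded pixels."""
    candidate = image_op(region)                      # candidate second target area
    result = candidate.copy()
    result[occlusion_mask] = region[occlusion_mask]   # restore occlusion unchanged
    return result

# Toy example: 4x4 grayscale area, brighten by 50, occlusion in the top-left 2x2.
region = np.full((4, 4), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
brighten = lambda r: np.clip(r.astype(int) + 50, 0, 255).astype(np.uint8)
out = process_region_restore_occlusion(region, mask, brighten)
```

Processing the whole area first and only then pasting the untouched occlusion pixels back keeps the image operation itself occlusion-unaware, which is often simpler than masking the operation internally.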
Optionally, before image processing is performed on the portion of the target object in the first target area that is not occluded by the occlusion region, the video processing method further includes: removing the occlusion region from the first target area according to the position of the occlusion region in the first target area, to obtain an occlusion-removed first target area.
Optionally, removing the occlusion region from the first target area according to the position of the occlusion region in the first target area, to obtain an occlusion-removed first target area, includes: setting all pixel values of the occlusion region in the first target area to a preset pixel value according to the position of the occlusion region in the first target area, to obtain the occlusion-removed first target area.
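Setting the occluded pixels to a preset value amounts to a simple masked assignment. A minimal sketch (the function name and default preset value are illustrative, not from the patent), assuming a grayscale NumPy array and a boolean occlusion mask:

```python
import numpy as np

def remove_occlusion(first_target_area, occlusion_mask, preset_value=0):
    """Set every pixel inside the occlusion region to a preset value,
    yielding the occlusion-removed first target area."""
    cleared = first_target_area.copy()
    cleared[occlusion_mask] = preset_value
    return cleared

# Toy example: clear a single occluded pixel in a 3x3 area.
area = np.full((3, 3), 200, dtype=np.uint8)
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True
cleared = remove_occlusion(area, mask, preset_value=0)
```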
Optionally, performing image processing on the portion of the target object in the first target area that is not occluded by the occlusion region, to obtain a second target area, includes: in response to the image processing not meeting a preset completion condition, performing image processing on the portion of the target object in the first target area that is not occluded by the occlusion region, to obtain the second target area; and in response to the image processing meeting the preset completion condition, performing pixel completion on the portion of the target object in the first target area that is occluded by the occlusion region, to obtain a completed target object, and performing the image processing on the completed target object to obtain the second target area, where the completion condition includes: the image processing belongs to a preset image processing type, and/or the image processing includes removing the occlusion region from the video image.
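The branch on the completion condition can be sketched as follows. This is a hedged illustration assuming NumPy arrays; the `complete_pixels` callable stands in for whatever pixel-completion (e.g., inpainting) method is used, and the trivial mean-fill below is only a placeholder:

```python
import numpy as np

def second_target_area(region, occlusion_mask, image_op, needs_completion, complete_pixels):
    """Either fill in the occluded pixels first and process the completed
    object, or process only the unoccluded part, depending on whether the
    image processing meets the completion condition."""
    if needs_completion:
        completed = complete_pixels(region, occlusion_mask)  # e.g. inpainting
        return image_op(completed)
    processed = image_op(region)
    processed[occlusion_mask] = region[occlusion_mask]  # leave occlusion untouched
    return processed

# Toy example: brighten by 50; mean-fill stands in for real inpainting.
region = np.full((4, 4), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[0, :] = True
brighten = lambda r: np.clip(r.astype(int) + 50, 0, 255).astype(np.uint8)
mean_fill = lambda r, m: np.where(m, int(r[~m].mean()), r).astype(np.uint8)
completed_out = second_target_area(region, mask, brighten, True, mean_fill)
plain_out = second_target_area(region, mask, brighten, False, mean_fill)
```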
Optionally, obtaining the processed video image based on the second target area includes: superposing the occlusion region on the second target area according to the position of the occlusion region in the video image, to obtain the processed video image; or displaying the second target area in its entirety in the processed video image, without displaying the occlusion region in the processed video image.
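Compositing the processed area back into the frame, with the original occlusion pixels optionally kept on top, can be sketched as below. This is a minimal NumPy illustration; the bounding-box representation and all names are assumptions for the sketch, not the patent's API:

```python
import numpy as np

def compose_video_image(video_image, second_target_area, target_bbox,
                        occlusion_mask_full, show_occlusion=True):
    """Write the processed second target area back into the frame at its
    bounding box (y0, y1, x0, x1). When show_occlusion is True, the
    original occlusion pixels stay on top of the processed area."""
    y0, y1, x0, x1 = target_bbox
    out = video_image.copy()
    original = video_image[y0:y1, x0:x1].copy()
    out[y0:y1, x0:x1] = second_target_area
    if show_occlusion:
        local_mask = occlusion_mask_full[y0:y1, x0:x1]
        out[y0:y1, x0:x1][local_mask] = original[local_mask]
    return out

# Toy example: 6x6 frame, processed 4x4 area, one occluded pixel at (1, 1).
frame = np.full((6, 6), 10, dtype=np.uint8)
second = np.full((4, 4), 99, dtype=np.uint8)
full_mask = np.zeros((6, 6), dtype=bool)
full_mask[1, 1] = True
out = compose_video_image(frame, second, (1, 5, 1, 5), full_mask)
```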
Optionally, determining the first target area in the video image includes: determining a visible contour of the target object in the video image; repairing the visible contour based on a preset contour characteristic of the target object, to obtain a repaired contour of the target object; and determining the area enclosed by the repaired contour as the first target area.
Optionally, the image processing includes performing at least one of the following globally or locally: deformation, pixel filtering, color adjustment, and image superposition, and the video image is a video image acquired in real time.
According to a second aspect of embodiments of the present disclosure, there is provided a video processing apparatus including: an acquisition unit configured to acquire a video image, where the video image includes a target object to be processed and an occlusion region occluding the target object; and a processing unit configured to perform image processing on the portion of the target object that is not occluded by the occlusion region, to obtain a processed video image.
Optionally, the processing unit includes: a first target area determining unit configured to determine a first target area in the video image, where the first target area is an area containing the target object; an occlusion region determining unit configured to determine the occlusion region in the first target area; a second target area determining unit configured to perform the image processing on the portion of the target object in the first target area that is not occluded by the occlusion region, to obtain a second target area; and a video image determining unit configured to obtain the processed video image based on the second target area.
Optionally, the second target area determining unit is further configured to: perform image processing on both the non-occluded portion of the target object in the first target area and the occlusion region, to obtain a candidate second target area; and superpose the unprocessed occlusion region on the processed occlusion region in the candidate second target area, to obtain the second target area.
Optionally, the video processing apparatus further includes a removing unit configured to remove the occlusion region from the first target area according to the position of the occlusion region in the first target area before image processing is performed on the non-occluded portion of the target object in the first target area, to obtain an occlusion-removed first target area.
Optionally, the removing unit is further configured to set all pixel values of the occlusion region in the first target area to a preset pixel value according to the position of the occlusion region in the first target area, to obtain the occlusion-removed first target area.
Optionally, the second target area determining unit is further configured to: in response to the image processing not meeting a preset completion condition, perform image processing on the portion of the target object in the first target area that is not occluded by the occlusion region, to obtain the second target area; and in response to the image processing meeting the preset completion condition, perform pixel completion on the portion of the target object in the first target area that is occluded by the occlusion region, to obtain a completed target object, and perform the image processing on the completed target object to obtain the second target area, where the completion condition includes: the image processing belongs to a preset image processing type, and/or the image processing includes removing the occlusion region from the video image.
Optionally, the video image determining unit is further configured to: superpose the occlusion region on the second target area according to the position of the occlusion region in the video image, to obtain the processed video image; or display the second target area in its entirety in the processed video image, without displaying the occlusion region in the processed video image.
Optionally, the first target area determining unit is further configured to: determine a visible contour of the target object in the video image; repair the visible contour based on a preset contour characteristic of the target object, to obtain a repaired contour of the target object; and determine the area enclosed by the repaired contour as the first target area.
Optionally, the image processing includes performing at least one of the following globally or locally: deformation, pixel filtering, color adjustment, and image superposition, and the video image is a video image acquired in real time.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising: a processor; a memory for storing the processor-executable instructions, wherein the processor-executable instructions, when executed by the processor, cause the processor to perform a video processing method according to an exemplary embodiment of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform a video processing method according to an exemplary embodiment of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when executed by a processor, implement a video processing method according to exemplary embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
A video image comprising a target object to be processed and an occlusion region can be acquired, and a processed video image is obtained by performing image processing on the portion of the target object not occluded by the occlusion region. The target object can thus be processed without affecting the occlusion region, so abnormal display of non-target objects in the occlusion region is avoided and the display quality of the processed video image is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a schematic diagram showing an example implementation scenario of a video processing method according to an example embodiment.
Fig. 2 is a flow chart illustrating a video processing method according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a video image processing method in accordance with an exemplary embodiment.
Fig. 4 is a flowchart illustrating steps of determining a first target area in a video processing method according to an exemplary embodiment.
Fig. 5 is a schematic diagram showing a method for determining a first target area in a video processing method according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating steps of obtaining a second target area in a video processing method according to an exemplary embodiment.
Fig. 7 is a flowchart illustrating an example of a video processing method according to an exemplary embodiment.
Fig. 8A is a schematic diagram showing a video image processed according to a conventional video processing method.
Fig. 8B is a schematic diagram illustrating a video image processed by a video processing method according to an exemplary embodiment.
Fig. 9 is a block diagram of a video processing apparatus according to an exemplary embodiment.
Fig. 10 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
It should be noted that, in this disclosure, "at least one of the items" covers three parallel cases: "any one of the items", "any combination of the items", and "all of the items". For example, "including at least one of A and B" covers three cases: (1) including A; (2) including B; (3) including A and B. Similarly, "at least one of step one and step two is executed" covers three cases: (1) executing step one; (2) executing step two; (3) executing both step one and step two.
As described above, in related video processing technology, when an obstruction exists in front of a target object to be processed, image processing is applied to the obstruction and the target object together, so the obstruction (a non-target object) is also processed, resulting in poor display quality of the processed video image and affecting the viewing experience.
Take a live-video scene as an example. With the rapid development of the short-video industry, face effects are used increasingly often in live-streaming, image and video scenes. For example, to improve the look of a video or to highlight the theme it wants to express, face deformation (e.g., face thinning) or attachment and makeup effects may be applied to the people in the video.
However, in live scenes, an obstruction appearing in front of a person's face can distort the video effect. For example, in some food-recommendation live streams, after the user turns on a face deformation effect, intruding objects such as food or chopsticks may appear in front of the user's face to highlight the subject of the video, causing those objects to be deformed as well. As another example, in some live streams with color effects, when an intruding object such as a headset or a hand suddenly covers the user's face, the color effect originally applied to the face may fail.
In the above scenes, related video processing technology handles an obstruction in front of the deformation target, or an object intruding into the deformed region, without detecting and treating the interfering object, so deformation distortion, abnormal display of non-target objects and similar artifacts may occur. In a real-time video scene in particular, when the user sets a strong deformation intensity, such abnormal deformation is even more noticeable, degrading the display quality of the video.
In view of the foregoing, a video processing method, a video processing apparatus, an electronic device, a computer-readable storage medium, and a computer program product according to exemplary embodiments of the present disclosure are provided below with reference to the accompanying drawings. They implement a video processing and display approach that offers a more robust video effect rendering and processing scheme, solving at least one of the problems above, such as abnormal deformation of intruding objects and loss of applied image processing effects.
Here, it should be noted that although the description takes a live-video scene as an example, the video processing method according to exemplary embodiments of the present disclosure is not limited to this scene and may be applied to any scene requiring video processing, for example post-production of a shot or recorded video, special-effect processing of video frames, and the like.
Further, although the description takes applying effects to people's faces as an example, the uses of the video processing method according to exemplary embodiments of the present disclosure are not limited thereto. It may also be used in video post-processing to improve video quality, display quality or local sharpness, for example by improving the color or brightness of people or objects in video images, adding animated effects to them, or repairing lighting defects introduced during shooting, and in any video processing scene that needs to treat objects differently.
An aspect according to an exemplary embodiment of the present disclosure proposes a video processing method that may be applied in any video processing scenario, and an exemplary implementation scenario of the video processing method according to an exemplary embodiment of the present disclosure is given below with reference to fig. 1.
As shown in fig. 1, when a user at a user terminal (for example, a mobile phone 111, a desktop computer 112 or a tablet computer 113) uses a video application client to obtain video to be processed (for example, real-time live video or recorded video) from a server 130 or a video processing platform through a network 120, the server 130 or the video processing platform may transmit the video to the user terminal through the network 120, and the user may process the received video through the video application client and display the processed video.
Specifically, in the above process, the user terminal may process the received video according to the video processing method of the exemplary embodiments of the present disclosure. In particular, the user terminal may acquire a video image including a target object to be processed and an occlusion region occluding the target object, and perform image processing on the portion of the target object that is not occluded by the occlusion region, obtaining a processed video image. In this way, the target object can be processed without affecting the occlusion region, so abnormal display of non-target objects in the occlusion region is avoided, the display quality of the processed video image is improved, and the viewing experience is improved.
It should be noted that, although the above description takes the user terminal as an example, the executing subject of the video processing method may be any electronic device. The electronic device may be a physical device serving as video processing hardware, for example a smartphone, tablet computer, notebook computer, digital assistant, wearable device or in-vehicle terminal, and may also include software running on such physical devices as video processing software.
Further, although fig. 1 above illustrates video processing performed by the user terminal, for example processing video received from the server, or processing video the terminal itself shoots or records, in real time or afterwards, the exemplary embodiments of the present disclosure are not limited thereto: the user terminal may also request the server to perform the video processing described above, and the server then transmits the processed video to the user terminal.
Fig. 2 is a flow chart illustrating a video processing method according to an exemplary embodiment. As described above, the video processing method according to exemplary embodiments of the present disclosure may perform image processing on the portion of a target object that is not occluded by an occlusion region, based on an acquired video image, to obtain a processed video image. The target object can thus be processed without affecting the occlusion region, so abnormal display of non-target objects in the occlusion region is avoided and the display quality of the processed video image is improved.
As shown in fig. 2, a video processing method according to an exemplary embodiment of the present disclosure may include the steps of:
in step S210, a video image may be acquired, where the video image includes a target object to be processed and an occlusion region that occludes the target object.
Here, the video image may be a video image acquired in real time, such as live video, but is not limited thereto, and the video image may be a video image photographed or recorded in advance.
The target object to be processed may be any object in the video image, such as a person, an object or the background, or any local part of the video image, such as its center region or a corner region. The target object may be a moving object in the video or a stationary one.
In step S220, image processing may be performed on the portion of the target object that is not occluded by the occlusion region, to obtain a processed video image.
As an example, as shown in fig. 3, step S220 may include the steps of:
in step S310, a first target region in the video image may be determined.
In this step, the first target area is an area containing the target object. For example, in a video image, a portion of the target object may be exposed while another portion is occluded by an obstruction. As an example, the target object may be a person's face, and in the video image a portion of the face (e.g., the chin) may be covered by a headset, a hand, or the like. Here, the first target area may be an area that includes both the exposed portion and the occluded portion of the target object.
As an example, in a real-time video scene, the contour region of a target object such as a face may be detected and segmented in real time from the video image acquired in real time, based on a pre-trained first detection model. The first detection model may be, but is not limited to, a real-time detection and segmentation method based on CenterNet, a Mask region-based convolutional neural network (Mask R-CNN), CenterMask, the Segmenting Objects by Locations (SOLO) method, or an image algorithm based on clustering, watershed segmentation, or the like. The present disclosure does not particularly limit the first detection model or its training manner, and any other detection method may also be used to determine the first target region in the video image.
As an example, as shown in fig. 4, the first target region in the video image may be determined by:
in step S410, the visible outline of the target object in the video image is determined.
In this step, the visible outline of the target object in the video image may first be identified. Because part of the target object may be occluded, the shape of the visible outline may not be the actual shape of the target object. For example, as shown in fig. 5, in the case where the target object is a face, the visible outline 510 is a discontinuous circle missing part of its arc due to the presence of an occlusion.
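The visible-outline extraction above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's detection model: given a binary mask of the partially occluded target, it marks mask pixels with at least one 4-neighbour outside the mask as the visible contour. The synthetic disc-with-occluder setup mirrors the broken circle of fig. 5; all sizes and positions are illustrative assumptions.

```python
import numpy as np

def visible_contour(mask: np.ndarray) -> np.ndarray:
    """Return an HxW boolean map of the visible contour: pixels that
    belong to the object mask but have at least one 4-neighbour outside it."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

# A disc (the face) with a rectangular occluder removed from its mask,
# so the visible contour is a broken circle, as in fig. 5.
h = w = 64
yy, xx = np.mgrid[:h, :w]
face = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
occluder = (yy >= 40) & (yy <= 55) & (xx >= 20) & (xx <= 44)  # covers the "chin"
visible = face & ~occluder

contour = visible_contour(visible)
```

The contour map can then be handed to the repair step of S420.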
In step S420, the visible contour is repaired based on the preset contour characteristic of the target object, to obtain a repaired contour of the target object.
Here, the contour characteristic corresponding to the target object may be set in advance. For example, in the case where the target object is a face, its contour shape may be a circle or an ellipse with a continuous, smooth contour curve. Taking fig. 5 as an example, once the visible contour 510 is determined in step S410, the missing contour portion of the visible contour 510 may be repaired according to the preset contour characteristic to obtain the supplementary contour 520; the whole repaired contour is then formed by the visible contour 510 and the supplementary contour 520 together.
In step S430, an area surrounded by the repair contour is determined as a first target area.
In this step, the target object may be considered to be located within the repair contour, and thus, the area surrounded by the repair contour may be determined as the first target area.
In this way, the first target area, determined by identifying the visible outline of the target object and repairing it, can cover the actual outline shape of the target object. Even if an obstruction exists, image processing can therefore be applied to the area where the whole target object is located, which ensures the integrity of the processed target object and improves the processing effect.
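Contour repair against a preset shape can be illustrated with a least-squares circle fit. This is a hedged sketch assuming the preset contour characteristic is a circle (as in the face example, not the patent's general method): the Kasa fit below recovers the full circle from the visible arc, and the region it encloses would serve as the first target area.

```python
import numpy as np

def fit_circle(points: np.ndarray):
    """Kasa least-squares circle fit: solve x^2 + y^2 = 2a*x + 2b*y + c,
    where the radius is r = sqrt(c + a^2 + b^2)."""
    x, y = points[:, 0].astype(float), points[:, 1].astype(float)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Visible contour of a circle of radius 20 centred at (32, 32),
# with a 90-degree arc missing (the occluded "chin").
theta = np.linspace(0, 1.5 * np.pi, 300)  # 270 of 360 degrees visible
pts = np.column_stack([32 + 20 * np.cos(theta), 32 + 20 * np.sin(theta)])

cx, cy, r = fit_circle(pts)
# The repaired contour is the full fitted circle; the first target
# area is everything it encloses.
```

For an elliptical preset shape the same idea applies with a conic fit instead of a circle fit.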
In step S320, an occlusion region may be determined in the first target region.
In this step, the first target area may be detected, an area forming an occlusion for the target object may be determined, and for example, a non-target object present in the first target area or a portion outside the target object may be identified.
As an example, an abnormal object intruding into the face may be recognized in real time and its boundary segmented based on a pre-trained second detection model. The second detection model may be, but is not limited to, a segmentation method based on U-Net, DeepLab, a fully convolutional network (FCN), etc. The present disclosure does not limit the second detection model or its training method, and any other detection method may also be used to determine the occlusion region occluding the target object. Alternatively, the second detection model may be implemented in the same model as the first detection model.
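As a stand-in for the second detection model (which the patent realizes with U-Net, DeepLab, or FCN style networks), a naive colour-outlier rule already conveys the idea: inside the first target region, pixels far from the dominant face colour are flagged as occlusion. The threshold and colours below are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def detect_occlusion(region: np.ndarray, face_mask: np.ndarray,
                     thresh: float = 40.0) -> np.ndarray:
    """Naive occlusion detector: inside the first target region, flag
    pixels whose colour deviates strongly from the median face colour."""
    face_color = np.median(region[face_mask], axis=0)
    dist = np.linalg.norm(region.astype(float) - face_color, axis=-1)
    return face_mask & (dist > thresh)

# Synthetic 32x32 RGB region: skin-coloured face with a grey occluder.
region = np.full((32, 32, 3), (200, 160, 140), dtype=np.uint8)
face_mask = np.ones((32, 32), dtype=bool)
region[20:28, 8:24] = (90, 90, 90)  # grey object covering the "chin"

occ = detect_occlusion(region, face_mask)
```

A trained segmentation network would replace `detect_occlusion` wholesale; the rest of the pipeline only consumes the boolean mask.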
In step S330, image processing may be performed on a portion of the first target area where the target object is not blocked by the blocking area, to obtain a second target area.
In this step, image processing includes, but is not limited to, performing at least one of the following globally or locally: deformation, pixel filtering, color adjustment, and image superposition.
In a first example, the image processing of the portion of the target object that is not blocked by the blocking area may be implemented by processing the first target area as a whole. Specifically, image processing may be performed on both the unblocked portion of the target object in the first target area and the blocking area, to obtain a candidate second target area; the unprocessed blocking area is then superimposed on the processed blocking area in the candidate second target area, to obtain the second target area.
In this example, the desired image processing may be applied to the whole first target region, including the target object and the occlusion region. Since the occlusion region is also processed in this step, the original unprocessed occlusion region may be superimposed on the processed occlusion region in the candidate second target region to obtain the second target region, so that the second target region shows the image processing applied to the first target region while the occlusion region retains its original form. Superimposing the original unprocessed occlusion region thus solves the problem that the occlusion region would otherwise be processed along with the target object, and allows the target object to be processed quickly while keeping the occlusion region unchanged.
As an example, this example may be applied to scenes in which the area of the target object is not enlarged (i.e., the area of the target object after processing is smaller than or equal to its area before processing), for example, reducing the target object locally or globally, adding an attachment to the target object, or adjusting its color or brightness. Taking a live video scenario as an example, this example can be applied to, but is not limited to: non-deforming special effects such as beautification and skin smoothing generated by a filtering algorithm or a deep learning algorithm; global or local face-thinning effects based on elastic deformation, facial feature point recognition, and deep learning algorithms; and facial attachments such as cat ears and whiskers generated based on facial anatomical position relationships. On top of the face with these special effects, the detected occlusion region is displayed as the top layer, so the display of the occluder (such as food, chopsticks, and the like) is not affected.
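The first example of step S330 can be sketched directly: run the effect over the whole first target region, then paste the unprocessed occlusion pixels back on top as the top layer. The brightness lift below is only a stand-in for a non-deforming effect such as beautification; function names are illustrative.

```python
import numpy as np

def apply_effect_keep_occluder(region, occ_mask, effect):
    """First example of step S330: process the whole first target region
    (occluder included), then superimpose the unprocessed occlusion
    pixels on top so the occluder keeps its original appearance."""
    processed = effect(region)
    processed[occ_mask] = region[occ_mask]  # top layer: original occluder
    return processed

# Non-deforming effect (brightness lift standing in for "beautification");
# the occluder must come out of the pipeline untouched.
brighten = lambda img: np.clip(img.astype(int) + 30, 0, 255).astype(np.uint8)

region = np.full((16, 16, 3), 100, dtype=np.uint8)
occ_mask = np.zeros((16, 16), dtype=bool)
occ_mask[4:8, 4:8] = True

out = apply_effect_keep_occluder(region, occ_mask, brighten)
```

Note this compositing only works for effects that do not enlarge the target object, matching the applicability condition stated above.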
In a second example, as shown in fig. 6, in step S610, in response to the image processing meeting a preset complement condition, pixel complement may be performed on a portion of the first target region where the target object is blocked by the blocking region, to obtain a complement target object; in step S620, image processing may be performed on the completed target object to obtain a second target area.
Here, the completion condition may include: the image processing belongs to a preset image processing type; and/or the image processing includes removing occlusion regions from the video image.
As an example, the preset image processing type may include, but is not limited to: non-deforming special effect processing such as beautification and skin smoothing generated by a filtering algorithm or a deep learning algorithm; global or local face-thinning processing based on elastic deformation, facial feature point recognition, and deep learning algorithms; and processing that generates facial attachments (e.g., cat ears, whiskers, etc.) based on facial anatomical position relationships. However, the example is not limited thereto and may be applied to scenes in which any image processing is performed on the target object. This example may be applied to, but is not limited to, live video scenes.
Further, in the case where the image processing includes removing the occlusion region from the video image, the content of the occlusion region is not displayed in the finally processed video image, and the whole target object is displayed in full.
In this example, the portion of the target object that is blocked by the blocked area may be pixel-complemented, i.e., the blocked area may be pixel-complemented, based on the portion of the target object that is not blocked by the blocked area. After the target object is complemented, the whole complemented target object may be subjected to a desired image processing to obtain a second target area. Therefore, the non-occluded part and the occluded part of the target object can be uniformly subjected to image processing, so that the integrity of the target object after image processing is maintained, and more possibility is provided for subsequent processing.
As an example, pixel completion may be performed based on a pre-trained image generation model to fill in the currently missing, occluded region of the target object, thereby obtaining the complete target object after completion, for example, a real-time image of the complete target object. Here, the image generation model may be, but is not limited to, a network structure such as Image GPT (Generative Pretraining from Pixels, iGPT), CM-GAN, or another generative adversarial network (GAN) structure. The present disclosure does not limit the image generation model or its training method, and methods other than an image generation model may also be used to pixel-complement the portion of the target object blocked by the occlusion region.
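A toy illustration of pixel completion, standing in for the generative models named above (iGPT, CM-GAN, etc.): after the occluder is removed, known pixels are diffused into the hole by repeated neighbour averaging. This is a naive Laplace-style fill for demonstration only, not the patent's image generation model.

```python
import numpy as np

def diffuse_fill(img: np.ndarray, hole: np.ndarray, iters: int = 200):
    """Toy stand-in for the completion model: repeatedly replace hole
    pixels with the mean of their 4-neighbours, so known face pixels
    diffuse into the occluded area."""
    out = img.astype(float).copy()
    for _ in range(iters):
        up    = np.roll(out,  1, axis=0)
        down  = np.roll(out, -1, axis=0)
        left  = np.roll(out,  1, axis=1)
        right = np.roll(out, -1, axis=1)
        out[hole] = ((up + down + left + right) / 4.0)[hole]
    return out

# Flat grey "face" with a square hole where the occluder was removed.
img = np.full((24, 24), 128.0)
hole = np.zeros((24, 24), dtype=bool)
hole[8:16, 8:16] = True
img[hole] = 0.0

filled = diffuse_fill(img, hole)
```

A real model would synthesize plausible texture rather than a smooth interpolation, but the interface (image plus hole mask in, completed image out) is the same.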
In addition, in response to the image processing not meeting the preset complement condition, the complement operation may not be performed, and the image processing may be performed on the portion of the first target area where the target object is not blocked by the blocking area, so as to obtain the second target area.
The above describes the process of performing image processing on the portion of the first target area where the target object is not blocked by the blocking area. In this process, the image processing described in step S330 may be performed directly on the first target area in the video image to obtain the second target area; in particular, the processing may be applied to the first target area comprising both the unblocked portion of the target object and the blocking area, with no special treatment applied to the blocking area.
However, the exemplary embodiments of the present disclosure are not limited thereto, and the occlusion region may be removed before performing the image processing. Specifically, before image processing is performed on a portion of the first target region where the target object is not blocked by the blocking region, the blocking region may be removed from the first target region according to the position of the blocking region in the first target region, so as to obtain the first target region after the blocking is removed. Based on this, the processing described in step S330 described above may be performed for the first target area after the occlusion removal, resulting in the second target area. Therefore, after the shielding area is removed, the image processing of the first target area can be more convenient, the calculated amount caused by the existence of the shielding area is reduced, and the processing efficiency is improved.
As an example, the pixel values of the occlusion region in the first target region may be all set to preset pixel values according to the position of the occlusion region in the first target region, so as to obtain the first target region after occlusion is removed.
Here, the preset pixel value may be, for example, 0, so that the first target area after occlusion removal includes the target object and the occluded area filled with black. The occlusion areas can thus be unified to the same pixel value, which facilitates the management and storage of pixel data in subsequent processing and improves the calculation speed. Although removing the occlusion region is described here as setting all of its pixel values to the same value, exemplary embodiments of the present disclosure are not limited thereto; for example, all pixels in the occlusion region may instead be deleted to facilitate subsequent image processing.
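The pixel-reset variant of occlusion removal takes only a couple of lines of NumPy; the preset value 0 yields the black-filled occluded area described above. Array shapes are illustrative.

```python
import numpy as np

PRESET = 0  # preset pixel value; 0 fills the occluded area with black

# Remove the occlusion by resetting its pixels to the preset value,
# giving the "first target region after occlusion removal".
region = np.random.default_rng(0).integers(0, 256, (16, 16, 3), dtype=np.uint8)
occ_mask = np.zeros((16, 16), dtype=bool)
occ_mask[2:6, 2:6] = True

cleared = region.copy()
cleared[occ_mask] = PRESET
```

The uniform value makes the removed region trivial to track and store through the rest of the pipeline.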
Regarding whether to remove the occlusion region: in the first example of step S330, image processing may be performed on the unblocked portion of the target object in the first target region together with the original occlusion region, to obtain the candidate second target region. Since in this example the unprocessed original occlusion region is subsequently superimposed on the processed occlusion region, the processed occlusion region derived from the original one is covered and thus not shown in the processed video image. The same process may also be performed after the occlusion region is removed; specifically, image processing may be performed on the unblocked portion of the target object in the first target region together with the removed occlusion region, to obtain the candidate second target region.
In the second example of step S330, as described with reference to fig. 6, the portion of the first target region where the target object is blocked by the occlusion region may be pixel-complemented, i.e., the original occlusion region may be pixel-complemented. In this example, since the original occlusion region no longer appears after completion and the completed region is shown instead, the completion may be performed either after the occlusion region is removed or directly without removing it.
In step S340, a processed video image may be obtained based on the second target region.
In this step, the second target area may be directly displayed in the processed video image without complementing the target object, so as to obtain a final video image.
Further, in the case where the target object is completed, in one example, the occlusion region may be superimposed on the second target region according to its position in the video image, resulting in the processed video image. Alternatively, in another example, the second target region may be displayed in full in the processed video image and the occlusion region not displayed at all. Compared with superimposing the occlusion region, not showing it is better suited to cases where the target object to which the image processing is applied is the focus of attention and its visible area should not be occluded by other interfering objects.
Therefore, based on the completed target object, the shielding area can be selectively overlapped back to the video image or the shielding area is not displayed any more according to actual needs, so that the image processing is more flexible, and the method can be suitable for more video processing requirements.
A video processing method according to exemplary embodiments of the present disclosure has been described above with reference to fig. 2 to 6; an example of the video processing method in a real-time video scene will now be described with reference to fig. 7.
As shown in fig. 7, in step S701, real-time image data may be acquired. Here, the real-time image data may be, for example but not limited to, real-time picture data in a live video scene or a co-streaming video scene. The data may, for example, be initiated by a client and received by a server, and all calculation steps may be performed at the server; however, this is not limiting, and all or part of the calculation steps may be performed by the client itself.
In step S702, a first target region containing a target object may be detected. The target object may be an object or region in the image where a special effect is expected to act, for example, a human face, and the special effect may be a thin-face special effect, for example.
In step S703, an occlusion region in the first target region may be detected. The occlusion region may be, for example, an intruding object covering the target object.
In step S704, the occlusion region may be removed. Specifically, after detecting the occlusion region covering the target object, the occlusion region may be removed, e.g., the occlusion region may be scratched out of the original image or reset with the same pixel value.
In step S705, it may be determined whether to complete the target object. After the occlusion region is cut out of the original image, a blank necessarily remains where the occlusion region was, so it must be decided whether to perform image completion on the target object, i.e., whether to restore the complete target object with the help of an algorithm.
If the completion is required, in step S706, the target object may be completed, and then step S707 is performed to perform image processing on the first target area. For example, for some effects that act on the target object and stretch the area, it is necessary to perform image complement on the target object to fill in the subtracted portion of the covered area, so as to ensure that the visual effect finally presented is complete.
If no completion is required, image processing may be performed directly on the first target area in step S707. For example, for effects that reduce the target object, the cut-out area may be left uncompleted. Taking a face-thinning effect as an example: after the occlusion region is cut out, the face-thinning effect is applied without completing the cut-out area; the cut-out portion of the occlusion region is compressed along with the thinned face; the original image of the occlusion region is then added back for combination, and in the final visual effect the occlusion region completely covers the vacancy left by the thinned face.
In step S708, it may be determined whether to superimpose the occlusion region. If so, the occlusion region may be superimposed in step S709; if not, the processed video image is obtained directly. Specifically, after applying the special effect to the target object, it is possible to choose whether to re-superimpose the cut-out occlusion-region layer. In the case where the target object has been completed, one may choose not to superimpose it.
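The fig. 7 flow (S701 to S709) can be condensed into a small skeleton. All names and callables here are illustrative assumptions, not the patent's implementation: detection, completion, and the special effect are injected as functions so the two branch points (complete or not, superimpose or not) stay visible.

```python
import numpy as np

def process_frame(frame, detect_target, detect_occlusion, effect,
                  complete=None, superimpose=True):
    """Skeleton of the fig. 7 flow; all callables are injected stand-ins."""
    target_mask = detect_target(frame)               # S702: first target region
    occ_mask = detect_occlusion(frame, target_mask)  # S703: occlusion region
    work = frame.copy()
    work[occ_mask] = 0                               # S704: remove occlusion
    if complete is not None:                         # S705/S706: optional completion
        work = complete(work, occ_mask)
    out = effect(work)                               # S707: apply the special effect
    if superimpose:                                  # S708/S709: restore occluder
        out[occ_mask] = frame[occ_mask]
    return out

# Minimal dummy wiring: full-frame target, fixed occluder, brightness effect.
frame = np.full((8, 8, 3), 100, dtype=np.uint8)
fixed_occ = np.zeros((8, 8), dtype=bool)
fixed_occ[2:4, 2:4] = True
out = process_frame(
    frame,
    detect_target=lambda f: np.ones(f.shape[:2], dtype=bool),
    detect_occlusion=lambda f, m: fixed_occ,
    effect=lambda f: np.clip(f.astype(int) + 50, 0, 255).astype(np.uint8),
)
```

In a real deployment the two detectors would be the first and second detection models, `complete` would be the generative completion model, and `effect` the chosen special effect.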
When the video processing method according to the exemplary embodiments of the present disclosure is applied to a real-time video scene, the user experience of applying special effects in real-time video can be improved. For example, when a user (e.g., a host) turns on a face-shaping special effect, an object in front of the face may visibly deform; such an obvious visual anomaly diverts viewers' attention and hinders the transmission of the video content. With the video processing method according to the exemplary embodiments of the present disclosure, objects in the occlusion region are no longer abnormally deformed by the image processing applied to the target object, which improves the viewing quality of the video, removes visual interference, lets viewers focus on the content the user expresses, and gives the user a better special-effect experience.
Fig. 8A and 8B are diagrams respectively showing a video image processed according to an existing video processing method and a video image processed according to a video processing method of an exemplary embodiment.
Fig. 8A and 8B contrast the effects of applying a lateral-reduction effect to an original image containing an occlusion region. As shown in fig. 8A, when the conventional method is applied, although only the target was expected to shrink laterally, the occlusion region also deforms in the area where it overlaps the target object, which is not desired. With the processing method of the embodiments of the present disclosure, as shown in fig. 8B, only the target object changes, while the occlusion region overlapping it remains the same.
Fig. 9 is a block diagram of a video processing apparatus according to an exemplary embodiment. Referring to fig. 9, the video processing apparatus includes an acquisition unit 100 and a processing unit 200.
The acquisition unit 100 is configured to acquire a video image including a target object to be processed and an occlusion region occluding the target object.
The processing unit 200 is configured to perform image processing on a portion of the target object that is not blocked by the blocking area, resulting in a processed video image.
As an example, the processing unit 200 may include a first target area determining unit 210, an occlusion area determining unit 220, a second target area determining unit 230, and a video image determining unit 240.
The first target area determination unit 210 is configured to determine a first target area in the video image, wherein the first target area is an area containing a target object.
The occlusion region determining unit 220 is configured to determine an occlusion region in the first target region.
The second target area determining unit 230 is configured to perform image processing on a portion of the first target area where the target object is not blocked by the blocking area, to obtain a second target area.
The video image determining unit 240 is configured to obtain a processed video image based on the second target area.
As an example, the second target area determination unit 230 is further configured to: image processing is carried out on the part, which is not shielded by the shielding area, of the target object in the first target area and the shielding area, so that a candidate second target area is obtained; and superposing the unprocessed shielding region on the processed shielding region in the candidate second target region to obtain the second target region.
As an example, the video processing apparatus further includes a removal unit configured to: before image processing is carried out on a part of the first target area, which is not shielded by the shielding area, of the target object, the shielding area is removed from the first target area according to the position of the shielding area in the first target area, and the first target area after shielding is removed is obtained.
As an example, the removal unit is further configured to: and setting all pixel values of the shielding region in the first target region to be preset pixel values according to the position of the shielding region in the first target region, so as to obtain the first target region after shielding is removed.
As an example, the second target area determination unit 230 is further configured to: responding to the fact that the image processing does not meet the preset complement condition, and performing image processing on the part, which is not blocked by the blocking area, of the target object in the first target area to obtain a second target area; and responding to the image processing meeting a preset complementing condition, carrying out pixel complementing on the part, which is blocked by the blocking area, of the target object in the first target area to obtain the complemented target object, wherein the complementing condition comprises the following steps: the image processing is of a preset image processing type and/or the image processing comprises removing occlusion regions from the video image; and performing image processing on the completed target object to obtain a second target area.
As an example, the video image determination unit 240 is further configured to: overlapping the shielding region to a second target region according to the position of the shielding region in the video image to obtain a processed video image; alternatively, the second target region is entirely displayed in the processed video image, and the occlusion region is not displayed in the processed video image.
As an example, the first target area determination unit 210 is further configured to: determining a visible outline of the target object in the video image; repairing the visible outline based on the preset outline characteristics of the target object to obtain a repaired outline of the target object; the area surrounded by the repair contour is determined as a first target area.
As an example, image processing includes performing at least one of the following on global or local: deformation, pixel filtering, color adjustment and image superposition, wherein the video image is a video image acquired in real time.
The specific manner in which the individual units perform the operations in relation to the apparatus of the above embodiments has been described in detail in relation to the embodiments of the method and will not be described in detail here.
Fig. 10 is a block diagram of an electronic device, according to an example embodiment. As shown in fig. 10, the electronic device 10 includes a processor 101 and a memory 102 for storing processor-executable instructions. Here, the processor executable instructions, when executed by the processor, cause the processor to perform the video processing method as described in the above exemplary embodiments.
By way of example, the electronic device 10 need not be a single device, but may be any apparatus or collection of circuits capable of executing the above-described instructions (or instruction sets) alone or in combination. The electronic device 10 may also be part of an integrated control system or system manager, or may be configured as an electronic device that interfaces locally or remotely (e.g., via wireless transmission).
In electronic device 10, processor 101 may include a Central Processing Unit (CPU), a Graphics Processor (GPU), a programmable logic device, a special purpose processor system, a microcontroller, or a microprocessor. By way of example and not limitation, processor 101 may also include an analog processor, a digital processor, a microprocessor, a multi-core processor, a processor array, a network processor, and the like.
The processor 101 may execute instructions or code stored in the memory 102, wherein the memory 102 may also store data. The instructions and data may also be transmitted and received over a network via a network interface device, which may employ any known transmission protocol.
The memory 102 may be integrated with the processor 101, for example, RAM or flash memory disposed within an integrated circuit microprocessor or the like. In addition, the memory 102 may include a stand-alone device, such as an external disk drive, a storage array, or any other storage device usable by a database system. The memory 102 and the processor 101 may be operatively coupled or may communicate with each other, for example, through an I/O port, a network connection, etc., such that the processor 101 is able to read files stored in the memory 102.
In addition, the electronic device 10 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 10 may be connected to each other via a bus and/or a network.
In an exemplary embodiment, a computer readable storage medium may also be provided, storing instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the video processing method described in the above exemplary embodiments. The computer readable storage medium may be, for example, a memory including instructions; alternatively, the computer readable storage medium may be: read-only memory (ROM), random-access memory (RAM), programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, nonvolatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disk storage, hard disk drives (HDD), solid-state drives (SSD), card memory (such as multimedia cards, Secure Digital (SD) cards, or eXtreme Digital (xD) cards), magnetic tape, floppy disks, magneto-optical data storage, hard disks, solid-state disks, and any other device configured to store a computer program and any associated data, data files, and data structures in a non-transitory manner and to provide them to a processor or computer so that the processor or computer can execute the program. The computer program in the computer readable storage medium may run in an environment deployed on a computer device, such as a client, host, proxy device, or server. Further, in one example, the computer program and any associated data, data files, and data structures may be distributed across networked computer systems, so that they are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
In an exemplary embodiment, a computer program product may also be provided, which comprises computer instructions which, when executed by a processor, implement the video processing method as described in the above exemplary embodiment.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any adaptations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A video processing method, the video processing method comprising:
acquiring a video image, wherein the video image comprises a target object to be processed and an occlusion region occluding the target object;
and performing image processing on the portion of the target object that is not occluded by the occlusion region, to obtain a processed video image.
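As an illustrative sketch only (not part of the claimed subject matter; the function names and the boolean-mask representation of the occlusion region are assumptions), the two steps of claim 1 can be expressed as:

```python
import numpy as np

def process_unoccluded(frame, occlusion_mask, effect):
    """Apply `effect` only to pixels not covered by the occlusion mask.

    frame: H x W x 3 uint8 video image
    occlusion_mask: H x W boolean array, True where the occluder lies
    effect: function mapping an image to an equally shaped image
    """
    processed = effect(frame)
    out = frame.copy()
    # Copy processed pixels only at unoccluded positions; the occluder
    # and everything behind it stays untouched.
    out[~occlusion_mask] = processed[~occlusion_mask]
    return out

# Toy usage: brighten everything except an occluded left column.
frame = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:, 0] = True
brighten = lambda im: np.clip(im.astype(np.int16) + 50, 0, 255).astype(np.uint8)
result = process_unoccluded(frame, mask, brighten)
```

Indexing a 3-channel image with a 2-D boolean mask broadcasts the mask over the color axis, so the per-pixel copy needs no explicit loop.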
2. The video processing method according to claim 1, wherein performing image processing on the portion of the target object that is not occluded by the occlusion region to obtain a processed video image comprises:
determining a first target region in the video image, wherein the first target region is a region containing the target object;
determining the occlusion region within the first target region;
performing image processing on the portion of the target object in the first target region that is not occluded by the occlusion region, to obtain a second target region;
and obtaining the processed video image based on the second target region.
3. The video processing method according to claim 2, wherein performing image processing on the portion of the target object in the first target region that is not occluded by the occlusion region to obtain a second target region comprises:
performing image processing on both the portion of the target object in the first target region that is not occluded by the occlusion region and the occlusion region itself, to obtain a candidate second target region;
and superimposing the unprocessed occlusion region on the processed occlusion region in the candidate second target region, to obtain the second target region.
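Claim 3 can be read as: process the whole first target region, occluder included, then paste the original occluder pixels back over their processed counterparts. A minimal sketch under that reading (helper names are ours, not the patent's):

```python
import numpy as np

def process_then_restore(region, occlusion_mask, effect):
    # Step 1: process the entire first target region, occluder included,
    # producing the candidate second target region.
    candidate = effect(region).copy()
    # Step 2: superimpose the unprocessed occluder over the processed
    # occluder, yielding the second target region.
    candidate[occlusion_mask] = region[occlusion_mask]
    return candidate

region = np.full((2, 2, 3), 10, dtype=np.uint8)
mask = np.array([[True, False], [False, False]])
second = process_then_restore(region, mask, lambda im: im * 2)
```

The net visual effect matches claim 1: only the unoccluded part of the region appears processed, even though the effect was computed over the whole region.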
4. The video processing method according to claim 2, wherein before performing image processing on the portion of the target object in the first target region that is not occluded by the occlusion region, the video processing method further comprises:
removing the occlusion region from the first target region according to the position of the occlusion region within the first target region, to obtain an occlusion-removed first target region.
5. The video processing method according to claim 4, wherein removing the occlusion region from the first target region according to the position of the occlusion region within the first target region, to obtain the occlusion-removed first target region, comprises:
setting all pixel values of the occlusion region in the first target region to a preset pixel value according to the position of the occlusion region within the first target region, to obtain the occlusion-removed first target region.
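In code, the "removal" of claim 5 is simply an in-place overwrite of the occluded pixels with a preset value, as a hedged sketch (function and parameter names are illustrative):

```python
import numpy as np

def remove_occlusion(first_region, occlusion_mask, preset_value=0):
    # "Removing" the occluder means overwriting its pixels with a preset
    # value, so downstream processing treats that area as blank.
    cleared = first_region.copy()
    cleared[occlusion_mask] = preset_value
    return cleared

region = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[False, True], [False, False]])
cleared = remove_occlusion(region, mask, preset_value=0)
```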
6. The video processing method according to claim 2, wherein performing image processing on the portion of the target object in the first target region that is not occluded by the occlusion region to obtain a second target region comprises:
in response to the image processing not satisfying a preset completion condition, performing image processing on the portion of the target object in the first target region that is not occluded by the occlusion region, to obtain the second target region;
and in response to the image processing satisfying the preset completion condition, performing pixel completion on the portion of the target object in the first target region that is occluded by the occlusion region, to obtain a completed target object, and performing the image processing on the completed target object to obtain the second target region, wherein the completion condition comprises: the image processing belongs to a preset image processing type and/or comprises removing the occlusion region from the video image.
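The branching of claim 6 can be sketched as follows. All names are illustrative, and the mean-fill "inpainting" below is a deliberately crude stand-in for whatever pixel-completion model an actual implementation would use:

```python
import numpy as np

def process_first_region(region, occlusion_mask, effect, needs_completion, inpaint):
    """needs_completion: whether the preset completion condition is met
    (e.g. the effect would expose the area behind the occluder).
    inpaint: pixel-completion routine filling the occluded pixels.
    """
    if not needs_completion:
        # Condition not met: process only the unoccluded portion.
        out = region.copy()
        processed = effect(region)
        out[~occlusion_mask] = processed[~occlusion_mask]
        return out
    # Condition met: complete the occluded part of the target object
    # first, then process the completed object as a whole.
    completed = inpaint(region, occlusion_mask)
    return effect(completed)

def mean_inpaint(region, mask):
    # Crude stand-in for a learned completion model: fill occluded
    # pixels with the mean color of the visible ones.
    filled = region.copy()
    filled[mask] = region[~mask].mean(axis=0).astype(region.dtype)
    return filled

region = np.full((2, 2, 3), 90, dtype=np.uint8)
region[0, 0] = 0  # occluded pixel
mask = np.array([[True, False], [False, False]])
completed_out = process_first_region(region, mask, lambda im: im, True, mean_inpaint)
plain_out = process_first_region(region, mask, lambda im: im * 2, False, mean_inpaint)
```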
7. The video processing method according to claim 6, wherein obtaining the processed video image based on the second target region comprises:
superimposing the occlusion region on the second target region according to the position of the occlusion region in the video image, to obtain the processed video image; or,
displaying the second target region in its entirety in the processed video image, without displaying the occlusion region in the processed video image.
8. The video processing method according to claim 2, wherein determining a first target region in the video image comprises:
determining a visible contour of the target object in the video image;
repairing the visible contour based on preset contour features of the target object, to obtain a repaired contour of the target object;
and determining the region enclosed by the repaired contour as the first target region.
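A real implementation of claim 8 would repair the visible outline with preset contour features (e.g. a shape prior or a learned model). As a toy placeholder only, the sketch below extends the visible silhouette to its bounding box and takes the filled result as the first target region:

```python
import numpy as np

def repair_contour(visible_mask):
    """Toy stand-in for contour repair: extend the visible silhouette of
    the target object to its bounding box (a placeholder for the
    preset-contour-feature repair the patent describes) and use the
    enclosed region as the first target region."""
    ys, xs = np.nonzero(visible_mask)
    repaired = np.zeros_like(visible_mask)
    repaired[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = True
    return repaired

visible = np.zeros((4, 4), dtype=bool)
visible[0, 0] = True
visible[2, 2] = True  # the rest of the object is hidden by an occluder
first_target_region = repair_contour(visible)
```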
9. The video processing method according to claim 2, wherein the image processing comprises performing at least one of the following globally or locally: deformation, pixel filtering, color adjustment, and image superposition, and wherein the video image is a video image acquired in real time.
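Two of the four processing types named in claim 9 are easy to sketch concretely; the implementations below (a gain-based color adjustment and a naive mean filter as pixel filtering) are illustrative examples, not the patent's methods:

```python
import numpy as np

def color_adjust(im, gain=1.5):
    # Color adjustment: scale intensities and clamp to the uint8 range.
    return np.clip(im.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def box_filter(im, k=3):
    # Pixel filtering: naive k x k mean filter with edge replication.
    pad = k // 2
    padded = np.pad(im, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    acc = np.zeros(im.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            acc += padded[dy:dy + im.shape[0], dx:dx + im.shape[1]]
    return (acc / (k * k)).astype(np.uint8)

flat = np.full((5, 5, 3), 100, dtype=np.uint8)
```

Either operation can be restricted to the unoccluded portion of the target object by masking its output as in claim 1, which is what "globally or locally" allows.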
10. A video processing apparatus, the video processing apparatus comprising:
an acquisition unit configured to acquire a video image, wherein the video image comprises a target object to be processed and an occlusion region occluding the target object;
and a processing unit configured to perform image processing on the portion of the target object that is not occluded by the occlusion region, to obtain a processed video image.
11. An electronic device, the electronic device comprising:
a processor;
a memory for storing instructions executable by the processor,
wherein the instructions, when executed by the processor, cause the processor to perform the video processing method according to any one of claims 1 to 9.
12. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the video processing method according to any one of claims 1 to 9.
CN202310093799.2A 2023-01-30 2023-01-30 Video processing method, device, electronic equipment and storage medium Pending CN116132732A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310093799.2A CN116132732A (en) 2023-01-30 2023-01-30 Video processing method, device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116132732A true CN116132732A (en) 2023-05-16

Family

ID=86311428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310093799.2A Pending CN116132732A (en) 2023-01-30 2023-01-30 Video processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116132732A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503289A (en) * 2023-06-20 2023-07-28 北京天工异彩影视科技有限公司 Visual special effect application processing method and system
CN116503289B (en) * 2023-06-20 2024-01-09 北京天工异彩影视科技有限公司 Visual special effect application processing method and system

Similar Documents

Publication Publication Date Title
Guo et al. Progressive image inpainting with full-resolution residual network
Fu et al. A fusion-based enhancing method for weakly illuminated images
EP3520081B1 (en) Techniques for incorporating a text-containing image into a digital image
TWI539813B (en) Image composition apparatus and method
Patwardhan et al. Video inpainting under constrained camera motion
US20180300937A1 (en) System and a method of restoring an occluded background region
US9699380B2 (en) Fusion of panoramic background images using color and depth data
WO2020108610A1 (en) Image processing method, apparatus, computer readable medium and electronic device
CN109462747B (en) DIBR system cavity filling method based on generation countermeasure network
US10769849B2 (en) Use of temporal motion vectors for 3D reconstruction
CN107172354B (en) Video processing method and device, electronic equipment and storage medium
Choi et al. Space-time hole filling with random walks in view extrapolation for 3D video
CN107622504B (en) Method and device for processing pictures
JP2008276410A (en) Image processor and method
Chen et al. On-line visualization of underground structures using context features
CN116132732A (en) Video processing method, device, electronic equipment and storage medium
Abdulla et al. An improved image quality algorithm for exemplar-based image inpainting
Wang et al. Stereoscopic image retargeting based on 3D saliency detection
CA3173542A1 (en) Techniques for re-aging faces in images and video frames
Dwivedi et al. Single image dehazing using extended local dark channel prior
WO2022021287A1 (en) Data enhancement method and training method for instance segmentation model, and related apparatus
Chamaret et al. Video retargeting for stereoscopic content under 3D viewing constraints
WO2022016326A1 (en) Image processing method, electronic device, and computer-readable medium
Gsaxner et al. DeepDR: Deep Structure-Aware RGB-D Inpainting for Diminished Reality
Wang et al. Near-infrared fusion for deep lightness enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination