CN107346534B - Method and system for detecting and eliminating shadow of video object in mediated reality - Google Patents


Info

Publication number
CN107346534B
CN107346534B (application CN201710571523.5A)
Authority
CN
China
Prior art keywords
shadow
processing computer
color
video
model
Prior art date
Legal status
Expired - Fee Related
Application number
CN201710571523.5A
Other languages
Chinese (zh)
Other versions
CN107346534A (en
Inventor
钟秋发
锡泊
黄煦
高晓光
李晓阳
Current Assignee
Hebei Zhongke Hengyun Software Technology Co ltd
Original Assignee
Hebei Zhongke Hengyun Software Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hebei Zhongke Hengyun Software Technology Co ltd filed Critical Hebei Zhongke Hengyun Software Technology Co ltd
Priority to CN201710571523.5A priority Critical patent/CN107346534B/en
Publication of CN107346534A publication Critical patent/CN107346534A/en
Application granted granted Critical
Publication of CN107346534B publication Critical patent/CN107346534B/en


Classifications

    • G06T5/77
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and a system for detecting and eliminating shadows of video objects in mediated reality. The method comprises the following steps: shooting a video image with a depth camera, building an equal-scale model of the scene from the video image, and marking points with a single high-contrast color; outputting a shadow map on the equal-scale model with a visual image processing computer and sending the shadow map to a video image processing computer; the video image processing computer detecting shadows over the entire captured video frame with a shadow-map-based detection method; and the video image processing computer eliminating the detected shadows with a color-consistency-based method. The method offers high real-time performance and is suitable for shadow detection and elimination before the virtual-real fusion step of mediated reality.

Description

Method and system for detecting and eliminating shadow of video object in mediated reality
Technical Field
The invention relates to the technical field of image processing, in particular to a method and a system for detecting and eliminating shadow of a video object in mediated reality.
Background
Virtual-real fusion scene generation based on video material is becoming a development trend and research hotspot in virtual reality and augmented reality. To ensure that virtual and real shadows can later be processed consistently, the color of video foreground objects must be restored; an important step in doing so is detecting and eliminating object shadows in the video.
Shadow detection algorithms have been widely studied. Texture-based methods use brightness to identify candidate shadow areas and then segment shadows with texture features; physics-based methods first model shadow pixels and then detect shadows in preselected regions with the model; geometry-based methods predict the size, shape and direction of a shadow from the light source, the object shape and the ground; color-space-based methods choose a color space in which, compared with RGB, the difference between brightness and chromaticity is more pronounced.
These methods are not highly real-time, and their accuracy drops significantly when the camera moves irregularly, because they were designed only for stationary cameras and for situations without strict real-time requirements.
Disclosure of Invention
The object of the present invention is to solve at least one of the technical drawbacks mentioned.
Therefore, the invention aims to provide a method and a system for detecting and eliminating shadow of a video object in mediated reality.
In order to achieve the above object, an embodiment of an aspect of the present invention provides a method for detecting and eliminating shadow of a video object in mediated reality, including the following steps:
step S1, shooting a video image with a depth camera, building an equal-scale model from the video image, and marking points with a single high-contrast color;
step S2, outputting a shadow map on the equal-scale model by using a visual image processing computer, and sending the shadow map to a video image processing computer;
step S3, the video image processing computer detecting the shadow of the entire captured video frame on the shadow map by using a shadow detection method based on the shadow map;
in step S4, the video image processing computer eliminates the shadow detected in step S3 by a color consistency-based method.
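The four steps above can be sketched as a frame-processing skeleton. Every function body below is a hypothetical placeholder (thresholded darkness standing in for the rendered shadow map, a simple brightness boost standing in for the color-consistency stage) rather than the patent's actual algorithms:

```python
import numpy as np

def render_shadow_map(frame):
    # S2 stand-in: flag dark pixels as the "shadow map" (placeholder logic;
    # the patent renders the map from the equal-scale scene model).
    return frame.mean(axis=2) < 80

def detect_shadow(frame, shadow_map):
    # S3 stand-in: a real detector would refine the map with the shadow
    # Gaussian model; here the map is passed through unchanged.
    return shadow_map

def remove_shadow(frame, mask):
    # S4 stand-in: brighten masked pixels instead of true color transfer.
    out = frame.astype(np.float32)
    out[mask] *= 1.6
    return np.clip(out, 0, 255).astype(np.uint8)

def process_frame(frame):
    shadow_map = render_shadow_map(frame)    # step S2
    mask = detect_shadow(frame, shadow_map)  # step S3
    return remove_shadow(frame, mask)        # step S4

frame = np.full((4, 4, 3), 200, np.uint8)
frame[:2, :2] = 60   # simulated shadow patch
result = process_frame(frame)
```

The placeholder names (`render_shadow_map`, `detect_shadow`, `remove_shadow`) are illustrative only; the later sections fill in what each stage actually does.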
Further, in the step S3, the shadow detection method based on the shadow map includes the following steps:
step S31, obtaining model parameters through training shadow samples, and determining a shadow model when all pixels are processed;
step S32, calculating the matching degree between the current pixel and the shadow model on each channel, weighting the per-channel matching values, and comparing the weighted sum with a preset threshold to judge whether the pixel is a shadow;
step S33, judging whether the current pixel belongs to the object or the shadow according to the matching degree, if so, executing step S34, otherwise, returning to judge whether the pixels are all processed;
step S34, if it is a shadow, the shadow model is updated.
Further, in step S31, in the HSV color space, the H, S, V channel color values of each pixel point in the shadow region are counted, histogram comparison analysis is performed with the corresponding non-shadow region, independent gaussian models are respectively established on the three channels based on the statistical information, a shadow gaussian model in the HSV color space is established, model parameters are obtained by training shadow samples, and when all pixels are processed, the shadow model is determined.
Further, in step S4, the color-consistency-based method comprises:
step S41, calculating the optimal matching distance between clusters of the shadow region Ds and the non-shadow region Dt using an EMD algorithm, and performing linear optimization on the two cluster sets in combination with vector quantization;
step S42, converting the color of each cluster in the shadow region Ds into a vector, and calculating a weighted average of the distances between the vector and the corresponding cluster color in the non-shadow region Dt;
step S43, adjusting the color of each pixel in the shadow area according to the above two steps, so that the color of the shadow portion is fused to the whole image, performing color consistency processing at the shadow boundary, and recoloring the boundary and the shadow area surrounded by the boundary to eliminate the shadow.
An embodiment of another aspect of the present invention provides a system for detecting and eliminating shadows of video objects in mediated reality, comprising: a virtual reality head-mounted device, a visual processing computer, a video processing computer, a 3D depth camera, a simulator operating instrument and a display screen, wherein the virtual reality head-mounted device is connected to the visual processing computer, the visual processing computer is connected to the video processing computer, and the 3D depth camera is connected to the video processing computer; the simulator operating instrument and the display screen are connected to the visual processing computer. The 3D depth camera is used to collect video images, build an equal-scale model from the video images, and mark points with a single high-contrast color; the visual processing computer is used to output the shadow map on the equal-scale model and send it to the video processing computer; the video processing computer is used to detect shadows of the entire captured video frame on the shadow map with a shadow-map-based detection method and to eliminate the detected shadows with a color-consistency-based method.
Further, the video image processing computer adopts a shadow detection method based on a shadow map, and comprises the following steps:
the video image processing computer obtains model parameters by training shadow samples, and determines a shadow model when all pixels are processed; calculating the matching degree of the current pixel and the shadow model on each channel, comprising the following steps: calculating the matching degree of the current pixel and the shadow model on each channel, weighting the matching value of each channel, and comparing the weighted value with a preset threshold value to judge whether the pixel is a shadow; and judging whether the current pixel belongs to the object or the shadow according to the matching degree, and updating the shadow model if the current pixel belongs to the object or the shadow.
Further, the video image processing computer performs histogram comparison analysis on H, S, V channel color values of all pixel points in a shadow region and corresponding non-shadow regions by counting under an HSV color space, establishes independent Gaussian models on three channels based on statistical information, establishes a shadow Gaussian model under the HSV color space, obtains model parameters by training shadow samples, and determines the shadow model after all pixels are processed.
Further, the video image processing computer adopts the color-consistency-based method, comprising: calculating the optimal matching distance between clusters of the shadow region Ds and the non-shadow region Dt using an EMD algorithm, and performing linear optimization on the two cluster sets in combination with vector quantization; converting the color of each cluster in the shadow region Ds into a vector and calculating the weighted average of its distances to the corresponding cluster colors in the non-shadow region Dt; and adjusting the color of each pixel in the shadow area according to the above two steps so that the shadow blends with the whole image, performing color-consistency processing at the shadow boundary, and recoloring the boundary and the shadow area it encloses to eliminate the shadow.
According to the method and system for detecting and eliminating shadows of video objects in mediated reality, a shadow-map-based detection method detects shadows over the entire captured video frame: the color and brightness information of the shadow map is introduced into a traditional Gaussian mixture model, a shadow Gaussian model is established from statistical characteristics, and object edges are distinguished from shadow edges to determine the shadow region; the video shadows are then eliminated in real time based on color consistency. Shadow elimination is accurate and real-time across different scenes, and the shadow detection rate and shadow discrimination are significantly improved. The method offers high real-time performance and is suitable for shadow detection and elimination before the virtual-real fusion step of mediated reality.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a method for mediating video object shadow detection and elimination in reality according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a shadow map-based shadow detection method according to an embodiment of the invention;
fig. 3 is a block diagram of a system for mediating video object shadow detection and elimination in reality according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
As shown in fig. 1, the method for detecting and eliminating shadow of video object in mediated reality according to the embodiment of the present invention includes the following steps:
and step S1, shooting a video image by using the depth camera, making an equal-scale model according to the video image, and marking points by using a single color with large contrast.
In this step, a three-dimensional model of the entire scene of the video image taken by the depth camera is created in advance. The scene can be an indoor scene such as a classroom and a laboratory, and parameters such as light source color, intensity and direction can be basically controlled.
And step S2, outputting the shadow map on the equal-scale model by using the visual image processing computer, and sending the shadow map to the video image processing computer.
In step S3, the video image processing computer detects the shadow of the entire captured video frame on the shadow map using a shadow map-based shadow detection method.
Specifically, as shown in fig. 2, for the shadow elimination problem, the shadow-map-based detection method introduces the color and brightness information of the shadow map into the traditional Gaussian mixture model, establishes a shadow Gaussian model from statistical characteristics, and distinguishes object edges from shadow edges.
And step S31, obtaining model parameters through training shadow samples, and determining the shadow model when all pixels are processed.
In an embodiment of the invention, in an HSV color space, H, S, V channel color values of all pixel points in a shadow area are counted, histogram comparison analysis is carried out on the color values and corresponding non-shadow areas, independent Gaussian models are respectively established on three channels based on statistical information, a shadow Gaussian model in the HSV color space is established, model parameters are obtained by training shadow samples, and after all pixels are processed, the shadow model is determined.
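A minimal sketch of step S31's per-channel model fitting, assuming labelled HSV shadow-pixel samples are already available. The synthetic sample distribution and the independent mean/variance estimator below are illustrative assumptions, not the patent's exact training procedure:

```python
import numpy as np

def fit_shadow_model(hsv_samples):
    """Fit one independent Gaussian per H, S, V channel.

    hsv_samples: (N, 3) array of H, S, V values from known shadow pixels.
    Returns per-channel (mean, std) describing the shadow Gaussian model.
    """
    mean = hsv_samples.mean(axis=0)
    std = hsv_samples.std(axis=0) + 1e-6   # guard against zero variance
    return mean, std

rng = np.random.default_rng(0)
# Synthetic training set: shadows here are dark (low V) and slightly
# desaturated -- hypothetical values chosen only for illustration.
samples = rng.normal(loc=[90.0, 60.0, 40.0], scale=[5.0, 8.0, 6.0],
                     size=(500, 3))
mean, std = fit_shadow_model(samples)
```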
Step S32, calculating the matching degree between the current pixel and the shadow model on each channel, weighting the per-channel matching values, and comparing the weighted sum with a preset threshold to judge whether the pixel is a shadow.
Step S33, judging whether the current pixel belongs to the object or the shadow according to the matching degree, if so, executing step S34, otherwise, returning to judge whether the pixels are all processed;
In step S34, if the pixel is judged to be a shadow, the shadow model is updated.
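The per-channel matching, weighted thresholding and model update of steps S32 to S34 can be sketched as follows. The Gaussian-likelihood match score, the channel weights, the threshold and the running-mean update rate are all illustrative assumptions, not values from the patent:

```python
import numpy as np

def match_degree(pixel, mean, std):
    # Per-channel match in [0, 1]: 1 at the model mean, decaying with
    # normalized distance (a Gaussian-likelihood-style score).
    return np.exp(-0.5 * ((pixel - mean) / std) ** 2)

def classify_and_update(pixel, mean, std, weights=(0.2, 0.3, 0.5),
                        threshold=0.6, lr=0.05):
    # S32: weight the per-channel matches and compare with a threshold.
    score = float(np.dot(weights, match_degree(pixel, mean, std)))
    is_shadow = score > threshold
    if is_shadow:
        # S34: update the shadow model (simple running mean here).
        mean = (1 - lr) * mean + lr * pixel
    return is_shadow, mean, score

mean = np.array([90.0, 60.0, 40.0])     # H, S, V shadow-model means
std = np.array([5.0, 8.0, 6.0])         # H, S, V shadow-model std devs
shadow_pix = np.array([91.0, 58.0, 42.0])
bright_pix = np.array([30.0, 10.0, 220.0])
s_res, new_mean, _ = classify_and_update(shadow_pix, mean, std)
b_res, _, _ = classify_and_update(bright_pix, mean, std)
```

A pixel close to the model on all three channels scores near 1 and is classified as shadow; a bright, saturated pixel scores near 0 and is left untouched.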
In step S4, the video image processing computer removes the shadow detected in step S3 by a color consistency-based method.
Notably, the mediated-reality light source is indoors, so its number, direction and intensity are easy to control. The task of shadow elimination is to make the visual perception of the shadow area consistent with that of the non-shadow area without changing the non-shadow information; this is a process of restoring the brightness and color of the shadow area. An illumination model, an illumination proportion coefficient and an elimination formula are determined from the original illumination information, the given image is color-matched against pre-stored data, and a three-dimensional joint histogram in the Lab color space is used.
In this step, a method based on color consistency is adopted, including:
step S41, calculating the optimal matching distance between the shadow region Ds and the non-shadow region Dt in the cluster by using an EMD algorithm, and performing linear optimization on the two clusters by combining vector quantization;
step S42, converting the color of each cluster in the shadow region Ds into a vector, and calculating the weighted average of the distance between the vector and the corresponding cluster color in the non-shadow region Dt;
Step S43, adjusting the color of each pixel in the shadow area according to the above two steps so that the shadow blends with the whole image, performing color-consistency processing at the shadow boundary, and recoloring the boundary and the shadow area it encloses to eliminate the shadow.
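Steps S41 to S43 can be sketched under heavy simplifications: raw per-channel pixel samples stand in for the color clusters, the 1-D earth mover's distance is computed via sorted samples, and a mean/variance transfer stands in for the vector-quantized linear optimization. None of these simplifications are taken from the patent:

```python
import numpy as np

def emd_1d(a, b):
    # For equal-size 1-D samples with uniform weights, the earth mover's
    # distance reduces to the mean absolute difference of sorted values.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def recolor(shadow, nonshadow):
    # Shift/scale each channel of the shadow region toward the non-shadow
    # statistics so the recolored area blends with the rest of the image
    # (a stand-in for the patent's cluster-wise linear optimization).
    mu_s, mu_t = shadow.mean(axis=0), nonshadow.mean(axis=0)
    sd_s = shadow.std(axis=0) + 1e-6
    sd_t = nonshadow.std(axis=0)
    return (shadow - mu_s) / sd_s * sd_t + mu_t

rng = np.random.default_rng(1)
Dt = rng.normal([120.0, 110.0, 100.0], 10.0, size=(256, 3))  # lit region
Ds = Dt * 0.4                                                # darkened copy
d_before = emd_1d(Ds[:, 0], Dt[:, 0])
Ds_fixed = recolor(Ds, Dt)
d_after = emd_1d(Ds_fixed[:, 0], Dt[:, 0])
```

After the transfer, the per-channel distance between the shadow and non-shadow color distributions collapses to near zero, which is the "color consistency" the recoloring step aims at.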
The color-consistency-based method eliminates shadows accurately and in real time across different scenes, and markedly improves the shadow detection rate and shadow discrimination.
As shown in fig. 3, the system for detecting and eliminating shadow of video object in mediated reality according to the embodiment of the present invention includes: virtual reality head mounted device 100, visual processing computer 300, video processing computer 200, 3D depth camera 600, simulator operating instrumentation and display screen 400.
In one embodiment of the present invention, the virtual reality head mounted device 100 is connected to the vision processing computer 300 through USB3.0 and HDMI interface, the 3D depth camera 600 is connected to the video processing computer 200 through USB3.0, the vision processing computer 300 and the video processing computer 200 are connected through LAN, and the 3D depth camera 600 is connected to the virtual reality head mounted device 100; the simulator operating instrument and the display screen 400 are connected to the view processing computer 300 through USB 3.0.
In one embodiment of the invention, the virtual reality head mounted device 100 may be an Oculus Rift virtual reality device. The 3D depth camera 600 is a ZED stereo camera or an Intel RealSense SR300, mounted on the Oculus Rift head mounted device 100.
In addition, the present invention uses Unity as the three-dimensional engine software.
The 3D depth camera 600 is used to collect video images, create an equal-scale model from the video images, and mark points with a single color with a large contrast.
A three-dimensional model of the entire scene captured by the depth camera is made in advance. The scene may be an indoor scene such as a classroom or laboratory, where parameters such as light-source color, intensity and direction can largely be controlled.
In one embodiment of the present invention, the video images collected by the 3D depth camera 600 include: color video, depth video, and infrared video.
The visual processing computer 300 is used to output the shadow map on the equal-scale model and send it to the video processing computer 200.
The video image processing computer 200 is configured to detect shadows of an entire captured video frame on a shadow map using a shadow map-based shadow detection method, and to eliminate the detected shadows using a color consistency-based method.
Specifically, for the shadow elimination problem, the video image processing computer 200 uses the shadow-map-based detection method: it introduces the color and brightness information of the shadow map into the conventional Gaussian mixture model, establishes a shadow Gaussian model from statistical characteristics, and distinguishes object edges from shadow edges.
First, the video image processing computer 200 obtains model parameters by training shadow samples, and determines a shadow model when all pixels are processed.
In an embodiment of the invention, in an HSV color space, H, S, V channel color values of all pixel points in a shadow area are counted, histogram comparison analysis is carried out on the color values and corresponding non-shadow areas, independent Gaussian models are respectively established on three channels based on statistical information, a shadow Gaussian model in the HSV color space is established, model parameters are obtained by training shadow samples, and after all pixels are processed, the shadow model is determined.
The video image processing computer 200 calculates the matching degree of the current pixel and the shadow model on each channel, including: and calculating the matching degree of the current pixel and the shadow model on each channel, weighting the matching value of each channel, and comparing the weighted value with a preset threshold value to judge whether the pixel is a shadow or not.
The video image processing computer 200 judges whether the current pixel belongs to the object itself or the shadow according to the matching degree, and updates the shadow model if the current pixel belongs to the shadow.
The mediated-reality light source is indoors, so its number, direction and intensity are easy to control. The task of shadow elimination is to make the visual perception of the shadow area consistent with that of the non-shadow area without changing the non-shadow information; this is a process of restoring the brightness and color of the shadow area. An illumination model, an illumination proportion coefficient and an elimination formula are determined from the original illumination information, the given image is color-matched against pre-stored data, and a three-dimensional joint histogram in the Lab color space is used.
The video image processing computer 200 employs a color consistency-based approach to eliminate detected shadows, including:
the video image processing computer 200 calculates the best matching distance between the clusters of the shadow region Ds and the non-shadow region Dt by using the EMD algorithm, performs linear optimization on the two clusters by combining vector quantization, converts the color of each cluster in the shadow region Ds into a vector, and calculates the weighted average of the distances between the color of each cluster in the shadow region Ds and the color of the corresponding cluster in the non-shadow region Dt. The video image processing computer adjusts the color of each pixel of the shadow area according to the two steps so as to enable the color of the shadow part to be fused with the whole image, color consistency processing is carried out at the shadow boundary, and the boundary and the shadow area enclosed by the boundary are recoloring so as to eliminate the shadow.
The method and the system for detecting and eliminating shadows of video objects in mediated reality achieve an average processing speed of about 60 frames per second, which basically meets the frame rate required before frame interpolation of mediated-reality video.
A shadow detection rate η and a shadow discrimination ξ are defined according to a quantitative evaluation method for shadow-elimination algorithms. The algorithm was tested on real video; the experimental results show that it eliminates shadows well in different scenes, and its shadow detection rate and shadow discrimination improve markedly over existing algorithms. In addition, the algorithm parameters are obtained by sample training, which reduces the complexity of parameter setting in the shadow-elimination process.
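A sketch of how η and ξ are commonly computed: the formulas follow the convention of Prati et al.'s shadow-detection benchmark, which is an assumption here since the patent does not spell them out.

```python
def shadow_metrics(pred_shadow, gt_shadow, gt_foreground):
    """All arguments are flat 0/1 lists over the same pixels.

    eta (detection rate): fraction of true shadow pixels found.
    xi (discrimination):  fraction of foreground-object pixels NOT
                          mistaken for shadow.
    """
    tp_s = sum(p and g for p, g in zip(pred_shadow, gt_shadow))
    fn_s = sum((not p) and g for p, g in zip(pred_shadow, gt_shadow))
    tp_f = sum((not p) and f for p, f in zip(pred_shadow, gt_foreground))
    fn_f = sum(p and f for p, f in zip(pred_shadow, gt_foreground))
    eta = tp_s / (tp_s + fn_s)
    xi = tp_f / (tp_f + fn_f)
    return eta, xi

# Toy 8-pixel example: one missed shadow, one object pixel misread as shadow.
gt_shadow     = [1, 1, 1, 1, 0, 0, 0, 0]
gt_foreground = [0, 0, 0, 0, 1, 1, 1, 1]
pred_shadow   = [1, 1, 1, 0, 1, 0, 0, 0]
eta, xi = shadow_metrics(pred_shadow, gt_shadow, gt_foreground)
```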
The video was tested in a Unity simulator teaching and research room, where Sp denotes a statistical parametric method, Snp a statistical non-parametric method, and Dnm1 and Dnm2 two deterministic non-model-based methods; the comparison results are shown in Table 1.
(Table 1 appears as an image in the original publication.)
TABLE 1
According to the method and system for detecting and eliminating shadows of video objects in mediated reality, a shadow-map-based detection method detects shadows over the entire captured video frame: the color and brightness information of the shadow map is introduced into a traditional Gaussian mixture model, a shadow Gaussian model is established from statistical characteristics, and object edges are distinguished from shadow edges to determine the shadow region; the video shadows are then eliminated in real time based on color consistency. Shadow elimination is accurate and real-time across different scenes, and the shadow detection rate and shadow discrimination are significantly improved. The method offers high real-time performance and is suitable for shadow detection and elimination before the virtual-real fusion step of mediated reality.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention. The scope of the invention is defined by the appended claims and their full range of equivalents.

Claims (6)

1. A method for detecting and eliminating shadow of video object in mediated reality is characterized by comprising the following steps:
step S1, shooting a video image by using a depth camera, making an equal-proportion model according to the video image, and marking points by using a single color with large contrast;
step S2, outputting a shadow map on the equal-scale model by using a visual image processing computer, and sending the shadow map to a video image processing computer;
step S3, the video image processing computer detecting shadows over the entire captured video frame with a shadow-map-based detection method, wherein, in an HSV color space, the H, S, V channel color values of each pixel point in the shadow region are counted and compared with the corresponding non-shadow region by histogram analysis, independent Gaussian models are established on the three channels from the statistical information to form a shadow Gaussian model in the HSV color space, model parameters are obtained by training shadow samples, and the shadow model is determined once all pixels are processed;
in step S4, the video image processing computer eliminates the shadow detected in step S3 by a color consistency-based method.
2. The method for detecting and eliminating shadow of video object in mediated reality according to claim 1, wherein in the step S3, the shadow detection method based on shadow map comprises the following steps:
step S31, obtaining model parameters through training shadow samples, and determining a shadow model when all pixels are processed;
step S32, calculating the matching degree between the current pixel and the shadow model on each channel, weighting the per-channel matching values, and comparing the weighted sum with a preset threshold to judge whether the pixel is a shadow;
step S33, judging whether the current pixel belongs to the object or the shadow according to the matching degree, if so, executing step S34, otherwise, returning to judge whether the pixels are all processed;
step S34, if it is a shadow, the shadow model is updated.
3. The method for detecting and eliminating shadow of video object in mediated reality according to claim 1, wherein in the step S4, the method based on color consistency is adopted, comprising:
step S41, calculating the optimal matching distance between clusters of the shadow region Ds and the non-shadow region Dt using an EMD algorithm, and performing linear optimization on the two cluster sets in combination with vector quantization;
step S42, converting the color of each cluster in the shadow region Ds into a vector, and calculating a weighted average of the distances between the vector and the corresponding cluster color in the non-shadow region Dt;
step S43, adjusting the color of each pixel in the shadow area according to the above two steps, so that the color of the shadow portion is fused to the whole image, performing color consistency processing at the shadow boundary, and recoloring the boundary and the shadow area surrounded by the boundary to eliminate the shadow.
4. A system for detecting and eliminating shadows of video objects in mediated reality, comprising: a virtual reality head-mounted device, a view processing computer, a video processing computer, a 3D depth camera, a simulator operation instrument, and a display screen, wherein the virtual reality head-mounted device is connected with the view processing computer, the view processing computer is connected with the video processing computer, the 3D depth camera is connected with the video processing computer, and the simulator operation instrument and the display screen are connected with the view processing computer;
the 3D depth camera is used for collecting video images, from which an equal-scale model is made, with marker points painted in a single high-contrast color;
the view processing computer is used for outputting the shadow map on the equal-scale model and sending the shadow map to the video processing computer;
the video processing computer is used for detecting the shadow of the whole captured video frame by a shadow-map-based shadow detection method and eliminating the detected shadow by a color-consistency-based method, wherein the video processing computer collects statistics of the H, S, and V channel color values of all pixels in the shadow region in HSV color space, performs histogram comparison analysis against the corresponding non-shadow region, establishes an independent Gaussian model on each of the three channels from the statistical information, thereby building a shadow Gaussian model in HSV color space, obtains the model parameters by training on shadow samples, and determines the shadow model once all pixels have been processed.
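The statistical step claim 4 describes — gathering the H, S, V values of the shadow-region pixels and fitting an independent Gaussian per channel — can be sketched as follows; the function name and interface are illustrative, not from the patent.

```python
import numpy as np

def fit_hsv_shadow_gaussians(hsv_image, shadow_mask):
    """
    hsv_image:   (H, W, 3) array in HSV color space
    shadow_mask: (H, W) boolean array marking shadow pixels
    Returns per-channel (mean, std) for H, S, V -- the independent
    Gaussian models claim 4 builds from the shadow-region statistics.
    """
    px = hsv_image[shadow_mask].astype(float)  # (N, 3) shadow-region pixels
    params = {}
    for i, name in enumerate("HSV"):
        ch = px[:, i]
        params[name] = (ch.mean(), ch.std() + 1e-6)  # epsilon avoids zero variance
    return params
```

The resulting per-channel parameters are what the matching step of claim 5 would evaluate each incoming pixel against.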
5. The system for video object shadow detection and elimination in mediated reality of claim 4, wherein the video processing computer employs a shadow map-based shadow detection method comprising:
the video processing computer obtains model parameters by training the shadow samples, and determines a shadow model when all pixels are processed; calculating the matching degree of the current pixel and the shadow model on each channel, comprising the following steps: calculating the matching degree of the current pixel and the shadow model on each channel, weighting the matching value of each channel, and comparing the weighted value with a preset threshold value to judge whether the pixel is a shadow; and judging whether the current pixel belongs to the object or the shadow according to the matching degree, and updating the shadow model if the current pixel belongs to the object or the shadow.
6. The system for video object shadow detection and elimination in mediated reality of claim 4, wherein the color-consistency-based method employed by the video processing computer comprises: calculating the optimal matching distance between clusters of the shadow region Ds and the non-shadow region Dt by using the EMD algorithm, and performing linear optimization on the two sets of clusters in combination with vector quantization; converting the color of each cluster in the shadow region Ds into a vector, and calculating a weighted average of the distances between that vector and the corresponding cluster colors in the non-shadow region Dt; and adjusting the color of each pixel in the shadow region according to the above two steps so that the color of the shadow portion blends with the whole image, performing color-consistency processing at the shadow boundary, and recoloring the boundary and the shadow region it encloses to eliminate the shadow.
CN201710571523.5A 2017-07-13 2017-07-13 Method and system for detecting and eliminating shadow of video object in mediated reality Expired - Fee Related CN107346534B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710571523.5A CN107346534B (en) 2017-07-13 2017-07-13 Method and system for detecting and eliminating shadow of video object in mediated reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710571523.5A CN107346534B (en) 2017-07-13 2017-07-13 Method and system for detecting and eliminating shadow of video object in mediated reality

Publications (2)

Publication Number Publication Date
CN107346534A CN107346534A (en) 2017-11-14
CN107346534B true CN107346534B (en) 2020-10-30

Family

ID=60256912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710571523.5A Expired - Fee Related CN107346534B (en) 2017-07-13 2017-07-13 Method and system for detecting and eliminating shadow of video object in mediated reality

Country Status (1)

Country Link
CN (1) CN107346534B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111526291B (en) * 2020-04-29 2022-07-05 济南博观智能科技有限公司 Method, device and equipment for determining monitoring direction of camera and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102568005A (en) * 2011-12-28 2012-07-11 江苏大学 Moving object detection method based on Gaussian mixture model
CN103208126A (en) * 2013-04-17 2013-07-17 同济大学 Method for monitoring moving object in natural environment
CN105205834A (en) * 2015-07-09 2015-12-30 湖南工业大学 Target detection and extraction method based on Gaussian mixture and shade detection model
CN105844714A (en) * 2016-04-12 2016-08-10 广州凡拓数字创意科技股份有限公司 Augmented reality based scenario display method and system
CN106204751A (en) * 2016-07-13 2016-12-07 广州大西洲科技有限公司 The real-time integration method of real-world object and virtual scene and integration system
CN106408515A (en) * 2016-08-31 2017-02-15 郑州捷安高科股份有限公司 Augmented reality-based vision synthesis system
CN104077776B (en) * 2014-06-27 2017-03-01 深圳市赛为智能股份有限公司 A kind of visual background extracting method based on color space adaptive updates
CN106843456A (en) * 2016-08-16 2017-06-13 深圳超多维光电子有限公司 A kind of display methods, device and virtual reality device followed the trail of based on attitude

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL196161A (en) * 2008-12-24 2015-03-31 Rafael Advanced Defense Sys Removal of shadows from images in a video signal
CN102332157B (en) * 2011-06-15 2013-04-03 湖南领创智能科技有限公司 Method for eliminating shadow
CN106558103A (en) * 2015-09-24 2017-04-05 鸿富锦精密工业(深圳)有限公司 Augmented reality image processing system and augmented reality image processing method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Moving object detection with an HSV adaptive Gaussian mixture model; Lin Qing, Xu Zhu, Wang Shitong, Zhan Yongzhao; Computer Science (《计算机科学》); 2010-10-15; Vol. 37, No. 10; full text *

Also Published As

Publication number Publication date
CN107346534A (en) 2017-11-14

Similar Documents

Publication Publication Date Title
US9530192B2 (en) Method for determining stereo quality score and automatically improving the quality of stereo images
EP2915333B1 (en) Depth map generation from a monoscopic image based on combined depth cues
US8644596B1 (en) Conversion of monoscopic visual content using image-depth database
US8687887B2 (en) Image processing method, image processing apparatus, and image processing program
US20180357819A1 (en) Method for generating a set of annotated images
KR20090084563A (en) Method and apparatus for generating the depth map of video image
CN107909081B (en) Method for quickly acquiring and quickly calibrating image data set in deep learning
WO2015180527A1 (en) Image saliency detection method
TWI489395B (en) Apparatus and method for foreground detection
CN102609950B (en) Two-dimensional video depth map generation process
WO2014187223A1 (en) Method and apparatus for identifying facial features
CN110189294B (en) RGB-D image significance detection method based on depth reliability analysis
JP2017534046A (en) Building height calculation method, apparatus and storage medium
CN108257165B (en) Image stereo matching method and binocular vision equipment
CN108470178B (en) Depth map significance detection method combined with depth credibility evaluation factor
CN104202547A (en) Method for extracting target object in projection picture, projection interaction method and system thereof
CN102340620B (en) Mahalanobis-distance-based video image background detection method
CN102333234B (en) Binocular stereo video state information monitoring method and device
CN104243970A (en) 3D drawn image objective quality evaluation method based on stereoscopic vision attention mechanism and structural similarity
CN107346534B (en) Method and system for detecting and eliminating shadow of video object in mediated reality
CN102510437B (en) Method for detecting background of video image based on distribution of red, green and blue (RGB) components
US20190172212A1 (en) Multi-modal data fusion for scene segmentation
CN111435429A (en) Gesture recognition method and system based on binocular stereo data dynamic cognition
CN109167988B (en) Stereo image visual comfort evaluation method based on D + W model and contrast
CN113221603A (en) Method and device for detecting shielding of monitoring equipment by foreign matters

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201030
