WO2012175703A1 - Method and system for reliable detection of a reflective object by illuminating the scene with display light - Google Patents

Method and system for reliable detection of a reflective object by illuminating the scene with display light

Info

Publication number
WO2012175703A1
WO2012175703A1 (PCT/EP2012/062131)
Authority
WO
WIPO (PCT)
Prior art keywords
light
display
images
image
objects
Prior art date
Application number
PCT/EP2012/062131
Other languages
English (en)
Inventor
Zoran ZIVKOVIC
Hendrikus Willem Groot Hulze
Original Assignee
Trident Microsystems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Trident Microsystems, Inc. filed Critical Trident Microsystems, Inc.
Priority to KR1020147001185A (published as KR20140057522A)
Priority to JP2014516374A (published as JP2014520469A)
Publication of WO2012175703A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means

Definitions

  • the present invention applies to the fields of user interfaces, image sensors, video processing, image analysis, display technology and display backlight control.
  • Buttons on a remote control are usually used to send commands to a TV or a set-top box to adjust video and audio, change channels, etc. More natural and richer user control can be achieved using other sensors (e.g. a camera) to detect the users and the pose of certain objects used for device control (e.g. detecting whether the users are wearing glasses for viewing stereoscopic 3D content and then automatically switching between 2D and 3D mode on displays that can show stereoscopic 3D content).
  • Robust and reliable user/object detection using cameras is often difficult. The major reason is the complex algorithms required to detect the objects of interest in highly variable environments such as a typical living room. Furthermore, the highly variable light conditions that can be expected in the viewing environment make detection even more difficult.
  • the display is a light source.
  • the display light (and more specifically, the display backlight) and the information about the amount of light coming out of the display is used to increase the performance of a camera based object detection system by separating display generated light from other unknown (ambient) light sources.
  • temporal modulation is introduced to the properties of the display light, e.g. intensity or other properties, such as wavelength.
  • temporal modulation of light invisible for human eyes is used.
  • the temporal modulation is detected in the camera (light sensor) signal and used to extract only the part of the signal corresponding to the display light. In this way the influence of the ambient light sources is eliminated.
  • This "demodulated" signal (image) is then further analyzed to detect the users and/or objects and their pose, to control one or more devices. Close-by and reflective objects will provide a strong reflection of the emitted display light and will be easier to detect, since they are isolated from the background, which will not reflect the modulated display light as strongly.
  • Figure 1 illustrates an example embodiment of the whole system of the user aware display.
  • Light sensor e.g. camera
  • the camera data is used for analyzing the scene. Part of the light in the scene comes from the display itself.
  • the camera data might be used to detect the user, but also other relevant objects such as glasses that enable watching stereoscopic 3D, a remote control, or a special device used for camera-based control.
  • the objects that need to be detected might have parts of special reflective material to help accurate detection, or be close by, e.g. for the hand gesture control.
  • Figure 2 illustrates the basic principle.
  • Light modulation is introduced by the display (for example by controlling the display backlight). Preferably, high-frequency light modulation invisible to human eyes is used.
  • the camera captures images illuminated by the light. A number of camera images are used and combined to separate display generated light from the other (ambient) light sources.
  • Figure 3 illustrates an example of the base processing blocks and data flow.
  • a number of camera images are used and combined to separate the display generated light from the other (ambient) light sources in the demodulation block. The result is an image (signal) that corresponds to the display generated light and where the unknown ambient light influence is removed.
  • a scene analysis engine analyzes the demodulated data and detects users and/or other objects. The extracted information about the users and/or the objects may be used to control the TV (or the set-top box). The information about the light coming out of the display (available at the display control unit) is used to control the demodulation.
  • the light sensor can be controlled as well; for example to synchronize its data capture time with the display illumination.
  • the scene analysis engine can also use the information about the generated display light that depends on the displayed content; for example, the scene analysis engine can be made aware of time periods where the amount of light is low, such that the detection could be difficult.
  • Figure 4 illustrates an example of demodulation by subtracting two images. Two images are shown on the display and the camera is on top of the display, similar to Figure 1. One of the displayed images is darker and the other one brighter. As a result, the amount of light coming from the display changes, and this is captured by the camera. The close-by objects (for example, the user) and highly reflective objects in the two scenes (glasses in the top row and the object in the hand of the user in the bottom row) are clearly visible in the image on the right showing the difference between the two captured images. Information about the display light color and object reflectance properties can be used by the scene analysis engine to further improve detection of the specific objects of interest.
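The two-image subtraction described above can be sketched as follows; `demodulate_pair` is a hypothetical helper name, and the two frames are assumed to be aligned and captured while a dark and a bright image were displayed:

```python
import numpy as np

def demodulate_pair(frame_bright, frame_dark):
    """Estimate the display-generated light by subtracting a frame
    captured under a dark displayed image from one captured under a
    bright displayed image.  Ambient light, assumed constant between
    the two exposures, cancels in the difference."""
    diff = frame_bright.astype(np.int32) - frame_dark.astype(np.int32)
    # Negative values are noise or motion artifacts; clip them away.
    return np.clip(diff, 0, 255).astype(np.uint8)

# Synthetic example: ambient light of 40 everywhere, plus a close-by
# reflective object (rows 2-5, cols 2-5) that reflects 100 units of
# display light only while the bright image is shown.
ambient = np.full((8, 8), 40, dtype=np.uint8)
frame_dark = ambient.copy()
frame_bright = ambient.copy()
frame_bright[2:6, 2:6] += 100

demod = demodulate_pair(frame_bright, frame_dark)
print(demod[3, 3], demod[0, 0])   # object pixel vs background pixel: 100 0
```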
  • An example of how the different light sources are reflected is presented in Figure 2; the captured light is the sum of the reflected display light I_display and the ambient contribution I_ambient (Equation (1)). If the amount of light coming from the display is changing, the reflected display light I_display from the object will also change in a synchronous way; see the examples in Figures 4 and 5a. The ambient light is assumed constant and the object is assumed to be still, therefore the part corresponding to the ambient light, I_ambient, is constant. To increase the visibility of the objects in low light, some parts of the objects can be made of a special reflective material. If enough light comes from the display, such an object will be visible to the camera also in low light. A highly reflective object or a close-by object will exhibit a large change in I_display and can in this way be distinguished from other objects.
  • The light sensor will have a certain dynamic range that determines the minimum measurable reflected display light I_display relative to the ambient light I_ambient. In low light conditions the influence of the display light, I_display in Equation (1), will be significant. If there is a lot of ambient light, I_ambient will be the dominant part of Equation (1) and it might be difficult to measure the changes in I_display due to the limited precision used to capture the images. Assuming that the ambient light is continuous, the ratio of the display generated light with respect to the ambient light can be increased in the following way: instead of continuously emitting light, the display light is emitted in shorter light pulses of higher amplitude.
  • FIG. 5b illustrates how shorter light pulses can be used to increase the I_display/I_ambient ratio with respect to the situation presented in Figure 5a where the display is always emitting light.
  • the camera (light sensor) exposure should be limited to the time interval of the light pulse. A longer camera exposure will reduce the improvement in the I_display/I_ambient ratio since the camera integrates the light.
  • a reference exposure when there is no light coming from the display can be used to measure I_ambient, and the ambient light level can be removed by subtracting it in Equation (1).
  • the reference exposure measurements can be performed between the light pulses.
  • Figure 6b illustrates the use of pulse modulation and the reference exposure in between the light pulses. The camera is measuring during the light pulse and also between the pulses; the measurement between the pulses is used as the reference ambient light measurement. A longer duty cycle of the display backlight is preferred in practice because, for most light sources, more light can then be generated by the display. In Figures 6a and 6b it is assumed that the duty cycle (period A1) is 50% of the frame period (period A1+A2).
  • the reference exposure period should be reduced.
  • the same exposure period should be used for the measurement during the period when the display light is on.
  • FIG. 6c illustrates compensation of the fluctuation caused by the image on the display.
  • the ambient light level is removed, and the extracted display light reflection for the two periods, I_display(A) and I_display(B), depends on the image on the display.
  • the modulation is compensated by dividing I_display by the spatial average brightness of the displayed image.
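This compensation step can be sketched as follows, assuming the spatial average brightness of the displayed image is available from the display control unit (the helper name and scaling convention are illustrative, not the patent's exact formulation):

```python
import numpy as np

def compensate_content(i_display, displayed_image):
    """Normalize the extracted display-light reflection by the spatial
    average brightness of the image that was on the display, so that
    measurements taken under different displayed content become
    comparable."""
    mean_brightness = float(displayed_image.mean())
    return i_display / mean_brightness

# Reflection measured under a bright (mean 200) and a dark (mean 50)
# displayed image; the underlying reflectance of the scene is the same.
reflect_a = np.array([[80.0, 20.0]])   # captured while mean brightness is 200
reflect_b = np.array([[20.0, 5.0]])    # captured while mean brightness is 50

norm_a = compensate_content(reflect_a, np.full((4, 4), 200.0))
norm_b = compensate_content(reflect_b, np.full((4, 4), 50.0))
print(np.allclose(norm_a, norm_b))  # content-dependent fluctuation removed
```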
  • ADC analog to digital conversion
  • Another approach to deal with the limited dynamic range of the light sensor is to focus on the range of signal that is of interest. For example, for a hand gesture interface, detecting human skin is of interest.
  • the sensor sensitivity could be set in such a way that the display light modulation reflected on human skin does not exceed the dynamic range of the sensor. For other, brighter objects this might cause overexposure, but this is not important for the detection.
  • the range of the skin reflection can be initialized and adapted by using some procedure for detecting skin-colored regions. For example, a face detector can be used: faces are detected and, based on the pixel information in the detected face region and the current camera settings, the dynamic range corresponding to skin can be estimated.
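One way this estimation could look, assuming a face region has already been located by a separate face detector (the bounding-box convention, percentile choice and margin are illustrative assumptions):

```python
import numpy as np

def estimate_skin_range(image, face_box, margin=0.2):
    """Estimate the intensity range of skin pixels from a detected face
    region (face detection itself is outside this sketch).  The camera
    gain/exposure can then be set so this range maps onto the sensor's
    dynamic range; brighter objects may clip, which is acceptable when
    only skin needs to be detected.  `face_box` is (row0, row1, col0,
    col1), an assumed convention."""
    r0, r1, c0, c1 = face_box
    patch = image[r0:r1, c0:c1].astype(np.float64)
    lo, hi = np.percentile(patch, [5, 95])   # robust against outliers
    span = hi - lo
    return lo - margin * span, hi + margin * span

# Synthetic frame: background at 30, a "face" patch spanning 120-140.
frame = np.full((20, 20), 30.0)
frame[5:15, 5:15] = np.linspace(120.0, 140.0, 100).reshape(10, 10)
lo, hi = estimate_skin_range(frame, (5, 15, 5, 15))
print(lo < 120 and hi > 140)   # estimated range covers the skin values
```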
  • Synchronization. Usually high-frequency light modulation invisible to human eyes is used. However, the light sensor does not need to measure at the same high frequencies but can operate at lower frequencies by skipping some frames. For example, in Figure 6 the light sensor can measure during period A1 and then take the first reference ambient light measurement during period B2.
  • the light sensor needs to be synchronized with the display light illumination in some way. This could be done for example using some trigger signal to synchronize the light sensor and the light source.
  • Another way is adaptive synchronization, where some automatic procedure analyzes the measured signal, for example a phase-locked loop (PLL) approach.
  • PLL phase lock loop
  • Rolling shutter camera as a light sensor is an important use case because of the low costs of such cameras.
  • each image line integrates over a different time period since the lines are reset, exposed and read out sequentially.
  • the lines that integrate over transitions of the display light modulation could also be used, but they will have a worse signal-to-noise ratio.
  • a solution is to combine multiple captured camera images and select the image lines captured at the proper time instances, to be able to measure the highest amplitude of the modulated signal for each line. To ensure that after a number of images all image lines capture both the highest and the lowest amplitude, the frequencies of the camera and the display light modulation should be different.
  • Figure 8 illustrates an example demodulation by combining a number of images captured by a rolling shutter camera.
  • the display backlight was emitting square pulses at 90 Hz and the camera was capturing at 120 Hz with an exposure time of 20%. Four captured images are shown. A white object was present close to the camera, visible at the left side of the images, to illustrate which image lines were captured while the display backlight was on.
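A minimal sketch of combining rolling-shutter frames, assuming per-line granularity and a backlight/camera frequency mismatch that lets every line sample both the on and off phase over a few frames (the per-line max-minus-min rule is an illustrative simplification of selecting the lines captured at the proper time instances):

```python
import numpy as np

def demodulate_rolling_shutter(frames):
    """Combine frames captured with a rolling shutter whose frame rate
    differs from the backlight modulation frequency.  Because each line
    samples the modulation at a different phase in each frame, the
    per-line maximum (backlight on) minus the per-line minimum
    (backlight off) over the stack recovers the modulation amplitude
    for every line."""
    stack = np.stack([f.astype(np.int32) for f in frames])
    return (stack.max(axis=0) - stack.min(axis=0)).astype(np.int32)

# Simulate 4 frames of a 6-line sensor: ambient level 50 on all lines;
# a reflective object on lines 2-3 adds 80 of reflected display light,
# but each frame catches the backlight "on" phase only on a sliding
# subset of lines (the phase drifts because the frequencies differ).
ambient, n_lines, n_frames = 50, 6, 4
frames = []
for k in range(n_frames):
    f = np.full((n_lines, 4), ambient, dtype=np.int32)
    on_lines = [(k + i) % n_lines for i in range(3)]
    for ln in on_lines:
        if ln in (2, 3):
            f[ln, :] += 80          # display light reflected by the object
    frames.append(f)

amp = demodulate_rolling_shutter(frames)
print(amp[2, 0], amp[0, 0])   # object line amplitude vs background line: 80 0
```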
  • Dual mode operation. In practice it is preferred to have the display backlight mostly turned on, for example 80%, to maximize the amount of light coming from the display. Introducing modulation requires dark periods that reduce the amount of light coming from the display. If short dark periods are used, the system will need more images and a longer time to generate the demodulated image. The same holds for short bright periods, for example in backlight dimming applications. As a result, the detection of moving objects will be more difficult, and the detection of user actions will be slower. A duty cycle between 20% and 80% is a typical useful range for dimming applications.
  • - user detection stand-by mode: used when the user is not using the user interface but is, for example, watching a video.
  • Short dark periods are used to maximize the picture quality, for example a flashing backlight with an 80% duty cycle. As a result many images are used for demodulation and the user detection is slower, but the system only needs to detect that the user wants to switch to the interactive mode.
  • An alternative is not to increase the duty cycle but to generate a light level modulation between a high and a low level where the light is never turned off. As a result the amplitude of the modulated display light will be smaller and more difficult to detect.
  • - interactive mode: when switched to interaction mode it is assumed that the user wants to control the display and perform some selections, so the full brightness of the screen is not so important. Longer dark periods are used to increase the reaction speed of the user detection. As a result the brightness of the screen can get lower, but this can also serve as a natural indication that the display is in the interaction mode (display menu mode).
  • the principle can be applied in displays with scanning backlight.
  • the segments are refreshed per column in a scanning order locked to the video refresh rate. Every video frame can hold a different light level. Short dark periods can still be introduced by switching off all columns at the same time for a few milliseconds; see the example in Figure 11.
  • the light level is controlled by a PWM (pulse width modulation) signal at a higher frequency of e.g. 600Hz, as indicated in Figure 11c.
  • Another display backlight scanning technology is used for sequential crosstalk reduction and motion portrayal improvement.
  • the duty cycle of the PWM signals is already low (e.g. 50%) and the frequency is equal and locked to the frequency of the video (e.g. 60 Hz); see Figure 10a.
  • the "light on" period is split and/or shifted in time to insert a black period for all backlight columns.
  • FIG. 10b and 10c illustrate this. It might have some consequences for the picture quality because the ideal scanning is interrupted.
  • This embodiment can also be combined with systems that use dimming.
  • Frame sequential 3D TV implementation. Combining the flashing light modulation with frame sequential 3D systems is often beneficial, since such systems usually already have dark periods when switching between the left and right eye image to reduce the crosstalk between the two images. These dark periods can then be used. Depending on the 3D technology, the dark periods might need to be made longer. A similar combination can be used for other sequential techniques (for example, some color-sequential implementations) where dark periods are usually introduced to reduce the crosstalk between the different images that are sequentially displayed.
  • the assumption is that the objects are still. If the measurements are done at high frequencies, the movement between the captured frames will be small. For lower measurement frequencies, the moving objects can be tracked first and aligned before they are checked for the appearance changes due to the display light changes.
  • the difference between the reference exposures can be used to detect the changes due to the movement.
  • the changing areas in these frames correspond to the motion and they can be excluded from analysis.
  • Another method is to use the reference exposures (A2 and B2) for object tracking and motion based compensation.
  • the proposed light modulation/demodulation method of the present invention is used to extract only the part of the signal corresponding to the known display light and in this way be robust against the influence of the unknown ambient light.
  • the close-by and reflective objects will be strongly visible in the resulting images since they provide strong reflection of the emitted display light.
  • the term "close-by" is intended to include objects within a range of zero to some meters, for example two to three meters. To distinguish between different reflective objects and/or extract their pose and other properties, further processing is needed.
  • Detecting users and their hands is very important for realizing user interfaces. Hands usually reflect a lot of light, and it would be possible to detect them close to the display even if a small amount of light is emitted by the display.
  • the display light will illuminate only the close-by objects, so first a threshold is applied to remove pixels that belong to the far-away background. Usually this simple procedure will be good enough.
  • a model of the background can be used to remove the static objects in the background, for example a long-term temporal average image of the scene. The result is a segmentation where image pixels are labeled as foreground and background pixels.
  • Connected regions are detected and small connected regions are removed.
  • Connected regions are groups of interconnected foreground pixels where interconnected means that at least one of the neighboring pixels is also labeled as foreground.
  • Small connected regions are regions with fewer pixels than the object is expected to have. For example, the minimum size of a human hand at a maximum distance of 3 meters for a certain camera can be calculated, and this number of pixels can be used to remove all regions with fewer pixels.
  • Each connected region can be further split if needed, using some segmentation algorithm such as "Mean Shift", as defined in D. Comaniciu, P. Meer: "Mean shift: A robust approach toward feature space analysis", IEEE Trans. Pattern Anal. Machine Intell., May 2002 [2].
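The labeling and size-filtering steps above can be sketched as follows (a plain BFS labeling; the helper names are illustrative, and a real system would use an optimized connected-component routine):

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labeling of a boolean foreground mask."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    current = 0
    for r, c in zip(*np.nonzero(mask)):
        if labels[r, c]:
            continue
        current += 1
        labels[r, c] = current
        queue = deque([(r, c)])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

def remove_small_regions(mask, min_pixels):
    """Drop connected regions with fewer pixels than the smallest
    expected object (e.g. a hand at the maximum distance)."""
    labels, n = label_regions(mask)
    keep = mask.copy()
    for lbl in range(1, n + 1):
        region = labels == lbl
        if region.sum() < min_pixels:
            keep[region] = False
    return keep

# Thresholded demodulated image: one large blob (a hand) and one
# isolated noise pixel.
mask = np.zeros((8, 8), dtype=bool)
mask[1:5, 1:5] = True          # 16-pixel foreground region
mask[7, 7] = True              # 1-pixel noise region
clean = remove_small_regions(mask, min_pixels=5)
print(clean.sum())  # noise pixel removed, hand region kept: 16
```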
  • An example of a demodulated image and the result of the segmentation is presented in Figure 9. Two connected regions are detected, presented in the right image; different gray values are used to mark the pixels belonging to each of the regions.
  • Contours of detected connected segments are extracted.
  • the contours of the segments are then described by a set of features that can be used to distinguish different shapes.
  • Hu moments [1] such as described in M. K. Hu, "Visual Pattern Recognition by Moment Invariants", IRE Trans. Info. Theory, vol. IT-8, pp. 179-187, 1962 and Wu, M.-F.; the disclosures of which are incorporated herein by reference.
  • the extracted features are passed to the detector for detecting specific shapes.
  • the specific shape of a reflective marker can be detected.
  • detectors for, for example, an open hand shape, a pointing hand shape and a closed hand shape can be used.
  • the output of the user detector is also temporally filtered. Detections that remain stable for some time are considered true detections. If the user's hands and their shape are detected, this information could be used to design a touch-free user interface. If a special marker corresponding to 3D viewing glasses is detected, this can be used to automatically select the 2D or 3D viewing mode of the display.
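As an illustration of such shape features, the first two Hu invariants can be computed from normalized central moments of a binary region; they are unchanged by translation (a minimal sketch computing moments from the filled region rather than the contour; real systems typically use all seven invariants):

```python
import numpy as np

def hu_invariants(mask):
    """First two Hu moment invariants of a binary shape, computed from
    normalized central moments."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))                  # region area (zeroth moment)
    xbar, ybar = xs.mean(), ys.mean()     # centroid

    def eta(p, q):
        # Normalized central moment: translation and scale invariant.
        mu = (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

# The invariants do not change when the shape is translated.
shape = np.zeros((20, 20), dtype=bool)
shape[2:6, 2:10] = True                     # a 4x8 rectangle
shifted = np.roll(np.roll(shape, 7, axis=0), 5, axis=1)
print(np.allclose(hu_invariants(shape), hu_invariants(shifted)))  # True
```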
  • the mentioned shape detector is constructed using a statistical pattern recognition classifier function.
  • the classifier function is automatically constructed using the statistical pattern recognition technique AdaBoost.
  • The AdaBoost statistical pattern recognition technique constructs the function automatically from two large sets of images: one dataset containing examples of the shape that needs to be detected (for example an open hand) and the other containing random shapes.
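A toy sketch of how AdaBoost builds such a classifier from labeled feature vectors, using one-dimensional decision stumps as weak learners (the feature data here is synthetic; a real system would train on features extracted from large image sets):

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost with decision stumps on feature vectors
    (e.g. contour features).  Labels y must be +1 or -1."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                     # sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for f in range(d):                      # exhaustive stump search
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = np.where(X[:, f] > thr, sign, -sign)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign)
        err, f, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)   # stump weight
        pred = np.where(X[:, f] > thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)          # re-weight hard examples
        w /= w.sum()
        stumps.append((alpha, f, thr, sign))
    return stumps

def predict(stumps, X):
    score = np.zeros(len(X))
    for alpha, f, thr, sign in stumps:
        score += alpha * np.where(X[:, f] > thr, sign, -sign)
    return np.where(score >= 0, 1, -1)

# Toy data: positive shapes ("open hand") have a larger first feature.
rng = np.random.default_rng(1)
pos = rng.normal([3.0, 0.0], 0.5, (30, 2))
neg = rng.normal([0.0, 0.0], 0.5, (30, 2))
X = np.vstack([pos, neg])
y = np.concatenate([np.ones(30), -np.ones(30)]).astype(int)

model = train_adaboost_stumps(X, y, n_rounds=5)
acc = (predict(model, X) == y).mean()
print(acc >= 0.95)
```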
  • Detecting 3D viewing glasses for automatic switching between 2D and 3D mode of the display is another application.
  • the detection can be made robust to light conditions if the glasses are made of highly light reflective material.
  • Example in Figure 4 shows that the highly reflective objects are clearly visible in the demodulated images and possibly easy to detect.
  • the light modulation can also be introduced directly by the presented graphics.
  • Currently many displays allow fast updates of the graphics (for example 240Hz screens), which also allows including the modulation directly in the presented graphics.
  • An advantage of such a system is that there is then no need for special control of the backlight.
  • Another advantage is that by modulating graphics at only part of the screen, it is possible to generate "localized light modulation" for that part of the screen, even if the display backlight does not support this directly. Modulating the light from just a part of the screen can serve different purposes. For example, if only the "menu object" generates modulated light, it will illuminate a human hand or a finger only when it gets close to the object, and in this way make detection more robust.
  • Photometric stereo, such as is described in E. Prados, O. Faugeras, "Shape From Shading: a well-posed problem?", IEEE conference CVPR, 2005 [3] and C. Hernandez, G. Vogiatzis, G. J. Brostow, B. Stenger, R. Cipolla, "Non-rigid Photometric Stereo with Colored Lights", 2007 [4], the disclosures of which are incorporated herein by reference, is a technique where 3D structure is reconstructed by analyzing how known light is reflected from objects and analyzing the introduced shadows. For the 3D reconstruction, usually multiple images are needed of an object illuminated by known light sources from different directions.
  • the 3D reconstruction using similar processing principles can be realized using the localized display light illumination (either by the display backlight or by graphics as described above) to generate the set of images of an object illuminated by known light sources from different directions.
  • Scanning backlight is an example case where display light is emitted from different parts of the screen at different times.
  • An example implementation of such system performs the following in one preferred embodiment:
  • the results are four images of the object illuminated from four different known light sources.
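The final reconstruction step can be sketched as classical Lambertian photometric stereo: given intensities of the same scene under four known light directions (for example, four screen regions lit in turn), per-pixel surface normals follow from a least squares solve (a textbook sketch under Lambertian assumptions, not the patent's exact implementation):

```python
import numpy as np

def photometric_normals(intensities, light_dirs):
    """Recover surface normals (up to albedo) from images of a
    Lambertian object lit from known directions, by solving I = L @ n
    per pixel in the least squares sense.  `intensities` is (k, h, w),
    `light_dirs` is (k, 3) with unit-length rows."""
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                       # (k, h*w)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w), g = albedo*n
    norms = np.linalg.norm(g, axis=0)
    n = np.where(norms > 0, g / np.maximum(norms, 1e-12), 0.0)
    return n.reshape(3, h, w)

# Four light directions, e.g. the four quadrants of the screen lit in turn.
L = np.array([[0.5, 0.5, 0.707], [-0.5, 0.5, 0.707],
              [0.5, -0.5, 0.707], [-0.5, -0.5, 0.707]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

# Synthetic flat patch with its normal pointing at the camera (0, 0, 1).
true_n = np.array([0.0, 0.0, 1.0])
imgs = np.clip(L @ true_n, 0, None)[:, None, None] * np.ones((1, 4, 4))

est = photometric_normals(imgs, L)
print(np.allclose(est[:, 0, 0], true_n, atol=1e-6))  # recovered normal
```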

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Liquid Crystal Display Device Control (AREA)
  • Processing Or Creating Images (AREA)
  • Details Of Television Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

A method of generating control signals based in particular on a movement or pose of at least one observer, comprising the following operations: capturing a plurality of images of a scene illuminated by ambient light and by light produced by the display device, where each of the plurality of images comprises a substantially constant amount of ambient light and a predetermined amount of light generated by the display device, the predetermined amount of light varying between at least two consecutive images of the plurality of images; separating the display-generated light from the ambient light by means of known characteristics of the display-generated light in order to extract a part of the illuminated scene; and generating a control signal in response to the extracted part.
PCT/EP2012/062131 2011-06-24 2012-06-22 Procédé et système de détection fiable d'un objet réfléchissant par éclairage de la scène au moyen de la lumière d'affichage WO2012175703A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020147001185A KR20140057522A (ko) 2011-06-24 2012-06-22 디스플레이 광원 장면 조명을 사용하여 신뢰할만한 객체 반사 감지를 위한 방법 및 시스템
JP2014516374A JP2014520469A (ja) 2011-06-24 2012-06-22 ディスプレイ光のシーン照明を使用して反射物体を高信頼度で検出するための方法及びシステム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP11171325.1 2011-06-24
EP11171325 2011-06-24

Publications (1)

Publication Number Publication Date
WO2012175703A1 true WO2012175703A1 (fr) 2012-12-27

Family

ID=46397230

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2012/062131 WO2012175703A1 (fr) 2011-06-24 2012-06-22 Procédé et système de détection fiable d'un objet réfléchissant par éclairage de la scène au moyen de la lumière d'affichage

Country Status (3)

Country Link
JP (1) JP2014520469A (fr)
KR (1) KR20140057522A (fr)
WO (1) WO2012175703A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9277132B2 (en) 2013-02-21 2016-03-01 Mobileye Vision Technologies Ltd. Image distortion correction of a camera with a rolling shutter
CN107105221A (zh) * 2017-03-23 2017-08-29 武汉云信众汇通讯科技有限公司 一种3d电视自适应控制方法及装置
WO2017158587A1 (fr) 2016-03-13 2017-09-21 B. G. Negev Technologies And Applications Ltd., At Ben-Gurion University Procédés de production d'images vidéo indépendantes de l'éclairage d'arrière-plan
WO2019030061A1 (fr) * 2017-08-08 2019-02-14 Osram Gmbh Reconstruction de surface d'un objet éclairé par analyse stéréo-photométrique
US10679397B1 (en) 2018-12-13 2020-06-09 Universal City Studios Llc Object tracking animated figure systems and methods
EP3839880A1 (fr) * 2019-12-20 2021-06-23 Koninklijke Philips N.V. Système permettant d'effectuer une correction d'image en lumière ambiante
CN113273176A (zh) * 2019-01-02 2021-08-17 杭州他若定位科技有限公司 使用基于图像的对象跟踪的自动化电影制作
WO2021250007A1 (fr) * 2020-06-10 2021-12-16 Koninklijke Philips N.V. Ciblage du rapport signal/bruit

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101594417B1 (ko) * 2014-04-28 2016-02-17 한국원자력연구원 비가시 환경에 강한 영상 획득 장치

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080231564A1 (en) * 2007-03-16 2008-09-25 Sony Corporation Display apparatus and method for controlling the same


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
C. Hernandez, G. Vogiatzis, G. J. Brostow, B. Stenger, R. Cipolla: "Non-rigid Photometric Stereo with Colored Lights", 2007
D. Comaniciu, P. Meer: "Mean shift: A robust approach toward feature space analysis", IEEE Trans. Pattern Anal. Machine Intell., May 2002 (2002-05-01)
E. Prados, O. Faugeras: "Shape From Shading: a well-posed problem?", IEEE Conference CVPR, 2005
M. K. Hu: "Visual Pattern Recognition by Moment Invariants", IRE Trans. Info. Theory, vol. IT-8, 1962, pages 179-187, XP011217262, DOI: 10.1109/TIT.1962.1057692

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10079975B2 (en) 2013-02-21 2018-09-18 Mobileye Vision Technologies Ltd. Image distortion correction of a camera with a rolling shutter
US10834324B2 (en) 2013-02-21 2020-11-10 Mobileye Vision Technologies Ltd. Image distortion correction of a camera with a rolling shutter
US9277132B2 (en) 2013-02-21 2016-03-01 Mobileye Vision Technologies Ltd. Image distortion correction of a camera with a rolling shutter
US20190068862A1 (en) * 2016-03-13 2019-02-28 B.G. Negev Technologies And Applications Ltd. At Ben-Gurion University Methods of producing video images that are independent of the background lighting
WO2017158587A1 (fr) 2016-03-13 2017-09-21 B. G. Negev Technologies And Applications Ltd., At Ben-Gurion University Procédés de production d'images vidéo indépendantes de l'éclairage d'arrière-plan
US10630907B2 (en) 2016-03-13 2020-04-21 B.G. Negev Technologies And Applications Ltd., At Ben Gurion University Methods of producing video images that are independent of the background lighting
CN107105221A (zh) * 2017-03-23 2017-08-29 武汉云信众汇通讯科技有限公司 一种3d电视自适应控制方法及装置
CN107105221B (zh) * 2017-03-23 2019-04-09 武汉云信众汇通讯科技有限公司 一种3d电视自适应控制方法及装置
WO2019030061A1 (fr) * 2017-08-08 2019-02-14 Osram Gmbh Reconstruction de surface d'un objet éclairé par analyse stéréo-photométrique
US11354842B2 (en) 2018-12-13 2022-06-07 Universal City Studios Llc Object tracking animated figure systems and methods
WO2020123863A1 (fr) * 2018-12-13 2020-06-18 Universal City Studios Llc Systèmes et procédés de figure animée suivant un objet
CN113168237A (zh) * 2018-12-13 2021-07-23 环球城市电影有限责任公司 物体跟踪的动画形象的系统及方法
US10679397B1 (en) 2018-12-13 2020-06-09 Universal City Studios Llc Object tracking animated figure systems and methods
US11907414B2 (en) 2018-12-13 2024-02-20 Universal City Studios Llc Object tracking animated figure systems and methods
CN113273176A (zh) * 2019-01-02 2021-08-17 杭州他若定位科技有限公司 使用基于图像的对象跟踪的自动化电影制作
CN113273176B (zh) * 2019-01-02 2023-05-30 杭州他若定位科技有限公司 使用基于图像的对象跟踪的自动化电影制作
EP3839880A1 (fr) * 2019-12-20 2021-06-23 Koninklijke Philips N.V. Système permettant d'effectuer une correction d'image en lumière ambiante
WO2021122582A1 (fr) * 2019-12-20 2021-06-24 Koninklijke Philips N.V. Système pour effectuer une correction d'image de lumière ambiante
US11871117B2 (en) 2019-12-20 2024-01-09 Koninklijke Philips N.V. System for performing ambient light image correction
WO2021250007A1 (fr) * 2020-06-10 2021-12-16 Koninklijke Philips N.V. Ciblage du rapport signal/bruit

Also Published As

Publication number Publication date
JP2014520469A (ja) 2014-08-21
KR20140057522A (ko) 2014-05-13

Similar Documents

Publication Publication Date Title
WO2012175703A1 (fr) Procédé et système de détection fiable d'un objet réfléchissant par éclairage de la scène au moyen de la lumière d'affichage
US20200334491A1 (en) Enhanced Contrast for Object Detection and Characterization by Optical Imaging Based on Differences Between Images
AU2018247216B2 (en) Systems and methods for liveness analysis
EP3367661B1 (fr) Procédé et système pour utiliser des émissions de lumière par caméra de détection de profondeur pour capturer des images vidéo dans des conditions de faible luminosité
CN101385069B (zh) 与具有半透明表面的屏幕一起使用的用户输入装置、系统、方法以及计算机程序
KR101861393B1 (ko) 통합형 저전력 깊이 카메라 및 비디오 프로젝터 장치
CN107066962B (zh) 用于通过光学成像进行的对象检测和表征的增强对比度
WO2018208470A1 (fr) Dispositif de suivi vestimentaire et poses d'objets portatifs
EP3391648A1 (fr) Ensemble caméra de profondeur à fenêtre de distance
KR102481774B1 (ko) 이미지 장치 및 그것의 동작 방법
US20130194401A1 (en) 3d glasses, display apparatus and control method thereof
US20120120208A1 (en) 3D shutter glasses with frame rate detector
KR20120049230A (ko) 롤링 이미지 캡처 시스템에서의 주변 보정
EP3099058A1 (fr) Procédé pour a détection d'une copie de vidéo et dispositif correspondant
US20150062013A1 (en) Rolling Shutter Synchronization of a Pointing Device in an Interactive Display System
KR20100097166A (ko) 파워 절약 투과형 디스플레이
US8654103B2 (en) Interactive display
US20200096760A1 (en) Communication apparatus, communication system, data communication method, and program
TW201443403A (zh) 光學偵測裝置及其同步調整方法
Borsato et al. Building structured lighting applications using low-cost cameras
JP2015126281A (ja) プロジェクタ、情報処理方法及びプログラム
Zivkovic et al. Mid-air interactive display using modulated display light
CN117061715A (zh) 投影图像参数确定方法、投影设备及计算机存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12730490

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014516374

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 20147001185

Country of ref document: KR

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 12730490

Country of ref document: EP

Kind code of ref document: A1