EP2780780B1 - Position determination of an object by detecting a position pattern by means of an optical sensor - Google Patents
Position determination of an object by detecting a position pattern by means of an optical sensor
- Publication number
- EP2780780B1 (application EP12794895.8A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- modulation
- bit
- location information
- pattern
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
- G06F3/0317—Detection arrangements using opto-electronic means in co-operation with a patterned surface, e.g. absolute position or relative movement detection for an optical mouse or pen positioned with respect to a coded surface
- G06F3/0321—Detection arrangements using opto-electronic means in co-operation with a patterned surface, e.g. absolute position or relative movement detection for an optical mouse or pen positioned with respect to a coded surface by optically sensing the absolute position with respect to a regularly patterned surface forming a passive digitiser, e.g. pen optically detecting position indicative tags printed on a paper sheet
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Definitions
- the present invention relates to devices for determining a position of a (physical) object and to image processing devices and optical sensor devices that may be used therein.
- the present invention further relates to methods for determining a position of an object.
- the present invention relates to determining the position of the object on, at or in relation to a screen or display surface.
- Determining the position of a physical object can be used in the context of user interfaces to enable tracking and/or visualization of an actual, physical position of the physical object, e.g. by software running on a computer.
- For one part of the "tangible user interface" (TUI) application area, one wants to place physical objects on a flat computer screen whose position and, if applicable, orientation can be determined automatically by the computer. This allows the physical objects to be linked to representations on the screen, so that movement of these objects can cause an immediate reaction in the computer. This creates the impression that the physical objects belong to the representations on the screen, and the representations thus become directly "tangible".
- the technical teachings disclosed herein describe techniques that efficiently enable such position detection.
- One method to determine the position of the objects is to capture them with a camera mounted either above or below the screen (e.g. in conjunction with a transparent projection screen), as used for example in the product Microsoft Surface™.
- In another approach, a matrix of light-sensitive sensors that replaces the camera is integrated directly into the screen.
- These approaches therefore require either additional external cameras and/or special screen hardware.
- Another well-known approach (see e.g. the international patent application with publication number WO 01/15059 A2 from 2000) does not require any special screens.
- In this approach, image signals are displayed on the screen from which the position on the screen can be determined when they are recognized and evaluated by the objects placed on it.
- the objects placed on the screen have optical sensors and a radio channel to the computer in order to be able to recognize the position and transmit it to the computer.
- the superimposed information is location-dependent in relation to the screen, i.e. different patterns are superimposed in different areas of the screen, the recognition of which allows a direct conclusion to be drawn about the location.
- These patterns can be formed either in the area or in time. In particular, it is intended that the patterns are active for all image areas at the same time. This results in the desire that the superimposed patterns should be as invisible as possible for the user.
- To solve this problem, WO 01/15059 A2 proposes only the use of special screen hardware that can emit light signals in the non-visible range. A solution using a conventional screen is not described.
- the present invention describes how it is also possible to embed patterns on normal screens that are below the human perception threshold but can still be recognized by the objects if the pattern, sensor hardware and signal processing are selected appropriately. This is the principle of "watermark embedding".
- the published US patent application with publication number US 2007/0001950 A1 describes a method and system for presenting data on a medium for capture by an input device.
- the method embeds a symbol design, such as an embedded interaction code (EIC), into an image on a display screen, such as a liquid crystal display (LCD).
- a grid having a plurality of pixels defines a size of an EIC pattern on an LCD.
- a region of the grid is used to embed positional data and/or metadata information.
- the publication US 2005/099405 A1 shows a light-sensitive pen as an input device, which interacts with a display system.
- a sequence of patterns with a specific light intensity for each area of the display is captured with the light-sensitive pen in order to determine the position of the light-sensitive pen relative to the display system.
- the publication US 2011/014982 A1 shows a position detection system which detects a target device position relative to an image area.
- a mark is incorporated into an original image as a position determination pattern.
- a position detection unit detects the mark and then determines the target device position relative to the image area.
- the publication US 2008/211183 A1 shows a computer controlled game character.
- the game character has a base for placement on a display surface.
- the base includes an image capture device for capturing a control image.
- the game character further includes control means coupled to the image capture device for controlling the appearance or position of the game character in dependence on the control image.
- the object of the present invention is to provide a device and a method for determining the position of an object which can cooperate with normal screen or projection hardware, while keeping the signal generated by the device, or the pattern used in the process, as imperceptible as possible to a human observer.
- the object of the present invention is achieved by a device according to claim 1, a method according to claim 7 and a computer program according to claim 8.
- the invention provides a device for determining a position of an object in relation to a representation of an image to be displayed.
- the device has as features an input for at least one image to be displayed, a position pattern generator, a combination unit, an optical sensor, a filter and a determination device.
- the position pattern generator is configured to generate a position pattern that is divided into a plurality of pattern sections, wherein each of the pattern sections has a unique bit pattern from a plurality of bit patterns and wherein the bit patterns are Gray-coded or generalized Gray-coded.
- the combination unit is configured to combine the position pattern with the at least one image to be displayed and to provide a corresponding combination image.
- the optical sensor is configured to optically capture an image section of the combination image, wherein the image section correlates with the position of the object.
- the filter is configured to extract at least one pattern section of the position pattern from the image section and to provide at least one corresponding extracted pattern section.
- the determining device is configured to determine the position of the object based on the at least one extracted pattern section.
- Some embodiments are therefore based on the fact that, due to the Gray coding or generalized Gray coding of the various pattern sections, the bit patterns manage with a relatively small number of bits per bit pattern in order to obtain a comparatively fine position resolution.
- the small number of bits makes it possible to combine the bits with the image to be displayed in such a way that the image to be displayed is changed so slightly that it is not perceptible to a human observer or at least not noticeable. This is done by exploiting the fact that the information to be transmitted for the individual areas or pattern sections is known in advance. If the information is transmitted by a bit pattern, it is possible, without functional restrictions, to assign bit patterns to neighboring areas or pattern sections that are as similar as possible.
- Image areas transmit an X and a Y coordinate (in particular their own X and Y coordinates) as information.
- Two neighboring areas are characterized by the fact that one of these coordinates is increased or decreased by one. If Gray coding is used to assign the numbers to a bit pattern, the bit patterns of neighboring areas differ in only a single bit. Crosstalk between two areas then leads to the symbols for all bits except one constructively superimposing (amplifying) and only a single bit "mixing". This ensures reliable detection of the common bits, and the "mixed bit" can even be used to detect that the sensor is located between the two areas, without the coding of the symbols having to be extended.
- Generalized Gray coding is a Gray code in which only one element changes from one pattern section to an adjacent pattern section, but an element can comprise one bit (i.e. normal Gray code) or more than one bit.
- generalized Gray codes are codes that have both a reflective property and a unit distance property, where the unit distance can be one bit or more bits.
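- As an illustration of this unit-distance property, a minimal sketch (not part of the patent; the helper names and the 8-bit coordinate width are assumptions) Gray-codes the X and Y coordinates of a pattern section and checks that horizontally and vertically adjacent sections differ in exactly one bit:

```python
def gray_encode(n: int) -> int:
    """Convert a binary number to its reflected Gray code."""
    return n ^ (n >> 1)

def bit_pattern(x: int, y: int, bits: int = 8) -> str:
    """Hypothetical bit pattern of a pattern section: Gray-coded X
    followed by Gray-coded Y, each 'bits' wide."""
    return f"{gray_encode(x):0{bits}b}{gray_encode(y):0{bits}b}"

def hamming(a: str, b: str) -> int:
    """Number of differing bit positions."""
    return sum(c1 != c2 for c1, c2 in zip(a, b))

# Neighboring sections differ in a single bit, so crosstalk between them
# "mixes" only that one bit while all other bits reinforce each other.
p = bit_pattern(5, 9)
assert hamming(p, bit_pattern(6, 9)) == 1   # right-hand neighbor
assert hamming(p, bit_pattern(5, 10)) == 1  # lower neighbor
```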
- a modulation sequence that implements the combination of the image to be displayed with the position pattern can be significantly shorter than with other positioning principles.
- no orthogonal sequences with the corresponding required length need to be used in neighboring pattern sections (or areas). If the sensor partially sees two (or more) neighboring pattern sections, the signal energy of the combination (e.g. in the form of a modulation) remains the sum of the partial areas seen.
- the modulation depth can be reduced without increasing the sequence length by spreading. This makes it practical to embed the modulation invisibly in all (or at least some relevant) image areas in the manner of a watermark and still enable fast position detection.
- a device for determining a position of an object within a display of an image to be displayed has the following features: an input for the image to be displayed; a modulation image generator for generating at least one modulation image, wherein the at least one modulation image is divided into a plurality of fields and a modulation image value of a specific field represents location information of the field within the modulation image; an image modulator for modulating the image to be displayed with the modulation image, wherein a difference between the image to be displayed and the image modulated with the at least one modulation image is below a human perception threshold; an output for the image modulated with the at least one modulation image for display on a display device; an optical sensor for detecting an optical signal emitted by an image section of the image modulated with the modulation image and for generating a corresponding detection signal; and an evaluation unit for determining image section location information based on the optical signal emitted by the image section, in that the evaluation unit is configured to extract location information of at least one field located at least partially in the image section from the optical signal emitted by the image section.
- Fig.1 shows a schematic block diagram of a device for determining a position of an object according to at least one possible embodiment of the technical teaching disclosed herein.
- the device 10 receives an image 3 to be displayed (ie corresponding image data) at an input 11.
- the image 3 to be displayed is to be displayed by means of a display device 4, wherein the display device 4 can be, for example, a screen, a digital projector with a corresponding projection surface, or another device for displaying information in optical form.
- a movable object 5 can now take up a position in relation to the display device 4, for example by placing or setting the object 5 on the horizontally aligned display surface of the display device 4, thereby aligning it with a specific image section of the image displayed by the display device 4.
- it is not necessary for the object 5 to be in direct contact with the display device 4; in order to determine the position of the object clearly and correctly, it only needs to be clear which image section a current position of the object 5 corresponds to.
- the display surface of the display device 4 does not have to be aligned horizontally, but can also be inclined or vertical. In these cases, it is expedient if the object can adhere to, be hung on, or be set up against the display surface by suitable measures. Alternatively, the object can also be held or guided by a user.
- the movable object 5 is shown in Fig.1 as a game piece of a board game by way of example, but can take on a variety of other forms.
- the image section assigned to the position of object 5 is indicated in Fig.1 by a dashed circle within the display area of the display device 4.
- the movable object 5 is usually coupled to an optical sensor 15 in such a way that the latter is directed in particular at the aforementioned image section 6.
- the optical sensor 15 is configured to optically capture the image section 6.
- the image section 6 corresponds to the position of the object 5.
- the optical sensor 15 can be integrated into the object 5 or attached to it for this purpose.
- the optical sensor 15 is part of the device 10 for determining the position of the object 5.
- An image section captured by the optical sensor 15 is transmitted to a filter 16.
- the filter 16 is configured to filter out a bit pattern or multiple bit patterns from the image section that contain position information of the image section.
- the bit pattern or multiple bit patterns are represented by information that was combined with the image 3 to be displayed before the image 3 to be displayed is displayed by means of the display device 4. If the filter 16 detects a bit pattern used for position determination within the image section with sufficient reliability, it transmits the bit pattern or a corresponding pattern section to a determination device 17 of the device 10.
- the determination device 17 is configured to determine the position of the object 5 on the basis of the at least one extracted pattern section or the corresponding bit pattern.
- the device 10 for determining the position of the object 5 comprises a position pattern generator 12 and a combination unit 14.
- the position pattern generator 12 provides a position pattern 13.
- the position pattern 13 is divided into a plurality of pattern sections.
- Each of the pattern sections has a unique bit pattern from a plurality of bit patterns that enable identification of the respective pattern section.
- the bit patterns are Gray-coded or generalized Gray-coded.
- the position pattern is typically a two-dimensional pattern and the pattern sections typically form a two-dimensional array. According to the Gray code, the bit patterns of two adjacent pattern sections differ in only one bit.
- the image 3 to be displayed and the position pattern 13 are combined by means of the combination unit 14, so that a corresponding combination image is obtained, which is provided for display on the display device 4.
- the optical sensor 15 can be connected to the filter 16, for example, via a flexible cable or a wireless connection (radio connection, infrared connection, ultrasonic connection or the like). In this way, the optical sensor 15 is movable relative to the rest of the device 10.
- the flexible cable or the wireless connection can also be provided between the filter 16 and the determination device 17, or also at the output of the determination device 17, at which the position information is provided.
- Fig.2 shows a schematic block diagram of a device for determining a position of an object according to at least one further possible embodiment of the technical teaching disclosed herein.
- the device 20 comprises an input 11 for the image 3 to be displayed, a modulation image generator 22, an image modulator 24, an optical sensor 15 and an evaluation unit 27.
- the image 3 to be displayed is forwarded within the device 20 from the input 11 to an input of the image modulator 24.
- Another input of the image modulator 24 is connected to an output of the modulation image generator 22.
- the modulation image generator 22 generates at least one modulation image which is divided into a plurality of fields.
- a modulation image value of a specific field represents location information of the field within the modulation image.
- the modulation image serves a similar purpose to the position pattern from the embodiment illustrated in Fig.1 and may even correspond to it.
- the plurality of fields serves a similar purpose to the plurality of pattern sections of the embodiment of Fig.1.
- the modulation image value of a particular field is comparable in function and/or purpose to the unique bit pattern of a particular pattern section mentioned in connection with the description of Fig.1.
- the image modulator 24 is configured to modulate the image 3 to be displayed with the modulation image.
- the image 3 to be displayed can be understood as a carrier signal and the modulation image as useful information, the term "useful information" being geared to the purposes of position determination.
- the modulation slightly changes the image 3 to be displayed without a viewer of the modulated image displayed by the display device 4 perceiving a noticeable or disturbing effect.
- the difference between the image 3 to be displayed and the modulated image is thus below an (average) human perception threshold.
- the modulation image is thus practically invisible to the human viewer, although it is nonetheless displayed in the visible wavelength range.
- the display device 4 is a standard display device designed for image reproduction in the visible wavelength range and does not comprise any device with which a defined input signal can be reproduced in a non-visible wavelength range (e.g. in the infrared range or ultraviolet range).
- the optical sensor 15 detects an optical signal that is emitted from a section of the image modulated with the modulation image and generates a corresponding detection signal.
- the section of the image from which the optical signal is detected correlates with the position of the object.
- the optical sensor 15 can be mechanically coupled to the object, e.g. by integrating the optical sensor 15 into the object or attaching it to it. It is also conceivable that the optical sensor 15 itself represents the object.
- Image section location information can now be determined using the evaluation unit 27. To do this, the evaluation unit extracts location information from at least one field that is at least partially located in the image section from the optical signal emitted by the image section.
- the evaluation unit can comprise a demodulator that is configured to demodulate the detection signal and to determine a modulation signal, possibly contained in the detection signal, for the at least one field (which is at least partially located in the image section).
- the location information of the field or fields that are located in the image section results from the modulation signal determined by the demodulator.
- the modulation is carried out with defined parameters.
- the defined modulation parameters can be known to both the image modulator 24 and the evaluation unit 27.
- the modulation can be carried out with a predefined temporal frequency or spatial frequency. Since the modulation should only change the image to be displayed to the extent that a resulting difference remains below the human perception threshold, a limitation of the amplitude range used by the modulation can also be used as a parameter to support differentiation.
- the amplitude of the modulation can be understood as, for example, a change in brightness or intensity caused by the modulation, either of the entire image 3 to be displayed or of a color channel thereof.
- the modulation of the image to be displayed with the modulation image can have an amplitude range and the evaluation unit can comprise an amplitude-sensitive filter.
- the amplitude-sensitive filter can be configured to detect and, if necessary, extract a temporal change and/or a local change in the modulation image value that has an amplitude within the limited amplitude range.
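- a minimal sketch of such an amplitude-sensitive temporal filter (the function name, the one-step amplitude limit and the sample values are illustrative assumptions, not taken from the patent):

```python
import numpy as np

MOD_AMPLITUDE = 1.0  # modulation assumed to be limited to about one LSB

def extract_modulation(samples: np.ndarray) -> np.ndarray:
    """Keep only frame-to-frame brightness changes whose amplitude lies
    within the limited modulation range; larger changes are treated as
    image content or noise and suppressed."""
    diffs = np.diff(samples.astype(float))
    in_range = np.abs(diffs) <= MOD_AMPLITUDE
    return np.where(in_range, diffs, 0.0)

# Example: a constant brightness of 128 carrying a weak 0/1 toggle,
# interrupted by a large scene change (128 -> 200) that gets rejected.
frames = np.array([128, 129, 128, 129, 128, 200, 201])
print(extract_modulation(frames))  # [ 1. -1.  1. -1.  0.  1.]
```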
- noise influences or changes in the image 3 to be displayed can also cause corresponding temporal changes and/or local changes in the modulation image value, these, however, do not usually form valid location information due to their randomness and can thus be sorted out in the further course of signal processing.
- the modulation changes the image 3 to be displayed at a pixel based on the actual value (e.g. brightness, (color channel) intensity, etc.), whereby this actual value is generally not known, however, since it is not known in advance which image section 6 the optical sensor 15 will capture and since the image 3 to be displayed can also change dynamically, in particular if it is a television image, a video image, an animation or the graphic output of a computer game or computer-assisted game.
- the image modulator 24 can, for example, suspend the modulation at regular intervals and thus transmit the image 3 to be displayed to the display device 4.
- information can be transmitted to the evaluation unit 27 by a special preceding or following bit pattern, indicating that in a subsequent or preceding time interval only the image 3 to be displayed is or was displayed.
- the evaluation unit 27 can thus use the corresponding detection signal as a reference value and in this way determine the location information more reliably and/or efficiently.
- the location information contained in the modulation image can be Gray-coded with respect to adjacent fields.
- the evaluation unit can be configured to handle extracted location information that cannot be clearly assigned to a single field by evaluating the relatively unambiguous bit values of the (Gray-coded) location information in order to delimit a spatial area in which the image section is located.
- relatively indifferent or uncertain bit values of the (Gray-coded) location information can indicate that the image section overlaps at least two fields of the modulation image.
- a bit value that lies between 0 and 1 can also result for one or more bits of the location information, in particular if two or more fields of the modulation image are at least partially contained in the image section (overlap or simultaneous presence of at least two fields of the modulation image).
- the process of detecting the optical signal can be understood, at least in some embodiments of the technical teaching disclosed herein, as an analog process in which properties, intensity, color composition and others of the optical signal emitted by the image section are recorded.
- A distinction between relatively unambiguous bit values and relatively indifferent or uncertain bit values can be made, for example, using two threshold values, e.g. a first threshold value of 0.1 and a second threshold value of 0.9. If a bit value lies in the interval [0; 0.1], it can be assumed to be the relatively unambiguous bit value "0". Likewise, if a bit value lies between 0.9 and 1, it can be assumed to be the relatively unambiguous bit value "1". However, if the bit value lies in the interval [0.1; 0.9] (excluding the interval boundaries), no unambiguous statement can be made, and the corresponding bit value is classified as relatively indifferent or uncertain.
- the threshold values 0.1 and 0.9 are to be understood as examples; other threshold values are also conceivable, e.g. 0.15, 0.2 or 0.25 for the lower threshold value, and 0.75, 0.8 or 0.85 for the upper threshold value.
- a relatively indifferent or uncertain bit value can be caused in particular by the fact that the image section captured by the image sensor does not only contain one field of the modulation image, but that the image section contains two or more fields with different (area) proportions.
- a bit value lying between the lower threshold value and the upper threshold value thus indicates that the image section is located on the border between two or more fields of the modulation pattern.
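- the two-threshold classification described above could be sketched as follows (function and constant names are illustrative; 0.1 and 0.9 are the example values from the text):

```python
LOWER, UPPER = 0.1, 0.9

def classify_bit(value: float) -> str:
    """Map an analog bit value in [0, 1] to '0', '1' or 'uncertain'."""
    if value <= LOWER:
        return "0"
    if value >= UPPER:
        return "1"
    # in-between values indicate that the image section straddles
    # the border between two or more fields of the modulation pattern
    return "uncertain"

print([classify_bit(v) for v in (0.03, 0.5, 0.97)])
# -> ['0', 'uncertain', '1']
```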
- the detection signal can be understood, at least in some embodiments, as a "fuzzy logic" signal which is evaluated by the evaluation unit 27 in order to determine probabilities or proportions of fields of the modulation image that are contained in the image section.
- the device for determining a position of an object within a display of an image to be displayed can comprise a fuzzy logic which is configured to evaluate a detection signal provided by the optical sensor 15 and to indicate which field of the modulation pattern or which fields of the modulation pattern are contained in the image section with approximately which proportion.
- the optical sensor, which is typically coupled to the object, has only a few light sensors, each of which detects a small area of the screen of e.g. 1 or 2 mm² as a point (either only as a brightness value, or as three brightness values of the colors red, green and blue, or as another property of the optical signal emitted by the image section).
- a temporal pattern is transmitted in each of the small screen areas. These patterns contain temporal synchronization information, an X coordinate and a Y coordinate of the respective area.
- the object or optical sensor may be located in the border area between two (or more) areas, so that the patterns of these areas overlap on the sensor.
- Crosstalk occurs between the patterns of adjacent image areas, which leads to interference between the patterns (interchannel interference, ICI), which can make it difficult or impossible to recognize the patterns (especially in position determination techniques that do not use the technical teaching disclosed here).
- a common approach in communication transmission would be to use orthogonal sequences for modulation for the patterns in the adjacent channels, which can still be recognized even in the event of interference between the symbols.
- the disadvantage of this is that the time required for transmission must be extended, so that the information can be transmitted less frequently.
- the technical teaching disclosed here describes an alternative possibility that offers several advantages. This takes advantage of the fact that the information to be transmitted for the individual areas is known in advance. If the information is transmitted using a bit pattern, it is possible to assign bit patterns that are as similar as possible to neighboring areas without functional restrictions. Image areas transmit an X and a Y coordinate as information; two neighboring areas are characterized, for example, by the fact that one of these coordinates is increased or decreased by 1. If Gray coding is used to assign the numbers to a bit pattern, the neighboring bit patterns only ever differ in a single bit (or in several bits in a generalized Gray code). Crosstalk between two areas then leads to the symbols for all bits except one constructively superimposing (amplifying) and only a single bit "mixing".
- This method even works when several adjacent areas overlap, e.g. because the sensor observes an image area on which several areas are located at the same time. In this case, as many bits as correspond to the size of the area are mixed together, all other bits reinforce each other constructively, as they are identical for all symbols. This makes it possible to divide the screen into very small areas and to determine the local position of objects with one and the same screen modulation more or less precisely, depending on how large the screen area is that the sensor of the object is observing. The spatial resolution can therefore be easily scaled.
- neighboring areas in the x direction have the same y value, and neighboring areas in the y direction have the same x value. This acts as further constructive crosstalk if the corresponding two areas overlap on the sensor.
- the three components synch, X and Y coordinates can be divided into three channels transmitted in parallel, e.g. the blue, red and green color information. In this case, an RGB sensor is used in the object.
- Fig.3 shows an example of a possible position pattern or a possible modulation image.
- the position pattern or modulation image can be extended downwards and to the right.
- the letters B, R and G stand for the color channels blue, red and green.
- a synchronization signal is transmitted via the blue color channel.
- the synchronization signal can be, for example, a periodic modulation pattern within the blue color channel, which can be recognized by the evaluation device or the determination device.
- the X coordinate of the respective pattern section of the position pattern or the field of the modulation image is transmitted via the red color channel.
- the Y coordinate of the pattern section or field is transmitted via the green color channel.
- the screen is divided into a matrix of small areas of e.g. 1×1 mm².
- the areas are numbered in the x and y directions, and an area is uniquely identified by a pair of values (x,y).
- Each of the areas should now be provided with a modulation that transmits the coordinates (x,y) of the area.
- the numbers X and Y are Gray-coded so that the values for adjacent image areas differ by exactly 1 bit.
- X and Y are now transmitted sequentially, with the transmission in all areas being synchronized with each other, i.e. the "first" bit of the x or y coordinate is transmitted at the same time for all areas.
- An extra synchronization pattern is transmitted to synchronize the receiver with the transmitter.
- the transmission of the 3 parts synchronization, x and y can take place sequentially, but it can also be placed on the 3 color channels red, green and blue. In the latter case, the transmission of the synchronization, the X coordinate and the Y coordinate takes place simultaneously, as shown schematically in the following image.
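- a minimal sketch of how the three parallel channels of one area could be populated (the channel assignment sync/blue, X/red, Y/green follows Fig.3; the synchronization word, bit width and function names are assumptions):

```python
def gray(n: int) -> int:
    return n ^ (n >> 1)

def area_sequences(x: int, y: int, bits: int = 8):
    """Bit sequences transmitted in parallel by the area (x, y).
    All areas transmit bit-synchronously, i.e. the first bit of each
    sequence is sent at the same time everywhere on the screen."""
    sync = [1, 0, 1, 1, 0, 0, 1, 0]                  # hypothetical sync word
    blue = sync                                      # synchronization channel
    red = [int(b) for b in f"{gray(x):0{bits}b}"]    # Gray-coded X coordinate
    green = [int(b) for b in f"{gray(y):0{bits}b}"]  # Gray-coded Y coordinate
    return blue, red, green

blue, red, green = area_sequences(x=12, y=7)
```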
- Fig.4 shows two examples of the effects when the patches overlap on the sensor.
- the field of view of the sensor, i.e. the captured image section, is shown in Fig.4 for two sensor positions, sensor 1 and sensor 2.
- In the case of sensor 1, the sensor is located exactly between four areas. The contributions of the four areas mix to the same extent, i.e. the bit of the X or Y coordinate that changes from one area to the adjacent one takes on an intermediate value, while the other bits remain unambiguous, as shown in the following table.
- Each of the four areas occupies a quarter of the area of the field of view of the optical sensor 15.
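- the four-quadrant case can be reproduced numerically as follows (a sketch under the equal-weight assumption stated above; coordinates and bit width are illustrative):

```python
import numpy as np

def gray_bits(n: int, bits: int = 4) -> np.ndarray:
    """Gray code of n as a float bit vector."""
    g = n ^ (n >> 1)
    return np.array([int(b) for b in f"{g:0{bits}b}"], dtype=float)

x, y = 5, 9
areas = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
weights = [0.25, 0.25, 0.25, 0.25]  # each area fills a quarter of the field of view

# The received "bit vectors" are the area-weighted averages of the codes.
rx = sum(w * gray_bits(ax) for w, (ax, _) in zip(weights, areas))
ry = sum(w * gray_bits(ay) for w, (_, ay) in zip(weights, areas))
print(rx, ry)  # [0. 1. 0.5 1.] [1. 1. 0.5 1.]
# Exactly one X bit and one Y bit come out at 0.5 ("mixed");
# all other bits are unambiguous 0.0 or 1.0.
```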
- Since the modulations of neighboring areas do not interfere with each other but rather reinforce each other constructively, there is no need for orthogonal modulation sequences in neighboring areas.
- the modulation sequence is therefore significantly shorter.
- the signal obtained is strengthened, since now not just one sub-area with its orthogonal sequence provides the signal energy, but all visible sub-areas. This means that the modulation depth can be reduced at fine resolution (small areas) to such an extent that it remains below the human perception threshold.
- a Manchester coding can be used, which replaces each bit of the Gray code with a sequence (1,0) or (0,1). This differential coding allows the decoder to calculate the difference between two consecutive images in order to recognize the transition of the modulation from 0 to 1 or from 1 to 0.
- the absolute brightness level is essentially irrelevant.
- this form of Manchester coding can also be applied only every n bits, or pilot cells can be inserted at suitable intervals.
- Fig.5 illustrates a position pattern 13 or modulation image that can be used for a temporal modulation of the image 3 to be displayed.
- the position pattern 13 is divided into several pattern sections, each pattern section comprising one pixel or a plurality of pixels. As an example, a section of a pattern section is shown enlarged. Since all the pixels belonging to this section belong to the same pattern section, they all follow the same temporal bit pattern.
- the temporal bit pattern shown as an example in Fig.5 comprises eight bits. For comparison, Fig.5 also shows another bit pattern, which belongs to a pattern section adjacent to the right within the position pattern 13. Since this is an adjacent pattern section, the bit patterns differ, in accordance with the Gray coding, in only one bit, here the third bit.
- Fig.6 illustrates a spatial bit pattern that can occur within a pattern section.
- the spatial bit pattern can be, for example, a 3x3 matrix or a 4x4 matrix (generally m ⁇ n matrix) and can optionally be repeated within the corresponding pattern section.
- the optical sensor 15 expediently also comprises a matrix of pixels that comprises at least as many pixels as the spatial bit pattern.
- Fig.7 illustrates a temporal-spatial bit pattern in which the individual pixels of an array of pixels change according to a specific temporal pattern.
- the location information can also be divided into the different color channels of a color display of the image to be shown.
- Fig.8 illustrates a principle for modulating the image 3 to be displayed with the position pattern or the modulation image.
- An upper timing diagram in Fig.8 schematically represents the brightness progression of a pixel of the image 3 to be displayed over time.
- the brightness value for this pixel remains constant in a certain time interval, for example because the image 3 to be displayed is a still image during this time interval.
- dynamic images such as a television image or a video-encoded image can also be used, in which the brightness value usually changes constantly.
- note that the brightness value is only an example of an image property of the image to be displayed; other image properties can also be used.
- a further diagram in Fig.8 shows a bit pattern of a pattern section corresponding to the above-mentioned pixel of the image to be displayed, i.e. the pattern section is associated with a region of the image to be displayed in which the above-mentioned pixel is located.
- the bit pattern is a binary temporal pattern extending between a bit pattern start and a bit pattern end.
- Modulating the image to be displayed with the position pattern or the modulation image leads to a brightness progression which is shown schematically, as an example, in a lower time diagram of Fig.8.
- the bit pattern of the pattern section leads to a change in the brightness value of the pixel, with an amplitude of the brightness change corresponding, for example, to a least significant bit (LSB).
- the brightness information of the pixel or the intensity information of a color channel of the pixel can have a resolution of 256 different brightness or intensity levels. If the amplitude of the modulation corresponds to the least significant bit (LSB), the ratio between the modulation amplitude and the bit resolution of the image to be displayed or of a color channel thereof is approximately 0.4%.
- Such a small modulation compared to the total bit resolution of the image to be displayed is typically not perceptible to a human observer. It may even be the case that even values of 10% for the ratio between modulation amplitude and bit resolution are still below the human perception threshold, especially if the modulation is distributed over time and/or space so that the human observer does not notice it or only notices it insignificantly.
- Other possible values for the ratio between modulation amplitude and brightness or intensity resolution are 5%, 3%, 2% and 1%.
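- the amplitude figures above can be checked with a short sketch (assuming an 8-bit channel with 256 levels and a one-LSB modulation; the function name is illustrative):

```python
import numpy as np

LEVELS = 256
print(f"modulation depth: {1 / LEVELS:.1%}")  # -> modulation depth: 0.4%

def modulate_pixel(value: np.ndarray, bit: int) -> np.ndarray:
    """Nudge an 8-bit brightness value by one LSB depending on the pattern bit."""
    delta = 1 if bit else -1
    return np.clip(value.astype(int) + delta, 0, 255).astype(np.uint8)

print(modulate_pixel(np.array([128], dtype=np.uint8), 1))  # -> [129]
```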
- the combination unit 14 or the image modulator 24 can be configured to keep the image 3 to be displayed constant for a short time, i.e. to "freeze" it, so to speak, in order to generate a still image and to improve the recognition of the position pattern by the determination device 17 or the evaluation unit 27.
- Fig.9 schematically represents a conversion of a temporal bit pattern into a Manchester-coded bit pattern.
- the Manchester-coded bit pattern can be obtained, for example, by an XOR operation of a clock signal with the bit pattern.
- the bit pattern binary modulates the phase position of a clock signal.
- Another possible interpretation of the Manchester code is that the edges of the Manchester-coded bit pattern, related to the clock signal, carry the information.
- a falling edge means a logical 0 and a rising edge means a logical 1. This means that there is always at least one edge per bit from which the clock signal can be derived.
- the Manchester code is self-synchronizing and independent of the DC voltage level. As already mentioned above, the transition of the modulation from 0 to 1 or from 1 to 0 can be recognized, regardless of the current absolute brightness level of the image to be displayed or of a pixel thereof.
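- a minimal sketch of this Manchester coding (the edge convention follows the description above: falling edge = 0, rising edge = 1; the function names are assumptions):

```python
def manchester_encode(bits):
    """Replace each bit with a symbol pair: 1 -> (0, 1) rising edge,
    0 -> (1, 0) falling edge."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def manchester_decode(symbols):
    """Decode from the sign of the difference within each symbol pair;
    the absolute brightness level is irrelevant."""
    return [1 if b > a else 0 for a, b in zip(symbols[::2], symbols[1::2])]

bits = [1, 0, 1, 1]
assert manchester_decode(manchester_encode(bits)) == bits
```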
- Fig.10 shows a schematic block diagram of a device for determining a position of an object according to a further embodiment of the technical teaching disclosed herein.
- the device for determining the position comprises a first separate unit 40, which in turn comprises the optical sensor 15, the filter 16 and a transmitter 36.
- the separate unit 40 is independent of another possible separate unit, which comprises the determination device and a receiver 38.
- the transmitter 36 is configured to send object position data or intermediate data that are required for determining the position of the object. Intermediate data is thus data that occurs as part of an information transmission from the optical sensor to the determination device 17.
- the separate unit 40 can in particular be coupled to the object 5 or integrated into it. In this way, the object together with the separate unit 40 can be moved with respect to the representation of the image to be displayed by means of the display device 4.
- the receiver 38 and the determination device 17 can, however, be relatively stationary with respect to the display device and/or a computer system or image data source (e.g. DVD player, video recorder, television receiver, etc.) connected to it.
- the object position data or the intermediate data can be sent from the transmitter 36 to the receiver 38, for example, on the basis of radio signals, infrared signals, ultrasonic signals or other possible transmission techniques.
- Fig. 11 shows a schematic block diagram of a device for determining the position of an object which is constructed similarly to the device according to Fig. 10.
- In contrast, the first separate unit 40 here also comprises the determination device 17, so that the position data can be determined within the separate unit 40.
- the transmitter 36 thus sends the position data to the receiver 38, which makes it available for further processing, e.g. to a computer system, another system or a graphical user interface.
- Fig. 12 shows a schematic block diagram of the display device 4 and a pattern detection device 30 according to a further embodiment of the technical teaching disclosed herein.
- the pattern detection device 30 comprises the optical sensor 15, the filter 16, a Gray decoder 32 and an interface 34. Similar to what was described in connection with the device for determining the position of the object, the optical sensor 15 is configured to optically detect an image section 6 of a combination image that comprises a combination of an image to be displayed and a position pattern. The image section or data describing the image section are transmitted from the optical sensor 15 to the filter 16.
- the filter 16 is configured to extract at least one Gray-coded pattern section of the position pattern from the image section 6 or the corresponding image section information.
- the Gray decoder 32 is configured to decode the Gray-coded pattern section and to provide decoded information.
- the filter 16 and the Gray decoder 32 can also work together or be integrated into one unit, since the extraction of the at least one Gray-coded pattern section can benefit from information determined in connection with the decoding of the Gray-coded pattern section or from the decoded information at the output of the Gray decoder 32.
- the decoded information is forwarded by the Gray decoder 32 to the interface 34, which provides this or information derived therefrom for further processing outside the pattern detection device 30.
- the pattern detection device 30 can, for example, be used in a similar way to the first separate unit 40 described in connection with Figures 10 and 11. Thus, the pattern detection device 30 may be coupled to or integrated into the object 5 to enable position detection for the object 5.
- the interface 34 may be a radio transmitter, an infrared transmitter, an ultrasonic transmitter, a cable connection, or another suitable interface for transmitting information to an external receiver.
- a position pattern portion in the combination image can be selected such that it is below a human perception threshold. Furthermore, the position pattern portion can have defined properties with respect to an amplitude, color information, a spatial frequency and/or a temporal frequency.
- the filter 16 can be configured to extract image portions from the combination image that correspond to the defined properties of the position pattern portion.
- the position pattern portion in the combination image can consist of a slight modulation of the image to be displayed (e.g. temporal modulation, spatial modulation or temporal-spatial modulation), so that the position pattern portion has only a limited amplitude, e.g. with respect to a brightness value or an intensity value.
- the filter 16 may comprise a demodulator for demodulating the combination image.
- the position pattern may include at least one temporal synchronization information and the pattern detection device may comprise a synchronization device for synchronizing the pattern detection device 30 on the basis of the synchronization information.
- Fig. 13 shows a schematic flow diagram of a method for determining a position of an object according to at least one embodiment of the technical teaching disclosed herein.
- a position pattern is generated which is divided into a plurality of pattern sections, each of the pattern sections having a unique bit pattern from a plurality of bit patterns.
- the bit patterns are Gray-coded.
- the position pattern can either be generated dynamically or generated in such a way that a stored position pattern is read out from a memory and made available for further processing.
- the Gray coding of the bit patterns is to be understood in such a way that two bit patterns belonging to adjacent pattern sections differ by a maximum of a certain number of bits. According to the classic Gray coding, the bit patterns of adjacent pattern sections differ from one another by only one bit.
- According to a generalized definition of Gray coding, several bits of the bit patterns can also change from one pattern section to an adjacent pattern section, although an upper limit is typically not exceeded. In particular, it can be provided that the same number of bits always changes from one pattern section to neighboring pattern sections, e.g. always two bits or three bits. In this way, the bit patterns of neighboring pattern sections are similar to one another. Due to the Gray coding, there are no abrupt changes in the coding of the bit patterns, such as occur with conventional binary coding at a transition from a power of two to the next higher power of two.
- the position pattern is combined with at least one image to be displayed and a corresponding combination image is provided.
- the at least one image to be displayed can be a single image from a sequence of images to be displayed, so that the position pattern can be combined with different images to be displayed at different times. Even if the change in the image to be displayed may make the subsequent recognition of the position pattern more difficult, suitable measures can be used to keep the recognition of the position pattern relatively reliable. In this context, particular mention should be made of Manchester coding of the position pattern, or of the regular insertion of reference images which are not combined with the position pattern and thus enable a conclusion to be drawn about the position pattern, for example by forming a difference.
- An image section of the combination image is captured during a step 56 of the method for determining the object position.
- the image section correlates with the position of the object. This means that certain positions of the object are assigned certain image sections.
- the object is placed on or within a representation of the image to be displayed and thus covers or occupies an image section.
- This covered image section or section occupied by the object can then correspond to the captured image section.
- it is also possible that only a part of the covered image section or of the section occupied by the object is captured and used further as the captured image section within the method for determining the position.
- it is likewise possible to capture two or more image sections that are assigned to different areas of the object, for example a first edge area and a second edge area.
- in a step 58 of the method for determining the position, at least one pattern section of the position pattern is extracted from the image section. Furthermore, a corresponding extracted pattern section is provided for further processing.
- the pattern section can be extracted using pattern recognition methods. It is helpful that the bit patterns can be relatively short due to the Gray coding of the bit patterns and that a neighborhood relationship between two or more pattern sections is also reflected in a similarity of the respective bit patterns. Since it can happen that the image section does not only capture one pattern section, but two or more pattern sections, it is possible that two or more pattern sections are captured. From this information, an intermediate position of the object can be determined if necessary, as will be described below.
- the position of the object is determined in a step 60 on the basis of the at least one extracted pattern section. Due to the division of the position pattern into the plurality of pattern sections and the combination of the position pattern with the at least one image to be displayed, a connection can be established between the at least one extracted pattern section and a point or area within the image to be displayed. The position of the object can now either correspond to this point or area of the representation of the image to be displayed, or it can correlate in a specific, typically previously known way with said point or area.
- Combining the position pattern with the at least one image to be displayed in step 54 comprises modulating the image to be displayed with the position pattern.
- generation of the position pattern may include Manchester coding of the position pattern.
- the method for determining the position of the object may further comprise sending object position data or intermediate data required for determining the position from a transmitter to a receiver.
- Figures 14a and 14b show a schematic flow diagram of another possible embodiment of a method for determining the position of an object.
- the method comprises steps 52-58 known from Fig. 13, so reference is made to their description in connection with Fig. 13.
- step 60, in which the position of the object is determined on the basis of the at least one extracted pattern section, is described in further detail by steps 62 to 66.
- first, a bit probability pattern is determined on the basis of the extracted pattern section and a signal strength of individual bit pattern parts.
- the bit probability pattern shows relatively reliable bit pattern parts and relatively uncertain bit pattern parts.
- a reliability or confidence for the determined bit value can also be determined.
- the bit values and bit probability values can also be represented, for example, in the form of intermediate values that lie between two regular bit values. For example, two regular bit values can be a logical "0" and a logical "1" and intermediate values in the interval from 0 to 1 can indicate whether a bit has the logical value "0" or the logical value "1" with a higher probability.
- a bit value can be represented as an analog signal or as a digital signal with a higher resolution (i.e. more discretization levels) than the actual bit pattern.
- next, a possible image section position is narrowed down to one or more of the pattern sections located in the image section by evaluating the relatively reliable bit pattern parts, using the Gray coding of the pattern sections.
- the relatively reliable bit pattern parts can typically be interpreted in such a way that they indicate matching bit pattern parts of the various bit patterns that belong to the pattern sections present in the image section. Due to the properties of the Gray coding, an approximate position of the image section can already be determined, the accuracy depending on the maximum number of pattern sections that can be present in the image section captured by the optical sensor.
- a measure for an intermediate position of the image section with respect to two or more pattern sections can now also be determined using the relatively uncertain bit pattern parts, as shown in step 66 of the method for determining the position of the object.
- the optical signals of the pattern sections contained in the image section overlap and a corresponding superimposed optical signal is detected by the optical sensor 15.
- a surface portion of a pattern section located in the image section determines the proportion to which the optical signal emitted by the corresponding pattern section is included in the detection signal provided by the optical sensor 15.
- the detection signal represents, for example, a weighted sum of the individual optical signals emitted by the various pattern sections contained to a greater or lesser extent in the image section.
- the weighting factors are the ratios of the respective pattern section areas contained in the image section to the total area of the image section, as illustrated in the sketch below.
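- the weighted-sum model can be illustrated as follows (a sketch with assumed area fractions; it also shows how the single mixed bit yields a measure of the intermediate position):

```python
import numpy as np

def gray_bits(n: int, bits: int = 4) -> np.ndarray:
    g = n ^ (n >> 1)
    return np.array([int(c) for c in f"{g:0{bits}b}"], dtype=float)

# Suppose the sensor sees 30 % of section x=4 and 70 % of section x=5.
weights = {4: 0.3, 5: 0.7}
signal = sum(w * gray_bits(x) for x, w in weights.items())

# gray(4) = 0110 and gray(5) = 0111 differ only in the last bit, so only
# that bit mixes; its analog value 0.7 indicates how far the sensor has
# moved from section 4 toward section 5.
print(signal)  # [0.  1.  1.  0.7]
```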
- An image processing device comprises an input for an image to be displayed; a modulation image generator (22) for generating at least one modulation image, wherein the modulation image is divided into a plurality of fields and a modulation image value of a specific field represents location information of the field within the modulation image; an image modulator for modulating the image to be displayed with the modulation image, wherein a difference between the image to be displayed and an image modulated with the modulation image is below a human perception threshold; and an output for the image modulated with the modulation image for display on a display device which can be connected to the output, so that the location information can be reconstructed by evaluating the displayed image modulated with the modulation image.
- In the following, optional features of the image processing device and of the device for determining an object position are listed, which may but do not necessarily have to be present.
- the image processing device and the device for determining a position may in particular have one or more of the following features.
- the difference between the image to be displayed and the image modulated with the modulation image can be in a wavelength range visible to humans.
- the image modulator can be configured to additively superimpose the image to be displayed and the modulation image.
- the image to be displayed can have a bit resolution and the modulation can affect a low-order part of the bit resolution.
- the modulation can affect (only) the least significant bits of pixels of the image to be displayed.
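As an illustration of such a low-order modulation, here is a minimal Python/NumPy sketch that additively superimposes a 1-bit modulation image onto the least significant bit of an 8-bit image; the clipping and the fixed modulation depth are simplifying assumptions, not the patent's prescribed implementation:

```python
import numpy as np

def modulate_lsb(image: np.ndarray, modulation: np.ndarray, depth_bits: int = 1) -> np.ndarray:
    """Additively superimpose a modulation image onto the low-order bits
    of an 8-bit image; with depth_bits=1 each pixel changes by at most
    1 LSB, which is intended to stay below the human perception threshold.
    """
    assert image.shape == modulation.shape and image.dtype == np.uint8
    delta = (modulation & ((1 << depth_bits) - 1)).astype(np.int16)
    return np.clip(image.astype(np.int16) + delta, 0, 255).astype(np.uint8)

frame = np.full((4, 4), 128, dtype=np.uint8)          # image to be displayed
pattern = np.random.randint(0, 2, (4, 4), np.uint8)   # 1-bit modulation image
print(modulate_lsb(frame, pattern))
```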
- the image to be displayed can have at least a first color channel and a second color channel, wherein the modulation image generator (22) is configured to represent, in the modulation image, a first spatial coordinate of the location information by a modulation of the first color channel and a second spatial coordinate of the location information by a modulation of the second color channel.
- the location information can be Gray-coded with respect to adjacent fields of the modulation image.
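Combining the two preceding points, the following sketch builds a modulation image whose first color channel carries the Gray-coded x coordinate of each field and whose second channel carries the Gray-coded y coordinate; the 16x16 field grid and the channel assignment are illustrative assumptions:

```python
import numpy as np

def gray(n: np.ndarray) -> np.ndarray:
    """Elementwise binary-reflected Gray code."""
    return n ^ (n >> 1)

def modulation_image(fields_x: int, fields_y: int) -> np.ndarray:
    """Build a (fields_y, fields_x, 3) modulation image: red channel holds
    the Gray-coded x coordinate of each field, green the Gray-coded y
    coordinate; the third channel stays unused in this sketch.
    """
    xs, ys = np.meshgrid(np.arange(fields_x), np.arange(fields_y))
    img = np.zeros((fields_y, fields_x, 3), dtype=np.uint8)
    img[..., 0] = gray(xs)  # first color channel: first spatial coordinate
    img[..., 1] = gray(ys)  # second color channel: second spatial coordinate
    return img

m = modulation_image(16, 16)
# Horizontally adjacent fields carry Gray codes that differ (in one bit).
assert ((m[0, 1:, 0] ^ m[0, :-1, 0]) != 0).all()
```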
- the modulation image generator (22) can be configured to generate a sequence of modulation images, wherein the location information represented by the modulation image value of the specific field of a specific modulation image of the sequence is part of combined location information of the specific field, so that the combined location information can be reconstructed from the individual location information items of the sequence of modulation images.
- the sequence of modulation images can comprise at least one temporal synchronization signal.
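A minimal sketch of distributing combined location information over a sequence of 1-bit modulation images with a leading synchronization word; the 4-bit sync pattern and the little-endian bit order are assumptions, not the patent's concrete framing:

```python
def split_into_sequence(location_code: int, n_bits: int) -> list:
    """Spread an n_bits-wide location code over a sequence of 1-bit
    modulation images (one bit plane per image), prefixed by a fixed
    synchronization word so a receiver can find the sequence start.
    """
    sync = [1, 0, 1, 0]                       # assumed temporal sync signal
    bits = [(location_code >> i) & 1 for i in range(n_bits)]
    return sync + bits

def recombine(sequence: list, n_bits: int) -> int:
    """Reconstruct the combined location information from the sequence."""
    payload = sequence[4:4 + n_bits]          # strip the sync prefix
    return sum(b << i for i, b in enumerate(payload))

seq = split_into_sequence(0b101101, 6)
assert recombine(seq, 6) == 0b101101
```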
- the image modulator may comprise a Manchester encoder configured to generate, from the modulation image, a first Manchester-coded modulation image and a second Manchester-coded modulation image, the image modulator being configured to modulate the image to be displayed successively with the first Manchester-coded modulation image and the second Manchester-coded modulation image and to generate two corresponding modulated images for display on the display device.
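A sketch of the Manchester idea under simple assumptions (1-bit modulation image, ignoring the perception-threshold scaling): each bit is sent as a pair of complementary frames, so the time-averaged image is unbiased by the embedded pattern, while the frame difference recovers it:

```python
import numpy as np

def manchester_frames(modulation: np.ndarray):
    """Encode a 1-bit modulation image as two complementary frames:
    a '1' becomes the transition (1, 0), a '0' the transition (0, 1).
    Averaged over both frames every pixel carries the same offset, so
    the visible image content is not biased by the embedded pattern.
    """
    first = modulation
    second = 1 - modulation
    return first, second

def manchester_decode(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    # The sign of the frame difference recovers the original bit.
    return (frame_a.astype(np.int8) - frame_b.astype(np.int8) > 0).astype(np.uint8)

pattern = np.random.randint(0, 2, (4, 4), np.uint8)
f1, f2 = manchester_frames(pattern)
assert (manchester_decode(f1, f2) == pattern).all()
```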
- the image modulator may have a modulation depth with respect to the image to be displayed that is less than or equal to 10% of a bit resolution of the image to be displayed.
- An optical sensor device may comprise: an optical sensor for detecting electromagnetic radiation and generating a corresponding detection signal; and a demodulator configured to demodulate the detection signal and to determine a modulation signal possibly contained in it, wherein the demodulator comprises an amplitude-sensitive filter configured to extract, for further processing, detection signal values whose temporal change and/or local change has an amplitude within a limited amplitude range.
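A hedged sketch of such an amplitude-sensitive filtering step (the amplitude band [lo, hi] and the pure frame-difference model are assumptions): it keeps only small temporal changes, which are candidates for the low-amplitude embedded modulation, and rejects large changes caused by ordinary image content:

```python
import numpy as np

def amplitude_filter(prev_frame: np.ndarray, curr_frame: np.ndarray,
                     lo: int = 1, hi: int = 4) -> np.ndarray:
    """Pass only temporal changes whose amplitude lies inside [lo, hi].

    Large differences stem from changing image content and a zero
    difference carries no modulation; the small in-band residue is the
    candidate modulation signal handed on for further processing.
    """
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    in_band = (np.abs(diff) >= lo) & (np.abs(diff) <= hi)
    return np.where(in_band, diff, 0)

a = np.array([[128, 128], [10, 200]], dtype=np.uint8)
b = np.array([[129, 127], [90, 200]], dtype=np.uint8)
print(amplitude_filter(a, b))  # keeps the +/-1 modulation, drops the 80 jump
```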
- although some aspects have been described in the context of a device, it is to be understood that these aspects also represent a description of the corresponding method, so that a block or component of a device can also be understood as a corresponding method step or as a feature of a method step. Analogously, aspects described in the context of, or as, a method step also represent a description of a corresponding block, detail, or feature of a corresponding device.
- Some or all of the method steps can be performed by a hardware apparatus (or using a hardware apparatus), such as a microprocessor, a programmable computer, or an electronic circuit. In some embodiments, one or more of the most important method steps can be performed by such an apparatus.
- embodiments of the invention may be implemented in hardware or in software.
- the implementation may be performed using a digital storage medium, such as a floppy disk, a DVD, a Blu-ray Disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, a hard disk, or another magnetic or optical storage medium storing electronically readable control signals which can cooperate, or do cooperate, with a programmable computer system such that the respective method is carried out. Therefore, the digital storage medium may be computer-readable.
- Some embodiments according to the invention thus comprise a data carrier having electronically readable control signals capable of interacting with a programmable computer system such that one of the methods described herein is carried out.
- embodiments of the present invention may be implemented as a computer program product having a program code, wherein the program code is operable to perform one of the methods when the computer program product is run on a computer.
- the program code can, for example, also be stored on a machine-readable medium.
- further embodiments include the computer program for performing one of the methods described herein, wherein the computer program is stored on a machine-readable medium.
- an embodiment of the method according to the invention is thus a computer program which has a program code for carrying out one of the methods described herein when the computer program runs on a computer.
- a further illustrative example is thus a data carrier (or a digital storage medium or a computer-readable medium) on which the computer program for carrying out one of the methods described herein is recorded.
- a further embodiment of the method according to the invention is thus a data stream or a sequence of signals which represents the computer program for carrying out one of the methods described herein.
- the data stream or the sequence of signals can be configured, for example, to be transferred via a data communications connection, for example via the Internet.
- a further embodiment comprises a processing device, for example a computer or a programmable logic device, which is configured or adapted to carry out one of the methods described herein.
- a further embodiment comprises a computer on which the computer program for carrying out one of the methods described herein is installed.
- a further embodiment according to the invention comprises a device or a system which is designed to transmit a computer program for carrying out at least one of the methods described herein to a recipient.
- the transmission can be carried out electronically or optically, for example.
- the recipient can be, for example, a computer, a mobile device, a storage device or a similar device.
- the device or system can, for example, comprise a file server for transmitting the computer program to the recipient.
- a programmable logic device (e.g., a field programmable gate array, an FPGA) can be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may interact with a microprocessor to perform one of the methods described herein.
- the methods are, in some embodiments, performed by an arbitrary hardware device. This may be general-purpose hardware, such as a computer processor (CPU), or hardware specific to the method, such as an ASIC.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Editing Of Facsimile Originals (AREA)
- Position Input By Displaying (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102011086318A DE102011086318A1 (de) | 2011-11-14 | 2011-11-14 | Position determination of an object by detecting a position pattern by means of an optical sensor |
PCT/EP2012/072513 WO2013072316A2 (de) | 2011-11-14 | 2012-11-13 | Position determination of an object by detecting a position pattern by means of an optical sensor |
Publications (3)
Publication Number | Publication Date |
---|---|
EP2780780A2 EP2780780A2 (de) | 2014-09-24 |
EP2780780B1 true EP2780780B1 (de) | 2024-06-19 |
EP2780780C0 EP2780780C0 (de) | 2024-06-19 |
Family
ID=47278772
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP12794895.8A Active EP2780780B1 (de) | Position determination of an object by detecting a position pattern by means of an optical sensor | 2011-11-14 | 2012-11-13 |
Country Status (9)
Country | Link |
---|---|
US (1) | US9659232B2 (pt) |
EP (1) | EP2780780B1 (pt) |
JP (1) | JP2015506006A (pt) |
CN (1) | CN104040468B (pt) |
BR (1) | BR112014012404A2 (pt) |
DE (1) | DE102011086318A1 (pt) |
IN (1) | IN2014KN01020A (pt) |
RU (1) | RU2597500C2 (pt) |
WO (1) | WO2013072316A2 (pt) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102120864B1 (ko) * | 2013-11-06 | 2020-06-10 | Samsung Electronics Co., Ltd. | Image processing method and apparatus |
US20170026720A1 (en) * | 2015-05-28 | 2017-01-26 | Hashplay Inc. | Systems and method for providing multimedia content in a network |
US9952821B2 (en) * | 2015-09-01 | 2018-04-24 | Electronics And Telecommunications Research Institute | Screen position sensing method in multi display system, content configuring method, watermark image generating method for sensing screen position server, and display terminal |
CN108027258B (zh) * | 2015-09-15 | 2020-11-27 | Pepperl+Fuchs GmbH | Equipment and method for reliably determining the position of an object |
US10078877B2 (en) * | 2015-11-23 | 2018-09-18 | The King Abdulaziz City For Science And Technology (Kacst) | Method and system for efficiently embedding a watermark in a digital image for mobile applications |
KR102376402B1 (ko) * | 2017-07-27 | 2022-03-17 | Huawei Technologies Co., Ltd. | Multifocal display device and method |
CN107422926B (zh) * | 2017-08-01 | 2020-12-18 | Inventec Appliances (Shanghai) Co., Ltd. | An input method and device |
JP7371443B2 (ja) * | 2019-10-28 | 2023-10-31 | Denso Wave Inc. | Three-dimensional measuring device |
CN117940882A (zh) * | 2021-09-17 | 2024-04-26 | Google LLC | Encoding and identifying a position of a display |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5625353A (en) * | 1992-12-29 | 1997-04-29 | Kabushiki Kaisha Sankyo Seiki Seisakusho | Device for transmitting signals from position detector and method of such signal transmission |
JPH07212584A (ja) * | 1994-01-20 | 1995-08-11 | Omron Corp | Image processing apparatus and copier using the same |
JPH09251152A (ja) * | 1996-03-18 | 1997-09-22 | Toshiba Corp | Display-integrated pen position detection device |
WO2001015059A2 (en) | 1999-08-24 | 2001-03-01 | Gamalong Ltd. | System and method for detecting the location of a physical object placed on a screen |
JP3523618B2 (ja) * | 2001-08-02 | 2004-04-26 | Sharp Corporation | Coordinate input system and coordinate-pattern-forming paper used in the coordinate input system |
US6917360B2 (en) * | 2002-06-21 | 2005-07-12 | Schlumberger Technology Corporation | System and method for adaptively labeling multi-dimensional images |
US7421111B2 (en) * | 2003-11-07 | 2008-09-02 | Mitsubishi Electric Research Laboratories, Inc. | Light pen system for pixel-based displays |
JP2005327259A (ja) * | 2004-04-12 | 2005-11-24 | Campus Create Co Ltd | Pointed position detection device |
JP2006031859A (ja) * | 2004-07-16 | 2006-02-02 | Toshiba Corp | Magnetic disk drive having a servo information recovery function and servo information recovery method in the drive |
US20060242562A1 (en) * | 2005-04-22 | 2006-10-26 | Microsoft Corporation | Embedded method for embedded interaction code array |
WO2006120633A2 (en) * | 2005-05-11 | 2006-11-16 | Koninklijke Philips Electronics N.V. | Computer controlled pawn |
RU2324236C2 (ru) * | 2005-06-14 | 2008-05-10 | LG Electronics Inc. | Method for matching an image photographed by a camera with map data in a portable terminal, and method for guidance along a travel route |
US7619607B2 (en) * | 2005-06-30 | 2009-11-17 | Microsoft Corporation | Embedding a pattern design onto a liquid crystal display |
JP2009245349A (ja) * | 2008-03-31 | 2009-10-22 | Namco Bandai Games Inc | Position detection system, program, information storage medium, and image generation device |
US8102282B2 (en) * | 2009-03-10 | 2012-01-24 | Pixart Imaging Inc. | Encoding and decoding method for microdot matrix |
CN101882012A (zh) * | 2010-06-12 | 2010-11-10 | Beijing Institute of Technology | Pen-type interaction system based on projection tracking |
- 2011
  - 2011-11-14 DE DE102011086318A patent/DE102011086318A1/de active Pending
- 2012
  - 2012-11-13 JP JP2014541627A patent/JP2015506006A/ja active Pending
  - 2012-11-13 IN IN1020KON2014 patent/IN2014KN01020A/en unknown
  - 2012-11-13 EP EP12794895.8A patent/EP2780780B1/de active Active
  - 2012-11-13 WO PCT/EP2012/072513 patent/WO2013072316A2/de active Application Filing
  - 2012-11-13 RU RU2014124191/08A patent/RU2597500C2/ru active
  - 2012-11-13 CN CN201280066462.2A patent/CN104040468B/zh active Active
- 2013
  - 2013-11-13 BR BR112014012404A patent/BR112014012404A2/pt not_active Application Discontinuation
- 2014
  - 2014-05-14 US US14/120,358 patent/US9659232B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
BR112014012404A2 (pt) | 2017-06-13 |
US20140348379A1 (en) | 2014-11-27 |
RU2014124191A (ru) | 2015-12-27 |
EP2780780A2 (de) | 2014-09-24 |
US9659232B2 (en) | 2017-05-23 |
DE102011086318A1 (de) | 2013-05-16 |
WO2013072316A3 (de) | 2013-07-18 |
WO2013072316A2 (de) | 2013-05-23 |
JP2015506006A (ja) | 2015-02-26 |
CN104040468B (zh) | 2016-12-21 |
CN104040468A (zh) | 2014-09-10 |
RU2597500C2 (ru) | 2016-09-10 |
EP2780780C0 (de) | 2024-06-19 |
IN2014KN01020A (pt) | 2015-10-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2780780B1 (de) | Position determination of an object by detecting a position pattern by means of an optical sensor | |
EP2619975B1 (de) | Method for distinguishing the background and foreground of a scene, and method for replacing a background in images of a scene | |
DE60023900T2 (de) | Computer presentation system and optical tracking method for a wireless pointer | |
DE69810977T2 (de) | Transparent data embedding in a video signal | |
DE19983341B4 (de) | Method and device for capturing stereoscopic images using image sensors | |
DE112015003407T5 (de) | Invisible optical identifier for transmitting information between computing devices | |
EP3155732A1 (de) | Optical free-space transmission | |
DE102014118893A1 (de) | Exchanging information between time-of-flight distance measuring devices | |
DE102013200381A1 (de) | Method and device for displaying traffic information and for detecting traffic information indicated by a traffic sign, and variable traffic sign | |
EP3634003A1 (de) | Method and device for the temporal synchronization of the optical transmission of data in free space | |
EP1288843A2 (de) | Method for recognizing a code | |
DE102008006532A1 (de) | Displaying useful information on a display element | |
EP3023916B1 (de) | Encoding/decoding information from a graphic information unit | |
DE1914764A1 (de) | Circuit arrangement for carrying out the format design in the display of symbols on light-emitting surfaces | |
DE102007041719B4 (de) | Method for generating augmented reality in a room | |
DE102011117654A1 (de) | Method for operating an image processing device, and corresponding image processing device | |
WO2018069218A1 (de) | Television transmission system for generating enriched images | |
EP3584764B1 (de) | Method for controlling a machine by means of at least one spatial coordinate as a control variable | |
EP2478705A1 (de) | Method and device for generating partial views and/or a stereoscopic image template from a 2D view for stereoscopic reproduction | |
CN108898544A (zh) | A dual-channel-based grating anti-counterfeiting method | |
EP4075396A1 (de) | Method and arrangement for optically capturing a person's head | |
DE19711125C2 (de) | Device for converting image information originating from output devices serving image reproduction and image processing into a tactile form perceptible to blind people | |
DE102015208084A1 (de) | Method for generating a contrast image of an object condition, and related devices | |
WO2019029985A1 (de) | Method for operating an autostereoscopic display device, and autostereoscopic display device | |
WO2021028178A1 (de) | Method for transmitting meta-information relating to an image of a video stream |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
17P | Request for examination filed | Effective date: 20140512 |
AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
DAX | Request for extension of the european patent (deleted) | |
STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
17Q | First examination report despatched | Effective date: 20180212 |
STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
RAP3 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V. |
GRAP | Despatch of communication of intention to grant a patent | Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: GRANT OF PATENT IS INTENDED |
INTG | Intention to grant announced | Effective date: 20240109 |
GRAS | Grant fee paid | Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
GRAA | (expected) grant | Free format text: ORIGINAL CODE: 0009210 |
STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
AK | Designated contracting states | Kind code of ref document: B1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
REG | Reference to a national code | Ref country code: GB; Ref legal event code: FG4D; Free format text: NOT ENGLISH |
REG | Reference to a national code | Ref country code: CH; Ref legal event code: EP |
REG | Reference to a national code | Ref country code: DE; Ref legal event code: R096; Ref document number: 502012017263; Country of ref document: DE |
U01 | Request for unitary effect filed | Effective date: 20240716 |
U07 | Unitary effect registered | Designated state(s): AT BE BG DE DK EE FI FR IT LT LU LV MT NL PT RO SE SI; Effective date: 20240902 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Ref country code: HR; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20240619 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Ref country code: GR; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20240920 |
U20 | Renewal fee paid [unitary effect] | Year of fee payment: 13; Effective date: 20240910 |
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] | Ref country code: RS; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20240919 |